id int64 580 79M | url stringlengths 31 175 | text stringlengths 9 245k | source stringlengths 1 109 | categories stringclasses 160 values | token_count int64 3 51.8k |
|---|---|---|---|---|---|
57,241,781 | https://en.wikipedia.org/wiki/MIR-3 | MIR-3 is a third-generation Soviet computer released in the 1970s. It incorporated the advances of 1970s microelectronics. The main task of the MIR-3 computer was to solve computational problems for engineers.
MIR-3 consisted of a keyboard, a television-style display, readers for magnetic tapes and disks, and a processor. The machine was noticeably smaller than its predecessors: excluding the tape and disk readers, it was roughly the size of an ordinary desk.
The speed of the MIR-3 computer was 10^5 (100,000) to 10^7 (10,000,000) operations per second. The memory capacity was up to 10^6 characters.
Essentially, the MIR-3 computer consisted of several computers: its processor comprised several sub-processors, each responsible for the operation of a separate MIR-3 unit. For example, one handled reading and transferring information from magnetic tapes, another handled processing and calculation, a third handled printed output, and so on.
The complex structure of MIR-3 required the creation of means for coordinating the work of individual computer parts.
The machine used the Analitik-74 programming language.
The MIR-3 computer was created with the participation of the Academy of Sciences of the Ukrainian SSR, including Victor Glushkov.
References
Soviet computer systems | MIR-3 | Technology | 289 |
1,971,934 | https://en.wikipedia.org/wiki/Roborior | Roborior is a robot manufactured by the robotics company Tmsuk and marketed by Sanyo. It is used both for lighting and guarding homes. Roborior is roughly the size of a watermelon and can produce different hues of color, including blue, purple, and orange. The Roborior is also equipped with a digital video camera that can stream live video directly to the owner's cell phone if it detects an intruder. The Roborior can also be controlled remotely with a handset, much like a remote-control vehicle. It was introduced in Japan in late 2005 and was priced at 280,000 Japanese yen. The name is a portmanteau of robot and interior.
References
External links
Description of the Roborior
Domestic robots
Robots of Japan
2005 robots | Roborior | Technology | 163 |
232,526 | https://en.wikipedia.org/wiki/Bertrand%27s%20postulate | In number theory, Bertrand's postulate is the theorem that for any integer n > 3, there exists at least one prime number p with n < p < 2n − 2.
A less restrictive formulation is: for every n > 1, there is always at least one prime p such that n < p < 2n.
Another formulation, where p_n is the n-th prime, is: p_{n+1} < 2 p_n for n ≥ 1.
This statement was first conjectured in 1845 by Joseph Bertrand (1822–1900). Bertrand himself verified his statement for all integers 2 ≤ n ≤ 3,000,000.
His conjecture was completely proved by Chebyshev (1821–1894) in 1852 and so the postulate is also called the Bertrand–Chebyshev theorem or Chebyshev's theorem. Chebyshev's theorem can also be stated as a relationship with π(x), the prime-counting function (number of primes less than or equal to x): π(x) − π(x/2) ≥ 1, for all x ≥ 2.
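The statement is easy to check numerically for small n. The following Python sketch (not part of the article; the helper names are illustrative) verifies by brute force that a prime exists strictly between n and 2n for every n in a small range.

```python
def is_prime(m: int) -> bool:
    """Trial-division primality test; adequate for small m."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def bertrand_holds(n: int) -> bool:
    """True if some prime p satisfies n < p < 2n."""
    return any(is_prime(p) for p in range(n + 1, 2 * n))

# Brute-force verification for a small range of n.
assert all(bertrand_holds(n) for n in range(2, 10_000))
print("Bertrand's postulate holds for all 2 <= n < 10000")
```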
Prime number theorem
The prime number theorem (PNT) implies that the number of primes up to x, π(x), is roughly x/log(x), so if we replace x with 2x then we see the number of primes up to 2x is asymptotically twice the number of primes up to x (the terms log(2x) and log(x) are asymptotically equivalent). Therefore, the number of primes between n and 2n is roughly n/log(n) when n is large, and so in particular there are many more primes in this interval than are guaranteed by Bertrand's postulate. So Bertrand's postulate is comparatively weaker than the PNT. But PNT is a deep theorem, while Bertrand's Postulate can be stated more memorably and proved more easily, and also makes precise claims about what happens for small values of n. (In addition, Chebyshev's theorem was proved before the PNT and so has historical interest.)
The similar and still unsolved Legendre's conjecture asks whether for every n ≥ 1, there is a prime p such that n² < p < (n + 1)². Again we expect that there will be not just one but many primes between n² and (n + 1)², but in this case the PNT does not help: the number of primes up to x² is asymptotic to x²/log(x²) while the number of primes up to (x + 1)² is asymptotic to (x + 1)²/log((x + 1)²), which is asymptotic to the estimate on primes up to x². So, unlike the previous case of x and 2x, we do not get a proof of Legendre's conjecture for large n. Error estimates on the PNT are not (indeed, cannot be) sufficient to prove the existence of even one prime in this interval. In greater detail, the PNT allows us to estimate the bounds: for all ε > 0, there exists an S such that for x > S:
(1 − ε) x/log(x) < π(x) < (1 + ε) x/log(x).
The ratio between the lower bound (1 − ε)(x + 1)²/log((x + 1)²) for π((x + 1)²) and the upper bound (1 + ε)x²/log(x²) for π(x²) is
((1 − ε)/(1 + ε)) · ((x + 1)²/x²) · (log(x²)/log((x + 1)²)).
Note that since (x + 1)²/x² → 1 and log(x²)/log((x + 1)²) → 1 as x → ∞, while (1 − ε)/(1 + ε) < 1 for every fixed ε > 0, there exists an R such that the ratio above is less than 1 for all x > R. Thus, these estimates do not ensure that there exists a prime between x² and (x + 1)². More generally, these simple bounds are not enough to prove that there exists a prime between xⁿ and (x + 1)ⁿ for any positive integer n > 1.
Generalizations
In 1919, Ramanujan (1887–1920) used properties of the Gamma function to give a simpler proof than Chebyshev's. His short paper included a generalization of the postulate, from which would later arise the concept of Ramanujan primes. Further generalizations of Ramanujan primes have also been discovered; for instance, there is a proof that
with p_k the kth prime and R_n the nth Ramanujan prime.
Other generalizations of Bertrand's postulate have been obtained using elementary methods. (In the following, n runs through the set of positive integers.) In 1973, Denis Hanson proved that there exists a prime between 3n and 4n.
In 2006, apparently unaware of Hanson's result, M. El Bachraoui proposed a proof that there exists a prime between 2n and 3n. El Bachraoui's proof is an extension of Erdős's arguments for the primes between n and 2n. Shevelev, Greathouse, and Moses (2013) discuss related results for similar intervals.
Bertrand's postulate over the Gaussian integers is an extension of the idea of the distribution of primes, but in this case on the complex plane. Thus, as Gaussian primes extend over the plane and not only along a line, and doubling a complex number is not simply multiplying by 2 but doubling its norm (multiplying by 1 + i), different definitions lead to different results; some are still conjectures, while others have been proven.
Sylvester's theorem
Bertrand's postulate was proposed for applications to permutation groups. Sylvester (1814–1897) generalized the weaker statement as follows: the product of k consecutive integers greater than k is divisible by a prime greater than k. Bertrand's (weaker) postulate follows from this by taking k = n, and considering the k numbers n + 1, n + 2, up to and including n + k = 2n, where n > 1. According to Sylvester's generalization, one of these numbers has a prime factor greater than k. Since all these numbers are less than 2(k + 1), the number with a prime factor greater than k cannot have any further factor greater than 1 (otherwise it would be at least 2(k + 1)), and thus is itself a prime. Note that 2n is not prime, and thus indeed we now know there exists a prime p with n < p < 2n.
Erdős's theorems
In 1932, Erdős (1913–1996) also published a simpler proof using binomial coefficients and the Chebyshev function ϑ, defined as:
ϑ(x) = Σ_{p ≤ x} log(p),
where p ≤ x runs over primes. See proof of Bertrand's postulate for the details.
Erdős proved in 1934 that for any positive integer k, there is a natural number N such that for all n > N, there are at least k primes between n and 2n. An equivalent statement had been proved in 1919 by Ramanujan (see Ramanujan prime).
Better results
It follows from the prime number theorem that for any real ε > 0 there is an n₀ > 0 such that for all n > n₀ there is a prime p such that n < p < (1 + ε)n. It can be shown, for instance, that
lim_{n→∞} (π((1 + ε)n) − π(n)) / (n/log(n)) = ε,
which implies that π((1 + ε)n) − π(n) goes to infinity (and, in particular, is greater than 1 for sufficiently large n).
Non-asymptotic bounds have also been proved. In 1952, Jitsuro Nagura proved that for x ≥ 25 there is always a prime between x and (1 + 1/5)x.
In 1976, Lowell Schoenfeld showed that for n ≥ 2010760, there is always a prime in the open interval (n, (1 + 1/16597)n).
In his 1998 doctoral thesis, Pierre Dusart improved the above result, showing that for k ≥ 463,
p_{k+1} ≤ (1 + 1/(2 ln² p_k)) p_k,
and in particular for x ≥ 3275, there exists a prime in the interval (x, (1 + 1/(2 ln² x)) x].
In 2010 Pierre Dusart proved that for x ≥ 396738 there is at least one prime in the interval (x, (1 + 1/(25 ln² x)) x].
In 2016, Pierre Dusart improved his result from 2010, showing (Proposition 5.4) that if x ≥ 89693, there is at least one prime in the interval (x, (1 + 1/ln³ x) x]. He also shows (Corollary 5.5) that for x ≥ 468991632, there is at least one prime in the interval (x, (1 + 1/(5000 ln² x)) x].
Baker, Harman and Pintz proved that there is a prime in the interval [x − x^0.525, x] for all sufficiently large x.
Dudek proved that for all n ≥ e^(e^33.3), there is at least one prime between n³ and (n + 1)³.
Dudek also proved that the Riemann hypothesis implies that for all x ≥ 2 there is a prime p satisfying
x − (4/π) √x log x < p ≤ x.
Consequences
The sequence of primes, along with 1, is a complete sequence; any positive integer can be written as a sum of primes (and 1) using each at most once.
The only harmonic number that is an integer is the number 1.
See also
Oppermann's conjecture
Prime gap
Proof of Bertrand's postulate
Ramanujan prime
Notes
Bibliography
Chris Caldwell, Bertrand's postulate at Prime Pages glossary.
External links
A proof of the weak version in the Mizar system: http://mizar.org/version/current/html/nat_4.html#T56
Bertrand's postulate − A proof of the weak version at www.dimostriamogoldbach.it/en/
Mathematical theorems
Number theory
Prime numbers
Theorems about prime numbers
Theorems in algebra | Bertrand's postulate | Mathematics | 1,802 |
14,124,460 | https://en.wikipedia.org/wiki/Flame%20arrester | A flame arrester (also spelled arrestor), deflagration arrester, or flame trap is a device or form of construction that will allow free passage of a gas or gaseous mixture but will interrupt or prevent the passage of flame. It prevents the transmission of flame through a flammable gas/air mixture by quenching the flame on the high surface area provided by an array of small passages through which the flame must pass. The emerging gases are cooled enough to prevent ignition on the protected side.
Principles
Flame arresters are safety devices fitted to openings of enclosures or to pipework, and are intended to allow flow but prevent flame transmission. A flame arrester functions by absorbing the heat from a flame front, thus dropping the burning gas/air mixture below its auto-ignition temperature; consequently, the flame cannot survive. The heat is absorbed through channels (passages) designed into an element. The channel size is chosen based on the MESG (maximum experimental safe gap) of the gas for a particular installation. These passages can be regular, like crimped metal ribbon, wire mesh, or a sheet-metal plate with punched holes, or irregular, such as those in random packing.
The required size of the channels needed to stop the flame front can vary significantly, depending on the flammability of the fuel mixture. The large openings on a chain link fence are capable of slowing the spread of a small, slow-burning grass fire, but fast-burning grass fires will penetrate the fence unless the holes are very small. In a coal mine containing highly explosive coal dust or methane, the wire mesh of a Davy lamp must be very tightly spaced.
For flame arresters used as a safety device, the mesh must be protected from damage due to being dropped or struck by another object, and the mesh must be capable of rigidly retaining its shape during the propagation of a flame front. Any shifting of the individual wires that make up the mesh can create an opening large enough to allow the flame to penetrate and spread beyond the barrier.
On a fuel storage vent, flame arresters also serve a secondary purpose of allowing air pressure to equalize inside the tank when fuel is added or removed, while also preventing insects from flying or crawling into the vent piping and fouling the fuel in the tanks and pipes.
Usage and applications
The uses of a flame arrester include:
Installed in combustor/burner air intakes, with no pipe or bends before the intake, to stop confined and unconfined low-pressure deflagrations, preventing an ignited atmospheric vapor cloud outside a burner/flare intake from propagating beyond the flame arrester.
Preventing the spread of ignited flammable vapor within an enclosed system, such as a piping network.
Preventing potentially explosive mixtures from propagating after ignition by lightning, static electricity or other sources in a vent to atmosphere.
Stopping the propagation of a flame traveling at subsonic velocities.
Some common objects that have flame arresters are:
Fuel storage tank vents
Fuel gas pipelines
Safety storage cabinets for paint, aerosol cans, and other flammable mixtures
The exhaust system of internal combustion engines
The air intake of combustors, marine gasoline inboard engines and short flare stacks.
Davy lamps in coal mining
Overproof rum and other flammable liquors.
Portable plastic gasoline containers
Safety
Flame arresters should be used only in the gas group and conditions they have been designed and tested for. Since the depth of an arrester is specified for certain conditions, changes in the temperature, pressure, or composition of the gases entering the arrester can cause the flame spatial velocity to increase, making the design of the arrester insufficient to stop the flame front ("propagation"). The deflagration may then continue downstream of the arrester.
Flame arresters should be periodically inspected to make sure they are free of dirt, insects using it as a nest, or corrosion. The U.S. Chemical Safety and Hazard Investigation Board concluded that an uninspected and badly corroded flame arrester failed to prevent a 2006 explosion at a wastewater treatment plant in Daytona Beach, Florida.
See also
References
Industrial safety devices
Fire prevention
Safety engineering
Occupational safety and health | Flame arrester | Engineering | 851 |
23,434,812 | https://en.wikipedia.org/wiki/C7H6O2 | The molecular formula C7H6O2 (molar mass: 122.12 g/mol, exact mass: 122.036779 u) may refer to:
Benzoic acid
1,3-Benzodioxole
Hydroxybenzaldehyde
Salicylaldehyde (2-hydroxybenzaldehyde)
3-Hydroxybenzaldehyde
4-Hydroxybenzaldehyde
Tropolone | C7H6O2 | Chemistry | 94 |
160,986 | https://en.wikipedia.org/wiki/Order%20statistic | In statistics, the kth order statistic of a statistical sample is equal to its kth-smallest value. Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.
Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.
When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.
Notation and examples
For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are
6, 9, 3, 7,
the order statistics would be denoted x_(1) = 3, x_(2) = 6, x_(3) = 7, x_(4) = 9,
where the subscript (i) enclosed in parentheses indicates the i-th order statistic of the sample.
The first order statistic (or smallest order statistic) is always the minimum of the sample, that is, X_(1) = min{X_1, …, X_n},
where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.
Similarly, for a sample of size n, the nth order statistic (or largest order statistic) is the maximum, that is, X_(n) = max{X_1, …, X_n}.
The sample range is the difference between the maximum and minimum. It is a function of the order statistics: Range{X_1, …, X_n} = X_(n) − X_(1).
A similar important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.
The sample median may or may not be an order statistic, since there is a single middle value only when the number n of observations is odd. More precisely, if n = 2m + 1 for some integer m, then the sample median is X_(m+1) and so is an order statistic. On the other hand, when n = 2m is even, there are two middle values, X_(m) and X_(m+1), and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
Probabilistic analysis
Given any random variables X1, X2, ..., Xn, the order statistics X(1), X(2), ..., X(n) are also random variables, defined by sorting the values (realizations) of X1, ..., Xn in increasing order.
When the random variables X1, X2, ..., Xn form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables X1, ..., Xn can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.
From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (PDF), that is, they are absolutely continuous. The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.
Cumulative distribution function of order statistics
For a random sample as above, with cumulative distribution F_X(x), the order statistics for that sample have cumulative distributions as follows
(where r specifies which order statistic):
F_{X_(r)}(x) = Σ_{j=r}^{n} C(n, j) [F_X(x)]^j [1 − F_X(x)]^{n−j};
the corresponding probability density function may be derived from this result, and is found to be
f_{X_(r)}(x) = n!/((r − 1)!(n − r)!) f_X(x) [F_X(x)]^{r−1} [1 − F_X(x)]^{n−r}.
Moreover, there are two special cases, which have CDFs that are easy to compute:
F_{X_(n)}(x) = P(max{X_1, …, X_n} ≤ x) = [F_X(x)]^n,
F_{X_(1)}(x) = P(min{X_1, …, X_n} ≤ x) = 1 − [1 − F_X(x)]^n,
which can be derived by careful consideration of probabilities.
Probability distributions of order statistics
Order statistics sampled from a uniform distribution
In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.
We assume throughout this section that X_1, X_2, …, X_n is a random sample drawn from a continuous distribution with cdf F_X. Denoting U_i = F_X(X_i), we obtain the corresponding random sample U_1, …, U_n from the standard uniform distribution. Note that the order statistics also satisfy U_(i) = F_X(X_(i)).
The probability density function of the order statistic U_(k) is equal to
f_{U_(k)}(u) = n!/((k − 1)!(n − k)!) u^{k−1} (1 − u)^{n−k},
that is, the kth order statistic of the uniform distribution is a beta-distributed random variable, U_(k) ~ Beta(k, n + 1 − k).
The proof of these statements is as follows. For U_(k) to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already O(du²), so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals (0, u), (u, u + du) and (u + du, 1) respectively. This equals (refer to multinomial distribution for details)
(n!/((k − 1)! 1! (n − k)!)) u^{k−1} · du · (1 − u − du)^{n−k},
and the result follows.
The mean of this distribution is k / (n + 1).
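The Beta(k, n + 1 − k) form of this marginal distribution is easy to check by simulation. The following Python sketch (illustrative, not from the article) compares the empirical mean of the kth order statistic of n standard-uniform draws with the theoretical value k/(n + 1).

```python
import random

def empirical_mean_kth_order_stat(n: int, k: int, trials: int = 100_000) -> float:
    """Monte Carlo estimate of E[U_(k)] for a sample of n Uniform(0,1) variables."""
    total = 0.0
    for _ in range(trials):
        sample = sorted(random.random() for _ in range(n))
        total += sample[k - 1]          # k is 1-based, as in the article
    return total / trials

n, k = 10, 3
print(empirical_mean_kth_order_stat(n, k))  # should be close to k/(n+1)
print(k / (n + 1))                          # 3/11 ≈ 0.2727
```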
The joint distribution of the order statistics of the uniform distribution
Similarly, for i < j, the joint probability density function of the two order statistics U_(i) < U_(j) can be shown to be
f_{U_(i),U_(j)}(u, v) = n!/((i − 1)!(j − i − 1)!(n − j)!) u^{i−1} (v − u)^{j−i−1} (1 − v)^{n−j}, for 0 ≤ u < v ≤ 1,
which is (up to terms of higher order than O(du dv)) the probability that i − 1, 1, j − 1 − i, 1 and n − j sample elements fall in the intervals (0, u), (u, u + du), (u + du, v), (v, v + dv), (v + dv, 1) respectively.
One reasons in an entirely analogous way to derive the higher-order joint distributions. Perhaps surprisingly, the joint density of the n order statistics turns out to be constant:
f_{U_(1),…,U_(n)}(u_1, …, u_n) = n! for 0 ≤ u_1 ≤ u_2 ≤ ⋯ ≤ u_n ≤ 1.
One way to understand this is that the unordered sample does have constant density equal to 1, and that there are n! different permutations of the sample corresponding to the same sequence of order statistics. This is related to the fact that 1/n! is the volume of the region 0 < u_1 < ⋯ < u_n < 1. It is also related to another particularity of order statistics of uniform random variables: it follows from the BRS-inequality that the maximum expected number of uniform U(0,1] random variables one can choose from a sample of size n with a sum not exceeding 0 < s < n/2 is bounded above by √(2sn), which is thus invariant on the set of all pairs (s, n) with constant product s·n.
Using the above formulas, one can derive the distribution of the range of the order statistics, that is the distribution of U_(n) − U_(1), i.e. maximum minus the minimum. More generally, for n ≥ k > j ≥ 1, U_(k) − U_(j) also has a beta distribution: U_(k) − U_(j) ~ Beta(k − j, n − (k − j) + 1). From these formulas we can derive the covariance between two order statistics: Cov(U_(j), U_(k)) = j(n − k + 1)/((n + 1)²(n + 2)) for j ≤ k. The formula follows from noting that Var(U_(k) − U_(j)) = Var(U_(k)) + Var(U_(j)) − 2 Cov(U_(j), U_(k)) and comparing that with the variance of Beta(k − j, n − (k − j) + 1), which is the actual distribution of the difference.
Order statistics sampled from an exponential distribution
For a random sample of size n from an exponential distribution with parameter λ, the order statistics X_(i) for i = 1, 2, 3, ..., n each have distribution
X_(i) =d (1/λ) Σ_{j=1}^{i} Z_j/(n − j + 1),
where the Z_j are iid standard exponential random variables (i.e. with rate parameter 1). This result was first published by Alfréd Rényi.
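Rényi's representation can be checked numerically. The sketch below (illustrative Python, not from the article) compares a Monte Carlo estimate of E[X_(i)] for exponential samples with the mean implied by the representation, (1/λ) Σ_{j=1}^{i} 1/(n − j + 1).

```python
import random

def mean_ith_order_stat_exponential(n: int, i: int, lam: float, trials: int = 100_000) -> float:
    """Monte Carlo estimate of E[X_(i)] for n i.i.d. Exponential(lam) variables."""
    total = 0.0
    for _ in range(trials):
        sample = sorted(random.expovariate(lam) for _ in range(n))
        total += sample[i - 1]
    return total / trials

def renyi_mean(n: int, i: int, lam: float) -> float:
    """Mean implied by X_(i) = (1/lam) * sum_{j=1}^{i} Z_j / (n - j + 1)."""
    return sum(1.0 / (n - j + 1) for j in range(1, i + 1)) / lam

n, i, lam = 8, 3, 2.0
print(mean_ith_order_stat_exponential(n, i, lam))   # ≈ renyi_mean(n, i, lam)
print(renyi_mean(n, i, lam))                        # (1/2)(1/8 + 1/7 + 1/6) ≈ 0.217
```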
Order statistics sampled from an Erlang distribution
The Laplace transform of order statistics of an Erlang distribution may be obtained via a path-counting method.
The joint distribution of the order statistics of an absolutely continuous distribution
If F_X is absolutely continuous, it has a density f_X such that dF_X(x) = f_X(x) dx, and we can use the substitutions
u = F_X(x)
and
du = f_X(x) dx
to derive the following probability density functions for the order statistics of a sample of size n drawn from the distribution of X:
f_{X_(k)}(x) = n!/((k − 1)!(n − k)!) [F_X(x)]^{k−1} [1 − F_X(x)]^{n−k} f_X(x);
f_{X_(j),X_(k)}(x, y) = n!/((j − 1)!(k − j − 1)!(n − k)!) [F_X(x)]^{j−1} [F_X(y) − F_X(x)]^{k−j−1} [1 − F_X(y)]^{n−k} f_X(x) f_X(y), where x ≤ y;
f_{X_(1),…,X_(n)}(x_1, …, x_n) = n! f_X(x_1) ⋯ f_X(x_n), where x_1 ≤ x_2 ≤ ⋯ ≤ x_n.
Application: confidence intervals for quantiles
An interesting question is how well the order statistics perform as estimators of the quantiles of the underlying distribution.
A small-sample-size example
The simplest case to consider is how well the sample median estimates the population median.
As an example, consider a random sample of size 6. In that case, the sample median is usually defined as the midpoint of the interval delimited by the 3rd and 4th order statistics. However, we know from the preceding discussion that the probability that this interval actually contains the population median is
C(6, 3) (1/2)^6 = 20/64 = 5/16, or approximately 31%.
Although the sample median is probably among the best distribution-independent point estimates of the population median, what this example illustrates is that it is not a particularly good one in absolute terms. In this particular case, a better confidence interval for the median is the one delimited by the 2nd and 5th order statistics, which contains the population median with probability
(C(6, 2) + C(6, 3) + C(6, 4)) (1/2)^6 = 50/64 = 25/32, or approximately 78%.
With such a small sample size, if one wants at least 95% confidence, one is reduced to saying that the median is between the minimum and the maximum of the 6 observations with probability 31/32 or approximately 97%. Size 6 is, in fact, the smallest sample size such that the interval determined by the minimum and the maximum is at least a 95% confidence interval for the population median.
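These coverage probabilities follow from the binomial distribution: the interval (X_(j), X_(k)) contains the population median exactly when between j and k − 1 of the n observations fall below it. A short Python check (illustrative, not from the article):

```python
from math import comb

def coverage_probability(n: int, j: int, k: int) -> float:
    """P(X_(j) < median < X_(k)) for a continuous distribution:
    exactly i of the n observations fall below the median, for j <= i <= k-1,
    each configuration having probability C(n, i) / 2**n."""
    return sum(comb(n, i) for i in range(j, k)) / 2 ** n

n = 6
print(coverage_probability(n, 3, 4))  # interval (X_(3), X_(4)): 20/64 ≈ 0.3125
print(coverage_probability(n, 2, 5))  # interval (X_(2), X_(5)): 50/64 ≈ 0.78
print(coverage_probability(n, 1, 6))  # interval (X_(1), X_(6)): 62/64 = 31/32 ≈ 0.97
```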
Large sample sizes
For the uniform distribution, as n tends to infinity, the pth sample quantile is asymptotically normally distributed, since it is approximated by
U_(⌈np⌉) ~ AN(p, p(1 − p)/n).
For a general distribution F with a continuous non-zero density at F^−1(p), a similar asymptotic normality applies:
X_(⌈np⌉) ~ AN(F^−1(p), p(1 − p)/(n [f(F^−1(p))]²)),
where f is the density function, and F^−1 is the quantile function associated with F. One of the first people to mention and prove this result was Frederick Mosteller in his seminal paper in 1946. Further research led in the 1960s to the Bahadur representation which provides information about the error bounds. The convergence to normal distribution also holds in a stronger sense, such as convergence in relative entropy or KL divergence.
An interesting observation can be made in the case where the distribution is symmetric, and the population median equals the population mean. In this case, the sample mean, by the central limit theorem, is also asymptotically normally distributed, but with variance σ2/n instead. This asymptotic analysis suggests that the mean outperforms the median in cases of low kurtosis, and vice versa. For example, the median achieves better confidence intervals for the Laplace distribution, while the mean performs better for X that are normally distributed.
Proof
It can be shown that
U_(⌈np⌉) =d X/(X + Y),
where
X = Z_1 + ⋯ + Z_{⌈np⌉} and Y = Z_{⌈np⌉+1} + ⋯ + Z_{n+1},
with Z_i being independent identically distributed exponential random variables with rate 1. Since X/n and Y/n are asymptotically normally distributed by the CLT, our results follow by application of the delta method.
Mutual information of order statistics
The mutual information and f-divergence between order statistics have also been considered. For example, if the parent distribution is continuous, then for all
In other words, mutual information is independent of the parent distribution. For discrete random variables, the equality need not hold and we only have
The mutual information between uniform order statistics is given by
where H_n is the n-th harmonic number.
Application: Non-parametric density estimation
Moments of the distribution for the first order statistic can be used to develop a non-parametric density estimator. Suppose we want to estimate the density f_X at the point x*. Consider the random variables Y_i = |X_i − x*|, which are i.i.d. with distribution function G_Y(y) = F_X(x* + y) − F_X(x* − y). In particular, f_X(x*) = g_Y(0)/2.
The expected value of the first order statistic Y_(1) based on a sample of N total observations yields
E[Y_(1)] = ∫_0^1 Q(z) N (1 − z)^{N−1} dz ≈ 1/(2 f_X(x*)(N + 1)),
where Q is the quantile function associated with the distribution G_Y, and the approximation uses G_Y(y) ≈ 2 f_X(x*) y for small y. This equation in combination with a jackknifing technique becomes the basis for the following density estimation algorithm,
Input: A sample of observations. points of density evaluation. Tuning parameter (usually 1/3).
Output: estimated density at the points of evaluation.
1: Set
2: Set
3: Create an matrix which holds subsets with observations each.
4: Create a vector to hold the density evaluations.
5: for do
6: for do
7: Find the nearest distance to the current point within the th subset
8: end for
9: Compute the subset average of distances to
10: Compute the density estimate at
11: end for
12: return
In contrast to the bandwidth/length based tuning parameters for histogram and kernel based approaches, the tuning parameter for the order statistic based density estimator is the size of sample subsets. Such an estimator is more robust than histogram and kernel based approaches, for example densities like the Cauchy distribution (which lack finite moments) can be inferred without the need for specialized modifications such as IQR based bandwidths. This is because the first moment of the order statistic always exists if the expected value of the underlying distribution does, but the converse is not necessarily true.
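The following Python sketch illustrates an estimator in this spirit. It is not the article's exact jackknifed algorithm (whose step-by-step details are only summarized above); it simply splits the sample into subsets, averages the distance from the evaluation point to the nearest observation in each subset, and inverts the approximation E[Y_(1)] ≈ 1/(2 f(x)(N + 1)). All function names and parameter choices are illustrative.

```python
import random

def order_statistic_density_estimate(sample, eval_points, num_subsets):
    """Estimate a density at eval_points from the average nearest-neighbour
    distance within each of num_subsets disjoint subsets of the sample.
    Uses E[Y_(1)] ≈ 1 / (2 * f(x) * (N + 1)), where Y_(1) is the smallest
    distance |X_i - x| within a subset of size N (a sketch, not the
    article's exact jackknifed procedure)."""
    data = list(sample)
    random.shuffle(data)
    subset_size = len(data) // num_subsets
    subsets = [data[i * subset_size:(i + 1) * subset_size] for i in range(num_subsets)]
    estimates = []
    for x in eval_points:
        mean_dist = sum(min(abs(xi - x) for xi in subset) for subset in subsets) / num_subsets
        estimates.append(1.0 / (2.0 * (subset_size + 1) * mean_dist))
    return estimates

# Example: estimate a standard Cauchy density (no finite moments) at a few points.
sample = [random.gauss(0, 1) / random.gauss(0, 1) for _ in range(30_000)]  # Cauchy via normal ratio
print(order_statistic_density_estimate(sample, [0.0, 1.0, 3.0], num_subsets=1000))
# True Cauchy density 1/(pi(1+x^2)) at these points: ≈ 0.318, 0.159, 0.032
```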
Dealing with discrete variables
Suppose X_1, X_2, …, X_n are i.i.d. random variables from a discrete distribution with cumulative distribution function F(x) and probability mass function f(x). To find the probabilities of the order statistics, three values are first needed, namely
p_1 = P(X < x) = F(x) − f(x), p_2 = P(X = x) = f(x), and p_3 = P(X > x) = 1 − F(x).
The cumulative distribution function of the order statistic X_(k) can be computed by noting that
P(X_(k) ≤ x) = Σ_{j=k}^{n} C(n, j) (p_1 + p_2)^j p_3^{n−j}.
Similarly, P(X_(k) < x) is given by
P(X_(k) < x) = Σ_{j=k}^{n} C(n, j) p_1^j (p_2 + p_3)^{n−j}.
Note that the probability mass function of X_(k) is just the difference of these values, that is to say
P(X_(k) = x) = P(X_(k) ≤ x) − P(X_(k) < x).
Computing order statistics
The problem of computing the kth smallest (or largest) element of a list is called the selection problem and is solved by a selection algorithm. Although this problem is difficult for very large lists, sophisticated selection algorithms have been created that can solve this problem in time proportional to the number of elements in the list, even if the list is totally unordered. If the data is stored in certain specialized data structures, this time can be brought down to O(log n). In many applications all order statistics are required, in which case a sorting algorithm can be used and the time taken is O(n log n).
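A minimal illustration of the selection problem is quickselect, which finds a single order statistic in expected linear time without sorting the whole list (an illustrative sketch, not tied to any particular reference):

```python
import random

def kth_smallest(values, k):
    """Return the k-th smallest element (k is 1-based) using quickselect;
    expected O(n) time, no full sort."""
    assert 1 <= k <= len(values)
    pivot = random.choice(values)
    less = [v for v in values if v < pivot]
    equal = [v for v in values if v == pivot]
    greater = [v for v in values if v > pivot]
    if k <= len(less):
        return kth_smallest(less, k)
    if k <= len(less) + len(equal):
        return pivot
    return kth_smallest(greater, k - len(less) - len(equal))

data = [7, 1, 5, 3, 9, 3, 8]
print(kth_smallest(data, 4))   # 4th order statistic: 5
print(sorted(data)[3])         # cross-check against a full sort
```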
Applications
Order statistics have many applications in areas such as reliability theory, financial mathematics, survival analysis, epidemiology, sports, quality control, actuarial risk, etc. There is an extensive literature devoted to studies on applications of order statistics in these fields.
For example, a recent application in actuarial risk provides some weighted premium principles in terms of record claims and kth record claims.
See also
Rankit
Box plot
BRS-inequality
Concomitant (statistics)
Fisher–Tippett distribution
Bapat–Beg theorem for the order statistics of independent but not necessarily identically distributed random variables
Bernstein polynomial
L-estimator – linear combinations of order statistics
Rank-size distribution
Selection algorithm
Examples of order statistics
Sample maximum and minimum
Quantile
Percentile
Decile
Quartile
Median
Mean
Sample mean and covariance
References
External links
C++ source Dynamic Order Statistics
Nonparametric statistics
Summary statistics
Permutations | Order statistic | Mathematics | 2,917 |
601,025 | https://en.wikipedia.org/wiki/Minkowski%E2%80%93Bouligand%20dimension | In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a bounded set S in a Euclidean space, or more generally in a metric space. It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.
To calculate this dimension for a fractal , imagine this fractal lying on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm.
Suppose that N(ε) is the number of boxes of side length ε required to cover the set. Then the box-counting dimension is defined as
dim_box(S) := lim_{ε→0} log N(ε) / log(1/ε).
Roughly speaking, this means that the dimension is the exponent d such that N(1/n) ≈ C n^d, which is what one would expect in the trivial case where S is a smooth space (a manifold) of integer dimension d.
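As an informal illustration (not from the article), the box-counting procedure can be carried out numerically for a finite approximation of a fractal. The Python sketch below estimates the dimension of the middle-thirds Cantor set by fitting the slope of log N(ε) against log(1/ε); boundary effects make the printed value only approximately log 2 / log 3 ≈ 0.63.

```python
import math

def cantor_points(depth: int):
    """Endpoints of the intervals at a given depth of the middle-thirds Cantor set."""
    pts = [0.0, 1.0]
    for _ in range(depth):
        pts = [p / 3 for p in pts] + [2 / 3 + p / 3 for p in pts]
    return pts

def box_count(points, eps: float) -> int:
    """Number of grid boxes of side eps needed to cover the points."""
    return len({math.floor(p / eps) for p in points})

def box_dimension_estimate(points, scales):
    """Least-squares slope of log N(eps) versus log(1/eps)."""
    xs = [math.log(1 / eps) for eps in scales]
    ys = [math.log(box_count(points, eps)) for eps in scales]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    return sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
           sum((x - mean_x) ** 2 for x in xs)

points = cantor_points(depth=10)
scales = [3.0 ** -k for k in range(1, 9)]
print(box_dimension_estimate(points, scales))   # ≈ log 2 / log 3 ≈ 0.63
```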
If the above limit does not exist, one may still take the limit superior and limit inferior, which respectively define the upper box dimension and lower box dimension. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity, limit capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension.
The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very special applications is it important to distinguish between the three (see below). Yet another measure of fractal dimension is the correlation dimension.
Alternative definitions
It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number is the minimal number of open balls of radius ε required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number, which is defined the same way but with the additional requirement that the centers of the open balls lie in the set S. The packing number is the maximal number of disjoint open balls of radius ε one can situate such that their centers would be in the fractal. While the covering, intrinsic covering and packing numbers are not exactly identical, they are closely related to each other and give rise to identical definitions of the upper and lower box dimensions. This is easy to show once the following inequalities are proven:
These, in turn, follow either by definition or with little effort from the triangle inequality.
The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is extrinsic – one assumes the fractal space S is contained in a Euclidean space, and defines boxes according to the external geometry of the containing space. However, the dimension of S should be intrinsic, independent of the environment into which S is placed, and the ball definition can be formulated intrinsically. One defines an internal ball as all points of S within a certain distance of a chosen center, and one counts such balls to get the dimension. (More precisely, the covering-number definition is extrinsic, but the other two are intrinsic.)
The advantage of using boxes is that in many cases N(ε) may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal.
The logarithm of the packing and covering numbers are sometimes referred to as entropy numbers and are somewhat analogous to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale ε and also measure how many bits or digits one would need to specify a point of the space to accuracy ε.
Another equivalent (extrinsic) definition for the box-counting dimension is given by the formula
dim_box(S) = n − lim_{r→0} log vol(S_r) / log r,
where for each r > 0, the set S_r is defined to be the r-neighborhood of S, i.e. the set of all points in R^n that are at distance less than r from S (or equivalently, S_r is the union of all the open balls of radius r which have a center that is a member of S).
Properties
The upper box dimension is finitely stable, i.e. if {A_1, ..., A_n} is a finite collection of sets, then
dim_upper box(A_1 ∪ ⋯ ∪ A_n) = max{dim_upper box(A_1), …, dim_upper box(A_n)}.
However, it is not countably stable, i.e. this equality does not hold for an infinite sequence of sets. For example, the box dimension of a single point is 0, but the box dimension of the collection of rational numbers in the interval [0, 1] has dimension 1. The Hausdorff dimension, by comparison, is countably stable. The lower box dimension, on the other hand, is not even finitely stable.
An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If A and B are two sets in a Euclidean space, then A + B is formed by taking all the pairs of points a, b where a is from A and b is from B and adding a + b. One has
dim_upper box(A + B) ≤ dim_upper box(A) + dim_upper box(B).
Relations to the Hausdorff dimension
The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC). For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent.
The box dimensions and the Hausdorff dimension are related by the inequality
dim_Haus ≤ dim_lower box ≤ dim_upper box.
In general, both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour in different scales. For example, examine the set of numbers in the interval [0, 1] satisfying the condition
d_i = 0 whenever 2^{2n} ≤ i ≤ 2^{2n+1} − 1 for some n, where d_i denotes the i-th binary digit of the number.
The digits in the "odd place-intervals", i.e. between digits 2^{2n+1} and 2^{2n+2} − 1, are not restricted and may take any value. This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating N(ε) for ε = 2^{−k} and noting that their values behave differently for n even and odd.
Another example: the set of rational numbers Q, a countable set with Hausdorff dimension 0, has box dimension 1 because its closure, R, has dimension 1. In fact, the box dimension of a set always equals the box dimension of its closure.
These examples show that adding a countable set can change box dimension, demonstrating a kind of instability of this dimension.
See also
Correlation dimension
Packing dimension
Uncertainty exponent
Weyl–Berry conjecture
Lacunarity
References
External links
FrakOut!: an OSS application for calculating the fractal dimension of a shape using the box counting method (Does not automatically place the boxes for you).
FracLac: online user guide and software ImageJ and FracLac box counting plugin; free user-friendly open source software for digital image analysis in biology
Fractals
Dimension theory
Hermann Minkowski | Minkowski–Bouligand dimension | Mathematics | 1,467 |
13,779,977 | https://en.wikipedia.org/wiki/List%20of%20BlackBerry%20products | The following is a partial list of BlackBerry products. BlackBerry is a line of wireless handheld devices first introduced in 1996 and manufactured by the Canadian company BlackBerry, formerly known as Research In Motion (RIM).
Early pager models
These two-way pager models had thumb keyboards, with a thumbwheel for scrolling its monochrome text display.
The first model, the Inter@ctive Pager, was announced on September 18, 1996. Within a year, Yankee Group was estimating that devices like the Inter@ctive Pager were in use by fewer than 400,000 people and expected two-way wireless messaging services to attract 51 million users by 2002.
They provided e-mail and WAP services, with limited HTML access provided via third party software such as WolfeTech PocketGenie or GoAmerica browser.
They were built for use with two 1G data-only packet switched networks: Mobitex and DataTAC. They did not support Java without the use of a Java Virtual Machine add-on.
Monochrome Java-based models (5000 and 6000 series)
Most of these models were the first BlackBerry models that had a built-in mobile phone, were the first models that natively ran Java, and transmitted data over the normal 2G cellular network. RIM began to advertise these devices as email-capable mobile phones rather than as two-way pagers. At this time, the primary market was still businesses rather than consumers.
The 5810 was released on March 4, 2002. An aberration in this list, the 5790, was released at a much later date as a niche model in 2004 after many color BlackBerry models were out. This non-phone BlackBerry was made available due to the demand for a Java-based model that could run on the Mobitex data-only network. The 5810/5820 shared the same physical casing and keyboard layout as the earlier 957 device.
The 6000 series was launched in 2003 with 6210 entering the influential Time All Time 100 Gadgets list.
First color models (7000 series)
In 2003, the monochrome models were revised to include a color screen, while retaining the same form factor and casing. Early color models, such as the 7230, typically used a dim electroluminescent backlight, leading to an initial reputation of poor image quality. Later color models, such as the 7290, typically used a LED backlight, yielding much better screen quality. The color LCD screens used in these series were either reflective or transflective, so these screens yielded better image quality in direct sunlight even with the backlight turned off.
Nearly all models in this list were 16 MB models with no Bluetooth. The only model with 32 MB and Bluetooth is the 7290, which was the last model released in the early BlackBerry form factor, and was the first BlackBerry model with Bluetooth. The 7290 was also the first quad-band BlackBerry.
An aberration in this list is the 7270, the first Wi-Fi BlackBerry, released later. It is built into the old form factor in the same vein as the 7200 series.
First SureType models (7100 series)
RIM expanded the market by introducing the first BlackBerry models without a discrete QWERTY keyboard, in the candybar form factor. They developed a predictive text technology called SureType with a QWERTY-like layout, using two letters per button. By using only two letters per button, rather than three letters per button as in T9 using ten-digit keypads, predictive text accuracy could be improved dramatically. The use of a QWERTY-like layout took advantage of people's memory of the computer keyboard, since each button was roughly relative to each key. At the same time, the size of the BlackBerry could be dramatically reduced, as keyboards only needed to be 5-buttons wide rather than 10-buttons wide. These BlackBerries became more popular with the mass market as they became similarly sized to competing consumer-market cellphones.
These models were among the first BlackBerry models to be aggressively marketed to consumers, rather than to businesses. RIM continued to manufacture QWERTY models, to give the market a choice between the traditional QWERTY thumb keyboard, and the compressed SureType keyboard.
Consumer models (8000 and 9000 series)
Beginning with the 8700-series models in 2006, RIM began to aggressively add consumer features to BlackBerry models, in an aim to capture more of the consumer market from competitors such as Treo and Motorola Q. In this progression of models, the additions include better quality screens, more memory, built-in chat software, first cameraphone, microSD memory card slot, built-in mapping software, and other consumer-specific features. The BlackBerry Pearl 8100 was the first BlackBerry without a trackwheel, which was replaced by a miniature trackball to enable full 4-way and mouse-style navigation on a BlackBerry. The look of the new trackball gave the "Pearl" its name.
The 9000 series was launched in 2008.
BlackBerry 10
Android
Tablets
See also
References
External links
Current BlackBerry Smartphones
All BlackBerry devices, GSMArena.com
BlackBerry
BlackBerry products
BlackBerry Limited
BlackBerry | List of BlackBerry products | Technology | 1,058 |
51,518,803 | https://en.wikipedia.org/wiki/Steered-response%20power | Steered-response power (SRP) is a family of acoustic source localization algorithms that can be interpreted as a beamforming-based approach that searches for the candidate position or direction that maximizes the output of a steered delay-and-sum beamformer.
Steered-response power with phase transform (SRP-PHAT) is a variant using a "phase transform" to make it more robust in adverse acoustic environments.
Algorithm
Steered-response power
Consider a system of M microphones, where each microphone is denoted by a subindex m ∈ {1, …, M}. The discrete-time output signal from a microphone is s_m(n). The (unweighted) steered-response power (SRP) at a spatial point x can be expressed as
P(x) = Σ_{n∈Z} | Σ_{m=1}^{M} s_m(n + τ_m(x)) |²,
where Z denotes the set of integer numbers and τ_m(x) would be the time-lag due to the propagation from a source located at x to the m-th microphone.
The (weighted) SRP can be rewritten as
P(x) = Σ_{m1=1}^{M} Σ_{m2=1}^{M} (1/2π) ∫_{−π}^{π} Φ_{m1,m2}(ω) S_{m1}(ω) S_{m2}*(ω) e^{jω τ_{m1,m2}(x)} dω,
where * denotes complex conjugation, S_m(ω) represents the discrete-time Fourier transform of s_m(n) and Φ_{m1,m2}(ω) is a weighting function in the frequency domain (later discussed). The term τ_{m1,m2}(x) is the discrete time-difference of arrival (TDOA) of a signal emitted at position x to microphones m1 and m2, given by
τ_{m1,m2}(x) = round( f_s (‖x − x_{m1}‖ − ‖x − x_{m2}‖) / c ),
where f_s is the sampling frequency of the system, c is the sound propagation speed, x_m is the position of the m-th microphone, ‖·‖ is the 2-norm and round(·) denotes the rounding operator.
Generalized cross-correlation
The above SRP objective function can be expressed as a sum of generalized cross-correlations (GCCs) for the different microphone pairs, evaluated at the time-lag corresponding to their TDOA:
P(x) = Σ_{m1=1}^{M} Σ_{m2=1}^{M} R_{m1,m2}(τ_{m1,m2}(x)),
where the GCC for a microphone pair (m1, m2) is defined as
R_{m1,m2}(τ) = (1/2π) ∫_{−π}^{π} Φ_{m1,m2}(ω) S_{m1}(ω) S_{m2}*(ω) e^{jωτ} dω.
The phase transform (PHAT) is an effective GCC weighting for time delay estimation in reverberant environments that forces the GCC to consider only the phase information of the involved signals:
Φ_{m1,m2}(ω) = 1 / |S_{m1}(ω) S_{m2}*(ω)|.
Estimation of source location
The SRP-PHAT algorithm consists in a grid-search procedure that evaluates the objective function P(x) on a grid G of candidate source locations to estimate the spatial location of the sound source as the point of the grid that provides the maximum SRP:
x̂_s = arg max_{x ∈ G} P(x).
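A compact, illustrative Python sketch of the classical SRP-PHAT grid search is shown below. It is not a reference implementation: the microphone layout, grid, signal model and all function names are made up for the example, and the GCC-PHAT is computed with FFTs using circular (wrap-around) lags.

```python
import numpy as np

def gcc_phat(sig_a, sig_b, n_fft):
    """GCC-PHAT: inverse FFT of the phase-normalized cross-spectrum (circular lags)."""
    spec_a = np.fft.rfft(sig_a, n_fft)
    spec_b = np.fft.rfft(sig_b, n_fft)
    cross = spec_a * np.conj(spec_b)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    return np.fft.irfft(cross, n_fft)       # index tau holds R(tau)

def srp_phat(signals, mic_pos, grid, fs, c=343.0):
    """Evaluate the SRP-PHAT objective on a grid of candidate source positions
    and return the maximizing point (a didactic sketch, not an optimized solver)."""
    n_mics, n_fft = len(signals), len(signals[0])
    gccs = {(i, j): gcc_phat(signals[i], signals[j], n_fft)
            for i in range(n_mics) for j in range(i + 1, n_mics)}
    scores = []
    for x in grid:
        dists = [np.linalg.norm(x - m) for m in mic_pos]
        score = 0.0
        for (i, j), r in gccs.items():
            tau = int(round(fs * (dists[i] - dists[j]) / c))  # TDOA in samples
            score += r[tau % n_fft]                           # circular indexing
        scores.append(score)
    return grid[int(np.argmax(scores))]

# Toy example: wide square array, delayed copies of the same noise burst.
rng = np.random.default_rng(0)
fs = 16000
src = np.array([1.0, 2.0, 0.0])
mic_pos = [np.array(p, float) for p in ([0, 0, 0], [3, 0, 0], [0, 3, 0], [3, 3, 0])]
noise = rng.standard_normal(4096)
signals = [np.roll(noise, int(round(fs * np.linalg.norm(src - m) / 343.0))) for m in mic_pos]
grid = [np.array([x, y, 0.0]) for x in np.linspace(0, 3, 31) for y in np.linspace(0, 3, 31)]
print(srp_phat(signals, mic_pos, grid, fs))  # should land at or near [1.0, 2.0, 0.0]
```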
Modified SRP-PHAT
Modifications of the classical SRP-PHAT algorithm have been proposed to reduce the computational cost of the grid-search step of the algorithm and to increase the robustness of the method. In the classical SRP-PHAT, for each microphone pair and for each point of the grid, a unique integer TDOA value is selected to be the acoustic delay corresponding to that grid point. This procedure does not guarantee that all TDOAs are associated to points on the grid, nor that the spatial grid is consistent, since some of the points may not correspond to an intersection of hyperboloids. This issue becomes more problematic with coarse grids since, when the number of points is reduced, part of the TDOA information gets lost because most delays are not anymore associated to any point in the grid.
The modified SRP-PHAT collects and uses the TDOA information related to the volume surrounding each spatial point of the search grid by considering a modified objective function:
P'(x) = Σ_{m1=1}^{M} Σ_{m2=1}^{M} Σ_{τ = L^l_{m1,m2}(x)}^{L^u_{m1,m2}(x)} R_{m1,m2}(τ),
where L^l_{m1,m2}(x) and L^u_{m1,m2}(x) are the lower and upper accumulation limits of GCC delays, which depend on the spatial location x.
Accumulation limits
The accumulation limits can be calculated beforehand in an exact way by exploring the boundaries separating the regions corresponding to the points of the grid. Alternatively, they can be selected by considering the spatial gradient of the TDOA , where each component of the gradient is:
For a rectangular grid where neighboring points are separated a distance , the lower and upper accumulation limits are given by:
where and the gradient direction angles are given by
See also
Acoustic source localization
Multilateration
Audio signal processing
References
Acoustics
Signal processing
Digital signal processing | Steered-response power | Physics,Technology,Engineering | 751 |
5,977,216 | https://en.wikipedia.org/wiki/Dorodnitsyn%20Computing%20Centre | Dorodnitsyn Computing Centre, known as the Computing Centre of the Academy of Sciences (CC RAS) until 2015, was a research institute of the USSR Academy of Sciences and later the Russian Academy of Sciences. It was established in 1955.
History
The first resolution of the Presidium of the USSR Academy of Sciences on the creation of a Computing Center was adopted on December 3, 1951. At the same time, the questions of the profile, structure and staff of the CC were decided. On August 3, 1954, a Resolution of the USSR Council of Ministers was adopted on the commissioning of the USSR Academy of Sciences Computing Center in 1955. On January 14, 1955, the Presidium of the USSR Academy of Sciences discussed the report of S. A. Lebedev and the co-report of the Chairman of the Commission of the Presidium of the USSR Academy of Sciences, Academician M. A. Lavrentyev, on the progress of implementing this resolution. By this time, ITMiVT had carried out work on preparing mathematical personnel for the computer center being created. It was decided to complete the construction of the Computing Center building in the 2nd quarter of 1955. Academicians I. M. Vinogradov (Director of the Steklov Mathematical Institute), S. A. Lebedev (Director of ITMiVT, the USSR Academy of Sciences institute for computing technology) and A. A. Dorodnitsyn (at that time deputy head of TsAGI for science and, concurrently, head of a sector of the department of applied mathematics of the Steklov Mathematical Institute) were appointed responsible for preparing the organization of the Computing Center.
At the same meeting of the Presidium of the USSR Academy of Sciences, the tasks of the Computing Center were defined:
Carrying out research work in the field of development, generalization and implementation of methods for solving mathematical problems using modern computer technology;
Performing large-scale computational work, primarily for institutions of the USSR Academy of Sciences;
Studying operational qualities and mastering new computer technology;
Managing the planning and calculation of mathematical tables in the USSR.
The Computing Center of the USSR Academy of Sciences began its activities in February 1955, and Academician of the USSR Academy of Sciences Anatoly Dorodnitsyn was appointed its first director.
The personnel basis for the creation of the Computing Center was the department of applied mathematics of the Steklov Mathematical Institute (since 1951, Dorodnitsyn had headed a sector of this department) and a number of employees of ITMiVT. A competing proposal for the name of institutions of this type was "Institute of Cybernetics", put forward by Academician Glushkov; that name was later given to the Kiev Institute of Cybernetics.
Operations
The main activity and task of the Computing Center of the Academy of Sciences is the creation of new mathematical algorithms and software technologies for the use of computer technology in scientific research and in the national economy. Its areas of research include:
Computational Fluid Dynamics
Mathematical Physics
Mathematical modeling of Climatic Ecological Processes and other Nonlinear Phenomena
Solid mechanics and Elastic-Plastic Problems
Pattern Recognition and Image Analysis
Computer Aided Design
Optimization Methods, Linear and Nonlinear programming
Analytical mechanics and Lyapunov's Stability of Motion
Rigid body dynamics and Space Dynamics
Interactive Optimization and Decision support systems
Parallel Computing
Artificial Intelligence
Mathematical modeling of Economic Processes
Software development
The game Tetris was created by Alexey Pajitnov at the Computing Centre.
After 15 June 2015, CC RAS was incorporated into the Federal Research Centre "Informatics and Control" of the RAS and no longer exists as an independent institute.
Scientists
Andrey Ershov
Andrey Markov Jr.
Nikita Moiseyev
Valentin Vital'yevich Rumyantsev
Yuri Zhuravlyov
Leonid Khachiyan
Vladimir Alexandrov
References
External links
Dorodnitsyn Computing Centre website
Journal of Computational Mathematics and Mathematical Physics
Research institutes in the Soviet Union
Computing in the Soviet Union
Institutes of the Russian Academy of Sciences | Dorodnitsyn Computing Centre | Technology | 806 |
24,159,290 | https://en.wikipedia.org/wiki/Measurement%20of%20biodiversity | A variety of objective means exist to empirically measure biodiversity. Each measure relates to a particular use of the data, and is likely to be associated with the variety of genes. Biodiversity is commonly measured in terms of the taxonomic richness of a geographic area over a time interval. In order to calculate biodiversity, species evenness, species richness, and species diversity must be obtained first. Species evenness is the relative number of individuals of each species in a given area. Species richness is the number of species present in a given area. Species diversity is the relationship between species evenness and species richness. There are many ways to measure biodiversity within a given ecosystem; the two most popular are the Shannon–Weaver diversity index, commonly referred to as the Shannon diversity index, and the Simpson diversity index, although many scientists prefer the Shannon index simply because it also takes species richness into account.
Biodiversity is usually plotted as the richness of a geographic area, with some reference to a temporal scale. Types of biodiversity include taxonomic or species, ecological, morphological, and genetic diversity. Taxonomic diversity, that is, the number of species, genera, or families, is the most commonly assessed type. A few studies have attempted to quantitatively clarify the relationship between different types of diversity. For example, the biologist Sarda Sahney has found a close link between vertebrate taxonomic and ecological diversity.
Conservation biologists have also designed a variety of objective means to empirically measure biodiversity. Each measure of biodiversity relates to a particular use of the data. For practical conservationists, measurements should include . For others, a more economically defensible definition should allow the ensuring of continued possibilities for both adaptation and future use by humans, assuring environmental sustainability.
As a consequence, biologists argue that this measure is likely to be associated with the variety of genes. Since it cannot always be said which genes are more likely to prove beneficial, the best choice for conservation is to assure the persistence of as many genes as possible. For ecologists, this latter approach is sometimes considered too restrictive, as it prohibits ecological succession.
Taxonomic diversity
Biodiversity is usually plotted as taxonomic richness of a geographic area, with some reference to a temporal scale. Whittaker described three common metrics used to measure species-level biodiversity, encompassing attention to species richness or species evenness:
Species richness - the simplest of the indices available.
Simpson index
Shannon-Wiener index
More recently, two new indices have been invented. The Mean Species Abundance Index (MSA) calculates the trend in population size of a cross section of the species. It does this in line with the CBD 2010 indicator for species abundance. The Biodiversity Intactness Index (BII) measures biodiversity change using abundance data on plants, fungi and animals worldwide. The BII shows how local terrestrial biodiversity responds to human pressures such as land use change and intensification.
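The two classical indices listed above can be computed directly from species abundance counts. A minimal Python sketch (illustrative, not from the article):

```python
import math

def shannon_index(counts):
    """Shannon-Wiener index H' = -sum(p_i * ln p_i) over species proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson_index(counts):
    """Simpson's index D = sum(p_i^2); often reported as diversity 1 - D or 1/D."""
    total = sum(counts)
    return sum((c / total) ** 2 for c in counts)

# Example community: individuals counted for four species.
abundances = [40, 30, 20, 10]
print(shannon_index(abundances))      # ≈ 1.28
print(1 - simpson_index(abundances))  # Gini-Simpson diversity ≈ 0.70
```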
Other measures of diversity
Alternatively, other types of diversity may be plotted against a temporal timescale:
species diversity
ecological diversity
morphological diversity
genetic diversity
These different types of diversity may not be independent. There is, for example, a close link between vertebrate taxonomic and ecological diversity.
Other authors tried to organize the measurements of biodiversity in the following way:
traditional diversity measures
species density, which takes into account the number of species in an area
species richness, which takes into account the number of species per number of individuals (usually [species]/[individuals × area])
diversity indices, which take into account both the number of species (the richness) and their relative contribution (the evenness); e.g.:
Simpson index
Shannon-Wiener index
phylogenetic diversity measures, include information on phylogenetic relationships among species
phylogenetic diversity (PD) index; Faith (1992)
topology based measures
taxonomic distinctiveness; Vane-Wright et al. (1991)
taxonomic diversity; Warwick & Clarke (1995)
taxonomic distinctness; Clarke & Warwick (1998)
functional diversity measures, include information on functional traits among species
categoric measures
functional group richness (FGR); e.g., Tilman et al. (1997)
continuous measures
with only one functional trait; e.g., Mason et al. (2003)
multivariate measures, with many functional traits
functional attribute diversity (FAD); Walker et al. (1999)
convex hull volume; Cornwell et al. (2006)
functional diversity (FD); Petchey & Gaston (2002)
Scale
Diversity may be measured at different scales. These are three indices used by ecologists:
Alpha diversity refers to diversity within a particular area, community or ecosystem, and is measured by counting the number of taxa within the ecosystem (usually species)
Beta diversity is species diversity between ecosystems; this involves comparing the number of taxa that are unique to each of the ecosystems.
Gamma diversity is a measurement of the overall diversity for different ecosystems within a region.
See also
Convention on Biological Diversity
Diversity index
Global biodiversity
List of biodiversity databases
National Biodiversity Network
Nutritional biodiversity
References
External links
Biodiversity
Environmental science | Measurement of biodiversity | Biology,Environmental_science | 1,013 |
11,436,443 | https://en.wikipedia.org/wiki/Cercospora%20cannabis | Cercospora cannabis is a fungal plant pathogen.
References
cannabis
Fungal plant pathogens and diseases
Hemp diseases
Fungus species | Cercospora cannabis | Biology | 26 |
20,480,847 | https://en.wikipedia.org/wiki/Hydraulic%20recoil%20mechanism | A hydraulic recoil mechanism is a way of limiting the effects of recoil and adding to the accuracy and firepower of an artillery piece.
Description
The idea of using a water brake to counteract the recoil of naval cannons was first suggested to the British Admiralty by Carl Wilhelm Siemens in the early 1870s, but it took about a decade for other people (primarily Josiah Vavasseur) to commercialize the idea.
The usual recoil system in modern quick-firing guns is the hydro-pneumatic recoil system. In this system, the barrel is mounted on rails on which it can recoil to the rear, and the recoil is taken up by a cylinder which is similar in operation to an automotive gas-charged shock absorber, and is commonly visible as a cylinder mounted parallel to the barrel of the gun, but shorter and smaller than it. The cylinder contains a charge of compressed air, as well as hydraulic oil; in operation, the barrel's energy is taken up in compressing the air as the barrel recoils backward, then is dissipated via hydraulic damping as the barrel returns forward to the firing position. The recoil impulse is thus spread out over the time in which the barrel is compressing the air, rather than over the much narrower interval of time when the projectile is being fired. This greatly reduces the peak force conveyed to the mount (or to the ground on which the gun has been emplaced).
See also
Canon de 75 modèle 1897, the first field gun employing a hydro-pneumatic recoil mechanism
List of British ordnance terms
External links
A "cutaway" animation of the Canon de 75 modèle 1897 showing the parts and operation of its revolutionary recoil mechanism
References
Artillery components | Hydraulic recoil mechanism | Technology | 340 |
15,064,475 | https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S28 | 40S ribosomal protein S28 is a protein that in humans is encoded by the RPS28 gene.
Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S28E family of ribosomal proteins. It is located in the cytoplasm. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome.
References
Further reading
Ribosomal proteins | 40S ribosomal protein S28 | Chemistry | 141 |
1,205,310 | https://en.wikipedia.org/wiki/Practical%20number | In number theory, a practical number or panarithmic number is a positive integer such that all smaller positive integers can be represented as sums of distinct divisors of . For example, 12 is a practical number because all the numbers from 1 to 11 can be expressed as sums of its divisors 1, 2, 3, 4, and 6: as well as these divisors themselves, we have 5 = 3 + 2, 7 = 6 + 1, 8 = 6 + 2, 9 = 6 + 3, 10 = 6 + 3 + 1, and 11 = 6 + 3 + 2.
The sequence of practical numbers begins 1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40, ...
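A direct way to test the definition is a subset-sum check over the divisors. The brute-force sketch below (plain Python, suitable only for small n) enumerates the practical numbers up to 40 straight from the definition:

```python
def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_practical(n):
    """True if every integer 1..n-1 is a sum of distinct divisors of n."""
    reachable = {0}
    for d in divisors(n):
        # Extend the set of achievable subset sums by optionally adding d.
        reachable |= {s + d for s in reachable}
    return all(m in reachable for m in range(1, n))

print([n for n in range(1, 41) if is_practical(n)])
# -> [1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, 36, 40]
```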
Practical numbers were used by Fibonacci in his Liber Abaci (1202) in connection with the problem of representing rational numbers as Egyptian fractions. Fibonacci does not formally define practical numbers, but he gives a table of Egyptian fraction expansions for fractions with practical denominators.
The name "practical number" is due to . He noted that "the subdivisions of money, weights, and measures involve numbers like 4, 12, 16, 20 and 28 which are usually supposed to be so inconvenient as to deserve replacement by powers of 10." His partial classification of these numbers was completed by and . This characterization makes it possible to determine whether a number is practical by examining its prime factorization. Every even perfect number and every power of two is also a practical number.
Practical numbers have also been shown to be analogous with prime numbers in many of their properties.
Characterization of practical numbers
The original characterisation by Srinivasan stated that a practical number cannot be a deficient number, that is, one for which the sum of all divisors (including 1 and itself) is less than twice the number, unless the deficiency is one. If the ordered set of all divisors of the practical number is with and , then Srinivasan's statement can be expressed by the inequality
In other words, the ordered sequence of all divisors of a practical number has to be a complete sub-sequence.
This partial characterization was extended and completed by and who showed that it is straightforward to determine whether a number is practical from its prime factorization.
A positive integer greater than one with prime factorization (with the primes in sorted order ) is practical if and only if each of its prime factors is small enough for to have a representation as a sum of smaller divisors. For this to be true, the first prime must equal 2 and, for every from 2 to , each successive prime must obey the inequality
where σ(x) denotes the sum of the divisors of x. For example, 2 × 3² × 29 × 823 = 429606 is practical, because the inequality above holds for each of its prime factors: 3 ≤ σ(2) + 1 = 4, 29 ≤ σ(2 × 3²) + 1 = 40, and 823 ≤ σ(2 × 3² × 29) + 1 = 1171.
The condition stated above is necessary and sufficient for a number to be practical. In one direction, this condition is necessary in order to be able to represent as a sum of divisors of , because if the inequality failed to be true then even adding together all the smaller divisors would give a sum too small to reach . In the other direction, the condition is sufficient, as can be shown by induction.
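The characterization gives a much faster practicality test than the subset-sum approach: factor n, require the smallest prime factor to be 2, and compare each later prime with the sum of divisors of the part already processed. A sketch, using simple trial-division factorization (adequate for modest n; the function names are arbitrary):

```python
def factorize(n):
    """Prime factorization of n as a list of (prime, exponent), primes ascending."""
    factors, p = [], 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def sigma(prime_powers):
    """Sum of divisors of the number given by a list of (prime, exponent)."""
    total = 1
    for p, e in prime_powers:
        total *= (p ** (e + 1) - 1) // (p - 1)
    return total

def is_practical(n):
    """Stewart/Sierpinski test: smallest prime is 2 and each later prime
    is at most sigma(part already processed) + 1."""
    if n == 1:
        return True
    factors = factorize(n)
    if factors[0][0] != 2:
        return False
    for i in range(1, len(factors)):
        if factors[i][0] > sigma(factors[:i]) + 1:
            return False
    return True

print(is_practical(429606))                              # 2 * 3**2 * 29 * 823 -> True
print([n for n in range(1, 41) if is_practical(n)])      # same list as the brute-force test
```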
More strongly, if the factorization of satisfies the condition above, then any can be represented as a sum of divisors of , by the following sequence of steps:
By induction on , it can be shown that . Hence .
Since the intervals cover for , there are such a and some such that .
Since and can be shown by induction to be practical, we can find a representation of q as a sum of divisors of .
Since , and since can be shown by induction to be practical, we can find a representation of r as a sum of divisors of .
The divisors representing r, together with times each of the divisors representing q, together form a representation of m as a sum of divisors of .
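In code, the full inductive construction is not needed to produce such a representation: because the sorted divisors of a practical number satisfy Srinivasan's inequality (each divisor is at most one more than the sum of the smaller ones), a single greedy pass from the largest divisor downward already yields a valid representation. The sketch below relies on that observation rather than on the step-by-step argument above:

```python
def divisors_desc(n):
    """Positive divisors of n, largest first."""
    return sorted((d for d in range(1, n + 1) if n % d == 0), reverse=True)

def as_distinct_divisors(m, n):
    """Write m (1 <= m <= sigma(n)) as a sum of distinct divisors of the
    practical number n, by greedily taking the largest divisor that still fits."""
    parts, remaining = [], m
    for d in divisors_desc(n):
        if d <= remaining:
            parts.append(d)
            remaining -= d
    if remaining != 0:
        raise ValueError(f"{m} is not representable; is {n} really practical?")
    return parts

print(as_distinct_divisors(11, 12))   # [6, 4, 1]
print(as_distinct_divisors(17, 18))   # [9, 6, 2]
```

Note that the greedy answer need not match the particular representation quoted earlier in the text (for example it gives 11 = 6 + 4 + 1 rather than 6 + 3 + 2); any sum of distinct divisors is acceptable.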
Properties
The only odd practical number is 1, because if is an odd number greater than 2, then 2 cannot be expressed as the sum of distinct divisors. More strongly, observes that other than 1 and 2, every practical number is divisible by 4 or 6 (or both).
The product of two practical numbers is also a practical number. Equivalently, the set of all practical numbers is closed under multiplication. More strongly, the least common multiple of any two practical numbers is also a practical number.
From the above characterization by Stewart and Sierpiński it can be seen that if is a practical number and is one of its divisors then must also be a practical number. Furthermore, a practical number multiplied by power combinations of any of its divisors is also practical.
In the set of all practical numbers there is a primitive set of practical numbers. A primitive practical number is either practical and squarefree or practical and when divided by any of its prime factors whose factorization exponent is greater than 1 is no longer practical. The sequence of primitive practical numbers begins
Every positive integer has a practical multiple. For instance, for every integer , its multiple is practical.
Every odd prime has a primitive practical multiple. For instance, for every odd prime , its multiple is primitive practical. This is because is practical but when divided by 2 is no longer practical. A good example is a Mersenne prime of the form . Its primitive practical multiple is which is an even perfect number.
Relation to other classes of numbers
Several other notable sets of integers consist only of practical numbers:
From the above properties with a practical number and one of its divisors (that is, ) then must also be a practical number therefore six times every power of 3 must be a practical number as well as six times every power of 2.
Every power of two is a practical number. Powers of two trivially satisfy the characterization of practical numbers in terms of their prime factorizations: the only prime in their factorizations, p1, equals two as required.
Every even perfect number is also a practical number. This follows from Leonhard Euler's result that an even perfect number must have the form . The odd part of this factorization equals the sum of the divisors of the even part, so every odd prime factor of such a number must be at most the sum of the divisors of the even part of the number. Therefore, this number must satisfy the characterization of practical numbers. A similar argument can be used to show that an even perfect number when divided by 2 is no longer practical. Therefore, every even perfect number is also a primitive practical number.
Every primorial (the product of the first primes, for some ) is practical. For the first two primorials, two and six, this is clear. Each successive primorial is formed by multiplying a prime number by a smaller primorial that is divisible by both two and the next smaller prime, . By Bertrand's postulate, , so each successive prime factor in the primorial is less than one of the divisors of the previous primorial. By induction, it follows that every primorial satisfies the characterization of practical numbers. Because a primorial is, by definition, squarefree it is also a primitive practical number.
Generalizing the primorials, any number that is the product of nonzero powers of the first primes must also be practical. This includes Ramanujan's highly composite numbers (numbers with more divisors than any smaller positive integer) as well as the factorial numbers.
Practical numbers and their primitives
Every practical number is either a primitive practical number or a primitive multiplied by power combinations of its divisors. In fact, these power combinations need only comprise the prime divisors. Consequently each primitive practical number, except 1, can be considered a progenitor of an infinite subsequence of practical numbers. For example, the primitive practical number 6, whose prime divisors are 2 and 3, can generate the following subsequence of practical numbers:
6, 12, 18, 24, 36, 48, 54, 72, 96, 108, 144, 162, 192, 216, 288, . . .
where all terms in the subsequence are numbers of the form for . From the above, it can be seen that the primitive and all its progeny are practical numbers that share the same radical.
The relation "Practical numbers that have the same primitive progenitor" is an equivalence relation. It means that every primitive practical number is the progenitor of a disjoint subsequence of practical numbers. The primitive practical number 1 is the progenitor of itself and generates a subsequence with one term while all other primitives generate infinite disjoint subsequences.
To find the primitive of a practical number, it is necessary to divide the practical number by a primitive, starting with the largest primitive closest to it and reducing, until a primitive is found that divides the practical number such that the quotient has the same radical as the primitive. There will always be a solution, since all practical numbers apart from 1 are even and 2 is a primitive practical number.
Practical numbers and Egyptian fractions
If is practical, then any rational number of the form with may be represented as a sum where each is a distinct divisor of . Each term in this sum simplifies to a unit fraction, so such a sum provides a representation of as an Egyptian fraction. For instance,
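One such representation, worked here purely as an illustration: taking the practical number n = 20 and m = 13, the divisors 10 + 2 + 1 sum to 13, so 13/20 = 1/2 + 1/10 + 1/20. A short sketch automating this (it repeats the greedy divisor representation from the sketch further above so that it is self-contained):

```python
from fractions import Fraction

def distinct_divisor_sum(m, n):
    """Greedy representation of m as a sum of distinct divisors of the practical number n."""
    parts, remaining = [], m
    for d in sorted((d for d in range(1, n + 1) if n % d == 0), reverse=True):
        if d <= remaining:
            parts.append(d)
            remaining -= d
    assert remaining == 0, "n must be practical and m <= sigma(n)"
    return parts

def egyptian_fraction(m, n):
    """Distinct unit fractions summing to m/n, for a practical denominator n."""
    # Each divisor d of n contributes d/n, which reduces to the unit fraction 1/(n/d).
    return [Fraction(d, n) for d in distinct_divisor_sum(m, n)]

terms = egyptian_fraction(13, 20)
print(terms)                            # [Fraction(1, 2), Fraction(1, 10), Fraction(1, 20)]
print(sum(terms) == Fraction(13, 20))   # True
```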
Fibonacci, in his 1202 book Liber Abaci lists several methods for finding Egyptian fraction representations of a rational number. Of these, the first is to test whether the number is itself already a unit fraction, but the second is to search for a representation of the numerator as a sum of divisors of the denominator, as described above. This method is only guaranteed to succeed for denominators that are practical. Fibonacci provides tables of these representations for fractions having as denominators the practical numbers 6, 8, 12, 20, 24, 60, and 100.
showed that every rational number has an Egyptian fraction representation with terms. The proof involves finding a sequence of practical numbers with the property that every number less than may be written as a sum of distinct divisors of . Then, is chosen so that , and is divided by giving quotient and remainder . It follows from these choices that . Expanding both numerators on the right hand side of this formula into sums of divisors of results in the desired Egyptian fraction representation. use a similar technique involving a different sequence of practical numbers to show that every rational number has an Egyptian fraction representation in which the largest denominator is .
According to a September 2015 conjecture by Zhi-Wei Sun, every positive rational number has an Egyptian fraction representation in which every denominator is a practical number. The conjecture was proved by .
Analogies with prime numbers
One reason for interest in practical numbers is that many of their properties are similar to properties of the prime numbers.
Indeed, theorems analogous to Goldbach's conjecture and the twin prime conjecture are known for practical numbers: every positive even integer is the sum of two practical numbers, and there exist infinitely many triples of practical numbers . Melfi also showed that there are infinitely many practical Fibonacci numbers ; the analogous question of the existence of infinitely many Fibonacci primes is open. showed that there always exists a practical number in the interval for any positive real , a result analogous to Legendre's conjecture for primes. Moreover, for all sufficiently large , the interval contains many practical numbers.
Let count how many practical numbers are at
conjectured that is asymptotic to for some constant , a formula which resembles the prime number theorem, strengthening the earlier claim of that the practical numbers have density zero in the integers.
Improving on an estimate of , found that has order of magnitude .
proved Margenstern's conjecture. We have
where Thus the practical numbers are about 33.6% more numerous than the prime numbers. The exact value of the constant factor is given by
where is the Euler–Mascheroni constant and runs over primes.
As with prime numbers in an arithmetic progression, given two natural numbers and ,
we have
The constant factor is positive if, and only if, there is more than one practical number congruent to .
If , then .
For example, about 38.26% of practical numbers have a last decimal digit of 0, while the last digits of 2, 4, 6, 8 each occur with the same relative frequency of 15.43%.
Notes
References
External links
Tables of practical numbers compiled by Giuseppe Melfi.
Integer sequences
Egyptian fractions | Practical number | Mathematics | 2,679 |
11,471,473 | https://en.wikipedia.org/wiki/Phomopsis%20lokoyae | Phomopsis lokoyae is a fungal plant pathogen infecting Douglas-firs.
References
External links
USDA ARS Fungal Database
Fungal conifer pathogens and diseases
lokoyae
Fungus species | Phomopsis lokoyae | Biology | 41 |
2,993,422 | https://en.wikipedia.org/wiki/Immunostimulant | Immunostimulants, also known as immunostimulators, are substances (drugs and nutrients) that stimulate the immune system usually in a non-specific manner by inducing activation or increasing activity of any of its components. One notable example is the granulocyte macrophage colony-stimulating factor. The goal of this stimulated immune response is usually to help the body have a stronger immune system response in order to improve outcomes in the case of an infection or cancer malignancy. There is also some evidence that immunostimulants may be useful to help decrease severe acute illness related to chronic obstructive pulmonary disease or acute infections in the lungs.
Classification
There are two main categories of immunostimulants:
Specific immunostimulants provide antigenic specificity in immune response, such as vaccines or any antigen.
Non-specific immunostimulants act irrespective of antigenic specificity to augment immune response of other antigen or stimulate components of the immune system without antigenic specificity, such as adjuvants and non-specific immunostimulators.
Non-specific
Many endogenous substances are non-specific immunostimulators. For example, female sex hormones are known to stimulate both adaptive and innate immune responses. Some autoimmune diseases such as lupus erythematosus strike women preferentially, and their onset often coincides with puberty. Other hormones appear to regulate the immune system as well, most notably prolactin, growth hormone and vitamin D.
Some publications point towards the effect of deoxycholic acid (DCA) as an immunostimulant of the non-specific immune system, activating its main actors, the macrophages. According to these publications, a sufficient amount of DCA in the human body corresponds to a good immune reaction of the non-specific immune system.
Claims made by marketers of various products and alternative health providers, such as chiropractors, homeopaths, and acupuncturists to be able to stimulate or "boost" the immune system generally lack meaningful explanation and evidence of effectiveness.
Uses
Immunostimulants have been recommended to help prevent acute illness related to chronic obstructive pulmonary disease, and they are sometimes used to treat chronic bronchitis. The evidence in the form of high-quality clinical trials to support their use is weak; however, there is some evidence of benefit, and they appear to be safe. The most commonly used immunostimulants for this purpose are bacterial-derived immunostimulants. The goal is to stimulate the person's immune system in order to prevent future infections that may result in an acute episode or exacerbation of COPD.
See also
General
Antigen
Co-stimulation
Immunogenicity
Immunologic adjuvant
Immunomodulator
Immunotherapy
Endogenous immunostimulants
Deoxycholic acid, a stimulator of macrophages
Synthetic immunostimulants
Imiquimod and resiquimod, activate immune cells through the toll-like receptor 7
References
External links
Veterinary Immunology and Immunopathology journal
Deoxycholic acid as immunostimulant
Immunology
Immune system | Immunostimulant | Biology | 687 |
11,777,722 | https://en.wikipedia.org/wiki/PhilNITS | The Philippine National Information Technology Standards Foundation, Inc., or PhilNITS, is a non-stock, non-profit, non-government organization that is implementing in the Philippines the Information Technology standards adopted from Japan, with the support of the Department of Trade and Industry (DTI) of the Philippines and the Ministry of Economy, Trade and Industry (METI) of Japan.
History
PhilNITS was initially known as the Japanese IT Standards Exams of the Philippines Foundation, Inc. (JITSE-Phil), and was registered as such with the Securities and Exchange Commission on April 10, 2002, and set up its office at the Penthouse of the Prince Bldg. in Rada Street, Legazpi Village, Makati.
A week after its incorporation, the Japan IT Engineers Examination Center (JITEC) represented by its president, Mr. Takao Tominaga, signed a mutual recognition agreement (MRA) with the JITSE-Phil Foundation, represented by its Founding President, Ms. Ma. Corazon M. Akol in ceremonies held at the Makati Shangri-La, Manila Hotel and witnessed by Ambassador Ara of Japan, Secretary Mar Roxas of the Department of Trade and Industry, Chairman Virgilio Pena of Information Technology and E-Commerce Council (now Commission on Information and Communications Technology), Mr. Yoshikai, Deputy Director General of the Ministry of Economy, Trade and Industry, and Mr. Sakai, Commercial Attache of Japan to the Philippines.
DTI started to support JITSE-Phil in 2003 by providing it with its office space at the Oppen Building in Makati, in 2005, at the WDC Building in Cebu City and in 2007, at the Mintrade Bldg. in Davao City.
With the MRA between JITEC and JITSE-Phil, it was able to receive technical support from Japan. JITEC has been providing guidance, training, necessary hardware/software programs, and the documentation required in implementing the Standards on the Fundamentals of IT (FE) and in Software Design and Development (SW).
On May 29, 2003, the Bureau of Product Standards (BPS) of the Department of Trade and Industry (DTI), after due consultation with the National Computer Center (NCC), accepted JITSE as the Phil. National Standard - PNS 2030:2003 Information Technology Engineers Skills Standards.
On May 30, 2003, after an evaluation of the results of the First Certification Exams conducted by JITSE-Phil on the Fundamentals of IT Engineers (FE), the Ministry of Justice of Japan recognized JITSE as the Philippine Nihon Joho Gijutsu Hyojun Shiken Zaidan. With this official recognition by the Ministry of Justice, the FE Certificate, more popularly called the JITSE Certificate, can be used as a valid document for processing the work visas of IT professionals bound for Japan.
With the Asia IT Initiative Program (AITI) of METI, JITSE-Phil was able to receive technical support from Japan through the Japan External Trade Organization (JETRO), the Association for Overseas Technical Scholarships (AOTS) and the Center of the International Cooperation on Computerization (CICC).
JETRO has provided JITSE-Phil since 2003, through the Japan Expert Service Abroad (JEXSA) Project, technical experts and the training facilities in its offices in Makati, Cebu, and Davao. AOTS has provided Training Courses in the Philippines (in the various offices of JITSE-Phil as well as in some schools in provinces where JITSE-Phil has no Training Center) and have awarded scholarships for training in Japan.
On August 31, 2004. JITSE-Phil changed its name to PhilNITS Foundation to correct the misconception that the standards being implemented only for the Japanese market but for Asia.
In the ITEE Conference held at the AOTS Yokohama Kenshu Center, JITEC and the 6 organizations with mutual recognition agreements (MRAs) with JITEC, decided to form the Information Technology Professional Examination Council or ITPEC. The members of ITPEC are: the Japan Information Technology Examination Center (JITEC) of Japan, the Multimedia Technology Enhancement & Operations Sendirian Berhad (METEOR) of Malaysia, the Myanmar Computer Federation (MCF) of Myanmar, the Japan-Mongolian Information Technology Association (JMITA) of Mongolia, (now replaced by the National IT Park, NITP), the Philippine National IT Standards Foundation (PhilNITS) of the Philippines, the National Electronics and Computer technology Center (NECTEC) of Thailand, and the Vietnam Information Technology Examination and Training Support Center (VITEC) of Vietnam. In this Agreement, the members decided to have a common exam, on the same agreed upon date and time, and to recognize each other’s Certificate. ITPEC members have been using the same logo to raise public recognition of the examination and have adopted a common marketing strategy. JITEC-IPA expects to establish multi-lateral mutual recognition agreements, and transform ITPEC into a fully Asia-wide organization.
Through Grants from AOTS, PhilNITS has trained 8,174 IT Professionals in the Philippines and has sent 197 scholars to Japan. CICC has provided several Training programs as well as an e Learning System consisting of the hardware (2 servers and 4 terminals), the software (that can accommodate a maximum of 2000 users) and contents consisting of 24 modules developed by JITEC and CICC and 1 module developed by the Thomson Learning Center, donated by Fujitsu to PhilNITS. All the modules are made available to the public, 24 hours a day, 7 days a week through a subscription fee of ₱500.00 per month.
Current Certifications
The ITPEC certification exams (known locally as PhilNITS certification exams) are administered as written below. Currently there are three levels of examination being administered by PhilNITS. Topics covered by the exams are those of technology, strategy and management. IT professionals who pass these certification examinations are certified for life. An IT professional may take directly the certification level he/she would want to take. There is no limit to the number of times you take the examinations until you pass the exams.
ITPEC Fundamentals of IT Passport Exam (IP Exam, Level 1)
Known locally as PhilNITS IP, the Information Technology Passport Exam is for individuals who have basic knowledge in IT that all business workers should commonly possess, and who are doing information technology related tasks or trying to utilize IT related technology in their tasks. The exam duration is 120 minutes or 2 hours (conducted during a morning schedule). It consists of 100 multiple-choice questions (one correct answer out of four choices) broken down into two types: the short question type, one question per item, 88 questions; and the medium question type, four questions per item, 12 questions (3 items).
The pilot examination was conducted last March 28, 2010. The first regular exam was conducted last October 24, 2010. This exam is conducted twice a year in last Sunday of April and last Sunday of October.
ITPEC Fundamentals of IT Engineers Exam (FE Exam, Level 2)
Known locally as PhilNITS FE, this exam is conducted twice a year in the last Sunday of April and last Sunday of October. The 300 minute (150 minutes in the morning and 150 minutes in the afternoon) multiple choice examination are administered in 10 exam centers in the Philippines: University of Baguio for the north Luzon area, Philippine Christian University in Manila, Ateneo de Naga University for the Bicol Region, University of San Carlos in Cebu and Holy Name University in Bohol, Lorma Colleges in La Union and Leyte Academic Center for the Visayas region, and for Mindanao: Capitol University in Cagayan de Oro, Ateneo de Zamboanga University in Zamboanga City and the University of the Immaculate Conception in Davao City.
The Fundamental IT Engineers Exam is for individuals who have basic fundamental knowledge and skills required to be an advance IT human resource, who possess practical utilization abilities. Those who fail either the morning or afternoon part of the exam are given another chance to take the removal exam after which they will have to take the entire test again. This means two chances to pass the exam.
ITPEC Applied Information Technology Exam (AP Exam, Level 3)
Known locally as PhilNITS AP, this exam is conducted once a year on the last Sunday of October. The multiple choice exam is for 300 minutes (150 minutes in the morning and 150 minutes in the afternoon). This examination is for individuals who have applied knowledge and skills required to be an advanced IT human resource, and who have established their own direction as an advanced IT human resource. It is ideally given to people who have at least two years' work experience.
Other projects
Conducting free training courses on IP, FE & AP/SW for teachers and commercial trainers. Usually funded by grants from AOTS, CICC, IPA, METI, and JETRO.
Conducting free summer training of teachers, a joint undertaking with the Philippine Society of IT Educators (PSITE) and the Philippine Accrediting Association of Schools, Colleges and Universities (PAASCU). PAASCU is providing the venue and PhilNITS is providing the lecturers (from the PhilNITS Society).
Providing training using the e-learning system donated by Japan.
Guiding and helping in the development of the PhilNITS Society, an organization formed whose only criterion for membership is being PhilNITS-certified, in any exam category.
Conducting software and hardware training. Customized training and assessment also available.
Conducting Systems Development for Outsourced Projects. Work is done by Bridge Software Engineers who are Nihongo proficient.
Board
Officers of PhilNITS are:
Ms. Ma. Corazon M. Akol – President
Mr. Peter Que Jr. – VP for Operations
Mr. Shinichiro Kato – VP for Finance
Ms. Flora Capili – Secretary.
See also
ITPEC - East Asia
Open University Malaysia METEOR - Malaysia
NECTEC - Thailand
References
External links
Information technology qualifications
Organizations based in Metro Manila
Science and technology in the Philippines | PhilNITS | Technology | 2,063 |
23,979,786 | https://en.wikipedia.org/wiki/C13H10O |
The molecular formula C13H10O (molar mass: 182.22 g/mol, exact mass: 182.0732 u) may refer to:
Benzophenone
Fluorenol
Xanthene (9H-xanthene, 10H-9-oxaanthracene) | C13H10O | Chemistry | 81 |
16,819,755 | https://en.wikipedia.org/wiki/Membrane%20fouling | Membrane fouling is a process whereby a solution or a particle is deposited on a membrane surface or in membrane pores in a processes such as in a membrane bioreactor, reverse osmosis, forward osmosis, membrane distillation, ultrafiltration, microfiltration, or nanofiltration so that the membrane's performance is degraded. It is a major obstacle to the widespread use of this technology. Membrane fouling can cause severe flux decline and affect the quality of the water produced. Severe fouling may require intense chemical cleaning or membrane replacement. This increases the operating costs of a treatment plant. There are various types of foulants: colloidal (clays, flocs), biological (bacteria, fungi), organic (oils, polyelectrolytes, humics) and scaling (mineral precipitates).
Fouling can be divided into reversible and irreversible fouling based on the attachment strength of particles to the membrane surface. Reversible fouling can be removed by a strong shear force or backwashing. Formation of a strong matrix of fouling layer with the solute during a continuous filtration process will result in reversible fouling being transformed into an irreversible fouling layer. Irreversible fouling is the strong attachment of particles which cannot be removed by physical cleaning.
Influential factors
Factors that affect membrane fouling:
Recent fundamental studies indicate that membrane fouling is influenced by numerous factors such as system hydrodynamics, operating conditions, membrane properties, and material properties (solute). At low pressure, low feed concentration, and high feed velocity, concentration polarisation effects are minimal and flux is almost proportional to trans-membrane pressure difference. However, in the high pressure range, flux becomes almost independent of applied pressure. Deviation from linear flux-pressure relation is due to concentration polarization. At low feed flow rate or with high feed concentration, the limiting flux situation is observed even at relatively low pressures.
Measurement
Flux, transmembrane pressure (TMP), Permeability, and Resistance are the best indicators of membrane fouling. Under constant flux operation, TMP increases to compensate for the fouling. On the other hand, under constant pressure operation, flux declines due to membrane fouling. In some technologies such as membrane distillation, fouling reduces membrane rejection, and thus permeate quality (e.g. as measured by electrical conductivity) is a primary measurement for fouling.
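A common way to turn such measurements into a single fouling number is the resistance-in-series form of Darcy's law, J = TMP / (μ (R_m + R_f)), from which the fouling resistance R_f can be backed out once the clean-membrane resistance R_m is known. A sketch with made-up illustrative values (not measurements from any real module):

```python
# Resistance-in-series estimate of fouling resistance from flux and TMP.
# All numerical values are illustrative, not measurements from a real module.

VISCOSITY = 1.0e-3          # Pa*s, water near 20 C
R_MEMBRANE = 2.0e12         # 1/m, clean-membrane resistance from a clean-water test

def flux(tmp_pa, r_fouling):
    """Permeate flux J = TMP / (mu * (Rm + Rf)), in m^3 per m^2 per s."""
    return tmp_pa / (VISCOSITY * (R_MEMBRANE + r_fouling))

def fouling_resistance(tmp_pa, measured_flux):
    """Back out the fouling resistance Rf from a measured flux at a given TMP."""
    return tmp_pa / (VISCOSITY * measured_flux) - R_MEMBRANE

clean_flux = flux(100e3, 0.0)                  # 100 kPa applied, no fouling
print("clean flux  :", clean_flux, "m/s")
fouled_flux = 0.4 * clean_flux                 # suppose flux has dropped to 40% of clean
print("fouling R_f :", fouling_resistance(100e3, fouled_flux), "1/m")
```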
Fouling control
Even though membrane fouling is an inevitable phenomenon during membrane filtration, it can be minimised by strategies such as cleaning, appropriate membrane selection and choice of operating conditions.
Membranes can be cleaned physically, biologically or chemically. Physical cleaning includes gas scour, sponges, water jets or backflushing using permeate or pressurized air. Biological cleaning uses biocides to remove all viable microorganisms, whereas chemical cleaning involves the use of acids and bases to remove foulants and impurities.
Additionally, researchers have investigated the impact different coatings have on resistance to wear. A 2018 study from the Global Aqua Innovation Center in Japan reported improved surface roughness properties of PA membranes by coating them with multi-walled carbon nanotubes.
Another strategy to minimise membrane fouling is the use of the appropriate membrane for a specific operation. The nature of the feed water must first be known; then a membrane that is less prone to fouling with that solution is chosen. For aqueous filtration, a hydrophilic membrane is preferred. For membrane distillation, a hydrophobic membrane is preferred.
Operating conditions during membrane filtration are also vital, as they may affect fouling conditions during filtration. For instance, crossflow filtration is often preferred to dead end filtration, because turbulence generated during the filtration entails a thinner deposit layer and therefore minimises fouling (e.g. tubular pinch effect). In some applications such as in many MBR applications, air scour is used to promote turbulence at the membrane surface.
Impact of Fouling on the Mechanical Properties of Membranes
Membrane performance can suffer from fouling-induced mechanical degradation. This may result in unwanted pressure and flux gradients, both of the solute and the solvent. The mechanism of membrane failure may be the direct consequence of fouling by means of physical alterations to the membrane, or by indirect means, in which the foulant removal processes yield membrane damage.
Direct Impacts of Fouling
It is important to note that the majority of membranes used commercially are polymers such as polyvinylidene fluoride (PVDF), polyacrylonitrile (PAN), polyethersulfone (PES) and polyamide (PA), which are materials which offer desirable properties (elasticity and strength) to withstand constant osmotic pressures. The accumulation of foulants, however, degrades these properties through physical alterations to the membrane structure.
The accumulation of foulants can lead to the formation of cracks, surface roughening, and changes in pore size distribution. These physical changes are the result of impacts of hard material with a soft polymer membrane, weakening its structural integrity. Degradation of the mechanical structure makes the membranes more susceptible to mechanical damage, potentially reducing its overall lifespan. A 2006 study observed this degradation by uniaxially straining hollow fibers that were both clean and fouled. The researchers reported the relative embrittlement of the fouled fibers.
Indirect Impacts of Fouling
Beyond direct physical damage, fouling can also induce indirect effects on membrane mechanical properties due to the strategies used to combat it. Backwashing subjects not only the particulates, but the membrane to strong shear forces. Greater fouling frequency therefore exposes the membrane to cyclic loading which can lead to fatigue failure. This is a process whereby existing imperfections in the membrane (such as microcracks) can grow and propagate due to the complex stress state dynamics. These impacts are not unknown; A 2007 study simulated aging via cyclic backwash pulses, and reported similar embrittlement due to the effects.
Additionally, repeated chemical treatment of fouling subjects membranes to excessive amounts of chlorine or other treatment chemicals which can cause degradation. This chemical degradation can lead to delamination of the membrane components, ultimately leading to failure.
See also
Vibratory shear-enhanced process
Water purification
References
Water technology
Fouling
Membrane technology | Membrane fouling | Chemistry,Materials_science | 1,321 |
2,917,407 | https://en.wikipedia.org/wiki/Diesel%20particulate%20filter | A diesel particulate filter (DPF) is a device designed to remove diesel particulate matter or soot from the exhaust gas of a diesel engine.
Mode of action
Wall-flow diesel particulate filters usually remove 85% or more of the soot, and under certain conditions can attain soot removal efficiencies approaching 100%. Some filters are single-use, intended for disposal and replacement once full of accumulated ash. Others are designed to burn off the accumulated particulate either passively through the use of a catalyst or by active means such as a fuel burner which heats the filter to soot combustion temperatures. This is accomplished by engine programming to run (when the filter is full) in a manner that elevates exhaust temperature, in conjunction with an extra fuel injector in the exhaust stream that injects fuel to react with a catalyst element to burn off accumulated soot in the DPF filter, or through other methods. This is known as . Cleaning is also required as part of periodic maintenance, and it must be done carefully to avoid damaging the filter. Failure of fuel injectors or turbochargers resulting in contamination of the filter with raw diesel or engine oil can also necessitate cleaning. The regeneration process occurs at road speeds higher than can generally be attained on city streets; vehicles driven exclusively at low speeds in urban traffic can require periodic trips at higher speeds to clean out the DPF. If the driver ignores the warning light and waits too long to operate the vehicle above , the DPF may not regenerate properly, and continued operation past that point may spoil the DPF completely so it must be replaced. Some newer diesel engines, namely those installed in combination vehicles, can also perform what is called a Parked Regeneration, where the engine increases RPM to around 1400 while parked, to increase the temperature of the exhaust.
Diesel engines produce a variety of particles during the combustion of the fuel/air mix due to incomplete combustion. The composition of the particles varies widely dependent upon engine type, age, and the emissions specification that the engine was designed to meet. Two-stroke diesel engines produce more particulate per unit of power than do four-stroke diesel engines, as they burn the fuel-air mix less completely.
Diesel particulate matter resulting from the incomplete combustion of diesel fuel produces soot (black carbon) particles. These particles include tiny nanoparticles—smaller than one micrometre (one micron). Soot and other particles from diesel engines worsen the particulate matter pollution in the air and are harmful to health.
New particulate filters can capture from 30% to greater than 95% of the harmful soot. With an optimal diesel particulate filter (DPF), soot emissions may be decreased to or less.
The quality of the fuel also influences the formation of these particles. For example, a high sulphur content diesel produces more particles. Lower sulphur fuel produces fewer particles, and allows use of particulate filters. The injection pressure of diesel also influences the formation of fine particles.
History
Diesel particulate filtering was first considered in the 1970s due to concerns regarding the impacts of inhaled particulates. Particulate filters have been in use on non-road machines since 1980, and in automobiles since 1985. Historically medium and heavy duty diesel engine emissions were not regulated until 1987 when the first California Heavy Truck rule was introduced capping particulate emissions at 0.60 g/BHP Hour. Since then, progressively tighter standards have been introduced for light- and heavy-duty roadgoing diesel-powered vehicles and for off-road diesel engines. Similar regulations have also been adopted by the European Union and some individual European countries, most Asian countries, and the rest of North and South America.
Whilst few jurisdictions have explicitly made filters mandatory, the increasingly stringent emissions regulations that engine manufacturers must meet mean that eventually all on-road diesel engines will be fitted with them. In the European Union, filters are expected to be necessary to meet the Euro.VI heavy truck engine emissions regulations currently under discussion and planned for the 2012-2013 time frame. In 2000, in anticipation of the future Euro 5 regulations PSA Peugeot Citroën became the first company to make filters standard on passenger cars.
As of December 2008, the California Air Resources Board (CARB) established the 2008 California Statewide Truck and Bus Rule which—with variance according to vehicle type, size and usage—requires that on-road diesel heavy trucks and buses in California be retrofitted, repowered, or replaced to reduce particulate matter (PM) emissions by at least 85%. Retrofitting the engines with CARB-approved diesel particulate filters is one way to fulfill this requirement. In 2009 the American Recovery and Reinvestment Act provided funding to assist owners in offsetting the cost of diesel retrofits for their vehicles. Other jurisdictions have also launched retrofit programs, including:
2001 – Hong Kong retrofit program.
2002 – In Japan the Prefecture of Tokyo passed a law banning trucks without filters from entering the city limits.
2003 – Mexico City started a program to retrofit trucks.
2004 – New York City retrofit program (non-road).
2008 – Milan Ecopass area traffic charge – a hefty entrance tax on all diesel vehicles except those with a particulate filter, either stock or retrofit.
2008 – London low emission zone charges vehicles that do not meet emission standards, encouraging retrofit filters.
Inadequately maintained particulate filters on vehicles with diesel engines are prone to soot buildup, which can cause engine problems due to high back pressure.
In 2018, the UK made changes to its MOT test requirements, including tougher scrutiny of diesel cars. One requirement was to have a properly fitted and working DPF. Driving without a DPF could incur a £1000 fine.
Variants of DPFs
Unlike a catalytic converter which is a flow-through device, a DPF retains bigger exhaust gas particles by forcing the gas to flow through the filter material before exiting; however, the DPF does not retain small particles. Maintenance-free DPFs oxidise or burn larger particles until they are small enough to pass through the filter, though often particles "clump" together in the DPF reducing the overall particle count as well as overall mass. There are a variety of diesel particulate filter technologies on the market. Each is designed around similar requirements:
Fine filtration
Minimum pressure drop
Low cost
Mass production suitability
Product durability
Cordierite wall flow filters
The most common filter is made of cordierite (a ceramic material that is also used as catalytic converter supports (cores)). Cordierite filters provide excellent filtration efficiency, are relatively inexpensive, and have thermal properties that make packaging them for installation in the vehicle simple. The major drawback is that cordierite has a relatively low melting point (about 1200 °C) and cordierite substrates have been known to melt during filter regeneration. This is mostly an issue if the filter has become loaded more heavily than usual, and is more of an issue with passive systems than with active systems, unless there is a system breakdown.
Cordierite filter cores look like catalytic converter cores that have had alternate channels plugged – the plugs force the exhaust gas flow through the wall and the particulate collects on the inlet face.
Silicon carbide wall flow filters
The second most popular filter material is silicon carbide, or SiC. It has a higher (2700 °C) melting point than cordierite, however, it is not as stable thermally, making packaging an issue. Small SiC cores are made of single pieces, while larger cores are made in segments, which are separated by a special cement so that heat expansion of the core will be taken up by the cement, and not the package. SiC cores are usually more expensive than cordierite cores, however they are manufactured in similar sizes, and one can often be used to replace the other. Silicon carbide filter cores also look like catalytic converter cores that have had alternate channels plugged – again the plugs force the exhaust gas flow through the wall and the particulate collects on the inlet face.
The characteristics of the wall flow diesel particulate filter substrate are:
broad band filtration (the diameters of the filtered particles are 0.2–150 μm)
high filtration efficiency (can be up to 95%)
high refractory
high mechanical properties
high boiling point.
Ceramic fiber filters
Fibrous ceramic filters are made from several different types of ceramic fibers that are mixed together to form a porous medium. This medium can be formed into almost any shape and can be customized to suit various applications. The porosity can be controlled in order to produce high flow, lower efficiency or high efficiency lower volume filtration. Fibrous filters have an advantage over wall flow design of producing lower back pressure. Fibrous ceramic filters remove carbon particulates almost completely, including fine particulates less than 100 nanometres (nm) diameter with an efficiency of greater than 95% in mass and greater than 99% in number of particles over a wide range of engine operating conditions. Since the continuous flow of soot into the filter would eventually block it, it is necessary to 'regenerate' the filtration properties of the filter by burning off the collected particulate on a regular basis. Soot particulate burn-off forms water and CO2 in small quantities amounting to less than 0.05% of the CO2 emitted by the engine.
Metal fiber flow-through filters
Some cores are made from metal fibers – generally the fibers are "woven" into a monolith. Such cores have the advantage that an electrical current can be passed through the monolith to heat the core for regeneration purposes, allowing the filter to regenerate at low exhaust temperatures and/or low exhaust flow rates. Metal fiber cores tend to be more expensive than cordierite or silicon carbide cores, and are generally not interchangeable with them because of the electrical requirement.
Paper
Disposable paper cores are used in certain specialty applications, without a regeneration strategy. Coal mines are common users – the exhaust gas is usually first passed through a water trap to cool it, and then through the filter. Paper filters are also used when a diesel machine must be used indoors for short periods of time, such as on a forklift being used to install equipment inside a store.
Partial filters
There are a variety of devices that produce over 50% particulate matter filtration, but less than 85%. Partial filters come in a variety of materials. The only commonality between them is that they produce more back pressure than a catalytic converter, and less than a diesel particulate filter. Partial filter technology is popular for retrofit.
Maintenance
Filters require more maintenance than catalytic converters. Soot, a byproduct of oil consumption from normal engine operation, builds up in the filter as it cannot be converted into a gas and pass through the walls of the filter. This increases the pressure before the filter.
DPF filters go through a regeneration process which removes this soot and lowers the filter pressure. There are three types of regeneration: passive, active, and forced. Passive regeneration takes place normally while driving, when engine load and vehicle drive-cycle create temperatures that are high enough to regenerate the soot buildup on the DPF walls. Active regeneration happens while the vehicle is in use, when low engine load and lower exhaust gas temperatures inhibit the naturally occurring passive regeneration. Sensors upstream and downstream of the DPF (or a differential pressure sensor) provide readings that initiate a metered addition of fuel into the exhaust stream. There are two methods to inject fuel, either downstream injection directly into the exhaust stream, downstream of the turbo, or fuel injection into the engine cylinders on the exhaust stroke. This fuel and exhaust gas mixture passes through the Diesel Oxidation Catalyst (DOC) creating temperatures high enough to burn off the accumulated soot. Once the pressure drop across the DPF lowers to a calculated value, the process ends, until the soot accumulation builds up again. This works well for vehicles that drive longer distances with few stops compared to those that perform short trips with many starts and stops. If the filter develops too much pressure then the last type of regeneration must be used – a forced regeneration. This can be accomplished in two ways. The vehicle operator can initiate the regeneration via a dashboard mounted switch. Various signal interlocks, such as park brake applied, transmission in neutral, engine coolant temperature, and an absence of engine related fault codes are required (vary by OEM and application) for this process to initiate. When the soot accumulation reaches a level that is potentially damaging to the engine or the exhaust system, the solution involves a garage using a computer program to run a regeneration of the DPF manually.
When a regeneration occurs, the soot is turned to gases and ash, some of which remains in the filter. This will increase restriction through the filter and can result in a blockage. Warnings are given to the driver before filter restriction causes an issue with driveability or damage to the engine or filter develops. Regular filter maintenance is a necessity to remove ash build-up, either through cleaning or replacement of the filter.
Regeneration typically requires the vehicle to be driven continuously at 50-60mph (80-100km/h) for 30 to 45 minutes every few hundred miles/kilometers of city driving. Heavy duty pickup trucks have less stringent requirements for all three parameters, and Class 8 trucks significantly less. If the vehicle is often driven in cities the DPF may become clogged, causing a reduction in power and acceleration either passively due to increased exhaust pressure or actively due to vehicle going into "limp/turtle mode" as it tries to prevent engine and turbo damage. Once clogged both passive and active regeneration may become ineffective. DPF may be unclogged by high temperature pressure washing (not officially recommended) and/or burn-off oven.
Safety
In 2011, Ford recalled 37,400 F-Series trucks with diesel engines after fuel and oil leaks caused fires in the diesel particulate filters of the trucks. No injuries occurred before the recall, though one grass fire was started. A similar recall was issued for 2005-2007 Jaguar S-Type and XJ diesels, where large amounts of soot became trapped in the DPF. In affected vehicles, smoke and fire emanated from the vehicle underside, accompanied by flames from the rear of the exhaust. The heat from the fire could cause heating through the transmission tunnel to the interior, melting interior components and potentially causing interior fires.
Regeneration
Regeneration is the process of burning off (oxidizing) the accumulated soot from the filter. This is done either passively (from the engine's exhaust heat in normal operation or by adding a catalyst to the filter) or actively introducing very high heat into the exhaust system. On-board active filter management can use a variety of strategies:
Engine management to increase exhaust temperature through late fuel injection or injection during the exhaust stroke
Use of a fuel-borne catalyst to reduce soot burn-out temperature
A fuel burner after the turbo to increase the exhaust temperature
A catalytic oxidizer to increase the exhaust temperature, with after injection (HC-Doser)
Resistive heating coils to increase the exhaust temperature
Microwave energy to increase the particulate temperature
All on-board active systems use extra fuel, whether through burning to heat the DPF, or providing extra power to the DPF's electrical system, although the use of a fuel borne catalyst reduces the energy required very significantly. Typically a computer monitors one or more sensors that measure back pressure and/or temperature, and based on pre-programmed set points the computer makes decisions on when to activate the regeneration cycle. The additional fuel can be supplied by a metering pump. Running the cycle too often while keeping the back pressure in the exhaust system low will result in high fuel consumption. Not running the regeneration cycle soon enough increases the risk of engine damage and/or uncontrolled regeneration (thermal runaway) and possible DPF failure.
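As a deliberately simplified illustration of that kind of set-point logic, the sketch below decides when to request or end active regeneration from a filter differential-pressure reading and an exhaust temperature reading. The thresholds, hysteresis values and variable names are invented for illustration and bear no relation to any production calibration.

```python
# Simplified active-regeneration decision logic. All thresholds are invented
# for illustration; production ECU calibrations use many more inputs.

SOOT_START_KPA = 12.0      # differential pressure at which to request regeneration
SOOT_STOP_KPA = 3.0        # differential pressure at which to end it (hysteresis)
MIN_EXHAUST_C = 250.0      # below this the oxidation catalyst cannot light off injected fuel
MAX_EXHAUST_C = 700.0      # above this, risk of uncontrolled burn; back off

def regeneration_command(dp_kpa, exhaust_c, regenerating):
    """Return (regenerate, reason) from filter differential pressure and temperature."""
    if regenerating:
        if dp_kpa <= SOOT_STOP_KPA:
            return False, "soot load cleared"
        if exhaust_c > MAX_EXHAUST_C:
            return False, "exhaust too hot, pausing to avoid runaway"
        return True, "continue regeneration"
    if dp_kpa >= SOOT_START_KPA and exhaust_c >= MIN_EXHAUST_C:
        return True, "filter loaded and catalyst warm, start post-injection"
    return False, "no action"

print(regeneration_command(13.5, 320.0, regenerating=False))
print(regeneration_command(6.0, 480.0, regenerating=True))
```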
Diesel particulate matter burns when temperatures above 600 °C are attained. This temperature can be reduced to somewhere in the range of 350 to 450 °C by use of a fuel-borne catalyst. The actual temperature of soot burn-out will depend on the chemistry employed. In the mid-2010s, scientists at 3M developed a magnesium doped version of traditional iron based catalysts which lowered the temperature required for particulate matter oxidation to just over 200 °C. The lower reaction temperature is made possible by the dopant allowing the Fe lattice to hold more oxygen. This advancement is significant because it allows the cleaning reaction to take place at the standard operating temperature of most diesel engines, removing the requirement for burning extra fuel or otherwise artificially heating the engine. The family of Mg doped catalysts, named Grindstaff catalysts after the chemist who started the work, has been the subject of much investigation across industry and academia with the tightening of emissions regulations on particulate matter world wide.
In some cases, in the absence of a fuel-borne catalyst, the combustion of the particulate matter can raise temperatures so high, that they are above the structural integrity threshold of the filter material, which can cause catastrophic failure of the substrate. Various strategies have been developed to limit this possibility. Note that unlike a spark-ignited engine, which typically has less than 0.5% oxygen in the exhaust gas stream before the emission control device(s), diesel engines have a very high ratio of oxygen available. While the amount of available oxygen makes fast regeneration of a filter possible, it also contributes to runaway regeneration problems.
Some applications use off-board regeneration. Off-board regeneration requires operator intervention (i.e. the machine is either plugged into a wall/floor mounted regeneration station, or the filter is removed from the machine and placed in the regeneration station). Off-board regeneration is not suitable for on-road vehicles, except in situations where the vehicles are parked in a central depot when not in use. Off-board regeneration is mainly used in industrial and mining applications. Coal mines (with the attendant explosion risk from coal damp) use off-board regeneration if non-disposable filters are installed, with the regeneration stations sited in an area where non-permissible machinery is allowed.
Many forklifts may also use off-board regeneration – typically mining machinery and other machinery that spend their operational lives in one location, which makes having a stationary regeneration station practical. In situations where the filter is physically removed from the machine for regeneration there is also the advantage of being able to inspect the filter core on a daily basis (DPF cores for non-road applications are typically sized to be usable for one shift – so regeneration is a daily occurrence).
Removal or tampering
Intentionally removing or tampering with a DPF device, known variously as "deleting", "defeating" or "tuning", is prohibited by the EPA. Several manufacturers and retailers of diesel emissions defeat devices have been fined up to $1 million.
See also
Air pollution
Selective catalytic reduction
Smog
Ultra-low-sulfur diesel
References
External links
Automotive accessories
Diesel engine technology
Air filters
Particulate control
Air pollution control systems | Diesel particulate filter | Chemistry | 3,937 |
33,963,415 | https://en.wikipedia.org/wiki/Maillet%27s%20determinant | In mathematics, Maillet's determinant Dp is the determinant of the matrix introduced by whose entries are R(s/r) for s,r = 1, 2, ..., (p – 1)/2 ∈ Z/pZ for an odd prime p, where and R(a) is the least positive residue of a mod p . calculated the determinant Dp for p = 3, 5, 7, 11, 13 and found that in these cases it is given by (–p)(p – 3)/2, and conjectured that it is given by this formula in general. showed that this conjecture is incorrect; the determinant in general is given by Dp = (–p)(p – 3)/2h−, where h− is the first factor of the class number of the cyclotomic field generated by pth roots of 1, which happens to be 1 for p less than 23. In particular, this verifies Maillet's conjecture that the determinant is always non-zero. Chowla and Weil had previously found the same formula but did not publish it.
Their results have been extended to all non-prime odd numbers by K. Wang (1982).
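The small cases are easy to check by computer. The sketch below (plain Python; pow(r, -1, p) for the modular inverse requires Python 3.8 or later) builds the matrix of least positive residues R(s/r) described above and reproduces the computations for p = 3, 5, 7, 11, 13:

```python
def maillet_matrix(p):
    """Matrix with entries R(s/r), the least positive residue of s/r mod p,
    for s, r = 1 .. (p-1)/2 (rows indexed by s, columns by r)."""
    half = (p - 1) // 2
    return [[(s * pow(r, -1, p)) % p for r in range(1, half + 1)]
            for s in range(1, half + 1)]

def det(m):
    """Integer determinant by cofactor expansion; fine for these small matrices."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

# Check D_p against (-p)**((p-3)//2) for the primes computed by hand historically.
for p in (3, 5, 7, 11, 13):
    d = det(maillet_matrix(p))
    print(p, d, d == (-p) ** ((p - 3) // 2))
```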
References
Algebraic number theory
Determinants | Maillet's determinant | Mathematics | 271 |
11,459,836 | https://en.wikipedia.org/wiki/Social%20data%20analysis | Social data analysis is the data-driven analysis of how people interact in social contexts, often with data obtained from social networking services. The goal may be to simply understand human behavior or even to propagate a story of interest to the target audience. Techniques may involve understanding how data flows within a network, identifying influential nodes (people, entities etc.), or discovering trending topics.
Social data analysis usually comprises two key steps: 1) gathering data generated from social networking sites (or through social applications), and 2) analysis of that data, which in many cases requires real-time (or near real-time) processing, measurements that understand and appropriately weigh factors such as influence, reach, and relevancy, an understanding of the context of the data being analyzed, and the inclusion of time-horizon considerations. In short, social data analytics involves the analysis of social media in order to understand and surface the insights embedded within the data.
Social data analysis can provide a new slant on business intelligence where social exploration of data can lead to important insights that the user of analytics did not envisage/explore. The term was introduced by Martin Wattenberg in 2005 and recently also addressed as big social data analysis in relation to big data computing.
Systems are available to assist users in analyzing social data. They allow users to store data sets and create corresponding visual representations. The discussion mechanisms often use frameworks such as blogs and wikis to drive this social exploration and collaborative intelligence.
Obtaining social data
Social networking services are increasingly popular with the development of Web 2.0. Many of these services provide APIs that allow easy access to their data by responding to user queries with the requested data in the form of XML or JSON formatted strings. In order to protect privacy of their users, services such as Facebook require that the person requesting data has the necessary data access permissions. Services may also charge users for access to their data. Sources of social data include Twitter, Facebook, news websites, Wikipedia and We Feel Fine.
Some APIs only allow access to data in small quantities, hence indexing the data in bulk can become a challenge. Six Apart was the first social media company to provide a (free) firehose of content for all the posts in their network (provided over XMPP). Twitter later came along and provided a firehose, as did companies like Spinn3r, Datasift, and GNIP.
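As a concrete illustration of the data-gathering step, the sketch below requests JSON from a REST-style endpoint and keeps a few fields for later analysis. The URL, access token, query parameters and field names are hypothetical placeholders, not the API of any particular service.

# Hypothetical example of pulling JSON from a social-media REST API.
# The endpoint, token, parameters and field names are illustrative only.
import requests

API_URL = "https://api.example-social.com/v1/posts/search"
TOKEN = "YOUR_ACCESS_TOKEN"  # most services require OAuth or an API key

resp = requests.get(
    API_URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"query": "climate", "limit": 100},
    timeout=10,
)
resp.raise_for_status()

posts = resp.json().get("data", [])
for post in posts:
    # Keep only the fields needed for downstream analysis.
    print(post.get("created_at"), post.get("author_id"), post.get("text"))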
Methods of analysis
In most cases, we want to find out the relationships between social data and another event, or we want to obtain results from social data analyses that can predict some event. Notable work in this field includes Twitter Mood Predicts The Stock Market and Predicting The Present With Google Trends. To accomplish these goals, appropriate methods are needed; usually, these are statistical methods, machine learning methods, or data mining methods.
Universities all over the world are opening graduate program in Social Data Analysis.
Key concepts
When talking about social data analytics, there are a number of factors it's important to keep in mind (which we noted earlier):
Sophisticated Data Analysis: what distinguishes social data analytics from sentiment analysis is the depth of the analysis. Social data analysis takes into consideration a number of factors (context, content, sentiment) to provide additional insight.
Time consideration: windows of opportunity are significantly limited in the field of social networking. What's relevant one day (or even one hour) may not be the next. Being able to quickly execute and analyze the data is an imperative.
Influence Analysis: understanding the potential impact of specific individuals can be key in understanding how messages might be resonating. It's not just about quantity, it's also very much about quality.
Network Analysis: social data is also interesting in that it migrates, grows (or dies) based on how the data is propagated throughout the network. It's how viral activity starts—and spreads.
See also
Data Analysis
Big Data
Business intelligence
Collaborative intelligence
Social analytics
IBM jStart
Social data revolution
Economic and Social Data Service
References
Data and information visualization
Collective intelligence
Social information processing
Internet terminology | Social data analysis | Technology | 850 |
22,871,978 | https://en.wikipedia.org/wiki/David%20Rasbash | David Rasbash was a pioneer in the field of Fire Safety Engineering.
Rasbash was a chemical engineer who graduated from Imperial College, London, during World War II. He began publishing and teaching about the evaluation of fire safety in the 1970s. In his early career, he conducted research on techniques for fire extinction to assist firefighters. He also was interested in the production of smoke and its effect on visibility. He was an early proponent of the standardization of automatic fire detection and later became involved in the evaluation of fire safety and the quantification of risk. His contributions to these subjects have become standard references.
After working for the Fire Research Station (UK), Rasbash was appointed at the University of Edinburgh as the first ever Professor of Fire Safety Engineering.
Every year, during The Rasbash Lecture, recipients of the Rasbash Award are chosen based on their eminence in fire safety engineering education, research and practice, worldwide. The award is given by The Institution of Fire Engineers.
References
Fire prevention
Fire protection
Academics of the University of Edinburgh | David Rasbash | Engineering | 214 |
24,186,720 | https://en.wikipedia.org/wiki/LCF%20notation | In the mathematical field of graph theory, LCF notation or LCF code is a notation devised by Joshua Lederberg, and extended by H. S. M. Coxeter and Robert Frucht, for the representation of cubic graphs that contain a Hamiltonian cycle. The cycle itself includes two out of the three adjacencies for each vertex, and the LCF notation specifies how far along the cycle each vertex's third neighbor is. A single graph may have multiple different representations in LCF notation.
Description
In a Hamiltonian graph, the vertices can be arranged in a cycle, which accounts for two edges per vertex. The third edge from each vertex can then be described by how many positions clockwise (positive) or counter-clockwise (negative) it leads. The basic form of the LCF notation is just the sequence of these numbers of positions, starting from an arbitrarily chosen vertex and written in square brackets.
The numbers between the brackets are interpreted modulo N, where N is the number of vertices. Entries congruent modulo N to 0, 1, or N − 1 do not appear in this sequence of numbers, because they would correspond either to a loop or multiple adjacency, neither of which are permitted in simple graphs.
Often the pattern repeats, and the number of repetitions can be indicated by a superscript in the notation. For example, the Nauru graph, shown on the right, has four repetitions of the same six offsets, and can be represented by the LCF notation [5, −9, 7, −7, 9, −5]4. A single graph may have multiple different LCF notations, depending on the choices of Hamiltonian cycle and starting vertex.
Applications
LCF notation is useful in publishing concise descriptions of Hamiltonian cubic graphs, such as the examples below. In addition, some software packages for manipulating graphs include utilities for creating a graph from its LCF notation.
If a graph is represented by LCF notation, it is straightforward to test whether the graph is bipartite: this is true if and only if all of the offsets in the LCF notation are odd.
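A minimal sketch of both tasks is given below: it expands an LCF description (offset list plus repetition count) into an edge list, and applies the parity test for bipartiteness. The function names are illustrative and no particular graph library is assumed.

# Build the edge list of a cubic Hamiltonian graph from LCF notation
# and test bipartiteness via the parity of the offsets.
def lcf_edges(offsets, repeats):
    n = len(offsets) * repeats          # number of vertices
    seq = offsets * repeats             # one offset per vertex
    edges = set()
    for v in range(n):
        edges.add(tuple(sorted((v, (v + 1) % n))))   # Hamiltonian cycle edge
        w = (v + seq[v]) % n                         # chord given by the offset
        edges.add(tuple(sorted((v, w))))
    return sorted(edges)

def lcf_is_bipartite(offsets):
    return all(k % 2 == 1 for k in offsets)

# Nauru graph: [5, -9, 7, -7, 9, -5]^4 -> 24 vertices, 36 edges.
nauru = [5, -9, 7, -7, 9, -5]
print(len(lcf_edges(nauru, 4)), lcf_is_bipartite(nauru))  # 36 True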
Examples
Extended LCF notation
A more complex extended version of LCF notation was provided by Coxeter, Frucht, and Powers in later work. In particular, they introduced an "anti-palindromic" notation: if the second half of the numbers between the square brackets was the reverse of the first half, but with all the signs changed, then it was replaced by a semicolon and a dash. The Nauru graph satisfies this condition with [5, −9, 7, −7, 9, −5]4, and so can be written [5, −9, 7; −]4 in the extended notation.
References
External links
"Cubic Hamiltonian Graphs from LCF Notation" – JavaScript interactive application, built with D3js library
Graph description languages
Hamiltonian paths and cycles | LCF notation | Mathematics | 610 |
28,846,677 | https://en.wikipedia.org/wiki/Geastrum%20leptospermum | Geastrum leptospermum is a species of fungus in the family Geastraceae. It was first described scientifically by American mycologist George F. Atkinson in 1903. The fungus produces small fruit bodies and grows in mosses on tree trunks.
Description
The inner peridium, or spore sac, is nearly spherical, thick, pale gray to pale tan in color, and dusted with fine whitish particles that also cover the inner surface of the freshly opened rays. The mouth on the apex of the peridium is very small, minutely fibrous, and surrounded by a conical, white disc which is distinctly outlined and radially fibrous (if not deeply grooved). As the fruit body matures, the fibrous layer of the outer peridium splits, star-like, into three to six rays about halfway down or more. The fungus is fornicate, meaning that the rays curve downward so that the base of the fruit body becomes arched up, which elevates the spore sac.
The delicate, fibrous mycelial layer is a distinct, membranous, white cup on the underside, with its margin also rayed by slits, the rays attached to the rays of the plant above. When freshly opened, the inner surface of the rays is covered with a fleshy, pale buff, layer of tissue. This layer, when dry, forms the thin, tan or light brown, smooth and nearly complete membrane over the fibrous layer. The underside of the rays is white and smooth. There is no obvious columella (sterile tissue in the base of the gleba that extends into the gleba).
The spores are spherical and measure 2–3 μm. Under high-power microscopy they appear as having a surface that is roughened by many small points or warts. The capillitium (coarse, thick-walled cells in the gleba) threads are unbranched, and measure about 3 μm thick. Both the spores and the capillitium are whitish to very pale yellow-brown.
References
External links
leptospermum
Inedible fungi
Fungi described in 1903
Fungi of North America
Fungus species | Geastrum leptospermum | Biology | 453 |
60,066,867 | https://en.wikipedia.org/wiki/6-Amino-5-nitropyridin-2-one | 6-Amino-5-nitropyridin-2-one or 6-amino-5-nitro-2(1H)-pyridinone is a pyridine base. It is used as a nucleobase of hachimoji DNA, in which it pairs with 5-aza-7-deazaguanine.
References
Nucleobases
2-Pyridones
Nitro compounds
Amines | 6-Amino-5-nitropyridin-2-one | Chemistry | 96 |
75,977,715 | https://en.wikipedia.org/wiki/Observational%20Health%20Data%20Sciences%20and%20Informatics | The Observational Health Data Sciences and Informatics, or OHDSI (pronounced "Odyssey") is an international collaborative effort aimed at improving health outcomes through large-scale analytics of health data. The OHDSI effort includes diverse researchers and health databases worldwide, with its central coordinating center located at Columbia University.
The group was derived from the Observational Medical Outcomes Partnership (OMOP), a public-private consortium based in the United States of America, created with the goal of improving the state of observational health data for better drug development, which started in response to the U.S. Food and Drug Administration (FDA) Amendments Act of 2007. OMOP developed a Common Data Model (CDM), standardizing the way observational data is represented. After OMOP ended, this standard started being maintained and updated by OHDSI.
As of February 2024, the most recent CDM is at version 6.0, while version 5.4 is the stable version used by most tools in the OMOP ecosystem.
See also
Health informatics
Open science
Big data
Legacy OMOP methods
References
External links
OHDSI official website
Data science
Health informatics
Legacy OMOP methods | Observational Health Data Sciences and Informatics | Biology | 239 |
5,074,906 | https://en.wikipedia.org/wiki/Interface%20control%20document | An interface control document (ICD) in systems engineering and software engineering provides a record of all interface information (such as drawings, diagrams, tables, and textual information) generated for a project. The underlying interface documents provide the details and describe the interface or interfaces between subsystems or to a system or subsystem.
Overview
An ICD is the umbrella document over the system interfaces; examples of what these interface specifications should describe include:
The inputs and outputs of a single system, documented in individual SIRS (Software Interface Requirements Specifications) and HIRS (Hardware Interface Requirements Specifications) documents, would fall under "The Wikipedia Interface Control Document".
The interface between two systems or subsystems, e.g. "The Doghouse to Outhouse Interface" would also have a parent ICD.
The complete interface protocol from the lowest physical elements (e.g., the mating plugs, the electrical signal voltage levels) to the highest logical levels (e.g., the level 7 application layer of the OSI model) would each be documented in the appropriate interface requirements spec and fall under a single ICD for the "system".
The purpose of the ICD is to control and maintain a record of system interface information for a given project. This includes all possible inputs to and all potential outputs from a system for some potential or actual user of the system. The internal interfaces of a system or subsystem are documented in their respective interface requirements specifications, while human-machine interfaces might be in a system design document (such as a software design document).
Interface control documents are a key element of systems engineering as they control the documented interface(s) of a system, as well as specify a set of interface versions that work together, and thereby bound the requirements.
Characteristics
An application programming interface is a form of interface for a software system, in that it describes how to access the functions and services provided by a system via an interface. If a system producer wants others to be able to use the system, an ICD and interface specs (or their equivalent) is a worthwhile investment.
An ICD should only describe the detailed interface documentation itself, and not the characteristics of the systems which use it to connect. The function and logic of those systems should be described in their own requirements and design documents as needed. In this way, independent teams can develop the connecting systems which use the interface specified, without regard to how other systems will react to data and signals which are sent over the interface. For example, the ICD and associated interface documentation must include information about the size, format, and what is measured by the data, but not any ultimate meaning of the data in its intended use by any user.
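For example, a data-oriented interface spec typically pins down byte order, field order, widths and units without saying what the receiver should do with the values. The sketch below expresses one hypothetical message layout with Python's struct module; the message name, fields, widths and units are invented purely for illustration.

# Hypothetical "status message" layout, ICD-style: byte order, field
# order, width and scaling are specified; the meaning of the values to
# the receiving system is deliberately out of scope.
import struct

# Big-endian: message id (uint16), timestamp in ms (uint32),
# temperature in 0.01 degC (int16), flags (uint8) -- 9 bytes total.
STATUS_MSG = struct.Struct(">HIhB")

def pack_status(msg_id, timestamp_ms, temp_centi_degc, flags):
    return STATUS_MSG.pack(msg_id, timestamp_ms, temp_centi_degc, flags)

def unpack_status(payload):
    return STATUS_MSG.unpack(payload)

raw = pack_status(0x0102, 123456, 2175, 0b0001)   # 21.75 degC
print(STATUS_MSG.size, unpack_status(raw))        # 9 (258, 123456, 2175, 1)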
An adequately defined interface will allow one team to test its implementation of the interface by simulating the opposing side with a simple communications simulator. Not knowing the business logic of the system on the far side of an interface makes it more likely that one will develop a system that does not break when the other system changes its business rules and logic. (Provision for limits or sanity checking should be pointedly avoided in an interface requirements specification.) Thus, good modularity and abstraction leading to easy maintenance and extensibility are achieved.
References
Application programming interfaces
Systems engineering | Interface control document | Engineering | 661 |
5,630,722 | https://en.wikipedia.org/wiki/Lake%20Bastrop | Lake Bastrop is a reservoir on Spicer Creek in the Colorado River basin northeast of the town of Bastrop in central Bastrop County, Texas, United States.
Description
The reservoir was formed in 1964 by the construction of a dam by the Lower Colorado River Authority. The lake serves primarily as a power plant cooling pond for the Sim Gideon Power Plant operated by the LCRA and the Lost Pines Power Project 1, owned by GenTex Power Corporation, a wholly owned affiliate of the LCRA. Lake Bastrop also serves as a venue for outdoor recreation, including fishing, boating, swimming, camping and picnicking, and is maintained at a constant level year round.
Approximately one quarter of the shoreline of the Lake is privately owned by the Capitol Area Council, Boy Scouts of America. This property is used for the Lost Pines Scout Reservation, consisting of Cub World at Camp Tom Wooten, for Cub Scouts and Lost Pines Boy Scout Camp, for Boy Scouts. The Scouts leased the property from the LCRA starting in 1965, buying the land in the late 1990s.
Fish populations
Lake Bastrop has been stocked with species of fish intended to improve the utility of the reservoir for recreational fishing. Fish present in Lake Bastrop include catfish, crappie, perch, sunfish, carp, and largemouth bass.
See also
List of dams and reservoirs in Texas
References
Bastrop
Protected areas of Bastrop County, Texas
Bodies of water of Bastrop County, Texas
Cooling ponds | Lake Bastrop | Chemistry,Environmental_science | 302 |
620,083 | https://en.wikipedia.org/wiki/Sensitivity%20analysis | Sensitivity analysis is the study of how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be divided and allocated to different sources of uncertainty in its inputs. This involves estimating sensitivity indices that quantify the influence of an input or group of inputs on the output. A related practice is uncertainty analysis, which has a greater focus on uncertainty quantification and propagation of uncertainty; ideally, uncertainty and sensitivity analysis should be run in tandem.
Motivation
A mathematical model (for example in biology, climate change, economics, renewable energy, agronomy...) can be highly complex, and as a result, its relationships between inputs and outputs may be faultily understood. In such cases, the model can be viewed as a black box, i.e. the output is an "opaque" function of its inputs. Quite often, some or all of the model inputs are subject to sources of uncertainty, including errors of measurement, errors in input data, parameter estimation and approximation procedure, absence of information and poor or partial understanding of the driving forces and mechanisms, choice of underlying hypothesis of model, and so on. This uncertainty limits our confidence in the reliability of the model's response or output. Further, models may have to cope with the natural intrinsic variability of the system (aleatory), such as the occurrence of stochastic events.
In models involving many input variables, sensitivity analysis is an essential ingredient of model building and quality assurance and can be useful to determine the impact of an uncertain variable for a range of purposes, including:
Testing the robustness of the results of a model or system in the presence of uncertainty.
Increased understanding of the relationships between input and output variables in a system or model.
Uncertainty reduction, through the identification of model inputs that cause significant uncertainty in the output and should therefore be the focus of attention in order to increase robustness.
Searching for errors in the model (by encountering unexpected relationships between inputs and outputs).
Model simplification – fixing model input that has no effect on the output, or identifying and removing redundant parts of the model structure.
Enhancing communication from modelers to decision makers (e.g. by making recommendations more credible, understandable, compelling or persuasive).
Finding regions in the space of input factors for which the model output is either maximum or minimum or meets some optimum criterion (see optimization and Monte Carlo filtering).
For calibration of models with large number of parameters, by focusing on the sensitive parameters.
To identify important connections between observations, model inputs, and predictions or forecasts, leading to the development of better models.
Mathematical formulation and vocabulary
The object of study for sensitivity analysis is a function f (called "mathematical model" or "programming code"), viewed as a black box, with the d-dimensional input vector X = (X_1, ..., X_d) and the output Y, presented as follows:

Y = f(X) = f(X_1, X_2, ..., X_d).

The variability in the input parameters has an impact on the output Y. While uncertainty analysis aims to describe the distribution of the output Y (providing its statistics, moments, pdf, cdf, ...), sensitivity analysis aims to measure and quantify the impact of each input or a group of inputs on the variability of the output (by calculating the corresponding sensitivity indices). Figure 1 provides a schematic representation of this statement.
Challenges, settings and related issues
Taking into account uncertainty arising from different sources, whether in the context of uncertainty analysis or sensitivity analysis (for calculating sensitivity indices), requires multiple samples of the uncertain parameters and, consequently, running the model (evaluating the f-function) multiple times. Depending on the complexity of the model there are many challenges that may be encountered during model evaluation. Therefore, the choice of method of sensitivity analysis is typically dictated by a number of problem constraints, settings or challenges. Some of the most common are:
Computational expense: Sensitivity analysis is almost always performed by running the model a (possibly large) number of times, i.e. a sampling-based approach. This can be a significant problem when:
Time-consuming models are very often encountered when complex models are involved. A single run of the model takes a significant amount of time (minutes, hours or longer). The use of a statistical model (meta-model, data-driven model), including HDMR, to approximate the f-function is one way of reducing the computation costs.
The model has a large number of uncertain inputs. Sensitivity analysis is essentially the exploration of the multidimensional input space, which grows exponentially in size with the number of inputs. Therefore, screening methods can be useful for dimension reduction. Another way to tackle the curse of dimensionality is to use sampling based on low discrepancy sequences.
Correlated inputs: Most common sensitivity analysis methods assume independence between model inputs, but sometimes inputs can be strongly correlated. Correlations between inputs must then be taken into account in the analysis.
Nonlinearity: Some sensitivity analysis approaches, such as those based on linear regression, can inaccurately measure sensitivity when the model response is nonlinear with respect to its inputs. In such cases, variance-based measures are more appropriate.
Multiple or functional outputs: Generally introduced for single-output codes, sensitivity analysis extends to cases where the output is a vector or function. Correlation between outputs does not preclude performing a separate sensitivity analysis for each output of interest; however, for models in which the outputs are correlated, the sensitivity measures can be hard to interpret.
Stochastic code: A code is said to be stochastic when, for several evaluations of the code with the same inputs, different outputs are obtained (as opposed to a deterministic code when, for several evaluations of the code with the same inputs, the same output is always obtained). In this case, it is necessary to separate the variability of the output due to the variability of the inputs from that due to stochasticity.
Data-driven approach: Sometimes it is not possible to evaluate the code at all desired points, either because the code is confidential or because the experiment is not reproducible. The code output is only available for a given set of points, and it can be difficult to perform a sensitivity analysis on a limited set of data. We then build a statistical model (meta-model, data-driven model) from the available data (that we use for training) to approximate the code (the f-function).
To address the various constraints and challenges, a number of methods for sensitivity analysis have been proposed in the literature, which we will examine in the next section.
Sensitivity analysis methods
There are a large number of approaches to performing a sensitivity analysis, many of which have been developed to address one or more of the constraints discussed above. They are also distinguished by the type of sensitivity measure, be it based on (for example) variance decompositions, partial derivatives or elementary effects. In general, however, most procedures adhere to the following outline:
Quantify the uncertainty in each input (e.g. ranges, probability distributions). Note that this can be difficult and many methods exist to elicit uncertainty distributions from subjective data.
Identify the model output to be analysed (the target of interest should ideally have a direct relation to the problem tackled by the model).
Run the model a number of times using some design of experiments, dictated by the method of choice and the input uncertainty.
Using the resulting model outputs, calculate the sensitivity measures of interest.
In some cases this procedure will be repeated, for example in high-dimensional problems where the user has to screen out unimportant variables before performing a full sensitivity analysis.
The various types of "core methods" (discussed below) are distinguished by the various sensitivity measures which are calculated. These categories can overlap to some extent. Alternative ways of obtaining these measures, under the constraints of the problem, can be given. In addition, an engineering view of the methods that takes into account the four important sensitivity analysis parameters has also been proposed.
Visual analysis
The first intuitive approach (especially useful in less complex cases) is to analyze the relationship between each input and the output using scatter plots, and observe the behavior of these pairs. The diagrams give an initial idea of the correlation and of which input has an impact on the output. Figure 2 shows an example where two of the inputs are highly correlated with the output.
One-at-a-time (OAT)
One of the simplest and most common approaches is that of changing one-factor-at-a-time (OAT), to see what effect this produces on the output. OAT customarily involves
moving one input variable, keeping others at their baseline (nominal) values, then,
returning the variable to its nominal value, then repeating for each of the other inputs in the same way.
Sensitivity may then be measured by monitoring changes in the output, e.g. by partial derivatives or linear regression. This appears a logical approach as any change observed in the output will unambiguously be due to the single variable changed. Furthermore, by changing one variable at a time, one can keep all other variables fixed to their central or baseline values. This increases the comparability of the results (all 'effects' are computed with reference to the same central point in space) and minimizes the chances of computer program crashes, more likely when several input factors are changed simultaneously.
OAT is frequently preferred by modelers because of practical reasons. In case of model failure under OAT analysis the modeler immediately knows which is the input factor responsible for the failure.
Despite its simplicity however, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables. This means that the OAT approach cannot detect the presence of interactions between input variables and is unsuitable for nonlinear models.
The proportion of input space which remains unexplored with an OAT approach grows superexponentially with the number of inputs. For example, a 3-variable parameter space which is explored one-at-a-time is equivalent to taking points along the x, y, and z axes of a cube centered at the origin. The convex hull bounding all these points is an octahedron which has a volume only 1/6th of the total parameter space. More generally, the convex hull of the axes of a hyperrectangle forms a hyperoctahedron which has a volume fraction of 1/n!, where n is the number of inputs. With 5 inputs, the explored space already drops to less than 1% of the total parameter space. And even this is an overestimate, since the off-axis volume is not actually being sampled at all. Compare this to random sampling of the space, where the convex hull approaches the entire volume as more points are added. While the sparsity of OAT is theoretically not a concern for linear models, true linearity is rare in nature.
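To make the procedure concrete, the sketch below applies OAT to a toy model: each input is perturbed in turn about a baseline point while the others are held fixed, and the effect is summarized by a finite-difference estimate. The model, baseline and step size are illustrative only.

# One-at-a-time (OAT) sensitivity on a toy model: perturb each input
# about the baseline while holding the others fixed.
import numpy as np

def model(x):                       # illustrative test function
    return x[0] + 2 * x[1] ** 2 + 0.5 * x[0] * x[2]

baseline = np.array([1.0, 1.0, 1.0])
delta = 1e-3

for i in range(len(baseline)):
    x = baseline.copy()
    x[i] += delta                   # move one factor, keep the rest at baseline
    effect = (model(x) - model(baseline)) / delta
    print(f"input {i}: local OAT sensitivity ~ {effect:.3f}")
# Note: this only probes the model near the baseline and cannot reveal
# the interaction between x[0] and x[2] elsewhere in the input space.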
Morris
Named after statistician Max D. Morris, this method is suitable for screening systems with many parameters. It is also known as the method of elementary effects because it combines repeated steps along the various parametric axes.
Derivative-based local methods
Local derivative-based methods involve taking the partial derivative of the output Y with respect to an input factor X_i:

∂Y/∂X_i |_(x0),

where the subscript x0 indicates that the derivative is taken at some fixed point in the space of the input (hence the 'local' in the name of the class). Adjoint modelling and automated differentiation are methods which allow all partial derivatives to be computed at a cost at most 4–6 times that of evaluating the original function. Similar to OAT, local methods do not attempt to fully explore the input space, since they examine small perturbations, typically one variable at a time. It is possible to select similar samples from derivative-based sensitivity through neural networks and perform uncertainty quantification.
One advantage of the local methods is that it is possible to make a matrix to represent all the sensitivities in a system, thus providing an overview that cannot be achieved with global methods if there is a large number of input and output variables.
Regression analysis
Regression analysis, in the context of sensitivity analysis, involves fitting a linear regression to the model response and using standardized regression coefficients as direct measures of sensitivity. The regression is required to be linear with respect to the data (i.e. a hyperplane, hence with no quadratic terms, etc., as regressors) because otherwise it is difficult to interpret the standardised coefficients. This method is therefore most suitable when the model response is in fact linear; linearity can be confirmed, for instance, if the coefficient of determination is large. The advantages of regression analysis are that it is simple and has a low computational cost.
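A minimal sketch of this approach is shown below: the model is sampled at random input points, an ordinary least-squares fit is computed, and standardized regression coefficients (SRCs) are obtained by rescaling the coefficients by the input and output standard deviations. The test function and sample size are illustrative.

# Standardized regression coefficients (SRCs) as sensitivity measures,
# assuming the model response is approximately linear in its inputs.
import numpy as np

rng = np.random.default_rng(0)

def model(X):                       # illustrative, nearly linear model
    return 4 * X[:, 0] + 2 * X[:, 1] + 0.5 * X[:, 2]

N, d = 2000, 3
X = rng.uniform(0, 1, size=(N, d))
y = model(X)

A = np.column_stack([np.ones(N), X])            # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

src = coef[1:] * X.std(axis=0) / y.std()
print("standardized regression coefficients:", np.round(src, 3))
print("R^2 check:", 1 - np.var(y - A @ coef) / np.var(y))  # ~1 if linear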
Variance-based methods
Variance-based methods are a class of probabilistic approaches which quantify the input and output uncertainties as random variables, represented via their probability distributions, and decompose the output variance into parts attributable to input variables and combinations of variables. The sensitivity of the output to an input variable is therefore measured by the amount of variance in the output caused by that input.
This amount is quantified and calculated using Sobol indices: they represent the proportion of variance explained by an input or group of inputs.
For an input X_i, the first-order Sobol index is defined as follows:

S_i = Var_(X_i)( E_(X~i)( Y | X_i ) ) / Var(Y),

where Var and E denote the variance and expected value operators respectively, and X~i denotes the set of all input variables except X_i. This expression essentially measures the contribution of X_i alone to the uncertainty (variance) in Y (averaged over variations in the other variables), and is known as the first-order sensitivity index or main effect index S_i.
Importantly, the first-order sensitivity index of X_i does not measure the uncertainty caused by interactions X_i has with other variables. A further measure, known as the total effect index S_Ti, gives the total variance in Y caused by X_i and its interactions with any of the other input variables. The total effect index is given as follows:

S_Ti = E_(X~i)( Var_(X_i)( Y | X~i ) ) / Var(Y) = 1 − Var_(X~i)( E_(X_i)( Y | X~i ) ) / Var(Y).
Variance-based methods allow full exploration of the input space, accounting for interactions, and nonlinear responses. For these reasons they are widely used when it is feasible to calculate them. Typically this calculation involves the use of Monte Carlo methods, but since this can involve many thousands of model runs, other methods (such as metamodels) can be used to reduce computational expense when necessary.
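A self-contained Monte Carlo sketch of these indices is given below, using a common pick-and-freeze construction: two independent sample matrices A and B, plus matrices AB_i in which column i of A is replaced by that of B. The test function, sample size and the particular estimators (often attributed to Saltelli and Jansen) are one possible choice, not the only one.

# Monte Carlo estimation of first-order and total-order Sobol indices
# with two sample matrices A and B (a pick-and-freeze scheme).
import numpy as np

rng = np.random.default_rng(1)

def model(X):                        # illustrative model with an interaction
    return X[:, 0] + 2 * X[:, 1] + X[:, 0] * X[:, 2]

N, d = 100_000, 3
A = rng.uniform(0, 1, size=(N, d))
B = rng.uniform(0, 1, size=(N, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]              # vary only column i, freeze the rest
    yABi = model(ABi)
    S_i = np.mean(yB * (yABi - yA)) / var_y          # first-order estimator
    ST_i = 0.5 * np.mean((yA - yABi) ** 2) / var_y   # total-order estimator
    print(f"X{i + 1}: S = {S_i:.3f}, ST = {ST_i:.3f}")
# X2 should dominate the first-order indices, while X1 and X3 show
# ST > S because of their interaction term.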
Moment-independent methods
Moment-independent methods extend variance-based techniques by considering the probability density or cumulative distribution function of the model output . Thus, they do not refer to any particular moment of , whence the name.
The moment-independent sensitivity measures of X_i, here denoted by ξ_i, can be defined through an equation similar to the variance-based indices, replacing the conditional expectation with a distance, as ξ_i = E_(X_i)[ d(P_Y, P_(Y|X_i)) ], where d is a statistical distance [metric or divergence] between probability measures, and P_Y and P_(Y|X_i) are the marginal and conditional probability measures of the output.
If d is a distance, the moment-independent global sensitivity measure satisfies zero-independence, i.e. it is zero if and only if the output is statistically independent of X_i. This is a relevant statistical property also known as Renyi's postulate D.
The class of moment-independent sensitivity measures includes indicators such as the -importance measure, the new correlation coefficient of Chatterjee, the Wasserstein correlation of Wiesel and the kernel-based sensitivity measures of Barr and Rabitz.
Another measure for global sensitivity analysis, in the category of moment-independent approaches, is the PAWN index.
Variogram analysis of response surfaces (VARS)
One of the major shortcomings of the previous sensitivity analysis methods is that none of them considers the spatially ordered structure of the response surface/output of the model Y in the parameter space. By utilizing the concepts of directional variograms and covariograms, variogram analysis of response surfaces (VARS) addresses this weakness by recognizing a spatially continuous correlation structure in the values of Y across the parameter space.
Basically, the higher the variability the more heterogeneous is the response surface along a particular direction/parameter, at a specific perturbation scale. Accordingly, in the VARS framework, the values of directional variograms for a given perturbation scale can be considered as a comprehensive illustration of sensitivity information, through linking variogram analysis to both direction and perturbation scale concepts. As a result, the VARS framework accounts for the fact that sensitivity is a scale-dependent concept, and thus overcomes the scale issue of traditional sensitivity analysis methods. More importantly, VARS is able to provide relatively stable and statistically robust estimates of parameter sensitivity with much lower computational cost than other strategies (about two orders of magnitude more efficient). Notably, it has been shown that there is a theoretical link between the VARS framework and the variance-based and derivative-based approaches.
Fourier amplitude sensitivity test (FAST)
The Fourier amplitude sensitivity test (FAST) uses the Fourier series to represent a multivariate function (the model) in the frequency domain, using a single frequency variable. Therefore, the integrals required to calculate sensitivity indices become univariate, resulting in computational savings.
Shapley effects
Shapley effects rely on Shapley values and represent the average marginal contribution of a given factor across all possible combinations of factors. These values are related to Sobol's indices as their value falls between the first-order Sobol effect and the total-order effect.
Chaos polynomials
The principle is to project the function of interest onto a basis of orthogonal polynomials. The Sobol indices are then expressed analytically in terms of the coefficients of this decomposition.
Complementary research approaches for time-consuming simulations
A number of methods have been developed to overcome some of the constraints discussed above, which would otherwise make the estimation of sensitivity measures infeasible (most often due to computational expense). Generally, these methods focus on efficiently calculating variance-based measures of sensitivity, by creating a metamodel of the costly function to be evaluated and/or by "wisely" sampling the factor space.
Metamodels
Metamodels (also known as emulators, surrogate models or response surfaces) are data-modeling/machine learning approaches that involve building a relatively simple mathematical function, known as a metamodel, that approximates the input/output behavior of the model itself. In other words, it is the concept of "modeling a model" (hence the name "metamodel"). The idea is that, although computer models may be a very complex series of equations that can take a long time to solve, they can always be regarded as a function of their inputs Y = f(X). By running the model at a number of points in the input space, it may be possible to fit a much simpler metamodel f̂, such that f̂(X) ≈ f(X) to within an acceptable margin of error. Then, sensitivity measures can be calculated from the metamodel (either with Monte Carlo or analytically), which will have a negligible additional computational cost. Importantly, the number of model runs required to fit the metamodel can be orders of magnitude less than the number of runs required to directly estimate the sensitivity measures from the model.
Clearly, the crux of a metamodel approach is to find a function f̂ (the metamodel) that is a sufficiently close approximation to the model f. This requires the following steps,
Sampling (running) the model at a number of points in its input space. This requires a sample design.
Selecting a type of emulator (mathematical function) to use.
"Training" the metamodel using the sample data from the model – this generally involves adjusting the metamodel parameters until the metamodel mimics the true model as well as possible.
Sampling the model can often be done with low-discrepancy sequences, such as the Sobol sequence – due to mathematician Ilya M. Sobol – or Latin hypercube sampling, although random designs can also be used, at the loss of some efficiency. The selection of the metamodel type and the training are intrinsically linked since the training method will be dependent on the class of metamodel. Some types of metamodels that have been used successfully for sensitivity analysis include:
Gaussian processes (also known as kriging), where any combination of output points is assumed to be distributed as a multivariate Gaussian distribution. Recently, "treed" Gaussian processes have been used to deal with heteroscedastic and discontinuous responses.
Random forests, in which a large number of decision trees are trained, and the result averaged.
Gradient boosting, where a succession of simple regressions are used to weight data points to sequentially reduce error.
Polynomial chaos expansions, which use orthogonal polynomials to approximate the response surface.
Smoothing splines, normally used in conjunction with high-dimensional model representation (HDMR) truncations (see below).
Discrete Bayesian networks, in conjunction with canonical models such as noisy models. Noisy models exploit information on the conditional independence between variables to significantly reduce dimensionality.
The use of an emulator introduces a machine learning problem, which can be difficult if the response of the model is highly nonlinear. In all cases, it is useful to check the accuracy of the emulator, for example using cross-validation.
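A minimal sketch of the emulator workflow with a Gaussian-process metamodel is shown below. It assumes scikit-learn is available; the "expensive" model is replaced by a cheap stand-in so the example runs, and the design size and kernel choice are illustrative.

# Fit a Gaussian-process metamodel to a small number of model runs and
# check its accuracy by cross-validation before using it further.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def expensive_model(X):              # cheap stand-in for a costly simulator
    return np.sin(3 * X[:, 0]) + X[:, 1] ** 2

# Step 1: sample the model at a modest number of design points.
X_train = rng.uniform(0, 1, size=(60, 2))
y_train = expensive_model(X_train)

# Steps 2-3: choose an emulator type and train it, with a CV accuracy check.
gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
print("cross-validated R^2:", np.round(cross_val_score(gp, X_train, y_train, cv=5, scoring="r2"), 3))

gp.fit(X_train, y_train)
# The fitted emulator can now be evaluated cheaply, e.g. inside a Monte
# Carlo loop for variance-based sensitivity measures.
X_big = rng.uniform(0, 1, size=(10_000, 2))
print("emulator-based output variance:", gp.predict(X_big).var())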
High-dimensional model representations (HDMR)
A high-dimensional model representation (HDMR) (the term is due to H. Rabitz) is essentially an emulator approach, which involves decomposing the function output into a linear combination of input terms and interactions of increasing dimensionality. The HDMR approach exploits the fact that the model can usually be well-approximated by neglecting higher-order interactions (second or third-order and above). The terms in the truncated series can then each be approximated by e.g. polynomials or splines and the response expressed as the sum of the main effects and interactions up to the truncation order. From this perspective, HDMRs can be seen as emulators which neglect high-order interactions; the advantage is that they are able to emulate models with higher dimensionality than full-order emulators.
Monte Carlo filtering
Sensitivity analysis via Monte Carlo filtering is also a sampling-based approach, whose objective is to identify regions in the space of the input factors corresponding to particular values (e.g., high or low) of the output.
Related concepts
Sensitivity analysis is closely related with uncertainty analysis; while the latter studies the overall uncertainty in the conclusions of the study, sensitivity analysis tries to identify what source of uncertainty weighs more on the study's conclusions.
The problem setting in sensitivity analysis also has strong similarities with the field of design of experiments. In a design of experiments, one studies the effect of some process or intervention (the 'treatment') on some objects (the 'experimental units'). In sensitivity analysis one looks at the effect of varying the inputs of a mathematical model on the output of the model itself. In both disciplines one strives to obtain information from the system with a minimum of physical or numerical experiments.
Sensitivity auditing
It may happen that a sensitivity analysis of a model-based study is meant to underpin an inference, and to certify its robustness, in a context where the inference feeds into a policy or decision-making process. In these cases the framing of the analysis itself, its institutional context, and the motivations of its author may become a matter of great importance, and a pure sensitivity analysis – with its emphasis on parametric uncertainty – may be seen as insufficient. The emphasis on the framing may derive inter-alia from the relevance of the policy study to different constituencies that are characterized by different norms and values, and hence by a different story about 'what the problem is' and foremost about 'who is telling the story'. Most often the framing includes more or less implicit assumptions, which could be political (e.g. which group needs to be protected) all the way to technical (e.g. which variable can be treated as a constant).
In order to take these concerns into due consideration the instruments of SA have been extended to provide an assessment of the entire knowledge and model generating process. This approach has been called 'sensitivity auditing'. It takes inspiration from NUSAP, a method used to qualify the worth of quantitative information with the generation of 'Pedigrees' of numbers. Sensitivity auditing has been especially designed for an adversarial context, where not only the nature of the evidence, but also the degree of certainty and uncertainty associated to the evidence, will be the subject of partisan interests. Sensitivity auditing is recommended in the European Commission guidelines for impact assessment, as well as in the report Science Advice for Policy by European Academies.
Pitfalls and difficulties
Some common difficulties in sensitivity analysis include:
Assumptions vs. inferences: In uncertainty and sensitivity analysis there is a crucial trade off between how scrupulous an analyst is in exploring the input assumptions and how wide the resulting inference may be. The point is well illustrated by the econometrician Edward E. Leamer:
" I have proposed a form of organized sensitivity analysis that I call 'global sensitivity analysis' in which a neighborhood of alternative assumptions is selected and the corresponding interval of inferences is identified. Conclusions are judged to be sturdy only if the neighborhood of assumptions is wide enough to be credible and the corresponding interval of inferences is narrow enough to be useful."
Note Leamer's emphasis is on the need for 'credibility' in the selection of assumptions. The easiest way to invalidate a model is to demonstrate that it is fragile with respect to the uncertainty in the assumptions or to show that its assumptions have not been taken 'wide enough'. The same concept is expressed by Jerome R. Ravetz, for whom bad modeling is when uncertainties in inputs must be suppressed lest outputs become indeterminate.
Not enough information to build probability distributions for the inputs: Probability distributions can be constructed from expert elicitation, although even then it may be hard to build distributions with great confidence. The subjectivity of the probability distributions or ranges will strongly affect the sensitivity analysis.
Unclear purpose of the analysis: Different statistical tests and measures are applied to the problem and different factor rankings are obtained. The test should instead be tailored to the purpose of the analysis, e.g. one uses Monte Carlo filtering if one is interested in which factors are most responsible for generating high/low values of the output.
Too many model outputs are considered: This may be acceptable for the quality assurance of sub-models but should be avoided when presenting the results of the overall analysis.
Piecewise sensitivity: This is when one performs sensitivity analysis on one sub-model at a time. This approach is non-conservative as it might overlook interactions among factors in different sub-models (Type II error).
SA in international context
The importance of understanding and managing uncertainty in model results has inspired many scientists from different research centers all over the world to take a close interest in this subject. National and international agencies involved in impact assessment studies have included sections devoted to sensitivity analysis in their guidelines. Examples are the European Commission (see e.g. the guidelines for impact assessment), the White House Office of Management and Budget, the Intergovernmental Panel on Climate Change and US Environmental Protection Agency's modeling guidelines.
Specific applications of sensitivity analysis
The following pages discuss sensitivity analyses in relation to specific applications:
Environmental sciences
Business
(Corporate) finance
Epidemiology
Multi-criteria decision making
Model calibration
See also
Causality
Elementary effects method
Experimental uncertainty analysis
Fourier amplitude sensitivity testing
Info-gap decision theory
Interval FEM
Perturbation analysis
Probabilistic design
Probability bounds analysis
Robustification
ROC curve
Uncertainty quantification
Variance-based sensitivity analysis
Multiverse analysis
Feature selection
References
Further reading
Borgonovo, E. (2017). Sensitivity Analysis: An Introduction for the Management Scientist. International Series in Management Science and Operations Research, Springer New York.
Pilkey, O. H. and L. Pilkey-Jarvis (2007), Useless Arithmetic. Why Environmental Scientists Can't Predict the Future. New York: Columbia University Press.
Santner, T. J.; Williams, B. J.; Notz, W.I. (2003) Design and Analysis of Computer Experiments; Springer-Verlag.
Haug, Edward J.; Choi, Kyung K.; Komkov, Vadim (1986) Design sensitivity analysis of structural systems. Mathematics in Science and Engineering, 177. Academic Press, Inc., Orlando, FL.
Hall, C. A. S. and Day, J. W. (1977). Ecosystem Modeling in Theory and Practice: An Introduction with Case Histories. John Wiley & Sons, New York, NY. ISBN 978-0-471-34165-9.
External links
Web site with material from SAMO conference series (1995-2025)
Simulation
Business intelligence terms
Mathematical modeling
Mathematical and quantitative methods (economics) | Sensitivity analysis | Mathematics | 5,858 |
265,769 | https://en.wikipedia.org/wiki/Hazard%20ratio | In survival analysis, the hazard ratio (HR) is the ratio of the hazard rates corresponding to the conditions characterised by two distinct levels of a treatment variable of interest. For example, in a clinical study of a drug, the treated population may die at twice the rate of the control population. The hazard ratio would be 2, indicating a higher hazard of death from the treatment.
For example, a scientific paper might use an HR to state something such as: "Adequate COVID-19 vaccination status was associated with significantly decreased risk for the composite of severe COVID-19 or mortality with a[n] HR of 0.20 (95% CI, 0.17–0.22)." In essence, the hazard for the composite outcome was 80% lower among the vaccinated relative to those who were unvaccinated in the same study. So, for a hazardous outcome (e.g., severe disease or death), an HR below 1 indicates that the treatment (e.g., vaccination) is protective against the outcome of interest. In other cases, an HR greater than 1 indicates the treatment is favorable. For example, if the outcome is actually favorable (e.g., accepting a job offer to end a spell of unemployment), an HR greater than 1 indicates that seeking a job is favorable to not seeking one (if "treatment" is defined as seeking a job).
Hazard ratios differ from relative risks (RRs) and odds ratios (ORs) in that RRs and ORs are cumulative over an entire study, using a defined endpoint, while HRs represent instantaneous risk over the study time period, or some subset thereof. Hazard ratios suffer somewhat less from selection bias with respect to the endpoints chosen and can indicate risks that happen before the endpoint.
Definition and derivation
Regression models are used to obtain hazard ratios and their confidence intervals.
The instantaneous hazard rate is the limit of the number of events per unit time divided by the number at risk, as the time interval approaches 0:

h(t) = lim_(Δt→0) [ (observed events in [t, t + Δt]) / N(t) ] / Δt,

where N(t) is the number at risk at the beginning of an interval. A hazard is the probability that a patient fails between t and t + Δt, given that they have survived up to time t, divided by Δt, as Δt approaches zero.
The hazard ratio is the effect on this hazard rate of a difference, such as group membership (for example, treatment or control, male or female), as estimated by regression models that treat the logarithm of the HR as a function of a baseline hazard h0(t) and a linear combination of explanatory variables:

log h(t) = log h0(t) + β1·x1 + ... + βk·xk.
Such models are generally classed proportional hazards regression models; the best known being the Cox proportional hazards model, and the exponential, Gompertz and Weibull parametric models.
For two groups that differ only in treatment condition, the ratio of the hazard functions is given by e^β, where β is the estimate of treatment effect derived from the regression model. This hazard ratio, that is, the ratio between the predicted hazard for a member of one group and that for a member of the other group, is given by e^β holding everything else constant, i.e. assuming proportionality of the hazard functions.
For a continuous explanatory variable, the same interpretation applies to a unit difference. Other HR models have different formulations and the interpretation of the parameter estimates differs accordingly.
Interpretation
In its simplest form, the hazard ratio can be interpreted as the chance of an event occurring in the treatment arm divided by the chance of the event occurring in the control arm, or vice versa, of a study. The resolution of these endpoints are usually depicted using Kaplan–Meier survival curves. These curves relate the proportion of each group where the endpoint has not been reached. The endpoint could be any dependent variable associated with the covariate (independent variable), e.g. death, remission of disease or contraction of disease. The curve represents the odds of an endpoint having occurred at each point in time (the hazard). The hazard ratio is simply the relationship between the instantaneous hazards in the two groups and represents, in a single number, the magnitude of distance between the Kaplan–Meier plots.
Hazard ratios do not reflect a time unit of the study. The difference between hazard-based and time-based measures is akin to the difference between the odds of winning a race and the margin of victory. When a study reports one hazard ratio per time period, it is assumed that difference between groups was proportional. Hazard ratios become meaningless when this assumption of proportionality is not met.
If the proportional hazard assumption holds, a hazard ratio of one means equivalence in the hazard rate of the two groups, whereas a hazard ratio other than one indicates difference in hazard rates between groups. The researcher indicates the probability of this sample difference being due to chance by reporting the probability associated with some test statistic. For instance, a test statistic from the Cox model or the log-rank test might then be used to assess the significance of any differences observed in these survival curves.
Conventionally, probabilities lower than 0.05 are considered significant and researchers provide a 95% confidence interval for the hazard ratio, e.g. derived from the standard deviation of the Cox-model regression coefficient, i.e. exp(β ± 1.96·SE(β)). Statistically significant hazard ratios cannot include unity (one) in their confidence intervals.
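As a small worked example, the hazard ratio and its 95% confidence interval follow directly from the fitted Cox coefficient; the coefficient and standard error below are made-up numbers chosen to echo the vaccination example above, not values from any study.

# Hazard ratio and 95% confidence interval from a Cox regression
# coefficient; beta and its standard error are hypothetical.
import math

beta = -1.609   # fitted log hazard ratio (made-up)
se = 0.065      # standard error of beta (made-up)

hr = math.exp(beta)
ci_low = math.exp(beta - 1.96 * se)
ci_high = math.exp(beta + 1.96 * se)
print(f"HR = {hr:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# -> HR = 0.20, 95% CI (0.18, 0.23); the interval excludes 1, so the
#    difference would conventionally be called statistically significant.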
The proportional hazards assumption
The proportional hazards assumption for hazard ratio estimation is strong and often unreasonable. Complications, adverse effects and late effects are all possible causes of change in the hazard rate over time. For instance, a surgical procedure may have high early risk, but excellent long term outcomes.
If the hazard ratio between groups remain constant, this is not a problem for interpretation. However, interpretation of hazard ratios become impossible when selection bias exists between groups. For instance, a particularly risky surgery might result in the survival of a systematically more robust group who would have fared better under any of the competing treatment conditions, making it look as if the risky procedure was better. Follow-up time is also important. A cancer treatment associated with better remission rates might on follow-up be associated with higher relapse rates. The researchers' decision about when to follow up is arbitrary and may lead to very different reported hazard ratios.
The hazard ratio and survival
Hazard ratios are often treated as a ratio of death probabilities. For example, a hazard ratio of 2 is thought to mean that a group has twice the chance of dying than a comparison group. In the Cox-model, this can be shown to translate to the following relationship between group survival functions: S1(t) = S2(t)^r (where r is the hazard ratio of group 1 relative to group 2). Therefore, with a hazard ratio of 2, if S2(t) = 0.2 (20% survived at time t), S1(t) = 0.2^2 = 0.04 (4% survived at t). The corresponding death probabilities are 0.8 and 0.96. It should be clear that the hazard ratio is a relative measure of effect and tells us nothing about absolute risk.
While hazard ratios allow for hypothesis testing, they should be considered alongside other measures for interpretation of the treatment effect, e.g. the ratio of median times (median ratio) at which treatment and control group participants are at some endpoint. If the analogy of a race is applied, the hazard ratio is equivalent to the odds that an individual in the group with the higher hazard reaches the end of the race first. The probability of being first can be derived from the odds, which is the probability of being first divided by the probability of not being first:
P = HR/(1 + HR); conversely, HR = P/(1 − P).
In the previous example, a hazard ratio of 2 corresponds to a 67% chance of an early death. The hazard ratio does not convey information about how soon the death will occur.
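The two translations above, from hazard ratio to group survival under proportional hazards and from hazard ratio to the probability of reaching the endpoint first, can be written as a short calculation; the survival value 0.2 simply echoes the example in the text.

# Translate a hazard ratio into (a) group survival under proportional
# hazards and (b) the probability of being first to reach the endpoint.
hr = 2.0
s_control = 0.20                       # survival in the comparison group at time t

s_treated = s_control ** hr            # S1(t) = S2(t) ** r under proportionality
p_first = hr / (1 + hr)                # odds of hr : 1 of reaching the endpoint first
print(round(s_treated, 2), round(p_first, 2))   # 0.04 0.67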
The hazard ratio, treatment effect and time-based endpoints
Treatment effect depends on the underlying disease related to survival function, not just the hazard ratio. Since the hazard ratio does not give us direct time-to-event information, researchers have to report median endpoint times and calculate the median endpoint time ratio by dividing the control group median value by the treatment group median value.
While the median endpoint ratio is a relative speed measure, the hazard ratio is not. The relationship between treatment effect and the hazard ratio is given as . A statistically important, but practically insignificant effect can produce a large hazard ratio, e.g. a treatment increasing the number of one-year survivors in a population from one in 10,000 to one in 1,000 has a hazard ratio of 10. It is unlikely that such a treatment would have had much impact on the median endpoint time ratio, which likely would have been close to unity, i.e. mortality was largely the same regardless of group membership and clinically insignificant.
By contrast, a treatment group in which 50% of infections are resolved after one week (versus 25% in the control) yields a hazard ratio of two. If it takes ten weeks for all cases in the treatment group and half of cases in the control group to resolve, the ten-week hazard ratio remains at two, but the median endpoint time ratio is ten, a clinically significant difference.
See also
Survival analysis
Failure rate and Hazard rate
Proportional hazards models
Relative risk
References
Epidemiology
Medical statistics
Statistical ratios
Survival analysis | Hazard ratio | Environmental_science | 1,870 |
57,753,344 | https://en.wikipedia.org/wiki/Estradiol%20benzoate%20cyclooctenyl%20ether | Estradiol benzoate cyclooctenyl ether (EBCO), or estradiol 3-benzoate 17β-cyclooctenyl ether, is a synthetic estrogen as well as estrogen ester and ether – specifically, the C3 benzoate ester and C17β cyclooctenyl ether of estradiol – which was described in the early 1970s and was never marketed. It has been found to have a dramatically prolonged duration of action with oral administration in animals, similarly to the related compound quinestrol (the 3-cyclopentyl ether of ethinylestradiol). A single oral dose of EBCO sustained high uterus weights for 3 weeks in rats. This long-lasting activity may be due to storage of EBCO in fat. It appears that EBCO is absorbed satisfactorily from the gastrointestinal tract, at least partially survives first-pass metabolism in the liver and intestines, and is then sequestered into fat, from which it is slowly released and activated into estradiol. In contrast to quinestrol, the oral activity of EBCO is greatly improved when it is delivered in an oil solution as opposed to an aqueous vehicle.
See also
List of estrogen esters § Ethers of steroidal estrogens
References
Abandoned drugs
Benzoate esters
Estradiol esters
Estrogen ethers
Synthetic estrogens | Estradiol benzoate cyclooctenyl ether | Chemistry | 309 |
48,381 | https://en.wikipedia.org/wiki/Astronomical%20coordinate%20systems | In astronomy, coordinate systems are used for specifying positions of celestial objects (satellites, planets, stars, galaxies, etc.) relative to a given reference frame, based on physical reference points available to a situated observer (e.g. the true horizon and north to an observer on Earth's surface). Coordinate systems in astronomy can specify an object's relative position in three-dimensional space or plot merely by its direction on a celestial sphere, if the object's distance is unknown or trivial.
Spherical coordinates, projected on the celestial sphere, are analogous to the geographic coordinate system used on the surface of Earth. These differ in their choice of fundamental plane, which divides the celestial sphere into two equal hemispheres along a great circle. Rectangular coordinates, in appropriate units, have the same fundamental () plane and primary (-axis) direction, such as an axis of rotation. Each coordinate system is named after its choice of fundamental plane.
Coordinate systems
The following table lists the common coordinate systems in use by the astronomical community. The fundamental plane divides the celestial sphere into two equal hemispheres and defines the baseline for the latitudinal coordinates, similar to the equator in the geographic coordinate system. The poles are located at ±90° from the fundamental plane. The primary direction is the starting point of the longitudinal coordinates. The origin is the zero distance point, the "center of the celestial sphere", although the definition of celestial sphere is ambiguous about the definition of its center point.
Horizontal system
The horizontal, or altitude-azimuth, system is based on the position of the observer on Earth, which revolves around its own axis once per sidereal day (23 hours, 56 minutes and 4.091 seconds) in relation to the star background. The positioning of a celestial object by the horizontal system varies with time, but is a useful coordinate system for locating and tracking objects for observers on Earth. It is based on the position of stars relative to an observer's ideal horizon.
Equatorial system
The equatorial coordinate system is centered at Earth's center, but fixed relative to the celestial poles and the March equinox. The coordinates are based on the location of stars relative to Earth's equator if it were projected out to an infinite distance. The equatorial system describes the sky as seen from the Solar System, and modern star maps almost exclusively use equatorial coordinates.
The equatorial system is the normal coordinate system for most professional and many amateur astronomers having an equatorial mount that follows the movement of the sky during the night. Celestial objects are found by adjusting the telescope's or other instrument's scales so that they match the equatorial coordinates of the selected object to observe.
Popular choices of pole and equator are the older B1950 and the modern J2000 systems, but a pole and equator "of date" can also be used, meaning one appropriate to the date under consideration, such as when a measurement of the position of a planet or spacecraft is made. There are also subdivisions into "mean of date" coordinates, which average out or ignore nutation, and "true of date," which include nutation.
Ecliptic system
The fundamental plane is the plane of the Earth's orbit, called the ecliptic plane. There are two principal variants of the ecliptic coordinate system: geocentric ecliptic coordinates centered on the Earth and heliocentric ecliptic coordinates centered on the center of mass of the Solar System.
The geocentric ecliptic system was the principal coordinate system for ancient astronomy and is still useful for computing the apparent motions of the Sun, Moon, and planets. It was used to define the twelve astrological signs of the zodiac, for instance.
The heliocentric ecliptic system describes the planets' orbital movement around the Sun, and centers on the barycenter of the Solar System (i.e. very close to the center of the Sun). The system is primarily used for computing the positions of planets and other Solar System bodies, as well as defining their orbital elements.
Galactic system
The galactic coordinate system uses the approximate plane of the Milky Way Galaxy as its fundamental plane. The Solar System is still the center of the coordinate system, and the zero point is defined as the direction towards the Galactic Center. Galactic latitude resembles the elevation above the galactic plane and galactic longitude determines direction relative to the center of the galaxy.
Supergalactic system
The supergalactic coordinate system corresponds to a fundamental plane that contains a higher than average number of local galaxies in the sky as seen from Earth.
Converting coordinates
Conversions between the various coordinate systems are given. See the notes before using these equations.
Notation
Horizontal coordinates
, azimuth
, altitude
Equatorial coordinates
, right ascension
, declination
, hour angle
Ecliptic coordinates
, ecliptic longitude
, ecliptic latitude
Galactic coordinates
, galactic longitude
, galactic latitude
Miscellaneous
, observer's longitude
, observer's latitude
, obliquity of the ecliptic (about 23.4°)
, local sidereal time
, Greenwich sidereal time
Hour angle ↔ right ascension
Equatorial ↔ ecliptic
The classical equations, derived from spherical trigonometry, for the longitudinal coordinate are presented to the right of a bracket; dividing the first equation by the second gives the convenient tangent equation seen on the left. The rotation matrix equivalent is given beneath each case. This division is ambiguous because tan has a period of 180° () whereas cos and sin have periods of 360° (2).
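As an illustration of resolving that ambiguity with the two-argument arctangent, here is a minimal sketch of the ecliptic-to-equatorial direction in Python. The formulas are the standard spherical-trigonometry ones (the article's own displayed equations are not reproduced above), and the default obliquity is the commonly used J2000.0 value rather than anything taken from this text; the function name is illustrative.

```python
import math

def ecliptic_to_equatorial(lon_deg, lat_deg, obliquity_deg=23.4392911):
    """Convert ecliptic (longitude, latitude) to equatorial (RA, declination).

    All angles are in degrees.  The two-argument arctangent (atan2) is used
    for the longitudinal coordinate, so the quadrant comes out correctly.
    """
    lam = math.radians(lon_deg)
    beta = math.radians(lat_deg)
    eps = math.radians(obliquity_deg)

    dec = math.asin(math.sin(beta) * math.cos(eps) +
                    math.cos(beta) * math.sin(eps) * math.sin(lam))
    ra = math.atan2(math.sin(lam) * math.cos(eps) - math.tan(beta) * math.sin(eps),
                    math.cos(lam))
    return math.degrees(ra) % 360.0, math.degrees(dec)

# The June solstice point (ecliptic longitude 90°, latitude 0°) should map to
# RA 90° and a declination equal to the obliquity, about +23.44°.
print(ecliptic_to_equatorial(90.0, 0.0))
```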
Equatorial ↔ horizontal
Azimuth () is measured from the south point, turning positive to the west.
Zenith distance, the angular distance along the great circle from the zenith to a celestial object, is simply the complementary angle of the altitude: .
In solving the equation for , in order to avoid the ambiguity of the arctangent, use of the two-argument arctangent, denoted , is recommended. The two-argument arctangent computes the arctangent of , and accounts for the quadrant in which it is being computed. Thus, consistent with the convention of azimuth being measured from the south and opening positive to the west,
,
where
.
If the above formula produces a negative value for , it can be rendered positive by simply adding 360°.
Again, in solving the equation for , use of the two-argument arctangent that accounts for the quadrant is recommended. Thus, again consistent with the convention of azimuth being measured from the south and opening positive to the west,
,
where
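A minimal sketch of the forward conversion in Python, assuming the standard spherical-trigonometry formulas and the convention above (azimuth measured from the south point, opening positive toward the west); the function name and example values are illustrative.

```python
import math

def equatorial_to_horizontal(hour_angle_deg, dec_deg, lat_deg):
    """Convert (hour angle, declination) to (azimuth, altitude) in degrees.

    Azimuth is reckoned from the south point and opens positive toward the
    west, matching the convention used in this section; atan2 resolves the
    quadrant, and negative azimuths are wrapped into the range 0°-360°.
    """
    H = math.radians(hour_angle_deg)
    dec = math.radians(dec_deg)
    lat = math.radians(lat_deg)

    altitude = math.asin(math.sin(lat) * math.sin(dec) +
                         math.cos(lat) * math.cos(dec) * math.cos(H))
    azimuth = math.atan2(math.sin(H),
                         math.cos(H) * math.sin(lat) - math.tan(dec) * math.cos(lat))
    return math.degrees(azimuth) % 360.0, math.degrees(altitude)

# Illustrative values only: hour angle 30°, declination +20°, latitude +50°.
print(equatorial_to_horizontal(30.0, 20.0, 50.0))
```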
Equatorial ↔ galactic
These equations are for converting equatorial coordinates to Galactic coordinates.
are the equatorial coordinates of the North Galactic Pole and is the Galactic longitude of the North Celestial Pole. Referred to J2000.0 the values of these quantities are:
If the equatorial coordinates are referred to another equinox, they must be precessed to their place at J2000.0 before applying these formulae.
These equations convert to equatorial coordinates referred to J2000.0.
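As an illustration of the forward (equatorial to Galactic) conversion described above: since the displayed equations and the quoted J2000.0 values are not reproduced in this text, the sketch below uses the commonly cited J2000.0 coordinates of the North Galactic Pole and the Galactic longitude of the North Celestial Pole. Treat those constants, and the function name, as assumptions rather than values taken from the article.

```python
import math

# Commonly quoted J2000.0 values (assumed here, not reproduced from the article).
RA_NGP = math.radians(192.85948)   # right ascension of the North Galactic Pole
DEC_NGP = math.radians(27.12825)   # declination of the North Galactic Pole
L_NCP = math.radians(122.93192)    # Galactic longitude of the North Celestial Pole

def equatorial_to_galactic(ra_deg, dec_deg):
    """Convert J2000.0 equatorial coordinates (degrees) to Galactic (l, b) in degrees."""
    ra = math.radians(ra_deg)
    dec = math.radians(dec_deg)

    b = math.asin(math.sin(DEC_NGP) * math.sin(dec) +
                  math.cos(DEC_NGP) * math.cos(dec) * math.cos(ra - RA_NGP))
    l = L_NCP - math.atan2(math.cos(dec) * math.sin(ra - RA_NGP),
                           math.cos(DEC_NGP) * math.sin(dec) -
                           math.sin(DEC_NGP) * math.cos(dec) * math.cos(ra - RA_NGP))
    return math.degrees(l) % 360.0, math.degrees(b)

# The Galactic Center (RA ~266.4°, Dec ~ -28.9°) should map to (l, b) near (0, 0).
print(equatorial_to_galactic(266.405, -28.936))
```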
Notes on conversion
Angles in the degrees ( ° ), minutes ( ′ ), and seconds ( ″ ) of sexagesimal measure must be converted to decimal before calculations are performed. Whether they are converted to decimal degrees or radians depends upon the particular calculating machine or program. Negative angles must be carefully handled; must be converted as .
Angles in the hours ( h ), minutes ( m ), and seconds ( s ) of time measure must be converted to decimal degrees or radians before calculations are performed. 1h = 15°; 1m = 15′; 1s = 15″
Angles greater than 360° (2) or less than 0° may need to be reduced to the range 0°−360° (0–2) depending upon the particular calculating machine or program.
The cosine of a latitude (declination, ecliptic and Galactic latitude, and altitude) is never negative by definition, since the latitude varies between −90° and +90°.
Inverse trigonometric functions arcsine, arccosine and arctangent are quadrant-ambiguous, and results should be carefully evaluated. Use of the second arctangent function (denoted in computing as or , which calculates the arctangent of using the sign of both arguments to determine the right quadrant) is recommended when calculating longitude/right ascension/azimuth. An equation which finds the sine, followed by the arcsin function, is recommended when calculating latitude/declination/altitude.
Azimuth () is referred here to the south point of the horizon, the common astronomical reckoning. An object on the meridian to the south of the observer has = = 0° with this usage. However, in Astropy's AltAz, in the Large Binocular Telescope FITS file convention, in XEphem, in the IAU library Standards of Fundamental Astronomy and Section B of the Astronomical Almanac for example, the azimuth is East of North. In navigation and some other disciplines, azimuth is figured from the north.
The equations for altitude () do not account for atmospheric refraction.
The equations for horizontal coordinates do not account for diurnal parallax, that is, the small offset in the position of a celestial object caused by the position of the observer on the Earth's surface. This effect is significant for the Moon, less so for the planets, minute for stars or more distant objects.
Observer's longitude () here is measured positively westward from the prime meridian; this is contrary to current IAU standards.
See also
Apparent longitude
Notes
References
External links
NOVAS, the United States Naval Observatory's Vector Astrometry Software, an integrated package of subroutines and functions for computing various commonly needed quantities in positional astronomy.
SuperNOVAS a maintained fork of NOVAS C 3.1 with bug fixes, improvements, new features, and online documentation.
SOFA, the IAU's Standards of Fundamental Astronomy, an accessible and authoritative set of algorithms and procedures that implement standard models used in fundamental astronomy.
This article was originally based on Jason Harris' Astroinfo, which is accompanied by KStars, a KDE Desktop Planetarium for Linux/KDE.
Cartography
Concepts in astronomy
Navigation | Astronomical coordinate systems | Physics,Astronomy,Mathematics | 2,083 |
57,620,945 | https://en.wikipedia.org/wiki/Journal%20of%20Consumer%20Behaviour | The Journal of Consumer Behaviour is a bimonthly peer-reviewed academic journal dedicated to the study of consumer behaviour. It was established in 2001 and is published by John Wiley & Sons.
Aims and Scope
The Journal of Consumer Behaviour aims to promote the understanding of consumer behaviour, consumer research and consumption through the publication of double-blind peer-reviewed, top quality theoretical and empirical research. An international academic journal with a foundation in the social sciences, the JCB has a diverse and multidisciplinary outlook which seeks to showcase innovative, alternative and contested representations of consumer behaviour alongside the latest developments in established traditions of consumer research.
Keywords
consumer behaviour, marketing, consumer attitudes, relationship marketing
Editorial Board
Editors-in-Chief
The Editors-in-Chief are Professor Steven D'Alessandro (Edith Cowan University) and Professor Jacqueline Eastman (Florida Gulf Coast University).
Associate Editors
In 2022, Professor Varsha Jain of MICA, India, was awarded the Associate Editor Award.
Rankings
According to the Australian Business Deans Council journal quality list, the journal was rated at the A level in 2022.
According to the Journal Citation Reports, the journal has a 2020 impact factor of 3.28, ranking it 100 out of 153 journals in the category "Business".
According to Research.com, the journal has a 2021–2022 CiteScore of 4.3.
JCB Reviewer of the Year Awards
Associate Editors
Editorial Review Board
Ad Hoc Reviewers
JCB Best Paper Awards
The winners of the 2021 Best Paper Award were selected by vote of the journal's Editorial and Advisory Boards.
References
External links
Business and management journals
Academic journals established in 2001
Bimonthly journals
Wiley (publisher) academic journals
English-language journals
Consumer behaviour | Journal of Consumer Behaviour | Biology | 352 |
19,491,492 | https://en.wikipedia.org/wiki/Tangent%20lines%20to%20circles | In Euclidean plane geometry, a tangent line to a circle is a line that touches the circle at exactly one point, never entering the circle's interior. Tangent lines to circles form the subject of several theorems, and play an important role in many geometrical constructions and proofs. Since the tangent line to a circle at a point is perpendicular to the radius to that point, theorems involving tangent lines often involve radial lines and orthogonal circles.
Tangent lines to one circle
A tangent line to a circle intersects the circle at a single point . For comparison, secant lines intersect a circle at two points, whereas another line may not intersect a circle at all. This property of tangent lines is preserved under many geometrical transformations, such as scalings, rotation, translations, inversions, and map projections. In technical language, these transformations do not change the incidence structure of the tangent line and circle, even though the line and circle may be deformed.
The radius of a circle is perpendicular to the tangent line through its endpoint on the circle's circumference. Conversely, the perpendicular to a radius through the same endpoint is a tangent line. The resulting geometrical figure of circle and tangent line has a reflection symmetry about the axis of the radius.
No tangent line can be drawn through a point within a circle, since any such line must be a secant line. However, two tangent lines can be drawn to a circle from a point outside of the circle. The geometrical figure of a circle and both tangent lines likewise has a reflection symmetry about the radial axis joining to the center point of the circle. Thus the lengths of the segments from to the two tangent points are equal. By the secant-tangent theorem, the square of this tangent length equals the power of the point P in the circle . This power equals the product of distances from to any two intersection points of the circle with a secant line passing through .
The tangent line and the tangent point have a conjugate relationship to one another, which has been generalized into the idea of pole points and polar lines. The same reciprocal relation exists between a point outside the circle and the secant line joining its two points of tangency.
If a point is exterior to a circle with center , and if the tangent lines from touch the circle at points and , then and are supplementary (sum to 180°).
If a chord is drawn from the tangency point of exterior point and then .
Cartesian equation
Suppose that the equation of the circle in Cartesian coordinates is with center at . Then the tangent line of the circle at has Cartesian equation
This can be proved by taking the implicit derivative of the circle.
Say that the circle has equation of and we are finding the slope of tangent line at where We begin by taking the implicit derivative with respect to :
Now that we have the slope of the tangent line, we can substitute the slope and the coordinate of the tangency point into the line equation .
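A short sketch of that calculation, assuming a circle (x − a)² + (y − b)² = r² (the article's own displayed equations are omitted above); implicit differentiation gives dy/dx = −(x − a)/(y − b), which is the slope used below. The helper name is illustrative.

```python
def tangent_line_at(a, b, r, x1, y1, tol=1e-9):
    """Return (slope, intercept) of the tangent to the circle
    (x - a)^2 + (y - b)^2 = r^2 at the on-circle point (x1, y1)."""
    assert abs((x1 - a) ** 2 + (y1 - b) ** 2 - r ** 2) < tol, "point not on circle"
    if abs(y1 - b) < tol:
        raise ValueError("tangent is vertical: x = %r" % x1)
    slope = -(x1 - a) / (y1 - b)          # implicit derivative of the circle
    intercept = y1 - slope * x1           # point-slope form y - y1 = m (x - x1)
    return slope, intercept

# Unit circle, tangent at (3/5, 4/5): slope -0.75, line y = -0.75 x + 1.25.
print(tangent_line_at(0.0, 0.0, 1.0, 0.6, 0.8))
```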
Compass and straightedge constructions
It is relatively straightforward to construct a line tangent to a circle at a point on the circumference of the circle:
A line is drawn from , the center of the circle, through the radial point ;
The line is the perpendicular line to .
Thales' theorem may be used to construct the tangent lines to a point external to the circle :
A circle is drawn centered on the midpoint of the line segment , having diameter , where is again the center of the circle .
The intersection points and of the circle and the new circle are the tangent points for lines passing through , by the following argument.
The line segments and are radii of the circle ; since both are inscribed in a semicircle, they are perpendicular to the line segments and , respectively. But only a tangent line is perpendicular to the radial line. Hence, the two lines from and passing through and are tangent to the circle .
Another method to construct the tangent lines to a point external to the circle using only a straightedge:
Draw any three different lines through the given point that intersect the circle twice.
Let be the six intersection points, with the same letter corresponding to the same line and the index 1 corresponding to the point closer to .
Let be the point where the lines and intersect,
Similarly for the lines and .
Draw a line through and .
This line meets the circle at two points, and .
The tangents are the lines and .
With analytic geometry
Let be a point of the circle with equation The tangent at has equation because lies on both the curves and is a normal vector of the line. The tangent intersects the -axis at point with
Conversely, if one starts with point then the two tangents through meet the circle at the two points with
Written in vector form:
If point lies not on the -axis: In the vector form one replaces by the distance and the unit base vectors by the orthogonal unit vectors Then the tangents through point touch the circle at the points
For no tangents exist.
For point lies on the circle and there is just one tangent with equation
In case of there are 2 tangents with equations
Relation to circle inversion: Equation describes the circle inversion of point
Relation to pole and polar: The polar of point has equation
Tangential polygons
A tangential polygon is a polygon each of whose sides is tangent to a particular circle, called its incircle. Every triangle is a tangential polygon, as is every regular polygon of any number of sides; in addition, for every number of polygon sides there are an infinite number of non-congruent tangential polygons.
Tangent quadrilateral theorem and inscribed circles
A tangential quadrilateral is a closed figure of four straight sides that are tangent to a given circle . Equivalently, the circle is inscribed in the quadrilateral . By the Pitot theorem, the sums of opposite sides of any such quadrilateral are equal, i.e.,
This conclusion follows from the equality of the tangent segments from the four vertices of the quadrilateral. Let the tangent points be denoted as (on segment ), (on segment ), (on segment ) and (on segment ). The symmetric tangent segments about each point of are equal:
But each side of the quadrilateral is composed of two such tangent segments
proving the theorem.
The converse is also true: a circle can be inscribed into every quadrilateral in which the lengths of opposite sides sum to the same value.
This theorem and its converse have various uses. For example, they show immediately that no rectangle can have an inscribed circle unless it is a square, and that every rhombus has an inscribed circle, whereas a general parallelogram does not.
Tangent lines to two circles
For two circles, there are generally four distinct lines that are tangent to both (bitangent) – if the two circles are outside each other – but in degenerate cases there may be any number between zero and four bitangent lines; these are addressed below. For two of these, the external tangent lines, the circles fall on the same side of the line; for the two others, the internal tangent lines, the circles fall on opposite sides of the line. The external tangent lines intersect in the external homothetic center, whereas the internal tangent lines intersect at the internal homothetic center. Both the external and internal homothetic centers lie on the line of centers (the line connecting the centers of the two circles), closer to the center of the smaller circle: the internal center is in the segment between the two circles, while the external center is not between the points, but rather outside, on the side of the center of the smaller circle. If the two circles have equal radius, there are still four bitangents, but the external tangent lines are parallel and there is no external center in the affine plane; in the projective plane, the external homothetic center lies at the point at infinity corresponding to the slope of these lines.
Outer tangent
The red line joining the points and is the outer tangent between the two circles. Given points , the points , can easily be calculated with help of the angle :
Here and denote the radii of the two circles and the angle can be computed using basic trigonometry. You have with
where atan2 is the two-argument arctangent.
The distances between the centers of the nearer and farther circles, and and the point where the two outer tangents of the two circles intersect (homothetic center), respectively can be found out using similarity as follows:
Here, can be or depending upon the need to find distances from the centers of the nearer or farther circle, and . is the distance between the centers of two circles.
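A sketch of the same computation in Python: the angle between the line of centres and each radius to a tangency point follows from the difference of the radii, which is the idea behind the atan2 formulation above. Function and variable names are illustrative, not taken from the article.

```python
import math

def outer_tangent_points(c1, r1, c2, r2):
    """Tangency points of the two outer tangents of circles (c1, r1) and (c2, r2).

    Returns a list of ((x3, y3), (x4, y4)) pairs, one pair per tangent line.
    Requires the distance between centres to be at least |r1 - r2|.
    """
    (x1, y1), (x2, y2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    gamma = math.atan2(y2 - y1, x2 - x1)   # direction of the line of centres
    alpha = math.acos((r1 - r2) / d)       # angle from that line to each radius

    tangents = []
    for sign in (+1.0, -1.0):
        theta = gamma + sign * alpha
        p3 = (x1 + r1 * math.cos(theta), y1 + r1 * math.sin(theta))
        p4 = (x2 + r2 * math.cos(theta), y2 + r2 * math.sin(theta))
        tangents.append((p3, p4))
    return tangents

# Two circles on the x-axis, radii 2 and 1, centres 10 apart.
for p3, p4 in outer_tangent_points((0, 0), 2.0, (10, 0), 1.0):
    print(p3, p4)
```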
Inner tangent
An inner tangent is a tangent that intersects the segment joining two circles' centers. Note that the inner tangent will not be defined for cases when the two circles overlap.
Construction
The bitangent lines can be constructed either by constructing the homothetic centers, as described at that article, and then constructing the tangent lines through the homothetic center that is tangent to one circle, by one of the methods described above. The resulting line will then be tangent to the other circle as well. Alternatively, the tangent lines and tangent points can be constructed more directly, as detailed below. Note that in degenerate cases these constructions break down; to simplify exposition this is not discussed in this section, but a form of the construction can work in limit cases (e.g., two circles tangent at one point).
Synthetic geometry
Let and be the centers of the two circles, and and let and be their radii, with ; in other words, circle is defined as the larger of the two circles. Two different methods may be used to construct the external and internal tangent lines.
External tangents
A new circle of radius is drawn centered on . Using the method above, two lines are drawn from that are tangent to this new circle. These lines are parallel to the desired tangent lines, because the situation corresponds to shrinking both circles and by a constant amount, , which shrinks to a point. Two radial lines may be drawn from the center through the tangent points on ; these intersect at the desired tangent points. The desired external tangent lines are the lines perpendicular to these radial lines at those tangent points, which may be constructed as described above.
Internal tangents
A new circle of radius is drawn centered on . Using the method above, two lines are drawn from that are tangent to this new circle. These lines are parallel to the desired tangent lines, because the situation corresponds to shrinking to a point while expanding by a constant amount, . Two radial lines may be drawn from the center through the tangent points on ; these intersect at the desired tangent points. The desired internal tangent lines are the lines perpendicular to these radial lines at those tangent points, which may be constructed as described above.
Analytic geometry
Let the circles have centres and with radius and respectively. Expressing a line by the equation with the normalization then a bitangent line satisfies:
Solving for by subtracting the first from the second yields
where
and
for the outer tangent or
for the inner tangent.
If is the distance from to we can normalize by
to simplify equation (1), resulting in the following system of equations:
solve these to get two solutions () for the two external tangent lines:
Geometrically this corresponds to computing the angle formed by the tangent lines and the line of centers, and then using that to rotate the equation for the line of centers to yield an equation for the tangent line. The angle is computed by computing the trigonometric functions of a right triangle whose vertices are the (external) homothetic center, a center of a circle, and a tangent point; the hypotenuse lies on the tangent line, the radius is opposite the angle, and the adjacent side lies on the line of centers.
is the unit vector pointing from to , while is where is the angle between the line of centers and a tangent line. is then (depending on the sign of , equivalently the direction of rotation), and the above equations are rotation of by using the rotation matrix:
is the tangent line to the right of the circles looking from to .
is the tangent line to the right of the circles looking from to .
The above assumes each circle has positive radius. If is positive and negative then will lie to the left of each line and to the right, and the two tangent lines will cross. In this way all four solutions are obtained. Switching signs of both radii switches and .
Vectors
In general the points of tangency and for the four lines tangent to two circles with centers and and radii and are given by solving the simultaneous equations:
These equations express that the tangent line, which is parallel to is perpendicular to the radii, and that the tangent points lie on their respective circles.
These are four quadratic equations in two two-dimensional vector variables, and in general position will have four pairs of solutions.
Degenerate cases
Two distinct circles may have between zero and four bitangent lines, depending on configuration; these can be classified in terms of the distance between the centers and the radii. If counted with multiplicity (counting a common tangent twice) there are zero, two, or four bitangent lines. Bitangent lines can also be generalized to circles with negative or zero radius. The degenerate cases and the multiplicities can also be understood in terms of limits of other configurations – e.g., a limit of two circles that almost touch, and moving one so that they touch, or a circle with small radius shrinking to a circle of zero radius.
If the circles are outside each other (), which is general position, there are four bitangents.
If they touch externally at one point () – have one point of external tangency – then they have two external bitangents and one internal bitangent, namely the common tangent line. This common tangent line has multiplicity two, as it separates the circles (one on the left, one on the right) for either orientation (direction).
If the circles intersect in two points (), then they have no internal bitangents and two external bitangents (they cannot be separated, because they intersect, hence no internal bitangents).
If the circles touch internally at one point () – have one point of internal tangency – then they have no internal bitangents and one external bitangent, namely the common tangent line, which has multiplicity two, as above.
If one circle is completely inside the other () then they have no bitangents, as a tangent line to the outer circle does not intersect the inner circle, or conversely a tangent line to the inner circle is a secant line to the outer circle.
Finally, if the two circles are identical, any tangent to the circle is a common tangent and hence (external) bitangent, so there is a circle's worth of bitangents.
Further, the notion of bitangent lines can be extended to circles with negative radius (the same locus of points, but considered "inside out"), in which case if the radii have opposite sign (one circle has negative radius and the other has positive radius) the external and internal homothetic centers and external and internal bitangents are switched, while if the radii have the same sign (both positive radii or both negative radii) "external" and "internal" have the same usual sense (switching one sign switches them, so switching both switches them back).
Bitangent lines can also be defined when one or both of the circles has radius zero. In this case the circle with radius zero is a double point, and thus any line passing through it intersects the point with multiplicity two, hence is "tangent". If one circle has radius zero, a bitangent line is simply a line tangent to the circle and passing through the point, and is counted with multiplicity two. If both circles have radius zero, then the bitangent line is the line they define, and is counted with multiplicity four.
Note that in these degenerate cases the external and internal homothetic center do generally still exist (the external center is at infinity if the radii are equal), except if the circles coincide, in which case the external center is not defined, or if both circles have radius zero, in which case the internal center is not defined.
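For ordinary, distinct circles of positive radius, the case analysis above reduces to comparing the centre distance with the sum and difference of the radii. The small helper below is an illustrative summary, not part of the article, and counts distinct bitangent lines (without multiplicity).

```python
def bitangent_count(d, r1, r2):
    """Number of distinct bitangent lines of two distinct circles with
    positive radii, given the centre distance d and radii r1, r2."""
    if d > r1 + r2:                  # circles outside each other
        return 4
    if d == r1 + r2:                 # externally tangent
        return 3
    if abs(r1 - r2) < d < r1 + r2:   # circles intersect in two points
        return 2
    if d == abs(r1 - r2):            # internally tangent
        return 1
    return 0                         # one circle strictly inside the other

print(bitangent_count(10, 2, 1))     # 4
print(bitangent_count(3, 2, 1))      # externally tangent -> 3
```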
Applications
Belt problem
The internal and external tangent lines are useful in solving the belt problem, which is to calculate the length of a belt or rope needed to fit snugly over two pulleys. If the belt is considered to be a mathematical line of negligible thickness, and if both pulleys are assumed to lie in exactly the same plane, the problem devolves to summing the lengths of the relevant tangent line segments with the lengths of circular arcs subtended by the belt. If the belt is wrapped about the wheels so as to cross, the interior tangent line segments are relevant. Conversely, if the belt is wrapped exteriorly around the pulleys, the exterior tangent line segments are relevant; this case is sometimes called the pulley problem.
Tangent lines to three circles: Monge's theorem
For three circles denoted by , , and , there are three pairs of circles (, , and ). Since each pair of circles has two homothetic centers, there are six homothetic centers altogether. Gaspard Monge showed in the early 19th century that these six points lie on four lines, each line having three collinear points.
Problem of Apollonius
Many special cases of Apollonius's problem involve finding a circle that is tangent to one or more lines. The simplest of these is to construct circles that are tangent to three given lines (the LLL problem). To solve this problem, the center of any such circle must lie on an angle bisector of any pair of the lines; there are two angle-bisecting lines for every intersection of two lines. The intersections of these angle bisectors give the centers of solution circles. There are four such circles in general, the inscribed circle of the triangle formed by the intersection of the three lines, and the three exscribed circles.
A general Apollonius problem can be transformed into the simpler problem of circle tangent to one circle and two parallel lines (itself a special case of the LLC special case). To accomplish this, it suffices to scale two of the three given circles until they just touch, i.e., are tangent. An inversion in their tangent point with respect to a circle of appropriate radius transforms the two touching given circles into two parallel lines, and the third given circle into another circle. Thus, the solutions may be found by sliding a circle of constant radius between two parallel lines until it contacts the transformed third circle. Re-inversion produces the corresponding solutions to the original problem.
Generalizations
The concept of a tangent line to one or more circles can be generalized in several ways. First, the conjugate relationship between tangent points and tangent lines can be generalized to pole points and polar lines, in which the pole points may be anywhere, not only on the circumference of the circle. Second, the union of two circles is a special (reducible) case of a quartic plane curve, and the external and internal tangent lines are the bitangents to this quartic curve. A generic quartic curve has 28 bitangents.
A third generalization considers tangent circles, rather than tangent lines; a tangent line can be considered as a tangent circle of infinite radius. In particular, the external tangent lines to two circles are limiting cases of a family of circles which are internally or externally tangent to both circles, while the internal tangent lines are limiting cases of a family of circles which are internally tangent to one and externally tangent to the other of the two circles.
In Möbius or inversive geometry, lines are viewed as circles through a point "at infinity" and for any line and any circle, there is a Möbius transformation which maps one to the other. In Möbius geometry, tangency between a line and a circle becomes a special case of tangency between two circles. This equivalence is extended further in Lie sphere geometry.
Radius and tangent line are perpendicular at a point of a circle, and hyperbolic-orthogonal at a point of the unit hyperbola.
The parametric representation of the unit hyperbola via radius vector is .
The derivative of points in the direction of tangent line at , and is
The radius and tangent are hyperbolic orthogonal at since and are reflections of each other in the asymptote of the unit hyperbola. When interpreted as split-complex numbers (where ), the two numbers satisfy
References
External links
Circles | Tangent lines to circles | Mathematics | 4,260 |
8,925,785 | https://en.wikipedia.org/wiki/HD%20Hyundai | HD Hyundai () is one of the largest South Korean conglomerates engaged in shipbuilding, heavy equipment, machinery, and the petroleum industry.
HD Hyundai started its shipbuilding business in a small village in Ulsan, South Korea, in 1972 and grew into a global heavy industries company. It is a major supplier in the heavy industries and energy sector, ranging from shipbuilding and marine engineering to oil refining, petrochemicals, and smart energy management businesses.
The group rebranded from Hyundai Heavy Industries Group (HHI Group) to 'HD Hyundai' in 2022 to mark its 50th anniversary.
Businesses
HD Hyundai operates three core businesses - shipbuilding, heavy equipment, and energy - through HD Korea Shipbuilding & Offshore Engineering, HD Hyundai XiteSolution, and HD Hyundai Oilbank.
HD Korea Shipbuilding & Offshore Engineering is a sub-holding company that controls the group's shipbuilding companies, including HD Hyundai Heavy Industries, HD Hyundai Samho, and HD Hyundai Mipo.
HD Hyundai XiteSolution is another sub-holding company that oversees heavy equipment business, having HD Hyundai Infracore and HD Hyundai Construction Equipment as subsidiaries.
HD Hyundai Oilbank is one of the four major oil refiners in South Korea, along with SK Energy, GS Caltex, and S-Oil.
Affiliates
The subsidiaries of HD Hyundai Group are as follows:
Marine
HD Korea Shipbuilding & Offshore Engineering
HD Hyundai Heavy Industries
HD Hyundai Mipo
HD Hyundai Samho
HD Hyundai Marine Solution
HD Hyundai Engineering & Technology
Avikus
Energy
HD Hyundai Oilbank
HD Hyundai Chemical
HD Hyundai & Shell Base Oil
HD Hyundai OCI
HD Hyundai Cosmo
HD Hyundai Electric
HD Hyundai Energy Solutions
Industrial
HD Hyundai XiteSolution
HD Hyundai Construction Equipment
HD Hyundai Infracore
HD Hyundai Robotics
Support and Service
Ulsan HD Football Club
Hotel SEAMARQ
See also
Asan Medical Center
Munhwa Ilbo
Ulsan HD FC
References
External links
Conglomerate companies of South Korea
Chaebol
Hyundai
Engine manufacturers of South Korea
Automotive transmission makers
Forklift truck manufacturers
Truck manufacturers of South Korea
Electrical generation engine manufacturers
Gas engine manufacturers
Diesel engine manufacturers
Marine engine manufacturers
Photovoltaics manufacturers
Electrical equipment manufacturers
Electrical engineering companies of South Korea
Electric transformer manufacturers
Construction equipment manufacturers of South Korea
Companies in the KOSPI 200 | HD Hyundai | Engineering | 450 |
5,165 | https://en.wikipedia.org/wiki/Country | A country is a distinct part of the world, such as a state, nation, or other political entity. When referring to a specific polity, the term "country" may refer to a sovereign state, states with limited recognition, constituent country, or a dependent territory. Most sovereign states, but not all countries, are members of the United Nations. There is no universal agreement on the number of "countries" in the world since several states have disputed sovereignty status, limited recognition and a number of non-sovereign entities are commonly considered countries.
The definition and usage of the word "country" are flexible and have changed over time. The Economist wrote in 2010 that "any attempt to find a clear definition of a country soon runs into a thicket of exceptions and anomalies."
Areas much smaller than a political entity may be referred to as a "country", such as the West Country in England, "big sky country" (used in various contexts of the American West), "coal country" (used to describe coal-mining regions), or simply "the country" (used to describe a rural area). The term "country" is also used as a qualifier descriptively, such as country music or country living.
Etymology
The word country comes from Old French , which derives from Vulgar Latin () ("(land) lying opposite"; "(land) spread before"), derived from ("against, opposite"). It most likely entered the English language after the Franco-Norman invasion during the 11th century.
Definition of a country
In English the word has increasingly become associated with political divisions, so that one sense, associated with the indefinite article – "a country" – is now frequently applied as a synonym for a state or a former sovereign state. It may also be used as a synonym for "nation". Taking as examples Canada, Sri Lanka, and Yugoslavia, cultural anthropologist Clifford Geertz wrote in 1997 that "it is clear that the relationships between 'country' and 'nation' are so different from one [place] to the next as to be impossible to fold into a dichotomous opposition as they are into a promiscuous fusion."
Areas much smaller than a political state may be referred to as countries, such as the West Country in England, "big sky country" (used in various contexts of the American West), "coal country" (used to describe coal-mining regions in several sovereign states) and many other terms. The word "country" is also used for the sense of native sovereign territory, such as the widespread use of Indian country in the United States.
The term "country" in English may also be wielded to describe rural areas, or used in the form "countryside." Raymond Williams, a Welsh scholar, wrote in 1975:
The unclear definition of "country" in modern English was further commented upon by philosopher Simon Keller:
Melissa Lucashenko, an Aboriginal Australian writer, expressed the difficulty of defining "country" in a 2005 essay, "Unsettlement":
Statehood
When referring to a specific polity, the term "country" may refer to a sovereign state, states with limited recognition, constituent country, or a dependent territory. A sovereign state is a political entity that has supreme legitimate authority over a part of the world. There is no universal agreement on the number of "countries" in the world since several states have disputed sovereignty status, and a number of non-sovereign entities are commonly called countries. No definition is binding on all the members of the community of nations on the criteria for statehood. State practice relating to the recognition of a country typically falls somewhere between the declaratory and constitutive approaches. International law defines sovereign states as having a permanent population, defined territory, a government not under another, and the capacity to interact with other states.
The declarative theory outlined in the 1933 Montevideo Convention describes a state in Article 1 as:
Having a permanent population
Having a defined territory
Having a government
Having the ability to enter into relations with other states
The Montevideo Convention in Article 3 implies that a sovereign state can still be a sovereign state even if no other countries recognise that it exists. As a restatement of customary international law, the Montevideo Convention merely codified existing legal norms and its principles, and therefore does not apply merely to the signatories of international organizations (such as the United Nations), but to all subjects of international law as a whole. A similar opinion has been expressed by the European Economic Community, reiterated by the European Union, in the principal statement of its Badinter Committee, and by James Crawford, Challis Professor of International Law and later a judge of the International Court of Justice.
According to the constitutive theory a state is a legal entity of international law if, and only if, it is recognised as sovereign by at least one other country. Because of this, new states could not immediately become part of the international community or be bound by international law, and recognised nations did not have to respect international law in their dealings with them. In 1912, L. F. L. Oppenheim said the following, regarding constitutive theory:
In 1976 the Organisation of African Unity defined state recognition as:
Some countries, such as Taiwan, Sahrawi Republic and Kosovo have disputed sovereignty and/or limited recognition among some countries. Some sovereign states are unions of separate polities, each of which may also be considered a country in its own right, called constituent countries. The Danish Realm consists of Denmark proper, the Faroe Islands, and Greenland. The Kingdom of the Netherlands consists of the Netherlands proper, Aruba, Curaçao, and Sint Maarten. The United Kingdom consists of England, Scotland, Wales, and Northern Ireland.
Dependent territories are the territories of a sovereign state that are outside of its proper territory. These include the overseas territories of New Zealand, the dependencies of Norway, the British Overseas Territories and Crown Dependencies, the territories of the United States, the external territories of Australia, the special administrative regions of China, the autonomous regions of the Danish Realm, Åland, Overseas France, and the Caribbean Netherlands. Some dependent territories are treated as a separate "country of origin" in international trade, such as Hong Kong, Greenland, and Macau.
Identification
Symbols of a country may incorporate cultural, religious or political symbols of any nation that the country includes. Many categories of symbols can be seen in flags, coats of arms, or seals.
Name
Most countries have a long name and a short name. The long name is typically used in formal contexts and often describes the country's form of government. The short name is the country's common name by which it is typically identified. The International Organization for Standardization maintains a list of country codes as part of ISO 3166 to designate each country with a two-letter country code. The name of a country can hold cultural and diplomatic significance. Upper Volta changed its name to Burkina Faso to reflect the end of French colonization, and the name of North Macedonia was disputed for years due to a conflict with the similarly named Macedonia region in Greece. The ISO 3166-1 standard currently comprises 249 countries, 193 of which are sovereign states that are members of the United Nations.
Flags
Originally, flags representing a country would generally be the personal flag of its rulers; however, over time, the practice of using personal banners as flags of places was abandoned in favor of flags that had some significance to the nation, often its patron saint. Early examples of these were the maritime republics such as Genoa which could be said to have a national flag as early as the 12th century. However, these were still mostly used in the context of marine identification.
Although some flags date back earlier, widespread use of flags outside of military or naval context begins only with the rise of the idea of the nation state at the end of the 18th century and particularly are a product of the Age of Revolution. Revolutions such as those in France and America called for people to begin thinking of themselves as citizens as opposed to subjects under a king, and thus necessitated flags that represented the collective citizenry, not just the power and right of a ruling family. With nationalism becoming common across Europe in the 19th century, national flags came to represent most of the states of Europe. Flags also began fostering a sense of unity between different peoples, such as the Union Jack representing a union between England and Scotland, or began to represent unity between nations in a perceived shared struggle, for example, the Pan-Slavic colors or later Pan-Arab colors.
As Europeans colonized significant portions of the world, they exported ideas of nationhood and national symbols, including flags, with the adoption of a flag becoming seen as integral to the nation-building process. Political change, social reform, and revolutions combined with a growing sense of nationhood among ordinary people in the 19th and 20th centuries led to the birth of new nations and flags around the globe. With so many flags being created, interest in these designs began to develop and the study of flags, vexillology, at both professional and amateur levels, emerged. After World War II, Western vexillology went through a phase of rapid development, with many research facilities and publications being established.
National anthems
A national anthem is a patriotic musical composition symbolizing and evoking eulogies of the history and traditions of a country or nation. Though the custom of an officially adopted national anthem became popular only in the 19th century, some national anthems predate this period, often existing as patriotic songs long before designation as national anthem. Several countries remain without an official national anthem. In these cases, there are established de facto anthems played at sporting events or diplomatic receptions. These include the United Kingdom ("God Save the King") and Sweden (). Some sovereign states that are made up of multiple countries or constituencies have associated musical compositions for each of them (such as with the United Kingdom, Russia, and the Soviet Union). These are sometimes referred to as national anthems even though they are not sovereign states (for example, "Hen Wlad Fy Nhadau" is used for Wales, part of the United Kingdom).
Other symbols
Coats of arms or national emblems
Seals or stamps
National mottos
National colors
Patriotism
A positive emotional connection to a country a person belongs to is called patriotism. Patriotism is a sense of love for, devotion to, and sense of attachment to one's country. This attachment can be a combination of many different feelings, and language relating to one's homeland, including ethnic, cultural, political, or historical aspects. It encompasses a set of concepts closely related to nationalism, mostly civic nationalism and sometimes cultural nationalism.
Economy
Several organizations seek to identify trends to produce economy country classifications. Countries are often distinguished as developing countries or developed countries.
The United Nations Department of Economic and Social Affairs annually produces the World Economic Situation and Prospects report, which classifies states as developed countries, economies in transition, or developing countries. The report classifies country development based on per capita gross national income (GNI). The UN identifies subgroups within broad categories based on geographical location or ad hoc criteria. The UN outlines the geographical regions for developing economies like Africa, East Asia, South Asia, Western Asia, Latin America, and the Caribbean. The 2019 report recognizes only developed countries in North America, Europe, Asia, and the Pacific. The majority of economies in transition and developing countries are found in Africa, Asia, Latin America, and the Caribbean.
The World Bank also classifies countries based on GNI per capita. The World Bank Atlas method classifies countries as low-income economies, lower-middle-income economies, upper-middle-income economies, or high-income economies. For the 2020 fiscal year, the World Bank defines low-income economies as countries with a GNI per capita of $1,025 or less in 2018; lower-middle-income economies as countries with a GNI per capita between $1,026 and $3,995; upper-middle-income economies as countries with a GNI per capita between $3,996 and $12,375; and high-income economies as countries with a GNI per capita of $12,376 or more.
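As a purely illustrative sketch, the fiscal-year-2020 thresholds quoted above can be expressed as a small classification helper; the function name is hypothetical and the cut-offs apply to 2018 GNI per capita under the Atlas method.

```python
def world_bank_income_group(gni_per_capita_usd):
    """Classify an economy by 2018 GNI per capita (Atlas method),
    using the fiscal-year-2020 thresholds quoted above."""
    if gni_per_capita_usd <= 1025:
        return "low-income"
    elif gni_per_capita_usd <= 3995:
        return "lower-middle-income"
    elif gni_per_capita_usd <= 12375:
        return "upper-middle-income"
    else:
        return "high-income"

print(world_bank_income_group(900))     # low-income
print(world_bank_income_group(15000))   # high-income
```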
It also identifies regional trends. The World Bank defines its regions as East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, North America, South Asia, and Sub-Saharan Africa. Lastly, the World Bank distinguishes countries based on its operational policies. The three categories include International Development Association (IDA) countries, International Bank for Reconstruction and Development (IBRD) countries, and Blend countries.
See also
Country (identity)
Lists by country
List of former sovereign states
Lists of sovereign states and dependent territories
List of sovereign states and dependent territories by continent
List of transcontinental countries
Micronation
Quasi-state
Notes
References
Works cited
Further reading
Defining what makes a country The Economist
External links
The CIA World Factbook
Country Studies from the United States Library of Congress
Foreign Information by Country and Country & Territory Guides from GovPubs at UCB Libraries
United Nations statistics division
Human geography | Country | Environmental_science | 2,664 |
74,671,923 | https://en.wikipedia.org/wiki/Kaunis%20Koti | Kaunis Koti () was the first interior design magazine in Finland which existed between 1948 and 1971.
History and profile
Kaunis Koti was established in 1948 as the first professional Finnish magazine on interior design. Its first issue featured homes of Rut Bryk and Tapio Wirkkala. Later issues also published articles on homes of leading figures, including that of architect Jorma Järvi. The magazine came out 4 to 8 times a year. It adopted a rational modernist approach towards home decoration.
Eila Jokela was among the editors-in-chief of the magazine. Cover pages of Kaunis Koti mostly featured scenes from everyday life, nature and people. The magazine published informative advertisements, and the first commercials for home pools appeared in the magazine in 1966. It folded in 1971 when it merged with another interior design magazine Avotakka.
References
1948 establishments in Finland
1971 disestablishments in Finland
Defunct magazines published in Finland
Design magazines
Finnish-language magazines
Magazines established in 1948
Magazines disestablished in 1971 | Kaunis Koti | Engineering | 211 |
27,049,342 | https://en.wikipedia.org/wiki/Willem%20van%20Biljon | Willem van Biljon (born 1961) is an entrepreneur and technologist born, raised and educated in South Africa.
Van Biljon graduated from the University of Cape Town with a degree in Computer Science.
He held engineering and research positions at LinkData, the Institute for Applied Computer Science and the National Research Institute for Mathematical Sciences.
Van Biljon co-founded Mosaic Software. Mosaic built the Postilion payment system, the first high-end payment transaction switch for commodity hardware and operating systems (Windows). Mosaic's investors included GE and Paul Maritz. The company became one of the top three payment processing software vendors in the world and was sold in 2004 to S1 Corp.
Van Biljon worked for Amazon.com where he, along with Chris Pinkham and Christopher Brown, led the team that developed Amazon's Elastic Compute Cloud (EC2). Van Biljon built the business plan for the service and was responsible for product management and marketing for the public cloud service.
In 2006, van Biljon left Amazon Web Services and later started a venture with Chris Pinkham. The company, Nimbula, was focused on cloud computing software and was funded by Sequoia Capital and Accel Partners. In March 2013, Nimbula was acquired by Oracle Corporation.
Van Biljon co-authored seven patents in cloud computing, including "Managing Communications Between Computing Nodes" and "Managing Execution of Programs by Multiple Computing Systems".
Publications
Hirsch, M, SR Schach, and WR van Biljon, "High-Level Debugging Systems for Pascal: Interpreter versus Compiler," Quaest. Informaticae 3 (3), pp 9–13, August 1987.
Van Biljon WR, "A geographic database system", Proceedings Auto Carto 8, Baltimore USA, pp 689–700, March 1987.
Van Biljon WR, "Towards a fuzzy mathematical model of data quality in a GIS", Proceedings EDIS '87 Conference, Pretoria SA, 11 pp, September 1987.
Van Biljon, WR, DA Sewry, and MA Mulders. "Register allocation in a pattern matching code generator." Software: Practice & Experience, 17(8):521–531, August 1987.
Van Biljon, WR, "Extending Petri Nets for Specifying Man-Machine Dialogues", Int. J. Man-Machine Studies, Vol. 28, pp 437–455. 1988.
References
1961 births
Living people
People from Pretoria
Businesspeople from Cape Town
Afrikaner people
University of Cape Town alumni
Amazon (company) people | Willem van Biljon | Technology | 534 |
25,668,599 | https://en.wikipedia.org/wiki/Pentazenium | In chemistry, the pentazenium cation (also known as pentanitrogen) is a positively-charged polyatomic ion with the chemical formula and structure . Together with solid nitrogen polymers and the azide anion, it is one of only three poly-nitrogen species obtained in bulk quantities.
History
Within the High Energy Density Matter research program, run by the U.S. Air Force since 1986, systematic attempts to approach polynitrogen compounds began in 1998, when Air Force Research Laboratory at Edwards AFB became interested in researching alternatives to the highly toxic hydrazine-based rocket fuel and simultaneously funded several such proposals. Karl O. Christe, then, a senior investigator at AFRL, chose to attempt building linear out of and , based on the proposed bond structure:
The reaction succeeded, and was created in sufficient quantities to be fully characterized by NMR, IR and Raman spectroscopy in 1999. The salt was highly explosive, but when was replaced by , a stronger Lewis acid, much more stable was produced, shock-resistant and thermally stable up to 60–70 °C. This made bulk quantities, easy handling, and X-ray crystal structure analysis possible.
N5+ had in fact been predicted by ab initio calculations, as a member of the dicyanamide isoelectronic series, by Pyykkö and Runeberg in 1991, and this earlier prediction was cited in Christe's 1999 report.
Preparation
Reaction of and in dry HF at −78 °C is the only known method so far:
Chemistry
is capable of oxidizing water, NO, and , but not or ; its electron affinity is 10.44 eV (1018.4 kJ/mol). For this reason, must be prepared and handled in a dry environment:
Due to stability of the fluoroantimonate, it is used as the precursor for all other known salts, typically accomplished by metathesis reactions in non-aqueous solvents such as HF, , , or , where suitable hexafluoroantimonates are insoluble:
The most stable salts of decompose when heated to 50–60 °C: , , and , while the most unstable salts that were obtained and studied, and were extremely shock and temperature sensitive, exploding in solutions as dilute as 0.5 mmol. A number of salts, such as fluoride, azide, nitrate, or perchlorate, cannot be formed.
Structure and bonding
In valence bond theory, pentazenium can be described by six resonance structures:
,
where the last three structures have smaller contributions to the overall structure because they have less favorable formal charge states than the first three.
According to both ab initio calculations and the experimental X-ray structure, the cation is planar, symmetric, and approximately V-shaped, with bond angles 111° at the central atom (angle N2–N3–N4) and 168° at the second and fourth atoms (angles N1–N2–N3 and N3–N4–N5). The bond lengths for N1–N2 and N4–N5 are 1.10 Å and the bond lengths N2–N3 and N3–N4 are 1.30 Å.
See also
Pentazole
Azide
Pentazenium tetraazidoborate
References
Cations
Nitrogen
Explosive chemicals | Pentazenium | Physics,Chemistry | 702 |
250,344 | https://en.wikipedia.org/wiki/Norman%20Lockyer | Sir Joseph Norman Lockyer (17 May 1836 – 16 August 1920) was an English scientist and astronomer. Along with the French scientist Pierre Janssen, he is credited with discovering the gas helium. Lockyer also is remembered for being the founder and first editor of the influential journal Nature.
Biography
Lockyer was born in Rugby, Warwickshire. His early introduction to science was through his father, who was a pioneer of the electric telegraph. After a conventional schooling supplemented by travel in Switzerland and France, he worked for some years as a civil servant in the British War Office. He settled in Wimbledon, South London after marrying Winifred James, who helped translate at least four French scientific works into English.
He was a keen amateur astronomer with a particular interest in the Sun. In 1885 he became the world's first professor of astronomical physics at the Royal College of Science, South Kensington, now part of Imperial College. At the college, the Solar Physics Observatory was built for him and here he directed research until 1913.
In the 1860s Lockyer became fascinated by electromagnetic spectroscopy as an analytical tool for determining the composition of heavenly bodies. He conducted his research from his new home in West Hampstead, with a -inch telescope which he had already used in Wimbledon.
In 1868 a prominent yellow line was observed in a spectrum taken near the edge of the Sun. Its wavelength was about 588 nm, slightly less than the so-called "D" lines of sodium. The line could not be explained as due to any material known at the time, and so it was suggested by Lockyer, after he had observed it from London, that the yellow line was caused by an unknown solar element. He named this element helium after the Greek word 'Helios' meaning 'sun'. An observation of the new yellow line had been made earlier by Janssen at the 18 August 1868 solar eclipse, and because their papers reached the French academy on the same day, he and Lockyer usually are awarded joint credit for helium's discovery. Terrestrial helium was found about 27 years later by the Scottish chemist William Ramsay. In his work on the identification of helium, Lockyer collaborated with the noted chemist Edward Frankland.
To facilitate the transmission of ideas between scientific disciplines, Lockyer established the general science journal Nature in 1869. He was elected as a member of the American Philosophical Society in 1874. He remained its editor until shortly before his death.
Lockyer led eight expeditions to observe solar eclipses, for example in 1870 to Sicily, in 1871 to India, and in 1898 to India.
Lockyer is among the pioneers of archaeoastronomy. Travelling in Greece in 1890, he noticed the east–west orientation of many temples; in Egypt he found an orientation of temples to sunrise at midsummer and towards Sirius. Assuming that the Heel Stone of Stonehenge was oriented to sunrise at midsummer, he calculated the construction of the monument to have taken place in 1680 BC. Radiocarbon dating in 1952 gave a date of 1800 BC. He also confirmed the alignment of the Parthenon on the rising point of the Pleiades and did extensive work on the solar and stellar alignments of Egyptian temples and their dating, presented in his book The Dawn of Astronomy.
Lockyer's first wife Winifred née James died in 1879. They had six sons and two daughters in all. In 1903, Lockyer married again, to the suffragist Thomazine Mary Brodhurst (née Browne). After his retirement in 1913, Lockyer established an observatory near his home in Salcombe Regis near Sidmouth, Devon. Originally known as the Hill Observatory, the site was renamed the Norman Lockyer Observatory after his death and directed by his fifth son William J.S. Lockyer. For a time the observatory was part of the University of Exeter, but it is now owned by the East Devon District Council and run by the Norman Lockyer Observatory Society. The Norman Lockyer Chair in Astrophysics at the University of Exeter is currently held by Professor Tim Naylor, a member of the Astrophysics group there which studies star formation and extrasolar planets. Naylor was the lead scientist for the eSTAR Project.
Lockyer died at his home in Salcombe Regis in 1920, and was buried there in the churchyard of St Peter and St Mary.
Publications
(1868–94)
Questions on Astronomy (1870)
(1873)
(1873)
(1878)
(1878)
Report to the Committee on Solar Physics on the Basic Lines Common to Spots and Prominences (1880)
(1887)
(1887)
(1890)
Penrose, F.C., (communicated by Joseph Norman Lockyer), The Orientation of Greek Temples, Nature, v.48, n.1228, 11 May 1893, pp. 42–43
(1894)
Norman Lockyer; William Rutherford (1896). The Rules of Golf: Being the St. Andrews Rules for the Game. Macmillan & Co.
(1897)
Recent and Coming Eclipses (1900)
(1900)
(1903)
Stonehenge and Other British Stone Monuments Astronomically Considered (1906; second edition, 1909)
(1907)
(1909)
(1910)
Honours and awards
Fellow of the Royal Society (1869)
Rumford Medal, Royal Society of London (1874)
Janssen Medal, Paris Academy of Sciences (1889)
Knight Commander of the Order of the Bath (1897)
President, British Association (1903 – 1904)
The crater Lockyer on the Moon and the crater Lockyer on Mars are both named after him, as is Norman Lockyer Island in Nunavut, Canada.
References
Further reading
- A biography of Lockyer
External links
Norman Lockyer Observatory & James Lockyer Planetarium
Archives of the Norman Lockyer Observatory (University of Exeter)
Norman Lockyer Observatory radio station in Sidmouth
Certificate of candidacy for Lockyer's election to the Royal Society
Brief biography of Lockyer by Chris Plicht
Prof. Tim Naylor, Norman Lockyer Professor of Astrophysics
Astrophysics Group, University of Exeter
The 1871 solar eclipse
1836 births
1920 deaths
People from Rugby, Warwickshire
Discoverers of chemical elements
Fellows of the Royal Society
English science writers
Knights Commander of the Order of the Bath
19th-century English astronomers
20th-century English astronomers
Helium
Spectroscopists
Nature (journal) editors | Norman Lockyer | Physics,Chemistry | 1,267 |
1,034,567 | https://en.wikipedia.org/wiki/Hexadecane | Hexadecane (also called cetane) is an alkane hydrocarbon with the chemical formula C16H34. Hexadecane consists of a chain of 16 carbon atoms, with three hydrogen atoms bonded to the two end carbon atoms, and two hydrogens bonded to each of the 14 other carbon atoms.
Cetane number
Cetane is often used as a shorthand for cetane number, a measure of the combustion of diesel fuel. Cetane ignites very easily under compression; for this reason, it is assigned a cetane number of 100, and serves as a reference for other fuel mixtures.
Hexadecyl radical
Hexadecyl is an alkyl radical of carbon and hydrogen derived from hexadecane, with formula C16H33 and with mass 225.433, occurring especially in cetyl alcohol. It confers strong hydrophobicity on molecules containing it. Carboplatin modified with hexadecyl and polyethylene glycol has increased liposolubility and PEGylation, and has been proposed to be useful in chemotherapy, specifically for non-small-cell lung cancer.
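As a quick arithmetic check on the formulas above, the following minimal Python sketch (illustrative only; it uses rounded standard atomic weights, so the result differs in the last decimals from the figure quoted above) reproduces the molar masses of hexadecane and the hexadecyl radical:

# Approximate molar masses (g/mol) from rounded standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008}

def molar_mass(counts):
    # counts is a dict such as {"C": 16, "H": 34}
    return sum(ATOMIC_WEIGHT[element] * n for element, n in counts.items())

print(round(molar_mass({"C": 16, "H": 34}), 2))  # hexadecane C16H34: ~226.45
print(round(molar_mass({"C": 16, "H": 33}), 2))  # hexadecyl  C16H33: ~225.44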
Hexadecyl was used from 1982 for radiolabelling, and this continues to be useful, for example for radiolabelling exosomes and hydrogels,
and for positron emission tomography.
Hexadecyl platelet-activating factor has profound effects on the lung, and hexadecyl glyceryl ether participates in the biosynthesis of plasmalogens.
See also
Cetane index
Isocetane
Higher alkanes
References
Cited sources
External links
Vapor pressure and liquid density calculation
Technique to determine hexadecane transfer
Alkanes | Hexadecane | Chemistry | 365 |
271,430 | https://en.wikipedia.org/wiki/Computational%20neuroscience | Computational neuroscience (also known as theoretical neuroscience or mathematical neuroscience) is a branch of neuroscience which employs mathematics, computer science, theoretical analysis and abstractions of the brain to understand the principles that govern the development, structure, physiology and cognitive abilities of the nervous system.
Computational neuroscience employs computational simulations to validate and solve mathematical models, and so can be seen as a sub-field of theoretical neuroscience; however, the two fields are often synonymous. The term mathematical neuroscience is also used sometimes, to stress the quantitative nature of the field.
Computational neuroscience focuses on the description of biologically plausible neurons (and neural systems) and their physiology and dynamics, and it is therefore not directly concerned with biologically unrealistic models used in connectionism, control theory, cybernetics, quantitative psychology, machine learning, artificial neural networks, artificial intelligence and computational learning theory;
although mutual inspiration exists and sometimes there is no strict limit between fields, with model abstraction in computational neuroscience depending on research scope and the granularity at which biological entities are analyzed.
Models in theoretical neuroscience are aimed at capturing the essential features of the biological system at multiple spatial-temporal scales, from membrane currents, and chemical coupling via network oscillations, columnar and topographic architecture, nuclei, all the way up to psychological faculties like memory, learning and behavior. These computational models frame hypotheses that can be directly tested by biological or psychological experiments.
History
The term 'computational neuroscience' was introduced by Eric L. Schwartz, who organized a conference, held in 1985 in Carmel, California, at the request of the Systems Development Foundation to provide a summary of the current status of a field which until that point was referred to by a variety of names, such as neural modeling, brain theory and neural networks. The proceedings of this definitional meeting were published in 1990 as the book Computational Neuroscience. The first of the annual open international meetings focused on Computational Neuroscience was organized by James M. Bower and John Miller in San Francisco, California in 1989. The first graduate educational program in computational neuroscience was organized as the Computational and Neural Systems Ph.D. program at the California Institute of Technology in 1985.
The early historical roots of the field can be traced to the work of people including Louis Lapicque, Hodgkin & Huxley, Hubel and Wiesel, and David Marr. Lapicque introduced the integrate and fire model of the neuron in a seminal article published in 1907, a model still popular for artificial neural networks studies because of its simplicity (see a recent review).
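As an illustration of why Lapicque's integrate-and-fire model remains popular for its simplicity, the following Python sketch simulates a leaky integrate-and-fire neuron; the parameter values are arbitrary choices for the example, not taken from any particular study:

import numpy as np

def simulate_lif(current, dt=0.1, tau_m=10.0, v_rest=-65.0, v_reset=-65.0,
                 v_thresh=-50.0, r_m=10.0):
    # Leaky integrate-and-fire: tau_m * dV/dt = -(V - v_rest) + r_m * I(t).
    # current is an array of injected current values; returns the voltage trace
    # and the spike times (in units of dt, here milliseconds).
    v = np.full(len(current), v_rest)
    spike_times = []
    for t in range(1, len(current)):
        dv = (-(v[t - 1] - v_rest) + r_m * current[t - 1]) * dt / tau_m
        v[t] = v[t - 1] + dv
        if v[t] >= v_thresh:          # threshold crossing: emit a spike ...
            spike_times.append(t * dt)
            v[t] = v_reset            # ... and reset the membrane potential
    return v, spike_times

# A constant suprathreshold input produces a regular spike train.
trace, spikes = simulate_lif(np.full(2000, 2.0))
print(len(spikes), "spikes; first at", spikes[0], "ms")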
About 40 years later, Hodgkin and Huxley developed the voltage clamp and created the first biophysical model of the action potential. Hubel and Wiesel discovered that neurons in the primary visual cortex, the first cortical area to process information coming from the retina, have oriented receptive fields and are organized in columns. David Marr's work focused on the interactions between neurons, suggesting computational approaches to the study of how functional groups of neurons within the hippocampus and neocortex interact, store, process, and transmit information. Computational modeling of biophysically realistic neurons and dendrites began with the work of Wilfrid Rall, with the first multicompartmental model using cable theory.
Major topics
Research in computational neuroscience can be roughly categorized into several lines of inquiry. Most computational neuroscientists collaborate closely with experimentalists in analyzing novel data and synthesizing new models of biological phenomena.
Single-neuron modeling
Even a single neuron has complex biophysical characteristics and can perform computations. Hodgkin and Huxley's original model only employed two voltage-sensitive currents (voltage-sensitive ion channels are glycoprotein molecules which extend through the lipid bilayer, allowing ions to traverse under certain conditions through the axolemma), the fast-acting sodium and the inward-rectifying potassium. Though successful in predicting the timing and qualitative features of the action potential, it nevertheless failed to predict a number of important features such as adaptation and shunting. Scientists now believe that there are a wide variety of voltage-sensitive currents, and the implications of the differing dynamics, modulations, and sensitivity of these currents are an important topic of computational neuroscience.
The computational functions of complex dendrites are also under intense investigation. There is a large body of literature regarding how different currents interact with geometric properties of neurons.
There are many software packages, such as GENESIS and NEURON, that allow rapid and systematic in silico modeling of realistic neurons. Blue Brain, a project founded by Henry Markram from the École Polytechnique Fédérale de Lausanne, aims to construct a biophysically detailed simulation of a cortical column on the Blue Gene supercomputer.
Modeling the richness of biophysical properties on the single-neuron scale can supply mechanisms that serve as the building blocks for network dynamics. However, detailed neuron descriptions are computationally expensive and this computing cost can limit the pursuit of realistic network investigations, where many neurons need to be simulated. As a result, researchers that study large neural circuits typically represent each neuron and synapse with an artificially simple model, ignoring much of the biological detail. Hence there is a drive to produce simplified neuron models that can retain significant biological fidelity at a low computational overhead. Algorithms have been developed to produce faithful, faster running, simplified surrogate neuron models from computationally expensive, detailed neuron models.
Modeling Neuron-glia interactions
Glial cells participate significantly in the regulation of neuronal activity at both the cellular and the network level. Modeling this interaction helps to clarify the potassium cycle, which is important for maintaining homeostasis and preventing epileptic seizures. Modeling also reveals the role of glial protrusions, which can in some cases penetrate the synaptic cleft to interfere with synaptic transmission and thus control synaptic communication.
Development, axonal patterning, and guidance
Computational neuroscience aims to address a wide array of questions, including: How do axons and dendrites form during development? How do axons know where to target and how to reach these targets? How do neurons migrate to the proper position in the central and peripheral systems? How do synapses form? We know from molecular biology that distinct parts of the nervous system release distinct chemical cues, from growth factors to hormones that modulate and influence the growth and development of functional connections between neurons.
Theoretical investigations into the formation and patterning of synaptic connection and morphology are still nascent. One hypothesis that has recently garnered some attention is the minimal wiring hypothesis, which postulates that the formation of axons and dendrites effectively minimizes resource allocation while maintaining maximal information storage.
Sensory processing
Early models on sensory processing understood within a theoretical framework are credited to Horace Barlow. Somewhat similar to the minimal wiring hypothesis described in the preceding section, Barlow understood the processing of the early sensory systems to be a form of efficient coding, where the neurons encoded information which minimized the number of spikes. Experimental and computational work have since supported this hypothesis in one form or another. For the example of visual processing, efficient coding is manifested in the forms of efficient spatial coding, color coding, temporal/motion coding, stereo coding, and combinations of them.
Further along the visual pathway, even the efficiently coded visual information is too much for the capacity of the information bottleneck, the visual attentional bottleneck. A subsequent theory, V1 Saliency Hypothesis (V1SH), has been developed on exogenous attentional selection of a fraction of visual input for further processing, guided by a bottom-up saliency map in the primary visual cortex.
Current research in sensory processing is divided among a biophysical modeling of different subsystems and a more theoretical modeling of perception. Current models of perception have suggested that the brain performs some form of Bayesian inference and integration of different sensory information in generating our perception of the physical world.
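A standard toy example of this Bayesian view is optimal combination of two noisy Gaussian cues, in which each cue is weighted by its reliability (inverse variance); the numbers in the Python sketch below are invented purely for illustration:

def integrate_cues(mu1, var1, mu2, var2):
    # Reliability-weighted (inverse-variance) combination of two Gaussian estimates.
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)   # the combined estimate is at least as precise as either cue
    return mu, var

# A visual cue places an object at 10 cm (sd 1 cm), a haptic cue at 12 cm (sd 2 cm).
mu, var = integrate_cues(10.0, 1.0 ** 2, 12.0, 2.0 ** 2)
print(mu, var ** 0.5)   # about 10.4 cm, with sd about 0.89 cm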
Motor control
Many models of the way the brain controls movement have been developed. This includes models of processing in the brain such as the cerebellum's role for error correction, skill learning in motor cortex and the basal ganglia, or the control of the vestibulo ocular reflex. This also includes many normative models, such as those of the Bayesian or optimal control flavor which are built on the idea that the brain efficiently solves its problems.
Memory and synaptic plasticity
Earlier models of memory are primarily based on the postulates of Hebbian learning. Biologically relevant models such as Hopfield net have been developed to address the properties of associative (also known as "content-addressable") style of memory that occur in biological systems. These attempts are primarily focusing on the formation of medium- and long-term memory, localizing in the hippocampus.
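A minimal sketch of a Hopfield-style associative memory with Hebbian weights is shown below in Python; the stored patterns are random examples, and the network size and noise level are arbitrary choices for the illustration:

import numpy as np

rng = np.random.default_rng(0)
n_units = 100
patterns = rng.choice([-1, 1], size=(3, n_units))     # three random binary memories

# Hebbian learning rule: W_ij proportional to the sum over patterns of x_i * x_j.
weights = patterns.T @ patterns / n_units
np.fill_diagonal(weights, 0.0)                        # no self-connections

def recall(cue, sweeps=20):
    # Asynchronous updates: each unit takes the sign of its weighted input.
    state = cue.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n_units):
            state[i] = 1 if weights[i] @ state >= 0 else -1
    return state

# Corrupt 20% of the first memory and let the dynamics clean it up.
noisy = patterns[0].copy()
noisy[rng.choice(n_units, size=20, replace=False)] *= -1
print(recall(noisy) @ patterns[0] / n_units)          # overlap close to 1.0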
One of the major problems in neurophysiological memory is how it is maintained and changed through multiple time scales. Unstable synapses are easy to train but also prone to stochastic disruption. Stable synapses forget less easily, but they are also harder to consolidate. It is likely that computational tools will contribute greatly to our understanding of how synapses function and change in relation to external stimulus in the coming decades.
Behaviors of networks
Biological neurons are connected to each other in a complex, recurrent fashion. These connections are, unlike most artificial neural networks, sparse and usually specific. It is not known how information is transmitted through such sparsely connected networks, although specific areas of the brain, such as the visual cortex, are understood in some detail. It is also unknown what the computational functions of these specific connectivity patterns are, if any.
The interactions of neurons in a small network can be often reduced to simple models such as the Ising model. The statistical mechanics of such simple systems are well-characterized theoretically. Some recent evidence suggests that dynamics of arbitrary neuronal networks can be reduced to pairwise interactions. It is not known, however, whether such descriptive dynamics impart any important computational function. With the emergence of two-photon microscopy and calcium imaging, we now have powerful experimental methods with which to test the new theories regarding neuronal networks.
In some cases the complex interactions between inhibitory and excitatory neurons can be simplified using mean-field theory, which gives rise to the population model of neural networks. While many neurotheorists prefer such models with reduced complexity, others argue that uncovering structural-functional relations depends on including as much neuronal and network structure as possible. Models of this type are typically built in large simulation platforms like GENESIS or NEURON. There have been some attempts to provide unified methods that bridge and integrate these levels of complexity.
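A compact example of such a population-level (mean-field) description is a Wilson–Cowan-style pair of coupled excitatory and inhibitory rate equations; the sketch below uses illustrative parameter values rather than ones fitted to data:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate_ei(T=200.0, dt=0.1, tau=10.0, w_ee=12.0, w_ei=10.0,
                w_ie=10.0, w_ii=2.0, input_e=2.0, input_i=0.0):
    # Wilson-Cowan-style rates: tau * dr/dt = -r + sigmoid(recurrent input + drive).
    steps = int(T / dt)
    r_e = np.zeros(steps)
    r_i = np.zeros(steps)
    for t in range(1, steps):
        dr_e = -r_e[t - 1] + sigmoid(w_ee * r_e[t - 1] - w_ei * r_i[t - 1] + input_e)
        dr_i = -r_i[t - 1] + sigmoid(w_ie * r_e[t - 1] - w_ii * r_i[t - 1] + input_i)
        r_e[t] = r_e[t - 1] + dt * dr_e / tau
        r_i[t] = r_i[t - 1] + dt * dr_i / tau
    return r_e, r_i

r_e, r_i = simulate_ei()
print(r_e[-1], r_i[-1])   # the population rates settle to a fixed point or oscillate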
Visual attention, identification, and categorization
Visual attention can be described as a set of mechanisms that limit some processing to a subset of incoming stimuli. Attentional mechanisms shape what we see and what we can act upon. They allow for concurrent selection of some (preferably, relevant) information and inhibition of other information. In order to have a more concrete specification of the mechanism underlying visual attention and the binding of features, a number of computational models have been proposed aiming to explain psychophysical findings. In general, all models postulate the existence of a saliency or priority map for registering the potentially interesting areas of the retinal input, and a gating mechanism for reducing the amount of incoming visual information, so that the limited computational resources of the brain can handle it.
An example theory that is being extensively tested behaviorally and physiologically is the V1 Saliency Hypothesis that a bottom-up saliency map is created in the primary visual cortex to guide attention exogenously. Computational neuroscience provides a mathematical framework for studying the mechanisms involved in brain function and allows complete simulation and prediction of neuropsychological syndromes.
Cognition, discrimination, and learning
Computational modeling of higher cognitive functions has only recently begun. Experimental data comes primarily from single-unit recording in primates. The frontal lobe and parietal lobe function as integrators of information from multiple sensory modalities. There are some tentative ideas regarding how simple mutually inhibitory functional circuits in these areas may carry out biologically relevant computation.
The brain seems to be able to discriminate and adapt particularly well in certain contexts. For instance, human beings seem to have an enormous capacity for memorizing and recognizing faces. One of the key goals of computational neuroscience is to dissect how biological systems carry out these complex computations efficiently and potentially replicate these processes in building intelligent machines.
The brain's large-scale organizational principles are illuminated by many fields, including biology, psychology, and clinical practice. Integrative neuroscience attempts to consolidate these observations through unified descriptive models and databases of behavioral measures and recordings. These are the bases for some quantitative modeling of large-scale brain activity.
The Computational Representational Understanding of Mind (CRUM) is another attempt at modeling human cognition through simulated processes like acquired rule-based systems in decision making and the manipulation of visual representations in decision making.
Consciousness
One of the ultimate goals of psychology/neuroscience is to be able to explain the everyday experience of conscious life. Francis Crick, Giulio Tononi and Christof Koch made some attempts to formulate consistent frameworks for future work in neural correlates of consciousness (NCC), though much of the work in this field remains speculative.
Computational clinical neuroscience
Computational clinical neuroscience is a field that brings together experts in neuroscience, neurology, psychiatry, decision sciences and computational modeling to quantitatively define and investigate problems in neurological and psychiatric diseases, and to train scientists and clinicians that wish to apply these models to diagnosis and treatment.
Predictive computational neuroscience
Predictive computational neuroscience is a recent field that combines signal processing, neuroscience, clinical data and machine learning to predict brain states during coma or anesthesia. For example, it is possible to anticipate deep brain states using the EEG signal. These states can be used to anticipate the concentration of hypnotic drug to administer to the patient.
Computational Psychiatry
Computational psychiatry is an emerging field that brings together experts in machine learning, neuroscience, neurology, psychiatry, and psychology to provide an understanding of psychiatric disorders.
Technology
Neuromorphic computing
A neuromorphic computer/chip is any device that uses physical artificial neurons (made from silicon) to do computations (see: neuromorphic computing, physical neural network). One of the advantages of using a physical model computer such as this is that it takes the computational load off the processor (in the sense that the structural and some of the functional elements don't have to be programmed, since they are in hardware). In recent times, neuromorphic technology has been used to build supercomputers which are used in international neuroscience collaborations. Examples include the Human Brain Project SpiNNaker supercomputer and the BrainScaleS computer.
See also
Action potential
Biological neuron models
Bayesian brain
Brain simulation
Computational anatomy
Connectomics
Differentiable programming
Electrophysiology
FitzHugh–Nagumo model
Goldman equation
Hodgkin–Huxley model
Information theory
Mathematical model
Nonlinear dynamics
Neural coding
Neural decoding
Neural oscillation
Neuroinformatics
Neuromimetic intelligence
Neuroplasticity
Neurophysiology
Systems neuroscience
Theoretical biology
Theta model
References
Bibliography
See also
Software
BRIAN, a Python based simulator
Budapest Reference Connectome, web based 3D visualization tool to browse connections in the human brain
Emergent, neural simulation software.
GENESIS, a general neural simulation system.
NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons.
External links
Journals
Journal of Mathematical Neuroscience
Journal of Computational Neuroscience
Neural Computation
Cognitive Neurodynamics
Frontiers in Computational Neuroscience
PLoS Computational Biology
Frontiers in Neuroinformatics
Conferences
Computational and Systems Neuroscience (COSYNE) – a computational neuroscience meeting with a systems neuroscience focus.
Annual Computational Neuroscience Meeting (CNS) – a yearly computational neuroscience meeting.
Neural Information Processing Systems (NIPS)– a leading annual conference covering mostly machine learning.
Cognitive Computational Neuroscience (CCN) – a computational neuroscience meeting focusing on computational models capable of cognitive tasks.
International Conference on Cognitive Neurodynamics (ICCN) – a yearly conference.
UK Mathematical Neurosciences Meeting– a yearly conference, focused on mathematical aspects.
Bernstein Conference on Computational Neuroscience (BCCN) – a yearly computational neuroscience conference.
AREADNE Conferences– a biennial meeting that includes theoretical and experimental results.
Websites
Encyclopedia of Computational Neuroscience, part of Scholarpedia, an online expert curated encyclopedia on computational neuroscience and dynamical systems
Computational fields of study
Computational neuroscience
Mathematical and theoretical biology | Computational neuroscience | Mathematics,Technology | 3,417 |
39,427,827 | https://en.wikipedia.org/wiki/Talking%20About%20Life | Talking About Life: Conversations on Astrobiology is a non-fiction book edited by astronomer Chris Impey that consists of interviews with three dozen leading experts on the subject of astrobiology. The subject matter ranges from the nature and limits of life on Earth to the current search for exoplanets and the prospects of intelligent life in the universe. The book was published as a hardcover by Cambridge University Press in 2010.
Summary
Talking About Life: Conversations on Astrobiology is a book of interviews between astronomer Chris Impey and leading researchers in the effort to understand life on Earth and discover habitable worlds and biology beyond Earth. The book is a snapshot of a fast-moving interdisciplinary field, with a conversational tone, where researchers describe what they do in their own words and convey the excitement of addressing fundamental questions about the universe.
The first section has a range of perspectives on the general topic of life in the universe. Timothy Ferris, noted writer and journalist, talks about being involved in the planning for the Voyager record and on astrobiology in the popular culture. Steven Dick and Iris Fry talk about the history of the search for life in the universe and the history of theories of the origin of life on Earth, respectively. Ann Druyan discusses her long association with Carl Sagan and her work in science education. Neil Tyson, Director of the Hayden Planetarium, talks about our halting progress in space travel and the prospects for venturing to find life among the stars. George “Pinky” Nelson gives an astronaut’s perspective on life on Earth and elsewhere, and Steve Benner and William Bains speculate on altering the architecture of life on Earth and on how strange life beyond Earth may be.
The second section of the book turns to the history of life on Earth. Roger Buick talks about the earliest evidence for biology and John Baross talks about its possible origin on the sea floor. Lynn Rothschild talks about extremophiles and the extraordinary modes of adaptation of terrestrial organisms. Joe Kirschvink presents the evidence for Snowball Earth and the challenges that a restless planet presents for biology. Andrew Knoll and Simon Conway Morris discuss natural selection and the contrasting themes of contingency and convergence. As two examples of "alien" intelligence on Earth, Roger Hanlon talks about his field work with octopuses and Lori Marino talks about her research on dolphins.
Turning to the Solar System, the next section of the book looks at the prospects for life on our doorstep. Chris McKay and Peter Smith talk about Mars and the potential for extant microbial life under the surface layer. Speculating about more exotic habitats for life, David Grinspoon considers Venus and Jupiter’s moon Io, then Jonathan Lunine considers Saturn’s large moon Titan. Carolyn Porco notes the surprising results from the Cassini mission, including the habitability of Enceladus. The biological potential of meteorites is the subject of the interviews with Laurie Leshin and Jesuit Guy Consolmagno, who note the presence of the complex building blocks of life in this primordial material from the outer Solar System.
The next section of the book covers the fast-moving research on planets around other stars. Alan Boss discusses the theory of extrasolar planets or exoplanets, and ace planet-hunters Debra Fischer and Geoff Marcy talk about their properties and the technical innovations that led to their discovery. Sara Seager summarizes efforts to characterize exoplanets in detail, and David Charbonneau talks about the power of the transit method for detecting low mass and Earth-like planets. Last, Vicky Meadows describes how planet models will be used to predict the spectral biomarkers that could indirectly indicate the presence of microbial life on an exoplanet.
Talking About Life ends with the search for intelligent life (SETI) and speculation about the role of life in the universe. Jill Tarter and Seth Shostak describe the strategies that have been used to listen for artificial signal from technological civilizations far from Earth for over fifty years, so far without success. Ray Kurzweil talks about postbiological evolution and Nick Bostrom talks about transhumanism and the odds that the entire universe, and our sense of it and ourselves, is a simulation by a super-intelligent civilization. Next, Paul Davies and Martin Rees talk about fine-tuning and the anthropic principle, which each indicate that biology has a privileged role in the cosmos. To round out the book with a humanistic perspective, Ben Bova talks about our future in space and Jennifer Michael Hecht rekindles our delight in alien yet familiar life on Earth.
References
External links
Cambridge University Press
Amazon Author Page
Chris Impey's Website
2010 non-fiction books
Books by Chris Impey
Astronomy books
Astrobiology books
British non-fiction books
2010 in biology
Books of interviews | Talking About Life | Astronomy | 977 |
77,787,873 | https://en.wikipedia.org/wiki/Civil%20basilica | In antiquity, a civil basilica was a grand public building with a semi-sacred significance, serving a variety of purposes. These structures were commonly used for court hearings, public assemblies, and, at times, for commercial activities such as shops and financial transactions.
The architectural style of the basilica, known for its expansive covered space, originated in Ancient Greek architecture and was later adopted and enhanced in Roman architecture, becoming a distinctive feature of Roman cities.
Unlike Christian basilicas, ancient basilicas did not serve religious functions.
Origins and etymology
The word "basilica" derived from the Latin term basilica, originates from two Greek elements: basileus, meaning "king", and the feminine adjective suffix -ikê. The full Greek expression is (basilika oikia), which translates to "royal hall". This was traditionally a place where the king or his representatives would grant public audiences, dispense justice, and serve as a venue for public assemblies.
The concept is related to the Greek stoa (), a public covered space designed to shelter various activities from the weather. Over time, the stoa acquired a more specialized function, such as the stoa basileios in Athens, which served as the seat of the archon-king. These structures typically had an entrance enclosed at the back by a solid wall and opened onto the public space (the agora) at the front, featuring a portico with a colonnade.
Roman Basilica
The Roman basilica, which emerged in the 2nd century BC, was inspired by and named after the Greek stoa basileios. The development of the Roman basilica followed a path similar to that of the Greek stoa. Initially designed as a public space providing shelter from the weather, the basilica evolved to serve specific functions, particularly in the administration of justice. All Roman basilicas were used for legal proceedings. For example, in Rome, the tribunes of the plebs held their hearings in the Basilica Porcia, while the Centumvirs court met in the Basilica Julia. By the early 2nd century BC, this type of building, which provided a spacious and sheltered open area, became a significant feature in Roman cities, with most courts across the Empire utilizing it.
Every well-developed Roman city had a basilica, typically situated next to the forum. Some basilicas were associated with shops (tabernae), which opened either onto the exterior (as seen with the Basilica Aemilia, or tabernae novae) or onto the interior (as seen with the Basilica Julia). These shops may have been used by bankers and pawnbrokers.
Typical plan
The typical floor plan of a Roman basilica is rectangular, with at least one end featuring an apse, a semi-circular or polygonal recess often used as a court or to house a statue of the Roman emperor. A basilica with an apse at each end is known as a double-apse basilica. The apses, or exedras, may be incorporated within the rectangular plan or extended outward, as seen in the Basilica Ulpia.
The interior of a basilica is divided into multiple naves by rows of single or double columns. The central nave, known as the spatium medium, is the widest and extends nearly the full length of the rectangular plan. It is flanked by side naves—one on each side for basilicas with three naves, or two on each side for those with five naves. These side naves are narrower, and sometimes lower, than the central nave but are of equal length. The interior space may be covered by either a wooden framed ceiling or a vaulted ceiling supported by pillars. The central nave is typically taller than the side naves, allowing for the installation of windows in the upper part of the walls, which provides natural light to the interior. In larger basilicas, the ground floor arcade is often complemented by a second or even a third level of colonnades that support the windowed walls. The side naves are sometimes topped with an additional story, creating a gallery that overlooks the central space.
Basilicas in Rome
The initial basilicas constructed in Rome during the 2nd century BC were influenced by Greek architectural models, reflecting the impact of Roman campaigns in Macedonia and Syria. The first small basilica was built on the Roman Forum, later occupied by the southern section of the Basilica Aemilia. This earliest structure, dating from the end of the 3rd century BC, is not specifically named but is referred to as a basilica by ancient authors.
Between 184 and 170 BC, the Porcia, Aemilia, and Sempronia basilicas were constructed around the Forum, each named after the censor who commissioned its construction. These basilicas were adorned with various artworks obtained from conquered territories. By the mid-5th century AD, Polemius Silvius listed eleven basilicas in Rome, highlighting the architectural and cultural significance of these structures in the city.
Christian basilicas
The floor plan of the Roman civil basilica served as a model for the construction of the first Christian churches in late Antiquity. This influence is evident in the continued use of the term "basilica" to designate certain churches from the time of Constantine onward. Today, the term "basilica" is still used for religious buildings of significant importance that, while not functioning as cathedrals, are granted special privileges.
References
Bibliography
Architecture
Ancient Greece
Ancient Rome | Civil basilica | Engineering | 1,085 |
33,007,044 | https://en.wikipedia.org/wiki/Indiaplaza | Indiaplaza was an Indian electronic commerce website. It was one of the pioneers in the online shopping space in India. Earlier known as Fabmart and then Fabmall, the company later acquired US-based online shopping firm Indiaplaza.com and rebranded itself as Indiaplaza.in in India, and as Indiaplaza.com in the United States, which were later merged into a single website.
History
In June 1999, K Vaitheeswaran and five of his friends including V S Sudhakar, Vipul Parekh, Hari Menon, V S Ramesh and Sundeep Thakran founded India's first online departmental store. The website Fabmart.com was launched in September 1999 which then offered only music CDs for sale. Between February and October 2000, the website introduced additional categories including books, movies, watches, and groceries. In February 2002, they launched their first offline grocery store in Bangalore, India.
Funding
Indiaplaza.com received angel funding from The Indigo Monsoon Group (IMG), a private investment firm based in the USA known for investing in Indian Internet and mobile domains. IMG's other investments include Sulekha.com, an online community for Indians integrating social media with local commerce, and EShakti.com, an online India-inspired and customized apparel retailer aimed at a global audience. In February 2011, Indiaplaza concluded a Series A funding deal of US$5 million from Indo-US Venture Partners (IUVP), which had previously invested in other internet companies such as Myntra and Snapdeal.
Despite securing these funding sources, Indiaplaza was unable to raise sufficient funding in 2012–2013, which meant that the company had to cease trading.
Product range
Indiaplaza offered a few thousand products online including books, CD-ROMs, cameras, mobile phones, apparel, jewelry, flowers, chocolates, watches, and food items.
Indiaplaza Golden Quill Book Award
The "Indiaplaza Golden Quill Book Awards" were instituted by Indiaplaza in 2008, to be conferred to an Indian author domiciled in India. The award was for an original full-length novel or work of fiction in English or a translation into English of an original full-length novel or work of fiction of any Indian language published in India in the previous calendar year.
See also
E-commerce
Online shopping
Electronic business
References
External links
Defunct official website
Internet properties established in 1999
Online retailers of India
1999 establishments in India
Indian websites
E-commerce | Indiaplaza | Technology | 528 |
1,177,781 | https://en.wikipedia.org/wiki/Terrell%20rotation | Terrell rotation or the Terrell effect is the visual distortion that a passing object would appear to undergo, according to the special theory of relativity, if it were travelling at a significant fraction of the speed of light. This behaviour was described independently by both Roger Penrose and James Edward Terrell. Penrose's article was submitted 29 July 1958 and published in January 1959. Terrell's article was submitted 22 June 1959 and published 15 November 1959. The general phenomenon was noted already in 1924 by Austrian physicist Anton Lampa.
This phenomenon was popularized by Victor Weisskopf in a Physics Today article.
Due to an early dispute about priority and correct attribution, the effect is also sometimes referred to as the Penrose–Terrell effect, the Terrell–Penrose effect or the Lampa–Terrell–Penrose effect, but not the Lampa effect.
Further detail
By symmetry, it is equivalent to the visual appearance of the object at rest as seen by a moving observer. Since the Lorentz transform does not depend on the acceleration, the visual appearance of the object depends only on the instantaneous velocity, and not the acceleration of the observer.
Terrell's and Penrose's papers pointed out that although special relativity appeared to describe an "observed contraction" in moving objects, these interpreted "observations" were not to be confused with the theory's literal predictions for the visible appearance of a moving object. Thanks to the differential timelag effects in signals reaching the observer from the object's different parts, a receding object would appear contracted, an approaching object would appear elongated (even under special relativity) and the geometry of a passing object would appear skewed, as if rotated. By R. Penrose: "the light from the trailing part reaches the observer from behind the sphere, which it can do since the sphere is continuously moving out of its way".
For images of passing objects, the apparent contraction of distances between points on the object's transverse surface could then be interpreted as being due to an apparent change in viewing angle, and the image of the object could be interpreted as appearing instead to be rotated. A previously popular description of special relativity's predictions, in which an observer sees a passing object to be contracted (for instance, from a sphere to a flattened ellipsoid), was wrong. A sphere maintains its circular outline since, as the sphere moves, light from further points of the Lorentz-contracted ellipsoid takes longer to reach the eye.
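The light-travel-time argument can be checked numerically. The following Python sketch treats only the simplified case of a rod moving along the line of sight (not the full three-dimensional rotation of a sphere): for each end of the rod it finds the emission time whose light reaches a stationary observer at a common reception time, and compares the resulting apparent length with the Lorentz-contracted length.

import math

C = 1.0   # units in which the speed of light is 1

def apparent_length(rest_length, beta, x_near=10.0, t_receive=0.0):
    # Rod on the positive x axis, observer at x = 0.
    # beta > 0 means the rod is receding, beta < 0 means it is approaching.
    gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
    contracted = rest_length / gamma                      # lab-frame (measured) length
    apparent_ends = []
    for x0 in (x_near, x_near + contracted):              # end positions at t = 0
        # Light emitted at t_e from x(t_e) = x0 + beta*t_e reaches the observer
        # when t_e + x(t_e)/C = t_receive; solve that linear equation for t_e.
        t_e = (t_receive - x0 / C) / (1.0 + beta)
        apparent_ends.append(x0 + beta * t_e)             # where that end appears to be
    return contracted, abs(apparent_ends[1] - apparent_ends[0])

for beta in (0.6, -0.6):
    contracted, seen = apparent_length(1.0, beta)
    print("receding" if beta > 0 else "approaching", contracted, round(seen, 3))
# The receding rod appears even shorter than its contracted length (0.8 -> 0.5),
# while the approaching rod appears longer than its rest length (0.8 -> 2.0).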
Terrell's and Penrose's papers prompted a number of follow-up papers, mostly in the American Journal of Physics, exploring the consequences of this correction. These papers pointed out that some existing discussions of special relativity were flawed and "explained" effects that the theory did not actually predict – while these papers did not change the actual mathematical structure of special relativity in any way, they did correct a misconception regarding the theory's predictions.
A representation of the Terrell effect can be seen in the physics simulator "A Slower Speed of Light," published by MIT.
See also
Length contraction
Stellar aberration
References and further reading
External links
A webpage explaining the Penrose-Terrell Effect
Extensive explanations and visualizations of the appearance of moving objects
Interactive simulation of the Penrose-Terrell Effect
Special relativity | Terrell rotation | Physics | 675 |
55,424 | https://en.wikipedia.org/wiki/Drive%20letter%20assignment | In computer data storage, drive letter assignment is the process of assigning alphabetical identifiers to volumes. Unlike the concept of UNIX mount points, where volumes are named and located arbitrarily in a single hierarchical namespace, drive letter assignment allows multiple highest-level namespaces. Drive letter assignment is thus a process of using letters to name the roots of the "forest" representing the file system; each volume holds an independent "tree" (or, for non-hierarchical file systems, an independent list of files).
Origin
The concept of drive letters, as used today, presumably owes its origins to IBM's VM family of operating systems, dating back to CP/CMS in 1967 (and its research predecessor CP-40), by way of Digital Research's (DRI) CP/M. The concept evolved through several steps:
CP/CMS uses drive letters to identify minidisks attached to a user session. A full file reference (pathname in today's parlance) consists of a filename, a filetype, and a disk letter called a filemode (e.g. A or B). Minidisks can correspond to physical disk drives, but more typically refer to logical drives, which are mapped automatically onto shared devices by the operating system as sets of virtual cylinders.
CP/CMS inspired numerous other operating systems, including the CP/M microcomputer operating system, which uses a drive letter to specify a physical storage device. Early versions of CP/M (and other microcomputer operating systems) implemented a flat file system on each disk drive, where a complete file reference consists of a drive letter, a colon, a filename (up to eight characters), a dot, and a filetype (three characters); for instance A:README.TXT. (This was the era of 8-inch floppy disks, where such small namespaces did not impose practical constraints.) This usage was influenced by the device prefixes used in Digital Equipment Corporation's (DEC) TOPS-10 operating system.
The drive letter syntax chosen for CP/M was inherited by Microsoft for its operating system MS-DOS by way of Seattle Computer Products' (SCP) 86-DOS, and thus also by IBM's OEM version PC DOS. Originally, drive letters always represented physical volumes, but support for logical volumes eventually appeared.
Through their designated position as DOS successor, the concept of drive letters was also inherited by OS/2 and the Microsoft Windows family.
The important capability of hierarchical directories within each drive letter was initially absent from these systems. This was a major feature of UNIX and other similar operating systems, where hard disk drives held thousands (rather than tens or hundreds) of files. Increasing microcomputer storage capacities led to their introduction, eventually followed by long filenames. In file systems lacking such naming mechanisms, drive letter assignment proved a useful, simple organizing principle.
Operating systems that use drive letter assignment
CP/M family
CP/M, MP/M, Concurrent CP/M, Concurrent DOS, FlexOS, 4680 OS, 4690 OS, S5-DOS/MT, Multiuser DOS, System Manager, REAL/32, REAL/NG, Personal CP/M, S5-DOS, DOS Plus
AMSDOS
DOS family
86-DOS, MS-DOS, PC DOS
DR DOS, Novell DOS, PalmDOS, OpenDOS
ROM-DOS
PTS-DOS, S/DOS
FreeDOS
PC-MOS/386
SISNE plus
GEMDOS, TOS, MiNT, MagiC, MultiTOS, EmuTOS
Atari DOS family
MSX-DOS
ANDOS, CSI-DOS, MK-DOS
GEOS
OS/2 (including eComStation and ArcaOS)
Windows family
Windows 9x family
Windows NT family
Xbox system software
ReactOS
Symbian OS
Hobbyist operating systems
SymbOS
TempleOS
Order of assignment
MS-DOS/PC DOS since version 5.0, and later operating systems, assign drive letters according to the following algorithm (a simplified code sketch of this ordering follows the list):
Assign the drive letter A: to the first floppy disk drive (drive 0), and B: to the second floppy disk drive (drive 1). If only one physical floppy is present, drive B: will be assigned to a phantom floppy drive mapped to the same physical drive and dynamically assigned to either A: or B: for easier floppy file operations. If no physical floppy drive is present, DOS 4.0 will assign both A: and B: to the non-existent drive, whereas DOS 5.0 and higher will invalidate these drive letters. If more than two physical floppy drives are present, DOS versions prior to 5.0 will assign subsequent drive letters, whereas DOS 5.0 and higher will remap these drives to higher drive letters at a later stage; see below.
Assign a drive letter to the first active primary partition recognized upon the first physical hard disk. DOS 5.0 and higher will ensure that it will become drive C:, so that the boot drive will either have drive A: or C:.
Assign subsequent drive letters to the first primary partition upon each successive physical hard disk drive (DOS versions prior to 5.0 will probe for only two physical hard disks, whereas DOS 5.0 and higher support eight physical hard disks).
Assign subsequent drive letters to every recognized logical partition present in the first extended partition, beginning with the first hard drive and proceeding through successive physical hard disk drives.
DOS 5.0 and higher: Assign drive letters to all remaining primary partitions, beginning with the first hard drive and proceeding through successive physical hard disk drives.
DOS 5.0 and higher: Assign drive letters to all physical floppy drives beyond the second physical floppy drive.
Assign subsequent drive letters to any block device drivers loaded in CONFIG.SYS via DEVICE statements, e.g. RAM disks.
Assign subsequent drive letters to any dynamically loaded drives via CONFIG.SYS INSTALL statements, in AUTOEXEC.BAT or later, i.e. additional optical disc drives (MSCDEX etc.), PCMCIA / PC Card drives, USB or Firewire drives, or network drives.
Only partitions of recognized partition types are assigned letters. In particular, "hidden partitions" (those with their type ID changed to an unrecognized value, usually by adding 10h) are not.
MS-DOS/PC DOS versions 4.0 and earlier assign letters to all of the floppy drives before considering hard drives, so a system with four floppy drives would call the first hard drive E:. Starting with DOS 5.0, the system ensures that drive C: is always a hard disk, even if the system has more than two physical floppy drives.
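The ordering described above can be illustrated with a small Python sketch; the data model (simple counts of primary and logical partitions per physical disk) is invented for the example, and device drivers, hidden partitions and CONFIG.SYS block devices are ignored:

from string import ascii_uppercase

def assign_drive_letters(floppy_count, disks):
    # disks: one dict per physical hard disk, e.g. {"primaries": 2, "logicals": 1},
    # where the first primary listed is taken to be the active one.
    letters = iter(ascii_uppercase)
    mapping = {}
    mapping["floppy 0"] = next(letters)                        # A:
    mapping["floppy 1"] = next(letters)                        # B: (phantom if absent)
    for i, disk in enumerate(disks):                           # active primaries first
        if disk["primaries"]:
            mapping["disk %d primary 0" % i] = next(letters)   # C: on the first disk
    for i, disk in enumerate(disks):                           # then logical drives
        for j in range(disk["logicals"]):
            mapping["disk %d logical %d" % (i, j)] = next(letters)
    for i, disk in enumerate(disks):                           # then remaining primaries
        for j in range(1, disk["primaries"]):
            mapping["disk %d primary %d" % (i, j)] = next(letters)
    for k in range(2, floppy_count):                           # extra floppy drives last
        mapping["floppy %d" % k] = next(letters)
    return mapping

for volume, letter in assign_drive_letters(1, [{"primaries": 2, "logicals": 2},
                                               {"primaries": 1, "logicals": 0}]).items():
    print(letter + ":", volume)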
Without deliberate remapping, drive letter assignments are typically fixed until the next reboot; however, Zenith MS-DOS 3.21 will update the drive letter assignments when resetting a drive. This may cause drive letters to change without a reboot if the partitioning of the hard disk has changed.
MS-DOS on the Apricot PC assigns letters to hard drives, starting with A:, before considering floppy drives. A system with two of each drive would call the hard drives A: and B:, and the floppies C: and D:.
On the Japanese PC-98, if the system is booted from floppy disk, the dedicated version of MS-DOS assigns letters to all floppy drives before considering hard drives; it does the opposite if it is booted from a hard drive, that is, if the OS was installed on the hard drive, MS-DOS would assign this drive as drive "A:" and a potentially existing floppy as drive "B:". The Japanese version of the Windows 95 SETUP program supports a special option /AT to enforce that Windows will be on drive C:.
Some versions of DOS do not assign the drive letter, beginning with C:, to the first active primary partition recognized upon the first physical hard disk, but on the first primary partition recognized of the first hard disk, even if it is not set active.
If there is more than one extended partition in a partition table, only the logical drives in the first recognized extended partition type are processed.
Some late versions of the DR-DOS IBMBIO.COM provide a preboot config structure, holding bit flags to select (beside others) between various drive letter assignment strategies. These strategies can be preselected by a user or OEM or be changed by a boot loader on the fly when launching DR-DOS. Under these issues, the boot drive can be different from A: or C: as well.
The drive letter order can depend on whether a given disk is managed by a boot-time driver or by a dynamically loaded driver. For example, if the second or third hard disk is of SCSI type and, on DOS, requires drivers loaded through the CONFIG.SYS file (e.g. the controller card does not offer on-board BIOS or using this BIOS is not practical), then the first SCSI primary partition will appear after all the IDE partitions on DOS. Therefore, DOS and for example OS/2 could have different drive letters, as OS/2 loads the SCSI driver earlier. A solution was not to use primary partitions on such hard disks.
In Windows NT and OS/2, the operating system uses the aforementioned algorithm to automatically assign letters to floppy disk drives, optical disc drives, the boot disk, and other recognized volumes that are not otherwise created by an administrator within the operating system. Volumes that are created within the operating system are manually specified, and some of the automatic drive letters can be changed. Unrecognized volumes are not assigned letters, and are usually left untouched by the operating system.
A common problem that occurs with the drive letter assignment is that the letter assigned to a network drive can interfere with the letter of a local volume (like a newly installed CD/DVD drive or a USB stick). For example, if the last local drive is drive D: and a network drive would have been assigned as E:, then a newly attached USB mass storage device would also be assigned drive E: causing loss of connectivity with either the network share or the USB device. Users with administrative privileges can assign drive letters manually to overcome this problem.
Another condition that can cause problems on Windows XP is when there are network drives defined, but in an error condition (as they would be on a laptop operating outside the network). Even when the unconnected network drive is not the next available drive letter, Windows XP may be unable to map a drive and this error may also prevent the mounting of the USB device.
Common assignments
Applying the scheme discussed above on a fairly modern Windows-based system typically results in the following drive letter assignments:
A: — Floppy disk drives, 3½″ or 5¼″, and possibly other types of disk drives, if present.
B: — Reserved for a second floppy drive (that was present on many PCs).
C: — First hard disk drive partition.
D: to Z: — Other disk partitions get labeled here. Windows assigns the next free drive letter to the next drive it encounters while enumerating the disk drives on the system. Drives can be partitioned, thereby creating more drive letters. This applies to MS-DOS, as well as all Windows operating systems. Windows offers other ways to change the drive letters, either through the Disk Management snap-in or diskpart. MS-DOS typically uses parameters on the line loading device drivers inside the CONFIG.SYS file.
Case-specific drive letters:
F: — First network drive if using Novell NetWare.
G: — "Google Drive File Stream" if using Google Drive.
H: — "Home" directory on a network server.
L: — Dynamically assigned load drive under Concurrent DOS, Multiuser DOS, System Manager and REAL/32.
M: — Drive letter for optionally memory drive MDISK under Concurrent DOS.
N:, O:, P: — Assignable floating drives under CP/M-86 4.x, Personal CP/M-86 2.x, DOS Plus 1.1-2.1 (via BDOS call 0Fh), a concept later extended to any unused drive letters under Concurrent DOS, Multiuser DOS, System Manager, REAL/32 and DR DOS up to 6.0.
Q: — Microsoft Office Click-to-Run virtualization.
U: — Unix-like unified filesystem with virtual directory \DEV for device files under MiNT, MagiC, and MultiTOS.
Z: — First network drive if using Banyan VINES, and the initial drive letter assignment for the virtual disk network in the DOSBox x86 emulator. It is also the first letter selected by Windows for network resources, as it automatically selects from Z: downwards. By default, Wine maps Z: to the root of the UNIX directory tree.
When there is no second physical floppy drive, drive B: can be used as a "virtual" floppy drive mapped onto the physical drive A:, whereby the user would be prompted to switch floppies every time a read or write was required to whichever was the least recently used of A: or B:. This allows for much of the functionality of two floppy drives on a computer that has only one. This concept of multiple drive letters sharing a single physical device (optionally with different "views" of it) is not limited to the first floppy drive, but can be utilized for other drives as well by setting up additional block devices for them with the standard DOS DRIVER.SYS in CONFIG.SYS.
Network drives are often assigned letters towards the end of the alphabet. This is often done to differentiate them from local drives: by using letters towards the end, it reduces the risk of an assignment conflict. It is especially true when the assignment is done automatically across a network (usually by a logon script).
In most DOS systems, it is not possible to have more than 26 mounted drives. Atari GEMDOS supports 16 drive letters A: to P: only. The PalmDOS PCMCIA driver stack supports drive letters 0:, 1:, 2:, ... to address PCMCIA drive slots.
Some Novell network drivers for DOS support up to 32 drive letters under compatible DOS versions. In addition, Novell DOS 7, OpenDOS 7.01, and DR-DOS 7.02 genuinely support a CONFIG.SYS LASTDRIVE=32 directive in order to allocate up to 32 drive letters, named A: to Z:, [:, \:, ]:, ^:, _: and `:. (DR-DOS 7.02-7.07 also supports HILASTDRIVE and LASTDRIVEHIGH directives in order to relocate drive structures into upper memory.) Some DOS application programs do not expect drive letters beyond Z: and will not work with them, therefore it is recommended to use them for special purposes or search drives.
JP Software's 4DOS command line processor supports drive letters beyond Z: in general, but since some of the letters clash with syntactical extensions of this command line processor, they need to be escaped in order to use them as drive letters.
Windows 9x (MS-DOS 7.0/MS-DOS 7.1) added support for LASTDRIVE=32 and LASTDRIVEHIGH=32 as well.
If access to more than 26 filesystems is required under Windows NT, Volume Mount Points must be used. However, it is possible to mount non-letter drives, such as 1:, 2:, or !:, using the command line SUBST utility in Windows XP or later (e.g. SUBST 1: C:\TEMP), but this is not officially supported and may break programs that assume that all drives are letters A: to Z:.
ASSIGN, JOIN and SUBST in DOS and Windows
Drive letters are not the only way of accessing different volumes. DOS offers a JOIN command that allows access to an assigned volume through an arbitrary directory, similar to the Unix mount command. It also offers a SUBST command which allows the assignment of a drive letter to a directory. One or both of these commands were removed in later systems like OS/2 or Windows NT, but starting with Windows 2000, both are again supported: The SUBST command exists as before, while JOIN's functionality is subsumed in LINKD (part of the Windows Resource Kit). In Windows Vista, the new command MKLINK can be used for this purpose. Also, Windows 2000 and later support mount points, accessible from the Control Panel.
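For illustration, the following commands show these mechanisms in use (the drive letter X: and the directory paths are arbitrary examples, and MKLINK is only available on Windows Vista and later):
REM map a directory as an additional drive letter
SUBST X: C:\Data\Projects
REM remove the substitution again
SUBST X: /D
REM create a directory symbolic link instead of a drive letter
MKLINK /D C:\Projects C:\Data\Projects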
Many operating systems originating from Digital Research provide means to implicitly assign substitute drives, called floating drives in DRI terminology, by using the CD/CHDIR command in the following syntax:
CD N:=C:\SUBDIR
DOS Plus supports this for drive letters N:, O:, and P:. This feature is also present in Concurrent DOS, Multiuser DOS, System Manager 7, and REAL/32, however, these systems extend the concept to all unused drive letters from A: to Z:, except for the reserved drive letter L:. DR DOS 3.31 - 6.0 (up to the 1992-11 updates with BDOS 6.7 only) also supports this including drive letter L:. This feature is not available under DR DOS 6.0 (1992 upgrade), PalmDOS 1.0, Novell DOS 7, OpenDOS 7.01, DR-DOS 7.02 and higher. Floating drives are implemented in the BDOS kernel, not in the command line shell, thus they can be used and assigned also from within applications when they use the "change directory" system call. However, most DOS applications are not aware of this extension and will consequently discard such directory paths as invalid. JP Software's command line interpreter 4DOS supports floating drives on operating systems also supporting it.
In a similar feature, Concurrent DOS, Multiuser DOS, System Manager and REAL/32 will dynamically assign a drive letter L: to the load path of a loaded application, thereby allowing applications to refer to files residing in their load directory under a standardized drive letter instead of under an absolute path. This load drive feature makes it easier to move software installations on and across disks without having to adapt paths to overlays, configuration files or user data stored in the load directory or subsequent directories.
(For similar reasons, the appendage to the environment block associated with loaded applications under DOS 3.0 (and higher) contains a reference to the load path of the executable as well, however, this consumes more resident memory, and to take advantage of it, support for it must be coded into the executable, whereas DRI's solution works with any kind of applications and is fully transparent to users as well.)
In some versions of DR-DOS, the load path contained in the appendage to the environment passed to drivers can be shortened to that of a temporary substitute drive (e.g. SUBST B: C:\DIR) through the INSTALL[HIGH]/LOADHIGH option /D[:loaddrive] (for B:TSR.COM instead of, say, C:\DIR\TSR.COM). This can be used to minimize a driver's effective memory footprint, if the executable is located in a deep subdirectory and the resident driver happens to not need its load path after installation any more.
See also
Drive mapping
Filename
net (command), a command in Microsoft Windows that can be used for viewing/controlling drive-letter assignments for network drives
Portable application
References
External links
Change Drive Letter in Windows 8
Tips for USB related drive letter issues
Windows architecture
DOS technology
Computer peripherals
Assignment operations
de:Laufwerk (Computer)#Laufwerksbuchstaben | Drive letter assignment | Technology | 4,095 |
5,730,170 | https://en.wikipedia.org/wiki/Trauzl%20lead%20block%20test | The Trauzl lead block test, also called the Trauzl test, or just Trauzl, is a test used to measure the strength of explosive materials. It was developed by Isidor Trauzl in 1885.
The test is performed by loading a 10-gram foil-wrapped sample of the explosive into a hole drilled into a lead block with specific dimensions and properties (a soft lead cylinder, 200 mm diameter and 200 mm high, with the hole 125 mm deep, and 25 mm diameter). The hole is then topped up with sand, and the sample is detonated electrically. After detonation, the volume increase of the cavity is measured. The result, given in cm3, is called the Trauzl number of the explosive.
The Trauzl test is not useful for some modern higher-powered explosives as their power often cracks or otherwise ruptures the lead block, leaving no hole to measure.
A variant of the test uses an aluminium block to avoid exposure of participants to lead-related hazards.
Examples
Explosive power of chemical explosives by Trauzl number:
Notes
Explosives engineering | Trauzl lead block test | Engineering | 232 |
25,459,420 | https://en.wikipedia.org/wiki/Tilt%20tray%20sorter | A tilt-tray sorter is a mechanical assembly similar to a conveyor belt but instead of a continuous belt, it consists of individual trays traveling in the same direction.
A tilt-tray sorter can be configured in an inline (AKA over/under) formation, or in a continuous-loop.
Items are loaded onto the passing trays at the front end of the sorter and travel towards a series of destinations on either side of the sorter. Items are loaded on to trays individually and their sort destination is determined in advance.
As the tray with an item approaches its destination the tray is tilted to slide the object into the chute. The empty tray will then return to the load section before it is loaded again with a new item.
A tilt-tray sorter is a continuous-loop sortation conveyor that uses a technique of tilting a tray at a chute to slide the object into the chute.
References
Industrial machinery | Tilt tray sorter | Engineering | 193 |
45,430,435 | https://en.wikipedia.org/wiki/Pitch%20clock | A pitch clock (also known as a pitch timer) is used in various baseball leagues to limit the amount of time a pitcher uses before he throws the ball to the hitter and/or limit the amount of time the hitter uses before he is prepared to hit.
Various baseball leagues and tournaments around the world have started using a pitch clock to speed up the pace of play. Major League Baseball (MLB) began using a pitch clock in the 2023 season, following a period of tests in MLB partner leagues, Minor League Baseball, and college baseball.
History
In college baseball, the Southeastern Conference experimented with using pitch clocks in 2010. Pitchers were given twenty seconds to throw the pitch, or a ball would be added to the count. Similarly, a batter stepping out of the batter's box with less than five seconds on the clock would be assessed an additional strike. After the 2010 season, the National Collegiate Athletic Association sought to make the pitch clocks mandatory, and instituted the rule for the 2011 college baseball season, but only when there were no runners on base.
Pitch clocks made their professional debut in the Arizona Fall League in 2014. On January 15, 2015, Major League Baseball (MLB) announced it would institute a 20-second pitch clock in Minor League Baseball for Double-A and Triple-A teams during the 2015 season. Pitchers were given twenty seconds to throw the pitch, with the penalty of a ball awarded to the batter if not followed. Along with other rule changes addressing the pace of play, the clocks contributed to a 12-minute reduction in game times at those levels between the 2014 and 2015 seasons, compared to the leagues that did not use the clock, which saw game times change from an increase of three minutes per game to a decrease in five minutes per game. Game times increased in 2016 and 2017, but were still faster than games in 2014. The independent Atlantic League began using a 12-second pitch clock.
Major League Baseball
MLB and the MLB Players Association (MLBPA) discussed the possibility of introducing the pitch clock at the major league level for the 2018 season. MLB opted against imposing it unilaterally, over the opposition of the MLBPA. MLB implemented a 20-second pitch clock in spring training games in 2019. The collective bargaining agreement reached to end the 2021–22 Major League Baseball lockout included the possibility of introducing a pitch clock as of the 2023 MLB season. Four active players, six persons appointed by MLB, and one umpire were formed into a Joint Competition Committee to review and recommend any changes to playing rules.
On September 8, 2022, MLB announced a set of rules changes that took effect in 2023, including the use of a pitch clock. Pitchers would have 15 seconds between pitches when there are no baserunners and 20 seconds when there is at least one baserunner. Additionally, the batter must be in the batter's box and ready to hit with at least eight seconds remaining on the clock, leaving him seven to twelve seconds to get set; otherwise an automatic strike will be called. The clock starts when the pitcher gets the ball and the catcher and batter are ready.
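The timing rules above amount to a small decision procedure; the following sketch illustrates it (the function names are illustrative, the eight-second batter deadline is an assumed figure, and this is not code from any official MLB system):
def pitch_clock_seconds(runners_on_base):
    # Length of the pitch timer under the 2023 rules.
    return 20 if runners_on_base else 15

def timing_violation(batter_ready_at, pitch_started_at, runners_on_base):
    # Return which side, if any, is charged with a violation; the batter is checked first.
    limit = pitch_clock_seconds(runners_on_base)
    if batter_ready_at > limit - 8:   # batter must be set with 8 seconds left (assumed figure)
        return "automatic strike (batter violation)"
    if pitch_started_at > limit:
        return "automatic ball (pitcher violation)"
    return None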
In addition to its primary use to time pitches, the clock is used to indicate the time remaining in a media timeout for commercials (usually between each half of an inning), and also to time the warmup period on the mound for a relief pitcher coming out of the bullpen. There are multiple clocks displayed throughout a major league stadium on the same timing system to allow full visibility of the pitch clock for players, coaches, umpires, press, and spectators throughout the venue. This also allows for implementation within the graphics of television broadcasts, as determined by broadcasters.
Marcus Stroman of the Chicago Cubs became the first pitcher to violate the pitch clock during the regular season, during the third inning of the 2023 opening day game against the Milwaukee Brewers. The Baltimore Orioles' Austin Hays was the first batter to receive a strike call due to a time infraction, while Rafael Devers of the Boston Red Sox was the first to be called for a strikeout.
The first 400 Major League Baseball games during the 2023 season were, on average, about 30 minutes shorter than the first 400 of the previous season. In addition, the standard deviation of game times was down significantly. The game length distribution had not been this consistent in many years. MLB postseason games in the first year of the pitch clock were 21 minutes shorter on average than postseason games in the previous year, with more runs and stolen bases.
In December 2023 it was reported that MLB's competition committee had approved a rule change reducing the pitch clock from 20 to 18 seconds with runners on base, beginning in the 2024 season.
Other leagues
The Japan Amateur Baseball Association (part of the Baseball Federation of Japan) which organizes most Japanese adult baseball outside Nippon Professional Baseball and its minor league teams, decided to adopt the pitch clock after MLB's success in 2023 Spring Training.
See also
References
External links
Pace of Play | Glossary — MLB.com
Baseball terminology
Baseball rules
Time measurement systems
Timers | Pitch clock | Physics | 996 |
3,289,827 | https://en.wikipedia.org/wiki/Copper%20loss | Copper loss is the term often given to heat produced by electrical currents in the conductors of transformer windings, or other electrical devices. Copper losses are an undesirable transfer of energy, as are core losses, which result from induced currents in adjacent components. The term is applied regardless of whether the windings are made of copper or another conductor, such as aluminium. Hence the term winding loss is often preferred. The term load loss is used in electricity delivery to describe the portion of the electricity lost between the generator and the consumer that is related to the load power (is proportional to the square thereof), as opposed to the no-load loss.
Calculations
Copper losses result from Joule heating and so are also referred to as "I squared R losses", in reference to Joule's First Law. This states that the energy lost each second, or power, increases as the square of the current through the windings and in proportion to the electrical resistance of the conductors:
P = I²R
where I is the current flowing in the conductor and R is the resistance of the conductor. With I in amperes and R in ohms, the calculated power loss P is given in watts.
Joule heating has a coefficient of performance of 1.0, meaning that every joule of electrical energy dissipated in the conductor is converted into one joule of heat. Therefore, the energy lost due to copper loss is:
E = I²Rt
where t is the time in seconds for which the current is maintained.
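As a worked example (the numbers are purely illustrative), a winding with a resistance of 0.5 Ω carrying 10 A dissipates 10² × 0.5 = 50 W, or 180,000 J of heat over one hour. A minimal sketch of the two formulas in Python:
def copper_loss_power(current_amperes, resistance_ohms):
    # Instantaneous I²R loss in watts.
    return current_amperes ** 2 * resistance_ohms

def copper_loss_energy(current_amperes, resistance_ohms, seconds):
    # Heat energy in joules dissipated over the given time.
    return copper_loss_power(current_amperes, resistance_ohms) * seconds

print(copper_loss_power(10, 0.5))        # 50.0 W
print(copper_loss_energy(10, 0.5, 3600)) # 180000.0 J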
Effect of frequency
For low-frequency applications, the power loss can be minimized by employing conductors with a large cross-sectional area, made from low-resistivity metals.
With high-frequency currents, the proximity effect and skin effect cause the current to be unevenly distributed across the conductor, increasing its effective resistance, and making loss calculations more difficult.
Litz wire is a type of wire constructed to force the current to be distributed uniformly, thereby reducing Joule heating.
Reducing copper loss
Among other measures, the electrical energy efficiency of a typical industrial induction motor can be improved by reducing the electrical losses in the stator windings (e.g., by increasing the cross-sectional area of the conductor, improving the winding technique, and using materials with higher electrical conductivity, such as copper). In power transmission, voltage is stepped up to reduce the current, thereby reducing power loss.
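To illustrate why stepping up the voltage reduces the loss, the following sketch compares the same delivered power at two transmission voltages (the line resistance and power figures are assumptions, and a unity power factor is assumed):
def line_loss(power_watts, voltage_volts, resistance_ohms):
    # I²R loss when delivering the given power at the given voltage.
    current = power_watts / voltage_volts
    return current ** 2 * resistance_ohms

print(line_loss(1e6, 11000, 5))   # about 41,300 W lost at 11 kV
print(line_loss(1e6, 110000, 5))  # about 413 W lost at 110 kV, one hundred times less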
References
Sources
External links
Reduction of copper losses
Electric transformers
Electrical engineering | Copper loss | Engineering | 481 |
67,385,737 | https://en.wikipedia.org/wiki/Discharge%20of%20radioactive%20water%20of%20the%20Fukushima%20Daiichi%20Nuclear%20Power%20Plant | Radioactive water from the Fukushima Daiichi Nuclear Power Plant in Japan began being discharged into the Pacific Ocean on 11 March 2011, following the Fukushima Daiichi nuclear disaster triggered by the Tōhoku earthquake and tsunami. Three of the plant's reactors experienced meltdowns, leaving behind melted fuel debris. Water was introduced to prevent the meltdowns from progressing further. When cooling water, groundwater, and rain came into contact with the melted fuel debris, they became contaminated with radioactive nuclides, such as iodine-131, caesium-134, Caesium-137, and strontium-90.
Over 500,000 tonnes of untreated wastewater (including 10,000 tonnes released to free up storage space) escaped into the ocean shortly after the accident. In addition, persistent leakage into groundwater was not admitted by the plant operator until 2013. The radioactivity from these sources exceeded legal limits.
Since then, contaminated water has been pumped into storage units and gradually treated using the Advanced Liquid Processing System (ALPS) to eliminate most radionuclides, except notably tritium with a half-life of 12.32 years. In 2021, the Japanese cabinet approved the release of ALPS-treated water containing tritium. Because it is still radioactive immediately after treatment, the solution will be diluted by sea water to a lower concentration before being discharged.
A review report by the International Atomic Energy Agency (IAEA) shows that the plan of discharging diluted ALPS-treated water into the sea is consistent with relevant international safety standards. It also emphasizes that the release of the treated water is a national decision by the Government of Japan and its report is neither a recommendation nor an endorsement of the decision.
On 24 August 2023, the power plant started releasing the treated portion of its wastewater into the Pacific Ocean. At the time, its storage units held over a million tonnes of wastewater in total. Because new wastewater is constantly being formed and even treated water must be discharged slowly by diluting it with more sea water, the entire process could take more than 30 years. The decision to release this water into the ocean has faced concerns and criticism from other countries and international organisations.
As of the fourth round of discharge in March 2024, no abnormal tritium levels have been detected in nearby waters.
Initial atmospheric release
Radioactive materials were dispersed into the atmosphere immediately after the disaster and account for most of all such materials leaked into the environment. 80% of the initial atmospheric release eventually deposited over rivers and the Pacific Ocean, according to a UNSCEAR report in 2020. Specifically, "the total releases to the atmosphere of Iodine-131 and Caesium-137 ranged generally between about 100 to about 500 PBq [petabecquerel, 10¹⁵ Bq] and 6 to 20 PBq, respectively. The ranges correspond to about 2% to 8% of the total inventory of Iodine-131 and about 1% to 3% of the total inventory of Caesium-137 in the three operating units (Units 1–3)".
Deposition on river
The indirect deposition to rivers comes from the earlier direct discharge to the atmosphere. "Continuing indirect releases of about 5 to 10 TBq [terabecquerel, 10¹² Bq] of Caesium-137 per year via rivers draining catchment areas", according to the UNSCEAR report in 2020.
Discharge to ocean, untreated water (2011)
On 5 April 2011, the operator of the nuclear plant, Tokyo Electric Power Company (TEPCO), discharged 11,500 tons of untreated water into the Pacific Ocean in order to free up storage space for water that is even more radioactive. The untreated water was the least radioactively contaminated among the stored water, but still 100 times the legal limit. TEPCO estimated that a total of 520,000 tons of untreated radioactive water had escaped into the ocean before it could place silt fences to contain further spills.
The UNSCEAR report in 2020 determined "direct releases in the first three months amounting to about 10 to 20 PBq [petabecquerel, 10¹⁵ Bq] of Iodine-131 and about 3 to 6 PBq of Caesium-137". About 82 percent of this had flowed into the sea before 8 April 2011.
Discharge to soil and groundwater by leakage
Scientists suspected that radioactive elements continued to leak into the ocean. High levels of caesium-134 were found in local fish, despite the isotope's comparatively shorter half-life. Meanwhile, radiation levels in the nearby sea water did not fall as expected. After repeated denials, the operator of the nuclear plant, Tokyo Electric Power Company (TEPCO), finally admitted on 22 July 2013 that leaks to groundwater had been happening. Some groundwater samples contained 310 Bq/L of cesium-134 and 650 Bq/L of cesium-137, exceeding WHO's maximum guideline of 10 Bq/L for drinking water.
It was later determined that some of the leaks came from the storage tanks for wastewater. Since then, TEPCO has had a record of being dishonest on its figures and has lost the public trust. For instance, in 2014, TEPCO blamed its own measuring method and revised the strontium in a groundwater well in July 2013 from 900,000 Bq/L to 5,000,000 Bq/L, which is 160,000 times the standard for discharge.
While soil naturally absorbs the caesium in groundwater, strontium and tritium can flow through more freely. At one time, nearly 400 tonnes of radioactive water was being formed every day (150,000 tonnes per year). TEPCO has since tried to stem or divert the inflow of groundwater to the damaged reactor sites and prevent contaminated water from escaping into the ocean.
The UNSCEAR report in 2020 concluded "Direct release of about 60 TBq [terabecquerel, 10¹² Bq] of caesium-137 in ground water draining from the site up to October 2015, when measures were taken to reduce these releases, and about 0.5 TBq per year thereafter".
In February 2024, a leak at the power plant was detected by a contractor and eventually repaired by TEPCO. The company estimated that 5.5 tonnes of water, which potentially contained 22 billion becquerels of radioactive materials such as caesium and strontium, had escaped from an air vent, pooled outside and seeped into the surrounding soil, but did not leave the plant compound. It said this was caused by 10 out of 16 valves being left open when they should have been closed for flushing.
Discharge to ocean, treated water
Advanced Liquid Processing System (2013–)
To prevent the reactor meltdowns from worsening, a continuous supply of new water is necessary to cool the melted fuel debris. As of 2013, 400 metric tonnes of water was becoming radioactively contaminated each day. The contaminated water is pumped out and combined into the reactor-cooling loop, which includes strontium–cesium removal (KURION, SURRY) and reverse osmosis desalination processes.
In October 2012, TEPCO introduced the "Advanced Liquid Processing System" (ALPS, ), which is designed to remove radionuclides other than tritium and carbon-14. ALPS works by first pre-processing the water by iron coprecipitation (removes alpha nuclides and organics) and carbonate coprecipitation (removes alkali earth metals including strontium elements). The water is then passed through 16 absorbent columns to remove nuclides.
Wastewater is pumped to ALPS along with the concentrated saltwater from desalination. As some tritium still remains, even treated water would require dilution to meet drinkable standards. Although carbon-14 is not removed, the content in pre-treatment water is low enough to meet drinkable standards without dilution.
Japan's Nuclear Regulation Authority (NRA) approved the design of ALPS in March 2013. ALPS is to be run in three independent units and will be able to purify 250 tons of water per day. Unit "A" started operation in April. In June, unit A was found to be leaking water and shut down. In July, the cause was narrowed down to chloride and hypochlorite corrosion of water tanks; TEPCO responded by adding a rubber layer into the tanks. By August, all systems were shut down awaiting repair. One unit was expected to come online by September, with full recovery planned by the end of 2013.
By September 2018, TEPCO reports that 20% of the water had been treated to the required level.
By 2020, the daily buildup of contaminated water was reduced to 170 metric tonnes thanks to groundwater isolation installations. TEPCO reports that 72% of the water in its tanks, some from early trials of ALPS, needed to be repurified. The portion of ready-to-discharge water rose to 34% by 2021, and to 35% by 2023.
Some scientists expressed reservations due to potential bioaccumulation of ruthenium, cobalt, strontium, and plutonium, which sometimes slip through the ALPS process and were present in 71% of the tanks.
Japanese approval and monitoring (2021-)
Since the 2011 Fukushima Daiichi nuclear disaster, the nuclear plant has accumulated 1.25 million tonnes of waste water, stored in 1,061 tanks on the plant grounds, as of March 2021. It was expected to run out of land for water tanks by 2022. It has been suggested the government could have solved the problem by allocating more land surrounding the power plant for water tanks, since the surrounding area had been designated as unsuitable for humans. Regardless, the government was reluctant to act. Mainichi Shimbun criticized the government for showing "no sincerity" in "unilaterally push[ing] through with the logic that there will no longer be enough storage space".
On 13 April 2021, the Cabinet of Prime Minister Suga unanimously approved that TEPCO dump the stored water to the Pacific Ocean over a course of 30 years. The Cabinet asserted the dumped water will be treated and diluted to drinkable standard. The idea of dumping had been floated by Japanese experts and officials as early as June 2016.
In April 2023, Japan's NRA announced a Comprehensive Radiation Monitoring Plan, in which the concentration of radionuclides in food (land and sea), soil, water, and air will be continually monitored across Japan. NRA also set up a system to monitor the radionuclide concentration in ALPS-processed water in order to verify TEPCO's readings.
International testing
An IAEA task force was dispatched to Japan in 2021, and they released their first report in February 2022. Among other findings, TEPCO has demonstrated to IAEA that their pump setup thoroughly mixes waters in tanks.
In May 2023, three IAEA laboratories and four national laboratories participated in an interlaboratory comparison to verify TEPCO's testing of ALPS-treated water. Out of the 30 radionuclides TEPCO regularly tests for, 12 were found to be above detection limits. 52 out of 53 results were found to agree with the combined result; the only problematic result was of I-129, where the Korea Institute of Nuclear Safety reported a value too low compared to the weighted average. TEPCO's methodology was found to be fit for purpose: although it is less sensitive for actinides than some participating labs, the detection limits were far enough from regulatory limits, and the alpha-emission screening test appears accurate enough. TEPCO's testing method for Am-241 may require additional review. The same sample was tested by Japan's NRA with no disagreements found.
The tritium that is not filtered out has a radioactivity of 148,900 Bq/L, compared to 620,000 Bq/L before treatment. TEPCO intends to dilute it down to 1,500 Bq/L or less before release.
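A back-of-the-envelope check of the dilution factor implied by the figures quoted above (a sketch only, using the concentrations stated in this section):
treated_tritium_bq_per_litre = 148900  # tritium activity after ALPS treatment
target_bq_per_litre = 1500             # TEPCO's stated concentration at discharge
print(treated_tritium_bq_per_litre / target_bq_per_litre)  # about 99, i.e. roughly a hundredfold dilution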
Discharge into the Pacific Ocean (2023–)
On 22 August 2023, Japan announced that it would start releasing treated radioactive water from the tsunami-hit Fukushima nuclear plant into the Pacific Ocean in 48 hours, despite opposition from its neighbours. Japan says the water is safe after the use of the Advanced Liquid Processing System (ALPS), which removes nearly all traces of radiation from the wastewater, with tritium being the primary exception. As a result, Japan has committed to diluting the water in order to bring levels of tritium below the regulatory standards set by the International Atomic Energy Agency. The International Atomic Energy Agency has stated that the plan meets safety standards, but critics contend that more studies need to be done and the release should be halted. On 24 August, Japan began the discharge of treated waste water into the Pacific Ocean, sparking protests in the region and prompting China to expand its ban to all aquatic imports from Japan. Over 1 million tonnes of treated wastewater will be released by Japan over the next thirty years as per the plan.
On August 25, TEPCO reported that the amount of tritium in seawater around Fukushima has remained below the detection limit of 10 Bq/L. The Japanese Fishery Agency reported that fish caught 4 km away from the discharge pipe contained no detectable amounts of tritium.
In March 2024, the discharge was suspended temporarily after the Fukushima coastal region experienced another 5.8-magnitude earthquake. No abnormalities were detected with the wastewater treatment.
Reactions
Official nuclear science panels
The Japanese expert panel "ALPS subcommittee", chaired by nuclear scientist Ichiro Yamamoto, released a report in January 2020 which calculated that discharging all the water to the sea in one year would cause a radiation dose of 0.81 microsieverts to local residents, which is negligible compared with the average natural background radiation in Japan of about 2,100 microsieverts per year. Its calculations were verified as correct by the International Atomic Energy Agency.
Japanese public
A panel of public policy professors pointed out the lack of research on the harmful effects of tritium. It also criticized the government being insincere on accepting alternative disposal proposals as the proposals were always shelved after "procedural" discussion.
A survey by Asahi Shimbun in December 2020 found, among 2,126 respondents, that 55% of Japanese opposed dumping and 86% worried about international reception. Opposition is strongest among fishers and coastal communities.
The Fukushima Fishery Cooperatives was given written promises by TEPCO's CEO Hirose Naomi in 2015 that TEPCO would not dump the water before consulting the fishery industry. The Cooperatives felt bypassed and betrayed by the government's decision.
In August 2023, fisheries minister Tetsuro Nomura called the treated radioactive water "contaminated" but later apologised and retracted the statement after receiving an instruction from Prime Minister Fumio Kishida.
International reactions
Opposed to discharge
The South Korean government has been concerned since 2019 that Japan's release of radioactive water from Fukushima could be non-compliant with Article 2 of the London Protocol to protect the marine environment, but the Japanese government says the release is not applicable because it is a land-based pollution.
In June 2020, Baskut Tuncak, the United Nations Special Rapporteur on toxics and human rights, wrote in Japan's Kyodo News that the communities of Fukushima "have the right not to be exposed deliberately to additional radioactive contamination." Greenpeace and five other UN Rapporteurs (including Clément Nyaletsossi Voule) issued condemnations echoing those sentiments.
Various governments have voiced concerns, including the governments of South Korea, North Korea, Taiwan, China, Russia, Germany, the Philippines, New Zealand, Belize, Costa Rica, Dominican Republic, El Salvador, Guatemala, Honduras, Nicaragua, Panama, and Mexico.
In June 2021, at least 70 U.S. civic groups condemned Japan's wastewater discharge plan, and 17 civic organizations from various countries held protests in Berlin.
In January 2023, the U.S. National Association of Marine Laboratories expressed their opposition to the plan and stated that "there was a lack of adequate and accurate scientific data supporting Japan's assertion of safety".
In June 2023, South Korean shoppers rushed to buy up salt and other items prior to the expected release of the treated discharge. The South Korean government had banned seafood from the waters near Fukushima and says it will closely monitor the radioactivity level of salt farms. A similar salt rush occurred in China, after the discharge began.
In the months leading up to the start of discharge, over 80 per cent of South Koreans surveyed opposed the dumping, and over 60 per cent indicated intention to avoid seafood products after the release begins.
In August 2023, the Green Party of the United States issued a press release opposing the discharge.
In the same month, Shaun Burnie, senior nuclear specialist for Greenpeace, accused the Japanese government and TEPCO of diverting attention away from the radiation levels in the waste water from the nuclear plant by emphasizing tritium, arguing that various other harmful radionuclides, including strontium-90, iodine, ruthenium, rhodium, antimony, tellurium, cobalt, and carbon-14, will remain present even after filtration.
On 24 August, protests against the discharge erupted in South Korea, Hong Kong and Tokyo. According to the organisers, about 50,000 people gathered in Seoul. Some attempted to storm the Japanese embassy there.
Japanese shops reported receiving spam calls from China, prompting the Japanese government to summon a Chinese diplomat in response. A man threw stones into a Japanese school in Qingdao, and eggs were thrown into one in Suzhou, with no confirmed damage. Social media campaigns in China called for a boycott of Japanese products. This drove a 14% single-day stock price decline for luxury cosmetics conglomerate Shiseido.
Chinese state media outlets ran paid ads denouncing the water release on Facebook and Instagram in multiple countries and languages. Analysts labeled it part of a concerted disinformation campaign.
The Chinese government, Hong Kong, Macau, and the South Korean government have banned aquatic imports from some or all regions of Japan.
In support of discharge
U.S. Secretary of State Antony Blinken stated on 13 April 2021, "We thank Japan for its transparent efforts in its decision to dispose of the treated water". US Climate Envoy John Kerry expressed support.
On 14 June 2023, Palau President Surangel Whipps Jr. expressed understanding of Japan's plan to release the treated waste water into the sea, remarking "The people who would be impacted most are their own people. [...] And if it's acceptable to their people, it should be acceptable to all of us."
On 23 August 2023, Fijian Prime Minister Sitiveni Rabuka expressed support for the IAEA report, stating that the Japanese plan met international safety standards and most of the waste water would be discharged into Japan's "own backyard".
Mixed
Micronesia had expressed strong opposition to Japan releasing the water. In February 2023, however, President of the Federated States of Micronesia David Panuelo said he trusted Japan's intention and capabilities but stopped short of offering full support, saying that he would continue to consult with Japan to ensure the water's safety.
IAEA report
On 23 March 2021, Rafael Grossi, director-general of the International Atomic Energy Agency (IAEA), reached a consensus with the Japanese government three weeks before its announcement of decision to release water from the damaged power plant.
In February 2023, Robert H. Richmond, a marine biologist consulting for the Pacific Islands Forum (PIF), expressed doubts about the data behind Japan's plan. He pointed out that whereas the PIF is focused more on people and the ocean, the IAEA "has a mandate to promote the use of nuclear energy" and "there are alternatives" to discharging the water.
In July 2023, the IAEA released its conclusion that Japan's plans to slowly discharge the treated wastewater are in accordance with the relevant international safety standards but stopped short of endorsing the decision, which is for Japan's government to make.
Arjun Makhijani, president of the Institute for Energy and Environmental Research, criticised the IAEA for not looking into its own safety principle of justification, that is, whether an action's benefit outweighs its cost, because the IAEA was approached to make a report after Japan had already decided to discharge the water.
On 10 July 2023, New Zealand expressed confidence in the IAEA report.
On 9 August 2023, during a Nuclear Non-Proliferation Treaty (NPT) committee, Australia, France, Italy, Malaysia, United Kingdom, and the United States expressed support for the IAEA report. Australia said it was "independent, impartial and science-based" and trusts it completely, the UK also said it can be trusted, and the US said the report was impartial. South Korea requested that the IAEA inspect every step of the discharging, while China said the report was insufficient and urged Japan not to proceed with its plan. Australia expressed confidence in the IAEA report again on 23 August.
Pacific Islands Forum
In April 2021, the Pacific Islands Forum expressed deep concerns and urged Japan to rethink its decision on the discharge of the ALPS Treated Water.
In August 2023, a panel of five independent experts consulting for the Pacific Islands Forum was split on the issue of discharge. Some had no issue with it, saying it would not harm the Pacific. Two of them said trying to obtain information from Japan was difficult and its data had "red flags". The panelists wrote that more study is needed on the contaminants inside the water tanks, that TEPCO only took small samples from a quarter of the tanks, which showed large variations in readings, and used commercial pellets, not tritium-exposed fish, as food source for its experiments. Ken Buesseler, a scientist at the Woods Hole Oceanographic Institution (WHOI), does not expect widespread direct health effects across the Pacific but said contaminants missed by ALPS could accumulate near the shore in Japan and ultimately hurt fisheries in local areas. He recommended keeping them on land instead and mixing into concrete, for example, which would have been easier to monitor.
Environmental effects
Initial discharge
A large amount of caesium entered the sea from the initial atmospheric release (see above). By 2013, the concentrations of caesium-137 in the Fukushima coastal waters were around the level before the accident. However, concentrations in coastal sediments declined more slowly than in coastal waters, and the amount of caesium-137 stored in sediments most likely exceeded that in the water column by 2020. The sediments may provide a long-term source of caesium-137 in the seawater. According to Buesseler, the release of strontium-90 could be more problematic because, unlike some of the other isotopes, it gets into a person's bones.
Data on marine foods indicate that their radioactive concentrations are falling towards initial levels. 41% of samples caught off the Fukushima coast in 2011 had caesium-137 concentrations above the legal limit (100 becquerels per kilogram), and this had declined to 0.05% in 2015. The United States Food and Drug Administration stated in 2021 that "FDA has no evidence that radionuclides from the Fukushima incident are present in the U.S. food supply at levels that are unsafe". Yet presenting the science alone has not helped consumers regain their trust in Fukushima fishery products.
2023 discharge
The most prevalent radionuclide in the wastewater is tritium. A total of 780 terabecquerels (TBq) will be released into the ocean at a rate of 22 TBq per year.
Tritium is routinely released into the ocean from operating nuclear power plants, sometimes in much greater quantities. For comparison, the La Hague nuclear processing site in France released 11,400 TBq of tritium in the year of 2018. In addition, about 60,000 TBq of tritium is produced naturally in the atmosphere each year by cosmic rays.
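Simple arithmetic on the figures quoted above (a sketch only) shows how the planned rate relates to the overall timescale and to the La Hague comparison:
total_tritium_tbq = 780      # total tritium to be released
annual_limit_tbq = 22        # maximum planned release per year
la_hague_2018_tbq = 11400    # La Hague releases in 2018, for comparison
print(total_tritium_tbq / annual_limit_tbq)   # about 35 years at the maximum rate
print(la_hague_2018_tbq / annual_limit_tbq)   # La Hague released over 500 times the planned annual amount in a single year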
Other radionuclides present in the wastewater, like caesium-137, are not normally released by nuclear power plants. However, their concentrations in the treated water are minuscule relative to regulatory limits.
"There is consensus among scientists that the impact on health is minuscule, still, it can't be said the risk is zero, which is what causes controversy", Michiaki Kai, a Japanese nuclear expert, told AFP. David Bailey, a physicist whose lab measures radioactivity, said that with tritium at diluted concentrations, "there is no issue with marine species, unless we see a severe decline in fish population".
Ferenc Dalnoki-Veress, a scientist-in-residence at the Middlebury Institute of International Studies at Monterey, said regarding dilution that bringing in living creatures makes the situation more complex. Robert Richmond, a biologist from the University of Hawaii, told the BBC that the inadequate radiological and ecological assessment raises the concern that Japan would be unable to detect what enters the environment and "get the genie back in the bottle". Dalnoki-Veress, Richmond, and three other panelists consulting for the Pacific Islands Forum wrote that dilution may fail to account for bioaccumulation and exposure pathways that involve organically-bound tritium (OBT).
Presenting the science alone has yet to gain public trust, as the government's attitude was deemed insincere by the public.
See also
2011 Tōhoku earthquake and tsunami
2011 Fukushima nuclear disaster
London Convention on the Prevention of Marine Pollution by Dumping of Wastes and Other Matter
Nuclear power in Japan
References
External links
Basic policy on handling of the ALPS treated water, Ministry of Economy, Trade and Industry, Japan
Treated water portal, TEPCO
Current ALPS treated water, etc. conditions , TEPCO
Measurement and Analysis Results for Contaminated Water Treatment, TEPCO, reports ALPS outlet and tank storage concentrations
Data from the treated water discharge, International Atomic Energy Agency – Live data
– review describing ALPS technology and performance written by Finnish scientists using publicly-available information
Fukushima Daiichi nuclear disaster
Environmental issues in Japan
Water pollution in Japan
Water supply and sanitation in Japan
Fishing industry in Japan
Natural history of Japan
Nature conservation in Japan
Environmental controversies
2011 in Japan
2021 in Japan
2011 in the environment
2011 industrial disasters
2011 Tōhoku earthquake and tsunami
Radiation accidents and incidents
INES Level 7 accidents
History of the Pacific Ocean
Environmental impact of nuclear power | Discharge of radioactive water of the Fukushima Daiichi Nuclear Power Plant | Technology | 5,497 |
9,952,794 | https://en.wikipedia.org/wiki/Eustigmatophyte | Eustigmatophytes are a small group (17 genera; ~107 species) of eukaryotic forms of algae that includes marine, freshwater and soil-living species.
All eustigmatophytes are unicellular, with coccoid cells and polysaccharide cell walls. Eustigmatophytes contain one or more yellow-green chloroplasts, which contain chlorophyll a and the accessory pigments violaxanthin and β-carotene. Eustigmatophyte zoids (gametes) possess either a single flagellum or a pair of flagella, originating from the apex of the cell. Unlike those of other heterokontophytes, eustigmatophyte zoids do not have a typical photoreceptive organelle (eyespot) within the chloroplast; instead, an orange-red eyespot outside the chloroplast is located at the anterior end of the zoid.
Ecologically, eustigmatophytes occur as photosynthetic autotrophs across a range of systems. Most eustigmatophyte genera live in freshwater or in soil, although Nannochloropsis contains marine species of picophytoplankton (2–4 μm).
The class was erected to include some algae previously classified in the Xanthophyceae.
Classification
Class Eustigmatophyceae Hibberd & Leedale 1970
Order Eustigmatales Hibberd 1981
Genus Paraeustigmatos Fawley, Nemcová, & Fawley 2019
Family Eustigmataceae Hibberd 1981 [Chlorobothryaceae Pascher 1925; Pseudocharaciopsidaceae Lee & Bold ex Hibberd 1981]
Genus ?Ellipsoidion Pascher 1937
Genus Chlorobotrys Bohlin 1901
Genus Eustigmatos Hibberd 1981
Genus Pseudocharaciopsis Lee & Bold 1973
Genus Pseudostaurastrum Chodat 1921
Genus Vischeria Pascher 1938 - 16 spp.
Family Monodopsidaceae Hibberd 1981 [Loboceae Hegewald 2007]
Genus Microchloropsis Fawley, Jameson & Fawley 2015
Genus Monodopsis Hibberd 1981
Genus Nannochloropsis Hibberd 1981
Genus Pseudotetraedriella Hegewald & Padisák 2007
Family Neomonodaceae Amaral et al. 2020
Genus ?Botryochloropsis Preisig & Wilhelm 1989
Genus Characiopsiella Amaral et al. 2020
Genus Munda Amaral et al. 2020
Genus Neomonodus Amaral et al. 2020
Genus Pseudellipsoidion Neustupa & Nemková 2001
Order Goniochloridales Fawley, Elias & Fawley 2013
Family Goniochloridaceae
Genus Goniochloris Geitler 1928
Genus Pseudostaurastrum Chodat 1921
Genus Tetraedriella Pascher 1930
Genus Trachydiscus H.Ettl 1964
Genus Vacuoliviride Nakayama et al. 2015
Phylogeny
Phylogeny of Eustigmatophyceae based on the work of Amaral et al. (2020)
References
Ochrophyta
Eustigmatophyceae | Eustigmatophyte | Biology | 668 |
28,800,843 | https://en.wikipedia.org/wiki/Cantharellus%20formosus | Cantharellus formosus, commonly known as the Pacific golden chanterelle, is a fungus native to the Pacific Northwest region of North America. It is a member of the genus Cantharellus along with other popular edible chanterelles. It was distinguished from C. cibarius in the 1990s. It is orange to yellow, meaty and funnel-shaped. On the underside of the smooth cap, it has gill-like ridges that run down onto its stipe, which tapers down from the cap. The false gills often have a pinkish hue. It has a mild, sweet odor.
It appears solitary to gregarious in coniferous forests, from July to December. It is a choice edible mushroom and Oregon's state mushroom.
Taxonomy
E. J. H. Corner formally described C. formosus in 1966 from specimens collected on Vancouver Island in 1938. Despite this publication, the name C. cibarius (a European species) continued to be used to refer to golden chanterelles in the Pacific Northwest. In 1997, Redhead et al. re-examined Corner's specimens, returned to the type locale, and collected new specimens, confirming the identity of C. formosus. DNA analysis has since confirmed the species-level rank of C. formosus.
Description
Fruiting bodies of C. formosus range from wide, with cap colors varying depending on light levels and weather. In dry weather, the cap is medium orange yellow to light yellow brown, but wet weather may brighten the cap to brilliant to soft orange yellow. In low light conditions, caps may not develop the yellow pigmentation, resulting in salmon to rosy buff colors. The false gills may be yellow, salmon, buff, or even whitish depending on conditions, but are usually paler than the cap. The stem is colored similarly to the cap, and is either equal-width or tapering downwards. The spore print is a yellowish white color.
Similar species
Several other species of chanterelle may be found in western North America:
C. californicus – large size, associated with oaks in California
C. cascadensis – bright yellow fading to white in center of cap, may have bulbous base of stem
C. cibarius var. roseocanus – brilliant orange-yellow color without pinkish hues, false gills not paler than cap
C. subalbidus – whitish overall color
Additionally, Hygrophoropsis aurantiaca, Chroogomphus tomentosus, and species in the genera Craterellus, Gomphus (namely G. floccosus and G. kauffmanii), Omphalotus (particularly the poisonous O. olivascens in California), and Polyozellus may have a somewhat similar appearance to C. formosus.
Distribution and habitat
Cantharellus formosus has been reported from British Columbia to California, and is particularly abundant in the conifer forests of Washington and Oregon. It forms a mycorrhizal association with Douglas-fir and western hemlock, and has been shown to be more common in younger (40- to 60-year-old) forests than in old-growth forests. It grows solitary to gregarious, from July to December.
Uses
The mushroom has a mildly sweet odor and a mild taste. It should be brushed clean but not washed before cooking. It can be tossed, stir-fried, and sautéed in butter or oil. Commonly sold in grocery markets and restaurants, it is the most important commercially harvested Cantharellus species in the Pacific Northwest.
In culture
The species has been designated Oregon's state mushroom due to its economic value and abundance.
References
External links
formosus
Edible fungi
Fungi of Canada
Fungi of the United States
Fungi described in 1966
Fungi without expected TNC conservation status
Fungus species | Cantharellus formosus | Biology | 786 |
5,092,080 | https://en.wikipedia.org/wiki/Stinespring%20dilation%20theorem | In mathematics, Stinespring's dilation theorem, also called Stinespring's factorization theorem, named after W. Forrest Stinespring, is a result from operator theory that represents any completely positive map on a C*-algebra A as a composition of two completely positive maps each of which has a special form:
A *-representation of A on some auxiliary Hilbert space K followed by
An operator map of the form T ↦ V*TV.
Moreover, Stinespring's theorem is a structure theorem from a C*-algebra into the algebra of bounded operators on a Hilbert space. Completely positive maps are shown to be simple modifications of *-representations, or sometimes called *-homomorphisms.
Formulation
In the case of a unital C*-algebra, the result is as follows:
Theorem. Let A be a unital C*-algebra, H be a Hilbert space, and B(H) be the bounded operators on H. For every completely positive map
Φ : A → B(H),
there exists a Hilbert space K and a unital *-homomorphism
π : A → B(K)
such that
Φ(a) = V* π(a) V for all a in A,
where V : H → K is a bounded operator. Furthermore, we have
‖Φ(1)‖ = ‖V‖².
Informally, one can say that every completely positive map Φ can be "lifted" up to a map of the form V* π(·) V.
The converse of the theorem is true trivially. So Stinespring's result classifies completely positive maps.
Sketch of proof
We now briefly sketch the proof. Let K = A ⊗ H, the algebraic tensor product. For elementary tensors a ⊗ h and b ⊗ g, define
⟨a ⊗ h, b ⊗ g⟩ := ⟨Φ(b*a) h, g⟩
and extend by semi-linearity to all of K. This is a Hermitian sesquilinear form because Φ is compatible with the * operation. Complete positivity of Φ is then used to show that this sesquilinear form is in fact positive semidefinite. Since positive semidefinite Hermitian sesquilinear forms satisfy the Cauchy–Schwarz inequality, the subset
K′ = {x ∈ K : ⟨x, x⟩ = 0}
is a subspace. We can remove degeneracy by considering the quotient space K / K′. The completion of this quotient space is then a Hilbert space, also denoted by K. Next define π(a)(b ⊗ h) := ab ⊗ h and V h := 1 ⊗ h. One can check that π and V have the desired properties.
Notice that V is just the natural algebraic embedding of H into K. One can verify that V* π(a) V = Φ(a) holds. In particular ‖Φ(1)‖ = ‖V‖² holds, so that V is an isometry if and only if Φ(1) = 1. In this case H can be embedded, in the Hilbert space sense, into K and V V*, acting on K, becomes the projection onto H. Symbolically, we can write
Φ(a) = P_H π(a)|_H.
In the language of dilation theory, this is to say that Φ is a compression of π. It is therefore a corollary of Stinespring's theorem that every unital completely positive map is the compression of some *-homomorphism.
Minimality
The triple (π, V, K) is called a Stinespring representation of Φ. A natural question is now whether one can reduce a given Stinespring representation in some sense.
Let K1 be the closed linear span of π(A) VH. By a general property of *-representations, K1 is an invariant subspace of π(a) for all a. Also, K1 contains VH. Define
π1(a) := π(a)|K1 (the restriction of π(a) to K1).
We can compute directly
V* π1(a) V = V* π(a) V = Φ(a),
and if k and ℓ lie in K1
⟨π1(a) k, ℓ⟩ = ⟨π(a) k, ℓ⟩.
So (π1, V, K1) is also a Stinespring representation of Φ and has the additional property that K1 is the closed linear span of π(A) V H. Such a representation is called a minimal Stinespring representation.
Uniqueness
Let (π1, V1, K1) and (π2, V2, K2) be two Stinespring representations of a given Φ. Define a partial isometry W : K1 → K2 by
W π1(a) V1 h = π2(a) V2 h.
On V1H ⊂ K1, this gives the intertwining relation
W V1 = V2.
In particular, if both Stinespring representations are minimal, W is unitary. Thus minimal Stinespring representations are unique up to a unitary transformation.
Some consequences
We mention a few of the results which can be viewed as consequences of Stinespring's theorem. Historically, some of the results below preceded Stinespring's theorem.
GNS construction
The Gelfand–Naimark–Segal (GNS) construction is as follows. Let H in Stinespring's theorem be 1-dimensional, i.e. the complex numbers. So Φ now is a positive linear functional on A. If we assume Φ is a state, that is, Φ has norm 1, then the isometry V : C → K is determined by
V(1) = ξ
for some ξ ∈ K of unit norm. So
Φ(a) = V* π(a) V = ⟨π(a) ξ, ξ⟩
and we have recovered the GNS representation of states. This is one way to see that completely positive maps, rather than merely positive ones, are the true generalizations of positive functionals.
A linear positive functional on a C*-algebra is absolutely continuous with respect to another such functional (called a reference functional) if it is zero on any positive element on which the reference positive functional is zero. This leads to a noncommutative generalization of the Radon–Nikodym theorem. The usual density operator of states on the matrix algebras with respect to the standard trace is nothing but the Radon–Nikodym derivative when the reference functional is chosen to be trace. Belavkin introduced the notion of complete absolute continuity of one completely positive map with respect to another (reference) map and proved an operator variant of the noncommutative Radon–Nikodym theorem for completely positive maps. A particular case of this theorem corresponding to a tracial completely positive reference map on the matrix algebras leads to the Choi operator as a Radon–Nikodym derivative of a CP map with respect to the standard trace (see Choi's Theorem).
Choi's theorem
It was shown by Choi that if Φ : B(G) → B(H) is completely positive, where G and H are finite-dimensional Hilbert spaces of dimensions n and m respectively, then Φ takes the form:
Φ(a) = Σi Vi* a Vi, with i = 1, ..., nm.
This is called Choi's theorem on completely positive maps. Choi proved this using linear algebra techniques, but his result can also be viewed as a special case of Stinespring's theorem: Let (π, V, K) be a minimal Stinespring representation of Φ. By minimality, K has dimension at most that of B(G) ⊗ H, that is, n²m. So without loss of generality, K can be identified with
K = G1 ⊕ G2 ⊕ ⋯ ⊕ Gnm.
Each Gi is a copy of the n-dimensional Hilbert space G. Since π is a unital *-homomorphism of the full matrix algebra B(G), we see that the above identification of K can be arranged so that π(a) = a ⊕ a ⊕ ⋯ ⊕ a (equivalently, Pi π(a) Pi* = a for each i), where Pi is the projection from K to Gi. Let Vi = Pi V. We have
Φ(a) = V* π(a) V = Σi (Pi V)* a (Pi V) = Σi Vi* a Vi,
and Choi's result is proved.
Choi's result is a particular case of noncommutative Radon–Nikodym theorem for completely positive (CP) maps corresponding to a tracial completely positive reference map on the matrix algebras. In strong operator form this general theorem was proven by Belavkin in 1985 who showed the existence of the positive density operator representing a CP map which is completely absolutely continuous with respect to a reference CP map. The uniqueness of this density operator in the reference Steinspring representation simply follows from the minimality of this representation. Thus, Choi's operator is the Radon–Nikodym derivative of a finite-dimensional CP map with respect to the standard trace.
Notice that, in proving Choi's theorem, as well as Belavkin's theorem from Stinespring's formulation, the argument does not give the Kraus operators Vi explicitly, unless one makes the various identification of spaces explicit. On the other hand, Choi's original proof involves direct calculation of those operators.
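As a numerical illustration of the two preceding paragraphs, the Kraus operators can be extracted explicitly from the eigendecomposition of the Choi matrix. This is a minimal sketch assuming NumPy; the function names and the amplitude-damping example are illustrative, and the code uses the Schrödinger-picture convention Φ(ρ) = Σ A ρ A* common in quantum information, which differs from the formula above by taking adjoints:
import numpy as np

def choi_matrix(kraus_ops, d_in):
    # Unnormalised Choi matrix J = sum_ij E_ij (x) Phi(E_ij).
    d_out = kraus_ops[0].shape[0]
    J = np.zeros((d_in * d_out, d_in * d_out), dtype=complex)
    for i in range(d_in):
        for j in range(d_in):
            E = np.zeros((d_in, d_in), dtype=complex)
            E[i, j] = 1.0
            phi_E = sum(A @ E @ A.conj().T for A in kraus_ops)
            J += np.kron(E, phi_E)
    return J

def kraus_from_choi(J, d_in, d_out, tol=1e-10):
    # Each eigenvector of J with a nonzero eigenvalue yields one Kraus operator.
    vals, vecs = np.linalg.eigh(J)
    ops = []
    for lam, v in zip(vals, vecs.T):
        if lam > tol:
            # component (i, m) of the eigenvector corresponds to entry [m, i] of the operator
            ops.append(np.sqrt(lam) * v.reshape(d_in, d_out).T)
    return ops

# Example: amplitude-damping channel on one qubit (gamma = 0.3)
g = 0.3
A0 = np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex)
A1 = np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)
J = choi_matrix([A0, A1], d_in=2)
recovered = kraus_from_choi(J, d_in=2, d_out=2)

# The recovered set may differ from (A0, A1) by a unitary recombination,
# but it defines the same map: compare the action on a test density matrix.
rho = np.array([[0.7, 0.2 + 0.1j], [0.2 - 0.1j, 0.3]])
out_original = sum(A @ rho @ A.conj().T for A in (A0, A1))
out_recovered = sum(B @ rho @ B.conj().T for B in recovered)
print(np.allclose(out_original, out_recovered))  # True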
Naimark's dilation theorem
Naimark's theorem says that every B(H)-valued, weakly countably-additive measure on some compact Hausdorff space X can be "lifted" so that the measure becomes a spectral measure. It can be proved by combining the fact that C(X) is a commutative C*-algebra and Stinespring's theorem.
Sz.-Nagy's dilation theorem
This result states that every contraction on a Hilbert space has a unitary dilation with the minimality property.
Application
In quantum information theory, quantum channels, or quantum operations, are defined to be completely positive maps between C*-algebras. Being a classification for all such maps, Stinespring's theorem is important in that context. For example, the uniqueness part of the theorem has been used to classify certain classes of quantum channels.
For the comparison of different channels and computation of their mutual fidelities and information another representation of the channels by their "Radon–Nikodym" derivatives introduced by Belavkin is useful. In the finite-dimensional case, Choi's theorem as the tracial variant of the Belavkin's Radon–Nikodym theorem for completely positive maps is also relevant. The operators from the expression
Φ(a) = Σi Vi* a Vi
are called the Kraus operators of Φ. The expression
Σi Vi* a Vi
is sometimes called the operator sum representation of Φ.
References
M.-D. Choi, Completely Positive Linear Maps on Complex Matrices, Linear Algebra and its Applications, 10, 285–290 (1975).
V. P. Belavkin, P. Staszewski, Radon–Nikodym Theorem for Completely Positive Maps, Reports on Mathematical Physics, v. 24, No 1, 49–55 (1986).
V. Paulsen, Completely Bounded Maps and Operator Algebras, Cambridge University Press, 2003.
W. F. Stinespring, Positive Functions on C*-algebras, Proceedings of the American Mathematical Society, 6, 211–216 (1955).
Operator theory
Operator algebras
Theorems in functional analysis | Stinespring dilation theorem | Mathematics | 1,961 |
33,840,668 | https://en.wikipedia.org/wiki/Anxiety%20dream | An anxiety dream is an unpleasant dream which can be more disturbing than a nightmare. Anxiety dreams are characterized by feelings of unease, distress, or apprehension in the dreamer upon waking. Anxiety dreams tend to occur in rapid eye movement sleep, and usual themes involve incomplete tasks, embarrassment, falling, getting into legal or financial trouble, failed pursuits, and being pursued by another (often an unrealistic entity, though other human beings can also be the pursuer). Anxiety dreams may be caused by childhood trauma or by conflict faced in adult life. Though they create anxiety in the dreamer, anxiety dreams also serve as a way for a person's ego to reset.
Classification and provenance
Most individuals, when woken by a disturbing dream, would label it as a nightmare; but dream classification is not that simple. Anxiety dreams, punishment dreams, nightmares, post-trauma dreams, and night terrors are difficult to distinguish because they are commonly clumped under the term "nightmare". The different types of dreams, however, have different qualities. The stage in which the dream occurs is key. Anxiety dreams, punishment dreams, nightmares, or post-trauma dreams occur in the REM stage of sleep, while night terrors will occur in the NREM stage.
Ernest Jones, author of On The Nightmare, states that the characteristics of a nightmare are: "Intense or agonizing dread; the sense of oppression or of weight on the chest which dangerously threatens the continuation of breathing; and the dreamer’s conviction of being helpless or paralyzed." Published in 1911, these characteristics lasted sixty years until American sleep researcher, Charles Fisher, and his colleagues recognized that they were too broad. Fisher concluded that distressing dreams in REM sleep will contain the feeling of weight on the chest and sense of helplessness, but the intense or agonizing dread is a characteristic of NREM dreams. These dreams are more commonly known as night terrors.
The division of distressing dreams within REM sleep is subtle. The distinction between an anxiety dream and a nightmare comes down to what, contributing author of The Nightmare, Ruth Bers Shapiro calls the "profoundly disturbing" content that distinguishes the nightmare from the anxiety dream.
Common themes
Common themes in anxiety dreams involve incomplete tasks. These can include such things as a suitcase that has not been packed or an exam that has not been taken. Another common theme is the loss of a family member. Freud places these dreams into two categories: "those in which there is sorrow attached to the death and those in which there is no grief." Other themes can involve embarrassment. The dream of falling or being chased is also prevalent in anxiety dreams. These usually take place at the onset of sleep during pictorial consciousness and have little structure or plot.
Pre-Freudian explanations
In literature
Anxiety dreams have a long tradition in (Western) literature, beginning with Homer, who describes in Book 22 of the Iliad how Achilles is unable to catch up with Hector, "As in a dream a man is not able to follow one who runs from him, nor can the runner escape, nor the other pursue him, so he could not run him down in his speed, nor the other get clear." This anxiety of not being able to escape (or catch up) was borrowed from Homer by Virgil in Book XII of the Aeneid, where Turnus is unable to catch up with Aeneas; subsequently the dream is found (always in simile, never reported directly) in Oppian's Halieutica, in Torquato Tasso's Jerusalem Delivered, and in Phineas Fletcher's Locusts and Purple Island, to be "burlesqued" in Samuel Butler's Hudibras. An anxiety dream related more directly is Eve's in Books 4 and 5 of John Milton's Paradise Lost, who dreams prophetically that she will eat of the fruit of the forbidden tree, an event that will take place in Book 9. Other such anxiety dreams are found in the Anglo-Saxon elegy "The Wanderer" and in Arthurian romances such as Wolfram von Eschenbach's Parzival and Sir Gawain and the Green Knight (ll. 1750-55).
Supposed origin
In contrast to the supernatural and somatic origins for dreams proposed in classical dream theory, anxiety dreams were considered to be continuations of thoughts interrupted by sleep. Such references are found (cryptically) in Greek authors including the pre-Socratics and Herodotus, and (more explicitly) in Ecclesiastes 5:3 and Ecclesiasticus 34:1-7. Aristotle confirmed in the Problemata that waking thoughts are continued in sleep, and that even some prophetic (normally divinely inspired) dreams may result from anxiety continued in a dream. This theory is confirmed by Cicero (De diviniatione), Lucretius, and Petronius (Fragment 31). The seventeenth-century poet Abraham Cowley rendered a well-known medieval couplet on the subject into English: "What in the day he fears of future woe / At night in dreams, like truth, affrights his mind".
Freudian theory
Function
Freud’s theory is explained in his Interpretation of Dreams. One aspect of Freud’s work was his wish fulfillment theory; however, anxiety dreams were not always thought to fit within this theory as normal human nature is to avoid anxiety. Freud expected others to point out the discrepancy, and psychoanalyst Charles Brenner did just that. Freud countered Brenner by explaining the different ways that anxiety dreams and wish fulfillment could be intertwined. Freud gave one specific example in which a child dreamt his mother had gone missing and he had no one to comfort him. Freud explained, "the child dreamt of exchanging endearments with his mother and of sleeping with her; but all the pleasure was transformed into anxiety, and all the ideational content into its opposite." In this way the function of the anxiety dream is to disguise the unsavory wish fulfillment with a sense of punishment and resulting anxiety.
Causes
One suggested cause of anxiety dreams is childhood trauma. A factor in this is the developing ego of the child. This is especially true of children about one year in age. At this age anxiety dreams occur because the child's ego can't integrate his or her daily experiences. Shapiro also explains that the growing ego is easily affected by trauma and conflicts the child may be experiencing. This is an important factor because the ego-defense mechanisms (e.g. repression and intellectualization) are key in staving off anxiety dreams and nightmares.
Conflict in a child's life as well as the approaching of developmental stages can also cause anxiety dreams. For example, there may be conflict present as a child begins toilet training. "Toilet training precipitates conflicts between the wish to soil and fear of loss of parental love. If, during this period, the child is subject to disturbing experiences which leave him feeling helpless and unprotected, his anxiety over parental disapproval is exacerbated." This anxiety could likely lead to anxiety dreams in a child.
Effects
Positive
Anxiety dreams have an important function. When the ego has been overworked, often the only way it can reset is when one wakes up. Anxiety dreams will build until the dreamer is forced to wake and thus let the ego refocus. Shapiro also noted that anxiety dreams may serve in "alerting the dreamer to a psychologically dangerous situation".
Negative
General anxiety is a negative effect of anxiety dreams. Individuals dealing with distress in their dreams have been found to have general anxiety more often than those who were experiencing real life events that could be equally stressful.
Treatments
Barry Krakow developed three steps to alleviate any anxiety dream or nightmare. These steps include:
Learning imagery techniques
Recording the dreams
Changing the dreams
Once a person has been taught the first step he/she can continue using the second and third steps to overcome any new anxiety dreams that might develop.
If more help is needed one might consider workshops that utilize psychodrama and psychotherapeutic techniques. As doctorandus Herma Reeskamp explains, workshops such as these aim to "help patients change the haunting themes of their nightmares and anxiety-filled dreams".
References
Dream | Anxiety dream | Biology | 1,686 |
53,827,083 | https://en.wikipedia.org/wiki/Rebecca%20Abergel | Rebecca Abergel is a professor of nuclear engineering and of chemistry at University of California, Berkeley. Abergel is also a senior faculty scientist in the chemical sciences division of Lawrence Berkeley National Laboratory, where she directs the Glenn T. Seaborg Center and leads the Heavy Element Chemistry research group. She is the recipient of several awards for her research in nuclear and inorganic chemistry.
Her research interests include ligand design and use of spectroscopic characterization methods to study the biological coordination chemistry and toxicity mechanisms of f-elements and inorganic isotopes, especially as applied to decontamination strategies, waste management, remediation, separation, and radiopharmaceutical development.
Abergel is known for leading the development of new drug products for the treatment of populations contaminated with heavy metals and radionuclides. Clinical development and commercialization of these products are now spearheaded by HOPO Therapeutics, which she co-founded.
Early life and education
Abergel was born in Caracas, Venezuela and grew up in Paris, France. She attended the École Normale Supérieure of Paris for her undergraduate degree, where she studied chemistry. While an undergraduate, she received a scholarship to work in the laboratory of Prof. John Arnold at the University of California, Berkeley. She remained at UC Berkeley to conduct her graduate studies, under the supervision of Prof. Ken Raymond. Her doctoral work focused on the synthesis and characterization of siderophore analogs to probe microbial iron transport systems and to develop new iron chelating agents. After earning her PhD in inorganic chemistry, Abergel pursued postdoctoral research in the UC Berkeley Department of Chemistry and the group of Prof. Roland Strong at the Fred Hutchinson Cancer Research Center. There she investigated the bacteriostatic function of the innate immune protein siderocalin in binding siderophores from pathogenic microorganisms such as Bacillus anthracis, for the development of new antibiotics.
Independent career
Abergel began her independent career at Berkeley Lab in 2009. She joined the Nuclear Engineering Department of UC Berkeley in 2018 and became the Heavy Element Chemistry Group Leader and Glenn T. Seaborg Center Director at Berkeley Lab that same year. In 2023, she joined the UC Berkeley Chemistry Department and became Associate Dean of the College of Engineering.
Honors
Radiation Research Society Vice-President Elect (2023)
Berkeley Lab Director’s Award for Exceptional Achievement in Tech Transfer (2022)
UC Berkeley Kenneth N. Raymond Lectureship in Inorganic Chemistry (2022)
Bakar Faculty Fellow (2021)
DOE Secretary of Energy Achievement Honor Award - COVID-19 Clinical Testing Teams (2020)
DOE Secretary of Energy Achievement Honor Award - National Virtual Biotechnology Laboratory (2020)
Hellman Faculty Fellow (2020)
AAAS Fellow (2019)
KAIST Nuclear & Quantum Engineering Pioneer Lecturer (2019)
American Chemical Society WCC Rising Star award (2017)
DOE Early Career Award (2014)
MIT Technology Review Innovators Under 35 – France (2014)
Berkeley Lab Director’s Award for Exceptional Scientific Achievement (2013)
Berkeley Lab Women at the Lab Award (2013)
Radiation Research Society Junior Faculty NCRP Award (2013)
Cooley’s Anemia Foundation Young Investigator Award (2009)
References
21st-century French scientists
21st-century French chemists
French women scientists
Living people
Year of birth missing (living people)
École Normale Supérieure alumni
UC Berkeley College of Engineering faculty
Lawrence Berkeley National Laboratory people
French women chemists
Inorganic chemists
21st-century French women | Rebecca Abergel | Chemistry | 702 |
20,841,138 | https://en.wikipedia.org/wiki/Crimidine | Crimidine is a convulsant poison used as a rodenticide. Crimidine was originally known by its product name, Castrix. It was originally produced in the 1940s by the German conglomerate IG Farben. It is classified as an extremely hazardous substance in the United States as defined in Section 302 of the U.S. Emergency Planning and Community Right-to-Know Act (42 U.S.C. 11002), and is subject to strict reporting requirements by facilities which produce, store, or use it in significant quantities. It is also no longer used in the United States as a rodenticide, but is still used to this day in other countries.
Mechanism of action
Crimidine is a highly reactive compound. The main mechanism of toxicity of crimidine is that it inhibits vitamin B6, which is used in the metabolism of carbohydrates and amino acids. This is due to the pyrimidine ring that both compounds contain, although the exact mechanism by which crimidine antagonizes vitamin B6 is unknown. Another mechanism of toxicity is crimidine's deactivating effect on acetylcholinesterase.
The serine residue that is part of acetylcholinesterase acts as a nucleophile and eventually displaces the chlorine of the C-Cl bond that is present in crimidine. Unlike with acetylcholine, the resulting serine-crimidine bond does not hydrolyze, permanently deactivating the enzyme.
Toxicity
Crimidine is a fast acting convulsant, with an LD50 of 5 mg/kg. The earliest symptoms can develop within 20–40 minutes. These symptoms can include burning, irritation, and itching at the site of exposure or intake. Following these initial symptoms, convulsions follow and can be fatal. Low dose, long-term exposure can lead to damage in the central nervous system, resulting in muscle stiffness, restlessness, and sensitivity to light and noise. Although crimidine is fast acting, it is also quickly excreted and can pass through the system in less than 24 hours.
Intravenous vitamin B6 should be given as soon as poisoning is suspected.
References
External links
Convulsants
Pesticides
Pyrimidines
Chloroarenes
Acetylcholinesterase inhibitors
Chloropyrimidines
Vitamin B6 antagonists
Neurotoxins
Dimethylamino compounds | Crimidine | Chemistry,Biology,Environmental_science | 505 |
14,419,407 | https://en.wikipedia.org/wiki/Matrix%20Gla%20protein | Matrix Gla protein (MGP) is a member of a family of vitamin K2 dependent, Gla-containing proteins. MGP binds calcium ions with high affinity, similar to other Gla-containing proteins. The protein acts as an inhibitor of vascular mineralization and plays a role in bone organization.
MGP is found in a number of body tissues in mammals, birds, and fish. Its mRNA is present in bone, cartilage, heart, and kidney.
It is present in bone together with the related vitamin K2-dependent protein osteocalcin. In bone, its production is increased by vitamin D.
Genetics
The MGP gene was mapped to the short arm of chromosome 12 in 1990. Its mRNA sequence is 585 bases long in humans.
Physiology
MGP and osteocalcin are both calcium-binding proteins that may participate in the organisation of bone tissue. Both have glutamate residues that are post-translationally carboxylated by the enzyme gamma-glutamyl carboxylase in a reaction that requires Vitamin K hydroquinone.
Role in disease
Abnormalities in the MGP gene have been linked with Keutel syndrome, a rare condition characterised by abnormal calcium deposition in cartilage, peripheral stenosis of the pulmonary artery, and midfacial hypoplasia.
Mice that lack MGP develop to term but die within two months as a result of arterial calcification which leads to blood-vessel rupture.
References
External links
Extracellular matrix proteins
Genes on human chromosome 12
Glycoproteins | Matrix Gla protein | Chemistry | 323 |
42,195,137 | https://en.wikipedia.org/wiki/UBot%20Studio | UBot Studio is a web browser automation tool, which allows users to build scripts that complete web-based actions such as data mining, web testing, and social media marketing. The scripts are created via a command window inside the UBot Studio browser, and can be compiled into separate executable files (“internet bots”) which can be run on any computer. It has been called “an infrastructural piece of the botting world”.
UBot Studio was developed by Seth Turin Media, Inc. First released in 2009, UBot Studio is the only web automation product designed for internet marketing automation. Advanced versions of UBot Studio contain a drag-and-drop user interface designer for bots, image recognition, task scheduler, and the ability to automate non web-based applications.
In 2013, the company introduced an API for the creation of plugins, to allow the addition of non-standard functionality to the software. In 2015, the company released UBot Studio Stealth, with a new browser using the CEF framework and additional features.
References
External links
Automation software
Scripting languages
Web scraping | UBot Studio | Engineering | 230 |
18,618,609 | https://en.wikipedia.org/wiki/Distinctness%20of%20image | Distinctness of image (DOI) is a quantification of the deviation of the direction of light propagation from the regular direction by scattering during transmission or reflection. DOI is sensitive to even subtle scattering effects; the more light is being scattered out of the regular direction the more the initially sharp (well defined) image is blurred (that is, small details are lost). In polluted air it is the sum of all particles of various dimensions (dust, aerosols, vapor, etc.) that induces haze.
DOI is measured to characterize the visual appearance of polished high-gloss surfaces, such as automotive finishes and mirrors, beyond what gloss measurement alone can capture.
Other appearance phenomena are: gloss, haze, and orange peel. Various categories of visual appearance related to the perception of regular or diffuse reflection and transmission of light have been organized under the concept of cesia in an order system with three variables. In this system, DOI is connected to the variable called diffusivity.
Reflected Image Quality (RIQ) vs. DOI
DOI is not sensitive to low amounts of orange peel on highest quality surfaces.
RIQ has more proportionate response to orange peel on a wider range of surface finishes.
RIQ works well in differentiating low gloss surfaces with different specular/diffuse components.
Parameters that affect RIQ
Substrate alignment (horizontal/vertical)
Coating formulation
Substrate
Application technique
References
ASTM Standards on Color & Appearance Measurement
Richard S. Hunter, The Measurement of Appearance, John Wiley & Sons (1987)
External links
Measurement Theory
Vision
Optics | Distinctness of image | Physics,Chemistry | 310 |
43,290,887 | https://en.wikipedia.org/wiki/Tremetone | Tremetone is a constituent of the toxic compound tremetol, found in snakeroot (Ageratina altissima), that causes milk sickness in humans and trembles in livestock. Tremetone is the main constituent of at least 11 chemically related substances in tremetol. Tremetone is toxic to fish, but not to chicken, and is therefore not the major toxic compound in tremetol. Tremetol can be found in a number of different species of the family Asteraceae, including snakeroot and rayless goldenrod (Isocoma pluriflora).
Synthesis
Tremetol, an oil with a straw-colored tinge, was first isolated from white snakeroot by J.F. Couch in 1929. Column chromatography of tremetol yielded a hydrocarbon, two steroids, and three ketones. Further isolation experiments revealed that tremetone is the major ketone constituent of the compound tremetol. Hence, tremetone was hypothesized to be responsible for the “trembles” that characterize the milk sickness disease. Tremetone was first synthesized in July 1963 by DeGraw, Bowen, and Bonner. This synthesis is illustrated below. The final dehydration step was accomplished by treatment with phosphoryl chloride/pyridine at 75 °C.
This synthesis had a 75% yield, but the final product was a racemate that would not suitably undergo chiral resolution. This prevented isolation of the natural levorotary enantiomer of tremetone; thus limiting the ability to further analyze its biological mechanisms. However, in November 1963, the enantiomers of tremetone were isolated by Bowen, et al. via the synthesis illustrated below. Resolution of enantiomers occurred by co-crystallization of the acid following Na/Hg reduction.
See also
Milk sickness
Snakeroot (Ageratina altissima)
Toxol, a structurally related substance (3-hydroxy-tremetone)
References
Piceol ethers
Plant toxins
Benzofurans
Alkene derivatives
Aromatic ketones | Tremetone | Chemistry | 442 |
5,232,171 | https://en.wikipedia.org/wiki/Levant%20bole | Levant bole is an earthy clay brought from the Levant, and historically used in medicine for the same purposes as Armenian bole. It was indeed so similar to Armenian bole that some believed them both to be the same, or at least mixtures of each other. Levant bole was used in several compositions, particularly diascordium, to give it color.
Chambers discusses two other similar boles:
Lemnian bole or terra lemnia from the island of Lemnos, also called terra sigillata
Samian bole or terra samia from the island of Samos
See also
Armenian bole
Sources
Natural materials
Medicinal clay | Levant bole | Physics | 132 |
26,502,557 | https://en.wikipedia.org/wiki/Negative%20room%20pressure | Negative room pressure is an isolation technique used in hospitals and medical centers to prevent cross-contamination from room to room. It includes a ventilation system that generates negative pressure (pressure lower than that of the surroundings) to allow air to flow into the isolation room but not escape from the room, as air will naturally flow from areas with higher pressure to areas with lower pressure, thereby preventing contaminated air from escaping the room. This technique is used to isolate patients with airborne contagious diseases such as influenza (flu), measles, chickenpox, tuberculosis (TB), severe acute respiratory syndrome (SARS-CoV), Middle East respiratory syndrome (MERS-CoV), and coronavirus disease 2019 (COVID-19).
Mechanism
Negative pressure is generated and maintained in a room by a ventilation system that continually attempts to move air out of the room. Replacement air is allowed into the room through a gap under the door (typically about one half-inch high). Except for this gap, the room is as airtight as possible, allowing little air in through cracks and gaps, such as those around windows, light fixtures and electrical outlets. Leakage from these sources can make it more difficult and less energy efficient to maintain room negative pressure.
Because generally there are components of the exhausted air such as chemical contaminants, microorganisms, or radioactive isotopes that would be unacceptable to release into the surrounding outdoor environment, the air outlet must, at a minimum, be located such that it will not expose people or other occupied spaces. Commonly it is exhausted out of the roof of the building. However, in some cases, such as with highly infectious microorganisms in biosafety level 4 rooms, the air must first be mechanically filtered or disinfected by ultraviolet irradiation or chemical means before being released to the surrounding outdoor environment. In the case of nuclear facilities, the air is monitored for the presence of radioactive isotopes and usually filtered before being exhausted through a tall exhaust duct to be released higher in the air away from occupied spaces.
Monitoring and guidelines
In 2003, the CDC published guidelines on infection control, which included recommendations regarding negative pressure isolation rooms. Still absent from the CDC are recommendations of acute negative pressure isolation room monitoring. This has led to hospitals developing their own policies, such as the Cleveland Clinic. Commonly used methods for acute monitoring include the smoke or tissue test and periodic (noncontinuous) or continuous electronic pressure monitoring.
Smoke/tissue test
This test uses smoke or tissue paper to assess room pressurization. A capsule of smoke or a tissue is placed near the bottom of the door, if the smoke or tissue is pulled under the door, the room is negatively pressurized. The advantages of this test are that it is cost efficient and easily performed by hospital staff. The disadvantages are that it is not a continuous test and that it does not measure magnitude. Without a measure for magnitude, isolation rooms may be under- or over-pressurized, even though the smoke/tissue test is positive. A 1994 CDC recommendation stated TB isolation rooms should be checked daily for negative pressure while being used for TB isolation. If these rooms are not being used for patients who have suspected or confirmed TB but potentially could be used for such patients, the negative pressure in the rooms should be checked monthly.
Continuous electronic pressure monitoring
This test uses an electronic device with a pressure port in the isolation room and an isolation port in the corridor to continuously monitor the pressure differential between the spaces. The advantages of this type of monitoring are that the test is continuous and an alarm will alert staff to undesirable pressure changes. The disadvantages of this monitoring are that pressure ports can become contaminated with particulates which can lead to inaccuracy and false alarms, the devices are expensive to purchase and install, and staff must be trained to use and calibrate these devices because the pressure differentials used to achieve the low negative pressure necessitate the use of very sensitive mechanical devices, electronic devices, or pressure gauges to ensure accurate measurements.
See also
Airborne infection isolation room
References
Medical hygiene
Infectious diseases
Pressure
Isolation (health care) | Negative room pressure | Physics | 842 |
48,777,793 | https://en.wikipedia.org/wiki/Boolean%20satisfiability%20algorithm%20heuristics | In computer science, there are certain classes of algorithms (heuristics) that solve certain types of the Boolean satisfiability problem efficiently, despite there being no known efficient algorithm for the general case.
Overview
The Boolean satisfiability (or SAT) problem can be stated formally as:
given a Boolean expression E with variables x_1, ..., x_n, find an assignment of the variables such that E is true. It is seen as the canonical NP-complete problem. Although no algorithm is known that solves SAT in polynomial time, there are classes of SAT problems which do have efficient algorithms that solve them.
The classes of problems amenable to SAT heuristics arise from many practical problems in AI planning, circuit testing, and software verification. Research on constructing efficient SAT solvers has been based on various principles such as resolution, search, local search and random walk, binary decisions, and Stålmarck's algorithm. Some of these algorithms are deterministic, while others may be stochastic.
As there exist polynomial-time algorithms to convert any Boolean expression to conjunctive normal form such as Tseitin's algorithm, posing SAT problems in CNF does not change their computational difficulty. SAT problems are canonically expressed in CNF because CNF has certain properties that can help prune the search space and speed up the search process.
Branching heuristics in conflict-driven algorithms
One of the cornerstones of Conflict-Driven Clause Learning SAT solvers is the DPLL algorithm. The algorithm works by iteratively assigning free variables, and when the algorithm encounters a conflict, it backtracks to a previous iteration and chooses a different assignment of variables. It relies on a branching heuristic to pick the next free variable assignment; the branching heuristic effectively turns the sequence of variable assignments into a decision tree. Different implementations of this heuristic produce markedly different decision trees, and thus have a significant effect on the efficiency of the solver.
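A minimal sketch of this recursive search (in Python; the clause representation, the absence of unit propagation and clause learning, and the trivial default branching rule are simplifications for illustration, not features of any particular solver):

import sys
sys.setrecursionlimit(10_000)

def simplify(clauses, assignment):
    # drop clauses satisfied by the partial assignment and remove falsified literals
    assigned = set(assignment)
    falsified = {-lit for lit in assigned}
    out = []
    for clause in clauses:
        if clause & assigned:
            continue
        out.append(clause - falsified)
    return out

def first_variable(clauses, assignment):
    # placeholder branching heuristic: pick any variable from the first clause
    return abs(next(iter(clauses[0])))

def dpll(clauses, assignment, choose_variable=first_variable):
    # clauses: list of sets of integer literals (negative = negated variable)
    clauses = simplify(clauses, assignment)
    if any(len(c) == 0 for c in clauses):   # an empty clause means a conflict
        return None
    if not clauses:                         # every clause is satisfied
        return assignment
    var = choose_variable(clauses, assignment)
    for literal in (var, -var):             # branch: try the variable true, then false
        result = dpll(clauses, assignment + [literal], choose_variable)
        if result is not None:
            return result
    return None                             # both branches failed: backtrack

print(dpll([{1, 2}, {-1, 3}, {-2, -3}], []))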
Early branching Heuristics
Heuristics such as Böhm's heuristic, the Maximum Occurrences on Minimum sized clauses (MOM) heuristic, and the Jeroslow–Wang heuristic can be regarded as greedy algorithms. Their basic premise is to choose a free variable assignment that will satisfy the most already unsatisfied clauses in the Boolean expression. However, as Boolean expressions get larger, more complicated, or more structured, these heuristics fail to capture useful information about these problems that could improve efficiency; they often get stuck in local maxima or do not consider the distribution of variables. Additionally, larger problems require more processing, as the operation of counting free variables in unsatisfied clauses dominates the run-time. A sketch of one such score appears below.
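A small sketch of the Jeroslow–Wang score (assuming its common textbook form, in which each occurrence of a literal is weighted by 2 raised to the negative length of its clause; solver implementations vary in the details):

from collections import defaultdict

def jeroslow_wang_pick(clauses):
    # clauses: iterable of lists of integer literals (negative = negated variable)
    scores = defaultdict(float)
    for clause in clauses:
        weight = 2.0 ** (-len(clause))      # literals in short clauses count for more
        for lit in clause:
            scores[lit] += weight
    return max(scores, key=scores.get)      # literal (variable plus polarity) to branch on

print(jeroslow_wang_pick([[1, -2], [1, 3, 4], [-2, 3]]))   # picks -2 in this small example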
Variable State Independent Decaying Sum
An influential heuristic called Variable State Independent Decaying Sum (VSIDS) attempts to score each variable. VSIDS starts by looking at small portions of the Boolean expression and assigning each phase of a variable (a variable and its negated complement) a score proportional to the number of clauses that variable phase is in. As VSIDS progresses and searches more parts of the Boolean expression, periodically, all scores are divided by a constant. This discounts the effect of the presence of variables in earlier-found clauses in favor of variables with a greater presence in more recent clauses. VSIDS will select the variable phase with the highest score to determine where to branch.
VSIDS is quite effective because the scores of variable phases are independent of the current variable assignment, so backtracking is much easier. Further, VSIDS guarantees that each variable assignment satisfies the greatest number of recently searched segments of the Boolean expression.
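A minimal sketch of VSIDS-style bookkeeping (the bump amount, decay factor, and decay period below are illustrative placeholders; solvers such as Chaff and MiniSat tune these values and often rescale the bump increment instead of the stored scores):

class VSIDS:
    def __init__(self, decay=0.5, period=256):
        self.scores = {}                 # one score per literal ("variable phase")
        self.decay = decay
        self.period = period
        self.conflicts = 0

    def bump(self, learned_clause):
        # called each time a conflict clause is recorded
        for lit in learned_clause:
            self.scores[lit] = self.scores.get(lit, 0.0) + 1.0
        self.conflicts += 1
        if self.conflicts % self.period == 0:
            for lit in self.scores:      # periodic decay favours recently active literals
                self.scores[lit] *= self.decay

    def pick(self, unassigned_vars):
        # branch on the unassigned literal with the highest score
        candidates = [lit for lit in self.scores if abs(lit) in unassigned_vars]
        return max(candidates, key=self.scores.get, default=None)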
Stochastic solvers
For MAX-SAT, the version of SAT in which the number of satisfied clauses is maximized, solvers also use probabilistic algorithms. If we are given a Boolean expression F, with variables x_1, ..., x_n, and we set each variable true or false independently with probability 1/2, then each clause c containing k distinct literals is satisfied by the random assignment with probability
Pr(c is satisfied) = 1 − (1/2)^k.
This is because each literal in c is satisfied with probability 1/2, and we only need one literal in c to be satisfied, so c fails only when all k of its literals fail. Since every clause contains at least one literal, this gives
Pr(c is satisfied) ≥ 1/2
for every clause, and by linearity of expectation the random assignment satisfies at least half of the clauses of F in expectation. Randomly assigning variable values is therefore a 1/2-approximation algorithm for MAX-SAT. For MAX-3SAT, in which every clause has exactly three distinct literals, the same argument gives an expected 7/8 fraction of satisfied clauses, and by the PCP theorem (through Håstad's inapproximability result) no polynomial-time algorithm can guarantee a strictly better ratio unless P = NP, so this simple algorithm cannot be further improved in that setting.
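The clause probability above is easy to verify empirically (a small Monte Carlo sketch; the particular clause and trial count are arbitrary choices for illustration):

import random

def satisfied(clause, assignment):
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

clause = [1, -2, 3]                     # k = 3 literals
trials = 100_000
hits = sum(satisfied(clause, {v: random.random() < 0.5 for v in (1, 2, 3)})
           for _ in range(trials))
print(hits / trials)                    # close to 1 - (1/2)**3 = 0.875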
Other stochastic SAT solvers, such as WalkSAT and GSAT are an improvement to the above procedure. They start by randomly assigning values to each variable and then traverse the given Boolean expression to identify which variables to flip to minimize the number of unsatisfied clauses. They may randomly select a variable to flip or select a new random variable assignment to escape local maxima, much like a simulated annealing algorithm.
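A compressed sketch of such a local search in the style of WalkSAT (the noise parameter, flip limit, and absence of restarts are illustrative simplifications; published variants differ in how they choose the variable to flip within a broken clause):

import random

def walksat(clauses, n_vars, max_flips=10_000, noise=0.5):
    assign = {v: random.choice([True, False]) for v in range(1, n_vars + 1)}
    def sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses if not sat(c)]
        if not unsatisfied:
            return assign                         # every clause satisfied
        clause = random.choice(unsatisfied)
        if random.random() < noise:
            var = abs(random.choice(clause))      # random-walk step to escape local optima
        else:
            def broken_after_flip(v):             # greedy step: minimize unsatisfied clauses
                assign[v] = not assign[v]
                broken = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return broken
            var = min({abs(lit) for lit in clause}, key=broken_after_flip)
        assign[var] = not assign[var]
    return None                                   # give up after max_flips

print(walksat([[1, 2], [-1, 3], [-2, -3]], n_vars=3))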
Weighted SAT problems
Numerous weighted SAT problems exist as the optimization versions of the general SAT problem. In this class of problems, each clause in a CNF Boolean expression is given a weight. The objective is to maximize or minimize the total sum of the weights of the satisfied clauses given a Boolean expression. Weighted Max-SAT is the maximization version of this problem, and Max-SAT is the instance of the weighted Max-SAT problem in which the weights of all clauses are the same. The partial Max-SAT problem is the problem where some clauses necessarily must be satisfied (hard clauses) and the sum total of weights of the rest of the clauses (soft clauses) is to be maximized or minimized, depending on the problem. Partial Max-SAT represents an intermediary between Max-SAT (all clauses are soft) and SAT (all clauses are hard).
Note that the stochastic probabilistic solvers can also be used to find optimal approximations for Max-SAT.
Variable splitting
Variable splitting is a tool to find upper and lower bounds on a Max-SAT problem. It involves splitting a variable x into new variables, one for all but one occurrence of x in the original Boolean expression. For example, if x occurs three times, as in the Boolean expression
(x ∨ y) ∧ (x ∨ z) ∧ (¬x ∨ w),
it will become
(x ∨ y) ∧ (x_1 ∨ z) ∧ (¬x_2 ∨ w),
with x, x_1, x_2 being all distinct variables.
This relaxes the problem by introducing new variables into the Boolean expression, which has the effect of removing many of the constraints in the expression. Because any assignment of the variables in the original expression can be represented by an assignment of the variables in the split expression (by giving every copy of a split variable the same value), the minimization and maximization of the weights of the split expression represent lower and upper bounds on the minimization and maximization of the weights of the original expression. A sketch of the transformation appears below.
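A sketch of the splitting transformation (assuming clauses are represented as lists of integer literals; keeping the first occurrence and renaming each later occurrence is one straightforward choice of scheme):

def split_variable(clauses, x, next_free):
    # keep the first occurrence of variable x, rename every later occurrence
    # (positive or negated) to a fresh variable
    new_clauses, seen_first = [], False
    for clause in clauses:
        renamed = []
        for lit in clause:
            if abs(lit) == x and seen_first:
                fresh = next_free
                next_free += 1
                renamed.append(fresh if lit > 0 else -fresh)
            else:
                if abs(lit) == x:
                    seen_first = True
                renamed.append(lit)
        new_clauses.append(renamed)
    return new_clauses, next_free

# (x or y) and (x or z) and (not x or w), encoded with x=1, y=2, z=3, w=4
print(split_variable([[1, 2], [1, 3], [-1, 4]], x=1, next_free=5)[0])
# [[1, 2], [5, 3], [-6, 4]]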
Partial Max-SAT
Partial Max-SAT can be solved by first considering all of the hard clauses and solving them as an instance of SAT. The total maximum (or minimum) weight of the soft clauses can be evaluated given the variable assignment necessary to satisfy the hard clauses and trying to optimize the free variables (the variables that the satisfaction of the hard clauses does not depend on). The latter step is an implementation of Max-SAT given some pre-defined variables. Of course, different variable assignments that satisfy the hard clauses might have different optimal free variable assignments, so it is necessary to check different hard clause satisfaction variable assignments.
Data structures for storing clauses
As SAT solvers and practical SAT problems (e.g. circuit verification) get more advanced, the Boolean expressions of interest may exceed millions of variables with several million clauses; therefore, efficient data structures to store and evaluate the clauses must be used.
Expressions can be stored as a list of clauses, where each clause is a list of variables, much like an adjacency list. Though these data structures are convenient for manipulation (adding elements, deleting elements, etc.), they rely on many pointers, which increases their memory overhead, decreases cache locality, and increases cache misses, which renders them impractical for problems with large clause counts and large clause sizes.
When clause sizes are large, more efficient analogous implementations include storing expressions as a list of clauses, where each clause is represented as a matrix that represents the clauses and the variables present in that clause, much like an adjacency matrix. The elimination of pointers and the contiguous memory occupation of arrays serve to decrease memory usage and increase cache locality and cache hits, which offers a run-time speed up compared to the aforesaid implementation.
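A minimal illustration of the contiguous layout described above (a simplified sketch; production solvers combine such flat storage with schemes like watched literals, which are not shown here):

import numpy as np

clauses = [[1, -2, 3], [-1, 4], [2, -3, -4, 5]]

# all literals in one contiguous array, plus an index array marking where
# each clause starts (the final entry marks the end of the last clause)
literals = np.array([lit for c in clauses for lit in c], dtype=np.int32)
starts = np.cumsum([0] + [len(c) for c in clauses])

def clause(i):
    return literals[starts[i]:starts[i + 1]]   # a view into the flat array

print(clause(1))   # [-1  4]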
References
Boolean algebra | Boolean satisfiability algorithm heuristics | Mathematics | 1,703 |
5,034,169 | https://en.wikipedia.org/wiki/Windows%20service | In Windows NT operating systems, a Windows service is a computer program that operates in the background. It is similar in concept to a Unix daemon. A Windows service must conform to the interface rules and protocols of the Service Control Manager, the component responsible for managing Windows services. It is the Services and Controller app, services.exe, that launches all the services and manages their actions, such as start, end, etc.
Windows services can be configured to start when the operating system is started and run in the background as long as Windows is running. Alternatively, they can be started manually or by an event. Windows NT operating systems include numerous services which run in context of three user accounts: System, Network Service and Local Service. These Windows components are often associated with Host Process for Windows Services. Because Windows services operate in the context of their own dedicated user accounts, they can operate when a user is not logged on.
Prior to Windows Vista, services installed as an "interactive service" could interact with Windows desktop and show a graphical user interface. In Windows Vista, however, interactive services are deprecated and may not operate properly, as a result of Windows Service hardening.
Administration
Windows administrators can manage services via:
The Services snap-in (found under Administrative Tools in Windows Control Panel)
Sc.exe
Windows PowerShell
Services snap-in
The Services snap-in, built upon Microsoft Management Console, can connect to the local computer or a remote computer on the network, enabling users to:
view a list of installed services along with service name, descriptions and configuration
start, stop, pause or restart services
specify service parameters when applicable
change the startup type. Acceptable startup types include:
Automatic: The service starts at system startup.
Automatic (Delayed): The service starts a short while after the system has finished starting up. This option was introduced in Windows Vista in an attempt to reduce the boot-to-desktop time. However, not all services support delayed start.
Manual: The service starts only when explicitly summoned.
Disabled: The service is disabled. It will not run.
change the user account context in which the service operates
configure recovery actions that should be taken if a service fails
inspect service dependencies, discovering which services or device drivers depend on a given service or upon which services or device drivers a given service depends
export the list of services as a text file or as a CSV file
Command line
The command-line tool to manage Windows services is sc.exe. It is available for all versions of Windows NT. This utility is included with Windows XP and later and also in ReactOS.
The sc command's scope of management is restricted to the local computer. However, starting with Windows Server 2003, not only can sc do all that the Services snap-in does, but it can also install and uninstall services.
The sc command duplicates some features of the net command.
The ReactOS version was developed by Ged Murphy and is licensed under the GPL.
Examples
The following example enumerates the status for active services & drivers.
C:\>sc query
The following example displays the status for the Windows Event log service.
C:\>sc query eventlog
PowerShell
The Microsoft.PowerShell.Management PowerShell module (included with Windows) has several cmdlets which can be used to manage Windows services:
Get-Service
New-Service
Restart-Service
Resume-Service
Set-Service
Start-Service
Stop-Service
Suspend-Service
Other management tools
Windows also includes components that can do a subset of what the snap-in, Sc.exe and PowerShell do. The net command can start, stop, pause or resume a Windows service. In Windows Vista and later, Windows Task Manager can show a list of installed services and start or stop them. MSConfig can enable or disable (see startup type description above) Windows services.
Installation
Windows services are installed and removed via *.INF setup scripts by SetupAPI; an installed service can be started immediately following its installation, and a running service can be stopped before its deinstallation.
Development
Writing native services
For a program to run as a Windows service, the program needs to be written to handle service start, stop, and pause messages from the Service Control Manager (SCM) through the System Services API. SCM is the Windows component responsible for managing service processes.
Wrapping applications as a service
The Windows Resource Kit for Windows NT 3.51, Windows NT 4.0 and Windows 2000 provides tools to control the use and registration of services: SrvAny.exe acts as a service wrapper to handle the interface expected of a service (e.g. handle service_start and respond sometime later with service_started or service_failed) and allow any executable or script to be configured as a service. Sc.exe allows new services to be installed, started, stopped and uninstalled.
See also
Windows services
Windows Service Hardening
svchost.exe
Concept
Background process
Daemon (computing)
DOS Protected Mode Services
Terminate-and-stay-resident program
Device driver
Operating system service management
Service Control Manager
Service Management Facility
Service wrapper
References
Further reading
David B. Probert, Windows Service Processes
External links
Windows Sysinternals: Autoruns for Windows v13.4 – An extremely detailed query of services
Service Management With Windows Sc From Command Line – Windows Service Management Tutorial
Windows Service Manager Tray
Process (computing) | Windows service | Technology | 1,105 |
14,799,091 | https://en.wikipedia.org/wiki/ATBF1 | Zinc finger homeobox protein 3 is a protein that in humans is encoded by the ZFHX3 gene.
References
Further reading
External links
Transcription factors | ATBF1 | Chemistry,Biology | 33 |
23,666,004 | https://en.wikipedia.org/wiki/IntEnz | IntEnz (Integrated relational Enzyme database) contains data on enzymes organized by enzyme EC number and is the official version of the Enzyme Nomenclature system developed by the International Union of Biochemistry and Molecular Biology.
References
External links
Enzyme databases
Science and technology in Cambridgeshire
South Cambridgeshire District | IntEnz | Chemistry,Biology | 54 |
57,328,403 | https://en.wikipedia.org/wiki/Aplysioviolin | Aplysioviolin is a purple-colored molecule secreted by sea hares of the genera Aplysia and Dolabella to deter predators. Aplysioviolin is a chemodeterrent, serving to dispel predators on olfactory and gustatory levels as well as by temporarily blinding predators with the molecule's dark color. Aplysioviolin is an important component of secreted ink and is strongly implicated in the sea hares' predatory escape mechanism. While the ink mixture as a whole may produce dangerous hydrogen peroxide and is relatively acidic, the aplysioviolin component alone has not been shown to produce human toxicity.
Biosynthetic origin
Aplysioviolin is a metabolic product of Aplysia californica species of sea hare, and is a major component to its ink mixture. Sea hares first consume red algae as nutriment, and extract from it the light-harvesting pigment phycoerythrin, cleaving it to separate the red-colored chromophore phycoerythrobilin from its covalently-bound protein structure. The sea hare then methylates one of phycoerythrobilin's two carboxylic acid functional groups to form aplysioviolin, which is concentrated and then stored in the ink gland.
Mechanism of action
Aplysioviolin, when squirted or otherwise exposed to predators, causes avoidance behavior that allows the sea hare to escape from being eaten. While its effects on predatory behavior have been investigated, the precise enzymatic targets of aplysioviolin are as yet unknown. The behavioral effects of aplysioviolin have been especially characterized in blue crabs, whose feeding behavior is relatively easy to observe. In addition, aplysioviolin has been shown to deter the approach of spiny lobsters, sea catfish, and other fish and crustacean species. The sea anemone Anthopleura sola has also been shown to retract its feeding protrusions when exposed to aplysioviolin. Aplysioviolin is known to be the major chemodeterrent compound in Aplysia but it is not the only one; both opaline and phycoerythrobilin have been shown to carry chemodeterrent effects, although they are less potent than aplysioviolin. Concentrations of aplysioviolin and phycoerythrobilin in ink are dependent on species: one study showed a 9:1 ratio (27 mg/mL and 3 mg/mL) of aplysioviolin to phycoerythrobilin in A. californica, and a 3.4:1 ratio (2.4 mg/mL and 0.7 mg/mL) for A. dactylomela.
History
Aplysioviolin, along with the other components of sea hare ink, has been utilized as a dye since antiquity. Aplysioviolin in particular has been implicated in classical-age dyeing, and has recently been the subject of investigation as the ancient tekhelet (תְּכֵלֶת) dye of Hebrew and other Mediterranean civilizations, though it remains one of several possible historical contenders. Aplysioviolin was first specifically isolated and characterized as a pH-dependent color-changing zoochrome by Lederer & Huttrer in 1942. A first structure was proposed by Rüdiger in 1967 using a chromic-acid-based microdegradation technique. This technique was similarly applied in the years following to characterize the structures of the related compounds phycoerythrobilin and phycocyanobilin. The 1967 proposed structure was later modified to remove an angular hydroxyl group at the 7' position, and the final structure was given by Rüdiger & O'Carra in 1969.
Human applications
The principal application of aplysioviolin has been historically in dyeing textiles. Aplysioviolin, in contrast to other more widely-used dyes, is considered a light-sensitive arylmethane dye, and is thus known for fading over time. Other pigments have been similarly extracted from marine animals, including Tyrian purple (6,6-dibromoindigo), from Murex purpuream shellfish, and additionally used as dyes.
Aplysioviolin has seen renewed interest in recent years due to its application to medicine and optical microscopy. Especially given its chirality, aplysioviolin and other natural compounds may serve as useful tools for stereoselective drug production and directed optical polarization. Within the past decade, aplysioviolin has additionally been hypothesized to confer medical pharmacodynamic effects. While as of yet uncharacterized in humans, the bioactive effects seen in fish are hypothesized to be recapitulated in some form in mammalian organisms.
References
Aplysiidae
Alkaloids
Methyl esters
Carboxylic acids
Lactams
Tetrapyrroles | Aplysioviolin | Chemistry | 1,097 |
348,560 | https://en.wikipedia.org/wiki/Morse%20theory | In mathematics, specifically in differential topology, Morse theory enables one to analyze the topology of a manifold by studying differentiable functions on that manifold. According to the basic insights of Marston Morse, a typical differentiable function on a manifold will reflect the topology quite directly. Morse theory allows one to find CW structures and handle decompositions on manifolds and to obtain substantial information about their homology.
Before Morse, Arthur Cayley and James Clerk Maxwell had developed some of the ideas of Morse theory in the context of topography. Morse originally applied his theory to geodesics (critical points of the energy functional on the space of paths). These techniques were used in Raoul Bott's proof of his periodicity theorem.
The analogue of Morse theory for complex manifolds is Picard–Lefschetz theory.
Basic concepts
To illustrate, consider a mountainous landscape surface M (more generally, a manifold). If f is the function giving the elevation of each point, then the inverse image of a point in R is a contour line (more generally, a level set). Each connected component of a contour line is either a point, a simple closed curve, or a closed curve with a double point. Contour lines may also have points of higher order (triple points, etc.), but these are unstable and may be removed by a slight deformation of the landscape. Double points in contour lines occur at saddle points, or passes, where the surrounding landscape curves up in one direction and down in the other.
Imagine flooding this landscape with water. When the water reaches elevation a, the underwater surface is M^a = f^{-1}(−∞, a], the points with elevation a or below. Consider how the topology of this surface changes as the water rises. It appears unchanged except when a passes the height of a critical point, where the gradient of f is 0 (more generally, the Jacobian matrix acting as a linear map between tangent spaces does not have maximal rank). In other words, the topology of M^a does not change except when the water either (1) starts filling a basin, (2) covers a saddle (a mountain pass), or (3) submerges a peak.
To these three types of critical points, basins, passes, and peaks (i.e. minima, saddles, and maxima), one associates a number called the index: the number of independent directions in which f decreases from the point. More precisely, the index of a non-degenerate critical point b of f is the dimension of the largest subspace of the tangent space to M at b on which the Hessian of f is negative definite. The indices of basins, passes, and peaks are 0, 1, and 2, respectively.
Considering a more general surface, let M be a torus oriented as in the picture, with f again taking a point to its height above the plane. One can again analyze how the topology of the underwater surface M^a changes as the water level a rises.
Starting from the bottom of the torus, let p, q, r, and s be the four critical points of index 0, 1, 1, and 2, corresponding to the basin, two saddles, and peak, respectively. When a is less than f(p), then M^a is the empty set. After a passes the level of p, when f(p) < a < f(q), then M^a is a disk, which is homotopy equivalent to a point (a 0-cell) which has been "attached" to the empty set. Next, when a exceeds the level of q, and f(q) < a < f(r), then M^a is a cylinder, and is homotopy equivalent to a disk with a 1-cell attached (image at left). Once a passes the level of r, and f(r) < a < f(s), then M^a is a torus with a disk removed, which is homotopy equivalent to a cylinder with a 1-cell attached (image at right). Finally, when a is greater than the critical level of s, M^a is a torus, i.e. a torus with a disk (a 2-cell) removed and re-attached.
This illustrates the following rule: the topology of M^a does not change except when a passes the height of a critical point; at this point, a γ-cell is attached to M^a, where γ is the index of the point. This does not address what happens when two critical points are at the same height, which can be resolved by a slight perturbation of f. In the case of a landscape or a manifold embedded in Euclidean space, this perturbation might simply be tilting slightly, rotating the coordinate system.
One must take care to make the critical points non-degenerate. To see what can pose a problem, let M = R and let f(x) = x³. Then 0 is a critical point of f, but the topology of M^a does not change when a passes 0. The problem is that the second derivative of f is also 0 at 0, that is, the Hessian of f vanishes and this critical point is degenerate. This situation is unstable, since by slightly deforming f to f(x) = x³ + εx, the degenerate critical point is either removed (ε > 0) or breaks up into two non-degenerate critical points (ε < 0).
Formal development
For a real-valued smooth function f : M → R on a differentiable manifold M, the points where the differential of f vanishes are called critical points of f and their images under f are called critical values. If at a critical point b the matrix of second partial derivatives (the Hessian matrix) is non-singular, then b is called a non-degenerate critical point; if the Hessian is singular then b is a degenerate critical point.
For the functions
f(x) = a + bx + cx² + dx³ + ⋯
from R to R, f has a critical point at the origin if b = 0, which is non-degenerate if c ≠ 0 (that is, f is of the form a + cx² + ⋯) and degenerate if c = 0 (that is, f is of the form a + dx³ + ⋯). A less trivial example of a degenerate critical point is the origin of the monkey saddle.
The index of a non-degenerate critical point b of f is the dimension of the largest subspace of the tangent space to M at b on which the Hessian is negative definite. This corresponds to the intuitive notion that the index is the number of directions in which f decreases. The degeneracy and index of a critical point are independent of the choice of the local coordinate system used, as shown by Sylvester's Law.
Morse lemma
Let b be a non-degenerate critical point of f : M → R. Then there exists a chart (x_1, x_2, ..., x_n) in a neighborhood U of b such that x_i(b) = 0 for all i and
f(x) = f(b) − x_1² − ⋯ − x_α² + x_{α+1}² + ⋯ + x_n²
throughout U. Here α is equal to the index of f at b. As a corollary of the Morse lemma, one sees that non-degenerate critical points are isolated. (Regarding an extension to the complex domain see Complex Morse Lemma. For a generalization, see Morse–Palais lemma).
Fundamental theorems
A smooth real-valued function on a manifold M is a Morse function if it has no degenerate critical points. A basic result of Morse theory says that almost all functions are Morse functions. Technically, the Morse functions form an open, dense subset of all smooth functions M → R in the C² topology. This is sometimes expressed as "a typical function is Morse" or "a generic function is Morse".
As indicated before, we are interested in the question of when the topology of M^a = f^{-1}(−∞, a] changes as a varies. Half of the answer to this question is given by the following theorem.
Theorem. Suppose f is a smooth real-valued function on M, a < b, f^{-1}[a, b] is compact, and there are no critical values between a and b. Then M^a is diffeomorphic to M^b, and M^b deformation retracts onto M^a.
It is also of interest to know how the topology of M^a changes when a passes a critical point. The following theorem answers that question.
Theorem. Suppose f is a smooth real-valued function on M and p is a non-degenerate critical point of f of index γ, and that f(p) = q. Suppose f^{-1}[q − ε, q + ε] is compact and contains no critical points besides p. Then M^{q+ε} is homotopy equivalent to M^{q−ε} with a γ-cell attached.
These results generalize and formalize the 'rule' stated in the previous section.
Using the two previous results and the fact that there exists a Morse function on any differentiable manifold, one can prove that any differentiable manifold is a CW complex with a γ-cell for each critical point of index γ. To do this, one needs the technical fact that one can arrange to have a single critical point on each critical level, which is usually proven by using gradient-like vector fields to rearrange the critical points.
Morse inequalities
Morse theory can be used to prove some strong results on the homology of manifolds. The number of critical points of index γ of f : M → R is equal to the number of γ-cells in the CW structure on M obtained from "climbing" f. Using the fact that the alternating sum of the ranks of the homology groups of a topological space is equal to the alternating sum of the ranks of the chain groups from which the homology is computed, then by using the cellular chain groups (see cellular homology) it is clear that the Euler characteristic χ(M) is equal to the sum
χ(M) = Σ_γ (−1)^γ C^γ,
where C^γ is the number of critical points of index γ. Also by cellular homology, the rank of the γ-th homology group of a CW complex M is less than or equal to the number of γ-cells in M. Therefore, the rank of the γ-th homology group, that is, the Betti number b_γ(M), is less than or equal to the number of critical points of index γ of a Morse function on M. These facts can be strengthened to obtain the Morse inequalities:
C^γ − C^{γ−1} + C^{γ−2} − ⋯ ± C^0 ≥ b_γ(M) − b_{γ−1}(M) + b_{γ−2}(M) − ⋯ ± b_0(M).
In particular, for any γ one has
C^γ ≥ b_γ(M).
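For example (a worked check using the height function on the torus from the Basic concepts section; the Betti numbers quoted are the standard ones for the 2-torus), that function has C^0 = 1, C^1 = 2 and C^2 = 1, so
χ(T²) = 1 − 2 + 1 = 0,
and each inequality C^γ ≥ b_γ is an equality, since (b_0, b_1, b_2) = (1, 2, 1); a Morse function realizing the Betti numbers exactly in this way is sometimes called a perfect Morse function.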
This gives a powerful tool to study manifold topology. Suppose on a closed manifold M there exists a Morse function f : M → R with precisely k critical points. In what way does the existence of the function f restrict M? The case k = 2 was studied by Georges Reeb in 1952; the Reeb sphere theorem states that M is homeomorphic to a sphere S^n. The case k = 3 is possible only in a small number of low dimensions, and M is homeomorphic to an Eells–Kuiper manifold.
In 1982 Edward Witten developed an analytic approach to the Morse inequalities by considering the de Rham complex for the perturbed operator d_t = e^{−tf} d e^{tf}.
Application to classification of closed 2-manifolds
Morse theory has been used to classify closed 2-manifolds up to diffeomorphism. If M is oriented, then M is classified by its genus g and is diffeomorphic to a sphere with g handles: thus if g = 0, M is diffeomorphic to the 2-sphere; and if g > 0, M is diffeomorphic to the connected sum of g 2-tori. If N is unorientable, it is classified by a number g > 0 and is diffeomorphic to the connected sum of g real projective spaces RP². In particular two closed 2-manifolds are homeomorphic if and only if they are diffeomorphic.
Morse homology
Morse homology is a particularly easy way to understand the homology of smooth manifolds. It is defined using a generic choice of Morse function and Riemannian metric. The basic theorem is that the resulting homology is an invariant of the manifold (that is, independent of the function and metric) and isomorphic to the singular homology of the manifold; this implies that the Morse and singular Betti numbers agree and gives an immediate proof of the Morse inequalities. An infinite dimensional analog of Morse homology in symplectic geometry is known as Floer homology.
Morse–Bott theory
The notion of a Morse function can be generalized to consider functions that have nondegenerate manifolds of critical points. A Morse–Bott function is a smooth function on a manifold whose critical set is a closed submanifold and whose Hessian is non-degenerate in the normal direction. (Equivalently, the kernel of the Hessian at a critical point equals the tangent space to the critical submanifold.) A Morse function is the special case where the critical manifolds are zero-dimensional (so the Hessian at critical points is non-degenerate in every direction, that is, has no kernel).
The index is most naturally thought of as a pair
(i_−, i_+),
where i_− is the dimension of the unstable manifold at a given point of the critical manifold, and i_+ is equal to i_− plus the dimension of the critical manifold. If the Morse–Bott function is perturbed by a small function on the critical locus, the index of all critical points of the perturbed function on a critical manifold of the unperturbed function will lie between i_− and i_+.
Morse–Bott functions are useful because generic Morse functions are difficult to work with; the functions one can visualize, and with which one can easily calculate, typically have symmetries. They often lead to positive-dimensional critical manifolds. Raoul Bott used Morse–Bott theory in his original proof of the Bott periodicity theorem.
Round functions are examples of Morse–Bott functions, where the critical sets are (disjoint unions of) circles.
Morse homology can also be formulated for Morse–Bott functions; the differential in Morse–Bott homology is computed by a spectral sequence. Frederic Bourgeois sketched an approach in the course of his work on a Morse–Bott version of symplectic field theory, but this work was never published due to substantial analytic difficulties.
See also
References
Further reading
A classic advanced reference in mathematics and mathematical physics.
Lemmas
Smooth functions | Morse theory | Mathematics | 2,569 |
44,338,719 | https://en.wikipedia.org/wiki/Taniyama%20group | In mathematics, the Taniyama group is a group that is an extension of the absolute Galois group of the rationals by the Serre group. It was introduced using an observation by Deligne, and named after Yutaka Taniyama. It was intended to be the group scheme whose representations correspond to the (hypothetical) CM motives over the field Q of rational numbers.
References
Algebraic groups
Langlands program | Taniyama group | Mathematics | 86 |
57,939,963 | https://en.wikipedia.org/wiki/Random%20column%20packing | Random column packing is the practice of packing a distillation column with randomly fitting filtration material in order to optimize surface area over which reactants can interact while minimizing the complexity of construction of such columns. Random column packing is an alternative to structured column packing.
Packed columns
Packed columns utilizing filter media for chemical exchange are the most common devices used in the chemical industry for reactant contact optimization. Packed columns are used in a range of industries to allow intimate contact between two immiscible/partly immiscible fluids, which can be liquid/gas or liquid/liquid. The fluids are passed through a column in a countercurrent flow.
In the column it is important to maintain an effective mass transfer, so it is essential that a packing is selected which will support a large surface area for mass transfer.
History
Random packing was used as early as 1820. Originally the packing material consisted of glass spheres; however, in 1850 they were replaced by more porous pumice stone and pieces of coke.
Applications
Random packed columns are used in a variety of applications, including:
Distillation
Stripping
Carbon dioxide scrubbing
Liquid–liquid extraction
Types
Raschig ring
The Raschig ring is a piece of tube, invented circa 1914, that is used in large numbers in a packing column. Raschig rings are usually made of ceramic or metals, and they provide a large surface area within the column, allowing for interaction between liquid and gas vapors.
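As a rough illustration of why ring packings provide a large interfacial area, the sketch below estimates the geometric specific surface area (surface per unit of packed-bed volume) for a bed of idealized Raschig rings; the ring dimensions, wall thickness and bed voidage are assumed example values, not data for any particular product.

```python
import math

def raschig_ring_area(d_outer, height, wall, voidage):
    """Rough geometric specific surface area (m^2 per m^3 of packed bed)
    for an idealized Raschig ring: outer and inner cylinder walls plus two
    annular end faces. All dimensions in metres; voidage is the assumed
    void fraction of the random bed (typically ~0.6-0.8 for ring packings)."""
    d_inner = d_outer - 2 * wall
    # Surface of a single ring
    area = (math.pi * d_outer * height                        # outer wall
            + math.pi * d_inner * height                      # inner wall
            + 2 * (math.pi / 4) * (d_outer**2 - d_inner**2))  # end faces
    # Solid volume of a single ring
    solid_volume = (math.pi / 4) * (d_outer**2 - d_inner**2) * height
    # Rings per cubic metre of bed = solid fraction / solid volume per ring
    rings_per_m3 = (1.0 - voidage) / solid_volume
    return area * rings_per_m3

# Example (assumed values): 25 mm rings, 2.5 mm wall, 70 % bed voidage
print(round(raschig_ring_area(0.025, 0.025, 0.0025, 0.70)), "m^2/m^3")
```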
Lessing ring
Lessing rings are a type of random packing similar to the Raschig ring, invented in the early 20th century by German-born British chemist Rudolf Lessing (1878-1964) of Mond Nickel Company. Originally wrapped from steel strips according to his 1919 patent, now they are made of ceramic. Lessing rings have partitions inside, which increase the surface area and enhance mass transfer efficiency. Lessing rings have a high density and an excellent heat and acid resistance. Lessing rings withstand corrosion and are used in regenerative oxide systems and transfer systems.
Pall ring
Pall rings are the most common form of random packing. They are similar to Lessing rings and were developed from the Raschig ring. Pall rings have similar cylindrical dimensions but have rows of windows which increase performance by increasing the surface area. They are suited for low pressure drop and high capacity applications. They have a degree of randomness and a relatively high liquid hold up, promoting a high absorption, especially when the rate of reaction is slow. The cross structure of the Pall ring makes it mechanically robust and suitable for use in deep packed beds.
Białecki ring
The Białecki ring was patented in 1974 by Zbigniew Białecki, a Polish chemical engineer from Kraków. Białecki rings are an improved version of Raschig rings. The rings may be injection moulded of plastics or press-formed from metal sheet without welding. The specific surface area of the filling ranges between 60 and 440 m2/m3.
Dixon ring
Dixon rings have a similar design to Lessing rings. They are made of stainless steel mesh, giving Dixon rings a low pressure drop after pre-wetting. Dixon rings have a very large surface area and a large liquid hold-up, which give a high rate of mass transfer. Dixon rings are used for laboratory distillation and scrubbing applications.
References
Chemical process engineering | Random column packing | Chemistry,Engineering | 709 |
45,291,929 | https://en.wikipedia.org/wiki/Genesys%20%28website%29 | Genesys is an online, global portal about plant genetic resources for food and agriculture. It is a gateway from which germplasm accessions from gene banks around the world can be easily found and ordered.
The project was started in 2008 by Bioversity International, the Global Crop Diversity Trust and the Secretariat of the International Treaty on Plant Genetic Resources for Food and Agriculture, "to create a single information portal to facilitate the access to, and use of, accessions in ex situ gene banks".
In May 2011, the first version of the website was launched, containing 2.3 million accession records and some three million phenotypic records for 22 crops: bananas, barley, beans, breadfruit, cassava, chickpeas, coconuts, cowpeas, faba beans, finger millet, grass peas, lentils, maize, pearl millet, pigeon peas, potatoes, rice, sorghum, sweet potatoes, taro, wheat and yams. It brought together data from three major networks: the European Plant Genetic Resources Search Catalogue (EURISCO), System-Wide Information Network for Genetic Resources (SINGER) from CGIAR and the US Department of Agriculture's Germplasm Resources Information Network (GRIN).
In 2014, the second version of the website was launched. As of March 2015, the database listed 2.7 million accessions stored in 446 institutes from 252 countries. The source code, notably for the web server, is available online.
See also
International Treaty on Plant Genetic Resources for Food and Agriculture
References
External links
Genesys home page
EURISCO home page
GRIN home page
Plant genetics
German science websites | Genesys (website) | Biology | 339 |
7,433,307 | https://en.wikipedia.org/wiki/List%20of%20New%20York%20City%20housing%20cooperatives | A partial list of housing cooperatives in New York City.
Projects originally built as housing cooperatives
Alku and Alku Toinen, started in 1916 by Finnish immigrants
Hudson View Gardens (1923–25), Hudson Heights, real estate developer Charles Paterno, architect George Fred Pelham Jr.
United Workers Cooperative Colony (1927–1929), 339 + 385 units, on Allerton Avenue in the Bronx, sponsored by communist garment industry workers; known as "The Communist Coops"
Dunbar Apartments, built by John D. Rockefeller Jr. in 1928 as a housing cooperative to provide housing for African Americans. Bankrupt in 1936 and taken over by Rockefeller.
Sponsored by Amalgamated Clothing Workers of America, Architects Springsteen and Goldhammer, Herman Jessor
Amalgamated Housing Cooperative (1927, 1947–49, expansions 1952–55 and 1968–70), Bronx, "The Amalgamated", 1,435 units; still operating as a co-operative
Amalgamated Dwellings (1930), in Cooperative Village, Lower East Side of Manhattan, New York City, 236 units
Hillman Housing Corporation (1947–1950), in Cooperative Village, 807 units
Under the Housing Development Fund Corporation
566 W. 159th Street, Washington Heights
1007-09 E. 174th Street, the Bronx
Lenox Court, East Harlem
Sponsored by the United Housing Foundation and International Ladies' Garment Workers' Union. Architects George W. Springsteen and Herman Jessor
East River Houses (1956), in Cooperative Village, 1,672 units
Seward Park Housing Corporation, in Cooperative Village, 1,728 units
Mutual Houses and Park Reservoir Housing Corporation (1955), Bronx affiliated with Amalgamated Housing
Penn South (1962), 2,820 units, Chelsea, Manhattan
Rochdale Village (1965), 5,860 units, central Queens
Amalgamated Warbasse Houses (1965), 2,585 units, Coney Island, Brooklyn
Amalgamated Towers (1969), 316 units (see "Amalgamated Housing Cooperative" above)
Co-op City (1968–1971), Baychester area of the Bronx, 15,382 units
Twin Pines Village (Starrett City) (1975), 5,881 units, southern Brooklyn
Mitchell-Lama Housing Program
Morningside Gardens (1957), Morningside Heights
Southbridge Towers (1969), Lower Manhattan
Confucius Plaza (1975), Chinatown, Manhattan
Converted rental property
Castle Village (1939, 1985), real estate developer Charles Paterno, architect George Fred Pelham Jr.
See also
List of condominiums in the United States
References
Labor and housing in New York City
2004 Annual Report – Mitchell-Lama Housing Companies in New York State
DHCR-Supervised Developments Within New York City
DHCR-Supervised Developments Outside New York City
cooperatives
cooperatives | List of New York City housing cooperatives | Engineering | 554 |
4,388,722 | https://en.wikipedia.org/wiki/Herbert%20Fr%C3%B6hlich | Herbert Fröhlich (9 December 1905 – 23 January 1991) FRS was a German-born British physicist.
Career
In 1927, Fröhlich entered Ludwig-Maximilians University in Munich to study physics, and received his doctorate under Arnold Sommerfeld in 1930. His first position was as Privatdozent at the University of Freiburg. Due to rising anti-Semitism and the Deutsche Physik movement under Adolf Hitler, and at the invitation of Yakov Frenkel, Fröhlich went to the Soviet Union, in 1933, to work at the Ioffe Physico-Technical Institute in Leningrad. During the Great Purge following the murder of Sergei Kirov, he fled to England in 1935. Except for a short visit to the Netherlands and a brief internment during World War II, he worked in Nevill Francis Mott's department, at the University of Bristol, until 1948, rising to the position of Reader. At the invitation of James Chadwick, he took the Chair for Theoretical Physics at the University of Liverpool.
In 1950, Bell Telephone Laboratories offered Fröhlich their endowed professorial position at Princeton University. However, at Liverpool he had a purely research post which was attractive to him. He was then newly married to an American, Fanchon Angst, who was studying linguistic philosophy at Somerville College, Oxford under P. F. Strawson, and who did not want to return to the United States at that time.
From 1973 he was Professor of Solid State Physics at the University of Salford, all the while maintaining an office at the University of Liverpool, where he gained emeritus status in 1976 and remained until his death. During 1981, he was a visiting professor at Purdue University. He was nominated for the Nobel Prize in Physics in 1963 and in 1964.
Fröhlich, who pursued theoretical research notably in the fields of superconductivity and bioelectrodynamics, proposed a theory of coherent excitations in biological systems known as Fröhlich coherence. A system that attains this coherent state is known as a Fröhlich condensate, similar to room-temperature non-equilibrium Bose–Einstein condensation of quasiparticles.
Honours and awards
Fröhlich was elected a Fellow of the Royal Society (FRS) in 1951. In 1972, he was awarded the Deutsche Physikalische Gesellschaft Max-Planck Medal and in 1981 an Honorary Doctorate from Purdue University.
Books by Fröhlich
Herbert Fröhlich Elektronentheorie der Metalle. (Struktur und Eigenschaften der Materie in Eigendarstellung, Bd.18). (Springer, 1936, 1969)
Herbert Fröhlich Elektronentheorie der Metalle (Ann Arbor: Edwards Brothers, First US edition, in German, 1943)
Herbert Fröhlich Theory of Dielectrics: Dielectric Constant and Dielectric Loss (Clarendon Press, 1949, 1958)
Herbert Fröhlich and F. Kremer Coherent Excitations in Biological Systems (Springer-Verlag, 1983)
Herbert Fröhlich, editor Biological Coherence and Response to External Stimuli (Springer, 1988)
Personal life
Fröhlich was born on 9 December 1905 in Rexingen, Baden-Württemberg. He was the son of Fanny Frida (née Schwarz) and Jakob Julius Fröhlich, members of an old-established Jewish family, and the brother of Albrecht Fröhlich, a mathematician who was elected Fellow of the Royal Society in 1976.
References
External links
University of Liverpool: Fröhlich, Herbert FRS (1905–1991), Physicist
Did Herbert Fröhlich predict or postdict the isotope effect in superconductors?
Artist statement by Fanchon Fröhlich with:
A portrait of Herbert Fröhlich
1905 births
1991 deaths
Fellows of the Royal Society
Jewish emigrants from Nazi Germany to the United Kingdom
Academics of the University of Salford
Academics of the University of Liverpool
Academics of the University of Bristol
Jewish German physicists
British physicists
20th-century British physicists
Optical physicists
Condensed matter physicists
Scientists from Baden-Württemberg
Semiconductor physicists
Fellows of Somerville College, Oxford
Ludwig Maximilian University of Munich alumni
Winners of the Max Planck Medal | Herbert Fröhlich | Physics,Materials_science | 861 |
216,811 | https://en.wikipedia.org/wiki/Tarski%27s%20circle-squaring%20problem | Tarski's circle-squaring problem is the challenge, posed by Alfred Tarski in 1925, to take a disc in the plane, cut it into finitely many pieces, and reassemble the pieces so as to get a square of equal area. It is possible, using pieces that are Borel sets, but not with pieces cut by Jordan curves.
Solutions
Tarski's circle-squaring problem was proven to be solvable by Miklós Laczkovich in 1990. The decomposition makes heavy use of the axiom of choice and is therefore non-constructive. Laczkovich estimated the number of pieces in his decomposition at roughly 10^50. The pieces used in his decomposition are non-measurable subsets of the plane.
Laczkovich actually proved the reassembly can be done using translations only; rotations are not required. Along the way, he also proved that any simple polygon in the plane can be decomposed into finitely many pieces and reassembled using translations only to form a square of equal area.
It follows from a later result that it is possible to choose the pieces in such a way that they can be moved continuously while remaining disjoint to yield the square. Moreover, this stronger statement can also be proved to be achievable by means of translations only.
A constructive solution was given by Łukasz Grabowski, András Máthé and Oleg Pikhurko in 2016, which worked everywhere except on a set of measure zero. More recently, Andrew Marks and Spencer Unger gave a completely constructive solution using Borel pieces.
Limitations
Lester Dubins, Morris W. Hirsch & Jack Karush proved it is impossible to dissect a circle and make a square using pieces that could be cut with an idealized pair of scissors (that is, having Jordan curve boundary).
Related problems
The Bolyai–Gerwien theorem is a related but much simpler result: it states that one can accomplish such a decomposition of a simple polygon with finitely many polygonal pieces if both translations and rotations are allowed for the reassembly.
These results should be compared with the much more paradoxical decompositions in three dimensions provided by the Banach–Tarski paradox; those decompositions can even change the volume of a set. However, in the plane, a decomposition into finitely many pieces must preserve the sum of the Banach measures of the pieces, and therefore cannot change the total area of a set.
See also
Squaring the circle, a different problem: the task (which has been proven to be impossible) of constructing, for a given circle, a square of equal area with straightedge and compass alone.
References
Discrete geometry
Euclidean plane geometry
Mathematical problems
Geometric dissection | Tarski's circle-squaring problem | Mathematics | 564 |
957,001 | https://en.wikipedia.org/wiki/Ring%20network | A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node – a ring. Data travels from node to node, with each node along the way handling every packet.
Rings can be unidirectional, with all traffic travelling either clockwise or anticlockwise around the ring, or bidirectional (as in SONET/SDH). Because a unidirectional ring topology provides only one pathway between any two nodes, unidirectional ring networks may be disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to the ring. In response, some ring networks add a "counter-rotating ring" (C-Ring) to form a redundant topology: in the event of a break, data are wrapped back onto the complementary ring before reaching the end of the cable, maintaining a path to every node along the resulting C-Ring. Such "dual ring" networks include the ITU-T's PSTN telephony systems network Signalling System No. 7 (SS7), Spatial Reuse Protocol, Fiber Distributed Data Interface (FDDI), Resilient Packet Ring, and Ethernet Ring Protection Switching. IEEE 802.5 networks – also known as IBM Token Ring networks – avoid the weakness of a ring topology altogether: they actually use a star topology at the physical layer and a media access unit (MAU) to imitate a ring at the datalink layer. Ring networks are used by ISPs to provide data backhaul services, connecting the ISP's facilities such as central offices/headends together.
All Signalling System No. 7 (SS7), and some SONET/SDH rings have two sets of bidirectional links between nodes. This allows maintenance or failures at multiple points of the ring usually without loss of the primary traffic on the outer ring by switching the traffic onto the inner ring past the failure points.
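As an illustration of the fault-tolerance point above, the toy sketch below checks which nodes remain reachable in a unidirectional ring after a single link failure, and then again when a counter-rotating ring allows traffic to travel the other way; the node count and the failed link are arbitrary example values.

```python
def reachable(n_nodes, failed_links, bidirectional):
    """Return the set of nodes reachable from node 0 in an n-node ring.
    Links are the pairs (i, (i+1) % n); failed_links is a set of such pairs.
    With bidirectional=True a counter-rotating ring lets traffic travel
    the other way around, so a single cut no longer isolates nodes."""
    seen, stack = {0}, [0]
    while stack:
        i = stack.pop()
        links = [(i, (i + 1) % n_nodes)]               # forward (primary ring)
        if bidirectional:
            links.append(((i - 1) % n_nodes, i))       # backward (counter ring)
        for link in links:
            if link in failed_links:
                continue
            nxt = link[1] if link[0] == i else link[0]
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

# One cut link between nodes 3 and 4 in an 8-node ring (example values)
cut = {(3, 4)}
print(sorted(reachable(8, cut, bidirectional=False)))  # [0, 1, 2, 3]: nodes 4-7 cut off
print(sorted(reachable(8, cut, bidirectional=True)))   # all 8 nodes still reachable
```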
Advantages
Very orderly network where every device has access to the token and the opportunity to transmit
Performs better than a bus topology under heavy network load
Does not require a central node to manage the connectivity between the computers
Due to the point-to-point line configuration of devices with a device on either side (each device is connected to its immediate neighbor), it is quite easy to install and reconfigure since adding or removing a device requires moving just two connections.
Point-to-point line configuration makes it easy to identify and isolate faults.
Ring Protection reconfiguration for line faults of bidirectional rings can be very fast, as switching happens at a high level, and thus the traffic does not require individual rerouting.
Ring topology helps mitigate collisions in a network.
Disadvantages
One malfunctioning workstation can create problems for the entire network. This can be solved by using a dual ring or a switch that closes off the break.
Moving, adding and changing the devices can affect the network
Communication delay is directly proportional to the number of nodes in the network
Bandwidth is shared on all links between devices
More difficult to configure than a star: adding or removing a node requires shutting down and reconfiguring the ring
Access protocols
Rings can be used to carry circuits or packets or a combination of both. SDH rings carry circuits. Circuits are set up with out-of-band signalling protocols, whereas packets are usually carried via a Medium Access Control Protocol (MAC).
The purpose of media access control is to determine which station transmits when. As in any MAC protocol, the aims are to resolve contention and provide fairness. There are three main classes of media access protocol for ring networks: slotted, token and register insertion.
The slotted ring treats the latency of the ring network as a large shift register that permanently rotates. It is formatted into so-called slots of fixed size. A slot is either full or empty, as indicated by control flags in the head of the slot. A station that wishes to transmit waits for an empty slot and puts data in. Other stations can copy out the data and may free the slot, or it may circulate back to the source who frees it. An advantage of source-release, if the sender is banned from immediately re-using it, is that all other stations get the chance to use it first, hence avoiding bandwidth hogging. The pre-eminent example of the slotted ring is the Cambridge Ring.
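The toy simulation below illustrates the slotted-ring access rule described above: a fixed set of slots rotates past the stations, a station may fill an empty slot, and with source release (plus the ban on immediate re-use) every station eventually gets a turn. Slot counts and station behaviour are invented for illustration and do not model any particular system such as the Cambridge Ring.

```python
from collections import deque

def slotted_ring(n_stations, n_slots, messages, steps):
    """Very small slotted-ring model. messages[i] is how many frames
    station i wants to send. Each tick the slots rotate one position;
    a station may claim the empty slot currently in front of it, and it
    frees its own slot when that slot comes back around (source release),
    but is not allowed to re-use it in the same tick."""
    slots = deque([None] * n_slots)      # None = empty, otherwise sender id
    sent = [0] * n_stations
    for _ in range(steps):
        slots.rotate(1)                  # the ring moves the slots along
        for station in range(min(n_stations, n_slots)):
            owner = slots[station]       # slot currently passing this station
            if owner == station:
                slots[station] = None    # own slot returned: free it, skip re-use
            elif owner is None and sent[station] < messages[station]:
                slots[station] = station # claim the empty slot
                sent[station] += 1
    return sent

# 4 stations, 4 slots, unequal offered load (illustrative numbers)
print(slotted_ring(4, 4, messages=[5, 1, 3, 2], steps=40))
```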
Misconceptions
"Token Ring is an example of a ring topology." 802.5 (Token Ring) networks do not use a ring topology at layer 1. Token Ring networks are technologies developed by IBM typically used in local area networks. Token Ring (802.5) networks imitate a ring at layer 2 but use a physical star at layer 1.
"Rings prevent collisions." The term "ring" only refers to the layout of the cables. It is true that there are no collisions on an IBM Token Ring, but this is because of the layer 2 Media Access Control method, not the physical topology (which again is a star, not a ring.) Token passing, not rings, prevent collisions.
"Token passing happens on rings." Token passing is a way of managing access to the cable, implemented at the MAC sublayer of layer 2. Ring topology is the cable layout at layer one. It is possible to do token passing on a bus (802.4) a star (802.5) or a ring (FDDI). Token passing is not restricted to rings.
References
Network topology | Ring network | Mathematics | 1,145 |
152,611 | https://en.wikipedia.org/wiki/Cellular%20differentiation | Cellular differentiation is the process in which a stem cell changes from one type to a differentiated one. Usually, the cell changes to a more specialized type. Differentiation happens multiple times during the development of a multicellular organism as it changes from a simple zygote to a complex system of tissues and cell types. Differentiation continues in adulthood as adult stem cells divide and create fully differentiated daughter cells during tissue repair and during normal cell turnover. Some differentiation occurs in response to antigen exposure. Differentiation dramatically changes a cell's size, shape, membrane potential, metabolic activity, and responsiveness to signals. These changes are largely due to highly controlled modifications in gene expression and are the study of epigenetics. With a few exceptions, cellular differentiation almost never involves a change in the DNA sequence itself. Metabolic composition, however, gets dramatically altered where stem cells are characterized by abundant metabolites with highly unsaturated structures whose levels decrease upon differentiation. Thus, different cells can have very different physical characteristics despite having the same genome.
A specialized type of differentiation, known as terminal differentiation, is of importance in some tissues, including vertebrate nervous system, striated muscle, epidermis and gut. During terminal differentiation, a precursor cell formerly capable of cell division permanently leaves the cell cycle, dismantles the cell cycle machinery and often expresses a range of genes characteristic of the cell's final function (e.g. myosin and actin for a muscle cell). Differentiation may continue to occur after terminal differentiation if the capacity and functions of the cell undergo further changes.
Among dividing cells, there are multiple levels of cell potency, which is the cell's ability to differentiate into other cell types. A greater potency indicates a larger number of cell types that can be derived. A cell that can differentiate into all cell types, including the placental tissue, is known as totipotent. In mammals, only the zygote and subsequent blastomeres are totipotent, while in plants, many differentiated cells can become totipotent with simple laboratory techniques. A cell that can differentiate into all cell types of the adult organism is known as pluripotent. Such cells are called meristematic cells in higher plants and embryonic stem cells in animals, though some groups report the presence of adult pluripotent cells. Virally induced expression of four transcription factors Oct4, Sox2, c-Myc, and Klf4 (Yamanaka factors) is sufficient to create pluripotent (iPS) cells from adult fibroblasts. A multipotent cell is one that can differentiate into multiple different, but closely related cell types. Oligopotent cells are more restricted than multipotent, but can still differentiate into a few closely related cell types. Finally, unipotent cells can differentiate into only one cell type, but are capable of self-renewal. In cytopathology, the level of cellular differentiation is used as a measure of cancer progression. "Grade" is a marker of how differentiated a cell in a tumor is.
Mammalian cell types
Three basic categories of cells make up the mammalian body: germ cells, somatic cells, and stem cells. Each of the approximately 37.2 trillion (3.72×10^13) cells in an adult human has its own copy or copies of the genome except certain cell types, such as red blood cells, that lack nuclei in their fully differentiated state. Most cells are diploid; they have two copies of each chromosome. Such cells, called somatic cells, make up most of the human body, such as skin and muscle cells. Cells differentiate to specialize for different functions.
Germ line cells are any line of cells that give rise to gametes—eggs and sperm—and thus are continuous through the generations. Stem cells, on the other hand, have the ability to divide for indefinite periods and to give rise to specialized cells. They are best described in the context of normal human development.
Development begins when a sperm fertilizes an egg and creates a single cell that has the potential to form an entire organism. In the first hours after fertilization, this cell divides into identical cells. In humans, approximately four days after fertilization and after several cycles of cell division, these cells begin to specialize, forming a hollow sphere of cells, called a blastocyst. The blastocyst has an outer layer of cells, and inside this hollow sphere, there is a cluster of cells called the inner cell mass. The cells of the inner cell mass go on to form virtually all of the tissues of the human body. Although the cells of the inner cell mass can form virtually every type of cell found in the human body, they cannot form an organism. These cells are referred to as pluripotent.
Pluripotent stem cells undergo further specialization into multipotent progenitor cells that then give rise to functional cells. Examples of stem and progenitor cells include:
Radial glial cells (embryonic neural stem cells) that give rise to excitatory neurons in the fetal brain through the process of neurogenesis.
Hematopoietic stem cells (adult stem cells) from the bone marrow that give rise to red blood cells, white blood cells, and platelets.
Mesenchymal stem cells (adult stem cells) from the bone marrow that give rise to stromal cells, fat cells, and types of bone cells
Epithelial stem cells (progenitor cells) that give rise to the various types of skin cells
Muscle satellite cells (progenitor cells) that contribute to differentiated muscle tissue.
A pathway that is guided by the cell adhesion molecules consisting of four amino acids, arginine, glycine, asparagine, and serine, is created as the cellular blastomere differentiates from the single-layered blastula to the three primary layers of germ cells in mammals, namely the ectoderm, mesoderm and endoderm (listed from most distal (exterior) to proximal (interior)). The ectoderm ends up forming the skin and the nervous system, the mesoderm forms the bones and muscular tissue, and the endoderm forms the internal organ tissues.
Dedifferentiation
Dedifferentiation, or integration, is a cellular process seen in the more basal life forms in animals, such as worms and amphibians, where a differentiated cell reverts to an earlier developmental stage, usually as part of a regenerative process. Dedifferentiation also occurs in plant cells. And, in cell culture in the laboratory, cells can change shape or may lose specific properties such as protein expression, which processes are also termed dedifferentiation.
Some hypothesize that dedifferentiation is an aberration that likely results in cancers, but others explain it as a natural part of the immune response that was lost to humans at some point of evolution.
A newly discovered molecule dubbed reversine, a purine analog, has proven to induce dedifferentiation in myotubes. These manifestly dedifferentiated cells, now performing essentially as stem cells, could then redifferentiate into osteoblasts and adipocytes.
Mechanisms
Each specialized cell type in an organism expresses a subset of all the genes that constitute the genome of that species. Each cell type is defined by its particular pattern of regulated gene expression. Cell differentiation is thus a transition of a cell from one cell type to another and it involves a switch from one pattern of gene expression to another. Cellular differentiation during development can be understood as the result of a gene regulatory network. A regulatory gene and its cis-regulatory modules are nodes in a gene regulatory network; they receive input and create output elsewhere in the network. The systems biology approach to developmental biology emphasizes the importance of investigating how developmental mechanisms interact to produce predictable patterns (morphogenesis). However, an alternative view has been proposed recently. Based on stochastic gene expression, cellular differentiation is the result of a Darwinian selective process occurring among cells. In this frame, protein and gene networks are the result of cellular processes and not their cause.
While evolutionarily conserved molecular processes are involved in the cellular mechanisms underlying these switches, in animal species these are very different from the well-characterized gene regulatory mechanisms of bacteria, and even from those of the animals' closest unicellular relatives. Specifically, cell differentiation in animals is highly dependent on biomolecular condensates of regulatory proteins and enhancer DNA sequences.
Cellular differentiation is often controlled by cell signaling. Many of the signal molecules that convey information from cell to cell during the control of cellular differentiation are called growth factors. Although the details of specific signal transduction pathways vary, these pathways often share the following general steps. A ligand produced by one cell binds to a receptor in the extracellular region of another cell, inducing a conformational change in the receptor. The shape of the cytoplasmic domain of the receptor changes, and the receptor acquires enzymatic activity. The receptor then catalyzes reactions that phosphorylate other proteins, activating them. A cascade of phosphorylation reactions eventually activates a dormant transcription factor or cytoskeletal protein, thus contributing to the differentiation process in the target cell. Cells and tissues can vary in competence, their ability to respond to external signals.
Signal induction refers to cascades of signaling events, during which a cell or tissue signals to another cell or tissue to influence its developmental fate. Yamamoto and Jeffery investigated the role of the lens in eye formation in cave- and surface-dwelling fish, a striking example of induction. Through reciprocal transplants, Yamamoto and Jeffery found that the lens vesicle of surface fish can induce other parts of the eye to develop in cave- and surface-dwelling fish, while the lens vesicle of the cave-dwelling fish cannot.
Other important mechanisms fall under the category of asymmetric cell divisions, divisions that give rise to daughter cells with distinct developmental fates. Asymmetric cell divisions can occur because of asymmetrically expressed maternal cytoplasmic determinants or because of signaling. In the former mechanism, distinct daughter cells are created during cytokinesis because of an uneven distribution of regulatory molecules in the parent cell; the distinct cytoplasm that each daughter cell inherits results in a distinct pattern of differentiation for each daughter cell. A well-studied example of pattern formation by asymmetric divisions is body axis patterning in Drosophila. RNA molecules are an important type of intracellular differentiation control signal. The molecular and genetic basis of asymmetric cell divisions has also been studied in green algae of the genus Volvox, a model system for studying how unicellular organisms can evolve into multicellular organisms. In Volvox carteri, the 16 cells in the anterior hemisphere of a 32-cell embryo divide asymmetrically, each producing one large and one small daughter cell. The size of the cell at the end of all cell divisions determines whether it becomes a specialized germ or somatic cell.
Epigenetic control
Since each cell, regardless of cell type, possesses the same genome, determination of cell type must occur at the level of gene expression. While the regulation of gene expression can occur through cis- and trans-regulatory elements including a gene's promoter and enhancers, the problem arises as to how this expression pattern is maintained over numerous generations of cell division. As it turns out, epigenetic processes play a crucial role in regulating the decision to adopt a stem, progenitor, or mature cell fate. This section will focus primarily on mammalian stem cells.
In systems biology and mathematical modeling of gene regulatory networks, cell-fate determination is predicted to exhibit certain dynamics, such as attractor-convergence (the attractor can be an equilibrium point, limit cycle or strange attractor) or oscillatory.
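As a minimal sketch of the attractor picture just mentioned, the classic two-gene mutual-repression ("toggle switch") model, integrated with plain Euler steps, converges to one of two stable expression states depending on the initial condition; the parameter values below are arbitrary illustrative choices, not measurements from any study cited here.

```python
def toggle_switch(x0, y0, alpha=4.0, n=2.0, dt=0.01, steps=5000):
    """Mutual repression of two genes X and Y:
       dx/dt = alpha / (1 + y**n) - x
       dy/dt = alpha / (1 + x**n) - y
    Simple Euler integration; returns the final (x, y) expression state."""
    x, y = x0, y0
    for _ in range(steps):
        dx = alpha / (1.0 + y**n) - x
        dy = alpha / (1.0 + x**n) - y
        x, y = x + dt * dx, y + dt * dy
    return round(x, 2), round(y, 2)

# Two nearby initial conditions end up in opposite stable attractors:
print(toggle_switch(1.2, 1.0))   # -> X-high / Y-low attractor
print(toggle_switch(1.0, 1.2))   # -> X-low / Y-high attractor
```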
Importance of epigenetic control
The first question that can be asked is the extent and complexity of the role of epigenetic processes in the determination of cell fate. A clear answer to this question can be seen in the 2011 paper by Lister R, et al. on aberrant epigenomic programming in human induced pluripotent stem cells. As induced pluripotent stem cells (iPSCs) are thought to mimic embryonic stem cells in their pluripotent properties, few epigenetic differences should exist between them. To test this prediction, the authors conducted whole-genome profiling of DNA methylation patterns in several human embryonic stem cell (ESC), iPSC, and progenitor cell lines.
Female adipose cells, lung fibroblasts, and foreskin fibroblasts were reprogrammed into an induced pluripotent state with the OCT4, SOX2, KLF4, and MYC genes. Patterns of DNA methylation in ESCs, iPSCs, and somatic cells were compared. Lister R, et al. observed significant resemblance in methylation levels between embryonic and induced pluripotent cells. Around 80% of CG dinucleotides in ESCs and iPSCs were methylated; the same was true of only 60% of CG dinucleotides in somatic cells. In addition, somatic cells possessed minimal levels of cytosine methylation in non-CG dinucleotides, while induced pluripotent cells possessed similar levels of methylation as embryonic stem cells, between 0.5 and 1.5%. Thus, consistent with their respective transcriptional activities, DNA methylation patterns, at least on the genomic level, are similar between ESCs and iPSCs.
However, upon examining methylation patterns more closely, the authors discovered 1175 regions of differential CG dinucleotide methylation between at least one ES or iPS cell line. By comparing these regions of differential methylation with regions of cytosine methylation in the original somatic cells, 44-49% of differentially methylated regions reflected methylation patterns of the respective progenitor somatic cells, while 51-56% of these regions were dissimilar to both the progenitor and embryonic cell lines. In vitro-induced differentiation of iPSC lines saw transmission of 88% and 46% of hyper and hypo-methylated differentially methylated regions, respectively.
Two conclusions are readily apparent from this study. First, epigenetic processes are heavily involved in cell fate determination, as seen from the similar levels of cytosine methylation between induced pluripotent and embryonic stem cells, consistent with their respective patterns of transcription. Second, the mechanisms of reprogramming (and by extension, differentiation) are very complex and cannot be easily duplicated, as seen by the significant number of differentially methylated regions between ES and iPS cell lines. Now that these two points have been established, we can examine some of the epigenetic mechanisms that are thought to regulate cellular differentiation.
Mechanisms of epigenetic regulation
Pioneer factors (Oct4, Sox2, Nanog)
Three transcription factors, OCT4, SOX2, and NANOG – the first two of which are used in induced pluripotent stem cell (iPSC) reprogramming, along with Klf4 and c-Myc – are highly expressed in undifferentiated embryonic stem cells and are necessary for the maintenance of their pluripotency. It is thought that they achieve this through alterations in chromatin structure, such as histone modification and DNA methylation, to restrict or permit the transcription of target genes. While highly expressed, their levels require a precise balance to maintain pluripotency, perturbation of which will promote differentiation towards different lineages based on how the gene expression levels change. Differential regulation of Oct-4 and SOX2 levels has been shown to precede germ layer fate selection. Increased levels of Oct4 and decreased levels of Sox2 promote a mesendodermal fate, with Oct4 actively suppressing genes associated with a neural ectodermal fate. Similarly, increased levels of Sox2 and decreased levels of Oct4 promote differentiation towards a neural ectodermal fate, with Sox2 inhibiting differentiation towards a mesendodermal fate. Regardless of the lineage cells differentiate down, suppression of NANOG has been identified as a necessary prerequisite for differentiation.
Polycomb repressive complex (PRC2)
In the realm of gene silencing, Polycomb repressive complex 2, one of two classes of the Polycomb group (PcG) family of proteins, catalyzes the di- and tri-methylation of histone H3 lysine 27 (H3K27me2/me3). By binding to the H3K27me2/3-tagged nucleosome, PRC1 (also a complex of PcG family proteins) catalyzes the mono-ubiquitinylation of histone H2A at lysine 119 (H2AK119Ub1), blocking RNA polymerase II activity and resulting in transcriptional suppression. PcG knockout ES cells do not differentiate efficiently into the three germ layers, and deletion of the PRC1 and PRC2 genes leads to increased expression of lineage-affiliated genes and unscheduled differentiation. Presumably, PcG complexes are responsible for transcriptionally repressing differentiation and development-promoting genes.
Trithorax group proteins (TrxG)
Alternately, upon receiving differentiation signals, PcG proteins are recruited to promoters of pluripotency transcription factors. PcG-deficient ES cells can begin differentiation but cannot maintain the differentiated phenotype. Simultaneously, differentiation and development-promoting genes are activated by Trithorax group (TrxG) chromatin regulators and lose their repression. TrxG proteins are recruited at regions of high transcriptional activity, where they catalyze the trimethylation of histone H3 lysine 4 (H3K4me3) and promote gene activation through histone acetylation. PcG and TrxG complexes engage in direct competition and are thought to be functionally antagonistic, creating at differentiation and development-promoting loci what is termed a "bivalent domain" and rendering these genes sensitive to rapid induction or repression.
DNA methylation
Regulation of gene expression is further achieved through DNA methylation, in which the DNA methyltransferase-mediated methylation of cytosine residues in CpG dinucleotides maintains heritable repression by controlling DNA accessibility. The majority of CpG sites in embryonic stem cells are unmethylated and appear to be associated with H3K4me3-carrying nucleosomes. Upon differentiation, a small number of genes, including OCT4 and NANOG, are methylated and their promoters repressed to prevent their further expression. Consistently, DNA methylation-deficient embryonic stem cells rapidly enter apoptosis upon in vitro differentiation.
Nucleosome positioning
While the DNA sequence of most cells of an organism is the same, the binding patterns of transcription factors and the corresponding gene expression patterns are different. To a large extent, differences in transcription factor binding are determined by the chromatin accessibility of their binding sites through histone modification and/or pioneer factors. In particular, it is important to know whether a nucleosome is covering a given genomic binding site or not. This can be determined using a chromatin immunoprecipitation assay.
Histone acetylation and methylation
DNA-nucleosome interactions are characterized by two states: either tightly bound by nucleosomes and transcriptionally inactive, called heterochromatin, or loosely bound and usually, but not always, transcriptionally active, called euchromatin. The epigenetic processes of histone methylation and acetylation, and their inverses demethylation and deacetylation primarily account for these changes. The effects of acetylation and deacetylation are more predictable. An acetyl group is either added to or removed from the positively charged lysine residues in histones by enzymes called histone acetyltransferases or histone deacetylases, respectively. The acetyl group prevents lysine's association with the negatively charged DNA backbone. Methylation is not as straightforward, as neither methylation nor demethylation consistently correlate with either gene activation or repression. However, certain methylations have been repeatedly shown to either activate or repress genes. The trimethylation of lysine 4 on histone 3 (H3K4Me3) is associated with gene activation, whereas trimethylation of lysine 27 on histone 3 represses genes.
In stem cells
During differentiation, stem cells change their gene expression profiles. Recent studies have implicated a role for nucleosome positioning and histone modifications during this process. There are two components of this process: turning off the expression of embryonic stem cell (ESC) genes, and the activation of cell fate genes. Lysine specific demethylase 1 (KDM1A) is thought to prevent the use of enhancer regions of pluripotency genes, thereby inhibiting their transcription. It interacts with the Mi-2/NuRD (nucleosome remodelling and histone deacetylase) complex, giving an instance where methylation and acetylation are not discrete and mutually exclusive, but intertwined processes.
Role of signaling in epigenetic control
A final question to ask concerns the role of cell signaling in influencing the epigenetic processes governing differentiation. Such a role should exist, as it would be reasonable to think that extrinsic signaling can lead to epigenetic remodeling, just as it can lead to changes in gene expression through the activation or repression of different transcription factors. Little direct data is available concerning the specific signals that influence the epigenome, and the majority of current knowledge about the subject consists of speculations on plausible candidate regulators of epigenetic remodeling. We will first discuss several major candidates thought to be involved in the induction and maintenance of both embryonic stem cells and their differentiated progeny, and then turn to one example of specific signaling pathways in which more direct evidence exists for its role in epigenetic change.
The first major candidate is the Wnt signaling pathway. The Wnt pathway is involved in all stages of differentiation, and the ligand Wnt3a can substitute for the overexpression of c-Myc in the generation of induced pluripotent stem cells. On the other hand, disruption of β-catenin, a component of the Wnt signaling pathway, leads to decreased proliferation of neural progenitors.
Growth factors comprise the second major set of candidates of epigenetic regulators of cellular differentiation. These morphogens are crucial for development, and include bone morphogenetic proteins, transforming growth factors (TGFs), and fibroblast growth factors (FGFs). TGFs and FGFs have been shown to sustain expression of OCT4, SOX2, and NANOG by downstream signaling to Smad proteins. Depletion of growth factors promotes the differentiation of ESCs, while genes with bivalent chromatin can become either more restrictive or permissive in their transcription.
Several other signaling pathways are also considered to be primary candidates. The cytokine leukemia inhibitory factor (LIF) is associated with the maintenance of mouse ESCs in an undifferentiated state. This is achieved through its activation of the Jak-STAT3 pathway, which has been shown to be necessary and sufficient towards maintaining mouse ESC pluripotency. Retinoic acid can induce differentiation of human and mouse ESCs, and Notch signaling is involved in the proliferation and self-renewal of stem cells. Finally, Sonic hedgehog, in addition to its role as a morphogen, promotes embryonic stem cell differentiation and the self-renewal of somatic stem cells.
The problem, of course, is that the candidacy of these signaling pathways was inferred primarily on the basis of their role in development and cellular differentiation. While epigenetic regulation is necessary for driving cellular differentiation, they are certainly not sufficient for this process. Direct modulation of gene expression through modification of transcription factors plays a key role that must be distinguished from heritable epigenetic changes that can persist even in the absence of the original environmental signals. Only a few examples of signaling pathways leading to epigenetic changes that alter cell fate currently exist, and we will focus on one of them.
Expression of Shh (Sonic hedgehog) upregulates the production of BMI1, a component of the PcG complex that recognizes H3K27me3. This occurs in a Gli-dependent manner, as Gli1 and Gli2 are downstream effectors of the Hedgehog signaling pathway. In culture, Bmi1 mediates the Hedgehog pathway's ability to promote human mammary stem cell self-renewal. In both humans and mice, researchers showed Bmi1 to be highly expressed in proliferating immature cerebellar granule cell precursors. When Bmi1 was knocked out in mice, impaired cerebellar development resulted, leading to significant reductions in postnatal brain mass along with abnormalities in motor control and behavior. A separate study showed a significant decrease in neural stem cell proliferation along with increased astrocyte proliferation in Bmi null mice.
An alternative model of cellular differentiation during embryogenesis is that positional information is based on mechanical signalling by the cytoskeleton using Embryonic differentiation waves. The mechanical signal is then epigenetically transduced via signal transduction systems (of which specific molecules such as Wnt are part) to result in differential gene expression.
In summary, the role of signaling in the epigenetic control of cell fate in mammals is largely unknown, but distinct examples exist that indicate the likely existence of further such mechanisms.
Effect of matrix elasticity
In order to fulfill the purpose of regenerating a variety of tissues, adult stem cells are known to migrate from their niches, adhere to new extracellular matrices (ECM) and differentiate. The ductility of these microenvironments is unique to different tissue types. The ECM surrounding brain, muscle and bone tissues ranges from soft to stiff. The transduction of the stem cells into these cell types is not directed solely by chemokine cues and cell-to-cell signaling. The elasticity of the microenvironment can also affect the differentiation of mesenchymal stem cells (MSCs), which originate in bone marrow. When MSCs are placed on substrates of the same stiffness as brain, muscle and bone ECM, the MSCs take on properties of those respective cell types.
Matrix sensing requires the cell to pull against the matrix at focal adhesions, which triggers a cellular mechano-transducer to generate a signal to be informed what force is needed to deform the matrix. To determine the key players in matrix-elasticity-driven lineage specification in MSCs, different matrix microenvironments were mimicked. From these experiments, it was concluded that focal adhesions of the MSCs were the cellular mechano-transducer sensing the differences of the matrix elasticity. The non-muscle myosin IIa-c isoforms generate the forces in the cell that lead to signaling of early commitment markers. Non-muscle myosin IIa generates the least force, increasing to non-muscle myosin IIc. There are also factors in the cell that inhibit non-muscle myosin II, such as blebbistatin. This makes the cell effectively blind to the surrounding matrix. Researchers have achieved some success in inducing stem cell-like properties in HEK 293 cells by providing a soft matrix without the use of diffusing factors. The stem-cell properties appear to be linked to tension in the cells' actin network. One identified mechanism for matrix-induced differentiation is tension-induced proteins, which remodel chromatin in response to mechanical stretch. The RhoA pathway is also implicated in this process.
Evolutionary history
A billion-years-old, likely holozoan, protist, Bicellum brasieri with two types of cells, shows that the evolution of differentiated multicellularity, possibly but not necessarily of animal lineages, occurred at least 1 billion years ago and possibly mainly in freshwater lakes rather than the ocean.
See also
Interbilayer Forces in Membrane Fusion
Fusion mechanism
Lipid bilayer fusion
Cell-cell fusogens
CAF-1
List of human cell types derived from the germ layers
References
Cellular processes
Developmental biology
Induced stem cells | Cellular differentiation | Biology | 5,938 |
52,827 | https://en.wikipedia.org/wiki/Volumetric%20heat%20capacity | The volumetric heat capacity of a material is the heat capacity of a sample of the substance divided by the volume of the sample. It is the amount of energy that must be added, in the form of heat, to one unit of volume of the material in order to cause an increase of one unit in its temperature. The SI unit of volumetric heat capacity is joule per kelvin per cubic meter, J⋅K−1⋅m−3.
The volumetric heat capacity can also be expressed as the specific heat capacity (heat capacity per unit of mass, in J⋅K−1⋅kg−1) times the density of the substance (in kg/L, or g/mL). It is defined to serve as an intensive property.
This quantity may be convenient for materials that are commonly measured by volume rather than mass, as is often the case in engineering and other technical disciplines. The volumetric heat capacity often varies with temperature, and is different for each state of matter. While the substance is undergoing a phase transition, such as melting or boiling, its volumetric heat capacity is technically infinite, because the heat goes into changing its state rather than raising its temperature.
The volumetric heat capacity of a substance, especially a gas, may be significantly higher when it is allowed to expand as it is heated (volumetric heat capacity at constant pressure) than when it is heated in a closed vessel that prevents expansion (volumetric heat capacity at constant volume).
If the amount of substance is taken to be the number of moles in the sample (as is sometimes done in chemistry), one gets the molar heat capacity (whose SI unit is joule per kelvin per mole, J⋅K−1⋅mol−1).
Definition
The volumetric heat capacity is defined as
s(T) = (1/V(T)) · dQ/dT,
where V(T) is the volume of the sample at temperature T, and dQ is the amount of heat energy needed to raise the temperature of the sample from T to T + dT. This parameter is an intensive property of the substance.
Since both the heat capacity of an object and its volume may vary with temperature, in unrelated ways, the volumetric heat capacity is usually a function of temperature too. It is equal to the specific heat capacity c of the substance times its density (mass per volume) ρ, both measured at the temperature T; that is, s(T) = c(T)·ρ(T). Its SI unit is joule per kelvin per cubic meter (J⋅K−1⋅m−3).
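A minimal numerical sketch of the relation just stated, using commonly quoted room-temperature handbook values for the specific heats and densities (approximate figures assumed for illustration, not taken from this article):

```python
def volumetric_heat_capacity(c_J_per_kgK, rho_kg_per_m3):
    """s = c * rho, returned in MJ per kelvin per cubic metre."""
    return c_J_per_kgK * rho_kg_per_m3 / 1e6

# Approximate room-temperature handbook values (illustrative only)
print(volumetric_heat_capacity(4186, 1000))   # water   -> ~4.19 MJ/(K*m^3)
print(volumetric_heat_capacity(450, 7870))    # iron    -> ~3.5 MJ/(K*m^3)
print(volumetric_heat_capacity(122, 9780))    # bismuth -> ~1.2 MJ/(K*m^3)
```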
This quantity is used almost exclusively for liquids and solids, since for gases it may be confused with the "specific heat capacity at constant volume", which generally has very different values. International standards now recommend that "specific heat capacity" always refer to capacity per unit of mass. Therefore, the word "volumetric" should always be used for this quantity.
History
Dulong and Petit predicted in 1818 that the product of solid substance density and specific heat capacity (ρcp) would be constant for all solids. This amounted to a prediction that volumetric heat capacity in solids would be constant. In 1819 they found that volumetric heat capacities were not quite constant, but that the most constant quantity was the heat capacity of solids adjusted by the presumed weight of the atoms of the substance, as defined by Dalton (the Dulong–Petit law). This quantity was proportional to the heat capacity per atomic weight (or per molar mass), which suggested that it is the heat capacity per atom (not per unit of volume) which is closest to being a constant in solids.
Eventually it became clear that heat capacities per particle for all substances in all states are the same, to within a factor of two, so long as temperatures are not in the cryogenic range.
Typical values
The volumetric heat capacity of solid materials at room temperatures and above varies widely, from about 1.2 MJ⋅K−1⋅m−3 (for example bismuth) to 3.4 MJ⋅K−1⋅m−3 (for example iron). This is mostly due to differences in the physical size of atoms. Atoms vary greatly in density, with the heaviest often being more dense, and thus are closer to taking up the same average volume in solids than their mass alone would predict. If all atoms were the same size, molar and volumetric heat capacity would be proportional and differ by only a single constant reflecting ratios of the atomic molar volume of materials (their atomic density). An additional factor for all types of specific heat capacities (including molar specific heats) then further reflects degrees of freedom available to the atoms composing the substance, at various temperatures.
For most liquids, the volumetric heat capacity falls in a narrower range, for example octane at 1.64 MJ⋅K−1⋅m−3 or ethanol at 1.9. This reflects the modest loss of degrees of freedom for particles in liquids as compared with solids.
However, water has a very high volumetric heat capacity, at 4.18 MJ⋅K−1⋅m−3, and ammonia is also fairly high: 3.3 MJ⋅K−1⋅m−3.
For gases at room temperature, the range of volumetric heat capacities per atom (not per molecule) only varies between different gases by a small factor less than two, because every ideal gas has the same molar volume. Thus, each gas molecule occupies the same mean volume in all ideal gases, regardless of the type of gas (see kinetic theory). This fact gives each gas molecule the same effective "volume" in all ideal gases (although this volume/molecule in gases is far larger than molecules occupy on average in solids or liquids). Thus, in the limit of ideal gas behavior (which many gases approximate except at low temperatures and/or extremes of pressure) this property reduces differences in gas volumetric heat capacity to simple differences in the heat capacities of individual molecules. As noted, these differ by a factor depending on the degrees of freedom available to particles within the molecules.
Volumetric heat capacity of gases
Large complex gas molecules may have high heat capacities per mole (of molecules), but their heat capacities per mole of atoms are very similar to those of liquids and solids, again differing by less than a factor of two per mole of atoms. This factor of two represents vibrational degrees of freedom available in solids vs. gas molecules of various complexities.
In monatomic gases (like argon) at room temperature and constant volume, volumetric heat capacities are all very close to 0.5 kJ⋅K−1⋅m−3, which corresponds to the theoretical molar heat capacity at constant volume of a monatomic ideal gas, (3/2)R per kelvin per mole, divided by the ideal-gas molar volume RT/P (where R is the gas constant and T is temperature). As noted, the much lower values for gas heat capacity in terms of volume as compared with solids (although more comparable per mole, see below) result mostly from the fact that gases under standard conditions consist of mostly empty space (about 99.9% of volume), which is not filled by the atomic volumes of the atoms in the gas. Since the molar volume of gases is very roughly 1000 times that of solids and liquids, this results in a factor of about 1000 loss in volumetric heat capacity for gases, as compared with liquids and solids. Monatomic gas heat capacities per atom (not per molecule) are decreased by a factor of 2 with regard to solids, due to loss of half of the potential degrees of freedom per atom for storing energy in a monatomic gas, as compared with regard to an ideal solid. There is some difference in the heat capacity of monatomic vs. polyatomic gases, and also gas heat capacity is temperature-dependent in many ranges for polyatomic gases; these factors act to modestly (up to the discussed factor of 2) increase heat capacity per atom in polyatomic gases, as compared with monatomic gases. Volumetric heat capacities in polyatomic gases vary widely, however, since they are dependent largely on the number of atoms per molecule in the gas, which in turn determines the total number of atoms per volume in the gas.
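The 0.5 kJ⋅K−1⋅m−3 figure quoted above can be checked from the ideal-gas relations: the molar heat capacity at constant volume of a monatomic ideal gas, (3/2)R, divided by the ideal-gas molar volume RT/P. A minimal sketch, assuming standard constants and room conditions:

```python
R = 8.314          # gas constant, J/(mol*K)
T = 298.15         # room temperature, K
P = 101_325        # atmospheric pressure, Pa

molar_cv = 1.5 * R            # monatomic ideal gas, J/(mol*K)
molar_volume = R * T / P      # ideal-gas molar volume, m^3/mol (~0.0245)

volumetric_cv = molar_cv / molar_volume   # J/(K*m^3)
print(round(volumetric_cv))               # ~510 J/(K*m^3), i.e. ~0.5 kJ/(K*m^3)
```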
The volumetric heat capacity is defined as having SI units of J/(m³⋅K). It can also be described in Imperial units of BTU/(ft³⋅°F).
Volumetric heat capacity of solids
Since the bulk density of a solid chemical element is strongly related to its molar mass, while its molar heat capacity is roughly constant (usually about 3R per mole, as noted above), there exists a noticeable inverse correlation between a solid's density and its specific heat capacity on a per-mass basis. This is due to a very approximate tendency of atoms of most elements to be about the same size, despite much wider variations in density and atomic weight. These two factors (constancy of atomic volume and constancy of mole-specific heat capacity) result in a good correlation between the volume of any given solid chemical element and its total heat capacity. Another way of stating this is that the volume-specific heat capacity (volumetric heat capacity) of solid elements is roughly a constant. The molar volume of solid elements is very roughly constant, and (even more reliably) so also is the molar heat capacity for most solid substances. These two factors determine the volumetric heat capacity, which as a bulk property may be striking in consistency. For example, the element uranium is a metal which has a density almost 36 times that of the metal lithium, but uranium's volumetric heat capacity is only about 20% larger than lithium's.
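As a rough illustration of this consistency, the Dulong–Petit value of about 3R per mole can be converted into a volumetric figure using handbook densities and molar masses; a minimal Python sketch (the numbers are approximate handbook values, not taken from this article):
R = 8.314  # gas constant, J/(mol*K)
elements = {
    # name: (molar mass in kg/mol, density in kg/m^3)
    "lithium": (6.94e-3, 534),
    "uranium": (238.0e-3, 19100),
}
for name, (molar_mass, density) in elements.items():
    cv_volumetric = 3 * R * density / molar_mass   # Dulong-Petit estimate, J/(m^3*K)
    print(f"{name}: {cv_volumetric / 1e6:.2f} MJ/(K*m^3)")
# lithium: about 1.92 MJ/(K*m^3); uranium: about 2.00 MJ/(K*m^3) -- the densities
# differ by a factor of roughly 36, but the volumetric estimates are nearly equal.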
Since the volume-specific corollary of the Dulong–Petit specific heat capacity relationship requires that atoms of all elements take up (on average) the same volume in solids, there are many departures from it, with most of these due to variations in atomic size. For instance, arsenic, which is only 14.5% less dense than antimony, has nearly 59% more specific heat capacity on a mass basis. In other words, even though an ingot of arsenic is only about 17% larger than an antimony one of the same mass, it absorbs about 59% more heat for a given temperature rise. The heat capacity ratios of the two substances closely follow the ratios of their molar volumes (the ratios of numbers of atoms in the same volume of each substance); the departure from the correlation to simple volumes in this case is due to lighter arsenic atoms being significantly more closely packed than antimony atoms, rather than being of similar size. In other words, similar-sized atoms would cause a mole of arsenic to be 63% larger than a mole of antimony, with a correspondingly lower density, allowing its volume to more closely mirror its heat capacity behavior.
Volumetric heat capacity of liquids
The volumetric heat capacity of liquids can be determined from the relationship between thermal conductivity and thermal diffusivity. It can also be obtained directly during thermal conductivity analysis using thermal conductivity analyzers based on techniques such as the transient plane source method.
Constant volume and constant pressure
For gases it is necessary to distinguish between volumetric heat capacity at constant volume and volumetric heat capacity at constant pressure, which is always larger due to the pressure–volume work done as a gas expands during heating at constant pressure (thus absorbing heat which is converted to work). The distinctions between constant-volume and constant-pressure heat capacities are also made in various types of specific heat capacity (the latter meaning either mass-specific or mole-specific heat capacity).
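For an ideal gas the two molar heat capacities differ by exactly the gas constant (c_p = c_v + R), which sets the size of the gap between the two volumetric values; a minimal Python sketch (illustrative numbers for a diatomic gas near room temperature, not taken from this article):
R = 8.314                  # gas constant, J/(mol*K)
cv_molar = 2.5 * R         # diatomic gas: translational + rotational degrees of freedom
cp_molar = cv_molar + R    # constant-pressure molar heat capacity
T, P = 298.15, 101_325
molar_volume = R * T / P   # m^3/mol
print(f"c_v: {cv_molar / molar_volume / 1000:.2f} kJ/(K*m^3), "
      f"c_p: {cp_molar / molar_volume / 1000:.2f} kJ/(K*m^3), "
      f"ratio: {cp_molar / cv_molar:.2f}")
# roughly 0.85 and 1.19 kJ/(K*m^3), with a heat capacity ratio of about 1.40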
Thermal inertia
Thermal inertia is a term commonly used to describe the observed delays in a body's temperature response during heat transfers. The phenomenon exists because of a body's ability to both store and transport heat relative to its environment. Larger values of volumetric heat capacity, as may occur in association with thermal effusivity, typically yield slower temperature responses.
See also
Heat capacity
Specific heat capacity
Temperature
Thermodynamic equations
References
Thermodynamic properties
Physical quantities
Volume
Heat transfer | Volumetric heat capacity | Physics,Chemistry,Mathematics | 2,408 |
2,964,718 | https://en.wikipedia.org/wiki/Electronic%20remittance%20advice | An electronic remittance advice (ERA) is an electronic data interchange (EDI) version of a medical insurance payment explanation. It provides details about providers' claim payments and, if the claims are denied, it contains the required explanations. The explanations include the denial codes and their descriptions, which appear at the bottom of the ERA. ERAs are provided by plans to providers. In the United States the industry standard ERA is HIPAA X12N 835 (HIPAA = Health Insurance Portability and Accountability Act; X12N = insurance subcommittees of ASC X12; 835 is the specific code number for ERA), which is sent from insurer to provider either directly or via a bank.
See also
Remittance advice
References
Citations
Data interchange standards
Health insurance | Electronic remittance advice | Technology | 158 |
172,640 | https://en.wikipedia.org/wiki/Iterator | In computer programming, an iterator is an object that progressively provides access to each item of a collection, in order.
A collection may provide multiple iterators via its interface that provide items in different orders, such as forwards and backwards.
An iterator is often implemented in terms of the structure underlying a collection implementation and is often tightly coupled to the collection to enable the operational semantics of the iterator.
An iterator is behaviorally similar to a database cursor.
Iterators date to the CLU programming language in 1974.
Pattern
An iterator provides access to an element of a collection (element access) and can change its internal state to provide access to the next element (element traversal). It also provides for creation and initialization to a first element and indicates whether all elements have been traversed. In some programming contexts, an iterator provides additional functionality.
An iterator allows a consumer to process each element of a collection while isolating the consumer from the internal structure of the collection. The collection can store elements in any manner while the consumer can access them as a sequence.
In object-oriented programming, an iterator class is usually designed in tight coordination with the corresponding collection class. Usually, the collection provides the methods for creating iterators.
A loop counter is sometimes also referred to as a loop iterator. A loop counter, however, only provides the traversal functionality and not the element access functionality.
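As an illustration of the pattern described above, a minimal hand-rolled iterator in Python (a sketch only; the class and method names are invented for illustration, and Python's built-in iterators already provide this functionality):
class ListIterator:
    def __init__(self, items):      # creation, initialized to the first element
        self._items = items
        self._index = 0

    def has_next(self):             # indicates whether all elements have been traversed
        return self._index < len(self._items)

    def current(self):              # element access
        return self._items[self._index]

    def advance(self):              # element traversal
        self._index += 1

it = ListIterator(["a", "b", "c"])
while it.has_next():
    print(it.current())
    it.advance()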
Generator
One way of implementing an iterator is via a restricted form of coroutine, known as a generator. By contrast with a subroutine, a generator coroutine can yield values to its caller multiple times, instead of returning just once. Most iterators are naturally expressible as generators, but because generators preserve their local state between invocations, they're particularly well-suited for complicated, stateful iterators, such as tree traversers. There are subtle differences and distinctions in the use of the terms "generator" and "iterator", which vary between authors and languages. In Python, a generator is an iterator constructor: a function that returns an iterator. An example of a Python generator returning an iterator for the Fibonacci numbers using Python's yield statement follows:
def fibonacci(limit):
    a, b = 0, 1
    for _ in range(limit):
        yield a
        a, b = b, a + b

for number in fibonacci(100):  # The generator constructs an iterator
    print(number)
Internal iterator
An internal iterator is a higher order function (often taking anonymous functions) that traverses a collection while applying a function to each element. For example, Python's map function applies a caller-defined function to each element:
digits = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
squared_digits = map(lambda x: x**2, digits)
# Iterating over this iterator would result in 0, 1, 4, 9, 16, ..., 81.
Implicit iterator
Some object-oriented languages such as C#, C++ (later versions), Delphi (later versions), Go, Java (later versions), Lua, Perl, Python, Ruby provide an intrinsic way of iterating through the elements of a collection without an explicit iterator. An iterator object may exist, but is not represented in the source code.
An implicit iterator is often manifest in language syntax as foreach.
In Python, a collection object can be iterated directly:
for value in iterable:
    print(value)
In Ruby, iteration requires accessing an iterator property:
iterable.each do |value|
puts value
end
This iteration style is sometimes called "internal iteration" because its code fully executes within the context of the iterable object (that controls all aspects of iteration), and the programmer only provides the operation to execute at each step (using an anonymous function).
Languages that support list comprehensions or similar constructs may also make use of implicit iterators during the construction of the result list, as in Python:
names = [person.name for person in roster if person.male]
Sometimes the implicit hidden nature is only partial. The C++ language has a few function templates for implicit iteration, such as for_each(). These functions still require explicit iterator objects as their initial input, but the subsequent iteration does not expose an iterator object to the user.
Stream
Iterators are a useful abstraction of input streams – they provide a potentially infinite iterable (but not necessarily indexable) object. Several languages, such as Perl and Python, implement streams as iterators. In Python, iterators are objects representing streams of data. Alternative implementations of stream include data-driven languages, such as AWK and sed.
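A small Python sketch of an iterator used as a potentially infinite stream (illustrative only; the function name is invented for this example):
import itertools

def naturals():
    n = 0
    while True:          # unbounded: values are produced only on demand
        yield n
        n += 1

print(list(itertools.islice(naturals(), 5)))   # [0, 1, 2, 3, 4]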
Contrast with indexing
Instead of using an iterator, many languages allow the use of a subscript operator and a loop counter to access each element. Although indexing may be used with collections, the use of iterators may have advantages such as:
Counting loops are not suitable for all data structures, in particular for data structures with no or slow random access, like lists or trees.
Iterators can provide a consistent way to iterate on data structures of all kinds, and therefore make the code more readable, reusable, and less sensitive to a change in the data structure.
An iterator can enforce additional restrictions on access, such as ensuring that elements cannot be skipped or that a previously visited element cannot be accessed a second time.
An iterator may allow the collection object to be modified without invalidating the iterator. For instance, once an iterator has advanced beyond the first element it may be possible to insert additional elements into the beginning of the collection with predictable results. With indexing this is problematic since the index numbers must change.
The ability of a collection to be modified while iterating through its elements has become necessary in modern object-oriented programming, where the interrelationships between objects and the effects of operations may not be obvious. By using an iterator one is isolated from these sorts of consequences. This assertion must however be taken with a grain of salt, because more often than not, for efficiency reasons, the iterator implementation is so tightly bound to the collection that it does preclude modification of the underlying collection without invalidating itself.
For collections that may move around their data in memory, the only way to not invalidate the iterator is, for the collection, to somehow keep track of all the currently alive iterators and update them on the fly. Since the number of iterators at a given time may be arbitrarily large in comparison to the size of the tied collection, updating them all will drastically impair the complexity guarantee on the collection's operations.
An alternative way to keep the number of updates bounded relative to the collection size would be to use a kind of handle mechanism, that is a collection of indirect pointers to the collection's elements that must be updated with the collection, and let the iterators point to these handles instead of directly to the data elements. But this approach will negatively impact the iterator performance, since it must follow two pointers to access the actual data element. This is usually not desirable, because many algorithms using the iterators invoke the iterator's data access operation more often than the advance method. It is therefore especially important to have iterators with very efficient data access.
All in all, this is always a trade-off between security (iterators remain always valid) and efficiency. Most of the time, the added security is not worth the efficiency price to pay for it. Using an alternative collection (for example a singly linked list instead of a vector) would be a better choice (globally more efficient) if the stability of the iterators is needed.
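As an illustration of the consistency advantage noted in the list above, the same consumer code can traverse structures that offer no meaningful index at all; a short Python sketch (the function name is invented for this example):
def total(iterable):
    result = 0
    for value in iterable:        # requires only iteration, not random access
        result += value
    return result

print(total([1, 2, 3]))                    # a list, which is also indexable
print(total({4, 5, 6}))                    # a set, which has no indexing
print(total(x * x for x in range(4)))      # a generator, producing values lazily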
Classification
Categories
Iterators can be categorised according to their functionality. Here is a (non-exhaustive) list of iterator categories:
Types
Different languages or libraries used with these languages define iterator types. Some of them are
In different programming languages
.NET
Iterators in the .NET Framework (i.e. C#) are called "enumerators" and represented by the IEnumerator interface. IEnumerator provides a MoveNext() method, which advances to the next element and indicates whether the end of the collection has been reached; a Current property, to obtain the value of the element currently being pointed at; and an optional Reset() method, to rewind the enumerator back to its initial position. The enumerator initially points to a special value before the first element, so a call to MoveNext() is required to begin iterating.
Enumerators are typically obtained by calling the GetEnumerator() method of an object implementing the IEnumerable interface. Container classes typically implement this interface. However, the foreach statement in C# can operate on any object providing such a method, even if it does not implement IEnumerable (duck typing). Both interfaces were expanded into generic versions in .NET 2.0.
The following shows a simple use of iterators in C# 2.0:
// explicit version
IEnumerator<MyType> iter = list.GetEnumerator();
while (iter.MoveNext())
Console.WriteLine(iter.Current);
// implicit version
foreach (MyType value in list)
Console.WriteLine(value);
C# 2.0 also supports generators: a method that is declared as returning IEnumerator (or IEnumerable), but uses the "yield return" statement to produce a sequence of elements instead of returning an object instance, will be transformed by the compiler into a new class implementing the appropriate interface.
C++
The C++ language makes wide use of iterators in its Standard Library and describes several categories of iterators differing in the repertoire of operations they allow. These include forward iterators, bidirectional iterators, and random access iterators, in order of increasing possibilities. All of the standard container template types provide iterators of one of these categories. Iterators generalize pointers to elements of an array (which indeed can be used as iterators), and their syntax is designed to resemble that of C pointer arithmetic, where the * and -> operators are used to reference the element to which the iterator points and pointer arithmetic operators like ++ are used to modify iterators in the traversal of a container.
Traversal using iterators usually involves a single varying iterator, and two fixed iterators that serve to delimit a range to be traversed. The distance between the limiting iterators, in terms of the number of applications of the operator ++ needed to transform the lower limit into the upper one, equals the number of items in the designated range; the number of distinct iterator values involved is one more than that. By convention, the lower limiting iterator "points to" the first element in the range, while the upper limiting iterator does not point to any element in the range, but rather just beyond the end of the range.
For traversal of an entire container, the begin() method provides the lower limit, and end() the upper limit. The latter does not reference any element of the container at all but is a valid iterator value that can be compared against.
The following example shows a typical use of an iterator.
std::vector<int> items;
items.push_back(5); // Append integer value '5' to vector 'items'.
items.push_back(2); // Append integer value '2' to vector 'items'.
items.push_back(9); // Append integer value '9' to vector 'items'.
for (auto it = items.begin(), end = items.end(); it != end; ++it) { // Iterate through 'items'.
std::cout << *it; // And print value of 'items' for current index.
}
// In C++11, the same can be done without using any iterators:
for (auto x : items) {
std::cout << x; // Print value of each element 'x' of 'items'.
}
// Both of the for loops print "529".
Iterator types are separate from the container types they are used with, though the two are often used in concert. The category of the iterator (and thus the operations defined for it) usually depends on the type of container, with for instance arrays or vectors providing random access iterators, but sets (which use a linked structure as implementation) only providing bidirectional iterators. One same container type can have more than one associated iterator type; for instance the std::vector<T> container type allows traversal either using (raw) pointers to its elements (of type T*), or values of a special type std::vector<T>::iterator, and yet another type is provided for "reverse iterators", whose operations are defined in such a way that an algorithm performing a usual (forward) traversal will actually do traversal in reverse order when called with reverse iterators. Most containers also provide a separate const_iterator type, for which operations that would allow changing the values pointed to are intentionally not defined.
Simple traversal of a container object or a range of its elements (including modification of those elements unless a const_iterator is used) can be done using iterators alone. But container types may also provide methods like insert or erase that modify the structure of the container itself; these are methods of the container class, but in addition require one or more iterator values to specify the desired operation. While it is possible to have multiple iterators pointing into the same container simultaneously, structure-modifying operations may invalidate certain iterator values (the standard specifies for each case whether this may be so); using an invalidated iterator is an error that will lead to undefined behavior, and such errors need not be signaled by the run time system.
Implicit iteration is also partially supported by C++ through the use of standard function templates, such as std::for_each(), std::copy() and std::accumulate(). When used they must be initialized with existing iterators, usually begin and end, that define the range over which iteration occurs. But no explicit iterator object is subsequently exposed as the iteration proceeds. This example shows the use of for_each.
ContainerType<ItemType> c; // Any standard container type of ItemType elements.
void ProcessItem(const ItemType& i) { // Function that will process each item of the collection.
std::cout << i << std::endl;
}
std::for_each(c.begin(), c.end(), ProcessItem); // A for-each iteration loop.
The same can be achieved using std::copy, passing a std::ostream_iterator value as third iterator:
std::copy(c.begin(), c.end(), std::ostream_iterator<ItemType>(std::cout, "\n"));
Since C++11, lambda function syntax can be used to specify the operation to be applied at each iteration inline, avoiding the need to define a named function. Here is an example of for-each iteration using a lambda function:
ContainerType<ItemType> c; // Any standard container type of ItemType elements.
// A for-each iteration loop with a lambda function.
std::for_each(c.begin(), c.end(), [](const ItemType& i) { std::cout << i << std::endl; });
Java
Introduced in the Java JDK 1.2 release, the Iterator interface allows the iteration of container classes. Each Iterator provides a next() and a hasNext() method, and may optionally support a remove() method. Iterators are created by the corresponding container class, typically by a method named iterator().
The next() method advances the iterator and returns the value pointed to by the iterator. The first element is obtained upon the first call to next(). To determine when all the elements in the container have been visited the hasNext() test method is used. The following example shows a simple use of iterators:
Iterator iter = list.iterator();
// Iterator<MyType> iter = list.iterator(); // in J2SE 5.0
while (iter.hasNext()) {
System.out.print(iter.next());
if (iter.hasNext())
System.out.print(", ");
}
To show that hasNext() can be called repeatedly, we use it to insert commas between the elements but not after the last element.
This approach does not properly separate the advance operation from the actual data access. If the data element must be used more than once for each advance, it needs to be stored in a temporary variable. When an advance is needed without data access (i.e. to skip a given data element), the access is nonetheless performed, though the returned value is ignored in this case.
For collection types that support it, the remove() method of the iterator removes the most recently visited element from the container while keeping the iterator usable. Adding or removing elements by calling the methods of the container (also from the same thread) makes the iterator unusable. An attempt to get the next element then throws a ConcurrentModificationException. A NoSuchElementException is also thrown if there are no more elements remaining (hasNext() has previously returned false).
Additionally, for lists there is a ListIterator with a similar API, but one that allows forward and backward iteration, provides its current index in the list and allows setting of the list element at its position.
The J2SE 5.0 release of Java introduced the Iterable interface to support an enhanced for (foreach) loop for iterating over collections and arrays. Iterable defines the iterator() method that returns an Iterator. Using the enhanced for loop, the preceding example can be rewritten as
for (MyType obj : list) {
System.out.print(obj);
}
Some containers also use the older (since 1.0) Enumeration interface. It provides hasMoreElements() and nextElement() methods but has no methods to modify the container.
Scala
In Scala, iterators have a rich set of methods similar to collections, and can be used directly in for loops. Indeed, both iterators and collections inherit from a common base trait - scala.collection.TraversableOnce. However, because of the rich set of methods available in the Scala collections library, such as map, collect, filter etc., it is often not necessary to deal with iterators directly when programming in Scala.
Java iterators and collections can be automatically converted into Scala iterators and collections, respectively, simply by adding the single line
import scala.collection.JavaConversions._
to the file. The JavaConversions object provides implicit conversions to do this. Implicit conversions are a feature of Scala: methods that, when visible in the current scope, automatically insert calls to themselves into relevant expressions at the appropriate place to make them typecheck when they otherwise would not.
MATLAB
MATLAB supports both external and internal implicit iteration using either "native" arrays or cell arrays. In the case of external iteration where the onus is on the user to advance the traversal and request next elements, one can define a set of elements within an array storage structure and traverse the elements using the for-loop construct. For example,
% Define an array of integers
myArray = [1,3,5,7,11,13];
for n = myArray
% ... do something with n
disp(n) % Echo integer to Command Window
end
traverses an array of integers using the for keyword.
In the case of internal iteration where the user can supply an operation to the iterator to perform over every element of a collection, many built-in operators and MATLAB functions are overloaded to execute over every element of an array and return a corresponding output array implicitly. Furthermore, the arrayfun and cellfun functions can be leveraged for performing custom or user defined operations over "native" arrays and cell arrays respectively. For example,
function simpleFun
% Define an array of integers
myArray = [1,3,5,7,11,13];
% Perform a custom operation over each element
myNewArray = arrayfun(@(a)myCustomFun(a),myArray);
% Echo resulting array to Command Window
myNewArray
function outScalar = myCustomFun(inScalar)
% Simply multiply by 2
outScalar = 2*inScalar;
defines a primary function simpleFun that implicitly applies custom subfunction myCustomFun to each element of an array using built-in function arrayfun.
Alternatively, it may be desirable to abstract the mechanisms of the array storage container from the user by defining a custom object-oriented MATLAB implementation of the Iterator Pattern. Such an implementation supporting external iteration is demonstrated in MATLAB Central File Exchange item Design Pattern: Iterator (Behavioral). This is written in the new class-definition syntax introduced with MATLAB software version 7.6 (R2008a) and features a one-dimensional cell array realization of the List Abstract Data Type (ADT) as the mechanism for storing a heterogeneous (in data type) set of elements. It provides the functionality for explicit forward List traversal with the hasNext(), next() and reset() methods for use in a while-loop.
PHP
PHP's foreach loop was introduced in version 4.0 and made compatible with objects as values in 4.0 Beta 4. However, support for iterators was added in PHP 5 through the introduction of the internal Traversable interface. The two main interfaces for implementation in PHP scripts that enable objects to be iterated via the foreach loop are Iterator and IteratorAggregate. The latter does not require the implementing class to declare all required methods, instead it implements an accessor method (getIterator) that returns an instance of Traversable. The Standard PHP Library provides several classes to work with special iterators. PHP also supports Generators since 5.5.
The simplest implementation is by wrapping an array; this can be useful for type hinting and information hiding.
namespace Wikipedia\Iterator;
final class ArrayIterator implements \Iterator
{
private array $array;
public function __construct(array $array)
{
$this->array = $array;
}
public function rewind(): void
{
echo 'rewinding' , PHP_EOL;
reset($this->array);
}
public function current()
{
$value = current($this->array);
echo "current: {$value}", PHP_EOL;
return $value;
}
public function key()
{
$key = key($this->array);
echo "key: {$key}", PHP_EOL;
return $key;
}
public function next()
{
$value = next($this->array);
echo "next: {$value}", PHP_EOL;
return $value;
}
public function valid(): bool
{
$valid = $this->current() !== false;
echo 'valid: ', ($valid ? 'true' : 'false'), PHP_EOL;
return $valid;
}
}
All methods of the example class are used during the execution of a complete foreach loop (foreach ($iterator as $key => $current) {}). The iterator's methods are executed in the following order:
$iterator->rewind() ensures that the internal structure starts from the beginning.
$iterator->valid() returns true in this example.
$iterator->current() returned value is stored in $value.
$iterator->key() returned value is stored in $key.
$iterator->next() advances to the next element in the internal structure.
$iterator->valid() returns false and the loop is aborted.
The next example illustrates a PHP class that implements the Traversable interface, which could be wrapped in an IteratorIterator class to act upon the data before it is returned to the foreach loop. The usage together with the MYSQLI_USE_RESULT constant allows PHP scripts to iterate result sets with billions of rows with very little memory usage. These features are not exclusive to PHP nor to its MySQL class implementations (e.g. the PDOStatement class implements the Traversable interface as well).
mysqli_report(MYSQLI_REPORT_ERROR | MYSQLI_REPORT_STRICT);
$mysqli = new \mysqli('host.example.com', 'username', 'password', 'database_name');
// The \mysqli_result class that is returned by the method call implements the internal Traversable interface.
foreach ($mysqli->query('SELECT `a`, `b`, `c` FROM `table`', MYSQLI_USE_RESULT) as $row) {
// Act on the returned row, which is an associative array.
}
Python
Iterators in Python are a fundamental part of the language and in many cases go unseen as they are implicitly used in the for (foreach) statement, in list comprehensions, and in generator expressions. All of Python's standard built-in collection types support iteration, as well as many classes that are part of the standard library. The following example shows typical implicit iteration over a sequence:
for value in sequence:
    print(value)
Python dictionaries (a form of associative array) can also be directly iterated over, when the dictionary keys are returned; or the items() method of a dictionary can be iterated over where it yields corresponding key,value pairs as a tuple:
for key in dictionary:
    value = dictionary[key]
    print(key, value)

for key, value in dictionary.items():
    print(key, value)
Iterators however can be used and defined explicitly. For any iterable sequence type or class, the built-in function iter() is used to create an iterator object. The iterator object can then be iterated with the next() function, which uses the __next__() method internally, which returns the next element in the container. (The previous statement applies to Python 3.x. In Python 2.x, the next() method is equivalent.) A StopIteration exception will be raised when no more elements are left. The following example shows an equivalent iteration over a sequence using explicit iterators:
it = iter(sequence)
while True:
    try:
        value = it.next()  # in Python 2.x
        value = next(it)   # in Python 3.x
    except StopIteration:
        break
    print(value)
Any user-defined class can support standard iteration (either implicit or explicit) by defining an __iter__() method that returns an iterator object. The iterator object then needs to define a __next__() method that returns the next element.
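A minimal sketch of such a user-defined class (the class name and behaviour are invented for illustration):
class Countdown:
    def __init__(self, start):
        self.remaining = start

    def __iter__(self):            # returns the iterator object itself
        return self

    def __next__(self):            # returns the next element, or signals the end
        if self.remaining <= 0:
            raise StopIteration
        self.remaining -= 1
        return self.remaining + 1

for n in Countdown(3):
    print(n)   # prints 3, 2, 1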
Python's generators implement this iteration protocol.
Raku
Iterators in Raku are a fundamental part of the language, although usually users do not have to care about iterators. Their usage is hidden behind iteration APIs such as the for statement, map, grep, list indexing with .[$idx], etc.
The following example shows typical implicit iteration over a collection of values:
my @values = 1, 2, 3;
for @values -> $value {
say $value
}
# OUTPUT:
# 1
# 2
# 3
Raku hashes can also be directly iterated over; this yields key-value Pair objects. The kv method can be invoked on the hash to iterate over the key and values; the keys method to iterate over the hash's keys; and the values method to iterate over the hash's values.
my %word-to-number = 'one' => 1, 'two' => 2, 'three' => 3;
for %word-to-number -> $pair {
say $pair;
}
# OUTPUT:
# three => 3
# one => 1
# two => 2
for %word-to-number.kv -> $key, $value {
say "$key: $value"
}
# OUTPUT:
# three: 3
# one: 1
# two: 2
for %word-to-number.keys -> $key {
say "$key => " ~ %word-to-number{$key};
}
# OUTPUT:
# three => 3
# one => 1
# two => 2
Iterators however can be used and defined explicitly. For any iterable type, there are several methods that control different aspects of the iteration process. For example, the iterator method is supposed to return an Iterator object, and the pull-one method is supposed to produce and return the next value if possible, or return the sentinel value IterationEnd if no more values could be produced. The following example shows an equivalent iteration over a collection using explicit iterators:
my @values = 1, 2, 3;
my $it := @values.iterator; # grab iterator for @values
loop {
my $value := $it.pull-one; # grab iteration's next value
last if $value =:= IterationEnd; # stop if we reached iteration's end
say $value;
}
# OUTPUT:
# 1
# 2
# 3
All iterable types in Raku compose the Iterable role, Iterator role, or both. The Iterable is quite simple and only requires the iterator to be implemented by the composing class. The Iterator is more complex and provides a series of methods such as pull-one, which allows for a finer operation of iteration in several contexts such as adding or eliminating items, or skipping over them to access other items. Thus, any user-defined class can support standard iteration by composing these roles and implementing the iterator and/or pull-one methods.
The DNA class represents a DNA strand and implements the iterator by composing the Iterable role. The DNA strand is split into a group of trinucleotides when iterated over:
subset Strand of Str where { .match(/^^ <[ACGT]>+ $$/) and .chars %% 3 };
class DNA does Iterable {
has $.chain;
method new(Strand:D $chain) {
self.bless: :$chain
}
method iterator(DNA:D:){ $.chain.comb.rotor(3).iterator }
};
for DNA.new('GATTACATA') {
.say
}
# OUTPUT:
# (G A T)
# (T A C)
# (A T A)
say DNA.new('GATTACATA').map(*.join).join('-');
# OUTPUT:
# GAT-TAC-ATA
The Repeater class composes both the Iterable and Iterator roles:
class Repeater does Iterable does Iterator {
has Any $.item is required;
has Int $.times is required;
has Int $!count = 1;
multi method new($item, $times) {
self.bless: :$item, :$times;
}
method iterator { self }
method pull-one(--> Mu){
if $!count <= $!times {
$!count += 1;
return $!item
}
else {
return IterationEnd
}
}
}
for Repeater.new("Hello", 3) {
.say
}
# OUTPUT:
# Hello
# Hello
# Hello
Ruby
Ruby implements iterators quite differently; all iterations are done by means of passing callback closures to container methods - this way Ruby not only implements basic iteration but also several patterns of iteration like function mapping, filters and reducing. Ruby also supports an alternative syntax for the basic iterating method each, the following three examples are equivalent:
(0...42).each do |n|
puts n
end
...and...
for n in 0...42
puts n
end
or even shorter
42.times do |n|
puts n
end
Ruby can also iterate over fixed lists by using Enumerators and either calling their #next method or doing a for each on them, as above.
Rust
Rust makes use of external iterators throughout the standard library, including in its for loop, which implicitly calls the next() method of an iterator until it is consumed. The most basic for loop for example iterates over a Range type:
for i in 0..42 {
println!("{}", i);
}
// Prints the numbers 0 to 41
Specifically, the for loop will call a value's into_iter() method, which returns an iterator that in turn yields the elements to the loop. The for loop (or indeed, any method that consumes the iterator), proceeds until the next() method returns a None value (iterations yielding elements return a Some(T) value, where T is the element type).
All collections provided by the standard library implement the IntoIterator trait (meaning they define the into_iter() method). Iterators themselves implement the Iterator trait, which requires defining the next() method. Furthermore, any type implementing Iterator is automatically provided an implementation for IntoIterator that returns itself.
Iterators support various adapters (map(), filter(), skip(), take(), etc.) as methods provided automatically by the Iterator trait.
Users can create custom iterators by creating a type implementing the Iterator trait. Custom collections can implement the IntoIterator trait and return an associated iterator type for their elements, enabling their use directly in for loops. Below, the Fibonacci type implements a custom, unbounded iterator:
struct Fibonacci(u64, u64);
impl Fibonacci {
pub fn new() -> Self {
Self(0, 1)
}
}
impl Iterator for Fibonacci {
type Item = u64;
fn next(&mut self) -> Option<Self::Item> {
let next = self.0;
self.0 = self.1;
self.1 = self.0 + next;
Some(next)
}
}
let fib = Fibonacci::new();
for n in fib.skip(1).step_by(2).take(4) {
println!("{n}");
}
// Prints 1, 2, 5, and 13
See also
References
External links
Java's Iterator, Iterable and ListIterator Explained
.NET interface
Article "Understanding and Using Iterators" by Joshua Gatcomb
Article "A Technique for Generic Iteration and Its Optimization" (217 KB) by Stephen M. Watt
Iterators
Boost C++ Iterator Library
Java interface
PHP: Object Iteration
STL Iterators
What are iterators? - Reference description
Articles with example C Sharp code
Articles with example C++ code
Articles with example Java code
Articles with example PHP code
Articles with example Python (programming language) code
Articles with example Ruby code
Iteration in programming
Object (computer science)
Abstract data types | Iterator | Mathematics | 7,804 |
27,633,526 | https://en.wikipedia.org/wiki/Biological%20plausibility | In epidemiology and biomedicine, biological plausibility is the proposal of a causal association—a relationship between a putative cause and an outcome—that is consistent with existing biological and medical knowledge.
Biological plausibility is one component of a method of reasoning that can establish a cause-and-effect relationship between a biological factor and a particular disease or adverse event. It is also an important part of the process of evaluating whether a proposed therapy (drug, vaccine, surgical procedure, etc.) has a real benefit to a patient. This concept has application to many controversial public affairs debates, such as that over the causes of adverse vaccination outcomes.
Biological plausibility is an essential element of the intellectual background of epidemiology. The term originated in the seminal work of determining the causality of smoking-related disease (The Surgeon General's Advisory Committee on Smoking and Health [1964]).
Applications
Disease and adverse event causality
It is generally agreed that to be considered "causal", the association between a biological factor and a disease (or other bad outcome) should be biologically coherent. That is to say, it should be plausible and explicable biologically according to the known facts of the natural history and biology of the disease in question.
Other important criteria in evaluations of disease and adverse event causality include consistency, strength of association, specificity and a meaningful temporal relationship. These are known collectively as the Bradford-Hill criteria, after the great English epidemiologist who proposed them in 1965. However, Austin Bradford Hill himself de-emphasized "plausibility" among the other criteria:
Treatment outcomes
The preliminary research leading up to a randomized clinical trial (RCT) of a drug or biologic has been termed "plausibility building". This involves the gathering and analysis of biochemical, tissue or animal data which are eventually found to point to a mechanism of action or to demonstrate the desired biological effect. This process is said to confer biological plausibility. Since large, definitive RCTs are extremely expensive and labor-intensive, only sufficiently promising therapies are thought to merit the attention and effort of final confirmation (or refutation) in them.
In distinction to biological plausibility, clinical data from epidemiological studies, case reports, case series and small, formal open or controlled clinical trials may confer clinical plausibility. According to the strictest criteria, a therapy is sufficiently scientifically plausible to merit the time and expense of definitive testing only if it is either biologically or clinically plausible. It has been observed that, despite its importance, biological plausibility is lacking for most complementary and alternative medicine therapies.
References
Epidemiology
Clinical trials | Biological plausibility | Environmental_science | 559 |
24,631,606 | https://en.wikipedia.org/wiki/Galerina%20patagonica | Galerina patagonica is a species of agaric fungus in the family Hymenogastraceae. First described by mycologist Rolf Singer in 1953, it has a Gondwanan distribution, and is found in Australia, New Zealand, and Patagonia (South America), where it grows on rotting wood.
The fungus contains a laccase enzyme that has been investigated for possible use in the bioremediation of chlorophenol-polluted environments.
The toxicity of Galerina patagonica is unknown. However, it is phylogenetically nested within the Galerina marginata species complex, and thus likely contains deadly amatoxins.
References
External links
Hymenogastraceae
Fungi described in 1954
Fungi of Australia
Fungi of New Zealand
Fungi of South America
Taxa named by Rolf Singer
Fungus species | Galerina patagonica | Biology | 170 |
10,477,988 | https://en.wikipedia.org/wiki/Skoda%E2%80%93El%20Mir%20theorem | The Skoda–El Mir theorem is a theorem of complex geometry,
stated as follows:
Theorem (Skoda, El Mir, Sibony). Let X be a complex manifold, and
E a closed complete pluripolar set in X. Consider a closed positive current Θ on X ∖ E
which is locally integrable around E. Then the trivial extension of Θ to X is closed on X.
Notes
References
J.-P. Demailly, L² vanishing theorems for positive line bundles and adjunction theory, Lecture Notes of a CIME course on "Transcendental Methods of Algebraic Geometry" (Cetraro, Italy, July 1994)
Complex manifolds
Several complex variables
Theorems in geometry | Skoda–El Mir theorem | Mathematics | 142 |
5,678,338 | https://en.wikipedia.org/wiki/Flux%20limiter | Flux limiters are used in high resolution schemes – numerical schemes used to solve problems in science and engineering, particularly fluid dynamics, described by partial differential equations (PDEs). They are used in high resolution schemes, such as the MUSCL scheme, to avoid the spurious oscillations (wiggles) that would otherwise occur with high order spatial discretization schemes due to shocks, discontinuities or sharp changes in the solution domain. Use of flux limiters, together with an appropriate high resolution scheme, make the solutions total variation diminishing (TVD).
Note that flux limiters are also referred to as slope limiters because they both have the same mathematical form, and both have the effect of limiting the solution gradient near shocks or discontinuities. In general, the term flux limiter is used when the limiter acts on system fluxes, and slope limiter is used when the limiter acts on system states (like pressure, velocity etc.).
How they work
The main idea behind the construction of flux limiter schemes is to limit the spatial derivatives to realistic values – for scientific and engineering problems this usually means physically realisable and meaningful values. They are used in high resolution schemes for solving problems described by PDEs and only come into operation when sharp wave fronts are present. For smoothly changing waves, the flux limiters do not operate and the spatial derivatives can be represented by higher order approximations without introducing spurious oscillations. Consider the 1D semi-discrete scheme below,

 d u_i / d t + (1 / Δx_i) [ F(u_{i+1/2}) − F(u_{i−1/2}) ] = 0,

where F(u_{i+1/2}) and F(u_{i−1/2}) represent edge fluxes for the i-th cell. If these edge fluxes can be represented by low and high resolution schemes, then a flux limiter can switch between these schemes depending upon the gradients close to the particular cell, as follows,

 F(u_{i+1/2}) = f_{i+1/2}^low − φ(r_i) ( f_{i+1/2}^low − f_{i+1/2}^high ),
 F(u_{i−1/2}) = f_{i−1/2}^low − φ(r_{i−1}) ( f_{i−1/2}^low − f_{i−1/2}^high ),

where
 f^low is the low resolution flux,
 f^high is the high resolution flux,
 φ(r) is the flux limiter function, and
 r represents the ratio of successive gradients on the solution mesh, i.e.,

 r_i = (u_i − u_{i−1}) / (u_{i+1} − u_i).
The limiter function is constrained to be greater than or equal to zero, i.e., φ(r) ≥ 0. Therefore, when the limiter is equal to zero (sharp gradient, opposite slopes or zero gradient), the flux is represented by a low resolution scheme. Similarly, when the limiter is equal to 1 (smooth solution), it is represented by a high resolution scheme. The various limiters have differing switching characteristics and are selected according to the particular problem and solution scheme. No particular limiter has been found to work well for all problems, and a particular choice is usually made on a trial and error basis.
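A short Python sketch (not part of the original article) of the gradient ratio r and two classic limiter functions, minmod and superbee; the function names and sample data are illustrative only:
def gradient_ratio(u):
    # r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i), for interior cells only
    return [(u[i] - u[i - 1]) / (u[i + 1] - u[i]) for i in range(1, len(u) - 1)]

def minmod(r):
    return max(0.0, min(1.0, r))

def superbee(r):
    return max(0.0, min(2.0 * r, 1.0), min(r, 2.0))

u = [0.0, 0.1, 0.4, 0.5, 0.7, 1.0]   # sample cell averages (chosen to avoid zero denominators)
for r in gradient_ratio(u):
    print(f"r = {r:.2f}  minmod = {minmod(r):.2f}  superbee = {superbee(r):.2f}")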
Limiter functions
The following are common forms of flux/slope limiter function, :
CHARM [not 2nd order TVD]
HCUS [not 2nd order TVD]
HQUICK [not 2nd order TVD]
Koren – third-order accurate for sufficiently smooth data
minmod – symmetric
monotonized central (MC) – symmetric
Osher
ospre – symmetric
smart [not 2nd order TVD]
superbee – symmetric
Sweby – symmetric
UMIST – symmetric
van Albada 1 – symmetric
van Albada 2 – alternative form [not 2nd order TVD] used on high spatial order schemes
van Leer – symmetric
All the above limiters indicated as being symmetric exhibit the following symmetry property:

 φ(r) / r = φ(1 / r).
This is a desirable property as it ensures that the limiting actions for forward and backward gradients operate in the same way.
Unless indicated to the contrary, the above limiter functions are second order TVD. This means that they are designed such that they pass through a certain region of the solution, known as the TVD region, in order to guarantee stability of the scheme. Second-order, TVD limiters satisfy at least the following criteria:
r ≤ φ(r) ≤ 2r, when 0 ≤ r ≤ 1,
1 ≤ φ(r) ≤ r, when 1 ≤ r ≤ 2,
1 ≤ φ(r) ≤ 2, when r > 2,
φ(1) = 1.
The admissible limiter region for second-order TVD schemes is shown in the Sweby Diagram opposite, and plots showing limiter functions overlaid onto the TVD region are shown below. In this image, plots for the Osher and Sweby limiters have been generated using .
Generalised minmod limiter
An additional limiter that has an interesting form is van Leer's one-parameter family of minmod limiters. It is defined as follows:

 φ_mm(r; θ) = max( 0, min( θ r, (1 + r) / 2, θ ) ),  with θ ∈ [1, 2].

Note: it is most dissipative for θ = 1, when it reduces to the minmod limiter, and is least dissipative for θ = 2.
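A one-line Python sketch of this family, following the definition above (illustrative only):
def minmod_theta(r, theta=1.5):
    # reduces to minmod for theta = 1; least dissipative for theta = 2
    return max(0.0, min(theta * r, (1.0 + r) / 2.0, theta))

print(minmod_theta(0.5), minmod_theta(2.0), minmod_theta(2.0, theta=2.0))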
See also
Godunov's theorem
High resolution scheme
MUSCL scheme
Sergei K. Godunov
Total variation diminishing
Notes
References
Further reading
Computational fluid dynamics
Numerical differential equations | Flux limiter | Physics,Chemistry | 923 |
25,887,651 | https://en.wikipedia.org/wiki/Psilocybe%20brasiliensis | Psilocybe brasiliensis is a species of psilocybin mushroom in the family Hymenogastraceae. Found in Brazil, it was described as new to science in 1978 by Mexican mycologist Gastón Guzmán.
See also
List of Psilocybe species
List of psilocybin mushrooms
References
External links
Entheogens
Fungi described in 1978
Psychoactive fungi
brasiliensis
Psychedelic tryptamine carriers
Fungi of South America
Taxa named by Gastón Guzmán
Fungus species | Psilocybe brasiliensis | Biology | 96 |
1,295,452 | https://en.wikipedia.org/wiki/Gas-generator%20cycle | The gas-generator cycle, also called open cycle, is one of the most commonly used power cycles in bipropellant liquid rocket engines.
Propellant is burned in a gas generator (or "preburner") and the resulting hot gas is used to power the propellant pumps before being exhausted overboard and lost. Because of this loss, this type of engine is termed open cycle.
The gas generator cycle exhaust products pass over the turbine first. Then they are expelled overboard. They can be expelled directly from the turbine, or are sometimes expelled into the nozzle (downstream from the throat) for a small gain in efficiency.
The main combustion chamber does not use these products, which explains the name of the open cycle. The major disadvantage is that the propellant burned in the gas generator contributes little to no thrust, because it is not injected into the main combustion chamber. The major advantage of the cycle is reduced engineering complexity compared to the staged combustion (closed) cycle.
Examples
RD-107, RD-108—Soviet engine type developed in the 1950s, used on R-7 family vehicles including the active Soyuz-2.
F-1—RP-1/LOX engine used on the first stage of Saturn V. Most powerful single combustion chamber liquid-fueled engine ever flown.
J-2—Upper stage LH2/LOX engine developed in the 1960s and used on Saturn V.
RS-27A—American RP-1/LOX engine first flown in 1990.
Vulcain—A family of European first stage engines using LH2/LOX flown on Ariane 5 and Ariane 6.
Merlin—RP-1/LOX engine developed by SpaceX for Falcon 9 and Falcon Heavy, used on both first and second stages.
RS-68—LH2/LOX engine built in the 1990s by Aerojet Rocketdyne. Largest hydrogen-fueled rocket engine ever flown.
CE-20—Indian LH2/LOX engine developed in the 2010s for use on the LVM3 launch vehicle.
YF-20—Chinese N2O4/UDMH engine developed in the 1990s and used on Long March 2, 3, and 4.
TQ-12—LCH4/LOX engine developed by LandSpace. First flew in 2022 on Zhuque-2.
See also
Combustion tap-off cycle
Expander cycle
Pressure-fed engine
Rocket engine
Staged combustion cycle
Turbopump
References
External links
Rocket power cycles
Rocket-Engine Cooling at NASA
Combustion
Rocket engines
Spacecraft propulsion
Thermodynamic cycles | Gas-generator cycle | Chemistry,Technology | 516 |
27,638,406 | https://en.wikipedia.org/wiki/Century%20Flyer | The Century Flyer is a gauge train originally built as an amusement park ride. It is located on the grounds of the Conway Human Development Center, a residential facility that treats patients with developmental disabilities at 150 East Siebenmorgen Road in Conway, Arkansas.
History
This train was built in the early 1950s by National Amusement Devices, a manufacturer of roller coaster cars. This company built several of the Century Flyer train sets for use at amusement parks around the country. This particular train set was sold to the Burns Park "Funland" in North Little Rock, Arkansas in 1957.
In 1959, the train and tracks were sold to a local professional women's organization, who donated them to the Conway Human Development Center (then known as the Arkansas Children's Colony) for use by the patients. One third of a mile of track was laid and two trestles were built.
Over the years, the train and track fell into disrepair. The Central Arkansas Model Railroad Club volunteered to upgrade and refurbish the train and track.
The Century Flyer was listed in the National Register of Historic Places due to its being one of the few remaining trains built by National Amusement Devices still operating.
See also
National Register of Historic Places listings in Faulkner County, Arkansas
References
External links
Central Arkansas Model Railroad Club article on the Century Flyer
Rail transportation on the National Register of Historic Places in Arkansas
Buildings and structures in Conway, Arkansas
Amusement rides
National Register of Historic Places in Faulkner County, Arkansas | Century Flyer | Physics,Technology | 294 |
26,576,301 | https://en.wikipedia.org/wiki/National%20Institute%20of%20Aerospace | The National Institute of Aerospace (NIA) is a non-profit research and graduate education institute headquartered in Hampton, Virginia, near NASA's Langley Research Center.
NIA was formed in 2002 by a consortium of research universities. NIA performs research in a broad range of disciplines including space exploration, systems engineering, nanoscale materials science, flight systems, aerodynamics, air traffic management, aviation safety, planetary and space science, and global climate change.
NIA is headed by Dr. Douglas O. Stanley, who was named interim to the post of president and executive director in July 2012. He succeeded Dr. Robert Lindberg, who became the first President and executive director in October 2003.
Member Institutions
American Institute of Aeronautics and Astronautics (AIAA) Foundation
Georgia Institute of Technology
Hampton University
North Carolina Agricultural and Technical State University
North Carolina State University
University of Maryland
Virginia Polytechnic Institute and State University
University of Virginia
Affiliates
The College of William & Mary
Old Dominion University
Research projects
About 50 full-time researchers are working on projects at NIA.
NIA conducts a broad range of scientific and engineering research sponsored by NASA, other government agencies and the aerospace industry. This work is performed by resident scientists and engineers, faculty, students and consultants in principal areas of investigation including space exploration, systems engineering, materials science, flight systems, aerodynamics, air traffic management, aviation safety, planetary and space science, and global climate change.
Research programs, led by faculty in residence at NIA, serve as the core of the Institute's academic research program. Through NIA's University Research Program, faculty and students at NIA member universities collaborate with NASA research leaders in fundamental investigations in aerospace, mechanical, electrical, and systems engineering; materials science; applied mathematics, meteorology and other related fields.
NIA also collaborates with other research institutions worldwide, including universities, government laboratories, industry and other non-profit institutes to accomplish its research objectives. NIA conducts applied research with and for the aerospace industry. Through NIA, industrial partners can gain access to LaRC personnel, facilities and intellectual property.
Research projects include:
Boron nitride nano tubes (BNNT) research and development for aerospace and public safety applications. In collaboration with NASA Langley, and the Thomas Jefferson National Accelerator Facility, NIA has developed a neutron shielding material using boron-containing nanomaterials, which include boron nanoparticles, boron nitride nanotubes (BNNTs), and boron nitride nano-platelets, as well as the polymer composites thereof. This is proposed for advanced radiation shielding and for shielding against high kinetic energy penetrators (i.e. bulletproof vests, protection against micrometeoroids, etc.)
Field research in the Republic of Tyva in Siberia to validate satellite studies that show longer periods of drought have strengthened the intensity of fires. Amber Soja, NIA Senior Research Scientist based at NASA's Langley Researcher Center, conducted research that has shown that these more destructive fires have started to impair the growth of pine-dominated forests after fires and are causing some pine forest areas to transition to steppe that stores far less carbon from the atmosphere. Likewise, the research shows that the burns cause more destruction when they occur. According to Soja, in other areas where precipitation is limited to begin with, such as the savannas of Australia, longer droughts might actually reduce the number of wildfires because less vegetation would be available to burn.
Education
NIA's graduate program offers M.S. and Ph.D. degrees in the fields of aerospace engineering, mechanical engineering, engineering mechanics, engineering physics, materials science and engineering, electrical engineering, ocean engineering and systems engineering. Degrees are issued through its university partners: Georgia Tech, Hampton University, North Carolina A&T State University, North Carolina State University, the University of Maryland, the University of Virginia, Virginia Tech, Old Dominion University, and the College of William & Mary. Classes are offered on site and through distance education to about 40 graduate students in residence. (Students in residence at NIA are considered in residence at their home university.) NIA also provides Langley Research Center employees the opportunity to pursue a PhD while working.
The faculty comprises Langley Professors who share their time between NIA and their home schools:
Dr. Mool C. Gupta, University of Virginia
Dr. James Hubbard, Jr., University of Maryland
Dr. Christopher Fuller, Virginia Tech
Dr. William Edmonson, North Carolina A&T State University
Dr. Fuh-Gwo Yuan, North Carolina State University
Dr. Alan Wilhite, Georgia Institute of Technology
NIA Research and Innovation Laboratories
The NIA Research and Innovation Laboratories opened in early 2012. The facility, located at 1100 Exploration Way in Hampton, Virginia, is a 14-laboratory, 60,000-square-foot building that houses research and development facilities including a wind tunnel, an unmanned aerial vehicles structures lab and a boron nanotube development lab, among other facilities. The new facility also hosts the Peninsula Technology Incubator (PTI), a subsidiary of NIA, which encourages entrepreneurship.
See also
NASA RealWorld-InWorld Engineering Design Challenge
National Aeronautics and Space Administration
References
External links
NIA website
Langley Research Center
Aerospace
Virginia Tech
Aerospace research institutes
Aviation research institutes | National Institute of Aerospace | Physics | 1,079 |
2,783,083 | https://en.wikipedia.org/wiki/Ville%20Contemporaine | The Ville contemporaine (, Contemporary City) was an unrealized utopian planned community intended to house three million inhabitants designed by the French-Swiss architect Le Corbusier in 1922.
Plan
The centerpiece of this plan was a group of sixty-story cruciform skyscrapers built on steel frames and encased in curtain walls of glass. The skyscrapers housed both offices and the flats of the wealthiest inhabitants. These skyscrapers were set within large, rectangular park-like green spaces.
At the center of the planned city was a transportation hub which housed depots for buses and trains as well as highway intersections and at the top, an airport.
Le Corbusier segregated pedestrian circulation paths from the roadways and glorified the automobile as a means of transportation. Moving outward from the central skyscrapers, smaller multi-story zigzag blocks, set in green space far back from the street, housed the proletarian workers.
Critics
Robert Hughes spoke of Le Corbusier's city planning in his series The Shock of the New:
"...the car would abolish the human street, and possibly the human foot. Some people would have aeroplanes too. The one thing no one would have is a place to bump into each other, walk the dog, strut, one of the hundred random things that people do ... being random was loathed by Le Corbusier ... its inhabitants surrender their freedom of movement to the omnipresent architect."
See also
Experimental Prototype Community of Tomorrow (concept)
Ville Radieuse
References
External links
Drawings at the Le Corbusier Foundation
Le Corbusier
Unbuilt buildings and structures
Proposed skyscrapers
Architecture related to utopias | Ville Contemporaine | Engineering | 367 |
67,070,255 | https://en.wikipedia.org/wiki/Dual-rotor%20motor | A dual-rotor motor is a motor having two rotors within the same motor housing. This rotor arrangement can increase power volume density, efficiency, and reduce cogging torque.
Stator on the outside
In one arrangement, the motor has an ordinary stator. A squirrel-cage rotor connected to the output shaft rotates within the stator at slightly less than the speed of the rotating field produced by the stator. Within the squirrel-cage rotor is a freely rotating permanent-magnet rotor, which is locked in with the rotating field from the stator. The effect of the inner rotor is to reinforce the field from the stator.
Because the rotor slips behind the rotating magnetic field, inducing a current in the rotor, this type of motor meets the definition of an induction motor.
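As a minimal illustration of the slip relationship described above (the pole count, line frequency and rotor speed below are arbitrary example values, not figures from the source):

```python
def synchronous_speed_rpm(line_freq_hz: float, poles: int) -> float:
    """Speed of the stator's rotating magnetic field, in rpm."""
    return 120.0 * line_freq_hz / poles

def slip(sync_rpm: float, rotor_rpm: float) -> float:
    """Fractional amount by which the rotor lags the rotating field."""
    return (sync_rpm - rotor_rpm) / sync_rpm

# Illustrative values only: a 4-pole machine on 60 Hz mains.
n_sync = synchronous_speed_rpm(60.0, 4)       # 1800 rpm
n_rotor = 1750.0                               # hypothetical loaded rotor speed
print(f"slip = {slip(n_sync, n_rotor):.3f}")   # ~0.028; non-zero slip is what induces rotor current
```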
Stator between rotors
In another arrangement, one rotor is inside the stator with a second rotor on the outside of the stator. The photo labelled FIG. 8 is from a patent application. It shows two rotors assembled into a single unit, with eight permanent magnets attached to the outer surface of the inner rotor, and eight to the inner surface of the outer rotor.
Vendors are working on both axial- and radial-flux configurations. In one axial-flux design, the stator is a disk that sits between two symmetric rotor disks.
References
Electric motors | Dual-rotor motor | Technology,Engineering | 265 |
42,868,332 | https://en.wikipedia.org/wiki/Candida%20tolerans | Candida tolerans is an ascomycetous yeast species first isolated from Australian Hibiscus flowers. It is small and a pseudomycelium is formed. The carbon and nitrogen assimilation pattern is similar to that of Zygosaccharomyces rouxii. Its type strain is UWO (PS) 98-115.5 (CBS 8613).
References
Further reading
tolerans
Yeasts
Fungi described in 1999
Fungus species | Candida tolerans | Biology | 94 |
40,263,841 | https://en.wikipedia.org/wiki/Global%20Alliance%20on%20Health%20and%20Pollution | GAHP (Global Alliance on Health and Pollution) is a network of international and national level agencies committed to a collaborative, multi-sectoral approach to address the global pollution crisis and the resulting health and economic impacts. GAHP’s overall goal is to reduce death and illness caused by all forms of toxic pollution, including air, water, soil and chemical wastes especially in low and middle-income countries.
History
GAHP is a collaborative body made up of more than 60 members and dozens of observers that advocates for resources and solutions to pollution problems. GAHP was formed because international and national-level actors and agencies recognized that a collaborative, multi-stakeholder, multi-sectoral approach is necessary to deal with the global pollution crisis and the resulting health and economic impacts.
In 2012, Pure Earth initiated the alliance with representatives from the World Bank, UNEP, UNDP, UNIDO, Asian Development Bank, the European Commission, and Ministries of Environment and Health of many low and middle-income countries to formulate strategies to address pollution and health at scale. GAHP incorporated as a foundation in 2019 in Geneva, Switzerland.
GAHP focuses its efforts in two main areas: advocacy and awareness-raising, and country-specific support. GAHP builds public, technical and financial support to address pollution globally by promoting scientific research, raising awareness and tracking progress. GAHP assists low- and middle-income countries in prioritizing and addressing pollution and health problems through Health and Pollution Action Plans.
In October 2017, GAHP published the Lancet Commission on Pollution and Health in collaboration with The Lancet. The commission "addresses the full health and economic costs of air, water, and soil pollution. Through analyses of existing and emerging data, the Commission reveals pollution’s severe and underreported contribution to the Global Burden of Disease. It uncovers the economic costs of pollution to low-income and middle-income countries. The Commission will inform key decision makers around the world about the burden that pollution places on health and economic development, and about available cost-effective pollution control solutions and strategies."
The report's findings were distributed widely through media outlets, reaching an estimated audience of over 2 billion people. The work of the Commission was also covered extensively through special partnerships with high-profile media organizations.
In addition, GAHP updates findings from The Lancet Commission on Pollution and Health, and provides a ranking of pollution deaths on a global, regional and country level with Pollution and Health Metrics: Global, Regional and Country Analysis reports.
Pollution remains the world’s largest environmental threat to human health, responsible in 2017 for 15% of all deaths globally, and 275 million Disability-Adjusted Life Years. The 2019 report, which uses the most recent Global Burden of Disease data from the Institute of Health Metrics Evaluation, underscores the extent and severity of harm caused by air, water, and occupational pollution.
GAHP members include
Pure Earth (formerly known as Blacksmith Institute) (GAHP Secretariat)
Cyrus R. Vance Center for International Justice
Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH
European Commission
Fundación Chile
Intendencia de Montevideo, Government of Uruguay
Inter American Development Bank (BID)
Komite Penghapusan Bensin Bertimbel (KPBB – Indonesian NGO)
La Agencia de Protección Ambiental de la Ciudad de Buenos Aires, Government of Argentina
Ministry of Health, Government of the Republic of Tajikistan
Ministry of Environment, Government of Indonesia
Ministry of Environment, Government of Madagascar
Ministry of Environment, Government of Mexico (SEMARNAT)
Ministry of Environment, Government of Perú (MINAM)
Department of Environment and Natural Resources, Government of the Philippines (DENR)
Ministry of Environment, Government of Senegal
Ministry of Environment, Government of Uruguay, DINAMA
United Nations Development Programme (UNDP)
United Nations Environment Program (UNEP)
United Nations Industrial Development Organization (UNIDO)
World Bank (WB)
References
Chemical safety
Environmental health organizations
Environmental justice organizations
International medical and health organizations | Global Alliance on Health and Pollution | Chemistry | 803 |
51,138,349 | https://en.wikipedia.org/wiki/Magma%20ocean | Magma oceans are vast fields of surface magma that exist during periods of a planet's or some natural satellite's accretion when the celestial body is completely or partly molten.
In the early Solar System, magma oceans were formed by the melting of planetesimals and by planetary impacts. Small planetesimals were melted by heat from the radioactive decay of aluminium-26; as planets grew larger, the energy was instead supplied by giant impacts with other planetary bodies. Magma oceans are integral to planetary formation, as they facilitate the formation of a core through metal segregation and of an atmosphere and hydrosphere through degassing. Evidence exists to support the existence of magma oceans on both the Earth and the Moon. Magma oceans may survive for millions to tens of millions of years, interspersed with periods of relatively mild conditions.
Magma ocean heat sources
The sources of the energy required for the formation of magma oceans in the early Solar System were the radioactive decay of aluminium-26, accretionary impacts, and core formation. The abundance and short half-life of aluminium-26 allowed it to serve as one of the heat sources for the melting of planetesimals. With aluminium-26 as a heat source, planetesimals that accreted within 2 Ma of the formation of the first solids in the Solar System could melt. Melting in the planetesimals began in the interior, and the interior magma ocean transported heat via convection. Planetesimals larger than 20 km in radius that accreted within 2 Ma are expected to have melted, although not completely.
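A back-of-the-envelope sketch (not from the source; it assumes only the standard ~0.72 Myr half-life of aluminium-26) of why the ~2 Ma accretion window matters: the later a planetesimal accretes, the smaller the fraction of live aluminium-26 it inherits as a heat source.

```python
import math

AL26_HALF_LIFE_MYR = 0.72  # approximate half-life of aluminium-26, in millions of years

def al26_fraction_remaining(time_myr: float) -> float:
    """Fraction of the initial Al-26 inventory still live after time_myr million years."""
    return math.exp(-math.log(2) * time_myr / AL26_HALF_LIFE_MYR)

# Planetesimals accreting at 0, 2 and 4 Ma after the first solids formed:
for t in (0.0, 2.0, 4.0):
    print(f"t = {t:.0f} Ma: {al26_fraction_remaining(t):.1%} of Al-26 remains")
# By ~2 Ma only about 15% of the Al-26 is left, and by ~4 Ma only about 2%,
# which is why late-accreting planetesimals cannot be melted by Al-26 alone.
```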
The kinetic energy provided by accretionary impacts and the loss of potential energy from a planet during core formation are also large heat sources for planet melting. Core formation, also referred to as metal-silicate differentiation, is the separation of metallic components from silicate in the magma, which sink to form a planetary core. Accretionary impacts that produce heat for the melting of planet embryos and large terrestrial planets have an estimated timescale of tens to hundreds of millions of years. A prime example is the Moon-forming impact on Earth, which is thought to have formed a magma ocean with a depth of up to 2000 km. The energy of accretionary impacts primarily melts the exterior of the planetary body, while the potential energy released by core differentiation and the sinking of metals melts the interior.
Lunar magma ocean
The findings of the Apollo missions provided the first evidence for the existence of a magma ocean on the Moon. The rocks in the returned samples were found to be anorthosite, composed largely of anorthite, a calcium-rich plagioclase feldspar that is less dense than magma. This discovery gave rise to the hypothesis that the rocks formed by flotation of plagioclase to the surface of a magma ocean during the early life stages of the Moon. Additional evidence for a lunar magma ocean comes from the sources of mare basalts and KREEP (K for potassium, REE for rare-earth elements, and P for phosphorus). The existence of these components within the mostly anorthositic crust of the Moon is consistent with the solidification of a lunar magma ocean. Furthermore, the abundance of the trace element europium within the Moon's crust suggests that it was absorbed from the magma ocean, leaving europium deficits in the source regions of the mare basalts. The lunar magma ocean was initially 200–300 km thick and the magma reached a temperature of about 2000 K. After the early stages of the Moon's accretion, the magma ocean cooled by convection in the planet's interior.
Earth's magma ocean
During its formation, the Earth likely suffered a series of magma oceans resulting from giant impacts, the final one being the Moon-forming impact. The best chemical evidence for the existence of magma oceans on Earth is the abundance of certain siderophile elements in the mantle that record magma ocean depths of approximately 1000 km during accretion. The scientific evidence to support the existence of magma oceans on early Earth is not as developed as the evidence for the Moon because of the recycling of the Earth's crust and mixing of the mantle. Unlike Earth, indications of a magma ocean on the Moon such as the flotation crust, elemental components in rocks, and KREEP have been preserved throughout its lifetime.
Today Earth's outer core is a liquid layer, composed mostly of molten iron and molten nickel, that lies above Earth's solid inner core and below its mantle. This layer may be considered an ocean of molten iron and nickel inside Earth.
See also
Lava planet – hypothetical type of planet with a surface dominated by molten rock
Hadean
Chondrite
Planetary differentiation
References
Planetary science
Geology of the Moon
Hadean volcanism
Geochemistry
Planetary geology | Magma ocean | Chemistry,Astronomy | 991 |
68,637,122 | https://en.wikipedia.org/wiki/Book%20of%20the%20Zodiac | The Book of the Zodiac (; Modern Mandaic: Asfar Malwāši) is a Mandaean text. It covers Mandaean astrology in great detail. The book is used to obtain a Mandaean's baptismal name (malwasha). It is also an important source on Mandaean numerology.
Manuscripts and translations
An English translation of the text, based on Manuscript 31 of the Drower Collection (DC 31), was published by E. S. Drower in 1949. The manuscript is a kurasa, or unbound manuscript consisting of loose sheets.
Buckley has also located a privately held copy of the Book of the Zodiac dating from 1919, which belonged to Lamea Abbas Amara in San Diego.
There is also a manuscript of the Book of the Zodiac from 1789 CE, currently held at the Bibliothèque nationale in Paris, which was used by Drower and may have also been used by Nicolas Siouffi.
Contents
Drower's manuscript (DC 31) consists of 289 pages in Mandaic. There are 20 individual books or sections, which are:
Book 1: The Book of the Signs of the Zodiac for Men
Book 2: The Book of the Signs of the Zodiac for Women
Book 3: The Book of Stars
Book 4: lists of astrological terms and calculations
Book 5: The Book of the Moon
Book 6: charms against evil spirits
Book 7: charms against evil spirits
Book 8: The Days of the Month
Book 9: illnesses
Book 10: astrological information
Book 11: selecting days for certain activities
Book 12: The Opening of a Door
Book 13: predictions
Book 14: predictions
Book 15: predictions
Book 16: predictions
Book 17: geographical regions governed by the planets and zodiac signs
Book 18: predictions
Book 19: transits of Saturn, halos of the sun, meteors and comets, and rainfall
Book 20: meteorology
There is also an appendix (labeled as Part II in Drower's text) that discusses omens, predictions, remedies, eclipses, and other topics.
See also
Ginza Rabba
Mandaean Book of John
References
External links
Code Sabéen 25 from Wikimedia Commons
Sfar Mulwasha (Mandaic text from the Mandaean Network)
Sfar Mulwasha (English translation)
Mandaean texts
Astrological texts
Numerology | Book of the Zodiac | Mathematics | 506 |
70,996,357 | https://en.wikipedia.org/wiki/Ground%20deicing%20of%20aircraft | In aviation, ground deicing of aircraft is the process of removing surface frost, ice or frozen contaminants on aircraft surfaces before an aircraft takes off. This prevents even a small amount of surface frost or ice on aircraft surfaces from severely impacting flight performance. Frozen contaminants on surfaces can also break off in flight, damaging engines or control surfaces.
Major airports in climates conducive to ground icing will have some kind of ground deicing systems in place. Ultimately it is the pilot-in-command's responsibility to ensure that all necessary deicing processes are carried out before departure.
Planes are often equipped with ice protection systems or icephobic surface coatings to control in-flight atmospheric icing; however, those are not considered substitutes for adequate ground based deicing.
Purpose
Aircraft flight characteristics are extremely sensitive to the slightest amount of surface irregularity, in particular that caused by frost, ice, or snow. These may interrupt smooth airflow over surfaces; add weight to the airframe; interfere with control surfaces; or come loose in flight and cause impact damage to the airframe or engines. A layer as thin as 0.4 mm (1/64 inch) can have a significant effect on lift, drag, and control.
Ground icing can occur even when the ambient temperature is above freezing, through a process known as "cold soaking". In this situation, fuel in the wing tanks is below freezing and chills the wing skin, so that moisture condenses on the wings and subsequently freezes.
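A small illustrative sketch of the cold-soak condition (using the Magnus dew-point approximation; the temperatures and humidity are made-up example values, not operational criteria): frost can form on a cold-soaked wing when the skin is at or below both freezing and the dew point of the surrounding air, even though the air itself is above freezing.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate dew point (deg C) via the Magnus formula."""
    a, b = 17.27, 237.7
    gamma = (a * temp_c) / (b + temp_c) + math.log(rel_humidity_pct / 100.0)
    return b * gamma / (a - gamma)

def cold_soak_frost_risk(ambient_c: float, rel_humidity_pct: float, wing_skin_c: float) -> bool:
    """Frost risk when the wing skin is at/below freezing and at/below the dew point."""
    return wing_skin_c <= 0.0 and wing_skin_c <= dew_point_c(ambient_c, rel_humidity_pct)

# Hypothetical example: +8 degC air at 80% humidity, wing skin cold-soaked to -5 degC by the fuel.
print(cold_soak_frost_risk(8.0, 80.0, -5.0))  # True -> frost can form despite above-freezing air
```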
Many aircraft accidents have been attributed by post-accident investigations to aircraft operators' failure to remove surface frost, ice, and/or snow prior to takeoff. Such accidents include:
1946 Railway Air Services Dakota crash
Air Florida Flight 90
Air Ontario Flight 1363
Arrow Air Flight 1285R
Continental Airlines Flight 1713
Scandinavian Airlines System Flight 751
West Wind Aviation Flight 282
Process
Before every flight the pilot-in-command of an aircraft is responsible for inspecting the airframe for frost, ice, and snow. This can be done visually or by means of specially designed Ground Ice Detection Systems.
If frost, ice, or snow contamination is observed or suspected, the aircraft must undergo a deicing procedure before takeoff, using one or more of the methods listed below.
A complicating factor is that ambient atmospheric conditions may be such that contamination starts to build up again immediately after deicing is complete. For example, it might be snowing. The deicing process must take this into account to ensure that the aircraft remains free of contamination until it takes off. Typically this involves adding a viscous "anti-icing" fluid which will remain on the wings and immediately melt falling snow.
The time between deicing/anti-icing treatment and takeoff is called the "holdover time". Various aviation authorities (e.g., the United States' Federal Aviation Administration (FAA) and Transport Canada) publish detailed tables giving the holdover time for various combinations of deicing fluids and atmospheric conditions.
Holdover times can be short, sometimes just a few minutes, so deicing of commercial passenger aircraft is usually done after the passengers are aboard and the aircraft is otherwise ready for departure. That way the aircraft can depart immediately after deicing is complete.
If an aircraft exceeds its holdover time, it must be deiced again. If an anti-icing fluid was used, that fluid will now be considered "failed" and must be removed before re-application. Anti-icing fluids must not be applied over a previous failed layer.
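A minimal bookkeeping sketch of the holdover-time check described above. The fluid types and minute values here are placeholders, not real FAA or Transport Canada data; operational values must come from the current published tables.

```python
from datetime import datetime, timedelta

# Placeholder holdover times in minutes, keyed by (fluid_type, condition).
HOLDOVER_MIN = {
    ("type_i", "frost"): 45,
    ("type_i", "snow"): 6,
    ("type_iv", "snow"): 30,
}

def holdover_expired(fluid: str, condition: str,
                     treatment_time: datetime, takeoff_time: datetime) -> bool:
    """True if the aircraft must be deiced again before takeoff."""
    allowed = timedelta(minutes=HOLDOVER_MIN[(fluid, condition)])
    return takeoff_time - treatment_time > allowed

start = datetime(2024, 1, 15, 9, 0)
print(holdover_expired("type_i", "snow", start, start + timedelta(minutes=10)))  # True: re-deice
```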
Because aircraft icing is such an important safety issue, most aviation authorities and commercial aircraft operators require detailed management plans and record keeping to ensure that the process is done in a safe, organized, timely, and repeatable fashion.
Methods
Fluid-based
In most cases ground-based deicing is accomplished by spraying the aircraft with an aircraft deicing fluid just prior to departure. For commercial aircraft this fluid is usually applied to contaminated surfaces using a specially designed machine. For smaller aircraft a handheld spray applicator may suffice.
Deicing fluids are typically based on propylene glycol or ethylene glycol, which freeze at a lower temperature than water. There are several different types of fluid, falling into two basic categories:
Deicing fluids remove existing frozen contaminants. These are generally non-viscous, and may be heated.
Anti-icing fluids provide short term protection against recontamination. These are generally thickened fluids that remain on control surfaces until the aircraft is accelerating down the runway. They are generally applied cold.
In some cases both types of fluid are applied to aircraft, a process known as two-step deicing.
Glycol-based deicing fluids are toxic, and environmental concerns in the use of such fluids include increased salinity of groundwater, when de-icing fluids are discharged into soil, and toxicity to humans and other mammals. Thus, research into non-toxic alternative deicing fluids is ongoing.
Hot water
It may be possible to deice an aircraft using heated water if the ambient weather conditions are appropriate. Depending on circumstances this may be followed by an application of type I deicing fluid to prevent re-freezing.
Forced air
Forced air can be used to blow off accumulated snow provided precautions are taken to avoid damaging aircraft components.
If the outside air temperature is higher than freezing, unheated forced air can also be used for removing frost and ice, perhaps in conjunction with a subsequent application of deicing fluid.
Heated forced air is not generally used because it may result in the melted contamination refreezing on aircraft surfaces and/or damage to aircraft components.
The use of forced air for deicing is a maturing technology. Hybrid systems using heated air along with deicing fluids are currently being developed in an attempt to reduce the amount of fluid required.
Infrared heating
Direct infrared heating has also been developed as an aircraft deicing technique. Radiant heat transfer is substantially faster than the convection and conduction relied on by deicing fluids, whose effectiveness is reduced by the cooling effect of the air on the sprayed fluid.
One infrared deicing system requires that the heating process take place inside a specially-constructed hangar. This system has had limited interest among airport operators, due to the space and related logistical requirements for the hangar. In the United States, this type of infrared deicing system has been used, on a limited basis, at two large hub airports and one small commercial airport.
Another infrared system uses mobile, truck-mounted heating units that do not require the use of hangars. The manufacturer claims that the system can be used for both fixed wing aircraft and helicopters, although it has not cited any instances of its use on commercial aircraft.
Mechanical
Mechanical deicing using tools such as brooms, scrapers, ropes and mops can be used to minimize the amount of fluid- or heat-based deicing required. However, care must be taken to avoid damaging surfaces, antennas, pitot tubes and other components. Because even a thin layer of frost can severely impair flight performance, mechanical methods do not usually suffice on their own. In extremely cold conditions, however, spray deicing may be impractical, leaving mechanical deicing as the only option.
Hangar
Frozen contaminants on aircraft surfaces will eventually melt if the aircraft is placed in a warm hangar, but depending on the circumstances, frost or ice could form on surfaces once the aircraft is removed from the hangar and necessitate other types of deicing. In particular the difference in temperature of the fuel in wing tanks and the ambient air can cause frost to form.
Ice shedding
Typically, fan-jet engines cannot be deiced with glycol-based fluids, as doing so could damage the engine itself or its associated bleed-air systems. Instead, most aircraft manufacturers define an engine "ice shedding" procedure to be performed before takeoff, which involves spinning the engine up to a certain RPM for a specified period of time.
Equipment
Commercial airports located in climates conducive to ground icing often have very elaborate deicing processes and equipment.
Typically, deicing fluids are applied using a specialized vehicle similar to a "cherry picker" aerial work platform. These vehicles include tanks for fluids, a means to heat those fluids, and a system to deliver the heated fluids at high pressure.
SAE International publishes standards and requirements for deicing vehicles, including SAE ARP1971 (Aircraft Deicing Vehicle – Self-Propelled) and SAE ARP4806 (Deicing/Anti-Icing Self-Propelled Vehicle Functional Requirements).
Aircraft may be deiced in a hangar, at the arrival/departure gate, or on an airport apron dedicated to deicing. The advantage to the latter is that it facilitates collection of deicing fluid runoff for recycling.
Deicing can use a large quantity of fluids. Airports must have the appropriate storage and transportation facilities for these fluids.
Environmental impacts and mitigation
Water pollution impacts
Ethylene glycol and propylene glycol exert high levels of biochemical oxygen demand during degradation in surface waters. This process can adversely affect aquatic life by consuming oxygen needed by aquatic organisms for survival. Large quantities of dissolved oxygen in the water column are consumed when microbial populations decompose propylene glycol.
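As a rough illustration of the scale of this oxygen demand (the reaction stoichiometry and molar masses are standard values; the spill size and dissolved-oxygen figure are hypothetical round numbers, not data from the source), complete oxidation of propylene glycol consumes about 1.7 g of oxygen per gram of glycol:

```python
# Theoretical oxygen demand (ThOD) of propylene glycol, C3H8O2:
#   C3H8O2 + 4 O2 -> 3 CO2 + 4 H2O
M_GLYCOL = 76.09                 # g/mol, propylene glycol
M_O2 = 32.00                     # g/mol, molecular oxygen
thod = 4 * M_O2 / M_GLYCOL       # ~1.68 g O2 consumed per g of glycol

spilled_glycol_kg = 100.0        # hypothetical untreated runoff
o2_consumed_kg = spilled_glycol_kg * thod

o2_saturation_kg_per_m3 = 0.010  # ~10 mg dissolved O2 per litre of cool surface water
water_deoxygenated_m3 = o2_consumed_kg / o2_saturation_kg_per_m3

print(f"ThOD: {thod:.2f} g O2 per g glycol")
print(f"A {spilled_glycol_kg:.0f} kg spill can consume the dissolved O2 in ~{water_deoxygenated_m3:,.0f} m^3 of water")
```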
Sufficient dissolved oxygen levels in surface waters are critical for the survival of fish, macroinvertebrates and other aquatic organisms. If oxygen concentrations drop below a minimum level, organisms migrate, where possible, to areas with higher oxygen levels or eventually die. This effect can drastically reduce the amount of usable aquatic habitat. Reductions in dissolved oxygen can also reduce or eliminate bottom-feeder populations, create conditions that favor a change in a community's species profile, or alter critical food-web interactions.
Mitigation
Aircraft deicing can use a considerable amount of deicing fluids, generally hundreds of gallons per aircraft. Some airports recycle used deicing fluid, separating water and solid contaminants, enabling reuse of the fluid in other applications. Other airports have an on-site wastewater treatment facility, or send collected fluid to a municipal sewage treatment plant or a commercial wastewater treatment facility.
See also
Airliner accidents and incidents caused by ice
References
External links
Aircraft operations
Aviation safety
Weather hazards to aircraft
Transport safety
Ice in transportation | Ground deicing of aircraft | Physics | 2,099 |
75,446,479 | https://en.wikipedia.org/wiki/Collar%20neighbourhood | In topology, a branch of mathematics, a collar neighbourhood of a manifold with boundary is a neighbourhood of its boundary that has the same structure as .
Formally if is a differentiable manifold with boundary, is a collar neighbourhood of whenever there is a diffeomorphism such that for every , .
Every differentiable manifold has a collar neighbourhood.
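As a concrete illustration of the definition (this example is not from the source), the boundary circle of the closed unit disk has an explicit collar:

```latex
% Example (illustrative): a collar neighbourhood of the boundary of the closed unit disk.
% Let M = D^2 = \{ x \in \mathbb{R}^2 : |x| \le 1 \}, so the boundary is the unit circle S^1.
\[
  U = \{\, x \in D^2 : \tfrac{1}{2} < |x| \le 1 \,\}, \qquad
  \varphi : S^1 \times [0,1) \to U, \qquad
  \varphi(u, t) = \bigl(1 - \tfrac{t}{2}\bigr)\, u .
\]
% \varphi is a diffeomorphism onto U (inverse x \mapsto (x/|x|,\, 2(1-|x|))) and
% \varphi(u, 0) = u for every u \in S^1, so U is a collar neighbourhood of the boundary circle.
```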
References
Differential topology
Manifolds | Collar neighbourhood | Mathematics | 75 |
63,970,382 | https://en.wikipedia.org/wiki/T%C3%BCrkan%20Halilo%C4%9Flu | Türkan Haliloğlu is a Turkish biochemist researching biopolymers, computational structural biology, protein dynamics, binding and folding of proteins, and protein interactions. She is a professor in the department of chemical engineering and director of the polymer research center at the Boğaziçi University.
Education
Haliloğlu earned a BS (1987), MS (1989) and PhD (1992) in chemical engineering from Boğaziçi University. From 1992 to 1993, she was a postdoctoral researcher at the University of Akron's Institute of Polymer Science.
Career and research
Haliloğlu is a professor in the department of chemical engineering and director of the polymer research center at the Boğaziçi University. She researches biopolymers, computational structural biology, protein dynamics, binding and folding of proteins, and protein interactions.
Awards and honors
In 2012, Haliloğlu became a member of the Turkish Academy of Sciences. In 2018, the NATO Deputy Secretary General, Rose Gottemoeller, presented Haliloğlu with a partnership prize from the NATO Science for Peace and Security Programme for her molecular research on bacteria used in biological weapons.
References
External links
Year of birth missing (living people)
Place of birth missing (living people)
Living people
Members of the Turkish Academy of Sciences
Boğaziçi University alumni
Academic staff of Boğaziçi University
Turkish women chemists
Turkish chemists
Turkish biochemists
20th-century biochemists
21st-century biochemists
Women biochemists
20th-century biologists
20th-century women scientists
21st-century women scientists
Polymer scientists and engineers
Computational biologists
Women computational biologists | Türkan Haliloğlu | Chemistry,Materials_science | 320 |