In organic chemistry, Keller's reagent is a mixture of anhydrous (glacial) acetic acid, concentrated sulfuric acid, and small amounts of ferric chloride, used to detect alkaloids. The reagent produces reaction products with a wide range of colors, allowing many kinds of alkaloids to be distinguished. [1][2][3] Cohn describes its use to detect the principal components of digitalis (which are not necessarily alkaloids). [4] The reaction with this reagent is also known as the Keller–Kiliani reaction, after C. C. Keller and H. Kiliani, who both used it to study digitalis in the late 19th century. [5][6] It can be used for the quantitative analysis of digitoxin. [7] Another way of visualizing the Keller–Kiliani reaction is to treat the test solution with ferric chloride-containing glacial acetic acid and then add concentrated sulfuric acid, which sinks to the bottom (as in the brown ring test for nitrates). A brown ring at the interface indicates the presence of cardenolides. [8]
https://en.wikipedia.org/wiki/Keller's_reagent_(organic)
Kelling's test is a chemical test for detecting the presence of lactic acid in gastric juice. Two drops of iron(III) chloride solution are added to a test tube of distilled water. After mixing, the solution is divided into two test tubes. One millilitre of gastric juice is added to the first tube and the same volume of distilled water to the second, which acts as a control. The tube with the gastric juice turns yellow in the presence of lactic acid owing to the formation of ferric lactate. [1][2][3][4]
https://en.wikipedia.org/wiki/Kelling's_test
Kellogg's theorem is a pair of related results in the mathematical study of the regularity of harmonic functions on sufficiently smooth domains, due to Oliver Dimon Kellogg. The first version states that, for k ≥ 2, if the domain's boundary is of class C^k and the k-th derivatives of the boundary are Dini continuous, then the harmonic functions are uniformly C^k as well. The second, more common version states that for domains of class C^{k,α}, if the boundary data is of class C^{k,α}, then so is the harmonic function itself. Kellogg's method of proof analyzes the representation of harmonic functions provided by the Poisson kernel, applied to an interior tangent sphere. In modern presentations, Kellogg's theorem is usually covered as a special case of the boundary Schauder estimates for elliptic partial differential equations.
https://en.wikipedia.org/wiki/Kellogg's_theorem
Kelly Chibale PhD, MASSAf, FAAS, Fellow of UCT, FRSSAf, FRSC (born 1964) is a professor of organic chemistry at the University of Cape Town, and the founder and director of the H3D research centre and H3D Foundation NPC. In 2018 he was recognised as one of Fortune magazine's top 50 World's Greatest Leaders. [1][2] His research focuses on drug discovery and the development of tools and models to improve treatment outcomes in people of African descent or heritage. Chibale grew up without electricity or running water in Muwele Village, Chief Chiundaponde, Mpika district, Zambia. [3][4] His parents are Elizabeth Malekano Chanda and Harrison Chibale. [4] He studied chemistry at the University of Zambia, graduating in 1987, [5] and then worked at Kafironda Explosives in Mufulira. [4] As there were no opportunities for graduate study in Zambia, he moved to the University of Cambridge for his PhD, working in Stuart Warren's group on the synthetic organic chemistry of optically active molecules, [6] funded by a Cambridge Livingstone Trust scholarship. [6] Following his PhD, Chibale joined the University of Liverpool as a Sir William Ramsay British Postdoctoral Research Fellow, where he developed routes to optically active alcohols using lanthanides. In 1994 he joined the Scripps Research Institute, creating complex natural and designed molecules from organic building blocks, and began to explore angiogenesis inhibitors, which can be used to stop cancer cells developing new blood vessels. Inspired by medicinal chemistry, Chibale returned to Africa in 1996. In 2002 he joined the University of California, San Francisco as a Sandler Foundation Sabbatical Fellow. He was promoted to the rank of Professor in 2007 and made a Life Fellow of the University of Cape Town in 2009. His group currently studies treatments for malaria, tuberculosis, and antibiotic-resistant microbial diseases.
[7] He set up collaborations and exchange programs for South African students to learn how to translate basic science into potential products. [8] He was elected a Fellow of the Royal Society of South Africa in 2009. [9] In 2008 he took a sabbatical, working as a Fulbright scholar at the University of Pennsylvania and Pfizer. [6] In 2010 he founded H3D, the first drug discovery centre of its kind in Africa, at the University of Cape Town. [1][10] The research program received significant media attention and has been supported by Bill Gates. [3] In 2012 Chibale's group discovered MMV390048, an aminopyridine compound that can be used as a single-dose treatment for malaria. [11][12] It was the first antimalarial medicine to enter phase 1 human studies in Africa. [13] In 2016 the group discovered another antimalarial compound, UCT943. [14][15][13] He has written for The Conversation about how Africa's medicinal drug research can pave the way for health innovation on the continent. [16] Today he holds the Neville Isdell Chair in African-centric Drug Discovery and Development at the University of Cape Town. [17][18] Over the years H3D has partnered with the South African government and innovative pharmaceutical companies to build Africa's capacity for research. [3] In 2016 the Royal Society of Chemistry recognised him as one of their 175 Faces of Chemistry, [18] and he was elected a Fellow of the Royal Society of Chemistry in 2014. [19] Chibale has received many other notable awards and honours.
https://en.wikipedia.org/wiki/Kelly_Chibale
In probability theory, the Kelly criterion (or Kelly strategy or Kelly bet) is a formula for sizing a sequence of bets by maximizing the long-term expected value of the logarithm of wealth, which is equivalent to maximizing the long-term expected geometric growth rate. John Larry Kelly Jr., a researcher at Bell Labs, described the criterion in 1956. [1] The practical use of the formula has been demonstrated for gambling, [2][3] and the same idea was used to explain diversification in investment management. [4] In the 2000s, Kelly-style analysis became a part of mainstream investment theory [5] and the claim has been made that well-known successful investors including Warren Buffett [6] and Bill Gross [7] use Kelly methods. [8] See also intertemporal portfolio choice. It is also the standard replacement of statistical power in anytime-valid statistical tests and confidence intervals, based on e-values and e-processes. In a system where the return on a bet is binary, so that an interested party either wins or loses a fixed percentage of their bet, the expected growth rate yields a specific solution for the optimal betting percentage. Where losing the bet involves losing the entire wager, the Kelly bet is f* = p − q/b, where f* is the fraction of the current bankroll to wager, p is the probability of a win, q = 1 − p is the probability of a loss, and b is the proportion of the bet gained with a win (the net odds). As an example, if a gamble has a 60% chance of winning (p = 0.6, q = 0.4), and the gambler receives 1-to-1 odds on a winning bet (b = 1), then to maximize the long-run growth rate of the bankroll, the gambler should bet 20% of the bankroll at each opportunity (f* = 0.6 − 0.4/1 = 0.2). If the gambler has zero edge (i.e., if b = q/p), then the criterion recommends betting nothing. If the edge is negative (b < q/p), the formula gives a negative result, indicating that the gambler should take the other side of the bet.
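The binary Kelly bet, f* = p − q/b, as in the worked example above, can be checked with a few lines of Python; the function below is a direct transcription of the formula.

```python
def kelly_fraction(p: float, b: float) -> float:
    """Kelly bet fraction for a binary bet: win probability p,
    net odds b (gain b per unit staked on a win, lose the whole stake on a loss)."""
    q = 1.0 - p
    return p - q / b

# The example from the text: p = 0.6, even odds (b = 1) -> bet 20% of bankroll.
print(kelly_fraction(0.6, 1.0))   # ~0.2

# Zero edge: b = q/p -> the criterion recommends betting nothing.
print(kelly_fraction(0.6, 0.4 / 0.6))   # ~0.0
```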
A more general form of the Kelly formula allows for partial losses, which is relevant for investments: [9]: 7 f* = p/a − q/b, where a is the fraction of the wager lost in a losing outcome and b is the fraction gained in a winning outcome. Note that the Kelly criterion is valid only for fully known outcome probabilities, which is almost never the case with investments. In addition, risk-averse strategies invest less than the full Kelly fraction. The general form can be rewritten as f* = (p/a)(1 − 1/(WLP · WLR)), where WLP = p/q is the win-loss probability ratio and WLR = b/a is the win-loss size ratio. At least one of the factors WLP or WLR needs to be larger than 1 to have an edge (so that f* > 0). It is even possible for the win-loss probability ratio to be unfavorable (WLP < 1) while one still has an edge, as long as WLP · WLR > 1. The Kelly formula can easily yield a fraction higher than 1, for example when the losing size a is much smaller than 1 (see the expression above in terms of WLP and WLR). This is somewhat counterintuitive: the Kelly formula compensates for a small losing size with a larger bet. However, in most real situations there is high uncertainty about all parameters entering the Kelly formula, and when the computed fraction exceeds 1, acting on it requires leverage to purchase additional securities on margin. In one study, each participant was given $25 and asked to place even-money bets on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. The behavior of the test subjects was far from optimal: 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, and two-thirds gambled on tails at some stage in the experiment.
[10][11] Using the Kelly criterion, and based on the odds in the experiment (ignoring the cap of $250 and the finite duration of the test), the right approach would be to bet 20% of one's bankroll on each toss of the coin, which works out to a 2.034% average gain each round. This is a geometric mean, not the arithmetic rate of 4% (r = 0.2 × (0.6 − 0.4) = 0.04). The theoretical expected wealth after 300 rounds works out to $10,505 (= 25 × 1.02034^300) if it were not capped. In this particular game, because of the cap, a strategy of betting only 12% of the pot on each toss would have had even better results (a 95% probability of reaching the cap and an average payout of $242.03). Heuristic proofs of the Kelly criterion are straightforward. [12] The Kelly criterion maximizes the expected value of the logarithm of wealth (the expected value of a function is the sum, over all possible outcomes, of the probability of each outcome multiplied by the value of the function for that outcome). We start with 1 unit of wealth and bet a fraction f of that wealth on an outcome that occurs with probability p and offers odds of b. With probability p the bet wins and the resulting wealth is 1 + fb. With probability q = 1 − p the bet loses and a fraction a of the wager is forfeited, so the resulting wealth is 1 − fa. Therefore, the geometric growth rate r satisfies 1 + r = (1 + fb)^p (1 − fa)^q. We want to find the maximum of r as a function of f, which involves finding the derivative of the equation. This is more easily accomplished by taking the logarithm of each side first; because the logarithm is monotonic, it does not change the locations of function extrema.
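The coin-flip experiment described above can be reproduced with a seeded Monte Carlo sketch. The $25 stake, $250 cap, 300-round horizon, and 60% heads probability follow the text; the specific averages depend on simulation details, but the ruinous effect of over-betting is clearly visible.

```python
import random

def play(fraction, rounds=300, start=25.0, cap=250.0, p=0.6, rng=None):
    """Simulate repeated even-money bets of a fixed bankroll fraction."""
    rng = rng or random
    wealth = start
    for _ in range(rounds):
        if wealth <= 0.01 or wealth >= cap:
            break  # effectively bust, or prize cap reached
        stake = fraction * wealth
        wealth += stake if rng.random() < p else -stake
    return min(wealth, cap)

rng = random.Random(0)
trials = 2000
for f in (0.1, 0.2, 0.5, 1.0):
    mean = sum(play(f, rng=rng) for _ in range(trials)) / trials
    print(f"f = {f:.1f}: average payout ${mean:.2f}")
```

Betting everything on each toss (f = 1.0) busts on the first tails, so its average payout collapses, while fractions near the Kelly value fare far better.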
The resulting equation is E = p log(1 + fb) + q log(1 − fa), with E denoting the logarithmic wealth growth rate. To find the value of f for which the growth rate is maximized, denoted f*, we differentiate the above expression and set it equal to zero: pb/(1 + f*b) − qa/(1 − f*a) = 0. Rearranging this equation to solve for f* gives the Kelly criterion: f* = p/a − q/b. Notice that this expression reduces to the simple gambling formula when a = 1 = 100%, i.e. when a loss results in full loss of the wager. If the return rates on an investment or a bet are continuous in nature, the optimal growth rate must take all possible events into account. In mathematical finance, a portfolio is growth optimal if security weights maximize the expected geometric growth rate (which is equivalent to maximizing log wealth). The Kelly criterion shows that for a given volatile security this is satisfied when f* = (μ − r)/σ², where f* is the fraction of available capital that maximizes the expected geometric growth rate, μ is the expected growth rate coefficient, σ² is the variance of the growth rate coefficient, and r is the risk-free rate of return. Note that a symmetric probability density function was assumed here. Computations of growth optimal portfolios can suffer tremendous garbage in, garbage out problems. For example, the cases below take as given the expected return and covariance structure of assets, but these parameters are at best estimates or models that have significant uncertainty. If portfolio weights are largely a function of estimation errors, then ex-post performance of a growth-optimal portfolio may differ dramatically from the ex-ante prediction. Parameter uncertainty and estimation errors are a large topic in portfolio theory.
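The maximization can be verified numerically: scanning the log-growth E(f) = p·ln(1 + fb) + q·ln(1 − fa) over a grid of f values and confirming the maximizer agrees with p/a − q/b.

```python
import math

def log_growth(f, p, b, a=1.0):
    # Expected log wealth growth per bet: gain f*b w.p. p, lose f*a w.p. 1 - p.
    q = 1.0 - p
    return p * math.log(1 + f * b) + q * math.log(1 - f * a)

p, b, a = 0.6, 1.0, 1.0
grid = [i / 1000 for i in range(1000)]          # f in [0, 0.999]
best = max(grid, key=lambda f: log_growth(f, p, b, a))
print(best, p / a - (1 - p) / b)                # both ~0.2
```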
An approach to counteract the unknown risk is to invest less than the full Kelly fraction. Rough estimates are still useful. If we take an excess return of 4% and volatility of 16%, then the yearly Sharpe ratio and Kelly ratio work out to 25% and 150% respectively. The daily Sharpe ratio is about 1.7%, while the Kelly ratio is unchanged at 150%. The Sharpe ratio implies a daily win probability of p = 50% + 1.7%/4, where we assumed that the probability bandwidth is 4σ = 4%. Now we can apply the discrete Kelly formula for f* above with p = 50.425% and a = b = 1%, and we get another rough estimate for the Kelly fraction, f* = 85%. Both of these estimates appear quite reasonable, yet a prudent approach suggests a further multiplication of the Kelly ratio by 50% (i.e. half-Kelly). A detailed paper by Edward O. Thorp and a co-author estimates the Kelly fraction to be 117% for the American stock market SP500 index. [13] Significant downside tail-risk in equity markets is another reason [14] to reduce the Kelly fraction from the naive estimate (for instance, to half-Kelly). A rigorous and general proof can be found in Kelly's original paper [1] or in some of the other references listed below. Some corrections have been published. [15] We give the following non-rigorous argument for the case b = 1 (a 50:50 "even money" bet) to show the general idea and provide some insights. [1] When b = 1, a Kelly bettor bets 2p − 1 times their initial wealth W, as shown above. If they win, they have 2pW after one bet. If they lose, they have 2(1 − p)W. Suppose they make N bets like this, winning K times out of the series of N bets.
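The rough arithmetic above can be reproduced directly. The inputs are the figures quoted in the text; the exact daily Sharpe number depends on the trading-day convention assumed, so only the two Kelly estimates are checked.

```python
mu, sigma = 0.04, 0.16              # yearly excess return and volatility (from the text)
sharpe = mu / sigma                 # 0.25
kelly_continuous = mu / sigma**2    # ~1.56, quoted as roughly 150% in the text

# Discrete cross-check: p = 50% + (daily Sharpe)/4, with a 4-sigma = 4% bandwidth.
p = 0.50425
a = b = 0.01                        # 1% daily win/loss size
f_star = p / a - (1 - p) / b        # discrete Kelly fraction
print(sharpe, kelly_continuous, f_star)   # 0.25, ~1.56, ~0.85
```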
The resulting wealth will be (2p)^K (2(1 − p))^{N−K} W. The ordering of the wins and losses does not affect the resulting wealth. Suppose another bettor bets a different amount, (2p − 1 + Δ)W, for some value of Δ (where Δ may be positive or negative). They will have (2p + Δ)W after a win and [2(1 − p) − Δ]W after a loss. After the same series of wins and losses as the Kelly bettor, they will have (2p + Δ)^K [2(1 − p) − Δ]^{N−K} W. Taking the derivative of this with respect to Δ, the function is maximized when the derivative is equal to zero, which occurs at K[2(1 − p) − Δ] = (N − K)(2p + Δ), which implies Δ = 2(K/N − p); but the proportion of winning bets K/N will eventually converge to p according to the weak law of large numbers. So in the long run, final wealth is maximized by setting Δ to zero, which means following the Kelly strategy. This illustrates that Kelly has both a deterministic and a stochastic component. If one knows K and N and wishes to pick a constant fraction of wealth to bet each time (otherwise one could cheat and, for example, bet zero after the K-th win, knowing that the rest of the bets will lose), one will end up with the most money by betting the fraction 2(K/N) − 1 each time. This is true whether N is small or large. The "long run" part of Kelly is necessary because K is not known in advance, just that as N gets large, K will approach pN. Someone who bets more than Kelly can do better if K > pN for a stretch; someone who bets less than Kelly can do better if K < pN for a stretch; but in the long run, Kelly always wins. The heuristic proof for the general case proceeds as follows.
[citation needed] In a single trial, if one invests the fraction f of their capital, then if the strategy succeeds, the capital at the end of the trial increases by the factor 1 − f + f(1 + b) = 1 + fb, and, likewise, if the strategy fails, the capital decreases by the factor 1 − fa. Thus at the end of N trials (with pN successes and qN failures), the starting capital of $1 yields C_N = (1 + fb)^{pN} (1 − fa)^{qN}. Maximizing log(C_N)/N, and consequently C_N, with respect to f leads to the desired result f* = p/a − q/b. Edward O. Thorp provided a more detailed discussion of this formula for the general case. [9] There, it can be seen that the substitution of p for the ratio of the number of "successes" to the number of trials implies that the number of trials must be very large, since p is defined as the limit of this ratio as the number of trials goes to infinity. In brief, betting f* each time will likely maximize the wealth growth rate only in the case where the number of trials is very large, and p and b are the same for each trial. In practice, this is a matter of playing the same game over and over, where the probability of winning and the payoff odds are always the same. In the heuristic proof above, pN successes and qN failures are highly likely only for very large N. Kelly's criterion may be generalized [16] to gambling on many mutually exclusive outcomes, such as in horse races. Suppose there are several mutually exclusive outcomes.
The probability that the k-th horse wins the race is p_k, the total amount of bets placed on the k-th horse is B_k, and Q_k are the corresponding pay-off odds. D = 1 − tt is the dividend rate, where tt is the track take or tax, and D/β_k is the revenue rate after deduction of the track take when the k-th horse wins. The fraction of the bettor's funds to bet on the k-th horse is f_k. Kelly's criterion for gambling with multiple mutually exclusive outcomes gives an algorithm for finding the optimal set S^o of outcomes on which it is reasonable to bet, and an explicit formula for the optimal fractions f_k^o of the bettor's wealth to be bet on the outcomes included in the optimal set S^o. The algorithm for the optimal set of outcomes consists of four steps. [16] If the optimal set S^o is empty, then do not bet at all. If the set S^o of optimal outcomes is not empty, then the optimal fraction f_k^o to bet on the k-th outcome may be calculated from an explicit formula. One may prove [16] a bound in which the right-hand side is the reserve rate [clarification needed]. Therefore, the requirement er_k = (D/β_k) p_k > R(S) may be interpreted [16] as follows: the k-th outcome is included in the set S^o of optimal outcomes if and only if its expected revenue rate is greater than the reserve rate.
The formula for the optimal fraction f_k^o may be interpreted as the excess of the expected revenue rate of the k-th horse over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins; or as the excess of the probability of the k-th horse winning over the reserve rate, divided by the revenue after deduction of the track take when the k-th horse wins. From the optimal fractions one obtains the binary growth exponent and the corresponding doubling time. This method of selecting optimal bets may also be applied when the probabilities p_k are known only for the several most promising outcomes, while the remaining outcomes have no chance of winning; in that case the known probabilities must sum to one. The second-order Taylor polynomial can be used as a good approximation of the main criterion. Primarily, it is useful for stock investment, where the fraction devoted to investment is based on simple characteristics that can be easily estimated from existing historical data: expected value and variance. This approximation may offer similar results to the original criterion, [17] but in some cases the solution obtained may be infeasible. [18] For single assets (a stock, an index fund, etc.) and a risk-free rate, it is easy to obtain the optimal fraction to invest through geometric Brownian motion. The stochastic differential equation governing the evolution of a lognormally distributed asset S at time t (S_t) is dS_t = μ S_t dt + σ S_t dW_t, whose solution is S_t = S_0 exp((μ − σ²/2)t + σ W_t), where W_t is a Wiener process, and μ (the percentage drift) and σ (the percentage volatility) are constants.
Taking expectations of the logarithm gives E[log S_t] = log S_0 + (μ − σ²/2)t, so the expected log return R_s per unit time is R_s = μ − σ²/2. Consider a portfolio made of an asset S and a bond paying the risk-free rate r, with fraction f invested in S and (1 − f) in the bond. The aforementioned equation for dS_t must be modified by this fraction, i.e. dS_t′ = f dS_t, with the associated solution obtained analogously. For small μ, σ, and W_t, the expected one-period log return can be expanded to first order to yield an approximate growth rate G(f) = r + f(μ − r) − f²σ²/2. Solving max G(f), we obtain f* = (μ − r)/σ²; f* is the fraction that maximizes the expected logarithmic return, and so is the Kelly fraction. Thorp [9] arrived at the same result by a different derivation. Remember that μ is different from the asset log return R_s; conflating the two is a common mistake in websites and articles discussing the Kelly criterion. For multiple assets, consider a market with n correlated stocks S_k with stochastic returns r_k, k = 1, …, n, and a riskless bond with return r. An investor puts a fraction u_k of their capital in S_k and the rest is invested in the bond. Without loss of generality, assume the investor's starting capital is equal to 1.
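The single-asset optimization can be sketched numerically. Under the first-order expansion discussed above, a common way to write the approximate log growth is G(f) = r + f(μ − r) − f²σ²/2, maximized at f* = (μ − r)/σ²; the parameter values below are purely illustrative.

```python
mu, r, sigma = 0.07, 0.02, 0.20    # assumed drift, risk-free rate, volatility

def growth(f):
    # Approximate long-run log growth of a portfolio with fraction f in the asset.
    return r + f * (mu - r) - 0.5 * (f * sigma) ** 2

grid = [i / 100 for i in range(0, 301)]   # allow leveraged fractions up to f = 3
best = max(grid, key=growth)
print(best, (mu - r) / sigma**2)          # both 1.25: a leveraged Kelly fraction
```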
According to the Kelly criterion, one should maximize the expected log of final wealth, E[log(1 + r + Σ_k u_k (r_k − r))]. Expanding this with a Taylor series around u⃗₀ = (0, …, 0) reduces the optimization problem to quadratic programming, and the unconstrained solution is u⃗* = (1 + r) Σ̂⁻¹ r̂, where r̂ and Σ̂ are the vector of means and the matrix of second mixed noncentral moments of the excess returns. There is also a numerical algorithm for fractional Kelly strategies and for the optimal solution under no-leverage and no-short-selling constraints. [19] In a 1738 article, Daniel Bernoulli suggested that, when one has a choice of bets or investments, one should choose that with the highest geometric mean of outcomes. This is mathematically equivalent to the Kelly criterion, although the motivation is different (Bernoulli wanted to resolve the St. Petersburg paradox). An English translation of the Bernoulli article was not published until 1954, [20] but the work was well known among mathematicians and economists. Although the Kelly strategy's promise of doing better than any other strategy in the long run seems compelling, some economists have argued strenuously against it, mainly because an individual's specific investing constraints may override the desire for an optimal growth rate. [8] The conventional alternative is expected utility theory, which says bets should be sized to maximize the expected utility of the outcome. To an individual with logarithmic utility, the Kelly bet maximizes expected utility, so there is no conflict; moreover, Kelly's original paper clearly states the need for a utility function in the case of gambling games which are played finitely many times. [1]
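The unconstrained multi-asset solution described above can be sketched with NumPy. Its closed form is assumed here as u⃗* = (1 + r) Σ̂⁻¹ r̂, which follows from the quadratic (Taylor) approximation; the return and moment figures are made-up illustrative inputs, not estimates of any real market.

```python
import numpy as np

r = 0.02                                   # riskless bond return (assumed)
r_hat = np.array([0.05, 0.03])             # mean excess returns (illustrative)
sigma_hat = np.array([[0.04, 0.01],        # second mixed noncentral moments
                      [0.01, 0.09]])       # of the excess returns (illustrative)

# Unconstrained Kelly weights from the quadratic approximation.
u_star = (1 + r) * np.linalg.solve(sigma_hat, r_hat)
print(u_star)   # fraction of capital in each stock; the rest goes to the bond
```

In practice these inputs are noisy estimates, which is exactly the garbage-in, garbage-out caveat raised earlier; fractional-Kelly scaling of `u_star` is the usual hedge.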
Even Kelly supporters usually argue for fractional Kelly (betting a fixed fraction of the amount recommended by Kelly) for a variety of practical reasons, such as wishing to reduce volatility, or protecting against non-deterministic errors in their advantage (edge) calculations. [21] In colloquial terms, the Kelly criterion requires accurate probability values, which is not always possible for real-world event outcomes. When a gambler overestimates their true probability of winning, the calculated criterion value will diverge from the optimum, increasing the risk of ruin. The Kelly formula can be thought of as 'time diversification': taking equal risk during different sequential time periods (as opposed to taking equal risk in different assets, as in asset diversification). There is clearly a difference between time diversification and asset diversification, a point raised [22] by Paul A. Samuelson. There is also a difference between ensemble-averaging (utility calculation) and time-averaging (Kelly multi-period betting over a single time path in real life). The debate was renewed by evoking ergodicity breaking. [23] Yet the difference between ergodicity breaking and Knightian uncertainty should be recognized. [24]
https://en.wikipedia.org/wiki/Kelly_criterion
A kelly hose (also known as a mud hose or rotary hose) is a flexible, steel-reinforced, high-pressure hose that connects the standpipe to the kelly (or more specifically to the goose-neck on the swivel above the kelly) and allows free vertical movement of the kelly while facilitating the flow of drilling fluid through the system and down the drill string. [1] The kelly hose has an inside diameter of 3 to 5 inches. [2]
https://en.wikipedia.org/wiki/Kelly_hose
In fluid mechanics, Kelvin's circulation theorem states: [1][2] In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time. The theorem is named after William Thomson, 1st Baron Kelvin, who published it in 1869. Stated mathematically: DΓ/Dt = 0, where Γ is the circulation around a material contour C(t) moving with the fluid, as a function of time t. The differential operator D denotes the substantial (material) derivative moving with the fluid particles. [3] Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulation over the two locations of the contour remains the same. The theorem does not hold in the presence of viscous stresses, nonconservative body forces (for example the Coriolis force), or non-barotropic pressure-density relations. The circulation Γ around a closed material contour C(t) is defined by Γ(t) = ∮_{C(t)} u · ds, where u is the velocity vector and ds is an element along the closed contour. The governing equation for an inviscid fluid with a conservative body force is Du/Dt = −(1/ρ)∇p − ∇Φ, where D/Dt is the convective derivative, ρ is the fluid density, p is the pressure, and Φ is the potential for the body force. These are the Euler equations with a body force. The condition of barotropicity implies that the density is a function of the pressure only, i.e. ρ = ρ(p). Taking the convective derivative of the circulation gives DΓ/Dt = ∮ (Du/Dt) · ds + ∮ u · D(ds)/Dt. For the first term, we substitute from the governing equation and then apply Stokes' theorem: ∮ (Du/Dt) · ds = ∫_A (1/ρ²)(∇ρ × ∇p) · n dA = 0. The final equality arises since ∇ρ × ∇p = 0 owing to barotropicity.
We have also used the fact that the curl of any gradient is necessarily zero: ∇ × ∇f = 0 for any function f. For the second term, we note that the evolution of the material line element is given by D(ds)/Dt = (ds · ∇)u. Hence ∮ u · D(ds)/Dt = ∮ u · (ds · ∇)u = ∮ d(|u|²/2) = 0, where the last equality is obtained by applying the gradient theorem. Since both terms are zero, we obtain the result DΓ/Dt = 0. A similar conservation principle can be obtained for a rotating frame, known as the Poincaré–Bjerknes theorem, named after Henri Poincaré and Vilhelm Bjerknes, who derived the invariant in 1893 [4][5] and 1898. [6][7] The theorem applies to a frame rotating at a constant angular velocity Ω, for the modified circulation Γ′(t) = ∮_{C(t)} (u + Ω × r) · ds, where r is the position of the fluid element. From Stokes' theorem, this is Γ′ = ∫_A (∇ × u + 2Ω) · n dA. The vorticity of a velocity field in fluid dynamics is defined by ω = ∇ × u. Then Γ′ = ∫_A (ω + 2Ω) · n dA.
https://en.wikipedia.org/wiki/Kelvin's_circulation_theorem
In fluid mechanics, Kelvin's minimum energy theorem (named after William Thomson, 1st Baron Kelvin, who published it in 1849 [1]) states that the steady irrotational motion of an incompressible fluid occupying a simply connected region has less kinetic energy than any other motion with the same normal component of velocity at the boundary (and, if the domain extends to infinity, with zero value there). [2][3][4][5] Let u be the velocity field of an incompressible irrotational fluid and u₁ be that of any other incompressible fluid motion with the same normal component of velocity, u · n = u₁ · n, at the boundary of the domain, where n is the unit normal to the bounding surface (and, if the domain extends to infinity, u · n = u₁ · n = 0 there). Then the difference between the kinetic energies, T₁ − T = (ρ/2) ∫ (|u₁|² − |u|²) dV, can be rearranged to give T₁ − T = (ρ/2) ∫ |u₁ − u|² dV + ρ ∫ u · (u₁ − u) dV. Since u is irrotational and the domain is simply connected, a single-valued velocity potential exists, i.e., u = ∇φ. Using this, the second integral in the above equation can be written as ρ ∫ ∇φ · (u₁ − u) dV = ρ ∫ [∇ · (φ(u₁ − u)) − φ ∇ · (u₁ − u)] dV. The second term of the integrand is identically zero for an incompressible fluid, i.e., ∇ · u = ∇ · u₁ = 0. Applying the Gauss theorem to the first term, we find ρ ∫ ∇ · (φ(u₁ − u)) dV = ρ ∮ φ (u₁ − u) · n dS = 0, where the surface integral vanishes since the normal components of the velocities are equal there. Thus one concludes T₁ − T = (ρ/2) ∫ |u₁ − u|² dV ≥ 0, or in other words T₁ ≥ T, where equality holds only if u₁ = u, thereby proving the theorem.
https://en.wikipedia.org/wiki/Kelvin's_minimum_energy_theorem
A Kelvin bridge , also called a Kelvin double bridge and in some countries a Thomson bridge , is a measuring instrument used to measure unknown electrical resistors below 1 ohm . It is specifically designed to measure resistors that are constructed as four terminal resistors. Historically, Kelvin bridges were used to measure shunt resistors for ammeters and sub-one-ohm reference resistors in metrology laboratories. In the scientific community, the Kelvin bridge paired with a null detector was used to achieve the highest precision. Resistors above about 1 ohm in value can be measured using a variety of techniques, such as an ohmmeter or by using a Wheatstone bridge . In such resistors, the resistance of the connecting wires or terminals is negligible compared to the resistance value. For resistors of less than an ohm, the resistance of the connecting wires or terminals becomes significant, and conventional measurement techniques will include them in the result. To overcome the problems of these undesirable resistances (known as ' parasitic resistance '), very low value resistors, and particularly precision resistors and high current ammeter shunts, are constructed as four terminal resistors. These resistors have a pair of current terminals and a pair of potential or voltage terminals. In use, a current is passed between the current terminals, but the volt drop across the resistor is measured at the potential terminals. The volt drop measured will be entirely due to the resistor itself, as the parasitic resistance of the leads carrying the current to and from the resistor is not included in the potential circuit. Measuring such resistors requires a bridge circuit designed to work with four terminal resistances. That bridge is the Kelvin bridge. [ 1 ] The operation of the Kelvin bridge is very similar to that of the Wheatstone bridge, but it uses two additional resistors.
Resistors R 1 and R 2 are connected to the outside potential terminals of the four terminal known or standard resistor R s and the unknown resistor R x (identified as P 1 and P ′ 1 in the diagram). The resistors R s , R x , R 1 and R 2 essentially form a Wheatstone bridge. In this arrangement, the parasitic resistances of the upper part of R s and the lower part of R x lie outside of the potential measuring part of the bridge and therefore are not included in the measurement. However, the link between R s and R x ( R par ) is included in the potential measurement part of the circuit and therefore can affect the accuracy of the result. To overcome this, a second pair of resistors R ′ 1 and R ′ 2 form a second pair of arms of the bridge (hence 'double bridge') and are connected to the inner potential terminals of R s and R x (identified as P 2 and P ′ 2 in the diagram). The detector D is connected between the junction of R 1 and R 2 and the junction of R ′ 1 and R ′ 2 . [ 2 ] The balance equation of this bridge is given by the equation In a practical bridge circuit, the ratio of R ′ 1 to R ′ 2 is arranged to be the same as the ratio of R 1 to R 2 (and in most designs, R 1 = R ′ 1 and R 2 = R ′ 2 ). As a result, the last term of the above equation becomes zero and the balance equation becomes Rearranging to make R x the subject gives The parasitic resistance R par has been eliminated from the balance equation, and its presence does not affect the measurement result. This equation is the same as for the functionally equivalent Wheatstone bridge. In practical use, the magnitude of the supply B can be arranged to provide a current through R s and R x at or close to the rated operating current of the lower-rated resistor. This reduces measurement errors. This current does not flow through the measuring bridge itself. This bridge can also be used to measure resistors of the more conventional two terminal design.
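The balance condition can be sketched numerically. The equation used below is the double-bridge balance relation in the form commonly quoted in textbooks (the article's own equation is not reproduced in the text, so this form is an assumption), and the component values are invented for illustration:

```python
def kelvin_bridge_rx(rs, r1, r2, rp1, rp2, rpar):
    """Unknown resistance at balance of a Kelvin double bridge.

    First term: the Wheatstone result Rs * R2/R1. Second term: the
    correction involving the link (yoke) resistance rpar, which vanishes
    when the ratio arms match, r2/r1 == rp2/rp1.
    """
    correction = (rpar * rp2 / (rp1 + rp2 + rpar)) * (r2 / r1 - rp2 / rp1)
    return rs * r2 / r1 + correction

# Matched ratio arms: the parasitic link drops out entirely.
rx = kelvin_bridge_rx(rs=0.001, r1=100.0, r2=50.0,
                      rp1=100.0, rp2=50.0, rpar=0.002)
print(rx)  # 0.0005, i.e. Rs * R2/R1 exactly

# A small mismatch in the inner arms leaves a residual error
# proportional to the link resistance Rpar.
rx_err = kelvin_bridge_rx(rs=0.001, r1=100.0, r2=50.0,
                          rp1=100.0, rp2=50.5, rpar=0.002)
print(rx_err - rx)
```

Because Rpar is itself tiny in a well-built bridge, even an imperfect ratio match leaves only a second-order error, which is the design point made in the text.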
The bridge potential connections are merely connected as close to the resistor terminals as possible. Any measurement will then exclude all circuit resistance not within the two potential connections. The accuracy of measurements made using this bridge is dependent on a number of factors. The accuracy of the standard resistor ( R s ) is of prime importance. Also of importance is how close the ratio of R 1 to R 2 is to the ratio of R ′ 1 to R ′ 2 . As shown above, if the ratio is exactly the same, the error caused by the parasitic resistance ( R par ) is eliminated. In a practical bridge, the aim is to make these ratios as close as possible, but it is not possible to make them exactly the same. If the difference in ratio is small enough, then the last term of the balance equation above becomes negligible. Measurement accuracy is also increased by setting the current flowing through R s and R x to be as large as the rating of those resistors allows. This gives the greatest potential difference between the innermost potential connections ( R 2 and R ′ 2 ) to those resistors and consequently sufficient voltage for the change in R ′ 1 and R ′ 2 to have its greatest effect. Commercial Kelvin bridges initially used galvanometers, later replaced by micro-ammeters; detector sensitivity was the limiting factor for precision as the voltage difference approaches zero. A further improvement in precision was achieved using null detectors with nanovolt sensitivity. Some commercial bridges reach accuracies better than 2% for resistance ranges from 1 microohm to 25 ohms. One such type is illustrated above. Modern digital meters achieve accuracies better than 0.25%. Laboratory bridges are usually constructed with high accuracy variable resistors in the two potential arms of the bridge and achieve accuracies suitable for calibrating standard resistors.
In such an application, the 'standard' resistor ( R s ) will in reality be a sub-standard type (that is, a resistor having an accuracy some 10 times better than the required accuracy of the standard resistor being calibrated). For such use, the error introduced by the mismatch of the ratio in the two potential arms would mean that the presence of the parasitic resistance R par could have a significant impact on the very high accuracy required. To minimise this problem, the current connections to the standard resistor ( R x ), the sub-standard resistor ( R s ) and the connection between them ( R par ) are designed to have as low a resistance as possible, and the connections, both in the resistors and in the bridge, resemble bus bars rather than wires. Some ohmmeters include Kelvin bridges in order to obtain large measurement ranges. Instruments for measuring sub-ohm values are often referred to as low-resistance ohmmeters, milli-ohmmeters, micro-ohmmeters, etc.
https://en.wikipedia.org/wiki/Kelvin_bridge
The Kelvin equation describes the change in vapour pressure due to a curved liquid–vapor interface, such as the surface of a droplet . The vapor pressure at a convex curved surface is higher than that at a flat surface . The Kelvin equation rests on thermodynamic principles alone and does not invoke special properties of materials. It is also used for the determination of the pore size distribution of a porous medium using adsorption porosimetry . The equation is named in honor of William Thomson , also known as Lord Kelvin. The original form of the Kelvin equation, published in 1871, is: [ 1 ] p ( r 1 , r 2 ) = P − γ ρ v a p o r ( ρ l i q u i d − ρ v a p o r ) ( 1 r 1 + 1 r 2 ) , {\displaystyle p(r_{1},r_{2})=P-{\frac {\gamma \,\rho _{\rm {vapor}}}{(\rho _{\rm {liquid}}-\rho _{\rm {vapor}})}}\left({\frac {1}{r_{1}}}+{\frac {1}{r_{2}}}\right),} where: This may be written in the following form, known as the Ostwald–Freundlich equation : ln ⁡ p p s a t = 2 γ V m r R T , {\displaystyle \ln {\frac {p}{p_{\rm {sat}}}}={\frac {2\gamma V_{\text{m}}}{rRT}},} where p {\displaystyle p} is the actual vapour pressure, p s a t {\displaystyle p_{\rm {sat}}} is the saturated vapour pressure when the surface is flat, γ {\displaystyle \gamma } is the liquid/vapor surface tension , V m {\displaystyle V_{\text{m}}} is the molar volume of the liquid, R {\displaystyle R} is the universal gas constant , r {\displaystyle r} is the radius of the droplet, and T {\displaystyle T} is temperature . Equilibrium vapor pressure depends on droplet size. As r {\displaystyle r} increases, p {\displaystyle p} decreases towards p s a t {\displaystyle p_{sat}} , and the droplets grow into bulk liquid. If the vapour is cooled, then T {\displaystyle T} decreases, but so does p s a t {\displaystyle p_{\rm {sat}}} . This means that p / p s a t {\displaystyle p/p_{\rm {sat}}} increases as the liquid is cooled.
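The Ostwald–Freundlich form above is easy to evaluate numerically; a minimal sketch for a water droplet, using typical textbook property values assumed here rather than taken from the article:

```python
import math

def kelvin_ratio(r, gamma=0.072, vm=1.8e-5, T=298.15, R=8.314):
    """p/p_sat for a droplet of radius r (m), from ln(p/p_sat) = 2*gamma*Vm/(r*R*T).

    Defaults approximate water near 25 degC (assumed values):
    surface tension gamma in N/m, molar volume vm in m^3/mol.
    """
    return math.exp(2 * gamma * vm / (r * R * T))

for r in (1e-9, 1e-8, 1e-7, 1e-6):
    print(f"r = {r:.0e} m  ->  p/p_sat = {kelvin_ratio(r):.4f}")
```

The ratio is always above 1 for a convex droplet and approaches 1 as the radius grows, matching the statement that p decreases towards p_sat as r increases.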
γ {\displaystyle \gamma } and V m {\displaystyle V_{\text{m}}} may be treated as approximately fixed, which means that the critical radius r {\displaystyle r} must also decrease. The further a vapour is supercooled, the smaller the critical radius becomes. Ultimately it can become as small as a few molecules, and the liquid undergoes homogeneous nucleation and growth. The change in vapor pressure can be attributed to changes in the Laplace pressure . When the Laplace pressure rises in a droplet, the droplet tends to evaporate more easily. When applying the Kelvin equation, two cases must be distinguished: A drop of liquid in its own vapor will result in a convex liquid surface, and a bubble of vapor in a liquid will result in a concave liquid surface. The form of the Kelvin equation here is not the form in which it appeared in Lord Kelvin's article of 1871. The derivation of the form that appears in this article from Kelvin's original equation was presented by Robert von Helmholtz (son of German physicist Hermann von Helmholtz ) in his dissertation of 1885. [ 2 ] In 2020, researchers found that the equation was accurate down to the 1nm scale. [ 3 ] The formal definition of the Gibbs free energy for a parcel of volume V {\displaystyle V} , pressure P {\displaystyle P} and temperature T {\displaystyle T} is given by: where U {\displaystyle U} is the internal energy and S {\displaystyle S} is the entropy . The differential form of the Gibbs free energy can be given as where μ {\displaystyle \mu } is the chemical potential and n {\displaystyle n} is the number of moles. Suppose we have a substance x {\displaystyle x} which contains no impurities. Let's consider the formation of a single drop of x {\displaystyle x} with radius r {\displaystyle r} containing n x {\displaystyle n_{x}} molecules from its pure vapor. 
The change in the Gibbs free energy due to this process is where G d {\displaystyle G_{d}} and G v {\displaystyle G_{v}} are the Gibbs energies of the drop and vapor respectively. Suppose we have N i {\displaystyle N_{i}} molecules in the vapor phase initially. After the formation of the drop, this number decreases to N f {\displaystyle N_{f}} , where Let g v {\displaystyle g_{v}} and g l {\displaystyle g_{l}} represent the Gibbs free energy of a molecule in the vapor and liquid phase respectively. The change in the Gibbs free energy is then: where 4 π r 2 σ {\displaystyle 4\pi r^{2}\sigma } is the Gibbs free energy associated with an interface with radius of curvature r {\displaystyle r} and surface tension σ {\displaystyle \sigma } . The equation can be rearranged to give Let v l {\displaystyle v_{l}} and v v {\displaystyle v_{v}} be the volume occupied by one molecule in the liquid phase and vapor phase respectively. If the drop is considered to be spherical, then The number of molecules in the drop is then given by The change in Gibbs energy is then The differential form of the Gibbs free energy of one molecule at constant temperature and constant number of molecules can be given by: If we assume that v v ≫ v l {\displaystyle v_{v}\gg v_{l}} then The vapor phase is also assumed to behave like an ideal gas, so where k {\displaystyle k} is the Boltzmann constant . Thus, the change in the Gibbs free energy for one molecule is where P s a t {\displaystyle P_{sat}} is the saturated vapor pressure of x {\displaystyle x} over a flat surface and P {\displaystyle P} is the actual vapor pressure over the liquid. Solving the integral, we have The change in the Gibbs free energy following the formation of the drop is then The derivative of this equation with respect to r {\displaystyle r} is The maximum value occurs when the derivative equals zero. 
The radius corresponding to this value is: Rearranging this equation gives the Ostwald–Freundlich form of the Kelvin equation: An equation similar to that of Kelvin can be derived for the solubility of small particles or droplets in a liquid, by means of the connection between vapour pressure and solubility, thus the Kelvin equation also applies to solids, to slightly soluble liquids, and their solutions if the partial pressure p {\displaystyle p} is replaced by the solubility of the solid ( c {\displaystyle c} ) (or a second liquid) at the given radius, r {\displaystyle r} , and p s a t {\displaystyle p_{\rm {sat}}} by the solubility at a plane surface ( c s a t {\displaystyle c_{\rm {sat}}} ). Hence small particles (like small droplets) are more soluble than larger ones. The equation would then be given by: These results led to the problem of how new phases can ever arise from old ones. For example, if a container filled with water vapour at slightly below the saturation pressure is suddenly cooled, perhaps by adiabatic expansion, as in a cloud chamber , the vapour may become supersaturated with respect to liquid water. It is then in a metastable state, and we may expect condensation to take place. A reasonable molecular model of condensation would seem to be that two or three molecules of water vapour come together to form a tiny droplet, and that this nucleus of condensation then grows by accretion, as additional vapour molecules happen to hit it. The Kelvin equation, however, indicates that a tiny droplet like this nucleus, being only a few ångströms in diameter, would have a vapour pressure many times that of the bulk liquid. As far as tiny nuclei are concerned, the vapour would not be supersaturated at all. Such nuclei should immediately re-evaporate, and the emergence of a new phase at the equilibrium pressure, or even moderately above it should be impossible. 
Hence, the over-saturation must be several times higher than the normal saturation value for spontaneous nucleation to occur. There are two ways of resolving this paradox. In the first place, we know the statistical basis of the second law of thermodynamics . In any system at equilibrium, there are always fluctuations around the equilibrium condition, and if the system contains few molecules, these fluctuations may be relatively large. There is always a chance that an appropriate fluctuation may lead to the formation of a nucleus of a new phase, even though the tiny nucleus could be called thermodynamically unstable. The chance of a fluctuation is e −Δ S / k , where Δ S is the deviation of the entropy from the equilibrium value. [ 4 ] It is unlikely, however, that new phases often arise by this fluctuation mechanism and the resultant spontaneous nucleation. Calculations show that the chance, e −Δ S / k , is usually too small. It is more likely that tiny dust particles act as nuclei in supersaturated vapours or solutions. In the cloud chamber, it is the clusters of ions caused by a passing high-energy particle that act as nucleation centers. Actually, vapours seem to be much less finicky than solutions about the sort of nuclei required. This is because a liquid will condense on almost any surface, but crystallization requires the presence of crystal faces of the proper kind. For a sessile drop residing on a solid surface, the Kelvin equation is modified near the contact line, due to intermolecular interactions between the liquid drop and the solid surface. This extended Kelvin equation is given by [ 5 ] where Π {\displaystyle \Pi } is the disjoining pressure that accounts for the intermolecular interactions between the sessile drop and the solid and ( 2 γ / r ) {\displaystyle \left(2\gamma /r\right)} is the Laplace pressure, accounting for the curvature-induced pressure inside the liquid drop.
When the interactions are attractive in nature, the disjoining pressure, Π {\displaystyle \Pi } is negative. Near the contact line, the disjoining pressure dominates over the Laplace pressure, implying that the solubility, c {\displaystyle c} is less than c s a t {\displaystyle c_{\rm {sat}}} . This implies that a new phase can spontaneously grow on a solid surface, even under saturation conditions. [ 6 ]
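Inverting the Ostwald–Freundlich relation gives the critical radius at a given supersaturation, which quantifies the nucleation barrier discussed above; a sketch using assumed water properties (typical textbook values, not from the article):

```python
import math

def critical_radius(S, gamma=0.072, vm=1.8e-5, T=298.15, R=8.314):
    """Critical droplet radius r* = 2*gamma*Vm / (R*T*ln S) for a
    supersaturation S = p/p_sat > 1. Defaults approximate water near
    25 degC (assumed values). Droplets smaller than r* re-evaporate;
    larger ones grow."""
    if S <= 1:
        raise ValueError("supersaturation S must exceed 1")
    return 2 * gamma * vm / (R * T * math.log(S))

for S in (1.01, 1.1, 2.0, 4.0):
    print(f"S = {S}: r* = {critical_radius(S) * 1e9:.2f} nm")
```

Near saturation the critical radius is enormous on a molecular scale (hundreds of nanometres at S = 1.01), which is why a slightly supersaturated vapour cannot nucleate spontaneously and dust particles or ion clusters are needed.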
https://en.wikipedia.org/wiki/Kelvin_equation
Kelvin probe force microscopy ( KPFM ), also known as surface potential microscopy , is a noncontact variant of atomic force microscopy (AFM). [ 1 ] [ 2 ] [ 3 ] By raster scanning in the x,y plane, the work function of the sample can be locally mapped for correlation with sample features. When there is little or no magnification, this approach can be described as using a scanning Kelvin probe ( SKP ). These techniques are predominantly used to study corrosion and coatings . With KPFM, the work function of surfaces can be observed at atomic or molecular scales. The work function relates to many surface phenomena, including catalytic activity , reconstruction of surfaces, doping and band-bending of semiconductors , charge trapping in dielectrics and corrosion . The map of the work function produced by KPFM gives information about the composition and electronic state of the local structures on the surface of a solid. The SKP technique is based on parallel plate capacitor experiments performed by Lord Kelvin in 1898. [ 4 ] In the 1930s William Zisman built upon Lord Kelvin's experiments to develop a technique to measure contact potential differences of dissimilar metals . [ 5 ] In SKP the probe and sample are held parallel to each other and electrically connected to form a parallel plate capacitor. The probe is selected to be of a different material from the sample, so each component initially has a distinct Fermi level . When an electrical connection is made between the probe and the sample, electrons can flow from the material with the higher Fermi level to the one with the lower. This electron flow causes the equilibration of the probe and sample Fermi levels. Furthermore, a surface charge develops on the probe and the sample, with a related potential difference known as the contact potential (V c ). In SKP the probe is vibrated along an axis perpendicular to the plane of the sample.
[ 6 ] This vibration causes a change in probe to sample distance, which in turn results in the flow of current, taking the form of an ac sine wave . The resulting ac sine wave is demodulated to a dc signal through the use of a lock-in amplifier . [ 7 ] Typically the user must select the correct reference phase value used by the lock-in amplifier. Once the dc potential has been determined, an external potential, known as the backing potential (V b ), can be applied to null the charge between the probe and the sample. When the charge is nullified, the Fermi level of the sample returns to its original position. This means that V b is equal to -V c , which is the measured work function difference between the SKP probe and the sample. [ 8 ] The cantilever in the AFM is a reference electrode that forms a capacitor with the surface, over which it is scanned laterally at a constant separation. Unlike in normal AFM, the cantilever is not piezoelectrically driven at its mechanical resonance frequency ω 0 ; instead, an alternating current (AC) voltage is applied at this frequency. When there is a direct-current (DC) potential difference between the tip and the surface, the AC+DC voltage offset will cause the cantilever to vibrate. The origin of the force can be understood by considering that the energy of the capacitor formed by the cantilever and the surface is plus terms at DC. Only the cross-term proportional to the V DC ·V AC product is at the resonance frequency ω 0 . The resulting vibration of the cantilever is detected using usual scanned-probe microscopy methods (typically involving a diode laser and a four-quadrant detector). A null circuit is used to drive the DC potential of the tip to a value which minimizes the vibration. A map of this nulling DC potential versus the lateral position coordinate therefore produces an image of the work function of the surface.
A related technique, electrostatic force microscopy (EFM), directly measures the force produced on a charged tip by the electric field emanating from the surface. EFM operates much like magnetic force microscopy in that the frequency shift or amplitude change of the cantilever oscillation is used to detect the electric field. However, EFM is much more sensitive to topographic artifacts than KPFM. Both EFM and KPFM require the use of conductive cantilevers, typically metal-coated silicon or silicon nitride . Another AFM-based technique for the imaging of electrostatic surface potentials, scanning quantum dot microscopy , [ 9 ] quantifies surface potentials based on their ability to gate a tip-attached quantum dot. The quality of an SKP measurement is affected by a number of factors. These include the diameter of the SKP probe, the probe to sample distance, and the material of the SKP probe. The probe diameter is important in the SKP measurement because it affects the overall resolution of the measurement, with smaller probes leading to improved resolution. [ 10 ] [ 11 ] On the other hand, reducing the size of the probe causes an increase in fringing effects, which reduces the sensitivity of the measurement by increasing the contribution of stray capacitances. [ 10 ] The material used in the construction of the SKP probe is important to the quality of the SKP measurement [ 12 ] for a number of reasons. Different materials have different work function values, which will affect the contact potential measured. Different materials have different sensitivity to humidity changes. The material can also affect the resulting lateral resolution of the SKP measurement. In commercial probes tungsten is used, [ 13 ] though probes of platinum , [ 14 ] copper , [ 15 ] gold , [ 16 ] and NiCr have been used.
[ 17 ] The probe to sample distance affects the final SKP measurement, with smaller probe to sample distances improving the lateral resolution [ 11 ] and the signal-to-noise ratio of the measurement. [ 18 ] Furthermore, reducing the SKP probe to sample distance increases the intensity of the measurement, where the intensity of the measurement is proportional to 1/d 2 , where d is the probe to sample distance. [ 19 ] The effects of changing probe to sample distance on the measurement can be counteracted by using SKP in constant distance mode. The Kelvin probe force microscope or Kelvin force microscope (KFM) is based on an AFM set-up and the determination of the work function is based on the measurement of the electrostatic forces between the small AFM tip and the sample. The conducting tip and the sample are characterized by (in general) different work functions, which represent the difference between the Fermi level and the vacuum level for each material. If both elements were brought in contact, a net electric current would flow between them until the Fermi levels were aligned. The difference between the work functions is called the contact potential difference and is denoted generally with V CPD . An electrostatic force exists between tip and sample, because of the electric field between them. For the measurement a voltage is applied between tip and sample, consisting of a DC-bias V DC and an AC-voltage V AC sin(ωt) of frequency ω . Tuning the AC-frequency to the resonant frequency of the AFM cantilever results in an improved sensitivity. The electrostatic force in a capacitor may be found by differentiating the energy function with respect to the separation of the elements and can be written as where C is the capacitance, z is the separation, and V is the voltage, each between tip and surface. 
Substituting the previous formula for voltage (V) shows that the electrostatic force can be split up into three contributions, as the total electrostatic force F acting on the tip then has spectral components at the frequencies ω and 2ω . The DC component, F DC , contributes to the topographical signal, the term F ω at the characteristic frequency ω is used to measure the contact potential and the contribution F 2ω can be used for capacitance microscopy. For contact potential measurements a lock-in amplifier is used to detect the cantilever oscillation at ω . During the scan V DC will be adjusted so that the electrostatic forces between the tip and the sample become zero and thus the response at the frequency ω becomes zero. Since the electrostatic force at ω depends on V DC − V CPD , the value of V DC that minimizes the ω -term corresponds to the contact potential. Absolute values of the sample work function can be obtained if the tip is first calibrated against a reference sample of known work function. [ 20 ] Apart from this, one can use the normal topographic scan methods at the resonance frequency ω independently of the above. Thus, in one scan, the topography and the contact potential of the sample are determined simultaneously. This can be done in (at least) two different ways: 1) The topography is captured in AC mode which means that the cantilever is driven by a piezo at its resonant frequency. Simultaneously the AC voltage for the KPFM measurement is applied at a frequency slightly lower than the resonant frequency of the cantilever. In this measurement mode the topography and the contact potential difference are captured at the same time and this mode is often called single-pass. 2) One line of the topography is captured either in contact or AC mode and is stored internally. 
Then, this line is scanned again, while the cantilever remains on a defined distance to the sample without a mechanically driven oscillation but the AC voltage of the KPFM measurement is applied and the contact potential is captured as explained above. It is important to note that the cantilever tip must not be too close to the sample in order to allow good oscillation with applied AC voltage. Therefore, KPFM can be performed simultaneously during AC topography measurements but not during contact topography measurements. The Volta potential measured by SKP is directly proportional to the corrosion potential of a material, [ 21 ] as such SKP has found widespread use in the study of the fields of corrosion and coatings. In the field of coatings for example, a scratched region of a self-healing shape memory polymer coating containing a heat generating agent on aluminium alloys was measured by SKP. [ 22 ] Initially after the scratch was made the Volta potential was noticeably higher and wider over the scratch than over the rest of the sample, implying this region is more likely to corrode. The Volta potential decreased over subsequent measurements, and eventually the peak over the scratch completely disappeared implying the coating has healed. Because SKP can be used to investigate coatings in a non-destructive way it has also been used to determine coating failure. In a study of polyurethane coatings, it was seen that the work function increases with increasing exposure to high temperature and humidity. [ 23 ] This increase in work function is related to decomposition of the coating likely from hydrolysis of bonds within the coating. Using SKP the corrosion of industrially important alloys has been measured. [ citation needed ] In particular with SKP it is possible to investigate the effects of environmental stimulus on corrosion. For example, the microbially induced corrosion of stainless steel and titanium has been examined. 
[ 24 ] SKP is useful to study this sort of corrosion because it usually occurs locally, therefore global techniques are poorly suited. Surface potential changes related to increased localized corrosion were shown by SKP measurements. Furthermore, it was possible to compare the resulting corrosion from different microbial species. In another example SKP was used to investigate biomedical alloy materials, which can be corroded within the human body. In studies on Ti-15Mo under inflammatory conditions, [ 25 ] SKP measurements showed a lower corrosion resistance at the bottom of a corrosion pit than at the oxide protected surface of the alloy. SKP has also been used to investigate the effects of atmospheric corrosion, for example to investigate copper alloys in marine environment. [ 26 ] In this study Kelvin potentials became more positive, indicating a more positive corrosion potential, with increased exposure time, due to an increase in thickness of corrosion products. As a final example SKP was used to investigate stainless steel under simulated conditions of gas pipeline. [ 27 ] These measurements showed an increase in difference in corrosion potential of cathodic and anodic regions with increased corrosion time, indicating a higher likelihood of corrosion. Furthermore, these SKP measurements provided information about local corrosion, not possible with other techniques. SKP has been used to investigate the surface potential of materials used in solar cells , with the advantage that it is a non-contact, and therefore a non-destructive technique. [ 28 ] A more recent advancement in KPFM is High-Definition Kelvin Force microscopy (HD-KFM), which enables improved spatial resolution and reduced measurement artifacts through a single-pass scanning approach and optimized tip–sample distance control. HD-KFM has been applied to map surface potential variations in nanocomposites and electronic materials with high precision. 
For instance, it has been used to examine the dispersion of carbon nanomaterials in electrically conductive epoxy adhesives, revealing correlations between surface potential distribution and the formation of conductive networks. [ 29 ] It can be used to determine the electron affinity of different materials, thereby enabling analysis of energy level alignment in conduction bands of composite systems. This alignment is closely related to surface photovoltage response and device efficiency. [ 30 ] As a non-contact, non-destructive technique SKP has been used to investigate latent fingerprints on materials of interest for forensic studies. [ 31 ] When fingerprints are left on a metallic surface they leave behind salts which can cause the localized corrosion of the material of interest. This leads to a change in Volta potential of the sample, which is detectable by SKP. SKP is particularly useful for these analyses because it can detect this change in Volta potential even after heating, or coating by, for example, oils. SKP has been used to analyze the corrosion mechanisms of schreibersite -containing meteorites . [ 32 ] [ 33 ] The aim of these studies has been to investigate the role in such meteorites in releasing species utilized in prebiotic chemistry. In the field of biology SKP has been used to investigate the electric fields associated with wounding , [ 34 ] and acupuncture points. [ 35 ] In the field of electronics, KPFM is used to investigate the charge trapping in High-k gate oxides/interfaces of electronic devices. [ 36 ] [ 37 ] [ 38 ]
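The nulling scheme described earlier — the lock-in detects the force component at ω, and V DC is adjusted until that component vanishes at V DC = V CPD — can be illustrated with a small simulation. All values below are invented for illustration:

```python
import numpy as np

def omega_amplitude(v_dc, v_cpd=0.35, v_ac=0.5, omega=2 * np.pi * 70e3):
    """Amplitude of the electrostatic-force component at omega.

    The force goes as (V_dc - V_cpd + V_ac*sin(wt))^2; the cross term
    2*(V_dc - V_cpd)*V_ac*sin(wt) is the only contribution at omega, so
    it nulls exactly when V_dc equals the contact potential difference.
    """
    t = np.linspace(0, 2 * np.pi / omega, 2000, endpoint=False)
    force = (v_dc - v_cpd + v_ac * np.sin(omega * t)) ** 2
    # Lock-in-style demodulation: project the signal onto sin(omega t).
    return abs(2 * np.mean(force * np.sin(omega * t)))

# Sweep the DC bias; the response at omega is nulled at V_dc = V_CPD.
biases = np.linspace(0.0, 1.0, 201)
responses = [omega_amplitude(v) for v in biases]
best = biases[int(np.argmin(responses))]
print(best)  # 0.35, the assumed contact potential difference
```

The DC component and the 2ω component (from sin² averaging) survive the squaring but do not affect the ω projection, mirroring the three-term decomposition given in the text.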
https://en.wikipedia.org/wiki/Kelvin_probe_force_microscope
Waterfowl and boats moving across the surface of water produce a wake pattern, first explained mathematically by Lord Kelvin and known today as the Kelvin wake pattern . [ 1 ] This pattern consists of two wake lines that form the arms of a chevron, V, with the source of the wake at the vertex of the V. For sufficiently slow motion, each wake line is offset from the path of the wake source by around arcsin(1/3) = 19.47° and is made up of feathery wavelets angled at roughly 53° to the path. The inside of the V (of total opening 39° as indicated above) is filled with transverse curved waves, each of which resembles an arc of a circle centered at a point lying on the path at a distance twice that of the arc to the wake source. This part of the pattern is independent of the speed and size of the wake source over a significant range of values. However, at higher speeds (specifically, at large Froude number ) other parts of the pattern come into play. At the tips of the transverse wave arcs their crests turn around and continue inside the V cone and towards the source, forming an overlapping pattern of narrower waves directed outside of the cone. As the source's speed increases, these shorter waves begin to dominate and form a second V within the pattern, which grows narrower as the increased speed of the source emphasizes the shorter waves that are closer to the source's path. [ 2 ] The angles in this pattern are not properties peculiar to water: any isentropic and incompressible liquid with low viscosity will exhibit the same phenomenon. Furthermore, this phenomenon has nothing to do with turbulence. Everything discussed here is based on the linear theory of an ideal fluid, cf. Airy wave theory . Parts of the pattern may be obscured by the effects of propeller wash, by tail eddies behind the boat's stern, and by the boat being a large object and not a point source.
The water need not be stationary, but may be moving as in a large river; the important consideration then is the velocity of the water relative to a boat or other object causing a wake. This pattern follows from the dispersion relation of deep water waves , which is often written as ω = √( g k ), where ω is the angular frequency, g the acceleration of gravity and k the wavenumber; "deep" means that the depth is greater than half of the wavelength. This formula implies that the group velocity of a deep water wave is half of its phase velocity , which, in turn, goes as the square root of the wavelength. Two velocity parameters of importance for the wake pattern are: v , the speed of the wake source relative to the water, and c , the phase speed of a given wave component. As the surface object moves, it continuously generates small disturbances which are the sum of sinusoidal waves with a wide spectrum of wavelengths. Those waves with the longest wavelengths have phase speeds above v and dissipate into the surrounding water and are not easily observed. Other waves with phase speeds at or below v , however, are amplified through constructive interference and form visible shock waves , stationary in position w.r.t. the boat. The angle θ between the phase shock wave front and the path of the object is θ = arcsin( c/v ) . If c > v , no later waves can catch up with earlier waves and no shockwave forms. In deep water, shock waves form even from slow-moving sources, because waves with short enough wavelengths move slower. These shock waves are at sharper angles than one would naively expect, because it is group velocity that dictates the area of constructive interference and, in deep water, the group velocity is half of the phase velocity . All shock waves, each of which by itself would have had an angle between 33° and 72°, are compressed into a narrow band of wake with angles between 15° and 19°, with the strongest constructive interference at the outer edge (angle arcsin(1/3) = 19.47°), placing the two arms of the V in the celebrated Kelvin wake pattern. A concise geometric construction [ 3 ] demonstrates that, strikingly, this group shock angle w.r.t. 
the path of the boat, 19.47°, for any and all of the above θ , is actually independent of v , c , and g ; it merely relies on the fact that the group velocity is half of the phase velocity c . On any planet, slow-swimming objects have "effective Mach number " 3. For slow swimmers, low Froude number, the Lighthill−Whitham geometric argument that the opening of the Kelvin chevron (wedge, V pattern) is universal goes as follows. Consider a boat moving from right to left with constant speed v , emitting waves of varying wavelength, and thus wavenumber k and phase velocity c ( k ) , of interest when < v for a shock wave (cf., e.g., Sonic boom or Cherenkov radiation ). Equivalently, and more intuitively, fix the position of the boat and have the water flow in the opposite direction, like a piling in a river. Focus first on a given k , emitting (phase) wavefronts whose stationary position w.r.t. the boat assemble to the standard shock wedge tangent to all of them, cf. Fig.12.3. As indicated above, the openings of these chevrons vary with wavenumber, the angle θ between the phase shock wavefront and the path of the boat (the water) being θ = arcsin( c / v ) ≡ π /2 − ψ . Evidently, ψ increases with k . However, these phase chevrons are not visible: it is their corresponding group wave manifestations which are observed. Consider one of the phase circles of Fig.12.3 for a particular k , corresponding to the time t in the past, Fig.12.2. Its radius is QS , and the phase chevron side is the tangent PS to it. Evidently, PQ = vt and SQ = ct = vt cos ψ , as the right angle PSQ places S on the semicircle of diameter PQ . Since the group velocity is half the phase velocity for any and all k , however, the visible (group) disturbance point corresponding to S will be T , the midpoint of SQ . Similarly, it lies on a semicircle now centered on R , where, manifestly, RQ = PQ /4, an effective group wavefront emitted from R , with radius v t /4 now. 
Significantly, the resulting wavefront angle with the boat's path, the angle of the tangent from P to this smaller circle, obviously has a sine of TR/PR =1/3, for any and all k , c , ψ , g , etc.: Strikingly, virtually all parameters of the problem have dropped out, except for the deep-water group-to-phase-velocity relation! Note the (highly notional) effective group disturbance emitter moves slower, at 3 v /4. Thus, summing over all relevant k and t s to flesh out an effective Fig.12.3 shock pattern, the universal Kelvin wake pattern arises: the full visible chevron angle is twice that, 2arcsin(1/3) ≈ 39°. The wavefronts of the wavelets in the wake are at 53°, which is roughly the average of 33° and 72°. The wave components with would-be shock wave angles between 73° and 90° dominate the interior of the V. They end up half-way between the point of generation and the current location of the wake source. This explains the curvature of the arcs. Those very short waves with would-be shock wave angles below 33° lack a mechanism to reinforce their amplitudes through constructive interference and are usually seen as small ripples on top of the interior transverse waves. The nature of two types of crests , longitudinal and transverse, is graphically illustrated by the pattern of wavefronts of a moving point source in proper frame . The radii of wavefronts are proportional, due to dispersion, to the square of time (measured from the moment of emission), and the envelope of the wavefronts represents the Kelvin wake pattern.
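The geometric construction can be checked numerically. The following Python sketch (an illustration, not part of the cited derivation) places the boat at P = (0, 0) and the emission point at Q = (1, 0), parametrizes the triangle PQS described above over all phase angles ψ, and maximizes the angle of the visible group point T with the path; the maximum indeed comes out as arcsin(1/3), independent of v, c and g:

```python
import math

# For a phase-chevron angle psi, S lies on the semicircle of diameter PQ
# with SQ = cos(psi); the visible (group) disturbance T is the midpoint
# of SQ, because the group velocity is half the phase velocity.
def group_angle(psi):
    S = (math.sin(psi) ** 2, math.sin(psi) * math.cos(psi))
    T = ((S[0] + 1.0) / 2.0, S[1] / 2.0)
    return math.atan2(T[1], T[0])

# Maximize the angle of PT with the path over all psi in (0, pi/2).
kelvin_angle = max(group_angle(i * (math.pi / 2) / 100_000) for i in range(1, 100_000))
print(math.degrees(kelvin_angle))        # ~19.47 degrees
print(math.degrees(math.asin(1 / 3)))    # ~19.47 degrees
```

The maximum occurs at sin²ψ = 1/3, reproducing the universal Kelvin half-angle of about 19.47°.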
https://en.wikipedia.org/wiki/Kelvin_wake_pattern
The Kelvin–Helmholtz instability (after Lord Kelvin and Hermann von Helmholtz ) is a fluid instability that occurs when there is velocity shear in a single continuous fluid or a velocity difference across the interface between two fluids . Kelvin–Helmholtz instabilities are visible in the atmospheres of planets and moons, such as in cloud formations on Earth or the Red Spot on Jupiter , and in the atmospheres of the Sun and other stars. [ 1 ] Fluid dynamics predicts the onset of instability and transition to turbulent flow within fluids of different densities moving at different speeds. [ 3 ] If surface tension is ignored, two fluids in parallel motion with different velocities and densities yield an interface that is unstable to short-wavelength perturbations for all speeds. However, surface tension is able to stabilize the short-wavelength instability up to a threshold velocity. If the density and velocity vary continuously in space (with the lighter layers uppermost, so that the fluid is RT-stable ), the dynamics of the Kelvin–Helmholtz instability is described by the Taylor–Goldstein equation : {\displaystyle (U-c)^{2}\left({d^{2}{\tilde {\phi }} \over dz^{2}}-k^{2}{\tilde {\phi }}\right)+\left[N^{2}-(U-c){d^{2}U \over dz^{2}}\right]{\tilde {\phi }}=0,} where {\textstyle N={\sqrt {g/L_{\rho }}}} denotes the Brunt–Väisälä frequency , U is the horizontal parallel velocity, k is the wave number, c is the eigenvalue parameter of the problem, and {\displaystyle {\tilde {\phi }}} is the complex amplitude of the stream function . The onset of instability is governed by the Richardson number {\displaystyle \mathrm {Ri} } : typically the layer is unstable for {\displaystyle \mathrm {Ri} <0.25} . These effects are common in cloud layers. The study of this instability is applicable in plasma physics, for example in inertial confinement fusion and the plasma – beryllium interface. 
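The Richardson-number criterion can be illustrated with a minimal Python sketch (the layer values below are invented for the example; Ri < 0.25 is the necessary condition for shear instability of a stratified layer):

```python
def richardson_number(N, dU_dz):
    """Gradient Richardson number Ri = N^2 / (dU/dz)^2, with N the
    Brunt-Vaisala frequency (1/s) and dU/dz the vertical shear (1/s)."""
    return N ** 2 / dU_dz ** 2

# Strong shear across weak stratification: Ri < 0.25, so the layer can
# support Kelvin-Helmholtz billows.
Ri = richardson_number(N=0.01, dU_dz=0.05)
print(Ri, Ri < 0.25)   # Ri ~ 0.04 -> unstable
```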
In situations where there is a state of static stability (where there is a continuous density gradient), the Rayleigh–Taylor instability is often insignificant compared to the magnitude of the Kelvin–Helmholtz instability. Numerically, the Kelvin–Helmholtz instability is simulated in a temporal or a spatial approach. In the temporal approach, the flow is considered in a periodic (cyclic) box "moving" at mean speed (absolute instability). In the spatial approach, simulations mimic a lab experiment with natural inlet and outlet conditions (convective instability). The existence of the Kelvin-Helmholtz instability was first discovered by German physiologist and physicist Hermann von Helmholtz in 1868. Helmholtz identified that "every perfect geometrically sharp edge by which a fluid flows must tear it asunder and establish a surface of separation". [ 5 ] [ 3 ] Following that work, in 1871, collaborator William Thomson (later Lord Kelvin), developed a mathematical solution of linear instability whilst attempting to model the formation of ocean wind waves. [ 6 ] Throughout the early 20th Century, the ideas of Kelvin-Helmholtz instabilities were applied to a range of stratified fluid applications. In the early 1920s, Lewis Fry Richardson developed the concept that such shear instability would only form where shear overcame static stability due to stratification, encapsulated in the Richardson Number . Geophysical observations of the Kelvin-Helmholtz instability were made through the late 1960s/early 1970s, for clouds, [ 7 ] and later the ocean. [ 8 ]
https://en.wikipedia.org/wiki/Kelvin–Helmholtz_instability
The Kelvin–Helmholtz mechanism is an astronomical process that occurs when the surface of a star or a planet cools. The cooling causes the internal pressure to drop, and the star or planet shrinks as a result. This compression, in turn, heats the core of the star/planet. This mechanism is evident on Jupiter and Saturn and on brown dwarfs whose central temperatures are not high enough to undergo hydrogen fusion . It is estimated that Jupiter radiates more energy through this mechanism than it receives from the Sun, but Saturn might not. Jupiter has been estimated to shrink at a rate of approximately 1 mm/year by this process, [ 1 ] corresponding to an internal flux of 7.485 W/m 2 . [ 2 ] The mechanism was originally proposed by Kelvin and Helmholtz in the late nineteenth century to explain the source of energy of the Sun . By the mid-nineteenth century, conservation of energy had been accepted, and one consequence of this law of physics is that the Sun must have some energy source to continue to shine. Because nuclear reactions were unknown, the main candidate for the source of solar energy was gravitational contraction. However, it soon was recognized by Sir Arthur Eddington and others that the total amount of energy available through this mechanism only allowed the Sun to shine for millions of years rather than the billions of years that the geological and biological evidence suggested for the age of the Earth . (Kelvin himself had argued that the Earth was millions, not billions, of years old.) The true source of the Sun's energy remained uncertain until the 1930s, when it was shown by Hans Bethe to be nuclear fusion . It was theorised that the gravitational potential energy from the contraction of the Sun could be its source of power. To calculate the total amount of energy that would be released by the Sun in such a mechanism (assuming uniform density ), it was approximated to a perfect sphere made up of concentric shells. 
The gravitational potential energy could then be found as the integral over all the shells from the centre to its outer radius. Gravitational potential energy from Newtonian mechanics is defined as: [ 3 ] U = − G m 1 m 2 / r , where G is the gravitational constant , and the two masses in this case are that of the thin shells of width dr , and the contained mass within radius r as one integrates between zero and the radius of the total sphere. This gives: [ 3 ] U = − G ∫₀^R ( m ( r )/ r ) 4π r ² ρ d r , where R is the outer radius of the sphere, and m ( r ) is the mass contained within the radius r . Changing m ( r ) into a product of volume and density to satisfy the integral, [ 3 ] U = − G ∫₀^R (4π r ² ρ / r ) (4/3)π r ³ ρ d r = −(16/15) G π² ρ ² R ⁵. Recasting in terms of the mass of the sphere gives the total gravitational potential energy as [ 3 ] U = −(3/5) GM ²/ R . According to the Virial Theorem , the total energy for gravitationally bound systems in equilibrium is one half of the time-averaged potential energy, E = U /2 = −(3/10) GM ²/ R . While uniform density is not correct, one can get a rough order of magnitude estimate of the expected age of our star by inserting known values for the mass and radius of the Sun , and then dividing by the known luminosity of the Sun (note that this will involve another approximation, as the power output of the Sun has not always been constant): [ 3 ] Δ t ≈ | E |/ L ⊙ = 3 GM ⊙²/(10 R ⊙ L ⊙) ≈ 9 × 10⁶ years, where L ⊙ {\displaystyle L_{\odot }} is the luminosity of the Sun. While giving enough power for considerably longer than many other physical methods, such as chemical energy , this value was clearly still not long enough due to geological and biological evidence that the Earth was billions of years old. It was eventually discovered that thermonuclear energy was responsible for the power output and long lifetimes of stars. 
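Plugging in modern solar values reproduces this estimate. A short Python sketch (the constants are standard approximate values, assumed here rather than taken from the article's sources):

```python
# Approximate physical constants (assumptions for this illustration)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m
L_sun = 3.828e26     # solar luminosity, W

# Virial theorem with uniform density: radiated energy |E| = (3/10) G M^2 / R
E = 3 * G * M_sun ** 2 / (10 * R_sun)
t_KH_years = E / L_sun / 3.156e7   # divide by seconds per year
print(f"{t_KH_years:.1e} years")   # on the order of 10^7 years
```

The result, roughly ten million years, is exactly the "millions rather than billions" figure discussed above.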
[ 4 ] The flux of internal heat for Jupiter is given by the derivative with respect to time of the total energy E = −(3/10) GM ²/ R , namely d E /d t = (3 GM ²/(10 R ²)) d R /d t . With a shrinking of −1 mm/yr = −0.001 m/yr = −3.17 × 10⁻¹¹ m/s, one gets d E /d t ≈ −4.7 × 10¹⁷ W; dividing by the whole area of Jupiter, i.e. S = 6.14 × 10¹⁶ m², one gets a specific flux of roughly 7.5 W/m². Of course, one usually calculates this equation in the other direction: the experimental figure of the specific flux of internal heat, 7.485 W/m², was obtained from the direct measurements made on the spot by the Cassini probe during its flyby on 30 December 2000, and from it one gets the amount of the shrinking, ~1 mm/year, a minute figure below the boundaries of practical measurement.
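The arithmetic can be checked directly. A Python sketch with standard (assumed) values for Jupiter's mass and radius; small differences from the quoted 7.485 W/m² reflect rounding in the constants:

```python
import math

# Approximate values (assumptions, not the article's exact inputs)
G = 6.674e-11     # m^3 kg^-1 s^-2
M_J = 1.898e27    # Jupiter mass, kg
R_J = 6.9911e7    # Jupiter mean radius, m
dR_dt = -3.17e-11 # shrinkage, m/s (~1 mm per year)

# E = -(3/10) G M^2 / R  =>  dE/dt = (3 G M^2 / (10 R^2)) dR/dt
power = abs(3 * G * M_J ** 2 / (10 * R_J ** 2) * dR_dt)  # W released
flux = power / (4 * math.pi * R_J ** 2)                  # W/m^2
print(round(flux, 2))   # ~7.6, close to the measured 7.485 W/m^2
```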
https://en.wikipedia.org/wiki/Kelvin–Helmholtz_mechanism
The Kelvin-Varley voltage divider , named after its inventors William Thomson, 1st Baron Kelvin and Cromwell Fleetwood Varley , is an electronic circuit used to generate an output voltage as a precision ratio of an input voltage, with several decades of resolution. In effect, the Kelvin–Varley divider is an electromechanical precision digital-to-analog converter . The circuit is used for precision voltage measurements in calibration and metrology laboratories. It can achieve resolution, accuracy and linearity of 0.1 ppm (1 in 10 million). [ 1 ] The conventional voltage divider ( Kelvin divider ) uses a tapped string of resistors connected in series. The fundamental disadvantage of this architecture is that resolution of 1 part in 1000 would require 1000 precision resistors. To overcome this limitation, the Kelvin–Varley divider uses an iterated scheme whereby cascaded stages consisting of eleven precision resistors provide one decade of resolution per stage. Cascading three stages, for example, therefore permits any division ratio from 0 to 1 in increments of 0.001 to be selected. Each stage of a Kelvin–Varley divider consists of a tapped string of equal value resistors. Let the value of each resistor in the i -th stage be R i Ω . For a decade stage, there will be eleven resistors. Two of those resistors will be bridged by the following stage, and the following stage is designed to have an input impedance of 2 R i . That design choice makes the effective resistance of the bridged portion to be R i . The resulting input impedance of the i -th stage will be 10 R i . In the simple Kelvin-Varley decade design, the resistance of each stage decreases by a factor of 5: R i +1 = R i / 5. The first stage might use 10 kΩ resistors, the second stage 2 kΩ, the third stage 400 Ω, the fourth stage 80 Ω, and the fifth stage 16 Ω. The full precision of the circuit can only be realized with no output current flowing, since the output's effective source resistance is variable. 
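The impedance bookkeeping above can be verified recursively. A Python sketch using the example resistor values from the text (the final stage is taken to be a plain ten-resistor Kelvin divider):

```python
def input_resistance(stage_values):
    """Input resistance of a cascaded Kelvin-Varley divider.  Each stage
    but the last has 11 equal resistors, two of which are bridged by the
    next stage; the final stage is a plain Kelvin divider of 10 resistors."""
    def parallel(a, b):
        return a * b / (a + b)
    r_in = 10 * stage_values[-1]            # final Kelvin-divider stage
    for R in reversed(stage_values[:-1]):
        # next stage (input 2R by design) bridges two resistors (2R),
        # so the bridged portion contributes R, giving 9R + R = 10R
        r_in = 9 * R + parallel(2 * R, r_in)
    return r_in

# Example from the text: 10 kohm, 2 kohm, 400, 80 and 16 ohm stages.
print(input_resistance([10_000, 2_000, 400, 80, 16]))   # 100000.0
```

Each stage indeed presents 10 R_i at its input, so the whole five-stage divider looks like a plain 100 kΩ load.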
Kelvin–Varley dividers are therefore usually applied in conjunction with a null detector to compare their output voltage against a known voltage standard, e.g. a Weston cell (which must also be used without drawing current from it). The final stage of a Kelvin–Varley divider is just a Kelvin divider. For a decade divider, there will be ten equal value resistors. Let the value of each resistor be R n ohms . The input impedance of the entire string will be 10 R n . Alternatively, the last stage can be a two resistor bridge tap. For high precision, it is only necessary to ensure the resistors in any one decade have equal resistances, with the first decade requiring the highest precision of matching. The resistors have to be selected for tight tolerances, and may need to have their resistance values individually trimmed to be equal. This selection or trimming only requires comparing the resistances of two resistors in each trimming step, which is easily accomplished by using a Wheatstone bridge circuit and a sensitive null detector — a galvanometer in the 19th century, or an electronically amplified instrument today. The ratio of resistances from one decade to the next is, surprisingly, not critical: the resistors of stage i + 1 can be chosen slightly higher in value than R i / 5, with a trimming resistor connected in parallel across that stage's input to bring its effective input resistance down to the required 2 R i . In the above example, the second stage might use 3 kΩ resistors instead of 2 kΩ; connecting a (trimmable) resistor of 60 kΩ in parallel with the second stage brings the total input resistance of the second stage down to the 20 kΩ required. Ideally, a resistor has a constant resistance. In practice, the resistance will vary with time and external conditions. Resistance will vary with temperature. Carbon film resistors have temperature coefficients of several hundred parts per million per kelvin . [ 2 ] Some wirewound resistors have coefficients of 10 ppm/K. 
Some off-the-shelf metal foil resistors can be as low as 0.2 ppm/K. [ 3 ] The power dissipated in a resistor is converted to heat. That heat raises the temperature of the device. The heat is conducted or radiated away. A simple linear characterization looks at the average power dissipated in the device (unit watts) and the device's thermal resistance (K/W). A device that dissipates 0.5 W and has a thermal resistance of 12 K/W will have its temperature rise 6 K above the ambient temperature. When Kelvin–Varley dividers are used to test high voltages, self-heating can create a problem. The first divider stage is often made from 10 kΩ resistors, so the divider input resistance is 100 kΩ. Total power dissipation at 1000 V is therefore 10 W. Most of the divider resistors will dissipate 1 W, but the two resistors bridged by the second divider stage will only dissipate 0.25 W each. That means the bridged resistors will have a quarter of the self-heating and a quarter of the temperature rise. For the divider to maintain accuracy, the temperature rise from self-heating must be limited. Getting very low temperature coefficients keeps the effect of temperature variations small. Reducing the thermal resistance of the resistors keeps the temperature rise small. Commercial Kelvin–Varley dividers use wire-wound resistors and immerse them in an oil bath (sometimes the first decade only). The thermoelectric effect causes junctions of different metals to generate voltages if the junctions are at different temperatures (see also thermocouple ). While these unwanted voltages are small, on the order of a few microvolts per °C, they can cause appreciable errors at the high accuracy of which the Kelvin-Varley circuit is capable. 
The errors can be minimized through proper design — by keeping all junctions at the same temperature, and by employing only metal pairings with low thermoelectric coefficients (down to the external connectors and cables used; for example, a standard 4 mm plug/socket combination may have a coefficient of 1 μV/°C compared to only 0.07 μV/°C for a "low thermal EMF" grade plug/socket [ 4 ] ).
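The self-heating figures quoted earlier follow from simple circuit analysis. A Python sketch of the first stage at 1000 V:

```python
V = 1000.0    # test voltage across the divider input
R = 10_000.0  # each first-stage resistor, ohms

# Eleven resistors in series, but the two bridged by the second stage's
# 20 kohm input behave together as a single R, so the string totals 10 R.
I = V / (10 * R)             # 10 mA through the unbridged resistors
p_unbridged = I ** 2 * R     # dissipation in each unbridged resistor
# The bridged pair carries only half the current; the other half flows
# through the second stage.
p_bridged = (I / 2) ** 2 * R
print(p_unbridged, p_bridged)   # 1 W and 0.25 W per resistor
```

The nine unbridged resistors at 1 W each, the bridged pair at 0.25 W each, and the 0.5 W in the second stage sum to the full 10 W input dissipation.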
https://en.wikipedia.org/wiki/Kelvin–Varley_divider
A Kelvin–Voigt material , also called a Voigt material , is the simplest model of a viscoelastic material that shows typical rubbery properties. It is purely elastic on long timescales (slow deformation), but shows additional resistance to fast deformation. The model was developed independently by the British physicist Lord Kelvin [ 1 ] in 1865 and by the German physicist Woldemar Voigt [ 2 ] in 1890. [ 3 ] The Kelvin–Voigt model, also called the Voigt model, is represented by a purely viscous damper and a purely elastic spring connected in parallel as shown in the picture. If, instead, we connect these two elements in series we get a model of a Maxwell material . Since the two components of the model are arranged in parallel, the strains in each component are identical: ε = ε D = ε S , where the subscript D indicates the stress–strain in the damper and the subscript S indicates the stress–strain in the spring. Similarly, the total stress will be the sum of the stress in each component: [ 4 ] σ = σ D + σ S . From these equations we get that in a Kelvin–Voigt material, stress σ , strain ε and their rates of change with respect to time t are governed by equations of the form: σ ( t ) = E ε ( t ) + η d ε ( t )/d t , or, in dot notation: σ = E ε + η ε̇ , where E is a modulus of elasticity and η is the viscosity . The equation can be applied either to the shear stress or normal stress of a material. If we suddenly apply some constant stress σ 0 to a Kelvin–Voigt material, then the deformation would approach the deformation of the pure elastic material σ 0 / E with the difference decaying exponentially: [ 4 ] ε ( t ) = ( σ 0 / E )(1 − e^(− t / τ R )), where t is time and τ R = η / E is the retardation time . If we free the material at time t 1 , then the elastic element would retard the material back until the deformation becomes zero. 
The retardation obeys the following equation: ε ( t > t 1 ) = ε ( t 1 ) e^(−( t − t 1 )/ τ R ). The picture shows the dependence of the dimensionless deformation E ε ( t )/ σ 0 on dimensionless time t / τ R . In the picture the stress on the material is loaded at time t = 0 , and released at the later dimensionless time t 1 ∗ = t 1 / τ R . Since all the deformation is reversible (though not suddenly) the Kelvin–Voigt material is a solid . The Voigt model predicts creep more realistically than the Maxwell model, because in the infinite time limit the strain approaches the constant σ 0 / E , while a Maxwell model predicts a linear relationship between strain and time, which is most often not the case. Although the Kelvin–Voigt model is effective for predicting creep, it is not good at describing the relaxation behavior after the stress load is removed. The complex dynamic modulus of the Kelvin–Voigt material is given by: E ∗ ( ω ) = E + i ωη . Thus, the real and imaginary components of the dynamic modulus are the storage modulus E ′ and the loss modulus E ′′ respectively: E ′ = E and E ′′ = ωη = E ωτ . Note that E ′ is constant, while E ′′ is directly proportional to frequency (with the time-scale τ = η / E as the constant of proportionality). The dimensionless product ωτ equals the loss tangent E ′′ / E ′ .
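The creep response can be reproduced numerically by integrating the constitutive law σ = Eε + ηε̇ under a constant applied stress. A Python sketch with illustrative (invented) parameter values:

```python
import math

# Illustrative parameters (assumptions for this sketch)
E = 1.0e6       # elastic modulus, Pa
eta = 1.0e6     # viscosity, Pa s  -> retardation time tau_R = eta/E = 1 s
sigma0 = 1.0e3  # constant applied stress, Pa

# Forward-Euler integration of the constitutive law:
#   sigma0 = E*eps + eta*deps/dt  =>  deps/dt = (sigma0 - E*eps)/eta
dt, t_end = 1.0e-4, 5.0
eps = 0.0
for _ in range(int(round(t_end / dt))):
    eps += dt * (sigma0 - E * eps) / eta

# Analytic creep solution: eps(t) = (sigma0/E) * (1 - exp(-t/tau_R))
analytic = sigma0 / E * (1.0 - math.exp(-t_end * E / eta))
print(eps, analytic)   # both approach sigma0/E = 1e-3
```

After five retardation times the numerical and analytic strains agree closely, and both approach the purely elastic limit σ₀/E from below.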
https://en.wikipedia.org/wiki/Kelvin–Voigt_material
Kemble's Cascade (designated Kemble 1 ) is an asterism located in the constellation Camelopardalis . It is an apparent straight line of more than 20 colourful 5th to 10th magnitude stars over a distance of approximately 2.5 degrees (five Moon diameters) of the night sky. It appears to "flow" into the compact open cluster NGC 1502 , which can be found at one end. The asterism was named by Walter Scott Houston in honour of Father Lucian Kemble (1922–1999), [ 1 ] a Franciscan friar and amateur astronomer who wrote a letter to Houston about the asterism, describing it as "a beautiful cascade of faint stars tumbling from the northwest down to the open cluster NGC 1502" that he had discovered while sweeping the sky with a pair of 7×35 binoculars . [ 2 ] Houston was so impressed that he wrote an article on the asterism that appeared in his Deep Sky Wonders column in the astronomy magazine Sky & Telescope in 1980, in which he named it Kemble's Cascade . Father Lucian Kemble was also associated with two other asterisms, Kemble 2 (an asterism in the constellation of Draco that resembles a small version of Cassiopeia) and Kemble's Kite (an asterism that resembles a kite with a tail which is also in the constellation of Camelopardalis). In addition, an asteroid, 78431 Kemble , was named in his honour. [ 3 ] This constellation -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kemble's_Cascade
In additive number theory , Kemnitz's conjecture states that every set of lattice points in the plane has a large subset whose centroid is also a lattice point. It was proved independently in the autumn of 2003 by Christian Reiher , then an undergraduate student, and Carlos di Fiore, then a high school student. [ 1 ] The exact formulation of this conjecture is as follows: every set of 4 n − 3 lattice points in the plane contains n points whose centroid is also a lattice point. Kemnitz's conjecture was formulated in 1983 by Arnfried Kemnitz [ 2 ] as a generalization of the Erdős–Ginzburg–Ziv theorem , an analogous one-dimensional result stating that every 2 n − 1 {\displaystyle 2n-1} integers have a subset of size n {\displaystyle n} whose average is an integer. [ 3 ] In 2000, Lajos Rónyai proved a weakened form of Kemnitz's conjecture for sets with 4 n − 2 {\displaystyle 4n-2} lattice points. [ 4 ] Then, in 2003, Christian Reiher proved the full conjecture using the Chevalley–Warning theorem . [ 5 ] This combinatorics -related article is a stub . You can help Wikipedia by expanding it .
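The one-dimensional Erdős–Ginzburg–Ziv theorem is small enough to verify exhaustively for n = 3. A Python sketch (only residues mod n matter, so it suffices to check all sequences of residues):

```python
from itertools import combinations, product

def has_zero_sum_subset(seq, n):
    # Does seq (residues mod n) contain n elements summing to 0 mod n?
    return any(sum(c) % n == 0 for c in combinations(seq, n))

n = 3
# Every sequence of 2n - 1 = 5 residues has such a subset ...
assert all(has_zero_sum_subset(s, n) for s in product(range(n), repeat=2 * n - 1))
# ... while 2n - 2 = 4 elements do not always suffice:
assert not has_zero_sum_subset((0, 0, 1, 1), n)
print("Erdos-Ginzburg-Ziv verified for n = 3")
```

The counterexample (0, 0, 1, 1) shows the bound 2n − 1 is sharp, just as 4n − 3 is sharp in Kemnitz's two-dimensional version.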
https://en.wikipedia.org/wiki/Kemnitz's_conjecture
The Kempner series [ 1 ] [ 2 ] : 31–33 is a modification of the harmonic series , formed by omitting all terms whose denominator expressed in base 10 contains the digit 9. That is, it is the sum Σ′ 1/ n , where the prime indicates that n takes only values whose decimal expansion has no nines. The series was first studied by A. J. Kempner in 1914. [ 3 ] The series is counterintuitive [ 1 ] because, unlike the harmonic series, it converges. Kempner showed the sum of this series is less than 80. Baillie [ 4 ] showed that, rounded to 20 decimals, the actual sum is 22.92067 66192 64150 34816 (sequence A082838 in the OEIS ). Heuristically, this series converges because most large integers contain every digit. For example, a random 100-digit integer is very likely to contain at least one '9', causing it to be excluded from the above sum. Schmelzer and Baillie [ 5 ] found an efficient algorithm for the more general problem of any omitted string of digits. For example, the sum of ⁠ 1 / n ⁠ where n has no instances of "42" is about 228.44630 41592 30813 25415 . Another example: the sum of ⁠ 1 / n ⁠ where n has no occurrence of the digit string "314159" is about 2302582.33386 37826 07892 02376 . (All values are rounded in the last decimal place.) Kempner's proof of convergence [ 3 ] is repeated in some textbooks, for example Hardy and Wright, [ 6 ] : 120 and also appears as an exercise in Apostol. [ 7 ] : 212 The terms of the sum are grouped by the number of digits in the denominator. The number of n -digit positive integers that have no digit equal to '9' is 8 × 9 n −1 because there are 8 choices (1 through 8) for the first digit, and 9 independent choices (0 through 8) for each of the other n −1 digits. Each of these numbers having no '9' is greater than or equal to 10 n −1 , so the reciprocal of each of these numbers is less than or equal to 10 1− n . Therefore, the contribution of this group to the sum of reciprocals is less than 8 × ( ⁠ 9 / 10 ⁠ ) n −1 . 
Therefore the whole sum of reciprocals is at most 8 × (1 + 9/10 + (9/10)² + ⋯) = 80. The same argument works for any omitted non-zero digit. The number of n -digit positive integers that have no '0' is 9 n , so the sum of ⁠ 1 / n ⁠ where n has no digit '0' is at most 9 × (1 + 9/10 + (9/10)² + ⋯) = 90. The series also converges if strings of k digits are omitted, for example if we omit all denominators that have the decimal string 42. This can be proved in almost the same way. [ 5 ] First we observe that we can work with numbers in base 10 k and omit all denominators that have the given string as a "digit". The analogous argument to the base 10 case shows that this series converges. Now switching back to base 10, we see that this series contains all denominators that omit the given string, as well as denominators that include it if it is not on a " k -digit" boundary. For example, if we are omitting 42, the base-100 series would omit 4217 and 1742, but not 1427, so it is larger than the series that omits all 42s. Farhi [ 8 ] considered generalized Kempner series, namely, the sums S ( d , n ) of the reciprocals of the positive integers that have exactly n instances of the digit d where 0 ≤ d ≤ 9 (so that the original Kempner series is S (9, 0)). He showed that for each d the sequence of values S ( d , n ) for n ≥ 1 is decreasing and converges to 10 ln 10. The sequence is not in general decreasing starting with n = 0; for example, for the original Kempner series we have S (9, 0) ≈ 22.921 < 23.026 ≈ 10 ln 10 < S (9, n ) for n ≥ 1. The series converges extremely slowly. Baillie [ 4 ] remarks that after summing 10 24 terms the remainder is still larger than 1. [ 9 ] The upper bound of 80 is very crude. In 1916, Irwin [ 10 ] showed that the value of the Kempner series is between 22.4 and 23.3, since refined to the value above, 22.92067... [ 4 ] Baillie [ 4 ] considered the sum of reciprocals of j -th powers simultaneously for all j . 
He developed a recursion that expresses the j -th power contribution from the ( k + 1)-digit block in terms of all higher power contributions of the k -digit block. Therefore, with a small amount of computation, the original series (which is the value for j = 1, summed over all k ) can be accurately estimated. In 1916, Irwin [ 10 ] also generalized Kempner's results. Let k be a nonnegative integer. Irwin proved that the sum of 1/ n where n has at most k occurrences of any digit d is a convergent series. For example, the sum of 1/ n where n has at most one 9 is a convergent series. Since the sum of 1/ n where n has no 9 is also convergent, the sum of 1/ n where n has exactly one 9 (the difference of the two) is convergent as well. Baillie [ 11 ] showed that the sum of this last series is about 23.04428 70807 47848 31968 .
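The digit-counting at the heart of Kempner's convergence proof is easy to check by brute force for small n. A Python sketch:

```python
def count_no_nine(n):
    # n-digit positive integers containing no digit '9'
    return sum('9' not in str(k) for k in range(10 ** (n - 1), 10 ** n))

# Kempner's count: 8 * 9**(n-1) such integers with n digits
for n in range(1, 5):
    assert count_no_nine(n) == 8 * 9 ** (n - 1)

# Each digit-count group contributes less than 8 * (9/10)**(n-1) to the
# series, and the geometric series gives the crude upper bound of 80.
bound = sum(8 * (9 / 10) ** (n - 1) for n in range(1, 1000))
print(round(bound))   # 80
```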
https://en.wikipedia.org/wiki/Kempner_series
Kenneth Austin Dill (born 1947) is a biophysicist and chemist best known for his work in folding pathways of proteins . He is the director of the Louis and Beatrice Laufer Center for Physical and Quantitative Biology at Stony Brook University . He was elected a member of the National Academy of Sciences in 2008. [ 1 ] He was elected to the American Academy of Arts and Sciences in 2014. He has been a co-editor or editor of the Annual Review of Biophysics since 2013. [ 2 ] Dill was born in Oklahoma City, Oklahoma in 1947. [ 1 ] He attended MIT where he obtained an S.B. and S.M. in Mechanical Engineering (1971). [ 1 ] He obtained his Ph.D. in 1978 at UCSD in the Biology Department working with Bruno H. Zimm , studying the biophysical properties of DNA molecules. Towards the end of his doctoral research, he had become interested in the mechanics of protein folding, specifically the way that the RNA-degrading enzyme ribonuclease folds into its native state. But before tackling the protein folding problem, he moved to Stanford University and worked with Paul J. Flory in chemistry for his post-doctoral training. After this, he went to the University of California, San Francisco , where he popularized the idea that any given protein's surrounding environment places constraints upon it, such that the number of shapes it can assume is dramatically decreased. Dill introduced a toy model consisting of tethered beads on a lattice to mimic a folding protein, with beads of the same type (i.e. hydrophobic) attracting each other. Mathematically, the folding process can be visualized as a funnel, in which the several unfolded and misfolded high energy states of the protein occupy positions nearer the top of the funnel, but once the protein begins to fold, its options narrow down with the decrease in conformational entropy and the chain rapidly collapses into its most stable, low energy state. This state is sometimes identified with the native state of a natural protein. 
In Dill's words, "Like skiers all arriving at the same lodge, the folding protein gets systematically closer to the desired protein shape as it moves down the funnel". [ 1 ]
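The bead-on-lattice picture can be made concrete. The following sketch (an illustration of the general idea, not Dill's actual code; the example conformation and the energy convention of −1 per contact are assumptions) scores a 2D HP-model chain by counting hydrophobic–hydrophobic contacts between beads that touch on the lattice but are not adjacent in the chain:

```python
# Toy 2D HP lattice model: a sequence of 'H' (hydrophobic) and 'P' (polar)
# beads placed on integer lattice sites. Energy = -1 for each H-H contact
# between beads that are lattice neighbours but not chain neighbours.
def hp_energy(sequence, coords):
    assert len(sequence) == len(coords)
    energy = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):  # i+2 skips chain neighbours
            (xi, yi), (xj, yj) = coords[i], coords[j]
            touching = abs(xi - xj) + abs(yi - yj) == 1
            if touching and sequence[i] == 'H' and sequence[j] == 'H':
                energy -= 1  # favourable hydrophobic contact
    return energy

# A 4-bead chain folded into a unit square: beads 0 and 3 (both 'H')
# touch on the lattice, giving one H-H contact and energy -1.
seq = "HPPH"
fold = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

In this toy picture, the "funnel" corresponds to the fact that compact conformations with many H–H contacts have lower energy than extended ones, so the chain is driven toward them as conformational entropy decreases.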
https://en.wikipedia.org/wiki/Ken_A._Dill
In queueing theory , a discipline within the mathematical theory of probability , Kendall's notation (or sometimes Kendall notation ) is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/ c in 1953, [ 1 ] where A denotes the time between arrivals to the queue, S the service time distribution and c the number of service channels open at the node. It has since been extended to A/S/ c / K / N /D, where K is the capacity of the queue, N is the size of the population of jobs to be served, and D is the queueing discipline . [ 2 ] [ 3 ] [ 4 ] When the final three parameters are not specified (e.g. M/M/1 queue ), it is assumed that K = ∞, N = ∞ and D = FIFO . [ 5 ] An M/M/1 queue means that the time between arrivals is Markovian (M), i.e. the inter-arrival time follows an exponential distribution with parameter λ. The second M means that the service time is also Markovian: it follows an exponential distribution with parameter μ. The last parameter is the number of service channels, which is one (1). In this section, we describe the parameters A/S/ c / K / N /D from left to right. A: a code describing the arrival process. S: the distribution of the service time of a customer. c : the number of service channels (or servers); the M/M/1 queue has a single server and the M/M/c queue has c servers. K : the capacity of the queue, or the maximum number of customers allowed in the queue. When the number is at this maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited, or infinite. N : the size of the calling source, i.e. the population from which the customers come. A small population will significantly affect the effective arrival rate , because, as more customers are in the system, there are fewer free customers available to arrive into the system.
If this number is omitted, the population is assumed to be unlimited, or infinite. D: the service discipline, i.e. the priority order in which jobs in the queue, or waiting line, are served.
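The defaults above can be illustrated numerically. The following sketch (not part of the notation itself; it uses the standard steady-state formulas for the M/M/1 queue, which require λ < μ) computes the utilization and mean performance measures:

```python
# Steady-state measures of an M/M/1 queue: Markovian arrivals (rate lam)
# and service (rate mu), one server, infinite capacity and population, FIFO.
def mm1_metrics(lam, mu):
    if lam >= mu:
        raise ValueError("queue is unstable unless arrival rate < service rate")
    rho = lam / mu            # utilization: fraction of time the server is busy
    L = rho / (1 - rho)       # mean number of customers in the system
    W = 1 / (mu - lam)        # mean time a customer spends in the system
    Lq = rho**2 / (1 - rho)   # mean number of customers waiting in the queue
    return rho, L, W, Lq

# Example: arrivals at rate 2 per hour, service at rate 5 per hour.
rho, L, W, Lq = mm1_metrics(2, 5)
```

With these rates the server is busy 40% of the time, and a customer spends on average 1/3 of an hour in the system.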
https://en.wikipedia.org/wiki/Kendall's_notation
Kendomycin is an anticancer macrolide first isolated from Streptomyces violaceoruber . [ 2 ] It has potent activity as an endothelin receptor antagonist and anti- osteoporosis agent. [ 3 ] It also has strong cytotoxicity against various tumor cell lines. [ 2 ] Because of its potent biological activities, kendomycin has attracted interest as a target of total synthesis . The first total synthesis of kendomycin was accomplished by Lee and Yuan in 2004. [ 4 ] The total number of syntheses stands at 6. [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Kendomycin
Kenichi Fukui ( 福井 謙一 , Fukui Ken'ichi , October 4, 1918 – January 9, 1998) was a Japanese chemist . [ 1 ] He became the first person of East Asian ancestry to be awarded the Nobel Prize in Chemistry when he won the 1981 prize with Roald Hoffmann , for their independent investigations into the mechanisms of chemical reactions . Fukui's prize-winning work focused on the role of frontier orbitals in chemical reactions: specifically, that molecules share loosely bonded electrons which occupy the frontier orbitals, that is, the Highest Occupied Molecular Orbital ( HOMO ) and the Lowest Unoccupied Molecular Orbital ( LUMO ). [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] Fukui was the eldest of three sons of Ryokichi Fukui, a foreign trade merchant, and Chie Fukui. He was born in Nara, Japan . In his student days between 1938 and 1941, Fukui's interest was stimulated by quantum mechanics and Erwin Schrödinger 's equation. He also developed the belief that breakthroughs in science occur through the unexpected fusion of remotely related fields. In an interview with The Chemical Intelligencer , Fukui discussed his path towards chemistry, starting from middle school: "The reason for my selection of chemistry is not easy to explain, since chemistry was never my favorite branch in middle school and high school years. Actually, the fact that my respected Fabre had been a genius in chemistry had captured my heart latently. The most decisive occurrence in my educational career came when my father asked the advice of Professor Gen-itsu Kita of the Kyoto Imperial University concerning the course I should take." On the advice of Kita, a personal friend of the elder Fukui, young Kenichi was directed to the Department of Industrial Chemistry, with which Kita was then affiliated. He also explained that chemistry had seemed difficult to him because it appeared to require memorization, whereas he preferred subjects with a more logical character.
He followed the advice of a mentor who was well respected by Fukui himself and never looked back, following in Kita's footsteps by attending Kyoto University. In the same interview, Fukui also discussed his reasons for preferring theoretical over experimental chemistry. Although he certainly excelled at theoretical science, he spent much of his early research on experimental work: he quickly completed more than 100 experimental projects and papers, and he rather enjoyed the experimental side of chemistry. Later, when teaching, he would recommend experimental thesis projects for his students to balance them out; theoretical science came more naturally to students, but by suggesting or assigning experimental projects his students could understand both, as all scientists should. Following his graduation from Kyoto Imperial University in 1941, Fukui was engaged in the Army Fuel Laboratory of Japan during World War II . In 1943, he was appointed a lecturer in fuel chemistry at Kyoto Imperial University and began his career as an experimental organic chemist. He was professor of physical chemistry at Kyoto University from 1951 to 1982, president of the Kyoto Institute of Technology between 1982 and 1988, and a member of the International Academy of Quantum Molecular Science and honorary member of the International Academy of Science, Munich. [ citation needed ] He was also director of the Institute for Fundamental Chemistry from 1988 until his death, and president of the Chemical Society of Japan from 1983 to 1984. Aside from his Nobel Prize he received multiple awards, such as the Japan Academy Prize in 1962, Person of Cultural Merit in 1981, and the Grand Cordon of the Order of the Rising Sun in 1988, along with many other less prestigious honours. In 1952, Fukui with his young collaborators T. Yonezawa and H.
Shingu presented his molecular orbital theory of reactivity in aromatic hydrocarbons , which appeared in the Journal of Chemical Physics . At that time, his concept failed to garner adequate attention among chemists. Fukui observed in his Nobel lecture in 1981 that his original paper 'received a number of controversial comments. This was in a sense understandable, because for lack of my experiential ability, the theoretical foundation for this conspicuous result was obscure or rather improperly given.' The frontier orbitals concept came to be recognized following the 1965 publication by Robert B. Woodward and Roald Hoffmann of the Woodward–Hoffmann stereoselection rules , which could predict the reaction rates between two reactants. These rules, depicted in diagrams, explain why some pairs react easily while other pairs do not. The basis for these rules lies in the symmetry properties of the molecules and especially in the disposition of their electrons. Fukui acknowledged in his Nobel lecture that, 'It is only after the remarkable appearance of the brilliant work by Woodward and Hoffmann that I have become fully aware that not only the density distribution but also the nodal property of the particular orbitals have significance in such a wide variety of chemical reactions.' What is striking about Fukui's contributions is that he developed his ideas before chemists had access to large computers for modeling. Apart from exploring the theory of chemical reactions, Fukui's contributions to chemistry also include the statistical theory of gelation , organic synthesis by inorganic salts and polymerization kinetics. In an interview with New Scientist magazine in 1985, Fukui was highly critical of the practices adopted in Japanese universities and industries to foster science. He noted, "Japanese universities have a chair system that is a fixed hierarchy. This has its merits when trying to work as a laboratory on one theme.
But if you want to do original work you must start young, and young people are limited by the chair system. Even if students cannot become assistant professors at an early age they should be encouraged to do original work." Fukui also admonished Japanese industrial research, stating, "Industry is more likely to put its research effort into its daily business. It is very difficult for it to become involved in pure chemistry. There is a need to encourage long-range research, even if we don't know its goal and if its application is unknown." In another interview with The Chemical Intelligencer he elaborated on this criticism: "As is known worldwide, Japan has tried to catch up with the western countries since the beginning of this century by importing science from them." Japan is, in a sense, relatively new to fundamental science as a part of its society, and its relative shortage of the originality and funding that the western countries enjoy has hurt the country in fundamental science. He also stated, though, that the situation in Japan is improving, with funding for fundamental science in particular seeing a steady increase for years. Fukui was awarded the Nobel Prize for his realization that a good approximation for reactivity could be found by looking at the frontier orbitals ( HOMO/LUMO ). This was based on three main observations of molecular orbital theory as two molecules interact. From these observations, frontier molecular orbital (FMO) theory simplifies reactivity to interactions between the HOMO of one species and the LUMO of the other. This helps to explain the predictions of the Woodward–Hoffmann rules for thermal pericyclic reactions, which are summarized in the following statement: "A ground-state pericyclic change is symmetry-allowed when the total number of (4q+2)s and (4r)a components is odd" [ 10 ] [ 11 ] [ 12 ] [ 13 ] Fukui was elected a Foreign Member of the Royal Society (ForMemRS) in 1989. [ 1 ]
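The quoted selection rule lends itself to a simple tally. The following sketch (an illustration, not from the article; the representation of components as pairs of electron count and facial mode is an assumption) counts the (4q+2)s and (4r)a components and reports whether a ground-state pericyclic change is symmetry-allowed:

```python
# Generalized Woodward-Hoffmann rule: a ground-state (thermal) pericyclic
# change is symmetry-allowed when the number of (4q+2) suprafacial
# components plus the number of (4r) antarafacial components is odd.
def thermally_allowed(components):
    """components: list of (electron_count, mode) pairs, where mode is
    's' (suprafacial) or 'a' (antarafacial)."""
    tally = 0
    for electrons, mode in components:
        if electrons % 4 == 2 and mode == 's':    # a (4q+2)s component
            tally += 1
        elif electrons % 4 == 0 and mode == 'a':  # a (4r)a component
            tally += 1
    return tally % 2 == 1  # odd total => symmetry-allowed

# Diels-Alder reaction analysed as pi4s + pi2s: the 2-electron suprafacial
# component is the only one counted, so the total is 1 (odd) and the
# reaction is thermally allowed.
diels_alder = [(4, 's'), (2, 's')]
```

By the same tally, a suprafacial–suprafacial [2+2] cycloaddition gives two (4q+2)s components, an even total, and is therefore thermally forbidden.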
https://en.wikipedia.org/wiki/Kenichi_Fukui
Kennedy Joseph Previté-Orton (21 January 1872 – 16 March 1930) was a British chemist who became a lecturer and demonstrator at St Bartholomew's Hospital and then a professor at Bangor. He was also a keen climber, amateur geologist, ornithologist and bird conservationist. [ 1 ] Kennedy Orton was born in St. Leonard's on Sea to clergyman William Previté and Eliza Orton. His grandfather was Italian, and the name Previte is a Sicilian form of Prete. Kennedy's father studied at St. John's College, Cambridge, was thirty-first wrangler in 1860, and became a vicar in Leicester; after marrying he took the surname of his wife. Kennedy was the eldest son. He studied at Kibworth Grammar School (1882–1885) and then Wyggeston School, Leicester (1885–1888), before pursuing medicine at St. Thomas' Hospital , but there he became interested in chemistry and moved to St. John's College, Cambridge . [ 2 ] He then obtained a Ph.D. summa cum laude in Heidelberg under Karl von Auwers , before working for a year with Sir William Ramsay at University College, London . He was then lecturer and demonstrator of chemistry at St. Bartholomew's Hospital , working under F. D. Chattaway (1860–1944), who would inspire much of his subsequent research: he would examine N-halogen compounds, reaction mechanisms and equilibria. In 1903 he was appointed Professor of Chemistry at University College of North Wales, Bangor , where he headed the department until his death. After World War I, his department in Bangor became a centre for research in physical organic chemistry. He collaborated with Chattaway on the synthesis and study of nitro-halogen compounds. His work on reaction mechanisms and kinetics in collaboration with Arthur Lapworth led to his breaking away from Chattaway. He was elected a Fellow of the Royal Society in 1921.
His collaborators and students included Herbert Ben Watson (1894–1975), Alan Edwin Bradfield (1897–1953), Edward David Hughes (1906–1963), Brynmor Jones (1903–1989), Gwyn Williams (1904–1955), and Frederick George Soper (1898–1982). [ 3 ] [ 4 ] [ 5 ] [ 1 ] [ 6 ] [ 7 ] Besides being a chemist, he was a keen climber, geologist, and ornithologist , and a biannual ornithological lecture was endowed in his name. [ 8 ] [ 9 ] [ 10 ] [ 11 ] He married Annie Ley in 1897 and they had a son and two daughters. He died from pneumonia. [ 7 ]
https://en.wikipedia.org/wiki/Kennedy_J._P._Orton
Kenneth B. Storey FRSC (born October 23, 1949) is a Canadian scientist whose work draws from a variety of fields including biochemistry and molecular biology . He is a Professor of Biology , Biochemistry and Chemistry at Carleton University in Ottawa, Canada. Storey has a world-wide reputation for his research on biochemical adaptation - the molecular mechanisms that allow animals to adapt to and endure severe environmental stresses such as deep cold, oxygen deprivation, and desiccation. Kenneth Storey studied biochemistry at the University of Calgary (B.Sc. '71) and zoology at the University of British Columbia (Ph.D. '74). [ 1 ] [ 2 ] Storey is a Professor of Biochemistry, cross-appointed in the Departments of Biology, Chemistry and Neuroscience and holds the Canada Research Chair in Molecular Physiology at Carleton University in Ottawa, Canada. Storey is an elected fellow of the Royal Society of Canada , [ 3 ] of the Society for Cryobiology [ 4 ] and of the American Association for the Advancement of Science . He has won fellowships and awards for research excellence including the Fry medal from the Canadian Society of Zoologists (2011), the Flavelle medal from the Royal Society of Canada (2010), Ottawa Life Sciences Council Basic Research Award (1998), a Killam Senior Research Fellowship (1993–1995), the Ayerst Award from the Canadian Society for Molecular Biosciences (1989), an E.W.R. Steacie Memorial Fellowship from the Natural Sciences and Engineering Research Council of Canada (1984–1986), and four Carleton University Research Achievement Awards. Storey is the author of over 1200 research articles, the editor of seven books, has given over 500 talks at conferences and institutes worldwide, and organized numerous international symposia. 
[ 5 ] Storey's research includes studies of enzyme properties, gene expression , protein phosphorylation , epigenetics , and cellular signal transduction mechanisms to seek out the basic principles of how organisms endure and flourish under extreme conditions. He is particularly known within the field of cryobiology for his studies of animals that can survive freezing, especially the frozen "frog-sicles" ( Rana sylvatica ) that have made his work popular with multiple TV shows and magazines. [ 6 ] [ 7 ] Storey's studies of the adaptations that allow frogs, insects , and other animals to survive freezing have made major advances in the understanding of how cells, tissues and organs can endure freezing. Storey was also responsible for the discovery that some turtle species are freeze tolerant: newly hatched painted turtles that spend their first winter on land ( Chrysemys picta marginata & C. p. bellii ). These turtles are unique as they are the only reptiles, and the highest vertebrate life form, known to tolerate prolonged natural freezing of extracellular body fluids during winter hibernation. [ 8 ] These advances may aid the development of organ cryopreservation technology. [ 9 ] A second area of his research is metabolic rate depression: understanding the mechanisms by which some animals can reduce their metabolism and enter a state of hypometabolism or torpor that allows them to survive prolonged environmental stresses. His studies have identified molecular mechanisms that underlie metabolic arrest across phylogeny and that support phenomena including mammalian hibernation , estivation , and anoxia - and ischemia -tolerance. These studies hold key applications for medical science, particularly for preservation technologies that aim to extend the survival time of excised organs in cold or frozen storage.
[ 9 ] Additional applications include insights into hyperglycemia in metabolic syndrome and diabetes , [ 10 ] and anoxic and ischemic damage caused by heart attack and stroke . [ 11 ] Furthermore, Storey's lab has created several web based programs freely available for data management, data plotting, and microRNA analysis . Dr. Kenneth B. Storey is among the top 2% of highly cited scientists in the world. [ 12 ]
https://en.wikipedia.org/wiki/Kenneth_B._Storey
Kenneth Ikechukwu Ozoemena is a Nigerian physical chemist , materials scientist , and academic. He is a research professor at the University of the Witwatersrand (Wits) in Johannesburg , [ 1 ] where he heads the South African SARChI Chair in Materials Electrochemistry and Energy Technologies (MEET), supported by the Department of Science and Innovation (DSI), the National Research Foundation (NRF) and Wits. [ 2 ] Ozoemena's group conducts interdisciplinary research across physics , chemistry , biomedical , chemical , and metallurgical engineering . [ 1 ] He has authored numerous peer-reviewed articles, [ 3 ] 11 book chapters, and edited books, including Nanomaterials for Fuel Cell Catalysis and Nanomaterials in Advanced Batteries and Supercapacitors . [ 4 ] Ozoemena became a Fellow of the Royal Society of Chemistry (FRSC) in 2011, a Fellow of the African Academy of Sciences (FAAS) in 2015, and a member of the Academy of Science of South Africa (ASSAf) in 2016. [ 5 ] He serves as an associate editor for Electrocatalysis [ 6 ] and co-Editor-in-Chief of Electrochemistry Communications . [ 7 ] He is an indigene of Obinikpa Umuokpara, Okohia in Umuna, Onuimo local government area of Imo State, Nigeria. Ozoemena earned his baccalaureate degree in industrial chemistry from the University of Abia in 1992 and went on to receive master's degrees in chemistry and pharmaceutical chemistry in 1997 and 1998, respectively, from the University of Lagos . In 2003, he completed his Ph.D. at Rhodes University in South Africa and served as a research fellow at the University of Pretoria . [ 1 ] Following his Ph.D., Ozoemena began his academic career as an Andrew W. Mellon Lecturer of Chemistry at Rhodes University in 2004 and held an appointment at the University of Pretoria as a senior lecturer of chemistry in 2006, and later as extraordinary professor of chemistry from 2009 to 2017.
He was also appointed as an extraordinary professor of chemistry at the University of the Western Cape from 2011 to 2014, and an honorary professor of chemistry at the University of the Witwatersrand from 2014 to 2017. Subsequently, in 2017, after about an 8-year stint at the Council for Scientific and Industrial Research (CSIR), he was appointed as professor, and later promoted to research professor, at the School of Chemistry of the University of the Witwatersrand. [ 8 ] He serves as an honorary visiting professor at the Wuhan University of Technology , China . [ 9 ] Ozoemena was elected African representative of the International Society of Electrochemistry from 2010 to 2015 and chair of its Scientific Meeting Committee (SMC). He was the chair of the Organising Committee of the 70th Annual Meeting of the International Society of Electrochemistry (ISE) in Durban , the first conference of the ISE on the African continent. Subsequently, he served as the lead guest editor of the special issue of the conference in Electrochimica Acta . [ 10 ] Ozoemena has focused his research on materials electrochemistry, with a specific interest in advanced batteries, fuel cells , and electrochemical sensors as the primary aspects of investigation. Ozoemena has worked on improving the structural and electrochemical properties of lithium-ion batteries. [ 11 ] [ 12 ] One of his innovations is the use of microwave-assisted synthesis [ 13 ] [ 14 ] to mitigate the problems of manganese dissolution and the so-called Jahn-Teller distortion, which conspire against the development and commercialization of high-energy, low-cost manganese-based cathode materials. [ 15 ] Ozoemena's enquiry into microwave-assisted synthesis and the use of low-cost and environmentally friendly manganese-based raw materials has led to the discovery of a new strategy for making triplite manganese fluorophosphate.
[ 16 ] [ 17 ] In addition, Ozoemena's group has demonstrated that nanostructured manganese-based complexes are promising materials for the development of high-performance supercapacitors and pseudocapacitors. [ 18 ] [ 19 ] Ozoemena worked on the use of microwave-assisted synthesis to bring about ‘top-down’ nanosizing of palladium catalysts, introducing the term “MITNAD”, an acronym for “microwave-induced top-down nanostructuring and decoration”. [ 20 ] He has continued to explore the application of this and related techniques for the development of high-performance electrocatalysts for fuel cells and electrolyzers. [ 21 ] [ 22 ] Ozoemena and collaborators have studied several electrode materials that can enhance the efficacy of zinc-ion and rechargeable zinc-air batteries (RZAB). [ 23 ] The key research focus in this field has been to develop real and relevant RZAB technology for stationary and mobile applications. [ 24 ] Ozoemena has contributed to connecting biomedicine with electrochemistry, resulting in the creation of electrochemical bio- and immuno-sensors capable of detecting diseases that are mostly found in resource-limited countries, including tuberculosis in HIV-positive patients, [ 25 ] Vibrio cholerae toxins in water bodies, [ 26 ] substances of abuse such as tramadol, [ 27 ] and human papillomavirus (HPV) biomarkers for cervical cancer. [ 28 ]
https://en.wikipedia.org/wiki/Kenneth_Ikechukwu_Ozoemena
Kenneth John Packer FRS [ 1 ] (18 May 1938 – 18 September 2021) was a British nuclear magnetic resonance (NMR) scientist who was amongst the pioneers of NMR application in the second half of the 20th century. [ 2 ] Born in Kettering , Packer studied chemistry at Imperial College London , before embarking on a PhD at the University of Cambridge , where his career in the field of NMR and its applications would begin. His NMR research was established at the School of Chemistry at the University of East Anglia (UEA), where he remained for twenty years from 1964 to 1984. He then left academia for almost a decade, working for BP Research, before returning as research chair in the chemistry department of the University of Nottingham , where he remained until retirement in 2001. He received the Royal Society of Chemistry 's Medal for Analytical Spectroscopy in 1986. [ 3 ] He was elected Fellow of the Royal Society in 1991. This article about an English scientist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kenneth_John_Packer
Kenneth D. Karlin (born October 30, 1948, in Pasadena, California ) [ 1 ] is a professor of chemistry at Johns Hopkins University in Baltimore, Maryland. [ 2 ] Research in his group focuses on coordination chemistry relevant to biological and environmental processes, involving copper or heme complexes. [ 3 ] Of particular interest are the reactivities of such complexes with nitrogen oxides and O 2 , and the oxidation of substrates by the resultant compounds. He is also the Editor-in-Chief of the book series Progress in Inorganic Chemistry . [ 4 ] Karlin is the son of Stanford mathematician Samuel Karlin . [ 8 ]
https://en.wikipedia.org/wiki/Kenneth_Karlin_(chemist)
Lieutenant General Sir Kenneth Morley Loch , KCIE , KBE , CB , MC , (18 September 1890 – 9 January 1961) was a Scottish soldier in the British Army and defence planner. Born on 18 September 1890, Loch was educated at Wellington College, Berkshire and the Royal Military Academy, Woolwich , and, upon passing out from the latter, received a commission as a second lieutenant into the Royal Artillery on 23 December 1910. [ 1 ] [ 2 ] [ 3 ] He saw action during World War I at the retreat from Mons and the battles of the Marne and Aisne , all in 1914. [ 3 ] Leaving the front lines in 1916 he became an instructor in gunnery at the School of Instruction for the Royal Horse Artillery and the Royal Field Artillery until he returned to front line service in the Italian Campaign of 1918. [ 3 ] During the war Loch was twice mentioned in dispatches and received the Military Cross (MC). [ 4 ] [ 2 ] Between the wars Loch remained in the army and attended the Staff College, Camberley from 1923 to 1924. [ 2 ] His fellow students included numerous future general officers , such as Dudley Johnson , John Smyth , Arthur Wakely , Montagu Stopford , Arthur Percival , Douglas Henry Pratt , Robert Stone , John Halsted , Frederick Pile , Michael Gambier-Parry , Henry Wemyss , Robert Pargiter , Edmond Schreiber , Alastair MacDougall , Roderic Petre , Balfour Hutchison , Leslie Hill and Gordon Macready . [ 2 ] He was involved in air defence preparations for Britain around the British Empire . [ 3 ] From 1926 to 1929 he was a General Staff Officer Grade 2 (GSO2) to the Territorial Army (TA) Air Defence Formations, and from 1932 to 1935 an instructor at the Staff College, Quetta ; GSO2 at the War Office , 1935–1937, and GSO1, Royal Air Force (RAF) Fighter Command , 1937–1938. [ 4 ] [ 2 ] From the beginning of World War II until 1941, Loch was Director of Anti-Aircraft and Coastal Defence, [ 3 ] [ 4 ] first as acting major-general , then from 25 November 1940 as temporary major-general. 
[ 5 ] He argued successfully against the use of chemical weapons in case of a German invasion of Britain (see Operation Sea Lion ). [ 6 ] After a three-year tour of inspection of anti-aircraft defences in the British Empire (a Special Employment), he became Master-General of Ordnance, India, from 1944 until his retirement in 1947. [ 3 ] After retiring the service Loch was with the British Council from 1947 to 1948, then served as a member of the Control Commission for Germany, 1948–1949, and returned to the British Council from 1950 to 1958. [ 4 ] He was also Chairman of Governors of Wellington College. [ 2 ] In 1929 Loch married Monica Joan Estelle Ruffer, the daughter of a German banker , and had two sons. He was also the uncle of the Labour Member of Parliament Tam Dalyell . [ 7 ]
https://en.wikipedia.org/wiki/Kenneth_Loch
Sir Kenneth "Ken" Murray (30 December 1930 – 7 April 2013) was a British molecular biologist and the Biogen Professor of Molecular Biology at the University of Edinburgh . [ 4 ] An important early figure in genetic engineering , Murray cofounded Biogen . There, he and his team developed one of the first vaccines against hepatitis B . Along with his wife, biologist Lady Noreen (née Parker) , Murray also founded the Darwin Trust of Edinburgh, a charity supporting young biologists in their doctoral studies. [ 5 ] Murray achieved a first-class honours degree in chemistry followed by a PhD from the University of Birmingham . From 1960 to 1964 he was a researcher at J. Murray Luck 's laboratory at Stanford University and from 1964 to 1967 he was a researcher at Fred Sanger's laboratory at Cambridge University . In 1967, he was appointed lecturer at the University of Edinburgh and in 1976 he became Head of Molecular Biology. In 1984 he was appointed Biogen Professor of Molecular Biology, a post which he retained until his retirement. He was elected a Fellow of the Royal Society in 1979, [ 6 ] a Fellow of the Royal Society of Edinburgh in 1989, and awarded the RSE Royal Medal in 2000 with the citation "For their outstanding contribution to the development of Biotechnology, both nationally and internationally, through his development of what is now known as recombinant DNA technology." [ 7 ] Murray was born in Yorkshire and brought up in the Midlands . He left school at the age of 16 to become a laboratory technician at Boots in Nottingham . He studied part-time and obtained a degree in chemistry and then a PhD in microbiology from the University of Birmingham . Sir Kenneth's wife, Lady Noreen Murray CBE, was elected a Fellow of the Royal Society in 1982. [ 8 ] She died on 12 May 2011 aged 76. [ 8 ] This article about an English scientist is a stub . You can help Wikipedia by expanding it . This article about a biologist from the United Kingdom is a stub . 
You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kenneth_Murray_(biologist)
" Kenny Dies " is the thirteenth episode of the fifth season of the animated television series South Park , and the 78th episode of the series overall. "Kenny Dies" originally aired in the United States on December 5, 2001 on Comedy Central and in the United Kingdom on April 20, 2002 on Sky One . In the episode, Cartman comes across a truckload of fetuses he cannot sell thanks to a recent government ruling on stem cell research. When Kenny is diagnosed with a terminal illness, Cartman uses it to lobby Congress to restore stem cell research. The episode was written and directed by series co-creator Trey Parker and is rated TV-MA in the United States, except on syndicated broadcasts and post-2017 reruns on Comedy Central, where the episode is instead rated TV-14 . The running gag of Kenny dying in almost every episode was dropped after this episode; he did not reappear bodily until " Red Sleigh Down ", after fans were upset over his absence throughout season 6 in 2002. This was the final appearance of the "4th Grade" title sequence, which was first seen in " 4th Grade ". On the DVD commentary for the episode, Parker and Matt Stone state that they had originally planned to kill Kyle off for a year, but decided to kill Kenny instead as they were running out of original ways to kill him. The episode begins in an abortion clinic, where a woman is giving her permission for scientists to use her dead fetus for stem cell research. During transport of the fetuses from the clinic to the Alder Research Group, however, the driver swerves to avoid a deer running across a mountain road, and the truck falls off the road and is destroyed. Cartman happens to be passing the crash at this moment and notices the fetuses from the truck. He steals them and starts calling companies, hoping to resell the fetuses at a huge profit.
Meanwhile, the boys find out that Kenny has been diagnosed with a terminal disease, which, while not named in the show, is confirmed to be muscular dystrophy. Everyone is shocked by the news, and everyone except Stan tries their best to be confident for Kenny's sake. Stan, however, is too upset to be near Kenny in his state and refuses to even visit him. Cartman wants to restore stem cell research, and a researcher explains to Cartman how the studies actually work. With this information, Cartman realizes that stem cell research might also be used to help Kenny recover from his illness. He then thinks that the research could also be used to duplicate a " Shakey's Pizza "; the researcher recommends using lumber for that second idea. Cartman then goes to Washington and gives a very thoughtful speech to the House of Representatives, requesting that the stem cell research ban be lifted and explaining that his best friend, Kenny, is slowly dying as they speak. He says that he believes the research can save Kenny's life. The House of Representatives committee lifts the ban after Cartman sings "Heat of the Moment". Cartman begins visiting abortion clinics throughout the area in order to collect even more aborted fetuses. Meanwhile, after Chef gives Stan a little lecture on death and why God lets it happen at any given time, Stan finally comes to terms with Kenny dying. He also finds out that stem cell research has been made legal again and, with the hope that a cure might be found, rushes to visit Kenny. Unfortunately, Kenny's hospital room is empty, and the teddy bear he was given is on the floor, seemingly holding onto the unmade bed Kenny slept in. Stan knows what this means; Kyle says Kenny just stopped breathing, and it was over. He adds that Kenny's last words were "Where's Stan?" At the funeral, Stan feels extremely guilty over not seeing Kenny before he died, and considers himself to have been Kenny's worst friend.
As the funeral takes place, though, Cartman comes rushing in and drags Stan and Kyle out of the ceremony, explaining that they have to see the "miracle" that has just occurred. The boys are shocked and disgusted to see that, instead of using the stem cells to save Kenny, Cartman only wanted them so he could duplicate a Shakey's Pizza. Kyle is absolutely furious that Cartman used Kenny's illness and death just so he could have his own Shakey's and launches into a beating against him, while Stan feels better now that he knows it was Cartman who was Kenny's worst friend. Serene Dominic of the Detroit Metro Times called the scene where Cartman leads members of the United States Congress in a sing-along of " Heat of the Moment " the "Greatest Cartoon Moment" in the career of the original four members of Asia . [ 1 ] In an article for ESPN.com , Tim Kavanagh discussed stem cells and how they were used in the episode, writing, "This, as with many other important topics of our day, I learned from "South Park," specifically Episode 513, entitled 'Kenny Dies.'" [ 2 ] In a review of the South Park season 5 DVD release, Choire Sicha of The New York Times gave a "not-so-surprising surprise ending alert" that "Kenny finally really dies" at the end of the episode. [ 3 ] Alessandra Stanley of The New York Times cited the episode when noting that the political commentary on South Park, like The Simpsons and Freak Show , was genuinely "provocative" satire, unlike the "safer ... mainstream iconoclasm" of Saturday Night Live . [ 4 ] "Kenny Dies", along with the thirteen other episodes from South Park: The Complete Fifth Season , was released on a three-disc DVD set in the United States on February 22, 2005. The set includes brief audio commentaries by Parker and Stone for each episode. [ 5 ]
https://en.wikipedia.org/wiki/Kenny_Dies
Kensō Soai ( 硤合 憲三 , Soai Kensō ) (born 1950) is a Japanese organic chemist . He is a university lecturer in the Applied Chemistry Department of Tokyo University of Science . Soai studied at the University of Tokyo , where he received his Ph.D. in 1979 in organic synthesis under Teruaki Mukaiyama and was a fellow of the Japan Society for the Promotion of Science . He conducted his postdoctoral studies with Ernest L. Eliel at the University of North Carolina . In 1981, he became a lecturer at Tokyo University of Science, and was promoted to associate professor (1986) and full professor (1991). [ 1 ] His research involves asymmetric and enantioselective synthesis , asymmetric autocatalysis , and the origin of chirality . The Soai reaction , the alkylation of pyrimidine-5-carbaldehyde with diisopropylzinc, is named after him. Soai was a visiting professor at many universities, such as ESPCI Paris (2001), Kyushu University (2005), Waseda University (2007-2010), the University of Strasbourg (2008), and Jilin University (2010-2015). [ 1 ] This article about a Japanese scientist is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kensō_Soai
Kentrophoros is a genus of ciliates in the class Karyorelictea . Ciliates in this genus lack a distinct oral apparatus and depend primarily on symbiotic bacteria for their nutrition. Kentrophoros is the sole genus in the family Kentrophoridae Jankowski 1980. [ 2 ] The type species of the genus is K. fasciolatus Sauerbrey 1928, first described from the Bay of Kiel . Synonyms are Centrophorus Kahl 1931 (an illegitimate synonym because the name was already used for a genus of sharks ) and Centrophorella Kahl 1935. Fifteen species of Kentrophoros have been formally described, although several of these names may be synonyms for the same species. [ 3 ] The ciliates are long and ribbon-shaped, like other karyorelictean ciliates that live in the marine interstitial habitat. [ 4 ] In some species, the cell body is folded or involuted into a tube or more elaborate shapes. The ventral side is ciliated , while the dorsal side is mostly unciliated except for a single "circle kinety" at the margin. [ 3 ] The dorsal side is covered with a single layer of symbiotic bacteria. Kentrophoros lacks a distinct oral apparatus, although densely-spaced kinetids associated with fibers (nematodesmata) at the anterior part of the cell may be vestiges of the oral apparatus. [ 3 ] The number and arrangement of nuclei within the cell are also variable between species. Some species have only one micronucleus and two macronuclei , but others can have multiple clusters of macro- and micronuclei, or so-called "composite nuclei" where each cluster of macro- and micronuclei is enclosed in another membrane. [ 5 ] Kentrophoros live in coastal marine sediments , where they prefer the interface between oxic and anoxic layers. [ 6 ] The dorsal side of Kentrophoros is covered in a single layer of rod-shaped bacterial symbionts . 
These bacteria gain their energy from oxidizing sulfide , and unlike other sulfur-oxidizing symbionts, lack the genetic capacity to fix CO 2 autotrophically into biomass; instead they appear to be entirely heterotrophic. [ 7 ] The ciliates ingest the bacteria as their primary food source. This symbiosis has therefore been called a " kitchen garden " carried by the ciliates to feed themselves. [ 8 ] The symbionts occupy about 50% of the total volume. They belong to a group in the Gammaproteobacteria for which the provisional name " Candidatus Kentron" has been proposed. [ 9 ] Similar symbioses between eukaryotic hosts and sulfur-oxidizing bacteria include the ciliate Zoothamnium niveum , oligochaete worm Olavius algarvensis , and flatworm Paracatenula .
https://en.wikipedia.org/wiki/Kentrophoros
The Kepler conjecture , named after the 17th-century mathematician and astronomer Johannes Kepler , is a mathematical theorem about sphere packing in three-dimensional Euclidean space . It states that no arrangement of equally sized spheres filling space has a greater average density than that of the cubic close packing ( face-centered cubic ) and hexagonal close packing arrangements. The density of these arrangements is around 74.05%. In 1998, Thomas Hales , following an approach suggested by Fejes Tóth (1953) , announced that he had a proof of the Kepler conjecture. Hales' proof is a proof by exhaustion involving the checking of many individual cases using complex computer calculations. Referees said that they were "99% certain" of the correctness of Hales' proof, and the Kepler conjecture was accepted as a theorem . In 2014, the Flyspeck project team, headed by Hales, announced the completion of a formal proof of the Kepler conjecture using a combination of the Isabelle and HOL Light proof assistants . In 2017, the formal proof was accepted by the journal Forum of Mathematics, Pi . [ 1 ] Imagine filling a large container with small equal-sized spheres: Say a porcelain gallon jug with identical marbles. The "density" of the arrangement is equal to the total volume of all the marbles, divided by the volume of the jug. To maximize the number of marbles in the jug means to create an arrangement of marbles stacked between the sides and bottom of the jug, that has the highest possible density, so that the marbles are packed together as closely as possible. Experiment shows that dropping the marbles in randomly, with no effort to arrange them tightly, will achieve a density of around 65%. 
[ 2 ] However, a higher density can be achieved by carefully arranging the marbles as follows:

1. Arrange the first layer of marbles in a hexagonal lattice (the honeycomb arrangement).
2. Put the next layer of marbles in the lowest-lying gaps you can find above and between the marbles of the first layer, regardless of pattern.
3. Continue filling in the lowest-lying gaps of each successive layer in the same way.

At each step there are at least two choices of how to place the next layer, so this otherwise unplanned method of stacking the spheres creates an uncountably infinite number of equally dense packings. The best known of these are called cubic close packing and hexagonal close packing . Each of these arrangements has an average density of

\frac{\pi}{3\sqrt{2}} \approx 0.74048

The Kepler conjecture says that this is the best that can be done—no other arrangement of marbles has a higher average density: despite there being astoundingly many different arrangements possible that follow the same procedure as steps 1–3, no packing (according to the procedure or not) can possibly fit more marbles into the same jug. The conjecture was first stated by Johannes Kepler ( 1611 ) in his paper 'On the six-cornered snowflake'. He had started to study arrangements of spheres as a result of his correspondence with the English mathematician and astronomer Thomas Harriot in 1606. Harriot was a friend and assistant of Sir Walter Raleigh , who had asked Harriot to find formulas for counting stacked cannonballs, an assignment which in turn led Harriot to wonder what the best way to stack cannonballs was. [ 3 ] Harriot published a study of various stacking patterns in 1591, and went on to develop an early version of atomic theory . Kepler did not have a proof of the conjecture, and the next step was taken by Carl Friedrich Gauss ( 1831 ), who proved that the Kepler conjecture is true if the spheres have to be arranged in a regular lattice . This meant that any packing arrangement that disproved the Kepler conjecture would have to be an irregular one. But eliminating all possible irregular arrangements is very difficult, and this is what made the Kepler conjecture so hard to prove.
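The densities discussed above can be checked with a few lines of Python; the random-packing figure is empirical, so only the close-packing value is computed exactly:

```python
import math

# Average density of cubic/hexagonal close packing: pi / sqrt(18) = pi / (3*sqrt(2))
fcc_density = math.pi / math.sqrt(18)
print(f"close-packing density = {fcc_density:.5f}")  # 0.74048, i.e. about 74.05%

# Sanity checks against the other figures quoted in the text:
assert fcc_density > 0.65   # beats random packing (~65%)
assert fcc_density < 0.78   # below Rogers' 1958 upper bound (~78%)
```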
In fact, there are irregular arrangements that are denser than the cubic close packing arrangement over a small enough volume, but any attempt to extend these arrangements to fill a larger volume is now known to always reduce their density. After Gauss, no further progress was made towards proving the Kepler conjecture in the nineteenth century. In 1900 David Hilbert included it in his list of twenty-three unsolved problems of mathematics —it forms part of Hilbert's eighteenth problem . The next step toward a solution was taken by László Fejes Tóth . Fejes Tóth (1953) showed that the problem of determining the maximum density of all arrangements (regular and irregular) could be reduced to a finite (but very large) number of calculations. This meant that a proof by exhaustion was, in principle, possible. As Fejes Tóth realised, a fast enough computer could turn this theoretical result into a practical approach to the problem. Meanwhile, attempts were made to find an upper bound for the maximum density of any possible arrangement of spheres. English mathematician Claude Ambrose Rogers (see Rogers (1958) ) established an upper bound value of about 78%, and subsequent efforts by other mathematicians reduced this value slightly, but this was still much larger than the cubic close packing density of about 74%. In 1990, Wu-Yi Hsiang claimed to have proven the Kepler conjecture. The proof was praised by Encyclopædia Britannica and Science , and Hsiang was also honored at joint meetings of the AMS and MAA. [ 4 ] Wu-Yi Hsiang ( 1993 , 2001 ) claimed to prove the Kepler conjecture using geometric methods. However, Gábor Fejes Tóth (the son of László Fejes Tóth) stated in his review of the paper, "As far as details are concerned, my opinion is that many of the key statements have no acceptable proofs." Hales (1994) gave a detailed criticism of Hsiang's work, to which Hsiang (1995) responded. The current consensus is that Hsiang's proof is incomplete.
[ 5 ] Following the approach suggested [ 6 ] by László Fejes Tóth , Thomas Hales , then at the University of Michigan , determined that the maximum density of all arrangements could be found by minimizing a function with 150 variables. In 1992, assisted by his graduate student Samuel Ferguson, he embarked on a research program to systematically apply linear programming methods to find a lower bound on the value of this function for each one of a set of over 5,000 different configurations of spheres. If a lower bound (for the function value) could be found for every one of these configurations that was greater than the value of the function for the cubic close packing arrangement, then the Kepler conjecture would be proved. Finding lower bounds for all cases involved solving about 100,000 linear programming problems. When presenting the progress of his project in 1996, Hales said that the end was in sight, but it might take "a year or two" to complete. In August 1998 Hales announced that the proof was complete. At that stage, it consisted of 250 pages of notes and 3 gigabytes of computer programs, data and results. Despite the unusual nature of the proof, the editors of the Annals of Mathematics agreed to publish it, provided it was accepted by a panel of twelve referees. In 2003, after four years of work, the head of the referees' panel, Gábor Fejes Tóth, reported that the panel were "99% certain" of the correctness of the proof, but they could not certify the correctness of all of the computer calculations. Hales (2005) published a 100-page paper describing the non-computer part of his proof in detail. Hales & Ferguson (2006) and several subsequent papers described the computational portions. Hales and Ferguson received the Fulkerson Prize for outstanding papers in the area of discrete mathematics for 2009. In January 2003, Hales announced the start of a collaborative project to produce a complete formal proof of the Kepler conjecture.
The aim was to remove any remaining uncertainty about the validity of the proof by creating a formal proof that can be verified by automated proof checking software such as HOL Light and Isabelle . This project was called Flyspeck – an expansion of the acronym FPK standing for Formal Proof of Kepler . At the start of this project, in 2007, Hales estimated that producing a complete formal proof would take around 20 years of work. [ 7 ] Hales published a "blueprint" for the formal proof in 2012; [ 8 ] the completion of the project was announced on August 10, 2014. [ 9 ] In January 2015 Hales and 21 collaborators posted a paper titled "A formal proof of the Kepler conjecture" on the arXiv , claiming to have proved the conjecture. [ 10 ] In 2017, the formal proof was accepted by the journal Forum of Mathematics . [ 1 ]
https://en.wikipedia.org/wiki/Kepler_conjecture
In classical mechanics , the Kepler problem is a special case of the two-body problem , in which the two bodies interact by a central force that varies in strength as the inverse square of the distance between them. The force may be either attractive or repulsive. The problem is to find the position or speed of the two bodies over time given their masses , positions , and velocities . Using classical mechanics, the solution can be expressed as a Kepler orbit using six orbital elements . The Kepler problem is named after Johannes Kepler , who proposed Kepler's laws of planetary motion (which are part of classical mechanics and solved the problem for the orbits of the planets) and investigated the types of forces that would result in orbits obeying those laws (called Kepler's inverse problem ). [ 1 ] For a discussion of the Kepler problem specific to radial orbits, see Radial trajectory . General relativity provides more accurate solutions to the two-body problem, especially in strong gravitational fields . The inverse square law behind the Kepler problem is the most important central force law. [ 1 ] : 92 The Kepler problem is important in celestial mechanics , since Newtonian gravity obeys an inverse square law . Examples include a satellite moving about a planet, a planet about its sun, or two binary stars about each other. The Kepler problem is also important in the motion of two charged particles, since Coulomb’s law of electrostatics also obeys an inverse square law . The Kepler problem and the simple harmonic oscillator problem are the two most fundamental problems in classical mechanics . They are the only two problems that have closed orbits for every possible set of initial conditions, i.e., return to their starting point with the same velocity ( Bertrand's theorem ). [ 1 ] : 92 The Kepler problem also conserves the Laplace–Runge–Lenz vector , which has since been generalized to include other interactions. 
The solution of the Kepler problem allowed scientists to show that planetary motion could be explained entirely by classical mechanics and Newton's law of gravity ; the scientific explanation of planetary motion played an important role in ushering in the Enlightenment . The Kepler problem begins with the empirical results of Johannes Kepler arduously derived by analysis of the astronomical observations of Tycho Brahe . After some 70 attempts to match the data to circular orbits, Kepler hit upon the idea of the elliptic orbit . He eventually summarized his results in the form of three laws of planetary motion . [ 2 ] What is now called the Kepler problem was first discussed by Isaac Newton as a major part of his Principia . His "Theorema I" begins with the first two of his three axioms or laws of motion and results in Kepler's second law of planetary motion. Next Newton proves his "Theorema II" which shows that if Kepler's second law results, then the force involved must be along the line between the two bodies. In other words, Newton proves what today might be called the "inverse Kepler problem": the orbit characteristics require the force to depend on the inverse square of the distance. [ 3 ] : 107 The central force F between two objects varies in strength as the inverse square of the distance r between them:

\mathbf{F} = \frac{k}{r^2} \, \hat{\mathbf{r}}

where k is a constant and \hat{\mathbf{r}} represents the unit vector along the line between them. [ 4 ] The force may be either attractive ( k < 0) or repulsive ( k > 0). The corresponding scalar potential is:

V(r) = \frac{k}{r}

The equation of motion for the radius r of a particle of mass m moving in a central potential V ( r ) is given by Lagrange's equations :

m \frac{d^2 r}{dt^2} - m r \omega^2 = -\frac{dV}{dr}

where \omega \equiv \frac{d\theta}{dt} and the angular momentum L = m r^2 \omega is conserved.
For illustration, the first term on the left-hand side is zero for circular orbits, and the applied inwards force \frac{dV}{dr} equals the centripetal force requirement m r \omega^2 , as expected. If L is not zero, the definition of angular momentum allows a change of independent variable from t to \theta ,

\frac{d}{dt} = \frac{L}{m r^2} \frac{d}{d\theta}

giving the new equation of motion that is independent of time:

\frac{L}{r^2} \frac{d}{d\theta} \left( \frac{L}{m r^2} \frac{dr}{d\theta} \right) - \frac{L^2}{m r^3} = -\frac{dV}{dr}

The expansion of the first term is

\frac{L}{r^2} \frac{d}{d\theta} \left( \frac{L}{m r^2} \frac{dr}{d\theta} \right) = -\frac{L^2}{m r^2} \frac{d^2}{d\theta^2} \left( \frac{1}{r} \right)

This equation becomes quasilinear on making the change of variables u \equiv \frac{1}{r} and multiplying both sides by \frac{m r^2}{L^2} . After substitution and rearrangement:

\frac{d^2 u}{d\theta^2} + u = -\frac{m}{L^2} \frac{d}{du} V\!\left( \frac{1}{u} \right)

For an inverse-square force law such as the gravitational or electrostatic potential , the scalar potential can be written

V(r) = \frac{k}{r} = k u

The orbit u(\theta) can be derived from the general equation

\frac{d^2 u}{d\theta^2} + u = -\frac{m}{L^2} \frac{d}{du}(k u) = -\frac{k m}{L^2}

whose solution is the constant -\frac{k m}{L^2} plus a simple sinusoid

u \equiv \frac{1}{r} = -\frac{k m}{L^2} \left[ 1 + e \cos(\theta - \theta_0) \right]

where e (the eccentricity ) and \theta_0 (the phase offset ) are constants of integration. This is the general formula for a conic section that has one focus at the origin; e = 0 corresponds to a circle , e < 1 corresponds to an ellipse, e = 1 corresponds to a parabola , and e > 1 corresponds to a hyperbola . The eccentricity e is related to the total energy E (cf. the Laplace–Runge–Lenz vector ):

e = \sqrt{1 + \frac{2 E L^2}{k^2 m}}

Comparing these formulae shows that E < 0 corresponds to an ellipse (all solutions which are closed orbits are ellipses), E = 0 corresponds to a parabola , and E > 0 corresponds to a hyperbola .
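As a quick numerical sanity check (with purely illustrative constants, not values from the article), the sinusoidal solution u(θ) = C [1 + e cos(θ − θ₀)] can be verified to satisfy the orbit equation u″ + u = C, writing C for −km/L²:

```python
import math

# Illustrative constants standing in for -k*m/L**2, the eccentricity, and the phase offset
C, e, theta0 = 2.0, 0.5, 0.3
u = lambda th: C * (1.0 + e * math.cos(th - theta0))

h = 1e-4  # step for a central-difference second derivative
for th in (0.0, 1.0, 2.5, 4.0):
    u_dd = (u(th - h) - 2.0 * u(th) + u(th + h)) / h**2   # numerical u''(theta)
    assert abs(u_dd + u(th) - C) < 1e-6                   # u'' + u = C holds
print("u(theta) solves the orbit equation at all sample angles")
```

The same check works for any C, e, and θ₀, reflecting the fact that e and θ₀ are free constants of integration.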
In particular, E = − k 2 m 2 L 2 {\displaystyle E=-{\frac {k^{2}m}{2L^{2}}}} for perfectly circular orbits (the central force exactly equals the centripetal force requirement , which determines the required angular velocity for a given circular radius). For a repulsive force ( k > 0) only e > 1 applies.
https://en.wikipedia.org/wiki/Kepler_problem
The Kepler space telescope is a defunct space telescope launched by NASA in 2009 [ 5 ] to discover Earth-sized planets orbiting other stars . [ 6 ] [ 7 ] Named after astronomer Johannes Kepler , [ 8 ] the spacecraft was launched into an Earth-trailing heliocentric orbit . The principal investigator was William J. Borucki . After nine and a half years of operation, the telescope's reaction control system fuel was depleted, and NASA announced its retirement on October 30, 2018. [ 9 ] [ 10 ] Designed to survey a portion of Earth's region of the Milky Way to discover Earth-size exoplanets in or near habitable zones and to estimate how many of the billions of stars in the Milky Way have such planets, [ 6 ] [ 11 ] [ 12 ] Kepler's sole scientific instrument is a photometer that continually monitored the brightness of approximately 150,000 main sequence stars in a fixed field of view. [ 13 ] These data were transmitted to Earth, then analyzed to detect periodic dimming caused by exoplanets that cross in front of their host star. Only planets whose orbits are seen edge-on from Earth could be detected. Kepler observed 530,506 stars, and had detected 2,778 confirmed planets as of June 16, 2023. [ 14 ] [ 15 ] The Kepler space telescope was part of NASA's Discovery Program of relatively low-cost science missions. The telescope's construction and initial operation were managed by NASA's Jet Propulsion Laboratory , with Ball Aerospace responsible for developing the Kepler flight system. [ 16 ] In January 2006, the project's launch was delayed eight months because of budget cuts and consolidation at NASA. [ 17 ] It was delayed again by four months in March 2006 due to fiscal problems. [ 17 ] During this time, the high-gain antenna was changed from a design using a gimbal to one fixed to the frame of the spacecraft to reduce cost and complexity, at the cost of one observation day per month. 
[ 18 ] The Ames Research Center was responsible for the ground system development, mission operations since December 2009, and scientific data analysis. The initial planned lifetime was three and a half years, [ 19 ] but greater-than-expected noise in the data , from both the stars and the spacecraft, meant additional time was needed to fulfill all mission goals. Initially, in 2012, the mission was expected to be extended until 2016, [ 20 ] but on July 14, 2012, one of the four reaction wheels used for pointing the spacecraft stopped turning, and completing the mission would only be possible if the other three all remained reliable. [ 21 ] Then, on May 11, 2013, a second one failed, disabling the collection of science data [ 22 ] and threatening the continuation of the mission. [ 23 ] On August 15, 2013, NASA announced that they had given up trying to fix the two failed reaction wheels. This meant the current mission needed to be modified, but it did not necessarily mean the end of planet hunting. NASA had asked the space science community to propose alternative mission plans "potentially including an exoplanet search, using the remaining two good reaction wheels and thrusters". [ 24 ] [ 25 ] [ 26 ] [ 27 ] On November 18, 2013, the K2 "Second Light" proposal was reported. This would include utilizing the disabled Kepler in a way that could detect habitable planets around smaller, dimmer red dwarfs . [ 28 ] [ 29 ] [ 30 ] [ 31 ] On May 16, 2014, NASA announced the approval of the K2 extension. [ 32 ] By January 2015, Kepler and its follow-up observations had found 1,013 confirmed exoplanets in about 440 star systems , along with a further 3,199 unconfirmed planet candidates. [ B ] [ 33 ] [ 34 ] Four planets have been confirmed through Kepler's K2 mission. 
[ 35 ] In November 2013, astronomers estimated, based on Kepler space mission data, that there could be as many as 40 billion rocky Earth-size exoplanets orbiting in the habitable zones of Sun-like stars and red dwarfs within the Milky Way . [ 36 ] [ 37 ] [ 38 ] It is estimated that 11 billion of these planets may be orbiting Sun-like stars. [ 39 ] The nearest such planet may be 3.7 parsecs (12 ly ) away, according to the scientists. [ 36 ] [ 37 ] On January 6, 2015, NASA announced the 1,000th confirmed exoplanet discovered by the Kepler space telescope. Four of the newly confirmed exoplanets were found to orbit within habitable zones of their related stars : three of the four, Kepler-438b , Kepler-442b and Kepler-452b , are almost Earth-size and likely rocky; the fourth, Kepler-440b , is a super-Earth . [ 40 ] On May 10, 2016, NASA verified 1,284 new exoplanets found by Kepler, the single largest finding of planets to date. [ 41 ] [ 42 ] [ 43 ] Kepler data have also helped scientists observe and understand supernovae ; measurements were collected every half-hour so the light curves were especially useful for studying these types of astronomical events. [ 44 ] On October 30, 2018, after the spacecraft ran out of fuel, NASA announced that the telescope would be retired. [ 45 ] The telescope was shut down the same day, bringing an end to its nine-year service. Kepler observed 530,506 stars and discovered 2,662 exoplanets over its lifetime. [ 15 ] A newer NASA mission, TESS , launched in 2018, is continuing the search for exoplanets. [ 46 ] The telescope has a mass of 1,039 kilograms (2,291 lb) and contains a Schmidt camera with a 0.95-meter (37.4 in) front corrector plate (lens) feeding a 1.4-meter (55 in) primary mirror —at the time of its launch this was the largest mirror on any telescope outside Earth orbit, [ 47 ] though the Herschel Space Observatory took this title a few months later. 
Its telescope has a 115 deg 2 (about 12-degree diameter) field of view (FoV), roughly equivalent to the size of one's fist held at arm's length. Of this, 105 deg 2 is of science quality, with less than 11% vignetting . The photometer has a soft focus to provide excellent photometry , rather than sharp images. The mission goal was a combined differential photometric precision (CDPP) of 20 ppm for a m (V)=12 Sun-like star for a 6.5-hour integration, though the observations fell short of this objective (see mission status ). The focal plane of the spacecraft's camera is made out of forty-two 50 × 25 mm (2 × 1 in) CCDs at 2200×1024 pixels each, possessing a total resolution of 94.6 megapixels , [ 48 ] [ 49 ] which at the time made it the largest camera system launched into space. [ 19 ] The array was cooled by heat pipes connected to an external radiator. [ 1 ] The CCDs were read out every 6.5 seconds (to limit saturation) and co-added on board for 58.89 seconds for short cadence targets, and 1765.5 seconds (29.4 minutes) for long cadence targets. [ 50 ] Due to the larger bandwidth requirements for the former, these were limited in number to 512 compared to 170,000 for long cadence. However, even though at launch Kepler had the highest data rate of any NASA mission, [ citation needed ] the 29-minute sums of all 95 million pixels constituted more data than could be stored and sent back to Earth. Therefore, the science team pre-selected the relevant pixels associated with each star of interest, amounting to about 6 percent of the pixels (5.4 megapixels). The data from these pixels was then requantized, compressed and stored, along with other auxiliary data, in the on-board 16 gigabyte solid-state recorder. Data that was stored and downlinked includes science stars, p-mode stars , smear, black level, background and full field-of-view images. [ 1 ] [ 51 ] The Kepler primary mirror is 1.4 meters (4.6 ft) in diameter. 
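The focal-plane figures quoted above are internally consistent, as a little arithmetic shows:

```python
# Forty-two 2200x1024 CCDs give the quoted 94.6-megapixel focal plane
pixels = 42 * 2200 * 1024
print(pixels)  # 94617600

assert round(pixels / 1e6, 1) == 94.6

# One long cadence (1765.5 s, i.e. roughly 29.4 min) spans 30 short cadences of 58.89 s
assert abs(1765.5 / 60 - 29.4) < 0.05
assert round(1765.5 / 58.89) == 30
```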
Manufactured by glass maker Corning using ultra-low expansion (ULE) glass , the mirror is specifically designed to have a mass only 14% that of a solid mirror of the same size. [ 52 ] [ 53 ] To produce a space telescope system with sufficient sensitivity to detect relatively small planets, as they pass in front of stars, a very high reflectance coating on the primary mirror was required. Using ion assisted evaporation , Surface Optics Corp. applied a protective nine-layer silver coating to enhance reflection and a dielectric interference coating to minimize the formation of color centers and atmospheric moisture absorption. [ 54 ] [ 55 ] In terms of photometric performance, Kepler worked well, much better than any Earth-bound telescope, but short of design goals. The objective was a combined differential photometric precision (CDPP) of 20 parts per million (PPM) on a magnitude 12 star for a 6.5-hour integration. This estimate was developed allowing 10 ppm for stellar variability, roughly the value for the Sun. The obtained accuracy for this observation has a wide range, depending on the star and position on the focal plane, with a median of 29 ppm. Most of the additional noise appears to be due to a larger-than-expected variability in the stars themselves (19.5 ppm as opposed to the assumed 10.0 ppm), with the rest due to instrumental noise sources slightly larger than predicted. [ 56 ] [ 48 ] Because decrease in brightness from an Earth-size planet transiting a Sun-like star is so small, only 80 ppm, the increased noise means each individual transit is only a 2.7 σ event, instead of the intended 4 σ. This, in turn, means more transits must be observed to be sure of a detection. Scientific estimates indicated that a mission lasting 7 to 8 years, as opposed to the originally planned 3.5 years, would be needed to find all transiting Earth-sized planets. 
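The signal-to-noise argument above can be reproduced directly. The solar and terrestrial radii used here are standard reference values, not figures from the article:

```python
# Transit depth of an Earth-size planet crossing a Sun-like star: (R_planet / R_star)^2
R_earth_km, R_sun_km = 6371.0, 695700.0   # standard nominal radii (assumed values)
depth_ppm = (R_earth_km / R_sun_km) ** 2 * 1e6
print(f"transit depth ~ {depth_ppm:.0f} ppm")   # ~84 ppm, close to the ~80 ppm quoted

# Significance of a single transit = transit depth / photometric noise
sigma_achieved = 80 / 29   # achieved median CDPP of 29 ppm -> only ~2.7-2.8 sigma
sigma_design = 80 / 20     # design CDPP of 20 ppm -> the intended 4 sigma
assert 2.5 < sigma_achieved < 3.0
assert sigma_design == 4.0
```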
[ 57 ] On April 4, 2012, the Kepler mission was approved for extension through the fiscal year 2016, [ 20 ] [ 58 ] but this also depended on all remaining reaction wheels staying healthy, which turned out not to be the case (see Reaction wheel issues below). Kepler orbits the Sun , [ 59 ] [ 60 ] which avoids Earth occultations , stray light, and gravitational perturbations and torques inherent in an Earth orbit. NASA has characterized Kepler's orbit as "Earth-trailing". [ 61 ] With an orbital period of 372.5 days, Kepler is slowly falling farther behind Earth (about 16 million miles per annum ). As of May 1, 2018, the distance to Kepler from Earth was about 0.917 AU (137 million km). [ 3 ] This means that after about 26 years Kepler will reach the other side of the Sun and will return to the neighborhood of the Earth after 51 years. Until 2013 the photometer pointed to a field in the northern constellations of Cygnus , Lyra and Draco , which is well out of the ecliptic plane, so that sunlight never enters the photometer as the spacecraft orbits. [ 1 ] This is also the direction of the Solar System's motion around the center of the galaxy. Thus, the stars which Kepler observed are roughly the same distance from the Galactic Center as the Solar System , and also close to the galactic plane . This fact is important if position in the galaxy is related to habitability, as suggested by the Rare Earth hypothesis . Orientation is three-axis stabilized by sensing rotations using fine-guidance sensors located on the instrument focal plane (instead of rate-sensing gyroscopes, e.g. as used on Hubble ), [ 1 ] using reaction wheels and hydrazine thrusters [ 62 ] to control the orientation. Kepler was operated out of Boulder, Colorado , by the Laboratory for Atmospheric and Space Physics (LASP) under contract to Ball Aerospace & Technologies .
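The 26-year and 51-year figures follow from the two orbital periods via the usual synodic-period formula:

```python
# Time for Earth to gain a full lap on the slower, Earth-trailing Kepler:
# synodic period = 1 / (1/P_earth - 1/P_kepler)
p_earth, p_kepler = 365.25, 372.5            # orbital periods in days
synodic_days = 1.0 / (1.0 / p_earth - 1.0 / p_kepler)
lap_years = synodic_days / 365.25

# About 51 years for a full lap; after half a lap (~26 years)
# Kepler sits on the far side of the Sun.
print(f"full lap: {lap_years:.1f} years")
assert 50 < lap_years < 52
assert 25 < lap_years / 2 < 27
```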
The spacecraft's solar array was rotated to face the Sun at the solstices and equinoxes , so as to optimize the amount of sunlight falling on the solar array and to keep the heat radiator pointing towards deep space. [ 1 ] Together, LASP and Ball Aerospace controlled the spacecraft from a mission operations center located on the research campus of the University of Colorado . LASP performs essential mission planning and the initial collection and distribution of the science data. The mission's initial life-cycle cost was estimated at US$600 million, including funding for 3.5 years of operation. [ 1 ] In 2012, NASA announced that the Kepler mission would be funded until 2016 at a cost of about $20 million per year. [ 20 ] NASA contacted the spacecraft using the X band communication link twice a week for command and status updates. Scientific data are downloaded once a month using the K a band link at a maximum data transfer rate of approximately 550 kB/s . The high gain antenna is not steerable so data collection is interrupted for a day to reorient the whole spacecraft and the high gain antenna for communications to Earth. [ 1 ] : 16 Kepler was the first mission to rely on K a -band data downlink, and in turn collected statistics on K a -band performance for later missions using this technology. [ 63 ] The Kepler space telescope conducted its own partial analysis on board and only transmitted scientific data deemed necessary to the mission in order to conserve bandwidth. [ 64 ] Science data telemetry collected during mission operations at LASP is sent for processing to the Kepler Data Management Center (DMC) which is located at the Space Telescope Science Institute on the campus of Johns Hopkins University in Baltimore, Maryland . 
The science data telemetry is decoded and processed into uncalibrated FITS -format science data products by the DMC, which are then passed along to the Science Operations Center (SOC) at NASA Ames Research Center for calibration and final processing. The SOC at NASA Ames Research Center (ARC) develops and operates the tools needed to process scientific data for use by the Kepler Science Office (SO). Accordingly, the SOC develops the pipeline data processing software based on scientific algorithms developed jointly by the SO and SOC. During operations, the SOC runs this pipeline on the incoming science data. [ 65 ] The SOC also evaluates the photometric performance on an ongoing basis and provides the performance metrics to the SO and Mission Management Office. Finally, the SOC develops and maintains the project's scientific databases, including catalogs and processed data, and returns calibrated data products and scientific results to the DMC for long-term archiving and distribution to astronomers around the world through the Multimission Archive at STScI (MAST). On July 14, 2012, one of the four reaction wheels used for fine pointing of the spacecraft failed. [ 66 ] While Kepler requires only three reaction wheels to accurately aim the telescope, another failure would leave the spacecraft unable to aim at its original field. [ 67 ] After showing some problems in January 2013, a second reaction wheel failed on May 11, 2013, ending Kepler's primary mission. The spacecraft was put into safe mode, then from June to August 2013 a series of engineering tests was conducted to try to recover either failed wheel. By August 15, 2013, it was decided that the wheels were unrecoverable, [ 24 ] [ 25 ] [ 26 ] and an engineering report was ordered to assess the spacecraft's remaining capabilities. [ 24 ] This effort ultimately led to the "K2" follow-on mission observing different fields near the ecliptic.
The reaction wheel failures were traced back to pitting caused by arcing between the steel ball bearings in the reaction wheel. The arcing was in turn caused by coronal mass ejections (CMEs) from the sun. Kepler's position far from Earth helped in determining the cause, due to the significant delay between the arrival of a CME at Kepler and Earth. [ 68 ] In January 2006, the project's launch was delayed eight months because of budget cuts and consolidation at NASA. [ 17 ] It was delayed again by four months in March 2006 due to fiscal problems. [ 17 ] At this time, the high-gain antenna was changed from a gimballed design to one fixed to the frame of the spacecraft to reduce cost and complexity, at the cost of one observation day per month. The Kepler observatory was launched on March 7, 2009, at 03:49:57 UTC aboard a Delta II rocket from Cape Canaveral Air Force Station , Florida. [ 2 ] [ 5 ] The launch was a success and all three stages were completed by 04:55 UTC. The cover of the telescope was jettisoned on April 7, 2009, and the first light images were taken on the next day. [ 69 ] [ 70 ] On April 20, 2009, it was announced that the Kepler science team had concluded that further refinement of the focus would dramatically increase the scientific return. [ 71 ] On April 23, 2009, it was announced that the focus had been successfully optimized by moving the primary mirror 40 micrometers (1.6 thousandths of an inch) towards the focal plane and tilting the primary mirror 0.0072 degree. [ 72 ] On May 13, 2009, at 00:01 UTC, Kepler successfully completed its commissioning phase and began its search for planets around other stars. [ 73 ] [ 74 ] On June 19, 2009, the spacecraft successfully sent its first science data to Earth. It was discovered that Kepler had entered safe mode on June 15. A second safe mode event occurred on July 2. In both cases the event was triggered by a processor reset . 
The spacecraft resumed normal operation on July 3 and the science data that had been collected since June 19 was downlinked that day. [ 75 ] On October 14, 2009, the cause of these safing events was determined to be a low voltage power supply that provides power to the RAD750 processor. [ 76 ] On January 12, 2010, one portion of the focal plane transmitted anomalous data, suggesting a problem with the focal plane MOD-3 module, covering two out of Kepler's 42 CCDs. As of October 2010 [update] , the module was described as "failed", but the coverage still exceeded the science goals. [ 77 ] Kepler downlinked roughly twelve gigabytes of data [ 78 ] about once per month. [ 79 ] Kepler has a fixed field of view (FOV) against the sky. The diagram to the right shows the celestial coordinates and where the detector fields are located, along with the locations of a few bright stars, with celestial north at the top left corner. The mission website has a calculator [ 80 ] that will determine if a given object falls in the FOV, and if so, where it will appear in the photo detector output data stream. Data on exoplanet candidates are submitted to the Kepler Follow-up Program, or KFOP, to conduct follow-up observations. Kepler's field of view covers 115 square degrees, around 0.25 percent of the sky, or "about two scoops of the Big Dipper". Thus, it would require around 400 Kepler-like telescopes to cover the whole sky. [ 81 ] The Kepler field contains portions of the constellations Cygnus, Lyra, and Draco. The nearest star system in Kepler's field of view is the triple star system Gliese 1245, 15 light years from the Sun. The brown dwarf WISE J2000+3629, 22.8 ± 1 light years from the Sun, is also in the field of view, but is invisible to Kepler because it emits light primarily at infrared wavelengths. The scientific objective of the Kepler space telescope was to explore the structure and diversity of planetary systems.
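The coverage figures can be checked directly: the celestial sphere spans 4π steradians, or about 41,253 square degrees. A quick sketch of the arithmetic, where only the 115-square-degree field size is taken from the text:

```python
import math

FOV_SQ_DEG = 115.0  # Kepler's field of view, from the text

# Whole sky in square degrees: 4*pi steradians, each (180/pi)**2 square degrees
sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 square degrees

fraction = FOV_SQ_DEG / sky_sq_deg        # ~0.28%, i.e. roughly a quarter percent
fields_to_tile = sky_sq_deg / FOV_SQ_DEG  # ~360 non-overlapping fields

print(f"Fraction of sky: {fraction:.2%}")
print(f"Fields needed to tile the sky: {fields_to_tile:.0f}")
```

The non-overlapping estimate of roughly 360 fields is of the same order as the quoted "around 400" telescopes, which presumably allows for some overlap between adjacent fields.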
[ 82 ] This spacecraft observes a large sample of stars to achieve several key goals. Most of the exoplanets previously detected by other projects were giant planets, mostly the size of Jupiter and bigger. Kepler was designed to look for planets 30 to 600 times less massive, closer to the order of Earth's mass (Jupiter is 318 times more massive than Earth). The method used, the transit method, involves observing repeated transits of planets in front of their stars, which cause a slight reduction in the star's apparent magnitude, on the order of 0.01% for an Earth-size planet. The degree of this reduction in brightness can be used to deduce the diameter of the planet, and the interval between transits can be used to deduce the planet's orbital period, from which estimates of its orbital semi-major axis (using Kepler's laws ) and its temperature (using models of stellar radiation) can be calculated. [ citation needed ] The probability of a random planetary orbit being along the line-of-sight to a star is the diameter of the star divided by the diameter of the orbit. [ 84 ] For an Earth-size planet at 1 AU transiting a Sun-like star, the probability is 0.47%, or about 1 in 210. [ 84 ] For a planet like Venus orbiting a Sun-like star the probability is slightly higher, at 0.65%. [ 84 ] If the host star has multiple planets, the probability of additional detections is higher than the probability of initial detection, assuming planets in a given system tend to orbit in similar planes, an assumption consistent with current models of planetary system formation. [ 84 ] For instance, if a Kepler -like mission conducted by aliens observed Earth transiting the Sun, there is a 7% chance that it would also see Venus transiting. [ 84 ] Kepler's 115 deg² field of view gives it a much higher probability of detecting Earth-sized planets than the Hubble Space Telescope, which has a field of view of only 10 sq. arc-minutes.
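The transit-depth and geometric-probability figures above follow from simple ratios: the fractional dimming is (Rp/Rs)², and the chance that a randomly oriented orbit happens to transit is roughly the stellar radius divided by the orbital radius. A sketch reproducing the quoted numbers, with standard solar-system constants supplied as assumptions (they are not from the text):

```python
# Back-of-the-envelope transit-method figures, using standard astronomical values.
R_SUN_KM = 695_700
R_EARTH_KM = 6_371
AU_KM = 149_597_871

# Fractional dimming when an Earth-size planet crosses a Sun-like star:
# depth = (R_planet / R_star)**2
depth = (R_EARTH_KM / R_SUN_KM) ** 2  # ~0.008%, on the order of the quoted 0.01%

# Geometric transit probability for a randomly oriented orbit ~ R_star / a
p_earth = R_SUN_KM / AU_KM                # ~0.47%, i.e. about 1 in 210
p_venus = R_SUN_KM / (0.723 * AU_KM)      # ~0.64%, close to the quoted 0.65%

# Kepler's third law links the measured period to the semi-major axis:
# a / AU = (P / yr)**(2/3) for a star of one solar mass
a_au = 1.0 ** (2 / 3)                     # a one-year period implies a 1 AU orbit
```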
Moreover, Kepler is dedicated to detecting planetary transits, while the Hubble Space Telescope is used to address a wide range of scientific questions, and rarely looks continuously at just one starfield. Of the approximately half-million stars in Kepler's field of view, around 150,000 stars were selected for observation. More than 90,000 are G-type stars on, or near, the main sequence. Thus, Kepler was designed to be sensitive to wavelengths of 400–865 nm, where the brightness of those stars peaks. Most of the stars observed by Kepler have an apparent visual magnitude between 14 and 16, but the brightest observed stars have an apparent visual magnitude of 8 or lower. Most of the planet candidates were initially not expected to be confirmed, being too faint for follow-up observations. [ 85 ] All the selected stars were observed simultaneously, with the spacecraft measuring variations in their brightness every thirty minutes. This provided a better chance of seeing a transit. The mission was designed to maximize the probability of detecting planets orbiting other stars. [ 1 ] [ 86 ] Because Kepler must observe at least three transits to confirm that the dimming of a star was caused by a transiting planet, and because larger planets give a signal that is easier to check, scientists expected the first reported results to be larger Jupiter-size planets in tight orbits. The first of these were reported after only a few months of operation. Smaller planets, and planets farther from their sun, would take longer to find, and discovering planets comparable to Earth was expected to take three years or longer. [ 59 ] Data collected by Kepler is also being used for studying variable stars of various types and performing asteroseismology, [ 87 ] particularly on stars showing solar-like oscillations. [ 88 ] Once Kepler has collected and sent back the data, raw light curves are constructed.
Brightness values are then adjusted to take into account the brightness variations due to the rotation of the spacecraft. The next step is processing (folding) light curves into a more easily observable form and letting software select signals that seem potentially transit-like. At this point, any signal that shows potential transit-like features is called a threshold crossing event. These signals are individually inspected in two inspection rounds, with the first round taking only a few seconds per target. This inspection eliminates erroneously selected non-signals, signals caused by instrumental noise, and obvious eclipsing binaries. [ 89 ] Threshold crossing events that pass these tests are called Kepler Objects of Interest (KOI), receive a KOI designation, and are archived. KOIs are inspected more thoroughly in a process called dispositioning. Those which pass the dispositioning are called Kepler planet candidates. The KOI archive is not static, meaning that a Kepler candidate could end up in the false-positive list upon further inspection. In turn, KOIs that were mistakenly classified as false positives could end up back in the candidates list. [ 90 ] Not all planet candidates go through this process. Circumbinary planets do not show strictly periodic transits, and have to be inspected through other methods. In addition, third-party researchers use different data-processing methods, or even search for planet candidates in the unprocessed light curve data. As a consequence, those planets may lack a KOI designation. Once suitable candidates have been found in Kepler data, it is necessary to rule out false positives with follow-up tests. Usually, Kepler candidates are imaged individually with more-advanced ground-based telescopes in order to resolve any background objects which could contaminate the brightness signature of the transit signal.
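The folding step mentioned above stacks a light curve modulo a trial period, so that every transit lands at the same orbital phase. A minimal illustration on synthetic data (the real Kepler pipeline is far more elaborate; the function and variable names here are purely illustrative):

```python
def fold(times, period, epoch=0.0):
    """Map observation times to orbital phase in [0, 1) for a trial period."""
    return [((t - epoch) % period) / period for t in times]

# Synthetic light curve: 30-minute cadence (like Kepler's long-cadence mode),
# with a 1% dip whenever the (unknown) 3-day period puts the planet in transit.
true_period = 3.0  # days
times = [0.5 * i for i in range(200)]
flux = [0.99 if abs((t % true_period) / true_period - 0.5) < 0.02 else 1.0
        for t in times]

# Folding at the true period stacks all in-transit points near phase 0.5
phases = fold(times, true_period)
in_transit = [f for ph, f in zip(phases, flux) if abs(ph - 0.5) < 0.02]
assert in_transit and all(f < 1.0 for f in in_transit)
```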
[ 92 ] Another method to rule out planet candidates is astrometry, for which Kepler can collect good data even though this was not a design goal. While Kepler cannot detect planetary-mass objects with this method, it can be used to determine if the transit was caused by a stellar-mass object. [ 93 ] There are a few different exoplanet detection methods which help to rule out false positives by giving further proof that a candidate is a real planet. One of the methods, called Doppler spectroscopy, requires follow-up observations from ground-based telescopes. This method works well if the planet is massive or is located around a relatively bright star. While current spectrographs are insufficient for confirming planetary candidates with small masses around relatively dim stars, this method can be used to discover additional massive non-transiting planet candidates around targeted stars. [ citation needed ] In multiplanetary systems, planets can often be confirmed through transit timing variation, by looking at the time between successive transits, which may vary if planets are gravitationally perturbed by each other. This helps to confirm relatively low-mass planets even when the star is relatively distant. Transit timing variations indicate that two or more planets belong to the same planetary system. In some cases, a non-transiting planet has even been discovered in this way. [ 94 ] Circumbinary planets show much larger transit timing variations between transits than planets gravitationally disturbed by other planets. Their transit duration times also vary significantly. Transit timing and duration variations for circumbinary planets are caused by the orbital motion of the host stars, rather than by other planets. [ 95 ] In addition, if the planet is massive enough, it can cause slight variations of the host stars' orbital periods.
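Transit timing variations are measured as deviations from a strictly linear ephemeris: fit t(n) ≈ t0 + n·P to the observed mid-transit times, then inspect the observed-minus-calculated (O − C) residuals. A pure-Python sketch on synthetic data, where the sinusoidal perturbation is invented for illustration:

```python
import math

def linear_ephemeris(ns, ts):
    """Least-squares fit of t = t0 + n * P; returns (t0, P)."""
    n_mean = sum(ns) / len(ns)
    t_mean = sum(ts) / len(ts)
    cov = sum((n - n_mean) * (t - t_mean) for n, t in zip(ns, ts))
    var = sum((n - n_mean) ** 2 for n in ns)
    period = cov / var
    return t_mean - period * n_mean, period

# Synthetic mid-transit times: a 10-day period plus a small sinusoidal TTV
# (~14-minute amplitude) such as a perturbing companion might induce.
ns = list(range(30))
ts = [n * 10.0 + 0.01 * math.sin(2 * math.pi * n / 7) for n in ns]

t0, period = linear_ephemeris(ns, ts)
o_minus_c = [t - (t0 + n * period) for n, t in zip(ns, ts)]
# Structured, non-zero residuals are the signature of a gravitational perturber
assert max(abs(r) for r in o_minus_c) > 0.005
```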
Although circumbinary planets are harder to find because of their non-periodic transits, they are much easier to confirm, as the timing patterns of their transits cannot be mimicked by an eclipsing binary or a background star system. [ 96 ] In addition to transits, planets orbiting their stars undergo reflected-light variations: like the Moon, they go through phases from full to new and back again. Because Kepler cannot resolve the planet from the star, it sees only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required to see a close-in giant planet is about the same as that needed to detect an Earth-sized planet in transit across a solar-type star), Jupiter-sized planets with an orbital period of a few days or less are detectable by sensitive space telescopes such as Kepler. In the long run, this method may help find more planets than the transit method, because the reflected-light variation with orbital phase is largely independent of the planet's orbital inclination and does not require the planet to pass in front of the disk of the star. In addition, the phase function of a giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planetary properties, such as the particle size distribution of the atmospheric particles. [ 97 ] Kepler's photometric precision is often high enough to observe a star's brightness changes caused by Doppler beaming or by the deformation of a star's shape by a companion. These can sometimes be used to rule out hot Jupiter candidates as false positives caused by a star or a brown dwarf when these effects are noticeable enough. [ 98 ] However, there are some cases where such effects are detected even for planetary-mass companions such as TrES-2b.
[ 99 ] If a planet cannot be detected through at least one of the other detection methods, it can be confirmed by determining whether the probability of a Kepler candidate being a real planet is significantly larger than that of all false-positive scenarios combined. One of the first such methods was to see if other telescopes could see the transit as well. The first planet confirmed through this method was Kepler-22b, which was also observed with the Spitzer Space Telescope in addition to analyzing any other false-positive possibilities. [ 100 ] Such confirmation is costly, as small planets can generally be detected only with space telescopes. In 2014, a new confirmation method called "validation by multiplicity" was announced. From the planets previously confirmed through various methods, it was found that planets in most planetary systems orbit in a relatively flat plane, similar to the planets found in the Solar System. This means that if a star has multiple planet candidates, it is very likely a real planetary system. [ 101 ] Transit signals still need to meet several criteria which rule out false-positive scenarios. For instance, the signal has to have a considerable signal-to-noise ratio and at least three observed transits, the system's orbital configuration has to be dynamically stable, and the transit curve has to have a shape that a partly eclipsing binary could not mimic. In addition, the orbital period needs to be 1.6 days or longer to rule out common false positives caused by eclipsing binaries. [ 102 ] The validation-by-multiplicity method is very efficient and allows hundreds of Kepler candidates to be confirmed in a relatively short amount of time. A newer validation method, using a tool called PASTIS, has been developed. It makes it possible to confirm a planet even when only a single candidate transit event for the host star has been detected.
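The screening criteria listed above lend themselves to a simple filter. A sketch, assuming a candidate record with the named fields; the transit-count and minimum-period thresholds come from the text, while the 7.1 signal-to-noise cutoff is only an assumed stand-in for "considerable":

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    snr: float               # signal-to-noise ratio of the transit signal
    n_transits: int          # number of observed transits
    period_days: float       # orbital period in days
    stable_orbit: bool       # system passes a dynamical-stability check
    binary_like_shape: bool  # shape could be mimicked by an eclipsing binary

def passes_multiplicity_screen(c: Candidate, snr_cut: float = 7.1) -> bool:
    """Apply the screening criteria from the text; snr_cut is an assumed value."""
    return (c.snr >= snr_cut
            and c.n_transits >= 3
            and c.period_days >= 1.6
            and c.stable_orbit
            and not c.binary_like_shape)

good = Candidate(snr=12.0, n_transits=5, period_days=8.4,
                 stable_orbit=True, binary_like_shape=False)
short = Candidate(snr=15.0, n_transits=9, period_days=0.9,
                  stable_orbit=True, binary_like_shape=False)
print(passes_multiplicity_screen(good))   # True
print(passes_multiplicity_screen(short))  # False: period under 1.6 days
```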
A drawback of this tool is that it requires a relatively high signal-to-noise ratio from Kepler data, so it can mainly confirm only larger planets or planets around quiet and relatively bright stars. Currently, the analysis of Kepler candidates through this method is underway. [ 103 ] PASTIS was first successful for validating the planet Kepler-420b. [ 104 ] In April 2012, an independent panel of senior NASA scientists recommended that the Kepler mission be continued through 2016. According to the senior review, Kepler observations needed to continue until at least 2015 to achieve all the stated scientific goals. [ 105 ] On November 14, 2012, NASA announced the completion of Kepler's primary mission, and the beginning of its extended mission, which ended in 2018 when it ran out of fuel. [ 106 ] In July 2012, one of Kepler's four reaction wheels (wheel 2) failed. [ 24 ] On May 11, 2013, a second wheel (wheel 4) failed, jeopardizing the continuation of the mission, as three wheels are necessary for its planet hunting. [ 22 ] [ 23 ] Kepler had not collected science data since May because it was not able to point with sufficient accuracy. [ 107 ] On July 18 and 22 reaction wheels 4 and 2 were tested respectively; wheel 4 only rotated counter-clockwise but wheel 2 ran in both directions, albeit with significantly elevated friction levels. [ 108 ] A further test of wheel 4 on July 25 managed to achieve bi-directional rotation. [ 109 ] Both wheels, however, exhibited too much friction to be useful. [ 26 ] On August 2, NASA put out a call for proposals to use the remaining capabilities of Kepler for other scientific missions. Starting on August 8, a full systems evaluation was conducted. It was determined that wheel 2 could not provide sufficient precision for scientific missions and the spacecraft was returned to a "rest" state to conserve fuel. [ 24 ] Wheel 4 was previously ruled out because it exhibited higher friction levels than wheel 2 in previous tests. 
[ 109 ] Sending astronauts to fix Kepler was not an option, because it orbits the Sun and is millions of kilometers from Earth. [ 26 ] On August 15, 2013, NASA announced that Kepler would not continue searching for planets using the transit method after attempts to resolve issues with two of the four reaction wheels failed. [ 24 ] [ 25 ] [ 26 ] An engineering report was ordered to assess the spacecraft's capabilities, its two good reaction wheels and its thrusters. [ 24 ] Concurrently, a scientific study was conducted to determine whether enough knowledge could be obtained from Kepler's limited scope to justify its $18 million per year cost. Possible ideas included searching for asteroids and comets, looking for evidence of supernovas, and finding huge exoplanets through gravitational microlensing. [ 26 ] Another proposal was to modify the software on Kepler to compensate for the disabled reaction wheels. Instead of being held fixed and stable in Kepler's field of view, the stars would drift. The proposed software would track this drift and more or less completely recover the mission goals despite the inability to hold the stars in a fixed view. [ 110 ] Previously collected data continued to be analyzed. [ 111 ] In November 2013, a new mission plan named K2 "Second Light" was presented for consideration. [ 29 ] [ 30 ] [ 31 ] [ 112 ] K2 would involve using Kepler's remaining capability (photometric precision of about 300 parts per million, compared with about 20 parts per million earlier) to collect data for the study of " supernova explosions , star formation and Solar-System bodies such as asteroids and comets , ... " and for finding and studying more exoplanets. [ 29 ] [ 30 ] [ 112 ] In this proposed mission plan, Kepler would search a much larger area in the plane of Earth's orbit around the Sun.
[ 29 ] [ 30 ] [ 112 ] Celestial objects detected by the K2 mission, including exoplanets and stars, are catalogued under the acronym EPIC, which stands for Ecliptic Plane Input Catalog. In early 2014, the spacecraft underwent successful testing for the K2 mission. [ 114 ] From March to May 2014, data from a new field called Field 0 was collected as a testing run. [ 115 ] On May 16, 2014, NASA announced the approval of extending the Kepler mission to the K2 mission. [ 32 ] Kepler's photometric precision for the K2 mission was estimated to be 50 ppm on a magnitude 12 star for a 6.5-hour integration. [ 116 ] In February 2014, photometric precision for the K2 mission using two-wheel, fine-point precision operations was measured as 44 ppm on magnitude 12 stars for a 6.5-hour integration. The analysis of these measurements by NASA suggests the K2 photometric precision approaches that of the Kepler archive of three-wheel, fine-point precision data. [ 117 ] On May 29, 2014, campaign fields 0 to 13 were reported and described in detail. [ 118 ] Field 1 of the K2 mission is set towards the Leo - Virgo region of the sky, while Field 2 is towards the "head" area of Scorpius and includes two globular clusters, Messier 4 and Messier 80, [ 119 ] and part of the Scorpius–Centaurus association, which is only about 11 million years old [ 120 ] and 120–140 parsecs (380–470 ly ) distant, [ 121 ] with probably over 1,000 members. [ 122 ] On December 18, 2014, NASA announced that the K2 mission had detected its first confirmed exoplanet, a super-Earth named HIP 116454 b. Its signature was found in a set of engineering data meant to prepare the spacecraft for the full K2 mission. Radial-velocity follow-up observations were needed, as only a single transit of the planet had been detected. [ 123 ] During a scheduled contact on April 7, 2016, Kepler was found to be operating in emergency mode, the lowest operational and most fuel-intensive mode.
Mission operations declared a spacecraft emergency, which afforded them priority access to NASA's Deep Space Network. [ 124 ] [ 125 ] By the evening of April 8 the spacecraft had been upgraded to safe mode, and on April 10 it was placed into point-rest state, [ 126 ] a stable mode which provides normal communication and the lowest fuel burn. [ 124 ] At that time, the cause of the emergency was unknown, but it was not believed that Kepler's reaction wheels or a planned maneuver to support K2 's Campaign 9 were responsible. Operators downloaded and analyzed engineering data from the spacecraft, prioritizing a return to normal science operations. [ 124 ] [ 127 ] Kepler was returned to science mode on April 22. [ 128 ] The emergency caused the first half of Campaign 9 to be shortened by two weeks. [ 129 ] In June 2016, NASA announced a K2 mission extension of three additional years, beyond the expected exhaustion of on-board fuel in 2018. [ 130 ] In August 2018, NASA roused the spacecraft from sleep mode, applied a modified configuration to deal with thruster problems that had degraded pointing performance, and began collecting scientific data for the 19th observation campaign, finding that the onboard fuel was not yet fully exhausted. [ 131 ] On October 30, 2018, NASA announced that the spacecraft was out of fuel and its mission was officially ended. [ 132 ] The Kepler space telescope was in active operation from 2009 through 2013, with the first main results announced on January 4, 2010. As expected, the initial discoveries were all short-period planets. As the mission continued, additional longer-period candidates were found. As of November 2018 [update] , Kepler had discovered 5,011 exoplanet candidates and 2,662 confirmed exoplanets. [ 133 ] [ 134 ] As of August 2022, 2,056 exoplanet candidates remain to be confirmed and 2,711 are now confirmed exoplanets.
[ 135 ] NASA held a press conference to discuss early science results of the Kepler mission on August 6, 2009. [ 136 ] At this press conference, it was revealed that Kepler had confirmed the existence of the previously known transiting exoplanet HAT-P-7b , and was functioning well enough to discover Earth-size planets. [ 137 ] [ 138 ] Because Kepler's detection of planets depends on seeing very small changes in brightness, stars that vary in brightness by themselves ( variable stars ) are not useful in this search. [ 79 ] From the first few months of data, Kepler scientists determined that about 7,500 stars from the initial target list are such variable stars. These were dropped from the target list, and replaced by new candidates. On November 4, 2009, the Kepler project publicly released the light curves of the dropped stars. [ 139 ] The first new planet candidate observed by Kepler was originally marked as a false positive because of uncertainties in the mass of its parent star. However, it was confirmed ten years later and is now designated Kepler-1658b . [ 140 ] [ 141 ] The first six weeks of data revealed five previously unknown planets, all very close to their stars. [ 142 ] [ 143 ] Among the notable results are one of the least dense planets yet found, [ 144 ] two low-mass white dwarfs [ 145 ] that were initially reported as being members of a new class of stellar objects, [ 146 ] and Kepler-16b , a well-characterized planet orbiting a binary star. On June 15, 2010, the Kepler mission released data on all but 400 of the ~156,000 planetary target stars to the public. 706 targets from this first data set have viable exoplanet candidates, with sizes ranging from as small as Earth to larger than Jupiter. The identity and characteristics of 306 of the 706 targets were given. The released targets included five [ citation needed ] candidate multi-planet systems, including six extra exoplanet candidates. 
[ 147 ] Only 33.5 days of data were available for most of the candidates. [ 147 ] NASA also announced that data for another 400 candidates were being withheld to allow members of the Kepler team to perform follow-up observations. [ 148 ] The data for these candidates were published on February 2, 2011. [ 149 ] (See the Kepler results for 2011 below.) The Kepler results, based on the candidates in the list released in 2010, implied that most candidate planets have radii less than half that of Jupiter. The results also implied that small candidate planets with periods less than thirty days are much more common than large candidate planets with periods less than thirty days, and that the ground-based discoveries were sampling the large-size tail of the size distribution. [ 147 ] This contradicted older theories which had suggested small and Earth-size planets would be relatively infrequent. [ 150 ] [ 151 ] Based on extrapolations from the Kepler data, an estimate of around 100 million habitable planets in the Milky Way may be realistic. [ 152 ] Some media reports of a related TED talk led to the misunderstanding that Kepler had actually found these planets. This was clarified in a letter to the Director of the NASA Ames Research Center from the Kepler Science Council, dated August 2, 2010, which states: "Analysis of the current Kepler data does not support the assertion that Kepler has found any Earth-like planets." [ 7 ] [ 153 ] [ 154 ] In 2010, Kepler identified two systems containing objects which are smaller and hotter than their parent stars: KOI 74 and KOI 81. [ 155 ] These objects are probably low-mass white dwarfs produced by previous episodes of mass transfer in their systems. [ 145 ] On February 2, 2011, the Kepler team announced the results of analysis of the data taken between May 2 and September 16, 2009. [ 149 ] They found 1235 planetary candidates circling 997 host stars.
(The numbers that follow assume the candidates are really planets, though the official papers called them only candidates. Independent analysis indicated that at least 90% of them are real planets and not false positives). [ 158 ] 68 planets were approximately Earth-size, 288 super-Earth -size, 662 Neptune-size, 165 Jupiter-size, and 19 up to twice the size of Jupiter. In contrast to previous work, roughly 74% of the planets are smaller than Neptune, most likely as a result of previous work finding large planets more easily than smaller ones. That February 2, 2011 release of 1235 exoplanet candidates included 54 that may be in the " habitable zone ", including five less than twice the size of Earth. [ 159 ] [ 160 ] There were previously only two planets thought to be in the "habitable zone", so these new findings represent an enormous expansion of the potential number of "Goldilocks planets" (planets of the right temperature to support liquid water). [ 161 ] All of the habitable zone candidates found thus far orbit stars significantly smaller and cooler than the Sun (habitable candidates around Sun-like stars will take several additional years to accumulate the three transits required for detection). [ 162 ] Of all the new planet candidates, 68 are 125% of Earth 's size or smaller, or smaller than all previously discovered exoplanets. [ 160 ] "Earth-size" and "super-Earth-size" is defined as "less than or equal to 2 Earth radii (Re)" [(or, Rp ≤ 2.0 Re) – Table 5]. [ 149 ] Six such planet candidates [namely: KOI 326.01 (Rp=0.85), KOI 701.03 (Rp=1.73), KOI 268.01 (Rp=1.75), KOI 1026.01 (Rp=1.77), KOI 854.01 (Rp=1.91), KOI 70.03 (Rp=1.96) – Table 6] [ 149 ] are in the "habitable zone." [ 159 ] A more recent study found that one of these candidates (KOI 326.01) is in fact much larger and hotter than first reported. 
[ 163 ] The frequency of planet observations was highest for exoplanets two to three times Earth-size, and then declined in inverse proportionality to the area of the planet. The best estimate (as of March 2011), after accounting for observational biases, was: 5.4% of stars host Earth-size candidates, 6.8% host super-Earth-size candidates, 19.3% host Neptune-size candidates, and 2.55% host Jupiter-size or larger candidates. Multi-planet systems are common; 17% of the host stars have multi-candidate systems, and 33.9% of all the planets are in multiple planet systems. [ 164 ] By December 5, 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data. [ 165 ] On December 20, 2011, the Kepler team announced the discovery of the first Earth-size exoplanets , Kepler-20e [ 156 ] and Kepler-20f , [ 157 ] orbiting a Sun-like star , Kepler-20 . [ 166 ] Based on Kepler's findings, astronomer Seth Shostak estimated in 2011 that "within a thousand light-years of Earth", there are "at least 30,000" habitable planets. [ 167 ] Also based on the findings, the Kepler team has estimated that there are "at least 50 billion planets in the Milky Way", of which "at least 500 million" are in the habitable zone . [ 168 ] In March 2011, astronomers at NASA's Jet Propulsion Laboratory (JPL) reported that about "1.4 to 2.7 percent" of all Sun-like stars are expected to have Earth-size planets "within the habitable zones of their stars". 
This means there are "two billion" of these "Earth analogs" in the Milky Way alone. The JPL astronomers also noted that there are "50 billion other galaxies", potentially yielding more than one sextillion "Earth analog" planets if all galaxies have similar numbers of planets to the Milky Way. [ 169 ] In January 2012, an international team of astronomers reported that each star in the Milky Way may host " on average...at least 1.6 planets ", suggesting that over 160 billion star-bound planets may exist in the Milky Way. [ 170 ] [ 171 ] Kepler also recorded distant stellar super-flares , some of which are 10,000 times more powerful than the 1859 Carrington event . [ 172 ] The superflares may be triggered by close-orbiting Jupiter -sized planets. [ 172 ] The Transit Timing Variation (TTV) technique, which was used to discover Kepler-9d , gained popularity for confirming exoplanet discoveries. [ 173 ] A planet in a system with four stars was also confirmed, the first time such a system had been discovered. [ 174 ] As of 2012 , there were a total of 2,321 candidates . [ 165 ] [ 175 ] [ 176 ] Of these, 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars. The Kepler team estimated that 5.4% of all stars host Earth-size planet candidates, and that 17% of all stars have multiple planets. According to a study by Caltech astronomers published in January 2013, the Milky Way contains at least as many planets as it does stars, resulting in 100–400 billion exoplanets . [ 177 ] [ 178 ] The study, based on planets orbiting the star Kepler-32 , suggests that planetary systems may be common around stars in the Milky Way. The discovery of 461 more candidates was announced on January 7, 2013. [ 107 ] The longer Kepler watches, the more planets with long periods it can detect.
[ 107 ] Since the last Kepler catalog was released in February 2012, the number of candidates discovered in the Kepler data has increased by 20 percent and now totals 2,740 potential planets orbiting 2,036 stars. A candidate newly announced on January 7, 2013, was Kepler-69c (formerly KOI-172.02 ), an Earth-size exoplanet orbiting a star similar to the Sun in the habitable zone and possibly habitable. [ 179 ] In April 2013, a white dwarf was discovered bending the light of its companion red dwarf in the KOI-256 star system. [ 180 ] In April 2013, NASA announced the discovery of three new Earth-size exoplanets ( Kepler-62e , Kepler-62f , and Kepler-69c ) in the habitable zones of their respective host stars, Kepler-62 and Kepler-69 . The new exoplanets are considered prime candidates for possessing liquid water and thus a habitable environment. [ 181 ] [ 182 ] [ 183 ] A more recent analysis has shown that Kepler-69c is likely more analogous to Venus, and thus unlikely to be habitable. [ 184 ] On May 15, 2013, NASA announced that the space telescope had been crippled by the failure of a reaction wheel that keeps it pointed in the right direction. A second wheel had previously failed, and the telescope required three operational wheels (out of four total) for the instrument to function properly. Further testing in July and August determined that while Kepler was capable of using its damaged reaction wheels to prevent itself from entering safe mode and of downlinking previously collected science data, it was not capable of collecting further science data as previously configured. [ 185 ] Scientists working on the Kepler project said there was a backlog of data still to be looked at, and that more discoveries would be made in the following couple of years, despite the setback.
[ 186 ] Although no new science data from the Kepler field had been collected since the problem, an additional sixty-three candidates were announced in July 2013 based on the previously collected observations. [ 187 ] In November 2013, the second Kepler science conference was held. The discoveries included a decrease in the median size of planet candidates compared with early 2013, as well as preliminary results on the discovery of a few circumbinary planets and planets in the habitable zone. [ 188 ] On February 13, 2014, over 530 additional planet candidates residing in single-planet systems were announced. Several of them were nearly Earth-sized and located in the habitable zone. This number was further increased by about 400 in June 2014. [ 189 ] On February 26, scientists announced that data from Kepler had confirmed the existence of 715 new exoplanets. A new statistical method of confirmation was used, called "verification by multiplicity", which is based on how many planets around multiple stars were found to be real planets. This allowed much quicker confirmation of numerous candidates which are part of multiplanetary systems. 95% of the discovered exoplanets were smaller than Neptune, and four, including Kepler-296f , were less than 2 1/2 times the size of Earth and were in habitable zones where surface temperatures are suitable for liquid water . [ 101 ] [ 190 ] [ 191 ] [ 192 ] In March, a study found that small planets with orbital periods of less than one day are usually accompanied by at least one additional planet with an orbital period of 1–50 days. This study also noted that ultra-short-period planets are almost always smaller than 2 Earth radii unless they are misaligned hot Jupiters. [ 193 ] On April 17, the Kepler team announced the discovery of Kepler-186f , the first nearly Earth-sized planet located in the habitable zone. This planet orbits a red dwarf. [ 194 ] In May 2014, K2 observation fields 0 to 13 were announced and described in detail.
[ 118 ] K2 observations began in June 2014. In July 2014, the first discoveries from K2 field data were reported in the form of eclipsing binaries . The discoveries were derived from a Kepler engineering data set collected prior to campaign 0 [ 195 ] in preparation for the main K2 mission. [ 196 ] On September 23, 2014, NASA reported that the K2 mission had completed campaign 1, [ 197 ] the first official set of science observations, and that campaign 2 [ 198 ] was underway. [ 199 ] Campaign 3 [ 201 ] lasted from November 14, 2014, to February 6, 2015, and included "16,375 standard long cadence and 55 standard short cadence targets". [ 118 ] By May 10, 2016, the Kepler mission had verified 1,284 new planets. [ 41 ] Based on their size, about 550 could be rocky planets. Nine of these orbit in their stars' habitable zone : Kepler-560b , Kepler-705b , Kepler-1229b , Kepler-1410b , Kepler-1455b , Kepler-1544b , Kepler-1593b , Kepler-1606b , and Kepler-1638b . [ 41 ] The Kepler team originally promised to release data within one year of observations. [ 214 ] However, this plan was changed after launch, with data being scheduled for release up to three years after its collection. [ 215 ] This resulted in considerable criticism, [ 216 ] [ 217 ] [ 218 ] [ 219 ] [ 220 ] leading the Kepler science team to release the third quarter of their data one year and nine months after collection. [ 221 ] The data through September 2010 (quarters 4, 5, and 6) was made public in January 2012. [ 222 ] Periodically, the Kepler team releases a list of candidates ( Kepler Objects of Interest , or KOIs) to the public. Using this information, a team of astronomers collected radial velocity data using the SOPHIE échelle spectrograph to confirm the existence of the candidate KOI-428b in 2010, later named Kepler-40b . [ 223 ] In 2011, the same team confirmed candidate KOI-423b, later named Kepler-39b .
[ 224 ] Since December 2010, Kepler mission data has been used for the Planet Hunters project, which allows volunteers to look for transit events in the light curves of Kepler images to identify planets that computer algorithms might miss. [ 225 ] By June 2011, users had found sixty-nine potential candidates that were previously unrecognized by the Kepler mission team. [ 226 ] The team has plans to publicly credit amateurs who spot such planets. In January 2012, the BBC program Stargazing Live aired a public appeal for volunteers to analyse Planethunters.org data for potential new exoplanets. This led two amateur astronomers, one in Peterborough , England, to discover a new Neptune -sized exoplanet, to be named Threapleton Holmes B. [ 227 ] One hundred thousand other volunteers were also engaged in the search by late January, analyzing over one million Kepler images by early 2012. [ 228 ] One such exoplanet, PH1b (or Kepler-64b from its Kepler designation), was discovered in 2012. A second exoplanet, PH2b (Kepler-86b), was discovered in 2013. In April 2017, ABC Stargazing Live , a variation of BBC Stargazing Live , launched the Zooniverse project "Exoplanet Explorers". While Planethunters.org worked with archived data, Exoplanet Explorers used recently downlinked data from the K2 mission. On the first day of the project, 184 transit candidates were identified that passed simple tests. On the second day, the research team identified a star system, later named K2-138 , with a Sun-like star and four super-Earths in a tight orbit. In the end, volunteers helped to identify 90 exoplanet candidates. [ 229 ] [ 230 ] The citizen scientists who helped discover the new star system will be added as co-authors of the research paper when it is published.
[ 231 ] Exoplanets discovered using Kepler 's data, but confirmed by outside researchers, include Kepler-39b, [ 224 ] Kepler-40b, [ 223 ] Kepler-41b , [ 232 ] Kepler-43b , [ 233 ] Kepler-44b , [ 234 ] Kepler-45b , [ 235 ] as well as the planets orbiting Kepler-223 [ 236 ] and Kepler-42 . [ 237 ] The "KOI" acronym indicates that the star is a Kepler Object of Interest . The Kepler Input Catalog is a publicly searchable database of roughly 13.2 million targets used for the Kepler Spectral Classification Program and the Kepler mission. [ 238 ] [ 239 ] The catalog alone is not used for finding Kepler targets, because only a portion of the listed stars (about one-third of the catalog) can be observed by the spacecraft. [ 238 ] Kepler was assigned an observatory code ( C55 ) in order to report its astrometric observations of small Solar System bodies to the Minor Planet Center . In 2013, the alternative NEOKepler mission was proposed: a search for near-Earth objects , in particular potentially hazardous asteroids (PHAs). Kepler's unique orbit and larger field of view than existing survey telescopes would allow it to look for objects inside Earth's orbit. It was predicted that a 12-month survey could make a significant contribution to the hunt for PHAs as well as potentially locate targets for NASA's Asteroid Redirect Mission . [ 240 ] Kepler's first discovery in the Solar System, however, was (506121) 2016 BP 81 , a 200-kilometer cold classical Kuiper belt object located beyond the orbit of Neptune . [ 241 ] On October 30, 2018, NASA announced that the Kepler space telescope, having run out of fuel after nine years of service and the discovery of over 2,600 exoplanets , had been officially retired and would maintain its current, safe orbit, away from Earth. [ 9 ] [ 10 ] The spacecraft was deactivated with a "goodnight" command sent from the mission's control center at the Laboratory for Atmospheric and Space Physics on November 15, 2018.
[ 242 ] Kepler's retirement coincides with the 388th anniversary of Johannes Kepler 's death in 1630. [ 243 ]
https://en.wikipedia.org/wiki/Kepler_space_telescope
In plane geometry , the Kepler–Bouwkamp constant (or polygon inscribing constant ) is obtained as a limit of the following sequence. Take a circle of radius 1. Inscribe a regular triangle in this circle. Inscribe a circle in this triangle. Inscribe a square in it. Inscribe a circle, regular pentagon , circle, regular hexagon , and so forth. The radius of the limiting circle is called the Kepler–Bouwkamp constant. [ 1 ] It is named after Johannes Kepler and Christoffel Bouwkamp [ de ] , and is the inverse of the polygon circumscribing constant . Since inscribing a regular k -gon in a circle, and then a circle in that polygon, multiplies the radius by cos(π/ k ), the constant equals the infinite product of cos(π/ k ) over k = 3, 4, 5, …, and its decimal expansion begins 0.1149420448… (sequence A085365 in the OEIS ). Rapidly converging expressions for the constant can be given in terms of the Riemann zeta function ζ( s ) = ∑_{ n =1}^∞ 1/ n ^ s . If the product is instead taken over only the odd primes, a related constant is obtained (sequence A131671 in the OEIS ).
https://en.wikipedia.org/wiki/Kepler–Bouwkamp_constant
In radiation physics , kerma is an acronym for "kinetic energy released per unit mass" (alternately, "kinetic energy released in matter", [ 1 ] "kinetic energy released in material", [ 2 ] or "kinetic energy released in materials" [ 3 ] ), defined as the sum of the initial kinetic energies of all the charged particles liberated by uncharged ionizing radiation (i.e., indirectly ionizing radiation such as photons and neutrons ) in a sample of matter , divided by the mass of the sample. It is defined by the quotient K = d E tr / d m {\displaystyle K=\operatorname {d} \!E_{\text{tr}}/\operatorname {d} \!m} . [ 4 ] The SI unit of kerma is the gray (Gy) (or joule per kilogram ), the same as the unit of absorbed dose . However, kerma can be different from absorbed dose, depending on the energies involved. This is because ionization energy is not accounted for. While kerma approximately equals absorbed dose at low energies, kerma is much higher than absorbed dose at higher energies, because some energy escapes from the absorbing volume in the form of bremsstrahlung (X-rays) or fast-moving electrons, and is not counted as absorbed dose. Photon energy is transferred to matter in a two-step process. First, energy is transferred to charged particles in the medium through various photon interactions (e.g. photoelectric effect , Compton scattering , pair production , and photodisintegration ). Next, these secondary charged particles transfer their energy to the medium through atomic excitation and ionizations. [ 5 ] For low-energy photons, kerma is numerically approximately the same as absorbed dose. For higher-energy photons, kerma is larger than absorbed dose because some highly energetic secondary electrons and X-rays escape the region of interest before depositing their energy. The escaping energy is counted in kerma, but not in absorbed dose. For low-energy X-rays, this is usually a negligible distinction. This can be understood when one looks at the components of kerma. 
There are two independent contributions to the total kerma, collision kerma k col and radiative kerma k rad – thus, K = k col + k rad . Collision kerma results in the production of electrons that dissipate their energy as ionization and excitation due to the interaction between the charged particle and the atomic electrons. Radiative kerma results in the production of radiative photons due to the interaction between the charged particle and atomic nuclei (mostly via bremsstrahlung radiation), but can also include photons produced by annihilation of positrons in flight. [ 4 ] Frequently, the quantity k col is of interest, and is usually expressed as k col = K (1 − g ), where g is the average fraction of energy transferred to electrons that is lost through bremsstrahlung. Air kerma is of importance in the practical calibration of instruments for photon measurement, where it is used for the traceable calibration of gamma instrument metrology facilities using a "free air" ion chamber to measure air kerma. IAEA safety report 16 states "The quantity air kerma should be used for calibrating the reference photon radiation fields and reference instruments. Radiation protection monitoring instruments should be calibrated in terms of dose equivalent quantities. Area dosimeters or dose ratemeters should be calibrated in terms of the ambient dose equivalent, H*(10), or the directional dose equivalent, H′(0.07), without any phantom present, i.e. free in air." [ 6 ] Conversion coefficients from air kerma in Gy to equivalent dose in Sv are published in the International Commission on Radiological Protection (ICRP) report 74 (1996). For instance, air kerma rate is converted to tissue equivalent dose using a factor of Sv/Gy (air) = 1.21 for Cs-137 at 0.662 MeV. [ 7 ]
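The conversion described above is a single multiplication by the published coefficient; a minimal sketch, where the function name and the example air-kerma value are illustrative assumptions (only the 1.21 Sv/Gy figure for Cs-137 at 0.662 MeV comes from the text):

```python
# Converting air kerma (Gy) to equivalent dose (Sv) via a published
# air-kerma-to-dose conversion coefficient, as tabulated in ICRP report 74.
# Coefficients for other nuclides/energies must be looked up; the example
# input value of 2 mGy is purely illustrative.
CS137_SV_PER_GY = 1.21  # Sv/Gy for Cs-137 photons at 0.662 MeV (from the text)

def equivalent_dose_sv(air_kerma_gy: float, sv_per_gy: float = CS137_SV_PER_GY) -> float:
    """Equivalent dose in Sv from air kerma in Gy."""
    return air_kerma_gy * sv_per_gy

print(equivalent_dose_sv(2.0e-3))  # 2 mGy of air kerma -> about 2.42 mSv
```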
https://en.wikipedia.org/wiki/Kerma_(physics)
Kermack–McKendrick theory is a hypothesis that predicts the number and distribution of cases of an immunizing infectious disease over time as it is transmitted through a population, based on characteristics of infectivity and recovery, under a strong-mixing assumption. Building on the research of Ronald Ross and Hilda Hudson , A. G. McKendrick and W. O. Kermack published their theory in a set of three articles from 1927, 1932, and 1933. Kermack–McKendrick theory is one of the sources of the SIR model and other related compartmental models . This theory was the first to explicitly account for the dependence of infection characteristics and transmissibility on the age of infection. Because of their seminal importance to the field of theoretical epidemiology, these articles were republished in the Bulletin of Mathematical Biology in 1991. [ 1 ] [ 2 ] [ 3 ] In its initial form, Kermack–McKendrick theory is a partial differential-equation model that structures the infected population in terms of age-of-infection, while using simple compartments for people who are susceptible (S), infected (I), and recovered/removed (R). Specified initial conditions would change over time according to ∂S/∂t = −λ(t)S(t), ∂i(t,a)/∂t + ∂i(t,a)/∂a = δ(a)λ(t)S(t) − γ(a)i(t,a), and ∂R/∂t = ∫₀^∞ γ(a)i(t,a) da, where δ(a) is a Dirac delta-function and the infection pressure is λ(t) = ∫₀^∞ β(a)i(t,a) da. This formulation is equivalent to defining the incidence of infection i(t,0) = λS. Only in the special case when the removal rate γ(a) and the transmission rate β(a) are constant for all ages can the epidemic dynamics be expressed in terms of the prevalence I(t) = ∫₀^∞ i(t,a) da, leading to the standard compartmental SIR model . This model only accounts for infection and removal events, which are sufficient to describe a simple epidemic, including the threshold condition necessary for an epidemic to start, but cannot explain endemic disease transmission or recurring epidemics.
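In the constant-rate special case, the model reduces to the familiar SIR equations dS/dt = −βSI, dI/dt = βSI − γI, dR/dt = γI. A minimal Euler-integration sketch, with illustrative parameter values not taken from the original articles:

```python
# Constant-rate special case (standard SIR model), integrated with a simple
# Euler scheme. beta, gamma, and the initial conditions are illustrative
# assumptions for a population normalized to 1.
def simulate_sir(beta, gamma, s0, i0, r0=0.0, dt=0.01, t_max=200.0):
    s, i, r = s0, i0, r0
    for _ in range(int(t_max / dt)):
        new_inf = beta * s * i * dt   # dS/dt = -beta*S*I
        new_rem = gamma * i * dt      # dR/dt = gamma*I
        s -= new_inf
        i += new_inf - new_rem
        r += new_rem
    return s, i, r

# Basic reproduction number R0 = beta/gamma = 3 > 1, so an epidemic
# occurs (threshold condition) and then burns out.
s, i, r = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01)
```

Note that s + i + r is conserved by every step, matching the model's accounting of infection and removal events only.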
In their subsequent articles, Kermack and McKendrick extended their theory to allow for birth, migration, and death, as well as imperfect immunity. In modern notation, their model adds demographic and immunity parameters to the equations above: b 0 is the immigration rate of susceptibles, b j is the per-capita birth rate for state j , m j is the per-capita mortality rate of individuals in state j , and σ is the relative risk of infection to recovered individuals who are partially immune; the infection pressure λ(t) is defined as before. Kermack and McKendrick were able to show that the model admits a stationary solution where disease is endemic, as long as the supply of susceptible individuals is sufficiently large. This model is difficult to analyze in its full generality, and a number of open questions remain regarding its dynamics.
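The endemic stationary solution can be illustrated with a heavily simplified constant-rate variant that adds equal per-capita birth and death rates μ, with all births entering the susceptible class (an assumption for the sketch, not Kermack and McKendrick's full age-structured model). At equilibrium the susceptible fraction is s* = (γ + μ)/β:

```python
# Simplified SIR with demography: births at rate mu into S, per-capita
# mortality mu in every state. All parameter values are illustrative.
# Setting dI/dt = 0 gives the endemic susceptible fraction s* = (gamma+mu)/beta.
def simulate_endemic(beta=0.5, gamma=0.1, mu=0.02, dt=0.01, t_max=2000.0):
    s, i, r = 0.99, 0.01, 0.0
    for _ in range(int(t_max / dt)):
        ds = mu - beta * s * i - mu * s
        di = beta * s * i - (gamma + mu) * i
        dr = gamma * i - mu * r
        s, i, r = s + ds * dt, i + di * dt, r + dr * dt
    return s, i, r

s_star = (0.1 + 0.02) / 0.5   # predicted endemic susceptible fraction: 0.24
s, i, r = simulate_endemic()  # damped oscillations settle near the equilibrium
```

The simulation settles at a positive infected fraction, in contrast to the simple epidemic model, which always burns out.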
https://en.wikipedia.org/wiki/Kermack–McKendrick_theory
In algebra , the kernel of a homomorphism (function that preserves the structure ) is generally the inverse image of 0 (except for groups whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the kernel of a linear map . The kernel of a matrix , also called the null space , is the kernel of the linear map defined by the matrix. The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is injective , that is, if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective. For some types of structure, such as abelian groups and vector spaces , the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as normal subgroups for groups and two-sided ideals for rings . Kernels allow defining quotient objects (also called quotient algebras in universal algebra , and cokernels in category theory ). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem ) states that the image of a homomorphism is isomorphic to the quotient by the kernel. The concept of a kernel has been extended to structures for which the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation . This article is a survey of some important types of kernels in algebraic structures. The mathematician Pontryagin is credited with using the word "kernel" in 1931 to describe the elements of a group that were sent to the identity element in another group. [ 1 ] Let G and H be groups and let f be a group homomorphism from G to H .
If e H is the identity element of H , then the kernel of f is the preimage of the singleton set { e H }; that is, the subset of G consisting of all those elements of G that are mapped by f to the element e H . [ 2 ] [ 3 ] [ 4 ] The kernel is usually denoted ker f (or a variation). In symbols: ker f = { g ∈ G : f ( g ) = e H } . Since a group homomorphism preserves identity elements, the identity element e G of G must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set { e G }. If f is not injective, then the kernel contains more than the identity: there exist a , b ∈ G such that a ≠ b and f ( a ) = f ( b ) . Thus f ( a ) f ( b ) −1 = e H . Since f is a group homomorphism, inverses and group operations are preserved, giving f ( ab −1 ) = e H ; in other words, ab −1 ∈ ker f , so ker f is not the singleton. Conversely, distinct elements of the kernel violate injectivity directly: if there exists an element g ≠ e G in ker f , then f ( g ) = f ( e G ) = e H , so f is not injective. ker f is a subgroup of G and further it is a normal subgroup . Thus, there is a corresponding quotient group G / (ker f ) . This is isomorphic to f ( G ), the image of G under f (which is a subgroup of H also), by the first isomorphism theorem for groups. Let R and S be rings (assumed unital ) and let f be a ring homomorphism from R to S . If 0 S is the zero element of S , then the kernel of f is its kernel as a homomorphism of additive groups. [ 3 ] It is the preimage of the zero ideal {0 S }; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0 S . The kernel is usually denoted ker f (or a variation). In symbols: ker f = { r ∈ R : f ( r ) = 0 S } . Since a ring homomorphism preserves zero elements, the zero element 0 R of R must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {0 R }. This is always the case if R is a field and S is not the zero ring . Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R . The kernel is a sub rng , and, more precisely, a two-sided ideal of R . Thus, it makes sense to speak of the quotient ring R / (ker f ) . The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S ). (Note that rings need not be unital for the kernel definition.) Let V and W be vector spaces over a field (or more generally, modules over a ring ) and let T be a linear map from V to W . If 0 W is the zero vector of W , then the kernel of T (or null space [ 5 ] [ 2 ] ) is the preimage of the zero subspace { 0 W }; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0 W . The kernel is usually denoted as ker T , or some variation thereof: ker T = { v ∈ V : T ( v ) = 0 W } . Since a linear map preserves zero vectors, the zero vector 0 V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace. The kernel ker T is always a linear subspace of V . [ 2 ] Thus, it makes sense to speak of the quotient space V / (ker T ) . The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W ). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image. One can define kernels for homomorphisms between modules over a ring in an analogous manner. This includes kernels for homomorphisms between abelian groups as a special case. [ 2 ] This example captures the essence of kernels in general abelian categories ; see Kernel (category theory) .
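Numerically, the kernel of a linear map between finite-dimensional spaces (the null space of its matrix) can be computed from a singular value decomposition: the right singular vectors associated with numerically zero singular values span the kernel. A sketch assuming numpy is available, with an illustrative rank-1 matrix:

```python
import numpy as np

# Null space via SVD. The example matrix has rank 1, so by the
# rank-nullity theorem its nullity is 3 - 1 = 2.
M = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

U, s, Vt = np.linalg.svd(M)
tol = 1e-10
rank = int((s > tol).sum())
null_basis = Vt[rank:]          # rows spanning ker M

print(rank, null_basis.shape)   # rank 1, null-space basis of shape (2, 3)
```

Each row of `null_basis` satisfies M v = 0 up to floating-point error, and the number of rows equals the nullity.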
Let R be a ring , and let M and N be R - modules . If φ : M → N is a module homomorphism, then the kernel is defined to be ker φ = { m ∈ M : φ ( m ) = 0 } . [ 2 ] Every kernel is a submodule of the domain module. [ 2 ] Let M and N be monoids and let f be a monoid homomorphism from M to N . Then the kernel of f is the subset of the direct product M × M consisting of all those ordered pairs of elements of M whose components are both mapped by f to the same element in N [ citation needed ] . The kernel is usually denoted ker f (or a variation thereof). In symbols: ker f = { ( m , m′ ) ∈ M × M : f ( m ) = f ( m′ ) } . Since f is a function , the elements of the form ( m , m ) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {( m , m ) : m in M } . It turns out that ker f is an equivalence relation on M , and in fact a congruence relation . Thus, it makes sense to speak of the quotient monoid M / (ker f ) . The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of f (which is a submonoid of N ), with the congruence relation playing the role of the kernel. This is very different in flavor from the above examples. In particular, the preimage of the identity element of N is not enough to determine the kernel of f . Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition , H be the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H . Then ker f = {0, 2, 4} , since all these elements are mapped to 0 H . The quotient group G / (ker f ) has two elements: {0, 2, 4} and {1, 3, 5} . It is indeed isomorphic to H . Given an isomorphism φ : G → H , one has ker φ = 1 .
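The cyclic-group example can be checked computationally; the sketch below recovers the kernel as the preimage of 0 and the two cosets as the fibers of f:

```python
# G = Z/6Z, H = Z/2Z, f(g) = g mod 2. The kernel is the preimage of the
# identity 0 of H, and the fibers of f are the cosets forming G/ker f.
G = range(6)

def f(g):
    return g % 2

kernel = {g for g in G if f(g) == 0}
fibers = {}
for g in G:
    fibers.setdefault(f(g), set()).add(g)

print(kernel, fibers[1])  # {0, 2, 4} and {1, 3, 5}
```

The two fibers behave exactly like the two elements of H under addition of representatives, which is the content of the first isomorphism theorem in this case.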
[ 2 ] On the other hand, if this mapping is merely a homomorphism where H is the trivial group, then φ ( g ) = 1 for all g ∈ G , so ker φ = G . [ 2 ] Let φ : R 2 → R be the map defined as φ (( x , y )) = x . Then this is a homomorphism with the kernel consisting precisely of the points of the form ( 0 , y ) . This mapping is considered the "projection onto the x -axis." [ 2 ] A similar phenomenon occurs with the mapping f : ( R × ) 2 → R × defined as f ( a , b ) = b , where the kernel consists of the points of the form ( a , 1 ) . [ 4 ] For a non-abelian example, let Q 8 denote the quaternion group , and V 4 the Klein 4-group . Define a mapping φ : Q 8 → V 4 that sends each pair of elements ± g to a common element of V 4 , with ±1 going to the identity. Then this mapping is a homomorphism with ker φ = { ± 1 } . [ 2 ] Consider the mapping φ : Z → Z / 2 Z , where the latter ring is the integers modulo 2 and the map sends each number to its parity : 0 for even numbers and 1 for odd numbers. This mapping turns out to be a homomorphism, and since the additive identity of the latter ring is 0, the kernel is precisely the even numbers. [ 2 ] Let φ : Q [ x ] → Q be defined as φ ( p ( x )) = p ( 0 ) . This mapping , which happens to be a homomorphism, sends each polynomial to its constant term. It maps a polynomial to zero if and only if its constant term is 0.
[ 2 ] If we instead work with polynomials with real coefficients, we again obtain a homomorphism, with its kernel being the polynomials with constant term 0. [ 4 ] If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M , and the kernel can be computed by solving the homogeneous system of linear equations M v = 0 . In this case, the kernel of T may be identified with the kernel of the matrix M , also called the "null space" of M . The dimension of the null space, called the nullity of M , is given by the number of columns of M minus the rank of M , as a consequence of the rank–nullity theorem . Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators . For instance, in order to find all twice- differentiable functions f from the real line to itself such that x f ″( x ) + 3 f ′( x ) = f ( x ), let V be the space of all twice differentiable functions, let W be the space of all functions, and define a linear operator T from V to W by ( T f )( x ) = x f ″( x ) + 3 f ′( x ) − f ( x ) for f in V and x an arbitrary real number . Then all solutions to the differential equation are in ker T . The kernel of a homomorphism can be used to define a quotient algebra . For instance, if φ : G → H denotes a group homomorphism, and we set K = ker φ , we can consider G / K to be the set of fibers of the homomorphism φ , where a fiber is merely the set of points of the domain mapping to a single chosen point in the range. [ 2 ] If X a ∈ G / K denotes the fiber of the element a ∈ H , then we can give a group operation on the set of fibers by X a X b = X ab , and we call G / K the quotient group (or factor group), to be read as " G modulo K " or " G mod K ".
[ 2 ] The terminology arises from the fact that the kernel represents the fiber of the identity element of the range, H , and that the remaining elements are simply "translates" of the kernel, so the quotient group is obtained by "dividing out" by the kernel. [ 2 ] The fibers can also be described by looking at the domain relative to the kernel; given X ∈ G / K and any element u ∈ X , then X = u K = K u , where u K = { uk : k ∈ K } and K u = { ku : k ∈ K } . [ 2 ] These sets are called the left and right cosets respectively, and can be defined in general for any arbitrary subgroup aside from the kernel. [ 2 ] [ 4 ] [ 3 ] The group operation can then be defined as u K ∘ v K = ( uv ) K , which is well-defined regardless of the choice of representatives of the fibers. [ 2 ] [ 3 ] According to the first isomorphism theorem , we have an isomorphism μ : G / K → φ ( G ) , where the latter group is the image of the homomorphism φ ; the isomorphism is defined as μ ( u K ) = φ ( u ) , and this map is also well-defined. [ 2 ] [ 3 ] For rings , modules , and vector spaces , one can define the respective quotient algebras via the underlying additive group structure, with cosets represented as x + K . Ring multiplication can be defined on the quotient algebra the same way as in the group (and be well-defined). For a ring R (possibly a field when describing vector spaces) and a module homomorphism φ : M → N with kernel K = ker φ , one can define scalar multiplication on M / K by r ( x + K ) = rx + K for r ∈ R and x ∈ M , which will also be well-defined.
[ 2 ] The structure of kernels allows for the building of quotient algebras from structures satisfying the properties of kernels. For any subgroup N {\displaystyle N} of a group G {\displaystyle G} , one can construct a quotient G / N {\displaystyle G/N} from the set of all cosets of N {\displaystyle N} in G {\displaystyle G} . [ 2 ] The natural way to turn this into a group, similar to the treatment for the quotient by a kernel, is to define an operation on (left) cosets by u N ⋅ v N = ( u v ) N {\displaystyle uN\cdot vN=(uv)N} ; however, this operation is well-defined if and only if the subgroup N {\displaystyle N} is closed under conjugation by elements of G {\displaystyle G} , that is, if g ∈ G {\displaystyle g\in G} and n ∈ N {\displaystyle n\in N} , then g n g − 1 ∈ N {\displaystyle gng^{-1}\in N} . Furthermore, the operation being well-defined is sufficient for the quotient to be a group. [ 2 ] Subgroups satisfying this property are called normal subgroups . [ 2 ] Every kernel of a group homomorphism is a normal subgroup, and for a given normal subgroup N {\displaystyle N} of a group G {\displaystyle G} , the natural projection π ( g ) = g N {\displaystyle \pi (g)=gN} is a homomorphism with ker ⁡ π = N {\displaystyle \ker \pi =N} , so the normal subgroups are precisely the subgroups which are kernels. [ 2 ] The closure under conjugation, however, gives an "internal" [ 2 ] criterion for when a subgroup is a kernel of some homomorphism. For a ring R {\displaystyle R} , treating its underlying additive group, one can take a quotient group via an arbitrary subgroup I {\displaystyle I} of the ring, which will be normal due to the ring's additive group being abelian . To define multiplication on R / I {\displaystyle R/I} , the multiplication of cosets, defined as ( r + I ) ( s + I ) = r s + I {\displaystyle (r+I)(s+I)=rs+I} , needs to be well-defined.
Taking representatives r + α {\displaystyle r+\alpha } and s + β {\displaystyle s+\beta } of r + I {\displaystyle r+I} and s + I {\displaystyle s+I} respectively, for r , s ∈ R {\displaystyle r,s\in R} and α , β ∈ I {\displaystyle \alpha ,\beta \in I} , yields ( r + α ) ( s + β ) = r s + r β + α s + α β {\displaystyle (r+\alpha )(s+\beta )=rs+r\beta +\alpha s+\alpha \beta } . [ 2 ] Setting r = s = 0 {\displaystyle r=s=0} implies that I {\displaystyle I} is closed under multiplication, while setting α = s = 0 {\displaystyle \alpha =s=0} shows that r β ∈ I {\displaystyle r\beta \in I} , that is, I {\displaystyle I} is closed under arbitrary multiplication by elements on the left. Similarly, taking r = β = 0 {\displaystyle r=\beta =0} implies that I {\displaystyle I} is also closed under multiplication by arbitrary elements on the right. [ 2 ] Any subgroup of R {\displaystyle R} that is closed under multiplication by any element of the ring is called an ideal . [ 2 ] Analogously to normal subgroups, the ideals of a ring are precisely the kernels of ring homomorphisms. [ 2 ] Kernels are used to define exact sequences of homomorphisms for groups and modules . If A, B, and C are modules, then a pair of homomorphisms ψ : A → B , φ : B → C {\displaystyle \psi :A\to B,\varphi :B\to C} is said to be exact if image ψ = ker ⁡ φ {\displaystyle {\text{image }}\psi =\ker \varphi } . An exact sequence is then a sequence of modules and homomorphisms ⋯ → X n − 1 → X n → X n + 1 → ⋯ {\displaystyle \cdots \to X_{n-1}\to X_{n}\to X_{n+1}\to \cdots } where each adjacent pair of homomorphisms is exact. [ 2 ] All the above cases may be unified and generalized in universal algebra . Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B . Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B . [ 6 ] [ 7 ] The kernel is usually denoted ker f (or a variation).
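The universal-algebra kernel just defined can be checked directly on a toy example; the carrier set and the map f below are illustrative assumptions, not from the article:

```python
# Sketch: the universal-algebra kernel of f as a set of ordered pairs,
# ker f = {(a, a') in A x A : f(a) == f(a')}, verified to be an equivalence
# relation (reflexive, symmetric, transitive) on a small carrier set A.

A = range(8)
f = lambda a: a % 3  # a toy map standing in for a homomorphism

ker_f = {(a, b) for a in A for b in A if f(a) == f(b)}

assert all((a, a) in ker_f for a in A)                # reflexive
assert all((b, a) in ker_f for (a, b) in ker_f)       # symmetric
assert all((a, c) in ker_f                            # transitive
           for (a, b) in ker_f for (b2, c) in ker_f if b == b2)
```

Since the three properties hold, ker f partitions A, which is what makes the quotient A / (ker f) mentioned below meaningful.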
In symbols: ker ⁡ f = { ( a , a ′ ) ∈ A × A : f ( a ) = f ( a ′ ) } {\displaystyle \ker f=\{(a,a')\in A\times A:f(a)=f(a')\}} . The homomorphism f is injective if and only if its kernel is exactly the diagonal set {( a , a ) : a ∈ A } , which is always contained in the kernel. [ 6 ] [ 7 ] It is easy to see that ker f is an equivalence relation on A , and in fact a congruence relation . Thus, it makes sense to speak of the quotient algebra A / (ker f ) . The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B ). [ 6 ] [ 7 ] Note that the definition of kernel here (as in the monoid example) does not depend on the algebraic structure; it is a purely set -theoretic concept. For more on this general concept, outside of abstract algebra, see kernel of a function . Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations. For example, one may consider topological groups or topological vector spaces , which are equipped with a topology . In this case, we would expect the homomorphism f to preserve this additional structure [ citation needed ] ; in the topological examples, we would want f to be a continuous map . The process may run into a snag with the quotient algebras, which may not be well-behaved. In the topological examples, we can avoid problems by requiring that topological algebraic structures be Hausdorff (as is usually done); then the kernel (however it is constructed) will be a closed set and the quotient space will work fine (and also be Hausdorff) [ citation needed ] . The notion of kernel in category theory is a generalization of the kernels of abelian algebras; see Kernel (category theory) . The categorical generalization of the kernel as a congruence relation is the kernel pair . (There is also the notion of difference kernel , or binary equalizer .) Lang, Serge (2002). Algebra . Graduate Texts in Mathematics . Springer . ISBN 0-387-95385-X .
https://en.wikipedia.org/wiki/Kernel_(algebra)
In mathematics , the kernel of a linear map , also known as the null space or nullspace , is the part of the domain which is mapped to the zero vector of the co-domain ; the kernel is always a linear subspace of the domain. [ 1 ] That is, given a linear map L : V → W between two vector spaces V and W , the kernel of L is the vector space of all elements v of V such that L ( v ) = 0 , where 0 denotes the zero vector in W , [ 2 ] or more symbolically: ker ⁡ ( L ) = { v ∈ V ∣ L ( v ) = 0 } = L − 1 ( 0 ) . {\displaystyle \ker(L)=\left\{\mathbf {v} \in V\mid L(\mathbf {v} )=\mathbf {0} \right\}=L^{-1}(\mathbf {0} ).} The kernel of L is a linear subspace of the domain V . [ 3 ] [ 2 ] In the linear map L : V → W , {\displaystyle L:V\to W,} two elements of V have the same image in W if and only if their difference lies in the kernel of L , that is, L ( v 1 ) = L ( v 2 ) if and only if L ( v 1 − v 2 ) = 0 . {\displaystyle L\left(\mathbf {v} _{1}\right)=L\left(\mathbf {v} _{2}\right)\quad {\text{ if and only if }}\quad L\left(\mathbf {v} _{1}-\mathbf {v} _{2}\right)=\mathbf {0} .} From this, it follows by the first isomorphism theorem that the image of L is isomorphic to the quotient of V by the kernel: im ⁡ ( L ) ≅ V / ker ⁡ ( L ) . {\displaystyle \operatorname {im} (L)\cong V/\ker(L).} In the case where V is finite-dimensional , this implies the rank–nullity theorem : dim ⁡ ( ker ⁡ L ) + dim ⁡ ( im ⁡ L ) = dim ⁡ ( V ) . {\displaystyle \dim(\ker L)+\dim(\operatorname {im} L)=\dim(V).} where the term rank refers to the dimension of the image of L , dim ⁡ ( im ⁡ L ) , {\displaystyle \dim(\operatorname {im} L),} while nullity refers to the dimension of the kernel of L , dim ⁡ ( ker ⁡ L ) . 
{\displaystyle \dim(\ker L).} [ 4 ] That is, Rank ⁡ ( L ) = dim ⁡ ( im ⁡ L ) and Nullity ⁡ ( L ) = dim ⁡ ( ker ⁡ L ) , {\displaystyle \operatorname {Rank} (L)=\dim(\operatorname {im} L)\qquad {\text{ and }}\qquad \operatorname {Nullity} (L)=\dim(\ker L),} so that the rank–nullity theorem can be restated as Rank ⁡ ( L ) + Nullity ⁡ ( L ) = dim ⁡ ( domain ⁡ L ) . {\displaystyle \operatorname {Rank} (L)+\operatorname {Nullity} (L)=\dim \left(\operatorname {domain} L\right).} When V is an inner product space , the quotient V / ker ⁡ ( L ) {\displaystyle V/\ker(L)} can be identified with the orthogonal complement in V of ker ⁡ ( L ) {\displaystyle \ker(L)} . This is the generalization to linear operators of the row space , or coimage, of a matrix. The notion of kernel also makes sense for homomorphisms of modules , which are generalizations of vector spaces where the scalars are elements of a ring , rather than a field . The domain of the mapping is a module, with the kernel constituting a submodule . Here, the concepts of rank and nullity do not necessarily apply. If V and W are topological vector spaces such that W is finite-dimensional, then a linear operator L : V → W is continuous if and only if the kernel of L is a closed subspace of V . Consider a linear map represented as an m × n matrix A with coefficients in a field K (typically R {\displaystyle \mathbb {R} } or C {\displaystyle \mathbb {C} } ), operating on column vectors x with n components over K . The kernel of this linear map is the set of solutions to the equation A x = 0 , where 0 is understood as the zero vector . The dimension of the kernel of A is called the nullity of A . In set-builder notation , N ⁡ ( A ) = Null ⁡ ( A ) = ker ⁡ ( A ) = { x ∈ K n ∣ A x = 0 } .
{\displaystyle \operatorname {N} (A)=\operatorname {Null} (A)=\operatorname {ker} (A)=\left\{\mathbf {x} \in K^{n}\mid A\mathbf {x} =\mathbf {0} \right\}.} The matrix equation is equivalent to a homogeneous system of linear equations : A x = 0 ⇔ a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = 0 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = 0 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = 0 . {\displaystyle A\mathbf {x} =\mathbf {0} \;\;\Leftrightarrow \;\;{\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\;\cdots \;+\;&&a_{1n}x_{n}&&\;=\;&&&0\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\;\cdots \;+\;&&a_{2n}x_{n}&&\;=\;&&&0\\&&&&&&&&&&\vdots \ \;&&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\;\cdots \;+\;&&a_{mn}x_{n}&&\;=\;&&&0{\text{.}}\\\end{alignedat}}} Thus the kernel of A is the same as the solution set to the above homogeneous equations. The kernel of an m × n matrix A over a field K is a linear subspace of K n . That is, the kernel of A , the set Null( A ) , has the following three properties: it contains the zero vector 0 ; it is closed under addition, that is, if x , y ∈ Null( A ) then x + y ∈ Null( A ) ; and it is closed under scalar multiplication, that is, if x ∈ Null( A ) and c ∈ K then c x ∈ Null( A ) . The product A x can be written in terms of the dot product of vectors as follows: A x = [ a 1 ⋅ x a 2 ⋅ x ⋮ a m ⋅ x ] . {\displaystyle A\mathbf {x} ={\begin{bmatrix}\mathbf {a} _{1}\cdot \mathbf {x} \\\mathbf {a} _{2}\cdot \mathbf {x} \\\vdots \\\mathbf {a} _{m}\cdot \mathbf {x} \end{bmatrix}}.} Here, a 1 , ... , a m denote the rows of the matrix A . It follows that x is in the kernel of A if and only if x is orthogonal (or perpendicular) to each of the row vectors of A (since orthogonality is defined as having a dot product of 0). The row space , or coimage, of a matrix A is the span of the row vectors of A . By the above reasoning, the kernel of A is the orthogonal complement to the row space. That is, a vector x lies in the kernel of A if and only if it is perpendicular to every vector in the row space of A . The dimension of the row space of A is called the rank of A , and the dimension of the kernel of A is called the nullity of A .
These quantities are related by the rank–nullity theorem [ 4 ] rank ⁡ ( A ) + nullity ⁡ ( A ) = n . {\displaystyle \operatorname {rank} (A)+\operatorname {nullity} (A)=n.} The left null space , or cokernel , of a matrix A consists of all column vectors x such that x T A = 0 T , where T denotes the transpose of a matrix. The left null space of A is the same as the kernel of A T . The left null space of A is the orthogonal complement to the column space of A , and is dual to the cokernel of the associated linear transformation. The kernel, the row space, the column space, and the left null space of A are the four fundamental subspaces associated with the matrix A . The kernel also plays a role in the solution to a nonhomogeneous system of linear equations: A x = b or a 11 x 1 + a 12 x 2 + ⋯ + a 1 n x n = b 1 a 21 x 1 + a 22 x 2 + ⋯ + a 2 n x n = b 2 ⋮ a m 1 x 1 + a m 2 x 2 + ⋯ + a m n x n = b m {\displaystyle A\mathbf {x} =\mathbf {b} \quad {\text{or}}\quad {\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\;\cdots \;+\;&&a_{1n}x_{n}&&\;=\;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\;\cdots \;+\;&&a_{2n}x_{n}&&\;=\;&&&b_{2}\\&&&&&&&&&&\vdots \ \;&&&\\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\;\cdots \;+\;&&a_{mn}x_{n}&&\;=\;&&&b_{m}\\\end{alignedat}}} If u and v are two possible solutions to the above equation, then A ( u − v ) = A u − A v = b − b = 0 {\displaystyle A(\mathbf {u} -\mathbf {v} )=A\mathbf {u} -A\mathbf {v} =\mathbf {b} -\mathbf {b} =\mathbf {0} } Thus, the difference of any two solutions to the equation A x = b lies in the kernel of A . It follows that any solution to the equation A x = b can be expressed as the sum of a fixed solution v and an arbitrary element of the kernel. 
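The role of the kernel in nonhomogeneous systems can be illustrated with a small sketch (SymPy is an assumed tool, and the matrix and right-hand side are hypothetical):

```python
# Sketch: the full solution set of A x = b is one particular solution v
# translated by ker(A), i.e. every v + t*k with k spanning the kernel.
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [0, 1, 1]])
b = Matrix([6, 2])

v = Matrix([2, 2, 0])    # a particular solution (set z = 0, back-substitute)
k = Matrix([-1, -1, 1])  # spans ker(A): check A k = 0 below

assert A * k == Matrix([0, 0])   # k lies in the kernel
assert A * v == b                # v solves the system
# Every translate v + t*k is again a solution (and all solutions arise so).
for t in range(-3, 4):
    assert A * (v + t * k) == b
```

Geometrically, the solutions form the line through v in the direction of k, a translate of the kernel.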
That is, the solution set to the equation A x = b is { v + x ∣ A v = b ∧ x ∈ Null ⁡ ( A ) } , {\displaystyle \left\{\mathbf {v} +\mathbf {x} \mid A\mathbf {v} =\mathbf {b} \land \mathbf {x} \in \operatorname {Null} (A)\right\},} Geometrically, this says that the solution set to A x = b is the translation of the kernel of A by the vector v . See also Fredholm alternative and flat (geometry) . The following is a simple illustration of the computation of the kernel of a matrix (see § Computation by Gaussian elimination , below for methods better suited to more complex calculations). The illustration also touches on the row space and its relation to the kernel. Consider the matrix A = [ 2 3 5 − 4 2 3 ] . {\displaystyle A={\begin{bmatrix}2&3&5\\-4&2&3\end{bmatrix}}.} The kernel of this matrix consists of all vectors ( x , y , z ) ∈ R 3 for which [ 2 3 5 − 4 2 3 ] [ x y z ] = [ 0 0 ] , {\displaystyle {\begin{bmatrix}2&3&5\\-4&2&3\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}={\begin{bmatrix}0\\0\end{bmatrix}},} which can be expressed as a homogeneous system of linear equations involving x , y , and z : 2 x + 3 y + 5 z = 0 , − 4 x + 2 y + 3 z = 0. {\displaystyle {\begin{aligned}2x+3y+5z&=0,\\-4x+2y+3z&=0.\end{aligned}}} The same linear equations can also be written in matrix form as: [ 2 3 5 0 − 4 2 3 0 ] . {\displaystyle \left[{\begin{array}{ccc|c}2&3&5&0\\-4&2&3&0\end{array}}\right].} Through Gauss–Jordan elimination , the matrix can be reduced to: [ 1 0 1 / 16 0 0 1 13 / 8 0 ] . {\displaystyle \left[{\begin{array}{ccc|c}1&0&1/16&0\\0&1&13/8&0\end{array}}\right].} Rewriting the matrix in equation form yields: x = − 1 16 z y = − 13 8 z . 
{\displaystyle {\begin{aligned}x&=-{\frac {1}{16}}z\\y&=-{\frac {13}{8}}z.\end{aligned}}} The elements of the kernel can be further expressed in parametric vector form , as follows: [ x y z ] = c [ − 1 / 16 − 13 / 8 1 ] ( where c ∈ R ) {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}=c{\begin{bmatrix}-1/16\\-13/8\\1\end{bmatrix}}\quad ({\text{where }}c\in \mathbb {R} )} Since c is a free variable ranging over all real numbers, this can be expressed equally well as: [ x y z ] = c [ − 1 − 26 16 ] . {\displaystyle {\begin{bmatrix}x\\y\\z\end{bmatrix}}=c{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}.} The kernel of A is precisely the solution set to these equations (in this case, a line through the origin in R 3 ). Here, the vector (−1,−26,16) T constitutes a basis of the kernel of A . The nullity of A is therefore 1, as the kernel is spanned by a single vector. The following dot products are zero: [ 2 3 5 ] [ − 1 − 26 16 ] = 0 a n d [ − 4 2 3 ] [ − 1 − 26 16 ] = 0 , {\displaystyle {\begin{bmatrix}2&3&5\end{bmatrix}}{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}=0\quad \mathrm {and} \quad {\begin{bmatrix}-4&2&3\end{bmatrix}}{\begin{bmatrix}-1\\-26\\16\end{bmatrix}}=0,} which illustrates that vectors in the kernel of A are orthogonal to each of the row vectors of A . These two (linearly independent) row vectors span the row space of A —a plane orthogonal to the vector (−1,−26,16) T . With the rank of A equal to 2, the nullity equal to 1, and the number of columns (the dimension of the domain) equal to 3, we have an illustration of the rank–nullity theorem. A basis of the kernel of a matrix may be computed by Gaussian elimination . For this purpose, given an m × n matrix A , we first construct the row augmented matrix [ A I ] , {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}},} where I is the n × n identity matrix . Computing its column echelon form by Gaussian elimination (or any other suitable method), we get a matrix [ B C ] .
{\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}.} A basis of the kernel of A consists of the non-zero columns of C such that the corresponding column of B is a zero column . In fact, the computation may be stopped as soon as the upper matrix is in column echelon form: the remainder of the computation consists in changing the basis of the vector space generated by the columns whose upper part is zero. For example, suppose that A = [ 1 0 − 3 0 2 − 8 0 1 5 0 − 1 4 0 0 0 1 7 − 9 0 0 0 0 0 0 ] . {\displaystyle A={\begin{bmatrix}1&0&-3&0&2&-8\\0&1&5&0&-1&4\\0&0&0&1&7&-9\\0&0&0&0&0&0\end{bmatrix}}.} Then [ A I ] = [ 1 0 − 3 0 2 − 8 0 1 5 0 − 1 4 0 0 0 1 7 − 9 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 ] . {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}={\begin{bmatrix}1&0&-3&0&2&-8\\0&1&5&0&-1&4\\0&0&0&1&7&-9\\0&0&0&0&0&0\\\hline 1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&1&0&0\\0&0&0&0&1&0\\0&0&0&0&0&1\end{bmatrix}}.} Putting the upper part in column echelon form by column operations on the whole matrix gives [ B C ] = [ 1 0 0 0 0 0 0 1 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 1 0 0 3 − 2 8 0 1 0 − 5 1 − 4 0 0 0 1 0 0 0 0 1 0 − 7 9 0 0 0 0 1 0 0 0 0 0 0 1 ] . {\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}={\begin{bmatrix}1&0&0&0&0&0\\0&1&0&0&0&0\\0&0&1&0&0&0\\0&0&0&0&0&0\\\hline 1&0&0&3&-2&8\\0&1&0&-5&1&-4\\0&0&0&1&0&0\\0&0&1&0&-7&9\\0&0&0&0&1&0\\0&0&0&0&0&1\end{bmatrix}}.} The last three columns of B are zero columns. Therefore, the last three columns of C , [ 3 − 5 1 0 0 0 ] , [ − 2 1 0 − 7 1 0 ] , [ 8 − 4 0 9 0 1 ] {\displaystyle \left[\!\!{\begin{array}{r}3\\-5\\1\\0\\0\\0\end{array}}\right],\;\left[\!\!{\begin{array}{r}-2\\1\\0\\-7\\1\\0\end{array}}\right],\;\left[\!\!{\begin{array}{r}8\\-4\\0\\9\\0\\1\end{array}}\right]} are a basis of the kernel of A .
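The worked example can be double-checked mechanically; this sketch assumes SymPy (not a tool named by the article):

```python
# Sketch: verify that the three column vectors produced by the
# column-echelon computation really form a basis of ker(A).
from sympy import Matrix

A = Matrix([[1, 0, -3, 0,  2, -8],
            [0, 1,  5, 0, -1,  4],
            [0, 0,  0, 1,  7, -9],
            [0, 0,  0, 0,  0,  0]])

basis = [Matrix([3, -5, 1, 0, 0, 0]),
         Matrix([-2, 1, 0, -7, 1, 0]),
         Matrix([8, -4, 0, 9, 0, 1])]

for v in basis:
    assert A * v == Matrix([0, 0, 0, 0])  # each vector lies in the kernel

# Rank-nullity: nullity = n - rank = 6 - 3 = 3, so three independent
# kernel vectors are a full basis; SymPy agrees up to basis choice.
assert A.rank() == 3
assert len(A.nullspace()) == 3
```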
Proof that the method computes the kernel: Since column operations correspond to post-multiplication by invertible matrices, the fact that [ A I ] {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}} reduces to [ B C ] {\displaystyle {\begin{bmatrix}B\\\hline C\end{bmatrix}}} means that there exists an invertible matrix P {\displaystyle P} such that [ A I ] P = [ B C ] , {\displaystyle {\begin{bmatrix}A\\\hline I\end{bmatrix}}P={\begin{bmatrix}B\\\hline C\end{bmatrix}},} with B {\displaystyle B} in column echelon form. Thus A P = B {\displaystyle AP=B} , I P = C {\displaystyle IP=C} , and A C = B {\displaystyle AC=B} . A column vector v {\displaystyle \mathbf {v} } belongs to the kernel of A {\displaystyle A} (that is A v = 0 {\displaystyle A\mathbf {v} =\mathbf {0} } ) if and only if B w = 0 , {\displaystyle B\mathbf {w} =\mathbf {0} ,} where w = P − 1 v = C − 1 v {\displaystyle \mathbf {w} =P^{-1}\mathbf {v} =C^{-1}\mathbf {v} } . As B {\displaystyle B} is in column echelon form, B w = 0 {\displaystyle B\mathbf {w} =\mathbf {0} } , if and only if the nonzero entries of w {\displaystyle \mathbf {w} } correspond to the zero columns of B {\displaystyle B} . By multiplying by C {\displaystyle C} , one may deduce that this is the case if and only if v = C w {\displaystyle \mathbf {v} =C\mathbf {w} } is a linear combination of the corresponding columns of C {\displaystyle C} . The problem of computing the kernel on a computer depends on the nature of the coefficients. If the coefficients of the matrix are exactly given numbers, the column echelon form of the matrix may be computed with Bareiss algorithm more efficiently than with Gaussian elimination. It is even more efficient to use modular arithmetic and Chinese remainder theorem , which reduces the problem to several similar ones over finite fields (this avoids the overhead induced by the non-linearity of the computational complexity of integer multiplication). 
[ citation needed ] For coefficients in a finite field, Gaussian elimination works well, but for the large matrices that occur in cryptography and Gröbner basis computation, better algorithms are known, which have roughly the same computational complexity , but are faster and behave better with modern computer hardware . [ citation needed ] For matrices whose entries are floating-point numbers , the problem of computing the kernel makes sense only for matrices such that the number of rows is equal to their rank: because of the rounding errors , a floating-point matrix almost always has full rank , even when it is an approximation of a matrix of much smaller rank. Even for a full-rank matrix, it is possible to compute its kernel only if it is well conditioned , i.e. it has a low condition number . [ 5 ] [ citation needed ] Even for a well-conditioned full-rank matrix, Gaussian elimination does not behave correctly: it introduces rounding errors that are too large for getting a significant result. As the computation of the kernel of a matrix is a special instance of solving a homogeneous system of linear equations, the kernel may be computed with any of the various algorithms designed to solve homogeneous systems. State-of-the-art software for this purpose is provided by the LAPACK library. [ citation needed ]
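One numerically robust approach, sketched here with NumPy (an assumed tool; the SVD is one standard choice rather than a method prescribed by the article), takes the right singular vectors whose singular values fall below a tolerance:

```python
# Sketch: numerical kernel (null space) of a floating-point matrix via SVD.
# Right singular vectors beyond the numerical rank span ker(A).
import numpy as np

def null_space(A, rtol=1e-10):
    """Orthonormal basis of the (numerical) kernel of A, as columns."""
    _, s, vt = np.linalg.svd(A)
    tol = rtol * (s[0] if s.size else 0.0)
    rank = int(np.sum(s > tol))
    return vt[rank:].T  # rows of V^T beyond the rank span ker(A)

A = np.array([[2.0, 3.0, 5.0],
              [-4.0, 2.0, 3.0]])
N = null_space(A)
assert N.shape == (3, 1)                    # nullity 1, matching the example
assert np.allclose(A @ N, 0.0, atol=1e-12)  # A maps the basis to zero
```

Unlike Gaussian elimination, this degrades gracefully: small singular values signal near-rank-deficiency instead of producing spurious pivots.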
https://en.wikipedia.org/wiki/Kernel_(linear_algebra)
In set theory , the kernel of a function f {\displaystyle f} (or equivalence kernel [ 1 ] ) may be taken to be either the equivalence relation on the domain that relates two elements exactly when their images under f {\displaystyle f} are equal, or the corresponding partition of the domain into fibers. An unrelated notion is that of the kernel of a non-empty family of sets B , {\displaystyle {\mathcal {B}},} which by definition is the intersection of all its elements: ker ⁡ B = ⋂ B ∈ B B . {\displaystyle \ker {\mathcal {B}}~=~\bigcap _{B\in {\mathcal {B}}}\,B.} This definition is used in the theory of filters to classify them as being free or principal . Kernel of a function For the formal definition, let f : X → Y {\displaystyle f:X\to Y} be a function between two sets . Elements x 1 , x 2 ∈ X {\displaystyle x_{1},x_{2}\in X} are equivalent if f ( x 1 ) {\displaystyle f\left(x_{1}\right)} and f ( x 2 ) {\displaystyle f\left(x_{2}\right)} are equal , that is, are the same element of Y . {\displaystyle Y.} The kernel of f {\displaystyle f} is the equivalence relation thus defined. [ 2 ] Kernel of a family of sets The kernel of a family B ≠ ∅ {\displaystyle {\mathcal {B}}\neq \varnothing } of sets is [ 3 ] ker ⁡ B := ⋂ B ∈ B B . {\displaystyle \ker {\mathcal {B}}~:=~\bigcap _{B\in {\mathcal {B}}}B.} The kernel of B {\displaystyle {\mathcal {B}}} is also sometimes denoted by ∩ B . {\displaystyle \cap {\mathcal {B}}.} The kernel of the empty set , ker ⁡ ∅ , {\displaystyle \ker \varnothing ,} is typically left undefined. A family is called fixed and is said to have non-empty intersection if its kernel is not empty. [ 3 ] A family is said to be free if it is not fixed; that is, if its kernel is the empty set. [ 3 ] Like any equivalence relation, the kernel can be modded out to form a quotient set , and the quotient set is the partition: { { w ∈ X : f ( x ) = f ( w ) } : x ∈ X } = { f − 1 ( y ) : y ∈ f ( X ) } .
{\displaystyle \left\{\,\{w\in X:f(x)=f(w)\}~:~x\in X\,\right\}~=~\left\{f^{-1}(y)~:~y\in f(X)\right\}.} This quotient set X / = f {\displaystyle X/=_{f}} is called the coimage of the function f , {\displaystyle f,} and denoted coim ⁡ f {\displaystyle \operatorname {coim} f} (or a variation). The coimage is naturally isomorphic (in the set-theoretic sense of a bijection ) to the image , im ⁡ f ; {\displaystyle \operatorname {im} f;} specifically, the equivalence class of x {\displaystyle x} in X {\displaystyle X} (which is an element of coim ⁡ f {\displaystyle \operatorname {coim} f} ) corresponds to f ( x ) {\displaystyle f(x)} in Y {\displaystyle Y} (which is an element of im ⁡ f {\displaystyle \operatorname {im} f} ). Like any binary relation , the kernel of a function may be thought of as a subset of the Cartesian product X × X . {\displaystyle X\times X.} In this guise, the kernel may be denoted ker ⁡ f {\displaystyle \ker f} (or a variation) and may be defined symbolically as [ 2 ] ker ⁡ f := { ( x , x ′ ) : f ( x ) = f ( x ′ ) } . {\displaystyle \ker f:=\{(x,x'):f(x)=f(x')\}.} The study of the properties of this subset can shed light on f . {\displaystyle f.} If X {\displaystyle X} and Y {\displaystyle Y} are algebraic structures of some fixed type (such as groups , rings , or vector spaces ), and if the function f : X → Y {\displaystyle f:X\to Y} is a homomorphism , then ker ⁡ f {\displaystyle \ker f} is a congruence relation (that is an equivalence relation that is compatible with the algebraic structure), and the coimage of f {\displaystyle f} is a quotient of X . {\displaystyle X.} [ 2 ] The bijection between the coimage and the image of f {\displaystyle f} is an isomorphism in the algebraic sense; this is the most general form of the first isomorphism theorem . 
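A small sketch (Python, with a hypothetical function f) shows the coimage as a partition of the domain in bijection with the image:

```python
# Sketch: the kernel of a function as a partition (the coimage), whose
# blocks are the fibers f^{-1}(y), one per element of the image f(X).
X = range(10)
f = lambda x: x * x % 5  # a hypothetical function for illustration

coimage = {}
for x in X:
    coimage.setdefault(f(x), set()).add(x)

image = set(f(x) for x in X)
assert set(coimage) == image                      # blocks <-> image, bijectively
assert set().union(*coimage.values()) == set(X)   # the blocks cover X

# Distinct blocks are disjoint, so this is a genuine partition of X.
blocks = list(coimage.values())
assert all(b1.isdisjoint(b2) for i, b1 in enumerate(blocks)
           for b2 in blocks[i + 1:])
```

The dictionary keys realize the set-theoretic first isomorphism theorem: block-of-x ↦ f(x) is a bijection from the coimage onto the image.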
If f : X → Y {\displaystyle f:X\to Y} is a continuous function between two topological spaces then the topological properties of ker ⁡ f {\displaystyle \ker f} can shed light on the spaces X {\displaystyle X} and Y . {\displaystyle Y.} For example, if Y {\displaystyle Y} is a Hausdorff space then ker ⁡ f {\displaystyle \ker f} must be a closed set . Conversely, if X {\displaystyle X} is a Hausdorff space and ker ⁡ f {\displaystyle \ker f} is a closed set, then the coimage of f , {\displaystyle f,} if given the quotient space topology, must also be a Hausdorff space. A space is compact if and only if the kernel of every family of closed subsets having the finite intersection property (FIP) is non-empty; [ 4 ] [ 5 ] said differently, a space is compact if and only if every family of closed subsets with F.I.P. is fixed .
https://en.wikipedia.org/wiki/Kernel_(set_theory)
In physics and engineering , the radiative heat transfer from one surface to another is equal to the difference between the incoming and outgoing radiation at the first surface. In general, the heat transfer between surfaces is governed by temperature, surface emissivity properties and the geometry of the surfaces. The relation for heat transfer can be written as an integral equation with boundary conditions based upon surface conditions. Kernel functions can be useful in approximating and solving this integral equation. The radiative heat exchange depends on the local surface temperature of the enclosure and the properties of the surfaces, but does not depend upon the medium, because the medium is assumed to neither absorb, emit, nor scatter radiation. The governing equation of heat transfer between two surfaces A i and A j is q ( r i ) = ∫ λ = 0 ∞ ∫ ψ i = 0 2 π ∫ θ i = 0 π 2 ε λ , i ( λ , ψ i , θ i , r i ) I b λ , i ( cos ⁡ θ i sin ⁡ θ i ) d θ i d ψ i d λ − ∑ j = 1 N ∫ λ = 0 ∞ ρ λ , i ( λ , ψ r , j , θ r , j , ψ j , θ j , r i ) I λ , k ( λ , ψ k , θ k , r i ) cos ⁡ θ j cos ⁡ θ k | r k − r j | 2 d A k {\displaystyle {\begin{aligned}q(r_{i})={}&\int _{\lambda =0}^{\infty }\int _{\psi _{i}=0}^{2\pi }\int _{\theta _{i}=0}^{\frac {\pi }{2}}\varepsilon _{\lambda ,i}(\lambda ,\psi _{i},\theta _{i},r_{i})I_{b\lambda ,i}(\cos \theta _{i}\sin \theta _{i})\,d\theta _{i}\,d\psi _{i}\,d\lambda \\&-\sum _{j=1}^{N}\int _{\lambda =0}^{\infty }\rho _{\lambda ,i}(\lambda ,\psi _{r,j},\theta _{r,j},\psi _{j},\theta _{j},r_{i})I_{\lambda ,k}(\lambda ,\psi _{k},\theta _{k},r_{i}){\frac {\cos \theta _{j}\cos \theta _{k}}{|r_{k}-r_{j}|^{2}}}\,dA_{k}\end{aligned}}} where If the surfaces of the enclosure are approximated as gray and diffuse, then after some analytical manipulation the above equation can be written as q ( r ) + ε ( r ) E b = ε ( r ) ∮ K ( r , r ′ ) [ E b ( r ′ ) + 1 − ε ( r ′ ) ε ( r ) d Γ ( r ′ ) ] {\displaystyle q(r)+\varepsilon (r)E_{b}=\varepsilon (r)\oint K(r,r')\left[E_{b}(r')+1-{\frac
{\varepsilon (r')}{\varepsilon (r)}}d\Gamma (r')\right]} where E b {\displaystyle E_{b}} is the black body emissive power, which is given as a function of the temperature of the black body: E b ( r ) = σ T 4 ( r ) {\displaystyle E_{b}(r)=\sigma T^{4}(r)} where σ {\displaystyle \sigma } is the Stefan–Boltzmann constant . Kernel functions provide a way to manipulate data as though it were projected into a higher-dimensional space, by operating on it in its original space, so that the data become more easily separable in the higher-dimensional space. Kernel functions are also used in the integral equation for surface radiation exchanges; such a kernel function relates to both the geometry of the enclosure and its surface properties. In the above equation, K ( r , r′ ) is the kernel function for the integral, which for 3-D problems takes the following form K ( r , r ′ ) = − n ( r − r ′ ) n ′ ( r − r ′ ) π | r − r ′ | 4 F = cos ⁡ θ r cos ⁡ θ r ′ π | r − r ′ | 2 F {\displaystyle K(r,r')=-{\frac {n(r-r')n'(r-r')}{\pi |r-r'|^{4}}}F={\frac {\cos \theta _{r}\cos \theta _{r}'}{\pi |r-r'|^{2}}}F} where F assumes a value of one when the surface element I sees the surface element J , and zero if the ray is blocked; θr is the angle at point r , and θr ′ at point r ′. The parameter F depends on the geometric configuration of the body, so the kernel function is highly irregular for a geometrically complex enclosure. For 2-D and axisymmetric configurations, the kernel function can be analytically integrated along the z or θ direction.
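A pointwise evaluation of the 3-D kernel above can be sketched as follows; the helper function and the test geometry are illustrative assumptions, not from the source:

```python
# Sketch: evaluate K(r, r') = -[n.(r-r')][n'.(r-r')] / (pi |r-r'|^4) * F,
# where F = 1 if the two surface elements see each other and 0 otherwise.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def radiation_kernel(r, n, rp, npp, visible=True):
    d = [ri - rpi for ri, rpi in zip(r, rp)]   # the vector r - r'
    dist2 = dot(d, d)                          # |r - r'|^2
    F = 1.0 if visible else 0.0
    return -dot(n, d) * dot(npp, d) / (math.pi * dist2 ** 2) * F

# Two unit-distance patches facing each other: cos(theta_r) = cos(theta_r') = 1,
# so K = 1/pi; a blocked ray (F = 0) gives K = 0.
K = radiation_kernel((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1))
assert abs(K - 1 / math.pi) < 1e-12
assert radiation_kernel((0, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, -1),
                        visible=False) == 0.0
```

The dot-product form is used because it avoids committing to a sign convention for the angles; it reduces to cos θr cos θr′ / (π |r − r′|²) for facing elements.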
The integration of the kernel function is K ( r , r ′ ) = − ∬ F n ( r − r ′ ) n ′ ( r − r ′ ) π | r − r ′ | 4 d z ′ d z = n ( r − r ′ ) n ′ ( r − r ′ ) π | r − r ′ | 4 F {\displaystyle K(r,r')=-\iint F{\frac {n(r-r')n'(r-r')}{\pi |r-r'|^{4}}}\,dz'\,dz={\frac {n(r-r')n'(r-r')}{\pi |r-r'|^{4}}}F} Here n denotes the unit normal of element I at the azimuth angle ϕ ′ being zero, and n ′ refers to the unit normal of element J with any azimuth angle ϕ ′. The mathematical expressions for n and n ′ are as follows: n = ( cos ⁡ θ , 0 , sin ⁡ θ ) n ′ = ( cos ⁡ θ ′ cos ⁡ ϕ ′ , cos ⁡ θ ′ sin ⁡ ϕ ′ , sin ⁡ θ ′ ) {\displaystyle {\begin{aligned}n&=(\cos \theta ,0,\sin \theta )\\n'&=(\cos \theta '\cos \phi ',\cos \theta '\sin \phi ',\sin \theta ')\end{aligned}}} Substituting these terms into the equation, the kernel function can be rearranged in terms of the azimuth angle ϕ′: K ( ϕ ′ ) = ( c ′ + d ′ cos ⁡ ϕ ′ ) ( c ″ + d ″ cos ⁡ ϕ ′ ) π ( c + d cos ⁡ ϕ ′ ) 2 F {\displaystyle K(\phi ')={\frac {(c'+d'\cos \phi ')(c''+d''\cos \phi ')}{\pi (c+d\cos \phi ')^{2}}}F} where c = r i 2 + r j 2 + Z j 2 d = − 2 r i r j c ′ = Z j sin ⁡ θ − r i cos ⁡ θ d ′ = r j cos ⁡ θ c ″ = Z j sin ⁡ θ ′ + r j cos ⁡ θ ′ d ″ = − r i cos ⁡ θ ′ {\displaystyle {\begin{aligned}c&=r_{i}^{2}+r_{j}^{2}+Z_{j}^{2}\\d&=-2r_{i}r_{j}\\c'&=Z_{j}\sin \theta -r_{i}\cos \theta \\d'&=r_{j}\cos \theta \\c''&=Z_{j}\sin \theta '+r_{j}\cos \theta '\\d''&=-r_{i}\cos \theta '\end{aligned}}} The relation d { arctan ⁡ ( c − d c + d tan ⁡ ϕ 2 ) } d ϕ = c 2 − d 2 2 ( c + d cos ⁡ ϕ ) {\displaystyle {\frac {d\left\{\arctan \left({\sqrt {\frac {c-d}{c+d}}}\tan {\frac {\phi }{2}}\right)\right\}}{d\phi }}={\frac {\sqrt {c^{2}-d^{2}}}{2(c+d\cos \phi )}}} holds for this particular case.
The final expression for the kernel function is k ¯ ( ϕ ) = 2 ∫ 0 ϕ k ( ϕ ′ ) d ϕ ′ = − 2 π [ A ϕ + B arctan ⁡ ( c − d c + d tan ⁡ ϕ 2 ) + C sin ⁡ ϕ c + d cos ⁡ ϕ ] {\displaystyle {\bar {k}}(\phi )=2\int _{0}^{\phi }k(\phi ')\,d\phi '=-{\frac {2}{\pi }}\left[A\phi +B\arctan \left({\sqrt {\frac {c-d}{c+d}}}\tan {\frac {\phi }{2}}\right)+C{\frac {\sin \phi }{c+d\cos \phi }}\right]} where A = d ′ d ″ d 2 B = 2 ( c 2 − d 2 ) ( d ′ f + e d ″ ) + c d e f d ( c 2 − d 2 ) c 2 − d 2 C = d e f d 2 − c 2 e = d c ′ − c d ′ d f = d c ″ − c d ″ d {\displaystyle {\begin{aligned}A&={\frac {d'd''}{d^{2}}}\\B&=2{\frac {(c^{2}-d^{2})(d'f+ed'')+cdef}{d(c^{2}-d^{2}){\sqrt {c^{2}-d^{2}}}}}\\C&={\frac {def}{d^{2}-c^{2}}}\\e&={\frac {dc'-cd'}{d}}\\f&={\frac {dc''-cd''}{d}}\end{aligned}}}
https://en.wikipedia.org/wiki/Kernel_function_for_solving_integral_equation_of_surface_radiation_exchanges
In mathematics , a closure operator on a set S is a function cl : P ( S ) → P ( S ) {\displaystyle \operatorname {cl} :{\mathcal {P}}(S)\rightarrow {\mathcal {P}}(S)} from the power set of S to itself that satisfies the following conditions for all sets X , Y ⊆ S {\displaystyle X,Y\subseteq S} : X ⊆ cl( X ) (cl is extensive ); X ⊆ Y implies cl( X ) ⊆ cl( Y ) (cl is increasing , or monotone ); and cl(cl( X )) = cl( X ) (cl is idempotent ). Closure operators are determined by their closed sets , i.e., by the sets of the form cl( X ), since the closure cl( X ) of a set X is the smallest closed set containing X . Such families of "closed sets" are sometimes called closure systems or " Moore families ". [ 1 ] A set together with a closure operator on it is sometimes called a closure space . Closure operators are also called " hull operators ", which prevents confusion with the "closure operators" studied in topology . E. H. Moore studied closure operators in his 1910 Introduction to a form of general analysis , whereas the concept of the closure of a subset originated in the work of Frigyes Riesz in connection with topological spaces. [ 2 ] Though not formalized at the time, the idea of closure originated in the late 19th century with notable contributions by Ernst Schröder , Richard Dedekind and Georg Cantor . [ 3 ] The usual set closure from topology is a closure operator. Other examples include the linear span of a subset of a vector space , the convex hull or affine hull of a subset of a vector space or the lower semicontinuous hull f ¯ {\displaystyle {\overline {f}}} of a function f : E → R ∪ { ± ∞ } {\displaystyle f\colon E\to \mathbb {R} \cup \{\pm \infty \}} , where E {\displaystyle E} is e.g. a normed space , defined implicitly by epi ⁡ ( f ¯ ) = epi ⁡ ( f ) ¯ {\displaystyle \operatorname {epi} ({\overline {f}})={\overline {\operatorname {epi} (f)}}} , where epi ⁡ ( f ) {\displaystyle \operatorname {epi} (f)} is the epigraph of a function f {\displaystyle f} .
The relative interior ri {\displaystyle \operatorname {ri} } is not a closure operator: although it is idempotent, it is not increasing. For example, if C 1 {\displaystyle C_{1}} is a cube in R 3 {\displaystyle \mathbb {R} ^{3}} and C 2 {\displaystyle C_{2}} is one of its faces, then C 2 ⊂ C 1 {\displaystyle C_{2}\subset C_{1}} , but ri ⁡ ( C 1 ) ≠ ∅ ≠ ri ⁡ ( C 2 ) {\displaystyle \operatorname {ri} (C_{1})\neq \emptyset \neq \operatorname {ri} (C_{2})} and ri ⁡ ( C 1 ) ∩ ri ⁡ ( C 2 ) = ∅ {\displaystyle \operatorname {ri} (C_{1})\cap \operatorname {ri} (C_{2})=\emptyset } , so ri ⁡ ( C 2 ) ⊄ ri ⁡ ( C 1 ) {\displaystyle \operatorname {ri} (C_{2})\not \subseteq \operatorname {ri} (C_{1})} and ri is not increasing. [ 4 ] In topology, the closure operators are topological closure operators , which must satisfy cl ⁡ ( X 1 ∪ ⋯ ∪ X n ) = cl ⁡ ( X 1 ) ∪ ⋯ ∪ cl ⁡ ( X n ) {\displaystyle \operatorname {cl} (X_{1}\cup \dots \cup X_{n})=\operatorname {cl} (X_{1})\cup \dots \cup \operatorname {cl} (X_{n})} for all n ∈ N {\displaystyle n\in \mathbb {N} } (Note that for n = 0 {\displaystyle n=0} this gives cl ⁡ ( ∅ ) = ∅ {\displaystyle \operatorname {cl} (\varnothing )=\varnothing } ). In algebra and logic , many closure operators are finitary closure operators , i.e. they satisfy cl ⁡ ( X ) = ⋃ { cl ⁡ ( Y ) : Y ⊆ X and Y finite } {\displaystyle \operatorname {cl} (X)=\bigcup \{\operatorname {cl} (Y):Y\subseteq X{\text{ and }}Y{\text{ finite}}\}} . In the theory of partially ordered sets , which are important in theoretical computer science , closure operators have a more general definition that replaces ⊆ {\displaystyle \subseteq } with ≤ {\displaystyle \leq } . (See § Closure operators on partially ordered sets .) The topological closure of a subset X of a topological space consists of all points y of the space, such that every neighbourhood of y contains a point of X . The function that associates to every subset X its closure is a topological closure operator. Conversely, every topological closure operator on a set gives rise to a topological space whose closed sets are exactly the closed sets with respect to the closure operator. Finitary closure operators play a relatively prominent role in universal algebra , and in this context they are traditionally called algebraic closure operators . Every subset of an algebra generates a subalgebra : the smallest subalgebra containing the set. This gives rise to a finitary closure operator.
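Subgroup generation is one such finitary closure operator. The following is a small illustrative sketch (the cyclic group Z_12 and the function name are choices made here for the example, not taken from the article) that computes the subgroup generated by a subset by closing it under addition and negation:

```python
def generated_subgroup(X, n=12):
    """Smallest subgroup of Z_n containing X: close under + (mod n) and negation."""
    S = set(X) | {0}                       # a subgroup always contains the identity
    while True:
        T = S | {(a + b) % n for a in S for b in S} | {(-a) % n for a in S}
        if T == S:                         # fixed point reached: S is closed
            return S
        S = T

# The map X -> generated_subgroup(X) is extensive, monotone and idempotent,
# hence a closure operator on the power set of Z_12:
assert generated_subgroup({4}) == {0, 4, 8}
assert generated_subgroup({4, 6}) == {0, 2, 4, 6, 8, 10}
```

It is finitary because every element of the generated subgroup is produced by finitely many rule applications, so it already lies in the closure of a finite subset.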
Perhaps the best known example for this is the function that associates to every subset of a given vector space its linear span . Similarly, the function that associates to every subset of a given group the subgroup generated by it is a finitary closure operator, and similarly for fields and all other types of algebraic structures . The linear span in a vector space and the similar algebraic closure in a field both satisfy the exchange property: If x is in the closure of the union of A and { y } but not in the closure of A , then y is in the closure of the union of A and { x }. A finitary closure operator with this property is called a matroid . The dimension of a vector space, or the transcendence degree of a field (over its prime field ), is exactly the rank of the corresponding matroid. The function that maps every subset of a given field to its algebraic closure is also a finitary closure operator, and in general it is different from the operator mentioned before. Finitary closure operators that generalize these two operators are studied in model theory as dcl (for definable closure ) and acl (for algebraic closure ). The convex hull in n -dimensional Euclidean space is another example of a finitary closure operator. It satisfies the anti-exchange property: If x is in the closure of the union of { y } and A , but not in the union of { y } and the closure of A , then y is not in the closure of the union of { x } and A . Finitary closure operators with this property give rise to antimatroids . As another example of a closure operator used in algebra, if some algebra has universe A and X is a set of pairs of A , then the operator assigning to X the smallest congruence containing X is a finitary closure operator on A × A . [ 5 ] Suppose you have some logical formalism that contains certain rules allowing you to derive new formulas from given ones. Consider the set F of all possible formulas, and let P be the power set of F , ordered by ⊆.
For a set X of formulas, let cl( X ) be the set of all formulas that can be derived from X . Then cl is a closure operator on P . More precisely, we can obtain cl as follows. Call "continuous" an operator J such that, for every directed class T , J ( lim T ) = lim J ( T ). This continuity condition is the basis of a fixed point theorem for J . Consider the one-step operator J of a monotone logic. This is the operator associating any set X of formulas with the set J ( X ) of formulas that are either logical axioms or are obtained by an inference rule from formulas in X or are in X . Then such an operator is continuous and we can define cl( X ) as the least fixed point for J greater than or equal to X . In accordance with such a point of view, Tarski, Brown, Suszko and other authors proposed a general approach to logic based on closure operator theory. Also, such an idea is proposed in programming logic (see Lloyd 1987) and in fuzzy logic (see Gerla 2000). Around 1930, Alfred Tarski developed an abstract theory of logical deductions that models some properties of logical calculi. Mathematically, what he described is just a finitary closure operator on a set (the set of sentences ). In abstract algebraic logic , finitary closure operators are still studied under the name consequence operator , which was coined by Tarski. The set S represents a set of sentences, a subset T of S a theory, and cl( T ) is the set of all sentences that follow from the theory. Nowadays the term can refer to closure operators that need not be finitary; finitary closure operators are then sometimes called finite consequence operators . The closed sets with respect to a closure operator on S form a subset C of the power set P ( S ). Any intersection of sets in C is again in C . In other words, C is a complete meet-subsemilattice of P ( S ).
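The least-fixed-point construction of cl from a one-step operator J can be sketched concretely. The following toy system (two hypothetical inference rules over string "formulas"; the rule set is invented for illustration, it is not any real logic) iterates J until nothing new is derivable:

```python
# Premises -> conclusion: from "A" derive "B", from "B" derive "C" (illustrative).
RULES = [({"A"}, "B"), ({"B"}, "C")]

def one_step(X):
    """J(X): X together with everything derivable from X by one rule application."""
    return set(X) | {concl for prems, concl in RULES if prems <= X}

def closure(X):
    """cl(X): least fixed point of the one-step operator J that contains X."""
    X = set(X)
    while True:
        Y = one_step(X)
        if Y == X:          # J(X) = X means X is deductively closed
            return X
        X = Y

assert closure({"A"}) == {"A", "B", "C"}
assert closure({"B"}) == {"B", "C"}
assert closure(closure({"A"})) == closure({"A"})   # idempotence
```

Because J is monotone and each formula is derived in finitely many steps, the resulting cl is a finitary closure operator, matching the consequence operators discussed above.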
Conversely, if C ⊆ P ( S ) is closed under arbitrary intersections, then the function that associates to every subset X of S the smallest set Y ∈ C such that X ⊆ Y is a closure operator. There is a simple and fast algorithm for generating all closed sets of a given closure operator. [ 6 ] A closure operator on a set is topological if and only if the set of closed sets is closed under finite unions, i.e., C is a meet-complete sublattice of P ( S ). Even for non-topological closure operators, C can be seen as having the structure of a lattice (the join of two closed sets X , Y ∈ C being cl( X ∪ {\displaystyle \cup } Y )), but then C is not a sublattice of the lattice P ( S ). Given a finitary closure operator on a set, the closures of finite sets are exactly the compact elements of the set C of closed sets. It follows that C is an algebraic poset . Since C is also a lattice, it is often referred to as an algebraic lattice in this context. Conversely, if C is an algebraic poset, then the closure operator is finitary. Each closure operator on a finite set S is uniquely determined by the images of its pseudo-closed sets. [ 7 ] These are recursively defined: A set is pseudo-closed if it is not closed and contains the closure of each of its pseudo-closed proper subsets. Formally: P ⊆ S is pseudo-closed if and only if P ≠ cl( P ) and cl( Q ) ⊆ P for every pseudo-closed proper subset Q of P . A partially ordered set (poset) is a set together with a partial order ≤, i.e. a binary relation that is reflexive ( a ≤ a ), transitive ( a ≤ b ≤ c implies a ≤ c ) and antisymmetric ( a ≤ b ≤ a implies a = b ). Every power set P ( S ) together with inclusion ⊆ is a partially ordered set. A function cl: P → P from a partial order P to itself is called a closure operator if it satisfies the following axioms for all elements x , y in P : x ≤ cl( x ) (extensive); x ≤ y implies cl( x ) ≤ cl( y ) (increasing); and cl(cl( x )) = cl( x ) (idempotent). More succinct alternatives are available: the definition above is equivalent to the single axiom x ≤ cl( y ) if and only if cl( x ) ≤ cl( y ), for all x , y in P .
Using the pointwise order on functions between posets, one may alternatively write the extensiveness property as id P ≤ cl, where id is the identity function . A self-map k that is increasing and idempotent, but satisfies the dual of the extensiveness property, i.e. k ≤ id P is called a kernel operator , [ 8 ] interior operator , [ 9 ] or dual closure . [ 10 ] As examples, if A is a subset of a set B , then the self-map on the powerset of B given by μ A ( X ) = A ∪ X is a closure operator, whereas λ A ( X ) = A ∩ X is a kernel operator. The ceiling function from the real numbers to the real numbers, which assigns to every real x the smallest integer not smaller than x , is another example of a closure operator. A fixpoint of the function cl, i.e. an element c of P that satisfies cl( c ) = c , is called a closed element . A closure operator on a partially ordered set is determined by its closed elements. If c is a closed element, then x ≤ c and cl( x ) ≤ c are equivalent conditions. Every Galois connection (or residuated mapping ) gives rise to a closure operator (as is explained in that article). In fact, every closure operator arises in this way from a suitable Galois connection. [ 11 ] The Galois connection is not uniquely determined by the closure operator. One Galois connection that gives rise to the closure operator cl can be described as follows: if A is the set of closed elements with respect to cl, then cl: P → A is the lower adjoint of a Galois connection between P and A , with the upper adjoint being the embedding of A into P . Furthermore, every lower adjoint of an embedding of some subset into P is a closure operator. "Closure operators are lower adjoints of embeddings." Note however that not every embedding has a lower adjoint. Any partially ordered set P can be viewed as a category , with a single morphism from x to y if and only if x ≤ y . The closure operators on the partially ordered set P are then nothing but the monads on the category P . 
Equivalently, a closure operator can be viewed as an endofunctor on the category of partially ordered sets that has the additional idempotent and extensive properties. If P is a complete lattice , then a subset A of P is the set of closed elements for some closure operator on P if and only if A is a Moore family on P , i.e. the largest element of P is in A , and the infimum (meet) of any non-empty subset of A is again in A . Any such set A is itself a complete lattice with the order inherited from P (but the supremum (join) operation might differ from that of P ). When P is the powerset Boolean algebra of a set X , then a Moore family on P is called a closure system on X . The closure operators on P form themselves a complete lattice; the order on closure operators is defined by cl 1 ≤ cl 2 iff cl 1 ( x ) ≤ cl 2 ( x ) for all x in P .
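The closure operator μ A ( X ) = A ∪ X and the kernel operator λ A ( X ) = A ∩ X mentioned earlier can be verified exhaustively on a small power set. A minimal sketch (the sets A = {1, 2} and B = {1, 2, 3} are arbitrary test data chosen for the example):

```python
from itertools import combinations

B = {1, 2, 3}
A = {1, 2}

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

mu = lambda X: frozenset(A | X)    # closure operator on P(B): X -> A ∪ X
lam = lambda X: frozenset(A & X)   # kernel (interior) operator on P(B): X -> A ∩ X

for X in powerset(B):
    assert X <= mu(X) and mu(mu(X)) == mu(X)        # mu: extensive and idempotent
    assert lam(X) <= X and lam(lam(X)) == lam(X)    # lam: intensive and idempotent
    for Y in powerset(B):
        if X <= Y:                                  # both maps are monotone
            assert mu(X) <= mu(Y) and lam(X) <= lam(Y)
```

The only axiom that differs between the two is the extensive one: μ A enlarges its argument while λ A shrinks it, which is exactly the duality between closure and kernel operators described above.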
https://en.wikipedia.org/wiki/Kernel_operator
A kernel panic (sometimes abbreviated as KP [ 1 ] ) is a safety measure taken by an operating system 's kernel upon detecting an internal fatal error in which either it is unable to safely recover or continuing to run the system would have a higher risk of major data loss. The term is largely specific to Unix and Unix-like systems. The equivalent on Microsoft Windows operating systems is a stop error , often called a "blue screen of death". The kernel routines that handle panics, known as panic() in AT&T -derived and BSD Unix source code, are generally designed to output an error message to the console , dump an image of kernel memory to disk for post-mortem debugging , and then either wait for the system to be manually rebooted, or initiate an automatic reboot . [ 2 ] The information provided is of a highly technical nature and aims to assist a system administrator or software developer in diagnosing the problem. Kernel panics can also be caused by errors originating outside kernel space . For example, many Unix operating systems panic if the init process, which runs in user space , terminates. [ 3 ] [ 4 ] The Unix kernel maintains internal consistency and runtime correctness with assertions as the fault detection mechanism. The basic assumption is that the hardware and the software should perform correctly and a failure of an assertion results in a panic , i.e. a voluntary halt to all system activity. [ 5 ] The kernel panic was introduced in an early version of Unix and demonstrated a major difference between the design philosophies of Unix and its predecessor Multics . Multics developer Tom van Vleck recalls a discussion of this change with Unix developer Dennis Ritchie : I remarked to Dennis that easily half the code I was writing in Multics was error recovery code. He said, "We left all that stuff out. If there's an error, we have this routine called panic, and when it is called, the machine crashes, and you holler down the hall, 'Hey, reboot it. 
' " [ 6 ] The original panic() function was essentially unchanged from Fifth Edition UNIX to the VAX -based UNIX 32V and output only an error message with no other information, then dropped the system into an endless idle loop. As the Unix codebase was enhanced, the panic() function was also enhanced to dump various forms of debugging information to the console. A panic may occur as a result of a hardware failure or a software bug in the operating system. In many cases, the operating system is capable of continued operation after an error has occurred. If the system is in an unstable state, rather than risking security breaches and data corruption, the operating system stops in order to prevent further damage, which helps to facilitate diagnosis of the error and may restart automatically. [ 7 ] After recompiling a kernel binary image from source code , a kernel panic while booting the resulting kernel is a common problem if the kernel was not correctly configured, compiled or installed. [ 8 ] Add-on hardware or malfunctioning RAM could also be sources of fatal kernel errors during start up, due to incompatibility with the OS or a missing device driver . [ 9 ] A kernel may also go into panic() if it is unable to locate a root file system . [ 10 ] During the final stages of kernel userspace initialization, a panic is typically triggered if the spawning of init fails. A panic might also be triggered if the init process terminates, as the system would then be unusable. [ 11 ] The following is an implementation of the Linux kernel final initialization in kernel_init() : [ 12 ] Kernel panics appear in Linux like in other Unix-like systems; however, serious but non-fatal errors can generate another kind of error condition, known as a kernel oops . [ 13 ] In this case, the kernel normally continues to run after killing the offending process . As an oops could cause some subsystems or resources to become unavailable, they can later lead to a full kernel panic. 
On Linux, a kernel panic causes keyboard LEDs to blink as a visual indication of a critical condition. [ 14 ] As of Linux 6.10, drm_panic was merged allowing DRM drivers to support drawing a panic screen to inform the user that a panic occurred. This allows a panic screen to appear even when a display server was running when the panic occurred. [ 15 ] As of Linux 6.12, drm_panic was extended where the stack trace can be encoded as a QR code . [ 16 ] When a kernel panic occurs in Mac OS X 10.2 through 10.7, the computer displays a multilingual message informing the user that they need to reboot the system. [ 17 ] Prior to 10.2, a more traditional Unix-style panic message was displayed; in 10.8 and later, the computer automatically reboots and the message is only displayed as a skippable warning afterward. The format of the message varies from version to version: [ 18 ] If five new kernel panics occur within three minutes of the first one, the Mac will display a prohibitory sign for thirty seconds, and then shut down; this is known as a "recurring kernel panic". [ 19 ] In all versions above 10.2, the text is superimposed on a standby symbol and is not full screen. Debugging information is saved in NVRAM and written to a log file on reboot. In 10.7 there is a feature to automatically restart after a kernel panic. In some cases, on 10.2 and later, white text detailing the error may appear in addition to the standby symbol.
https://en.wikipedia.org/wiki/Kernel_panic
The Kernighan–Lin algorithm is a heuristic algorithm for finding partitions of graphs . The algorithm has important practical application in the layout of digital circuits and components in electronic design automation of VLSI . [ 1 ] [ 2 ] The input to the algorithm is an undirected graph G = ( V , E ) with vertex set V , edge set E , and (optionally) numerical weights on the edges in E . The goal of the algorithm is to partition V into two disjoint subsets A and B of equal (or nearly equal) size, in a way that minimizes the sum T of the weights of the subset of edges that cross from A to B . If the graph is unweighted, then instead the goal is to minimize the number of crossing edges; this is equivalent to assigning weight one to each edge. The algorithm maintains and improves a partition, in each pass using a greedy algorithm to pair up vertices of A with vertices of B , so that moving the paired vertices from one side of the partition to the other will improve the partition. After matching the vertices, it then swaps the subset of the pairs chosen to have the best overall effect on the solution quality T . Given a graph with n vertices, each pass of the algorithm runs in time O ( n 2 log n ) . In more detail, for each a ∈ A {\displaystyle a\in A} , let I a {\displaystyle I_{a}} be the internal cost of a , that is, the sum of the costs of edges between a and other nodes in A , and let E a {\displaystyle E_{a}} be the external cost of a , that is, the sum of the costs of edges between a and nodes in B . Similarly, define I b {\displaystyle I_{b}} , E b {\displaystyle E_{b}} for each b ∈ B {\displaystyle b\in B} . Furthermore, let D s = E s − I s {\displaystyle D_{s}=E_{s}-I_{s}} be the difference between the external and internal costs of s . If a and b are interchanged, then the reduction in cost is T old − T new = D a + D b − 2 c a , b {\displaystyle T_{\mathrm {old} }-T_{\mathrm {new} }=D_{a}+D_{b}-2c_{a,b}} , where c a , b {\displaystyle c_{a,b}} is the cost of the possible edge between a and b .
The algorithm attempts to find an optimal series of interchange operations between elements of A {\displaystyle A} and B {\displaystyle B} which maximizes T o l d − T n e w {\displaystyle T_{old}-T_{new}} and then executes the operations, producing a partition of the graph into A and B . [ 1 ] [ 2 ]
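A single pass of the procedure can be sketched compactly. The following is an illustrative, unoptimized implementation for unweighted graphs given as adjacency sets (it does not attempt the O(n² log n) bookkeeping of the original formulation): all candidate pairs are swapped tentatively, and only the prefix of swaps with the best cumulative gain is kept.

```python
def cut_size(adj, A, B):
    """Number of edges crossing the partition (A, B)."""
    return sum(1 for u in A for v in adj[u] if v in B)

def kl_pass(adj, A, B):
    """One Kernighan-Lin pass; returns the improved partition and its gain."""
    A, B = set(A), set(B)
    locked, gains, swaps = set(), [], []
    for _ in range(min(len(A), len(B))):
        best = None
        for a in A - locked:
            Da = sum(v in B for v in adj[a]) - sum(v in A for v in adj[a])
            for b in B - locked:
                Db = sum(v in A for v in adj[b]) - sum(v in B for v in adj[b])
                gain = Da + Db - 2 * (b in adj[a])   # D_a + D_b - 2 c_ab
                if best is None or gain > best[0]:
                    best = (gain, a, b)
        g, a, b = best
        A.remove(a); B.add(a); B.remove(b); A.add(b)  # tentative swap, then lock
        locked |= {a, b}
        gains.append(g); swaps.append((a, b))
    # keep only the prefix of swaps with the largest cumulative gain
    prefix = [sum(gains[:i + 1]) for i in range(len(gains))]
    k = max(range(len(prefix)), key=prefix.__getitem__)
    if prefix[k] <= 0:
        k = -1                                        # no improvement: undo all
    for a, b in reversed(swaps[k + 1:]):              # revert rejected swaps
        A.remove(b); B.add(b); B.remove(a); A.add(a)
    return A, B, (prefix[k] if k >= 0 else 0)

# Example: two triangles joined by the edge 3-4, starting from a poor partition.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
A, B, g = kl_pass(adj, {1, 2, 5}, {3, 4, 6})
assert A == {1, 2, 3} and B == {4, 5, 6}
assert cut_size(adj, A, B) == 1 and g == 3
```

In practice the pass is repeated until it yields no further gain; the "best prefix" step is what lets the heuristic escape some local minima that pure greedy swapping would get stuck in.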
https://en.wikipedia.org/wiki/Kernighan–Lin_algorithm
Kerogen is solid, insoluble organic matter in sedimentary rocks . It consists of a variety of organic materials, including dead plants, algae, and other microorganisms, that have been compressed and heated by geological processes. All the kerogen on earth is estimated to contain 10^16 tons of carbon. This makes it the most abundant source of organic compounds on earth, exceeding the total organic content of living matter 10,000-fold. [ 1 ] The type of kerogen present in a particular rock formation depends on the type of organic material that was originally present. Kerogen can be classified by these origins: lacustrine (e.g., algal ), marine (e.g., planktonic ), and terrestrial (e.g., pollen and spores ). The type of kerogen depends also on the degree of heat and pressure it has been subjected to, and the length of time the geological processes ran. The result is that a complex mixture of organic compounds resides in sedimentary rocks, serving as the precursor for the formation of hydrocarbons such as oil and gas. In short, kerogen amounts to fossilized organic matter that has been buried and subjected to high temperatures and pressures over millions of years, resulting in various chemical reactions and transformations. Kerogen is insoluble in normal organic solvents and it does not have a specific chemical formula . Upon heating, kerogen converts in part to liquid and gaseous hydrocarbons. Petroleum and natural gas form from kerogen. [ 2 ] The name "kerogen" was introduced by the Scottish organic chemist Alexander Crum Brown in 1906, [ 3 ] [ 4 ] [ 5 ] [ 6 ] derived from the Greek words for wax and origin (Greek: κηρός "wax" and -gen, γένεσις "origin"). The increased production of hydrocarbons from shale has motivated a revival of research into the composition, structure, and properties of kerogen. Many studies have documented dramatic and systematic changes in kerogen composition across the range of thermal maturity relevant to the oil and gas industry.
Analyses of kerogen are generally performed on samples prepared by acid demineralization with critical point drying , which isolates kerogen from the rock matrix without altering its chemical composition or microstructure. [ 7 ] Kerogen is formed during sedimentary diagenesis from the degradation of living matter. The original organic matter can comprise lacustrine and marine algae and plankton and terrestrial higher-order plants. During diagenesis, large biopolymers from, e.g., proteins , lipids , and carbohydrates in the original organic matter, decompose partially or completely. This breakdown process can be viewed as the reverse of photosynthesis . [ 8 ] These resulting units can then polycondense to form geopolymers . The formation of geopolymers in this way accounts for the large molecular weights and diverse chemical compositions associated with kerogen. The smallest units are the fulvic acids , the medium units are the humic acids , and the largest units are the humins . This polymerization usually happens alongside the formation and/or sedimentation of one or more mineral components resulting in a sedimentary rock like oil shale . When kerogen is contemporaneously deposited with geologic material, subsequent sedimentation and progressive burial or overburden provide elevated pressure and temperature owing to lithostatic and geothermal gradients in Earth's crust. Resulting changes in the burial temperatures and pressures lead to further changes in kerogen composition including loss of hydrogen , oxygen , nitrogen , sulfur , and their associated functional groups , and subsequent isomerization and aromatization . Such changes are indicative of the thermal maturity state of kerogen. Aromatization allows for molecular stacking in sheets, which in turn drives changes in physical characteristics of kerogen, such as increasing molecular density, vitrinite reflectance , and spore coloration (yellow to orange to brown to black with increasing depth/thermal maturity).
During the process of thermal maturation , kerogen breaks down in high-temperature pyrolysis reactions to form lower-molecular-weight products including bitumen, oil, and gas. The extent of thermal maturation controls the nature of the product, with lower thermal maturities yielding mainly bitumen/oil and higher thermal maturities yielding gas. These generated species are partially expelled from the kerogen-rich source rock and in some cases can charge into a reservoir rock. Kerogen takes on additional importance in unconventional resources , particularly shale. In these formations, oil and gas are produced directly from the kerogen-rich source rock (i.e. the source rock is also the reservoir rock). Much of the porosity in these shales is found to be hosted within the kerogen, rather than between mineral grains as occurs in conventional reservoir rocks. [ 9 ] [ 10 ] Thus, kerogen controls much of the storage and transport of oil and gas in shale. [ 9 ] Another possible method of formation is that vanabin -containing organisms cleave the metal core out of chlorin -based compounds (such as the magnesium center in chlorophyll ) and replace it with their vanadium center in order to attach and harvest energy via light-harvesting complexes . It is theorized that the bacteria contained in worm castings, Rhodopseudomonas palustris , do this during its photoautotrophism mode of metabolism. Over time, colonies of light-harvesting bacteria solidify, forming kerogen [ citation needed ] . Kerogen is a complex mixture of organic chemical compounds that make up the most abundant fraction of organic matter in sedimentary rocks . [ 12 ] As kerogen is a mixture of organic materials, it is not defined by a single chemical formula. Its chemical composition varies substantially between and even within sedimentary formations.
For example, kerogen from the Green River Formation oil shale deposit of western North America contains elements in the proportions carbon 215 : hydrogen 330 : oxygen 12 : nitrogen 5 : sulfur 1. [ 13 ] Kerogen is insoluble in normal organic solvents in part because of the high molecular weight of its component compounds. The soluble portion is known as bitumen . When heated to the right temperatures in the earth's crust , ( oil window c. 50–150 °C , gas window c. 150–200 °C, both depending on how quickly the source rock is heated) some types of kerogen release crude oil or natural gas , collectively known as hydrocarbons ( fossil fuels ). When such kerogens are present in high concentration in rocks such as organic-rich mudrocks and shale , they form possible source rocks . Shales that are rich in kerogen but have not been heated to the required temperature to generate hydrocarbons instead may form oil shale deposits. The chemical composition of kerogen has been analyzed by several forms of solid state spectroscopy. These experiments typically measure the speciations (bonding environments) of different types of atoms in kerogen. One technique is 13 C NMR spectroscopy , which measures carbon speciation. NMR experiments have found that carbon in kerogen can range from almost entirely aliphatic ( sp 3 hybridized ) to almost entirely aromatic ( sp 2 hybridized ), with kerogens of higher thermal maturity typically having higher abundance of aromatic carbon. [ 14 ] Another technique is Raman spectroscopy . Raman scattering is characteristic of, and can be used to identify, specific vibrational modes and symmetries of molecular bonds. The first-order Raman spectra of kerogen comprises two principal peaks; [ 15 ] a so-called G band ("graphitic") attributed to in-plane vibrational modes of well-ordered sp 2 carbon and a so-called D band ("disordered") from symmetric vibrational modes of sp 2 carbon associated with lattice defects and discontinuities.
The relative spectral position (Raman shift) and intensity of these carbon species is shown to correlate to thermal maturity, [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] with kerogens of higher thermal maturity having higher abundance of graphitic/ordered aromatic carbons. Complementary and consistent results have been obtained with infrared (IR) spectroscopy , which show that kerogen has higher fraction of aromatic carbon and shorter lengths of aliphatic chains at higher thermal maturities. [ 22 ] [ 23 ] These results can be explained by the preferential removal of aliphatic carbons by cracking reactions during pyrolysis, where the cracking typically occurs at weak C–C bonds beta to aromatic rings and results in the replacement of a long aliphatic chain with a methyl group. At higher maturities, when all labile aliphatic carbons have already been removed—in other words, when the kerogen has no remaining oil-generation potential—further increase in aromaticity can occur from the conversion of aliphatic bonds (such as alicyclic rings) to aromatic bonds. IR spectroscopy is sensitive to carbon-oxygen bonds such as quinones , ketones , and esters , so the technique can also be used to investigate oxygen speciation. It is found that the oxygen content of kerogen decreases during thermal maturation (as has also been observed by elemental analysis), with relatively little observable change in oxygen speciation. [ 22 ] Similarly, sulfur speciation can be investigated with X-ray absorption near edge structure (XANES) spectroscopy, which is sensitive to sulfur-containing functional groups such as sulfides , thiophenes , and sulfoxides . Sulfur content in kerogen generally decreases with thermal maturity, and sulfur speciation includes a mix of sulfides and thiophenes at low thermal maturities and is further enriched in thiophenes at high maturities. 
[ 24 ] [ 25 ] Overall, changes in kerogen composition with respect to heteroatom chemistry occur predominantly at low thermal maturities (bitumen and oil windows), while changes with respect to carbon chemistry occur predominantly at high thermal maturities (oil and gas windows). The microstructure of kerogen also evolves during thermal maturation, as has been inferred by scanning electron microscopy (SEM) imaging showing the presence of abundant internal pore networks within the lattice of thermally mature kerogen. [ 9 ] [ 26 ] Analysis by gas sorption demonstrated that the internal specific surface area of kerogen increases by an order of magnitude (~ 40 to 400 m 2 /g) during thermal maturation. [ 27 ] [ 28 ] X-ray and neutron diffraction studies have examined the spacing between carbon atoms in kerogen, revealing during thermal maturation a shortening of carbon-carbon distances in covalently bonded carbons (related to the transition from primarily aliphatic to primarily aromatic bonding) but a lengthening of carbon-carbon distances in carbons at greater bond separations (related to the formation of kerogen-hosted porosity). [ 29 ] This evolution is attributed to the formation of kerogen-hosted pores left behind as segments of the kerogen molecule are cracked off during thermal maturation. These changes in composition and microstructure result in changes in the properties of kerogen. For example, the skeletal density of kerogen increases from approximately 1.1 g/ml at low thermal maturity to 1.7 g/ml at high thermal maturity. [ 30 ] [ 31 ] [ 32 ] This evolution is consistent with the change in carbon speciation from predominantly aliphatic (similar to wax, density < 1 g/ml) to predominantly aromatic (similar to graphite, density > 2 g/ml) with increasing thermal maturity. Additional studies have explored the spatial heterogeneity of kerogen at small length scales. 
Individual particles of kerogen arising from different inputs are identified and assigned as different macerals . This variation in starting material may lead to variations in composition between different kerogen particles, leading to spatial heterogeneity in kerogen composition at the micron length scale. Heterogeneity between kerogen particles may also arise from local variations in catalysis of pyrolysis reactions due to the nature of the minerals surrounding different particles. Measurements performed with atomic force microscopy coupled to infrared spectroscopy (AFM-IR) and correlated with organic petrography have analyzed the evolution of the chemical composition and mechanical properties of individual macerals of kerogen with thermal maturation at the nanoscale. [ 33 ] These results indicate that all macerals decrease in oxygen content and increase in aromaticity (decrease in aliphaticity) during thermal maturation, but some macerals undergo large changes while other macerals undergo relatively small changes. In addition, macerals that are richer in aromatic carbon are mechanically stiffer than macerals that are richer in aliphatic carbon, as expected because highly aromatic forms of carbon (such as graphite) are stiffer than highly aliphatic forms of carbon (such as wax). Labile kerogen breaks down to generate principally liquid hydrocarbons (i.e., oil ), refractory kerogen breaks down to generate principally gaseous hydrocarbons, and inert kerogen generates no hydrocarbons but forms graphite . In organic petrography, the different components of kerogen can be identified by microscopic inspection and are classified as macerals . This classification was developed originally for coal (a sedimentary rock that is rich in organic matter of terrestrial origin) but is now applied to the study of other kerogen-rich sedimentary deposits.
The Van Krevelen diagram is one method of classifying kerogen by "types", where kerogens form distinct groups when the ratios of hydrogen to carbon and oxygen to carbon are compared. [ 34 ] Type I kerogens are characterized by high initial hydrogen-to-carbon (H/C) ratios and low initial oxygen-to-carbon (O/C) ratios. This kerogen is rich in lipid-derived material and is commonly, but not always, from algal organic matter in lacustrine (freshwater) environments. On a mass basis, rocks containing type I kerogen yield the largest quantity of hydrocarbons upon pyrolysis . Hence, from the theoretical view, shales containing type I kerogen are the most promising deposits in terms of conventional oil retorting. [ 35 ] Type II kerogens are characterized by intermediate initial H/C ratios and intermediate initial O/C ratios. Type II kerogen is principally derived from marine organic materials, which are deposited in reducing sedimentary environments. The sulfur content of type II kerogen is generally higher than in other kerogen types, and sulfur is found in substantial amounts in the associated bitumen. Although pyrolysis of type II kerogen yields less oil than type I, the amount yielded is still sufficient for type II-bearing sedimentary deposits to be petroleum source rocks. A sulfur-rich variant, type II-S kerogen, is similar to type II but has a high sulfur content. Type III kerogens are characterized by low initial H/C ratios and high initial O/C ratios. Type III kerogens are derived from terrestrial plant matter, specifically from precursor compounds including cellulose , lignin (a non-carbohydrate polymer formed from phenyl-propane units that binds the strands of cellulose together), terpenes , and phenols . Coal is an organic-rich sedimentary rock that is composed predominantly of this kerogen type. On a mass basis, type III kerogens generate the lowest oil yield of principal kerogen types. Type IV kerogen comprises mostly inert organic matter in the form of polycyclic aromatic hydrocarbons .
It has no potential to produce hydrocarbons. [ 37 ] The diagram on the right shows the organic carbon cycle with the flow of kerogen (black solid lines) and the flow of biospheric carbon (green solid lines), showing both the fixation of atmospheric CO2 by terrestrial and marine primary productivity . The combined flux of reworked kerogen and biospheric carbon into ocean sediments constitutes total organic carbon burial entering the endogenous kerogen pool. [ 38 ] [ 39 ] Carbonaceous chondrite meteorites contain kerogen-like components. [ 40 ] Such material is thought to have formed the terrestrial planets . Kerogenous materials have been detected also in interstellar clouds and dust around stars . [ 41 ] The Curiosity rover has detected organic deposits similar to kerogen in mudstone samples in Gale Crater on Mars using a revised drilling technique. The presence of benzene and propane also indicates the possible presence of kerogen-like materials, from which hydrocarbons are derived. [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ] Helgeson, H.C., et al. (2009). "A chemical and thermodynamic model of oil generation in hydrocarbon source rocks". Geochim. Cosmochim. Acta 73 , 594–695. [ 1 ] Marakushev, S.A.; Belonogova, O.V. (2021). "An inorganic origin of the “oil-source” rocks carbon substance". Georesursy = Georesources 23 , 164–176. [ 2 ]
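The Van Krevelen type distinctions described above can be sketched as a rough classifier on atomic H/C and O/C ratios. The cutoff values below are illustrative assumptions only; real Van Krevelen boundaries are gradational and shift with thermal maturity.

```python
def classify_kerogen(h_c, o_c):
    """Rough kerogen typing from atomic H/C and O/C ratios.

    The cutoffs are illustrative assumptions; actual Van Krevelen
    boundaries are gradational and vary between sources.
    """
    if h_c >= 1.5 and o_c < 0.1:
        return "Type I (lipid-rich, commonly lacustrine algal matter)"
    elif h_c >= 1.0 and o_c < 0.2:
        return "Type II (marine organic matter)"
    elif h_c >= 0.7:
        return "Type III (terrestrial plant matter)"
    else:
        return "Type IV (inert, polycyclic aromatic)"
```

For example, a sample with H/C = 1.6 and O/C = 0.05 falls in the type I field, while H/C = 0.5 at any O/C falls in the inert type IV field.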
https://en.wikipedia.org/wiki/Kerogen
Kerosene , or paraffin , is a combustible hydrocarbon liquid which is derived from petroleum . It is widely used as a fuel in aviation as well as in households. Its name derives from the Greek κηρός ( kērós ) meaning " wax "; it was registered as a trademark by Nova Scotia geologist and inventor Abraham Gesner in 1854 before evolving into a generic trademark . It is sometimes spelled kerosine in scientific and industrial usage. [ 1 ] Kerosene is widely used to power jet engines of aircraft ( jet fuel ), as well as some rocket engines in a highly refined form called RP-1 . It is also commonly used as a cooking and lighting fuel, and for fire toys such as poi . In parts of Asia, kerosene is sometimes used as fuel for small outboard motors or even motorcycles . [ 2 ] World total kerosene consumption for all purposes is equivalent to about 5,500,000 barrels per day as of July 2023. [ 3 ] The term "kerosene" is common in much of Argentina , Australia , Canada , India , New Zealand , Nigeria , and the United States , [ 4 ] [ 5 ] while the term paraffin (or a closely related variant) is used in Chile , East Africa , South Africa , Norway , and the United Kingdom . [ 6 ] The term "lamp oil", or the equivalent in the local languages, is common in the majority of Asia and the Southeastern United States , although in Appalachia , it is also commonly referred to as " coal oil ". [ 7 ] The name "paraffin" is also used to refer to a number of distinct petroleum byproducts other than kerosene. For instance, liquid paraffin (called mineral oil in the US) is a more viscous and highly refined product which is used as a laxative. Paraffin wax is a waxy solid extracted from petroleum. To prevent confusion between kerosene and the much more flammable and volatile gasoline (petrol) , some jurisdictions regulate markings or colourings for containers used to store or dispense kerosene.
For example, in the United States, Pennsylvania requires that portable containers used at retail service stations for kerosene be colored blue, as opposed to red (for gasoline ) or yellow (for diesel ). [ 8 ] [ 9 ] The World Health Organization considers kerosene to be a polluting fuel and recommends that "governments and practitioners immediately stop promoting its household use". [ 10 ] Kerosene smoke contains high levels of harmful particulate matter , and household use of kerosene is associated with higher risks of cancer , respiratory infections, asthma , tuberculosis , cataracts , and adverse pregnancy outcomes. [ 11 ] Kerosene is a low- viscosity , clear liquid formed from hydrocarbons obtained from the fractional distillation of petroleum between 150 and 275 °C (300 and 525 °F), resulting in a mixture with a density of 0.78–0.81 g/cm³ . It is miscible with petroleum solvents , but not with water. It is composed of hydrocarbon molecules that typically contain between 6 and 20 carbon atoms per molecule , [ 12 ] predominantly containing 9 to 16 carbon atoms. [ 13 ] Regardless of crude oil source or processing history, kerosene's major components are branched- and straight-chain alkanes (hydrocarbon chains) and naphthenes (cycloalkanes), which normally account for at least 70% of volume. Aromatic hydrocarbons such as alkylbenzenes (single ring) and alkylnaphthalenes (double ring) do not normally exceed 25% by volume of kerosene streams. Olefins are usually not present at more than 5% by volume. [ 14 ] The heat of combustion of kerosene is similar to that of diesel fuel ; its lower heating value is 43.1 MJ / kg (around 18,500 Btu / lb ), and its higher heating value is 46.2 MJ/kg (19,900 Btu/lb). [ 15 ] ASTM International recognizes two grades of kerosene: 1-K (less than 0.04% sulfur by weight) and 2-K (0.3% sulfur by weight).
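The quoted density and lower heating value imply kerosene's volumetric energy content directly; a quick back-of-the-envelope check, using assumed midpoint values from the ranges above:

```python
# Volumetric energy density of kerosene from the figures quoted above.
density = 0.80          # kg/L, midpoint of the quoted 0.78-0.81 g/cm3 range
lhv = 43.1              # MJ/kg, the quoted lower heating value
energy_per_litre = density * lhv
print(f"{energy_per_litre:.1f} MJ/L")  # ~34.5 MJ per litre
```

This is why kerosene-type fuels remain attractive for aviation: roughly 34–35 MJ of heating value per litre at the quoted densities.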
[ 16 ] Grade 1-K kerosene burns cleaner with fewer deposits, fewer toxins, and less frequent maintenance than 2-K, and is the preferred grade for indoor heaters and stoves. [ 17 ] In the United Kingdom, two grades of heating oil are defined. BS 2869 Class C1 is the lightest grade used for lanterns, camping stoves, and wick heaters, and mixed with petrol in some vintage combustion engines as a substitute for tractor vaporizing oil . [ 18 ] BS 2869 Class C2 is a heavier distillate, which is used as domestic heating oil. Premium kerosene is usually sold in 5- or 20-litre containers from hardware, camping and garden stores, and is often dyed purple. Standard kerosene is usually dispensed in bulk by a tanker and is undyed. National and international standards define the properties of several grades of kerosene used for jet fuel . Flash point and freezing point properties are particularly important for operation and safety; the standards also define additives for control of static electricity and other purposes. Kerosene is liquid at room temperature (25 °C; 77 °F). The flash point of kerosene is between 37 °C (99 °F) and 65 °C (149 °F), and its autoignition temperature is 220 °C (428 °F). [ 19 ] The freezing point of kerosene depends on grade, with commercial aviation fuel standardized at −47 °C (−53 °F). Grade 1-K kerosene freezes around −40 °C (−40 °F, 233 K). [ 20 ] The process of distilling crude oil/petroleum into kerosene, as well as other hydrocarbon compounds, was first written about in the ninth century by the Persian scholar Rāzi (or Rhazes). In his Kitab al-Asrar ( Book of Secrets ), the physician and chemist Razi described two methods for the production of kerosene, termed naft abyad (نفط ابيض "white naphtha"), using an apparatus called an alembic . One method used clay as an absorbent , while the other used chemicals such as ammonium chloride ( sal ammoniac ).
The distillation process was repeated until most of the volatile hydrocarbon fractions had been removed and the final product was perfectly clear and safe to burn. Kerosene was also produced during the same period from oil shale and bitumen by heating the rock to extract the oil, which was then distilled. [ 21 ] During the Ming dynasty , the Chinese produced kerosene by extracting and purifying petroleum and converting it into lamp fuel. [ 22 ] The Chinese made use of petroleum for lighting lamps and heating homes as early as 1500 BC. [ 23 ] Although "coal oil" was well known to industrial chemists at least as early as the 1700s as a byproduct of making coal gas and coal tar, it burned with a smoky flame that prevented its use for indoor illumination. In cities, much indoor illumination was provided by piped-in coal gas , but outside the cities, and for spot lighting within the cities, the lucrative market for fueling indoor lamps was supplied by whale oil , specifically that from sperm whales , which burned brighter and cleaner. [ 24 ] [ 25 ] Canadian geologist Abraham Pineo Gesner claimed that in 1846, he had given a public demonstration in Charlottetown , Prince Edward Island of a new process he had discovered. [ 24 ] [ note 1 ] He heated coal in a retort , and distilled from it a clear, thin fluid that he showed made an excellent lamp fuel. He coined the name "kerosene" for his fuel, a contraction of keroselaion , meaning wax-oil . [ 26 ] The cost of extracting kerosene from coal was high. Gesner recalled from his extensive knowledge of New Brunswick 's geology a naturally occurring asphaltum called albertite . He was blocked from using it by the New Brunswick coal conglomerate because they had coal extraction rights for the province, and he lost a court case when their experts claimed albertite was a form of coal. [ 27 ] In 1854, Gesner moved to Newtown Creek , Long Island , New York . There, he secured backing from a group of businessmen.
They formed the North American Gas Light Company, to which he assigned his patents. Despite clear priority of discovery, Gesner did not obtain his first kerosene patent until 1854, two years after James Young 's United States patent. [ 28 ] [ 29 ] Gesner's method of purifying the distillation products appears to have been superior to Young's, resulting in a cleaner and better-smelling fuel. Manufacture of kerosene under the Gesner patents began in New York in 1854 and later in Boston —being distilled from bituminous coal and oil shale . [ 26 ] Gesner registered the word "Kerosene" as a trademark in 1854, and for several years, only the North American Gas Light Company and the Downer Company (to which Gesner had granted the right) were allowed to call their lamp oil "Kerosene" in the United States. [ 30 ] In 1848, Scottish chemist James Young experimented with oil discovered seeping in a coal mine as a source of lubricating oil and illuminating fuel. When the seep became exhausted, he experimented with the dry distillation of coal, especially the resinous "boghead coal" ( torbanite ). He extracted a number of useful liquids from it, one of which he named paraffine oil because at low temperatures, it congealed into a substance that resembled paraffin wax. Young took out a patent on his process and the resulting products in 1850, and built the first truly commercial oil-works in the world at Bathgate in 1851, using oil extracted from locally mined torbanite, shale, and bituminous coal. In 1852, he took out a United States patent for the same invention. These patents were subsequently upheld in both countries in a series of lawsuits, and other producers were obliged to pay him royalties. [ 26 ] In 1851, Samuel Martin Kier began selling lamp oil to local miners, under the name "Carbon Oil". He distilled this from crude oil by a process of his own invention. He also invented a new lamp to burn his product. 
[ 31 ] He has been dubbed the Grandfather of the American Oil Industry by historians. [ 32 ] Kier's salt wells began to be fouled with petroleum in the 1840s. At first, Kier simply dumped the oil into the nearby Pennsylvania Main Line Canal as useless waste, but later he began experimenting with several distillates of the crude oil, along with a chemist from eastern Pennsylvania. [ 33 ] Ignacy Łukasiewicz , a Polish pharmacist residing in Lviv , and his partner Jan Zeh [ pl ] had been experimenting with different distillation techniques, trying to improve on Gesner's kerosene process, but using oil from a local petroleum seep . Many people knew of his work, but paid little attention to it. On the night of 31 July 1853, doctors at the local hospital needed to perform an emergency operation, virtually impossible by candlelight. They therefore sent a messenger for Łukasiewicz and his new lamps. The lamp burned so brightly and cleanly that the hospital officials ordered several lamps plus a large supply of fuel. Łukasiewicz realized the potential of his work and quit the pharmacy to find a business partner, and then traveled to Vienna to register his technique with the government. Łukasiewicz moved to the Gorlice region of Poland in 1854, and sank several wells across southern Poland over the following decade, setting up a refinery near Jasło in 1859. [ 34 ] The petroleum discovery by Edwin Drake – Drake Well – in western Pennsylvania in 1859 caused a great deal of public excitement and investment drilling in new wells, not only in Pennsylvania, but also in Canada, where petroleum had been discovered at Oil Springs, Ontario in 1858, and southern Poland, where Ignacy Łukasiewicz had been distilling lamp oil from petroleum seeps since 1852. The increased supply of petroleum allowed oil refiners to entirely side-step the oil-from-coal patents of both Young and Gesner, and produce illuminating oil from petroleum without paying royalties to anyone. 
As a result, the illuminating oil industry in the United States completely switched over to petroleum in the 1860s. The petroleum-based illuminating oil was widely sold as Kerosene, and the trade name soon lost its proprietary status, and became the lower-case generic product "kerosene". [ 35 ] Because Gesner's original Kerosene had been also known as "coal oil", generic kerosene from petroleum was commonly called "coal oil" in some parts of the United States well into the 20th century. In the United Kingdom, manufacturing oil from coal (or oil shale) continued into the early 20th century, although increasingly overshadowed by petroleum oils. As kerosene production increased, whaling declined. The American whaling fleet , which had been steadily growing for 50 years, reached its all-time peak of 199 ships in 1858. By 1860, just two years later, the fleet had dropped to 167 ships. The Civil War cut into American whaling temporarily, but only 105 whaling ships returned to sea in 1866, the first full year of peace, and that number dwindled until only 39 American ships set out to hunt whales in 1876. [ 36 ] Kerosene, made first from coal and oil shale, then from petroleum, had largely taken over whaling's lucrative market in lamp oil. Electric lighting started displacing kerosene as an illuminant in the late 19th century, especially in urban areas. However, kerosene remained the predominant commercial end-use for petroleum refined in the United States until 1909, when it was exceeded by motor fuels. The rise of the gasoline-powered automobile in the early 20th century created a demand for the lighter hydrocarbon fractions, and refiners invented methods to increase their output of gasoline, while decreasing their output of kerosene. In addition, some of the heavier hydrocarbons that previously went into kerosene were incorporated into diesel fuel. Kerosene kept some market share by being increasingly used in stoves and portable heaters. 
[ 37 ] A pilot project by ETH Zurich used solar power to produce kerosene from carbon dioxide and water in July 2022. The product can be used in existing aviation applications, and "can also be blended with fossil-derived kerosene". [ 38 ] [ 39 ] Kerosene is produced by fractional distillation of crude oil in an oil refinery . It condenses at a temperature intermediate between diesel fuel , which is less volatile, and naphtha and gasoline , which are more volatile. Kerosene made up 8.5 percent by volume of petroleum refinery output in 2021 in the United States, of which nearly all was kerosene-type jet fuel (8.4 percent). [ 40 ] The fuel, also known as heating oil in the UK and Ireland, remains widely used in kerosene lamps and lanterns in the developing world. [ 41 ] Although it replaced whale oil , the 1873 edition of Elements of Chemistry said, "The vapor of this substance [kerosene] mixed with air is as explosive as gunpowder." [ 42 ] This statement may have been due to the common practice of adulterating kerosene with cheaper but more volatile hydrocarbon mixtures, such as naphtha . [ 43 ] Kerosene was a significant fire risk; in 1880, nearly two of every five New York City fires were caused by defective kerosene lamps. [ 44 ] In less-developed countries kerosene is an important source of energy for cooking and lighting. It is used as a cooking fuel in portable stoves for backpackers . As a heating fuel, it is often used in portable stoves, and is sold in some filling stations . It is sometimes used as a heat source during power failures. Kerosene is widely used in Japan and Chile as a home heating fuel for portable and installed kerosene heaters. In Chile and Japan, kerosene can be readily bought at any filling station or be delivered to homes in some cases. [ 45 ] In the United Kingdom and Ireland, kerosene is often used as a heating fuel in areas not connected to a gas pipeline network. 
It is used less for cooking, with LPG being preferred because it is easier to light. Kerosene is often the fuel of choice for range cookers such as the Rayburn . Additives such as RangeKlene can be put into kerosene to ensure that it burns cleaner and produces less soot when used in range cookers. [ 46 ] The Amish , who generally abstain from the use of electricity, rely on kerosene for lighting at night. More ubiquitous in the late 19th and early 20th centuries, kerosene space heaters were often built into kitchen ranges, and kept many farm and fishing families warm and dry through the winter. At one time, citrus growers used a smudge pot fueled by kerosene to create a pall of thick smoke over a grove in an effort to prevent freezing temperatures from damaging crops. " Salamanders " are kerosene space heaters used on construction sites to dry out building materials and to warm workers. Before the days of electrically lighted road barriers, highway construction zones were marked at night by kerosene-fired, pot-bellied torches. Most of these uses of kerosene created thick black smoke because of the low temperature of combustion. A notable exception, introduced in the late 19th century, is the use of a gas mantle mounted above the wick on a kerosene lamp. Looking like a delicate woven bag above the woven cotton wick, the mantle is a residue of mineral materials (mostly thorium dioxide ), heated to incandescence by the flame from the wick. The thorium and cerium oxide combination produces both a whiter light and a greater fraction of the energy in the form of visible light than a black body at the same temperature would. These types of lamps are still in use today in areas of the world without electricity, because they give a much better light than a simple wick-type lamp does. [ citation needed ] Recently, a multipurpose lantern that doubles as a cook stove has been introduced in India in areas with no electricity.
[ 47 ] In countries such as Nigeria, kerosene is the main fuel used for cooking, especially by the poor, and kerosene stoves have replaced traditional wood-based cooking appliances. As such, increases in the price of kerosene can have major political and environmental consequences. The Indian government subsidizes the fuel to keep the price very low, to around 15 U.S. cents per liter as of February 2007, as keeping the price low discourages dismantling of forests for cooking fuel. [ 48 ] In Nigeria, an attempt by the government to remove a fuel subsidy that includes kerosene met with strong opposition. [ 49 ] Kerosene is used as a fuel in portable stoves , especially in Primus stoves invented in 1892. Portable kerosene stoves are reliable and durable in everyday use, and perform especially well under adverse conditions. In outdoor activities and mountaineering, a decisive advantage of pressurized kerosene stoves over gas cartridge stoves is their particularly high thermal output and their ability to operate at very low ambient temperatures in winter or at high altitude. Wick stoves such as Perfection's, and wickless stoves such as the Boss, continue to be used by the Amish, in off-grid living, and in natural disasters where no power is available. In the early to mid-20th century, kerosene or tractor vaporizing oil was used as a cheap fuel for tractors and hit-and-miss engines . A petrol-paraffin engine would start on gasoline, then switch over to kerosene once the engine warmed up. On some engines, a heat valve on the manifold would route the exhaust gases around the intake pipe, heating the kerosene to the point where it was vaporized and could be ignited by an electric spark . In Europe following the Second World War, automobiles were similarly modified to run on kerosene rather than gasoline, which they would have to import and pay heavy taxes on.
Besides additional piping and the switch between fuels, the head gasket was replaced by a much thicker one to diminish the compression ratio (making the engine less powerful and less efficient, but able to run on kerosene). The necessary equipment was sold under the trademark "Econom". [ 50 ] During the fuel crisis of the 1970s , Saab-Valmet developed and series-produced the Saab 99 Petro that ran on kerosene, turpentine or gasoline. The project, codenamed "Project Lapponia", was headed by Simo Vuorio, and towards the end of the 1970s, a working prototype was produced based on the Saab 99 GL. The car was designed to run on two fuels. Gasoline was used for cold starts and when extra power was needed, but normally it ran on kerosene or turpentine. The idea was that the gasoline could be made from peat using the Fischer–Tropsch process . Between 1980 and 1984, 3,756 Saab 99 Petros and 2,385 Talbot Horizons (a version of the Chrysler Horizon that integrated many Saab components) were made. One reason to manufacture kerosene-fueled cars was that, in Finland, kerosene was less heavily taxed than gasoline. [ 51 ] Kerosene is used to fuel smaller-horsepower outboard motors built by Yamaha, Suzuki, and Tohatsu. Primarily used on small fishing craft, these are dual-fuel engines that start on gasoline and then transition to kerosene once the engine reaches optimum operating temperature . Multiple fuel Evinrude and Mercury Racing engines also burn kerosene, as well as jet fuel. [ 52 ] Today, kerosene is mainly used in fuel for jet engines in several grades. One highly refined form of the fuel is known as RP-1 , and is often burned with liquid oxygen as rocket fuel . These fuel grade kerosenes meet specifications for smoke points and freeze points . The combustion reaction can be approximated as follows, with the molecular formula C₁₂H₂₆ ( dodecane ): 2 C₁₂H₂₆ + 37 O₂ → 24 CO₂ + 26 H₂O. In the initial phase of liftoff, the Saturn V launch vehicle was powered by the reaction of liquid oxygen with RP-1.
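The stoichiometry of the dodecane combustion approximation can be confirmed with a short atom-count check:

```python
# Atom-balance check for kerosene combustion approximated as dodecane:
# 2 C12H26 + 37 O2 -> 24 CO2 + 26 H2O
reactants = {"C": 2 * 12, "H": 2 * 26, "O": 37 * 2}
products = {"C": 24 * 1, "H": 26 * 2, "O": 24 * 2 + 26 * 1}
assert reactants == products  # 24 C, 52 H, and 74 O atoms on each side
print(reactants)
```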
[ 53 ] For the five 6.4 meganewton sea-level thrust F-1 rocket engines of the Saturn V, burning together, the reaction generated roughly 1.62 × 10¹¹ watts (162 gigawatts), or 217 million horsepower. [ 53 ] Kerosene is sometimes used as an additive in diesel fuel to prevent gelling or waxing in cold temperatures. [ 54 ] Ultra-low sulfur kerosene is a custom-blended fuel used by the New York City Transit Authority to power its bus fleet. The transit agency started using this fuel in 2004, prior to the widespread adoption of ultra-low-sulfur diesel , which has since become the standard. In 2008, the suppliers of the custom fuel failed to tender for a renewal of the transit agency's contract, leading to a negotiated contract at a significantly increased cost. [ 55 ] JP-8 (for "Jet Propellant 8"), a kerosene-based fuel, is used by the United States military as a replacement in diesel-fueled vehicles and for powering aircraft. JP-8 is also used by the U.S. military and its NATO allies as a fuel for heaters, stoves, tanks, and as a replacement for diesel fuel in the engines of nearly all tactical ground vehicles and electrical generators. Aliphatic kerosene is a type of kerosene which has a low aromatic hydrocarbon content. The aromatic content of crude oil varies greatly from oil field to oil field; by solvent extraction, however, it is possible to separate aromatic hydrocarbons from aliphatic (alkane) hydrocarbons. A common method is solvent extraction with methanol, DMSO or sulfolane . Aromatic kerosene is a grade of kerosene with a large concentration of aromatic hydrocarbons; an example of this would be Exxon 's Solvesso 150. Kerosene is commonly used in metal extraction as the diluent . For example, in copper extraction by LIX-84, it can be used in mixer settlers. [ 56 ] Kerosene is used as a diluent in the PUREX extraction process, but it is increasingly being supplanted by dodecane and other artificial hydrocarbons such as TPH (hydrogenated propylene trimer).
Traditionally, the UK plants at Sellafield used aromatic kerosene to reduce the radiolysis of TBP, while the French nuclear industry tended to use diluents with very little aromatic content. The French nuclear reprocessing plants typically use TPH as their diluent. In recent times, it has been shown by Mark Foreman at Chalmers that aliphatic kerosene can be replaced in solvent extraction with HVO100, which is a second-generation biodiesel made by Neste . [ 57 ] In X-ray crystallography , kerosene can be used to store crystals. When a hydrated crystal is left in air, dehydration may occur slowly. This makes the color of the crystal become dull. Kerosene can keep air away from the crystal. It can also be used to prevent air from re-dissolving in a boiled liquid, [ 58 ] and to store alkali metals such as potassium , sodium , and rubidium (with the exception of lithium , which is less dense than kerosene, causing it to float). [ 59 ] Kerosene is often used in the entertainment industry for fire performances, such as fire breathing , fire juggling or poi , and fire dancing . Because of its low flame temperature when burnt in free air, the risk is lower should the performer come in contact with the flame. Kerosene is generally not recommended as fuel for indoor fire dancing, as it produces an unpleasant (to some) odor, which becomes poisonous in sufficient concentration. Ethanol was sometimes used instead, but the flames it produces look less impressive, and its lower flash point poses a high risk. As a petroleum product miscible with many industrial liquids, kerosene can be used as both a solvent, able to remove other petroleum products, such as chain grease, and as a lubricant , with less risk of combustion when compared to using gasoline . It can also be used as a cooling agent in metal production and treatment (oxygen-free conditions).
[ 60 ] In the petroleum industry, kerosene is often used as a synthetic hydrocarbon for corrosion experiments to simulate crude oil in field conditions. Kerosene can be used as an adhesive remover on hard-to-remove mucilage or adhesive left by stickers on a glass surface (such as in show windows of stores). [ 58 ] It can be used to remove candle wax that has dripped onto a glass surface; it is recommended that the excess wax be scraped off prior to applying kerosene via a soaked cloth or tissue paper. [ 58 ] It can be used to clean bicycle and motorcycle chains of old lubricant before relubrication. [ 58 ] It can also be used to thin oil-based paint used in fine art. Some artists even use it to clean their brushes; however, it leaves the bristles greasy to the touch. It has seen use for water tank mosquito control in Australia, where a temporary thin floating layer above the water protects it until the defective tank is repaired. [ 61 ] The World Health Organization considers kerosene to be a polluting fuel and recommends that "governments and practitioners immediately stop promoting its household use". [ 62 ] Kerosene smoke contains high levels of harmful particulate matter , and household use of kerosene is associated with higher risks of cancer , respiratory infections, asthma , tuberculosis , cataract , and adverse pregnancy outcomes. [ 63 ] Ingestion of kerosene is harmful. Kerosene is sometimes recommended as a folk remedy for killing head lice , but health agencies warn against this as it can cause burns and serious illness. A kerosene shampoo can even be fatal if fumes are inhaled. [ 64 ] [ 65 ] People can be exposed to kerosene in the workplace by breathing it in, swallowing it, skin contact, and eye contact. The US National Institute for Occupational Safety and Health has set a recommended exposure limit of 100 mg/m 3 over an 8-hour workday. [ 66 ]
https://en.wikipedia.org/wiki/Kerosene
The Kerr/CFT correspondence is an extension of the AdS/CFT correspondence or gauge-gravity duality to rotating black holes (which are described by the Kerr metric ). [ 1 ] The duality works for black holes whose near-horizon geometry can be expressed as a product of AdS 3 and a single compact coordinate. The AdS/CFT duality then maps this to a two-dimensional conformal field theory (the compact coordinate being analogous to the S 5 factor in Maldacena 's original work), from which the correct Bekenstein entropy can then be deduced. [ 2 ] The original form of the duality applies to black holes with the maximum value of angular momentum , but it has now been speculatively extended to all lesser values. [ 3 ] This black hole -related article is a stub . You can help Wikipedia by expanding it . This string theory -related article is a stub . You can help Wikipedia by expanding it .
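As a sketch of how the entropy deduction works (standard results for the extremal case, going beyond what this stub spells out): the chiral CFT dual to extremal Kerr carries central charge $c_L = 12J/\hbar$ and Frolov–Thorne temperature $T_L = 1/2\pi$, so the Cardy formula gives

$$S_{\mathrm{CFT}} \;=\; \frac{\pi^2}{3}\,c_L T_L \;=\; \frac{\pi^2}{3}\cdot\frac{12J}{\hbar}\cdot\frac{1}{2\pi} \;=\; \frac{2\pi J}{\hbar},$$

which matches the Bekenstein–Hawking entropy $S = c^3 A_{\mathrm{hor}}/4G\hbar$, since the horizon area of the extremal Kerr black hole is $A_{\mathrm{hor}} = 8\pi G J/c^3$.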
https://en.wikipedia.org/wiki/Kerr/CFT_correspondence
Kerr frequency combs (also known as microresonator frequency combs ) are optical frequency combs which are generated from a continuous wave pump laser by the Kerr nonlinearity . This coherent conversion of the pump laser to a frequency comb takes place inside an optical resonator which is typically micrometers to millimeters in size and is therefore termed a microresonator . The coherent generation of the frequency comb from a continuous wave laser with the optical nonlinearity as the gain sets Kerr frequency combs apart from today's most common optical frequency combs. These frequency combs are generated by mode-locked lasers where the dominating gain stems from a conventional laser gain medium, which is pumped incoherently. Because Kerr frequency combs only rely on the nonlinear properties of the medium inside the microresonator and do not require a broadband laser gain medium, broad Kerr frequency combs can in principle be generated around any pump frequency. While the principle of Kerr frequency combs is applicable to any type of optical resonator, the requirement for Kerr frequency comb generation is a pump laser field intensity above the parametric threshold of the nonlinear process. This requirement is easier to fulfill inside a microresonator because of the possibly very low losses inside microresonators (and corresponding high quality factors ) and because of the microresonators’ small mode volumes . These two features combined result in a large field enhancement of the pump laser inside the microresonator, which allows the generation of broad Kerr frequency combs for reasonable powers of the pump laser. One important property of Kerr frequency combs, which is a direct consequence of the small dimensions of the microresonators and their resulting large free spectral ranges (FSR) , is the large mode spacing of typical Kerr frequency combs.
For mode-locked lasers this mode spacing, which defines the distance in between adjacent teeth of the frequency comb, is typically in the range of 10 MHz to 1 GHz. For Kerr frequency combs the typical range is from around 10 GHz to 1 THz. The coherent generation of an optical frequency comb from a continuous wave pump laser is not a unique property of Kerr frequency combs. Optical frequency combs generated with cascaded optical modulators also possess this property. For certain applications this property can be advantageous. For example, to stabilize the offset frequency of the Kerr frequency comb one can directly apply feedback to the pump laser frequency. In principle it is also possible to generate a Kerr frequency comb around a particular continuous wave laser in order to use the bandwidth of the frequency comb to determine the exact frequency of the continuous wave laser. Since their first demonstration in silica micro-toroid resonators, [ 1 ] Kerr frequency combs have been demonstrated in a variety of microresonator platforms which notably also include crystalline microresonators [ 2 ] and integrated photonics platforms such as waveguide resonators made from silicon nitride . [ 3 ] More recent research has expanded the range of available platforms further which now includes diamond , [ 4 ] aluminum nitride , [ 5 ] lithium niobate , [ 6 ] and, for mid-infrared pump wavelengths, silicon . [ 7 ] Because both use the nonlinear effects of the propagation medium, the physics of Kerr frequency combs and of supercontinuum generation from pulsed lasers is very similar. In addition to the nonlinearity, the chromatic dispersion of the medium also plays a crucial role for these systems. As a result of the interplay of nonlinearity and dispersion, solitons can form. The most relevant type of solitons for Kerr frequency comb generation are bright dissipative cavity solitons, [ 8 ] [ 9 ] which are sometimes also called dissipative Kerr solitons (DKS).
These bright solitons have helped to significantly advance the field of Kerr frequency combs as they provide a way to generate ultra-short pulses which in turn represent a coherent, broadband optical frequency comb, in a more reliable fashion than what was possible before. In its simplest form with only the Kerr nonlinearity and second order dispersion the physics of Kerr frequency combs and dissipative solitons can be described well by the Lugiato–Lefever equation . [ 10 ] Other effects such as the Raman effect [ 11 ] and higher order dispersion effects require additional terms in the equation.
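As a sketch of this mean-field description, the normalized Lugiato–Lefever equation, ∂ψ/∂τ = −(1 + iα)ψ + i|ψ|²ψ − i(β/2)∂²ψ/∂θ² + F, can be integrated with a split-step Fourier method. All parameter values below are illustrative assumptions, not values from the text:

```python
import numpy as np

def lle_split_step(n_modes=256, n_steps=2000, dt=1e-3,
                   alpha=2.0, beta=-0.02, F=1.5):
    """Integrate the normalized Lugiato-Lefever equation with a split-step
    Fourier scheme: exact linear step in mode space, explicit Euler step
    for the Kerr nonlinearity and the continuous-wave pump F."""
    theta = np.linspace(-np.pi, np.pi, n_modes, endpoint=False)
    k = np.fft.fftfreq(n_modes, d=2 * np.pi / n_modes) * 2 * np.pi  # integer mode numbers
    psi = 0.5 + 0.01 * np.exp(-theta**2)  # weakly perturbed flat field
    # linear part: detuning/loss -(1+i*alpha) and dispersion +i*(beta/2)*k^2
    lin = np.exp((-(1 + 1j * alpha) + 0.5j * beta * k**2) * dt)
    for _ in range(n_steps):
        psi = np.fft.ifft(lin * np.fft.fft(psi))     # linear step in k-space
        psi += dt * (1j * np.abs(psi)**2 * psi + F)  # nonlinear + pump (Euler)
    return theta, psi

theta, psi = lle_split_step()
spectrum = np.abs(np.fft.fft(psi))**2  # the intracavity comb line powers
```

This is only a toy integrator; Raman and higher-order dispersion terms, as noted above, would add further terms to the linear and nonlinear steps.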
https://en.wikipedia.org/wiki/Kerr_frequency_comb
The Kerr metric or Kerr geometry describes the geometry of empty spacetime around a rotating uncharged axially symmetric black hole with a quasispherical event horizon . The Kerr metric is an exact solution of the Einstein field equations of general relativity ; these equations are highly non-linear , which makes exact solutions very difficult to find. The Kerr metric is a generalization to a rotating body of the Schwarzschild metric, discovered by Karl Schwarzschild in 1915, which described the geometry of spacetime around an uncharged, spherically symmetric, and non-rotating body. The corresponding solution for a charged , spherical, non-rotating body, the Reissner–Nordström metric , was discovered soon afterwards (1916–1918). However, the exact solution for an uncharged, rotating black hole, the Kerr metric, was not found until 1963, when it was discovered by Roy Kerr . [ 1 ] [ 2 ] : 69–81 The natural extension to a charged, rotating black hole, the Kerr–Newman metric , was discovered shortly thereafter in 1965. These four related solutions may be summarized as follows, where Q represents the body's electric charge and J represents its spin angular momentum : the Schwarzschild metric ( Q = 0, J = 0), the Reissner–Nordström metric ( Q ≠ 0, J = 0), the Kerr metric ( Q = 0, J ≠ 0), and the Kerr–Newman metric ( Q ≠ 0, J ≠ 0). According to the Kerr metric, a rotating body should exhibit frame-dragging (also known as Lense–Thirring precession ), a distinctive prediction of general relativity. The first measurement of this frame dragging effect was done in 2011 by the Gravity Probe B experiment. Roughly speaking, this effect predicts that objects coming close to a rotating mass will be entrained to participate in its rotation, not because of any applied force or torque that can be felt, but rather because of the swirling curvature of spacetime itself associated with rotating bodies. In the case of a rotating black hole, at close enough distances, all objects – even light – must rotate with the black hole; the region where this holds is called the ergosphere .
The light from distant sources can travel around the event horizon several times (if close enough), creating multiple images of the same object . To a distant viewer, the apparent perpendicular distance between successive images decreases by a factor of e 2 π (about 535). However, rapidly spinning black holes have smaller separations between these multiple images. [ 3 ] [ 4 ] Rotating black holes have surfaces where the metric seems to have apparent singularities ; the size and shape of these surfaces depend on the black hole's mass and angular momentum. The outer surface encloses the ergosphere and has a shape similar to a flattened sphere. The inner surface marks the event horizon; objects passing into the interior of this horizon can never again communicate with the world outside that horizon. However, neither surface is a true singularity, since their apparent singularity can be eliminated in a different coordinate system . A similar situation obtains when considering the Schwarzschild metric, which also appears to result in a singularity at ⁠ r = r s {\displaystyle r=r_{\text{s}}} ⁠ dividing the space above and below r s into two disconnected patches; using a different coordinate transformation one can then relate the extended external patch to the inner patch (see Schwarzschild metric § Singularities and black holes ) – such a coordinate transformation eliminates the apparent singularity where the inner and outer surfaces meet. Objects between these two surfaces must co-rotate with the rotating black hole, as noted above; this feature can in principle be used to extract energy from a rotating black hole, up to its invariant mass energy, Mc 2 . The LIGO experiment that first detected gravitational waves, announced in 2016, also provided the first direct observation of a pair of Kerr black holes. [ 5 ] The Kerr metric is commonly expressed in one of two forms, the Boyer–Lindquist form and the Kerr–Schild form.
It can be readily derived from the Schwarzschild metric, using the Newman–Janis algorithm [ 6 ] via the Newman–Penrose formalism (also known as the spin–coefficient formalism), [ 7 ] the Ernst equation , [ 8 ] or an ellipsoid coordinate transformation. [ 9 ] The Kerr metric describes the geometry of spacetime in the vicinity of a mass ⁠ M {\displaystyle M} ⁠ rotating with angular momentum ⁠ J {\displaystyle J} ⁠ . [ 10 ] The metric (or equivalently its line element for proper time ) in Boyer–Lindquist coordinates is [ 11 ] [ 12 ] ds² = −(1 − r_s r/Σ) c² dt² + (Σ/Δ) dr² + Σ dθ² + (r² + a² + (r_s r a²/Σ) sin²θ) sin²θ dϕ² − (2 r_s r a sin²θ/Σ) c dt dϕ, where the coordinates ⁠ r , θ , ϕ {\displaystyle r,\theta ,\phi } ⁠ are standard oblate spheroidal coordinates , which are equivalent to the Cartesian coordinates [ 13 ] [ 14 ] x = √(r² + a²) sin θ cos ϕ, y = √(r² + a²) sin θ sin ϕ, z = r cos θ, where r s {\displaystyle r_{\text{s}}} is the Schwarzschild radius and where, for brevity, the length scales ⁠ a , Σ {\displaystyle a,\Sigma } ⁠ and ⁠ Δ {\displaystyle \Delta } ⁠ have been introduced as a = J/(Mc), Σ = r² + a² cos²θ, Δ = r² − r_s r + a². A key feature to note in the above metric is the cross-term ⁠ d t d ϕ {\displaystyle dt\,d\phi } ⁠ . This implies that there is coupling between time and motion in the plane of rotation that disappears when the black hole's angular momentum goes to zero. In the non-relativistic limit where ⁠ M {\displaystyle M} ⁠ (or, equivalently, ⁠ r s {\displaystyle r_{\text{s}}} ⁠ ) goes to zero, the Kerr metric becomes the orthogonal metric for the oblate spheroidal coordinates, ds² = −c² dt² + (Σ/(r² + a²)) dr² + Σ dθ² + (r² + a²) sin²θ dϕ². The Kerr metric can be expressed in "Kerr–Schild" form , using a particular set of Cartesian coordinates, as follows: [ 15 ] [ 16 ] [ 17 ] g_{μν} = η_{μν} + f k_μ k_ν, with f = 2GMr³/(r⁴ + a²z²) and k_x = (r x + a y)/(r² + a²), k_y = (r y − a x)/(r² + a²), k_z = z/r, k_0 = 1. These solutions were proposed by Kerr and Schild in 1965. Notice that k is a unit 3-vector , making the 4-vector a null vector , with respect to both g and η . [ 18 ] Here M is the constant mass of the spinning object, η is the Minkowski tensor , and a is a constant rotational parameter of the spinning object. It is understood that the vector ⁠ a → {\displaystyle {\vec {a}}} ⁠ is directed along the positive z-axis.
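Returning to the Boyer–Lindquist form, the nonzero components can be checked numerically (units G = c = 1; the helper function is illustrative, not a standard library routine). They satisfy the identity g_tϕ² − g_tt g_ϕϕ = Δ sin²θ:

```python
import numpy as np

def kerr_bl_metric(r, theta, M=1.0, a=0.5):
    """Nonzero covariant Boyer-Lindquist components of the Kerr metric,
    in units G = c = 1, signature (-,+,+,+)."""
    sigma = r**2 + (a * np.cos(theta))**2   # Sigma = r^2 + a^2 cos^2(theta)
    delta = r**2 - 2 * M * r + a**2         # Delta = r^2 - r_s r + a^2
    s2 = np.sin(theta)**2
    g = {
        "tt":   -(1 - 2 * M * r / sigma),
        "rr":   sigma / delta,
        "thth": sigma,
        "phph": (r**2 + a**2 + 2 * M * r * a**2 * s2 / sigma) * s2,
        "tph":  -2 * M * r * a * s2 / sigma,
    }
    return g, sigma, delta

g, sigma, delta = kerr_bl_metric(r=10.0, theta=np.pi / 2)
# For a = 0 these reduce to the Schwarzschild components, e.g. g_tt = -(1 - 2M/r).
```

The identity above is a convenient sanity check because it is exactly the combination that appears in the determinant of the t–ϕ block of the metric.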
The quantity r is not the radius, but rather is implicitly defined by 1 = (x² + y²)/(r² + a²) + z²/r². Notice that the quantity r becomes the usual radius R when the rotational parameter ⁠ a {\displaystyle a} ⁠ approaches zero. In this form of solution, units are selected so that the speed of light is unity ( ⁠ c = 1 {\displaystyle c=1} ⁠ ). At large distances from the source ( R ≫ a ), these equations reduce to the Eddington–Finkelstein form of the Schwarzschild metric. In the Kerr–Schild form of the Kerr metric, the determinant of the metric tensor is everywhere equal to negative one, even near the source. [ 19 ] As the Kerr metric (along with the Kerr–NUT metric ) is axially symmetric, it can be cast into a form to which the Belinski–Zakharov transform can be applied. This implies that the Kerr black hole has the form of a gravitational soliton . [ 20 ] If the complete rotational energy ⁠ E r o t = c 2 ( M − M i r r ) {\displaystyle E_{\rm {rot}}=c^{2}\left(M-M_{\rm {irr}}\right)} ⁠ of a black hole is extracted, for example with the Penrose process , [ 21 ] [ 22 ] the remaining mass cannot shrink below the irreducible mass. Therefore, if a black hole rotates with the spin ⁠ a = M {\displaystyle a=M} ⁠ , its total mass-equivalent ⁠ M {\displaystyle M} ⁠ is higher by a factor of ⁠ 2 {\displaystyle {\sqrt {2}}} ⁠ in comparison with a corresponding Schwarzschild black hole where ⁠ M {\displaystyle M} ⁠ is equal to ⁠ M irr {\displaystyle M_{\text{irr}}} ⁠ . The reason for this is that in order to get a static body to spin, energy needs to be applied to the system. Because of the mass–energy equivalence this energy also has a mass-equivalent, which adds to the total mass–energy of the system, ⁠ M {\displaystyle M} ⁠ .
The total mass equivalent ⁠ M {\displaystyle M} ⁠ (the gravitating mass) of the body (including its rotational energy ) and its irreducible mass ⁠ M irr {\displaystyle M_{\text{irr}}} ⁠ are related by [ 23 ] [ 24 ] M irr ² = ½ (M² + √(M⁴ − (Jc/G)²)). Since even a direct check on the Kerr metric involves cumbersome calculations, the contravariant components ⁠ g i k {\displaystyle g^{ik}} ⁠ of the metric tensor in Boyer–Lindquist coordinates are shown below in the expression for the square of the four-gradient operator : [ 21 ] We may rewrite the Kerr metric ( 1 ) in the following form: This metric is equivalent to a co-rotating reference frame that is rotating with angular speed Ω that depends on both the radius r and the colatitude θ , where Ω = − g tϕ / g ϕϕ is the angular speed of the frame dragging. Thus, an inertial reference frame is entrained by the rotating central mass to participate in the latter's rotation; this is called frame-dragging, and has been tested experimentally. [ 25 ] Qualitatively, frame-dragging can be viewed as the gravitational analog of electromagnetic induction. An "ice skater", in orbit over the equator and rotationally at rest with respect to the stars, extends her arms. The arm extended toward the black hole will be torqued spinward. The arm extended away from the black hole will be torqued anti-spinward. She will therefore be rotationally sped up, in a counter-rotating sense to the black hole. This is the opposite of what happens in everyday experience. If she is already rotating at a certain speed when she extends her arms, inertial effects and frame-dragging effects will balance and her spin will not change. Due to the equivalence principle , gravitational effects are locally indistinguishable from inertial effects, so this rotation rate, at which when she extends her arms nothing happens, is her local reference for non-rotation. This frame is rotating with respect to the fixed stars and counter-rotating with respect to the black hole.
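The Christodoulou relation between total and irreducible mass can be checked numerically (units G = c = 1, so J = aM; a minimal sketch):

```python
from math import sqrt

def irreducible_mass(M, a):
    """Irreducible mass of a Kerr black hole (G = c = 1, |a| <= M):
    M_irr^2 = (M^2 + sqrt(M^4 - J^2)) / 2, with J = a * M."""
    J = a * M
    return sqrt((M**2 + sqrt(M**4 - J**2)) / 2)

M = 1.0
m_irr_extremal = irreducible_mass(M, a=M)  # extremal spin a = M
extractable = 1 - m_irr_extremal / M       # fraction removable, e.g. via the Penrose process
print(M / m_irr_extremal)  # sqrt(2) ~ 1.4142, as stated above
print(extractable)         # ~0.29: at most ~29% of the mass-energy is extractable
```

For a = 0 the irreducible mass equals M, so a Schwarzschild black hole has no extractable rotational energy.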
A useful metaphor is a planetary gear system with the black hole being the sun gear, the ice skater being a planetary gear and the outside universe being the ring gear. This can also be interpreted through Mach's principle . There are several important surfaces in the Kerr metric ( 1 ). The inner surface corresponds to an event horizon similar to that observed in the Schwarzschild metric; this occurs where the purely radial component g rr of the metric goes to infinity. Solving the quadratic equation ⁠ 1 / g rr ⁠ = 0 yields the solution: r H = r s /2 + √((r s /2)² − a²), which in natural units (that give ⁠ G = M = c = 1 {\displaystyle G=M=c=1} ⁠ ) simplifies to: r H = 1 + √(1 − a²). While in the Schwarzschild metric the event horizon is also the place where the purely temporal component g tt of the metric changes sign from positive to negative, in the Kerr metric that happens at a different distance. Again solving a quadratic equation g tt = 0 yields the solution: r E = r s /2 + √((r s /2)² − a² cos²θ), or in natural units: r E = 1 + √(1 − a² cos²θ). Due to the cos 2 θ term in the square root, this outer surface resembles a flattened sphere that touches the inner surface at the poles of the rotation axis, where the colatitude θ equals 0 or π ; the space between these two surfaces is called the ergosphere. Within this volume, the purely temporal component g tt is negative, i.e., acts like a purely spatial metric component. Consequently, particles within this ergosphere must co-rotate with the inner mass, if they are to retain their time-like character. A moving particle experiences a positive proper time along its worldline , its path through spacetime. However, this is impossible within the ergosphere, where g tt is negative, unless the particle is co-rotating around the interior mass ⁠ M {\displaystyle M} ⁠ with an angular speed at least of ⁠ Ω {\displaystyle \Omega } ⁠ . Thus, no particle can move in the direction opposite to the central mass's rotation within the ergosphere.
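The two surfaces can be computed directly in natural units with M restored (G = c = 1, so r_s = 2M); a minimal sketch:

```python
from math import sqrt, cos, pi

def kerr_surfaces(M, a, theta):
    """Event horizon r_H and outer ergosurface r_E(theta) of a Kerr black
    hole in units G = c = 1; requires |a| <= M so the roots are real."""
    r_horizon = M + sqrt(M**2 - a**2)
    r_ergo = M + sqrt(M**2 - (a * cos(theta))**2)
    return r_horizon, r_ergo

# The ergosurface touches the horizon at the poles and is widest at the equator:
r_h, r_e_pole = kerr_surfaces(1.0, 0.9, 0.0)
_, r_e_equator = kerr_surfaces(1.0, 0.9, pi / 2)
# r_e_pole == r_h, while r_e_equator = 2M regardless of spin
```

For a = 0 both surfaces coincide at r = 2M, recovering the Schwarzschild horizon.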
As with the event horizon in the Schwarzschild metric, the apparent singularity at r H is due to the choice of coordinates (i.e., it is a coordinate singularity ). In fact, the spacetime can be smoothly continued through it by an appropriate choice of coordinates. In turn, the outer boundary of the ergosphere at r E is not singular by itself even in Kerr coordinates due to non-zero ⁠ d t d ϕ {\displaystyle dt\ d\phi } ⁠ term. A black hole in general is surrounded by a surface, called the event horizon and situated at the Schwarzschild radius for a nonrotating black hole, where the escape velocity is equal to the velocity of light. Within this surface, no observer/particle can maintain itself at a constant radius. It is forced to fall inwards, and so this is sometimes called the static limit . A rotating black hole has the same static limit at its event horizon but there is an additional surface outside the event horizon named the "ergosurface" given by in Boyer–Lindquist coordinates , which can be intuitively characterized as the sphere where "the rotational velocity of the surrounding space" is dragged along with the velocity of light. Within this sphere the dragging is greater than the speed of light, and any observer/particle is forced to co-rotate. The region outside the event horizon but inside the surface where the rotational velocity is the speed of light, is called the ergosphere (from Greek ergon meaning work ). Particles falling within the ergosphere are forced to rotate faster and thereby gain energy. Because they are still outside the event horizon, they may escape the black hole. The net process is that the rotating black hole emits energetic particles at the cost of its own total energy. The possibility of extracting spin energy from a rotating black hole was first proposed by the mathematician Roger Penrose in 1969 and is thus called the Penrose process. 
Rotating black holes in astrophysics are a potential source of large amounts of energy and are used to explain energetic phenomena, such as gamma-ray bursts . The Kerr geometry exhibits many noteworthy features: the maximal analytic extension includes a sequence of asymptotically flat exterior regions, each associated with an ergosphere , stationary limit surfaces, event horizons , Cauchy horizons , closed timelike curves , and a ring-shaped curvature singularity . The geodesic equation can be solved exactly in closed form. In addition to two Killing vector fields (corresponding to time translation and axisymmetry ), the Kerr geometry admits a remarkable Killing tensor . There is a pair of principal null congruences (one ingoing and one outgoing ). The Weyl tensor is algebraically special , in fact it has Petrov type D . The global structure is known. Topologically, the homotopy type of the Kerr spacetime can be simply characterized as a line with circles attached at each integer point. Note that the inner Kerr geometry is unstable with regard to perturbations in the interior region. This instability means that although the Kerr metric is axis-symmetric, a black hole created through gravitational collapse may not be so. [ 13 ] This instability also implies that many of the features of the Kerr geometry described above may not be present inside such a black hole. [ 27 ] [ 28 ] A surface on which light can orbit a black hole is called a photon sphere. The Kerr solution has infinitely many photon spheres , lying between an inner one and an outer one. In the nonrotating, Schwarzschild solution, with ⁠ a = 0 {\displaystyle a=0} ⁠ , the inner and outer photon spheres degenerate, so that there is only one photon sphere at a single radius. The greater the spin of a black hole, the farther from each other the inner and outer photon spheres move. 
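For circular photon orbits in the equatorial plane there is a closed-form radius (Bardeen's formula, in units G = c = 1); a quick check of the limiting cases:

```python
from math import acos, cos

def photon_circle_radius(M, a, prograde=True):
    """Radius of the circular equatorial photon orbit around a Kerr black
    hole (Bardeen's formula, G = c = 1):
    r = 2M * (1 + cos((2/3) * arccos(-/+ a/M))), '-' for prograde orbits."""
    sign = -1.0 if prograde else 1.0
    return 2 * M * (1 + cos(2.0 / 3.0 * acos(sign * a / M)))

r_schw  = photon_circle_radius(1.0, 0.0)                  # ~3M: the single Schwarzschild photon sphere
r_pro   = photon_circle_radius(1.0, 1.0, prograde=True)   # ~1M: extremal spin, co-rotating light
r_retro = photon_circle_radius(1.0, 1.0, prograde=False)  # ~4M: extremal spin, counter-rotating light
```

This shows the splitting described above: as a grows from 0 to M, the co-rotating and counter-rotating photon orbits move from a common radius of 3M out to M and 4M respectively.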
A beam of light traveling in a direction opposite to the spin of the black hole will circularly orbit the hole at the outer photon sphere. A beam of light traveling in the same direction as the black hole's spin will circularly orbit at the inner photon sphere. Orbiting geodesics with some angular momentum perpendicular to the axis of rotation of the black hole will orbit on photon spheres between these two extremes. Because the spacetime is rotating, such orbits exhibit a precession, since there is a shift in the ⁠ ϕ {\displaystyle \phi } ⁠ variable after completing one period in the ⁠ θ {\displaystyle \theta } ⁠ variable. The equations of motion for test particles in the Kerr spacetime are governed by four constants of motion . [ 29 ] The first is the invariant mass ⁠ μ {\displaystyle \mu } ⁠ of the test particle, defined by the relation − μ 2 = p α g α β p β , {\displaystyle -\mu ^{2}=p^{\alpha }g_{\alpha \beta }p^{\beta },} where ⁠ p α {\displaystyle p^{\alpha }} ⁠ is the four-momentum of the particle. Furthermore, there are two constants of motion given by the time translation and rotation symmetries of Kerr spacetime, the energy ⁠ E {\displaystyle E} ⁠ , and the component of the orbital angular momentum parallel to the spin of the black hole ⁠ L z {\displaystyle L_{z}} ⁠ . [ 21 ] [ 30 ] E = − p t , {\displaystyle E=-p_{t},} and L z = p ϕ {\displaystyle L_{z}=p_{\phi }} Using Hamilton–Jacobi theory , Brandon Carter showed that there exists a fourth constant of motion, ⁠ Q {\displaystyle Q} ⁠ , [ 29 ] now referred to as the Carter constant . It is related to the total angular momentum of the particle and is given by Q = p θ 2 + cos 2 ⁡ θ ( a 2 ( μ 2 − E 2 ) + ( L z sin ⁡ θ ) 2 ) . 
{\displaystyle Q=p_{\theta }^{2}+\cos ^{2}\theta \left(a^{2}\left(\mu ^{2}-E^{2}\right)+\left({\frac {L_{z}}{\sin \theta }}\right)^{2}\right).} Since there are four (independent) constants of motion for the four degrees of freedom, the equations of motion for a test particle in Kerr spacetime are integrable . Using these constants of motion, the trajectory equations for a test particle can be written (using natural units of ⁠ G = M = c = 1 {\displaystyle G=M=c=1} ⁠ ), [ 29 ] Σ d r d λ = ± R ( r ) Σ d θ d λ = ± Θ ( θ ) Σ d ϕ d λ = − ( a E − L z sin 2 ⁡ θ ) + a Δ P ( r ) Σ d t d λ = − a ( a E sin 2 ⁡ θ − L z ) + r 2 + a 2 Δ P ( r ) {\displaystyle {\begin{aligned}\Sigma {\frac {dr}{d\lambda }}&=\pm {\sqrt {R(r)}}\\\Sigma {\frac {d\theta }{d\lambda }}&=\pm {\sqrt {\Theta (\theta )}}\\\Sigma {\frac {d\phi }{d\lambda }}&=-\left(aE-{\frac {L_{z}}{\sin ^{2}\theta }}\right)+{\frac {a}{\Delta }}P(r)\\\Sigma {\frac {dt}{d\lambda }}&=-a\left(aE\sin ^{2}\theta -L_{z}\right)+{\frac {r^{2}+a^{2}}{\Delta }}P(r)\end{aligned}}} with P(r) = E(r² + a²) − a L z , R(r) = P(r)² − Δ[μ² r² + (L z − a E)² + Q], and Θ(θ) = Q − cos²θ [a²(μ² − E²) + L z ²/sin²θ], where ⁠ λ {\displaystyle \lambda } ⁠ is an affine parameter such that ⁠ d x α d λ = p α {\displaystyle {\frac {dx^{\alpha }}{d\lambda }}=p^{\alpha }} ⁠ . In particular, when ⁠ μ > 0 {\displaystyle \mu >0} ⁠ the affine parameter ⁠ λ {\displaystyle \lambda } ⁠ is related to the proper time ⁠ τ {\displaystyle \tau } ⁠ through ⁠ λ = τ / μ {\displaystyle \lambda =\tau /\mu } ⁠ . Because of the frame-dragging effect, a zero-angular-momentum observer (ZAMO) is corotating with the angular velocity ⁠ Ω {\displaystyle \Omega } ⁠ which is defined with respect to the bookkeeper's coordinate time ⁠ t {\displaystyle t} ⁠ . [ 31 ] The local velocity ⁠ v {\displaystyle v} ⁠ of the test-particle is measured relative to a probe corotating with ⁠ Ω {\displaystyle \Omega } ⁠ . The gravitational time-dilation between a ZAMO at fixed ⁠ r {\displaystyle r} ⁠ and a stationary observer far away from the mass is t τ = ( a 2 + r 2 ) 2 − a 2 Δ sin 2 ⁡ θ Δ Σ .
{\displaystyle {\frac {t}{\tau }}={\sqrt {\frac {\left(a^{2}+r^{2}\right)^{2}-a^{2}\Delta \sin ^{2}\theta }{\Delta \ \Sigma }}}.} In Cartesian Kerr–Schild coordinates, the equations for a photon are [ 32 ] x ¨ + i y ¨ = 4 i M a r Σ 2 W [ x ˙ + i y ˙ − x + i y r r ˙ ] − M ( x + i y ) ( 4 r 2 Σ − 1 ) C − a 2 W 2 r Σ 2 {\displaystyle {\ddot {x}}+i{\ddot {y}}=4iMa{\frac {r}{\Sigma ^{2}}}W\left[{\dot {x}}+i{\dot {y}}-{\frac {x+iy}{r}}{\dot {r}}\right]-M(x+iy)\left({\frac {4r^{2}}{\Sigma }}-1\right){\frac {C-a^{2}W^{2}}{r\Sigma ^{2}}}} z ¨ = − M z ( 4 r 2 Σ − 1 ) C r Σ 2 {\displaystyle {\ddot {z}}=-Mz\left({\frac {4r^{2}}{\Sigma }}-1\right){\frac {C}{r\Sigma ^{2}}}} where ⁠ C {\displaystyle C} ⁠ is analogous to Carter's constant and ⁠ W {\displaystyle W} ⁠ is a useful quantity C = p θ 2 + ( a E sin ⁡ θ − L z sin ⁡ θ ) 2 {\displaystyle C=p_{\theta }^{2}+\left(aE\sin {\theta }-{\frac {L_{z}}{\sin {\theta }}}\right)^{2}} W = t ˙ − a sin 2 ⁡ θ ϕ ˙ {\displaystyle W={\dot {t}}-a\sin ^{2}{\theta }{\dot {\phi }}} If we set ⁠ a = 0 {\displaystyle a=0} ⁠ , the Schwarzschild geodesics are restored. The group of isometries of the Kerr metric is the subgroup of the ten-dimensional Poincaré group which takes the two-dimensional locus of the singularity to itself. It retains the time translations (one dimension) and rotations around its axis of rotation (one dimension). Thus it has two dimensions. Like the Poincaré group, it has four connected components: the component of the identity; the component which reverses time and longitude; the component which reflects through the equatorial plane; and the component that does both. In physics, symmetries are typically associated with conserved constants of motion, in accordance with Noether's theorem . As shown above, the geodesic equations have four conserved quantities: one of which comes from the definition of a geodesic, and two of which arise from the time translation and rotation symmetry of the Kerr geometry. 
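The constants of motion introduced above can be evaluated numerically from a particle's four-momentum by lowering indices with the Boyer–Lindquist metric (G = c = 1; the helper below is an illustrative sketch, not a standard routine):

```python
import numpy as np

def kerr_constants(r, theta, p_up, M=1.0, a=0.5):
    """Energy E, axial angular momentum L_z, invariant mass squared mu^2,
    and Carter constant Q for a test particle in Kerr spacetime (G = c = 1).
    p_up = (p^t, p^r, p^theta, p^phi) in Boyer-Lindquist coordinates."""
    s2, c2 = np.sin(theta)**2, np.cos(theta)**2
    sigma = r**2 + a**2 * c2
    delta = r**2 - 2 * M * r + a**2
    # covariant metric components, signature (-,+,+,+)
    g_tt = -(1 - 2 * M * r / sigma)
    g_tph = -2 * M * r * a * s2 / sigma
    g_phph = (r**2 + a**2 + 2 * M * r * a**2 * s2 / sigma) * s2
    pt, pr, pth, pph = p_up
    # lower the index: p_mu = g_{mu nu} p^nu
    p_t = g_tt * pt + g_tph * pph
    p_ph = g_tph * pt + g_phph * pph
    p_r = (sigma / delta) * pr
    p_th = sigma * pth
    E, Lz = -p_t, p_ph
    mu2 = -(p_t * pt + p_r * pr + p_th * pth + p_ph * pph)  # -p_alpha p^alpha
    Q = p_th**2 + c2 * (a**2 * (mu2 - E**2) + Lz**2 / s2)   # Carter constant
    return E, Lz, mu2, Q

# A particle momentarily at rest in BL coordinates at r = 10, equatorial plane:
E, Lz, mu2, Q = kerr_constants(10.0, np.pi / 2, (1 / 0.8**0.5, 0.0, 0.0, 0.0))
# normalization gives mu^2 = 1, and Q vanishes for equatorial motion
```

The check that Q = 0 for equatorial motion reflects the definition above: the cos²θ prefactor kills the second term, and p_θ = 0 in the plane.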
The fourth conserved quantity does not arise from a symmetry in the standard sense and is commonly referred to as a hidden symmetry. The location of the event horizon is determined by the larger root of ⁠ Δ = 0 {\displaystyle \Delta =0} ⁠ . When ⁠ r s / 2 < a {\displaystyle r_{\text{s}}/2<a} ⁠ (i.e. ⁠ G M 2 < J c {\displaystyle GM^{2}<Jc} ⁠ ), there are no (real valued) solutions to this equation, and there is no event horizon. With no event horizons to hide it from the rest of the universe, the black hole ceases to be a black hole and will instead be a naked singularity . [ 33 ] Although the Kerr solution appears to be singular at the roots of ⁠ Δ = 0 {\displaystyle \Delta =0} ⁠ , these are actually coordinate singularities, and, with an appropriate choice of new coordinates, the Kerr solution can be smoothly extended through the values of ⁠ r {\displaystyle r} ⁠ corresponding to these roots. The larger of these roots determines the location of the event horizon, and the smaller determines the location of a Cauchy horizon . A (future-directed, time-like) curve can start in the exterior and pass through the event horizon. Once having passed through the event horizon, the ⁠ r {\displaystyle r} ⁠ coordinate now behaves like a time coordinate, so it must decrease until the curve passes through the Cauchy horizon. [ 34 ] The Kerr metric, which describes the spacetime geometry around a rotating black hole, can be extended beyond the inner event horizon. In the Boyer-Lindquist coordinate system ( t , r , θ , ϕ ) {\displaystyle (t,r,\theta ,\phi )} , this inner horizon is located at r − = M − M 2 − a 2 . {\displaystyle r_{-}=M-{\sqrt {M^{2}-a^{2}}}.} As one crosses this inner horizon, the radial coordinate r {\displaystyle r} continues to decrease, even becoming negative. At r = 0 {\displaystyle r=0} , a peculiar feature arises: a ring singularity. 
Unlike the point singularity in the Schwarzschild metric (a non-rotating black hole), the Kerr singularity is not a single point but a ring lying in the equatorial plane ( θ = π / 2 {\displaystyle \theta =\pi /2} ). This ring singularity acts as a portal to a new region of spacetime. If we avoid the equatorial plane ( θ ≠ π / 2 {\displaystyle \theta \neq \pi /2} ), we can smoothly continue the coordinate r {\displaystyle r} to negative values. This region with r < 0 {\displaystyle r<0} is interpreted as an entirely new, asymptotically flat universe, often called the "anti-universe." This anti-universe has some surprising properties: Negative ADM Mass: The anti-universe possesses a negative Arnowitt-Deser-Misner (ADM) mass , which can be thought of as the total mass-energy of the spacetime as measured at infinity. A negative mass is a highly unusual concept in general relativity, and its physical interpretation is still debated. Within the anti-universe, an even stranger phenomenon occurs. The metric component g ϕ ϕ {\displaystyle g_{\phi \phi }} , which is related to the azimuthal direction around the ring singularity, can change sign. Specifically, g ϕ ϕ {\displaystyle g_{\phi \phi }} is given by: g ϕ ϕ = ( r 2 + a 2 ) 2 − Δ a 2 sin 2 ⁡ θ Σ sin 2 ⁡ θ . {\displaystyle g_{\phi \phi }={\frac {(r^{2}+a^{2})^{2}-\Delta a^{2}\sin ^{2}\theta }{\Sigma }}\sin ^{2}\theta .} When g ϕ ϕ {\displaystyle g_{\phi \phi }} becomes negative, the coordinate ϕ {\displaystyle \phi } becomes timelike, and a linear combination of the coordinates t {\displaystyle t} and ϕ {\displaystyle \phi } becomes spacelike. This leads to the existence of closed timelike curves (CTCs). A CTC is a path through spacetime where an object could travel back to its own past, violating causality. The boundary where g ϕ ϕ {\displaystyle g_{\phi \phi }} changes sign, and where CTCs therefore first appear, lies beyond the Cauchy horizon .
It is defined by the condition g ϕ ϕ = 0 {\displaystyle g_{\phi \phi }=0} , which gives ( r 2 + a 2 ) 2 = a 2 Δ sin 2 ⁡ θ {\displaystyle (r^{2}+a^{2})^{2}=a^{2}\Delta \sin ^{2}\theta } The Cauchy horizon acts as a boundary beyond which the familiar notions of cause and effect break down. The presence of CTCs raises fundamental questions about the predictability and consistency of the laws of physics in these extreme regions of spacetime. The anti-universe region of the extended Kerr metric is a fascinating and perplexing theoretical construct. It presents a scenario with a negative mass, reversed time orientation, and the possibility of time travel through closed timelike curves. [ 27 ] [ 28 ] While the physical reality of the anti-universe remains uncertain, its study provides valuable insights into the nature of spacetime, gravity, and the limits of our current understanding of the universe. While it is expected that the exterior region of the Kerr solution is stable, and that all rotating black holes will eventually approach a Kerr metric, the interior region of the solution appears to be unstable, much like a pencil balanced on its point. [ 35 ] [ 13 ] This is related to the idea of cosmic censorship . The Kerr geometry is a particular example of a stationary axially symmetric vacuum solution to the Einstein field equation . The family of all stationary axially symmetric vacuum solutions to the Einstein field equation are the Ernst vacuums . The Kerr solution is also related to various non-vacuum solutions which model black holes. For example, the Kerr–Newman electrovacuum models a (rotating) black hole endowed with an electric charge, while the Kerr–Vaidya null dust models a (rotating) hole with infalling electromagnetic radiation. The special case ⁠ a = 0 {\displaystyle a=0} ⁠ of the Kerr metric yields the Schwarzschild metric, which models a nonrotating black hole which is static and spherically symmetric , in the Schwarzschild coordinates . 
(In this case, every Geroch moment but the mass vanishes.) The interior of the Kerr geometry, or rather a portion of it, is locally isometric to the Chandrasekhar–Ferrari CPW vacuum , an example of a colliding plane wave model. This is particularly interesting, because the global structure of this CPW solution is quite different from that of the Kerr geometry, and in principle, an experimenter could hope to study the geometry of (the outer portion of) the Kerr interior by arranging the collision of two suitable gravitational plane waves . Each asymptotically flat Ernst vacuum can be characterized by giving the infinite sequence of relativistic multipole moments , the first two of which can be interpreted as the mass and angular momentum of the source of the field. There are alternative formulations of relativistic multipole moments due to Hansen, Thorne, and Geroch, which turn out to agree with each other. The relativistic multipole moments of the Kerr geometry were computed by Hansen; they turn out to be M ℓ = M (i a) ℓ . Thus, the special case of the Schwarzschild vacuum ( ⁠ a = 0 {\displaystyle a=0} ⁠ ) gives the "monopole point source " of general relativity. [ a ] Weyl multipole moments arise from treating a certain metric function (formally corresponding to Newtonian gravitational potential) which appears in the Weyl–Papapetrou chart for the Ernst family of all stationary axisymmetric vacuum solutions using the standard Euclidean scalar multipole moments . They are distinct from the moments computed by Hansen, above. In a sense, the Weyl moments only (indirectly) characterize the "mass distribution" of an isolated source, and they turn out to depend only on the even order relativistic moments. In the case of solutions symmetric across the equatorial plane the odd order Weyl moments vanish.
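Hansen's moments for the Kerr geometry take the compact complex form M_ℓ = M(ia)^ℓ: the even-ℓ (real) entries are the mass moments and the odd-ℓ (imaginary) entries the angular-momentum moments. This can be unpacked with complex arithmetic (G = c = 1; a sketch):

```python
def hansen_moment(ell, M=1.0, a=0.7):
    """Relativistic multipole moments of the Kerr geometry (Hansen):
    M_ell = M * (i a)^ell, with spin parameter a = J/M in units G = c = 1."""
    return M * (1j * a) ** ell

mass       = hansen_moment(0)  # the monopole is the mass M itself
ang_mom    = hansen_moment(1)  # imaginary part is J = M a, the angular momentum
quadrupole = hansen_moment(2)  # -M a^2: the spin-induced mass quadrupole
```

The nonzero quadrupole for a ≠ 0 is what distinguishes Kerr from the Schwarzschild "monopole point source" discussed above.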
For the Kerr vacuum solutions, the first few Weyl moments are given by In particular, we see that the Schwarzschild vacuum has nonzero second order Weyl moment, corresponding to the fact that the "Weyl monopole" is the Chazy–Curzon vacuum solution, not the Schwarzschild vacuum solution, which arises from the Newtonian potential of a certain finite length uniform density thin rod . In weak field general relativity, it is convenient to treat isolated sources using another type of multipole, which generalize the Weyl moments to mass multipole moments and momentum multipole moments , characterizing respectively the distribution of mass and of momentum of the source. These are multi-indexed quantities whose suitably symmetrized and anti-symmetrized parts can be related to the real and imaginary parts of the relativistic moments for the full nonlinear theory in a rather complicated manner. Perez and Moreschi have given an alternative notion of "monopole solutions" by expanding the standard NP tetrad of the Ernst vacuums in powers of ⁠ r {\displaystyle r} ⁠ (the radial coordinate in the Weyl–Papapetrou chart). According to this formulation: In this sense, the Kerr vacuums are the simplest stationary axisymmetric asymptotically flat vacuum solutions in general relativity. The Kerr geometry is often used as a model of a rotating black hole but if the solution is held to be valid only outside some compact region (subject to certain restrictions), in principle, it should be able to be used as an exterior solution to model the gravitational field around a rotating massive object other than a black hole such as a neutron star , or the Earth. This works out very nicely for the non-rotating case, where the Schwarzschild vacuum exterior can be matched to a Schwarzschild fluid interior, and indeed to more general static spherically symmetric perfect fluid solutions. 
However, the problem of finding a rotating perfect-fluid interior which can be matched to a Kerr exterior, or indeed to any asymptotically flat vacuum exterior solution, has proven very difficult. In particular, the Wahlquist fluid , which was once thought to be a candidate for matching to a Kerr exterior, is now known not to admit any such matching. At present, it seems that only approximate solutions modeling slowly rotating fluid balls are known. (These are the relativistic analogs of oblate spheroidal balls with nonzero mass and angular momentum but vanishing higher multipole moments.) However, the exterior of the Neugebauer–Meinel disk , an exact dust solution which models a rotating thin disk, approaches in a limiting case the ⁠ G M 2 = c J {\displaystyle GM^{2}=cJ} ⁠ Kerr geometry. Physical thin-disk solutions obtained by identifying parts of the Kerr spacetime are also known. [ 36 ]
https://en.wikipedia.org/wiki/Kerr_metric
The Kerr–Newman metric describes the spacetime geometry around a mass which is electrically charged and rotating. It is a vacuum solution which generalizes the Kerr metric (which describes an uncharged, rotating mass) by additionally taking into account the energy of an electromagnetic field , making it the most general asymptotically flat and stationary solution of the Einstein–Maxwell equations in general relativity . As an electrovacuum solution , it only includes those charges associated with the magnetic field; it does not include any free electric charges. Because observed astronomical objects do not possess an appreciable net electric charge [ citation needed ] (the magnetic fields of stars arise through other processes), the Kerr–Newman metric is primarily of theoretical interest. The model lacks description of infalling baryonic matter , light ( null dusts ) or dark matter , and thus provides an incomplete description of stellar mass black holes and active galactic nuclei . The solution however is of mathematical interest and provides a fairly simple cornerstone for further exploration. [ citation needed ] The Kerr–Newman solution is a special case of more general exact solutions of the Einstein–Maxwell equations with non-zero cosmological constant . [ 1 ] In December 1963, Roy Kerr and Alfred Schild found the Kerr–Schild metrics that gave all Einstein spaces that are exact linear perturbations of Minkowski space . In early 1964, Kerr looked for all Einstein–Maxwell spaces with this same property. By February 1964, the special case where the Kerr–Schild spaces were charged (including the Kerr–Newman solution) was known, but the general case, where the special directions were not geodesics of the underlying Minkowski space, proved very difficult. The problem was given to George Debney to try to solve, but was abandoned by March 1964. About this time Ezra T. Newman found the solution for charged Kerr by guesswork.
In 1965, Ezra "Ted" Newman found the axisymmetric solution of Einstein's field equation for a black hole which is both rotating and electrically charged. [ 2 ] [ 3 ] This formula for the metric tensor g μ ν {\displaystyle g_{\mu \nu }\!} is called the Kerr–Newman metric. It is a generalisation of the Kerr metric for an uncharged spinning point-mass, which had been discovered by Roy Kerr two years earlier. [ 4 ] Four related solutions may be summarized by the following table:

                      Non-rotating ( J = 0)     Rotating ( J ≠ 0)
Uncharged ( Q = 0)    Schwarzschild             Kerr
Charged ( Q ≠ 0)      Reissner–Nordström        Kerr–Newman

where Q represents the body's electric charge and J represents its spin angular momentum . Newman's result represents the simplest stationary , axisymmetric , asymptotically flat solution of Einstein's equations in the presence of an electromagnetic field in four dimensions. It is sometimes referred to as an "electrovacuum" solution of Einstein's equations. Any Kerr–Newman source has its rotation axis aligned with its magnetic axis. [ 5 ] Thus, a Kerr–Newman source is different from commonly observed astronomical bodies, for which there is a substantial angle between the rotation axis and the magnetic moment . [ 6 ] Specifically, neither the Sun nor any of the planets in the Solar System has a magnetic field aligned with its spin axis. Thus, while the Kerr solution describes the gravitational field of the Sun and planets, the magnetic fields arise by a different process. If the Kerr–Newman potential is considered as a model for a classical electron, it predicts an electron having not just a magnetic dipole moment, but also other multipole moments, such as an electric quadrupole moment. [ 7 ] An electron quadrupole moment has not yet been experimentally detected; it appears to be zero. [ 7 ] In the G = 0 limit, the electromagnetic fields are those of a charged rotating disk inside a ring where the fields are infinite. The total field energy for this disk is infinite, and so this G = 0 limit does not solve the problem of infinite self-energy .
[ 8 ] Like the Kerr metric for an uncharged rotating mass, the Kerr–Newman interior solution exists mathematically but is probably not representative of the actual metric of a physically realistic rotating black hole, due to issues with the stability of the Cauchy horizon arising from mass inflation driven by infalling matter. Although it represents a generalization of the Kerr metric, it is not considered as very important for astrophysical purposes, since one does not expect that realistic black holes have a significant electric charge (they are expected to have a minuscule positive charge, but only because the proton has a much larger momentum than the electron, and is thus more likely to overcome electrostatic repulsion and be carried by momentum across the horizon). The Kerr–Newman metric defines a black hole with an event horizon only when the combined charge and angular momentum are sufficiently small, [ 9 ] i.e. when a 2 + Q 2 ≤ M 2 {\displaystyle a^{2}+Q^{2}\leq M^{2}} in geometrized units , with a = J / M . An electron's angular momentum J and charge Q (suitably specified in geometrized units ) both exceed its mass M , in which case the metric has no event horizon. Thus, there can be no such thing as a black hole electron — only a naked spinning ring singularity . [ 10 ] Such a metric has several seemingly unphysical properties, such as the ring's violation of the cosmic censorship hypothesis , and also the appearance of causality-violating closed timelike curves in the immediate vicinity of the ring. [ 11 ] A 2009 paper by Russian theorist Alexander Burinskii considered an electron as a generalization of the previous models by Israel (1970) [ 12 ] and Lopez (1984), [ 13 ] which truncated the "negative" sheet of the Kerr–Newman metric, obtaining the source of the Kerr–Newman solution in the form of a relativistically rotating disk. Lopez's truncation regularized the Kerr–Newman metric by a cutoff at r = r e = e 2 / 2 M {\displaystyle r=r_{e}=e^{2}/2M} , replacing the singularity by a flat regular space-time, the so-called "bubble".
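The claim that an electron is super-extremal can be made concrete with an order-of-magnitude check. The sketch below converts the electron's mass, spin and charge into geometrized lengths using approximate SI constants (the constant values are rounded, not taken from the article):

```python
import math

# Geometrized (length) parameters of an electron, approximate SI constants.
G    = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c    = 2.998e8     # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k_e  = 8.988e9     # Coulomb constant, N m^2 C^-2
m_e  = 9.109e-31   # electron mass, kg
q_e  = 1.602e-19   # elementary charge, C

M = G * m_e / c**2                   # mass as a length   (~7e-58 m)
a = (hbar / 2) / (m_e * c)           # spin a = J/(M c)   (~2e-13 m)
Q = q_e * math.sqrt(k_e * G) / c**2  # charge as a length (~1e-36 m)

# Both a and Q dwarf M, so M^2 < a^2 + Q^2: no event horizon exists.
assert a > M and Q > M
assert M**2 < a**2 + Q**2
```

The horizon condition fails by tens of orders of magnitude, which is why the electron parameters describe a naked ring singularity rather than a black hole.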
Assuming that the Lopez bubble corresponds to a phase transition similar to the Higgs symmetry breaking mechanism, Burinskii showed that the gravity-created ring singularity is regularized, forming the superconducting core of the electron model, [ 14 ] and should be described by the supersymmetric Landau–Ginzburg field model of phase transition: Omitting Burinskii's intermediate work, we come to the recent new proposal: to consider the negative sheet of the KN solution, truncated as by Israel and Lopez, as the sheet of the positron. [ 15 ] This modification unites the KN solution with the model of QED, and shows the important role of the Wilson lines formed by frame-dragging of the vector potential. As a result, the modified KN solution acquires a strong interaction with Kerr's gravity caused by the additional energy contribution of the electron-positron vacuum, and creates the Kerr–Newman relativistic circular string of Compton size. The Kerr–Newman metric can be seen to reduce to other exact solutions in general relativity in limiting cases. It reduces to the Kerr metric as the charge Q goes to zero, to the Reissner–Nordström metric as the spin J goes to zero, and to the Schwarzschild metric when both vanish. Alternately, if gravity is intended to be removed, Minkowski space arises if the gravitational constant G is zero, without taking the mass and charge to zero. In this case, the electric and magnetic fields are more complicated than simply the fields of a charged magnetic dipole ; the zero-gravity limit is not trivial. [ citation needed ] The Kerr–Newman metric describes the geometry of spacetime for a rotating charged black hole with mass M , charge Q and angular momentum J . The formula for this metric depends upon what coordinates or coordinate conditions are selected. Two forms are given below: Boyer–Lindquist coordinates, and Kerr–Schild coordinates. The gravitational metric alone is not sufficient to determine a solution to the Einstein field equations; the electromagnetic stress tensor must be given as well. Both are provided in each section.
One way to express this metric is by writing down its line element in a particular set of spherical coordinates , [ 16 ] also called Boyer–Lindquist coordinates : where the coordinates ( r , θ , ϕ ) are standard spherical coordinates, and the length scales a = J / ( M c ) , r s and r Q have been introduced for brevity. Here r s is the Schwarzschild radius of the massive body, which is related to its total mass-equivalent M by r s = 2 G M / c 2 {\displaystyle r_{s}=2GM/c^{2}} , where G is the gravitational constant , and r Q is a length scale corresponding to the electric charge Q of the mass, r Q 2 = Q 2 G / ( 4 π ε 0 c 4 ) {\displaystyle r_{Q}^{2}=Q^{2}G/(4\pi \varepsilon _{0}c^{4})} , where ε 0 is the vacuum permittivity . The electromagnetic potential in Boyer–Lindquist coordinates is [ 17 ] [ 18 ] while the Maxwell tensor is defined by In combination with the Christoffel symbols , the second-order equations of motion can be derived, where q {\displaystyle q} is the charge per mass of the test particle. The Kerr–Newman metric can be expressed in the Kerr–Schild form, using a particular set of Cartesian coordinates , proposed by Kerr and Schild in 1965. The metric is as follows. [ 19 ] [ 20 ] [ 21 ] Notice that k is a unit vector . Here M is the constant mass of the spinning object, Q is the constant charge of the spinning object, η is the Minkowski metric , and a = J / M is a constant rotational parameter of the spinning object. It is understood that the vector a → {\displaystyle {\vec {a}}} is directed along the positive z-axis, i.e. a → = a z ^ {\displaystyle {\vec {a}}=a{\hat {z}}} . The quantity r is not the radius, but rather is implicitly defined by the relation Notice that the quantity r becomes the usual radius R when the rotational parameter a approaches zero. In this form of solution, units are selected so that the speed of light is unity ( c = 1).
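For reference, the Boyer–Lindquist line element discussed above takes, in the notation standard in the literature, the following form (a sketch using the usual shorthands and the (−, +, +, +) signature convention; this is the conventional expression, not a quotation):

```latex
\begin{aligned}
ds^{2} &= -\frac{\Delta}{\rho^{2}}\left(c\,dt - a\sin^{2}\theta\,d\phi\right)^{2}
          + \frac{\sin^{2}\theta}{\rho^{2}}\left[\left(r^{2}+a^{2}\right)d\phi - a\,c\,dt\right]^{2}
          + \frac{\rho^{2}}{\Delta}\,dr^{2} + \rho^{2}\,d\theta^{2},\\[4pt]
a &= \frac{J}{Mc}, \qquad
\rho^{2} = r^{2} + a^{2}\cos^{2}\theta, \qquad
\Delta = r^{2} - r_{s}\,r + a^{2} + r_{Q}^{2}.
\end{aligned}
```

Setting a = r_Q = 0 collapses Δ/ρ² to 1 − r_s/r and recovers the Schwarzschild line element, as expected.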
In order to provide a complete solution of the Einstein–Maxwell equations , the Kerr–Newman solution not only includes a formula for the metric tensor, but also a formula for the electromagnetic potential: [ 19 ] [ 22 ] At large distances from the source ( R ≫ a ), these equations reduce to the Reissner–Nordström metric with: In the Kerr–Schild form of the Kerr–Newman metric, the determinant of the metric tensor is everywhere equal to negative one, even near the source. [ 1 ] The electric and magnetic fields can be obtained in the usual way by differentiating the four-potential to obtain the electromagnetic field strength tensor . It will be convenient to switch over to three-dimensional vector notation. The static electric and magnetic fields are derived from the vector potential and the scalar potential like this: Using the Kerr–Newman formula for the four-potential in the Kerr–Schild form, in the limit of the mass going to zero, yields the following concise complex formula for the fields: [ 23 ] The quantity omega ( Ω {\displaystyle \Omega } ) in this last equation is similar to the Coulomb potential , except that the radius vector is shifted by an imaginary amount. This complex potential was discussed as early as the nineteenth century, by the French mathematician Paul Émile Appell . [ 24 ] The total mass-equivalent M , which contains the electric field-energy and the rotational energy , and the irreducible mass M irr are related by [ 25 ] [ 26 ] which can be inverted to obtain In order to electrically charge and/or spin a neutral and static body, energy has to be applied to the system. Due to the mass–energy equivalence , this energy also has a mass-equivalent; therefore M is always higher than M irr . If for example the rotational energy of a black hole is extracted via the Penrose processes , [ 27 ] [ 28 ] the remaining mass–energy will always stay greater than or equal to M irr . 
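The relation quoted above between the total mass-equivalent and the irreducible mass is the Christodoulou–Ruffini formula. In geometrized units (G = c = 1) it reads M² = (M_irr + Q²/(4 M_irr))² + J²/(4 M_irr²), with M_irr² = (r₊² + a²)/4 defined from the outer horizon. A small numerical sketch (the parameter values are arbitrary) verifies the identity:

```python
import math

# Check M^2 = (M_irr + Q^2/(4 M_irr))^2 + J^2/(4 M_irr^2), geometrized units.
M, a, Q = 1.0, 0.5, 0.3
J = a * M

# Outer horizon and irreducible mass from the standard definitions
r_plus = M + math.sqrt(M**2 - a**2 - Q**2)
M_irr = math.sqrt((r_plus**2 + a**2) / 4)

lhs = M**2
rhs = (M_irr + Q**2 / (4 * M_irr))**2 + J**2 / (4 * M_irr**2)
assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly because r₊ satisfies r₊² − 2 M r₊ + a² + Q² = 0, so the two formulations of M_irr are algebraically equivalent.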
Setting 1 / g r r {\displaystyle 1/g_{rr}} to 0 and solving for r {\displaystyle r} gives the inner and outer event horizon , which is located at the Boyer–Lindquist coordinate Repeating this step with g t t {\displaystyle g_{tt}} gives the inner and outer ergosphere For brevity, we further use nondimensionalized quantities normalized against G {\displaystyle G} , M {\displaystyle M} , c {\displaystyle c} and 4 π ϵ 0 {\displaystyle 4\pi \epsilon _{0}} , where a {\displaystyle a} reduces to J c / G / M 2 {\displaystyle Jc/G/M^{2}} and Q {\displaystyle Q} to Q / ( M 4 π ϵ 0 G ) {\textstyle Q/(M{\sqrt {4\pi \epsilon _{0}G}})} , and the equations of motion for a test particle of charge q {\displaystyle q} become [ 29 ] [ 30 ] with E {\displaystyle E} for the total energy and L z {\displaystyle L_{z}} for the axial angular momentum. C {\displaystyle C} is the Carter constant : where p θ = θ ˙ ρ 2 {\displaystyle p_{\theta }={\dot {\theta }}\ \rho ^{2}} is the poloidal component of the test particle's angular momentum, and δ {\displaystyle \delta } the orbital inclination angle. These, with μ 2 = 0 {\displaystyle \mu ^{2}=0} for massless and μ 2 = 1 {\displaystyle \mu ^{2}=1} for massive particles, are also conserved quantities. is the frame-dragging induced angular velocity. The shorthand term χ {\displaystyle \chi } is defined by The relation between the coordinate derivatives r ˙ , θ ˙ , ϕ ˙ {\displaystyle {\dot {r}},\ {\dot {\theta }},\ {\dot {\phi }}} and the local 3-velocity v {\displaystyle v} is for the radial, for the poloidal, for the axial and for the total local velocity, where is the axial radius of gyration (local circumference divided by 2π), and the gravitational time dilation component. The local radial escape velocity for a neutral particle is therefore
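In the dimensionless units used above (G = c = M = 1, charge rescaled), the horizon and ergosurface radii obtained from 1/g_rr = 0 and g_tt = 0 have the standard closed forms r± = 1 ± √(1 − a² − Q²) and r_E± = 1 ± √(1 − a² cos²θ − Q²). A short sketch with arbitrary parameter values:

```python
import math

def horizons(a, Q):
    # Inner and outer event horizon radii (Boyer–Lindquist, M = 1)
    d = math.sqrt(1 - a**2 - Q**2)
    return 1 - d, 1 + d

def ergosurfaces(a, Q, theta):
    # Inner and outer ergosurface radii; depend on the polar angle theta
    d = math.sqrt(1 - a**2 * math.cos(theta)**2 - Q**2)
    return 1 - d, 1 + d

a, Q = 0.6, 0.4
r_minus, r_plus = horizons(a, Q)

# On the rotation axis (theta = 0) the outer ergosurface touches the outer
# horizon; in the equatorial plane it lies strictly outside it.
assert abs(ergosurfaces(a, Q, 0.0)[1] - r_plus) < 1e-12
assert ergosurfaces(a, Q, math.pi / 2)[1] > r_plus
```

The region between the outer horizon and the outer ergosurface is the ergosphere, where frame dragging forces all observers to corotate.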
https://en.wikipedia.org/wiki/Kerr–Newman_metric
The Kerr–Newman–de–Sitter metric (KNdS) [ 1 ] [ 2 ] is one of the most general stationary solutions of the Einstein–Maxwell equations in general relativity that describes the spacetime geometry in the region surrounding an electrically charged, rotating mass embedded in an expanding universe. It generalizes the Kerr–Newman metric by taking into account the cosmological constant Λ {\displaystyle \Lambda } . In those coordinates the local clocks and rulers are at constant r {\displaystyle {\rm {r}}} and have no local orbital angular momentum ( L z = 0 ) {\displaystyle {\rm {(L_{z}=0)}}} , therefore they are corotating with the frame-dragging velocity relative to the fixed stars. In (+, −, −, −) signature and in natural units of G = M = c = k e = 1 {\displaystyle {\rm {G=M=c=k_{e}=1}}} the KNdS metric is [ 3 ] [ 4 ] [ 5 ] [ 6 ] g t t = − 3 [ a 2 sin 2 ⁡ θ ( a 2 Λ cos 2 ⁡ θ + 3 ) + a 2 ( Λ r 2 − 3 ) + Λ r 4 − 3 r 2 + 6 r − 3 ℧ 2 ] ( a 2 Λ + 3 ) 2 ( a 2 cos 2 ⁡ θ + r 2 ) {\displaystyle g_{\rm {tt}}={\rm {-{\frac {3\ [a^{2}\ \sin ^{2}\theta \left(a^{2}\ \Lambda \ \cos ^{2}\theta +3\right)+a^{2}\left(\Lambda \ r^{2}-3\right)+\Lambda \ r^{4}-3\ r^{2}+6\ r-3\mho ^{2}]}{\left(a^{2}\ \Lambda +3\right)^{2}\left(a^{2}\cos ^{2}\theta +r^{2}\right)}}}}} g r r = − a 2 cos 2 ⁡ θ + r 2 ( a 2 + r 2 ) ( 1 − Λ r 2 3 ) − 2 r + ℧ 2 {\displaystyle g_{\rm {rr}}={\rm {-{\frac {a^{2}\ \cos ^{2}\theta +r^{2}}{\left(a^{2}+r^{2}\right)\left(1-{\frac {\Lambda \ r^{2}}{3}}\right)-2\ r+\mho ^{2}}}}}} g θ θ = − 3 ( a 2 cos 2 ⁡ θ + r 2 ) a 2 Λ cos 2 ⁡ θ + 3 {\displaystyle g_{\rm {\theta \theta }}={\rm {-{\frac {3\left(a^{2}\ \cos ^{2}\theta +r^{2}\right)}{a^{2}\ \Lambda \ \cos ^{2}\theta +3}}}}} g ϕ ϕ = 9 { 1 3 ( a 2 + r 2 ) 2 sin 2 ⁡ θ ( a 2 Λ cos 2 ⁡ θ + 3 ) − a 2 sin 4 ⁡ θ [ ( a 2 + r 2 ) ( 1 − Λ r 2 / 3 ) − 2 r + ℧ 2 ] } − ( a 2 Λ + 3 ) 2 ( a 2 cos 2 ⁡ θ + r 2 ) {\displaystyle g_{\rm {\phi \phi }}={\rm {\frac {9\ \{{\frac {1}{3}}\left(a^{2}+r^{2}\right)^{2}\sin ^{2}\theta \left(a^{2}\ \Lambda 
\cos ^{2}\theta +3\right)-a^{2}\sin ^{4}\theta \ [\left(a^{2}+r^{2}\right)\left(1-\Lambda \ r^{2}/3\right)-2\ r+\mho ^{2}]\}}{-\left(a^{2}\ \Lambda +3\right)^{2}\left(a^{2}\cos ^{2}\theta +r^{2}\right)}}}} g t ϕ = 3 a sin 2 ⁡ θ [ a 2 Λ ( a 2 + r 2 ) cos 2 ⁡ θ + a 2 Λ r 2 + Λ r 4 + 6 r − 3 ℧ 2 ] ( a 2 Λ + 3 ) 2 ( a 2 cos 2 ⁡ θ + r 2 ) {\displaystyle g_{\rm {t\phi }}={\rm {\frac {3\ a\ \sin ^{2}\theta \ [a^{2}\ \Lambda \left(a^{2}+r^{2}\right)\cos ^{2}\theta +a^{2}\ \Lambda \ r^{2}+\Lambda \ r^{4}+6\ r-3\ \mho ^{2}]}{\left(a^{2}\ \Lambda +3\right)^{2}\left(a^{2}\ \cos ^{2}\theta +r^{2}\right)}}}} with all the other metric tensor components g μ ν = 0 {\displaystyle g_{\mu \nu }=0} , where a {\displaystyle {\rm {a}}} is the black hole's spin parameter, ℧ {\displaystyle {\rm {\mho }}} its electric charge, and Λ = 3 H 2 {\displaystyle {\rm {\Lambda =3H^{2}}}} [ 7 ] the cosmological constant with H {\displaystyle {\rm {H}}} as the time-independent Hubble parameter . The electromagnetic 4-potential is A μ = { 3 r ℧ ( a 2 Λ + 3 ) ( a 2 cos 2 ⁡ θ + r 2 ) , 0 , 0 , − 3 a r ℧ sin 2 ⁡ θ ( a 2 Λ + 3 ) ( a 2 cos 2 ⁡ θ + r 2 ) } {\displaystyle {\rm {A_{\mu }=\left\{{\frac {3\ r\ \mho }{\left(a^{2}\ \Lambda +3\right)\left(a^{2}\ \cos ^{2}\theta +r^{2}\right)}},\ 0,\ 0,\ -{\frac {3\ a\ r\ \mho \ \sin ^{2}\theta }{\left(a^{2}\ \Lambda +3\right)\left(a^{2}\ \cos ^{2}\theta +r^{2}\right)}}\right\}}}} The frame-dragging angular velocity is ω = d ϕ d t = − g t ϕ g ϕ ϕ = a [ a 2 Λ ( a 2 + r 2 ) cos 2 ⁡ θ + a 2 Λ r 2 + 6 r + Λ r 4 − 3 ℧ 2 ] a 2 sin 2 ⁡ θ [ a 2 ( Λ r 2 − 3 ) + 6 r + Λ r 4 − 3 r 2 − 3 ℧ 2 ] + a 2 Λ ( a 2 + r 2 ) 2 cos 2 ⁡ θ + 3 ( a 2 + r 2 ) 2 {\displaystyle \omega ={\frac {\rm {d\phi }}{\rm {dt}}}=-{\frac {g_{\rm {t\phi }}}{g_{\rm {\phi \phi }}}}={\rm {\frac {a\ [a^{2}\ \Lambda \left(a^{2}+r^{2}\right)\cos ^{2}\theta +a^{2}\ \Lambda \ r^{2}+6\ r+\Lambda \ r^{4}-3\ \mho ^{2}]}{a^{2}\ \sin ^{2}\theta \ [a^{2}\left(\Lambda \ r^{2}-3\right)+6\ r+\Lambda \ r^{4}-3\ r^{2}-3\ \mho 
^{2}]+a^{2}\ \Lambda \ \left(a^{2}+r^{2}\right)^{2}\cos ^{2}\theta +3\ \left(a^{2}+r^{2}\right)^{2}}}}} and the local frame-dragging velocity relative to constant { r , θ , ϕ } {\displaystyle {\rm {\{r,\theta ,\phi \}}}} positions (the speed of light at the ergosphere ) ν = g t ϕ g t ϕ = − a 2 sin 2 ⁡ θ [ a 2 Λ ( a 2 + r 2 ) cos 2 ⁡ θ + a 2 Λ r 2 + 6 r + Λ r 4 − 3 ℧ 2 ] 2 ( a 2 Λ cos 2 ⁡ θ + 3 ) ( a 2 + r 2 − a 2 sin 2 ⁡ θ ) 2 [ a 2 ( Λ r 2 − 3 ) + 6 r + Λ r 4 − 3 r 2 − 3 ℧ 2 ] {\displaystyle \nu ={\sqrt {g_{\rm {t\phi }}\ g^{\rm {t\phi }}}}={\rm {\sqrt {-{\frac {a^{2}\ \sin ^{2}\theta \ [a^{2}\ \Lambda \left(a^{2}+r^{2}\right)\cos ^{2}\theta +a^{2}\Lambda \ r^{2}+6\ r+\Lambda \ r^{4}-3\ \mho ^{2}]^{2}}{\left(a^{2}\ \Lambda \ \cos ^{2}\theta +3\right)\left(a^{2}+r^{2}-a^{2}\sin ^{2}\theta \right)^{2}[a^{2}\left(\Lambda \ r^{2}-3\right)+6\ r+\Lambda \ r^{4}-3\ r^{2}-3\ \mho ^{2}]}}}}}} The escape velocity (the speed of light at the horizons) relative to the local corotating zero-angular momentum observer is v = 1 − 1 / g t t = 3 ( a 2 Λ cos 2 ⁡ θ + 3 ) ( a 2 + r 2 − a 2 sin 2 ⁡ θ ) 2 [ a 2 ( Λ r 2 − 3 ) + Λ r 4 − 3 r 2 + 6 r − 3 ℧ 2 ] ( a 2 Λ + 3 ) 2 ( a 2 cos 2 ⁡ θ + r 2 ) { a 2 Λ ( a 2 + r 2 ) 2 cos 2 ⁡ θ + 3 ( a 2 + r 2 ) 2 + a 2 sin 2 ⁡ θ [ a 2 ( Λ r 2 − 3 ) + Λ r 4 − 3 r 2 + 6 r − 3 ℧ 2 ] } + 1 {\displaystyle {\rm {v}}={\sqrt {1-1/g^{\rm {tt}}}}={\rm {\sqrt {{\frac {3\left(a^{2}\Lambda \cos ^{2}\theta +3\right)\left(a^{2}+r^{2}-a^{2}\sin ^{2}\theta \right)^{2}\left[a^{2}\left(\Lambda r^{2}-3\right)+\Lambda r^{4}-3r^{2}+6r-3\mho ^{2}\right]}{\left(a^{2}\Lambda +3\right)^{2}\left(a^{2}\cos ^{2}\theta +r^{2}\right)\{a^{2}\Lambda \left(a^{2}+r^{2}\right)^{2}\cos ^{2}\theta +3\left(a^{2}+r^{2}\right)^{2}+a^{2}\sin ^{2}\theta \left[a^{2}\left(\Lambda r^{2}-3\right)+\Lambda r^{4}-3r^{2}+6r-3\mho ^{2}\right]\}}}+1}}}} The conserved quantities in the equations of motion x ¨ μ = − ∑ α , β ( Γ α β μ x ˙ α x ˙ β + q F μ β x ˙ α g α β ) {\displaystyle {\rm {{\ddot {x}}^{\mu 
}=-\sum _{\alpha ,\beta }\ (\Gamma _{\alpha \beta }^{\mu }\ {\dot {x}}^{\alpha }\ {\dot {x}}^{\beta }+q\ {\rm {F}}^{\mu \beta }\ {\rm {\dot {x}}}^{\alpha }}}\ g_{\alpha \beta })} where x ˙ {\displaystyle {\rm {\dot {x}}}} is the four velocity , q {\displaystyle {\rm {q}}} is the test particle's specific charge and F {\displaystyle {\rm {F}}} the Maxwell–Faraday tensor F μ ν = ∂ A μ ∂ x ν − ∂ A ν ∂ x μ {\displaystyle {\rm {{\ F}_{\mu \nu }={\frac {\partial A_{\mu }}{\partial x^{\nu }}}-{\frac {\partial A_{\nu }}{\partial x^{\mu }}}}}} are the total energy E = − p t = g t t t ˙ + g t ϕ ϕ ˙ + q A t {\displaystyle {\rm {E=-p_{t}}}=g_{\rm {tt}}{\rm {\dot {t}}}+g_{\rm {t\phi }}{\rm {\dot {\phi }}}+{\rm {q\ A_{t}}}} and the covariant axial angular momentum L z = p ϕ = − g ϕ ϕ ϕ ˙ − g t ϕ t ˙ − q A ϕ {\displaystyle {\rm {L_{z}=p_{\phi }}}=-g_{\rm {\phi \phi }}{\rm {\dot {\phi }}}-g_{\rm {t\phi }}{\rm {\dot {t}}}-{\rm {q\ A_{\phi }}}} The overdot stands for differentiation with respect to the test particle's proper time τ {\displaystyle \tau } or the photon's affine parameter , so x ˙ = d x / d τ , x ¨ = d 2 x / d τ 2 {\displaystyle {\rm {{\dot {x}}=dx/d\tau ,\ {\ddot {x}}=d^{2}x/d\tau ^{2}}}} .
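One consistency check on the metric components given earlier: in the Λ → 0 limit, the KNdS g_tt should reduce to the Kerr–Newman form 1 − (2r − ℧²)/ρ² in the same natural units. A numerical sketch (test values are arbitrary):

```python
import math

def gtt_knds(r, th, a, charge, lam):
    # g_tt of the KNdS metric in natural units (G = M = c = k_e = 1)
    rho2 = a**2 * math.cos(th)**2 + r**2
    num = (a**2 * math.sin(th)**2 * (a**2 * lam * math.cos(th)**2 + 3)
           + a**2 * (lam * r**2 - 3)
           + lam * r**4 - 3 * r**2 + 6 * r - 3 * charge**2)
    return -3 * num / ((a**2 * lam + 3)**2 * rho2)

def gtt_kn(r, th, a, charge):
    # Kerr–Newman g_tt in the same units and (+,-,-,-) signature
    rho2 = a**2 * math.cos(th)**2 + r**2
    return 1 - (2 * r - charge**2) / rho2

r, th, a, q = 5.0, 1.0, 0.7, 0.3
assert abs(gtt_knds(r, th, a, q, 0.0) - gtt_kn(r, th, a, q)) < 1e-12
```

With Λ = 0 the factor (a²Λ + 3)² reduces to 9 and the numerator factors so that the cosmological terms drop out, leaving exactly the Kerr–Newman component.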
To get g r r = 0 {\displaystyle g_{\rm {rr}}=0} coordinates we apply the transformation d t = d u − d r ( a 2 Λ / 3 + 1 ) ( a 2 + r 2 ) ( a 2 + r 2 ) ( 1 − Λ r 2 / 3 ) − 2 r + ℧ 2 {\displaystyle {\rm {dt=du-{\frac {dr\left(a^{2}\ \Lambda /3+1\right)\left(a^{2}+r^{2}\right)}{\left(a^{2}+r^{2}\right)\left(1-\Lambda \ r^{2}/3\right)-2\ r+\mho ^{2}}}}}} d ϕ = d φ − a d r ( a 2 Λ / 3 + 1 ) ( a 2 + r 2 ) ( 1 − Λ r 2 / 3 ) − 2 r + ℧ 2 {\displaystyle {\rm {d\phi =d\varphi -{\frac {a\ dr\left(a^{2}\ \Lambda /3+1\right)}{\left(a^{2}+r^{2}\right)\left(1-\Lambda \ r^{2}/3\right)-2\ r+\mho ^{2}}}}}} and get the metric coefficients g u r = − 3 a 2 Λ + 3 {\displaystyle g_{\rm {ur}}={\rm {-{\frac {3}{a^{2}\ \Lambda +3}}}}} g r φ = 3 a sin 2 ⁡ θ a 2 Λ + 3 {\displaystyle g_{\rm {r\varphi }}={\rm {\frac {3\ a\sin ^{2}\theta }{a^{2}\ \Lambda +3}}}} g u u = g t t , g θ θ = g θ θ , g φ φ = g ϕ ϕ , g u φ = g t ϕ {\displaystyle g_{\rm {uu}}=g_{\rm {tt}}\ ,\ \ g_{\theta \theta }=g_{\theta \theta }\ ,\ \ g_{\rm {\varphi \varphi }}=g_{\rm {\phi \phi }}\ ,\ \ g_{\rm {u\varphi }}=g_{\rm {t\phi }}} and all the other g μ ν = 0 {\displaystyle g_{\mu \nu }=0} , with the electromagnetic vector potential A μ = { 3 r ℧ ( a 2 Λ + 3 ) ( a 2 cos 2 ⁡ θ + r 2 ) , 3 r ℧ a 2 ( Λ r 2 − 3 ) + 6 r + Λ r 4 − 3 ( r 2 + ℧ 2 ) , 0 , − 3 a r ℧ sin 2 ⁡ θ ( a 2 Λ + 3 ) ( a 2 cos 2 ⁡ θ + r 2 ) } {\displaystyle {\rm {A_{\mu }=\left\{{\frac {3\ r\ \mho }{\left(a^{2}\ \Lambda +3\right)\left(a^{2}\cos ^{2}\theta +r^{2}\right)}},{\frac {3\ r\ \mho }{a^{2}\left(\Lambda \ r^{2}-3\right)+6\ r+\Lambda \ r^{4}-3\left(r^{2}+\mho ^{2}\right)}},\ 0,\ -{\frac {3\ a\ r\ \mho \sin ^{2}\theta }{\left(a^{2}\ \Lambda +3\right)\left(a^{2}\cos ^{2}\theta +r^{2}\right)}}\right\}}}} Defining t ¯ = u − r {\displaystyle {\rm {{\bar {t}}=u-r}}} ingoing lightlike worldlines give a 45 ∘ {\displaystyle 45^{\circ }} light cone on a { t ¯ , r } {\displaystyle \{{\rm {{\bar {t}},\ r\}}}} spacetime diagram . 
The horizons are at g r r = 0 {\displaystyle g^{\rm {rr}}=0} and the ergospheres at g t t = 0 {\displaystyle g_{\rm {tt}}=0} (respectively g u u = 0 {\displaystyle g_{\rm {uu}}=0} in the ingoing coordinates). This can be solved numerically or analytically. Like in the Kerr and Kerr–Newman metrics, the horizons have constant Boyer–Lindquist r {\displaystyle {\rm {r}}} , while the ergospheres' radii also depend on the polar angle θ {\displaystyle \theta } . This gives 3 positive solutions each (including the black hole's inner and outer horizons and ergospheres as well as the cosmic ones) and a negative solution for the space at r < 0 {\displaystyle {\rm {r<0}}} in the antiverse [ 8 ] [ 9 ] behind the ring singularity , which is part of the probably unphysical extended solution of the metric. With a negative Λ {\displaystyle \Lambda } (the anti–de–Sitter variant with an attractive cosmological constant), there is no cosmic horizon or ergosphere, only the black-hole-related ones. In the Nariai limit [ 10 ] the black hole's outer horizon and ergosphere coincide with the cosmic ones (in the Schwarzschild–de–Sitter metric, to which the KNdS reduces with a = ℧ = 0 {\displaystyle {\rm {a=\mho =0}}} , that would be the case when Λ = 1 / 9 {\displaystyle \Lambda =1/9} ).
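The Nariai limit mentioned above can be illustrated in the Schwarzschild–de–Sitter special case (a = ℧ = 0, natural units with M = 1), where the horizon condition reduces to the vanishing of Δ_r(r) = r²(1 − Λr²/3) − 2r. A sketch of the degeneracy at Λ = 1/9:

```python
# Schwarzschild–de Sitter horizon function, natural units with M = 1.
# At the Nariai value lam = 1/9 it factors as -(r/27)(r - 3)^2 (r + 6),
# so the black-hole and cosmological horizons merge at r = 3.
lam = 1.0 / 9.0

def delta_r(r):
    return r**2 * (1 - lam * r**2 / 3) - 2 * r

eps = 1e-6
assert abs(delta_r(3.0)) < 1e-10
# Double root: the derivative of delta_r also vanishes at r = 3.
assert abs((delta_r(3.0 + eps) - delta_r(3.0 - eps)) / (2 * eps)) < 1e-4
```

For 0 < Λ < 1/9 the two positive roots are distinct, giving separate black-hole and cosmological horizons; above the Nariai value no black-hole horizon exists.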
The Ricci scalar for the KNdS metric is R = − 4 Λ {\displaystyle {\rm {R=-4\Lambda }}} , and the Kretschmann scalar is K = { 220 a 12 Λ 2 cos ⁡ ( 6 θ ) + 66 a 12 Λ 2 cos ⁡ ( 8 θ ) + 12 a 12 Λ 2 cos ⁡ ( 10 θ ) + a 12 Λ 2 cos ⁡ ( 12 θ ) + {\displaystyle {\rm {K=\{220a^{12}\Lambda ^{2}\cos(6\theta )+66a^{12}\Lambda ^{2}\cos(8\theta )+12a^{12}\Lambda ^{2}\cos(10\theta )+a^{12}\Lambda ^{2}\cos(12\theta )+}}} 462 a 12 Λ 2 + 1080 a 10 Λ 2 r 2 cos ⁡ ( 6 θ ) + 240 a 10 Λ 2 r 2 cos ⁡ ( 8 θ ) + 24 a 10 Λ 2 r 2 cos ⁡ ( 10 θ ) + {\displaystyle {\rm {462a^{12}\Lambda ^{2}+1080a^{10}\Lambda ^{2}r^{2}\cos(6\theta )+240a^{10}\Lambda ^{2}r^{2}\cos(8\theta )+24a^{10}\Lambda ^{2}r^{2}\cos(10\theta )+}}} 3024 a 10 Λ 2 r 2 + 1920 a 8 Λ 2 r 4 cos ⁡ ( 6 θ ) + 240 a 8 Λ 2 r 4 cos ⁡ ( 8 θ ) + 8400 a 8 Λ 2 r 4 − {\displaystyle {\rm {3024a^{10}\Lambda ^{2}r^{2}+1920a^{8}\Lambda ^{2}r^{4}\cos(6\theta )+240a^{8}\Lambda ^{2}r^{4}\cos(8\theta )+8400a^{8}\Lambda ^{2}r^{4}-}}} 1152 a 6 cos ⁡ ( 6 θ ) − 11520 a 6 + 1280 a 6 Λ 2 r 6 cos ⁡ ( 6 θ ) + 12800 a 6 Λ 2 r 6 + 207360 a 4 r 2 − {\displaystyle {\rm {1152a^{6}\cos(6\theta )-11520a^{6}+1280a^{6}\Lambda ^{2}r^{6}\cos(6\theta )+12800a^{6}\Lambda ^{2}r^{6}+207360a^{4}r^{2}-}}} 138240 a 4 r ℧ 2 + 11520 a 4 Λ 2 r 8 + 16128 a 4 ℧ 4 − 276480 a 2 r 4 + 368640 a 2 r 3 ℧ 2 + {\displaystyle {\rm {138240a^{4}r\mho ^{2}+11520a^{4}\Lambda ^{2}r^{8}+16128a^{4}\mho ^{4}-276480a^{2}r^{4}+368640a^{2}r^{3}\mho ^{2}+}}} 6144 a 2 Λ 2 r 10 − 104448 a 2 r 2 ℧ 4 + 3 a 4 cos ⁡ ( 4 θ ) [ 165 a 8 Λ 2 + 960 a 6 Λ 2 r 2 + 2240 a 4 Λ 2 r 4 − {\displaystyle {\rm {6144a^{2}\Lambda ^{2}r^{10}-104448a^{2}r^{2}\mho ^{4}+3a^{4}\cos(4\theta )[165a^{8}\Lambda ^{2}+960a^{6}\Lambda ^{2}r^{2}+2240a^{4}\Lambda ^{2}r^{4}-}}} 256 a 2 ( 9 − 10 Λ 2 r 6 ) + 256 ( 90 r 2 − 60 r ℧ 2 + 5 Λ 2 r 8 + 7 ℧ 4 ) ] + 24 a 2 cos ⁡ ( 2 θ ) [ 33 a 10 Λ 2 + {\displaystyle {\rm {256a^{2}(9-10\Lambda ^{2}r^{6})+256(90r^{2}-60r\mho ^{2}+5\Lambda ^{2}r^{8}+7\mho ^{4})]+24a^{2}\cos(2\theta )[33a^{10}\Lambda 
^{2}+}}} 210 a 8 Λ 2 r 2 + 560 a 6 Λ 2 r 4 − 80 a 4 ( 9 − 10 Λ 2 r 6 ) + 128 a 2 ( 90 r 2 − 60 r ℧ 2 + 5 Λ 2 r 8 + {\displaystyle {\rm {210a^{8}\Lambda ^{2}r^{2}+560a^{6}\Lambda ^{2}r^{4}-80a^{4}(9-10\Lambda ^{2}r^{6})+128a^{2}(90r^{2}-60r\mho ^{2}+5\Lambda ^{2}r^{8}+}}} 7 ℧ 4 ) + 256 r 2 ( − 45 r 2 + 60 r ℧ 2 + Λ 2 r 8 − 17 ℧ 4 ) ] + 36864 r 6 − 73728 r 5 ℧ 2 + {\displaystyle {\rm {7\mho ^{4})+256r^{2}(-45r^{2}+60r\mho ^{2}+\Lambda ^{2}r^{8}-17\mho ^{4})]+36864r^{6}-73728r^{5}\mho ^{2}+}}} 2048 Λ 2 r 12 + 43008 r 4 ℧ 4 } ÷ { 12 [ a 2 cos ⁡ ( 2 θ ) + a 2 + 2 r 2 ] 6 } . {\displaystyle {\rm {2048\Lambda ^{2}r^{12}+43008r^{4}\mho ^{4}\}\div \{12[a^{2}\cos(2\theta )+a^{2}+2r^{2}]^{6}\}{\text{.}}}}}
https://en.wikipedia.org/wiki/Kerr–Newman–de–Sitter_metric
Kersti Hermansson (born 1951) is a professor of inorganic chemistry at Uppsala University . [ 1 ] She completed her PhD on "The Electron Distribution in the Bound Water Molecule" in 1984. [ 1 ] From 1984 to 1986, she held a postdoctoral fellowship from the Swedish Research Council with Dr. E. Clementi at IBM-Kingston, USA. [ 1 ] From 1986 to 1988, she was a högskolelektor (senior lecturer) in inorganic chemistry at Uppsala University . [ 1 ] In 1988, she became a docent of inorganic chemistry at Uppsala University . [ 1 ] In 1996, she became a biträdande professor (associate professor). [ 1 ] Since 2000, she has been a professor of inorganic chemistry at Uppsala University . [ 1 ] From 2008 to 2013, she was also a part-time guest professor at KTH Stockholm . [ 1 ] Her research focuses on condensed-matter chemistry, including the investigation of chemical bonding and the development of quantum chemical methods. [ 2 ] She has received several prizes for her research.
https://en.wikipedia.org/wiki/Kersti_Hermansson
Kerstin N. Nordstrom is an American physicist who is the Clare Boothe Luce Assistant Professor of Physics in the Department of Physics at Mount Holyoke College . Her research focuses on soft matter physics; her work has been featured in the LA Times [ 1 ] and in the BBC News. [ 2 ] Nordstrom completed a bachelor's degree in physics and mathematics at Bryn Mawr College in 2004. [ 3 ] [ 4 ] She joined the University of Pennsylvania as a graduate student, earning a Master of Science in 2006 and a PhD in 2010. Her doctoral thesis was titled "Jamming and flow of soft particle suspensions." [ 5 ] In 2011, Nordstrom joined the University of Maryland, College Park as a postdoctoral researcher. At the University of Maryland, Nordstrom worked on several topics, including how beds of granular materials respond to impact [ 6 ] and how razor clams burrow in sand. [ 7 ] In 2014, Nordstrom joined Mount Holyoke College as an assistant professor. [ 8 ] She is interested in complex fluid flows, including the systems of solid particles found in granular materials. [ 9 ]
https://en.wikipedia.org/wiki/Kerstin_Nordstrom
The Kesternich test is a common name for the corrosion test with sulfur dioxide (SO 2 ) under general moisture condensation. This test was developed in 1951 by Wilhelm Kesternich [ 1 ] to simulate the damaging effects of acid rain . Acid rain and acidic industrial pollutants are corrosive and can degrade coatings and plated surfaces. Kesternich testing, or sulfur dioxide testing, simulates acid rain or industrial chemical exposure to evaluate the relative corrosion resistance of the coating, substrate, or part itself. The test can be used for coatings or for base materials. The test method is defined by various standards; DIN EN ISO 6988, [ 2 ] DIN 50018, [ 3 ] ASTM G87, ISO 3231 and ISO 22479 are the most common. (Note: "sulfur" is interchangeable with "sulphur", and SO 2 is the formula for sulfur dioxide.) The parts to be tested are placed in a test chamber with a set volume of 300 L and exposed to warm, moist air in combination with a defined amount of sulfur dioxide. The construction of the inner housing of the chamber and the devices for arranging the samples must be made of inert and corrosion-resistant materials so that there is no reaction between the sample to be tested and the material of the chamber. The SO 2 injection can be performed manually or automatically depending on the chamber. The principle of the test is simple:
• A defined amount of distilled or deionized water is poured into the floor pan of the test chamber.
• The samples are placed in the chamber above the water level. The door or hood of the chamber is tightly closed and hermetically sealed for safety.
• A fixed volume of sulfur dioxide is introduced into the chamber, usually either 1 L (0.33 %) or 2 L (0.66 %) of SO 2 by volume.
The test is performed in two sections:
• Test section 1: 8 hours of warming to 40 ± 3 °C (relative humidity 100 %)
• Test section 2: 16 hours of cooling to 18 to 28 °C (relative humidity max. 75 %)
The test is usually carried out in cycles of 24 hours each. Caution: the atmosphere containing sulfur dioxide must not be released into the room air. Samples are taken from the chamber and left to dry in the room air. An initial assessment is made before any corrosion products are removed. When the tested parts are cleaned, the evaluation criteria must be taken into account. Possible characteristics for evaluation: appearance after the test, appearance after removal of the corrosion products, number and size of imperfections, time to first corrosion, and loss of mass. The results are described in a test report.
• DIN EN ISO 6988 Metallic and other inorganic coatings – Test with sulfur dioxide with general moisture condensation, March 1997. [ 2 ]
• DIN 50018 Testing in an alternating condensation climate with an atmosphere containing sulfur dioxide, June 1997. [ 3 ]
• The importance of corrosion test procedures with special consideration of the SO 2 test according to DIN 50018, Wilhelm Kesternich, published in Materials and Corrosion, Volume 16, Issue 3, pages 193–201, March 1965. [ 4 ]
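The nominal SO 2 concentrations quoted for the injection step follow directly from the 300 L chamber volume; a trivial arithmetic check (the standards quote the truncated values 0.33 % and 0.66 %):

```python
# Volume fraction of SO2 after injecting 1 L or 2 L into a 300 L chamber.
chamber_volume = 300.0  # litres

for injected, nominal in [(1.0, 0.33), (2.0, 0.66)]:
    percent = injected / chamber_volume * 100
    # 1/300 = 0.333...%, 2/300 = 0.666...%; standards truncate to two digits
    assert abs(percent - nominal) < 0.01
```

The check confirms that the quoted percentages are simple volume ratios rather than independently specified concentrations.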
https://en.wikipedia.org/wiki/Kesternich_test
In organic chemistry , a ketene is an organic compound of the form RR'C=C=O , where R and R' are two arbitrary monovalent chemical groups (or two separate substitution sites in the same molecule). [ 1 ] The name may also refer to the specific compound ethenone H 2 C=C=O , the simplest ketene. [ 2 ] Although they are highly useful, most ketenes are unstable . When used as reagents in a chemical procedure, they are typically generated when needed, and consumed as soon as (or while) they are produced. [ 1 ] Ketenes were first studied as a class by Hermann Staudinger before 1905. [ 3 ] Staudinger's systematic investigation began in 1905 with diphenylketene, prepared by treating α-chlorodiphenylacetyl chloride with zinc. He was inspired by the first examples of reactive organic intermediates and stable radicals discovered by Moses Gomberg in 1900 (compounds with the triphenylmethyl group). [ 4 ] Ketenes are highly electrophilic at the carbon atom bonded to the heteroatom, due to its sp character. Cumulenes of this type can be formed with different heteroatoms bonded to the sp carbon atom, such as O , S or Se , giving ketenes, thioketenes and selenoketenes, respectively. Ethenone , the simplest ketene, has different experimental lengths for each of its double bonds; the C=O bond is 1.160 Å and the C=C bond is 1.314 Å. The angle between the two H atoms is 121.5°, similar to the theoretically ideal angle in alkenes between sp 2 carbon atoms and H substituents. [ 5 ] Ketenes are unstable and cannot be stored. In the absence of nucleophiles with which to react, ethenone dimerises to give a β- lactone , a cyclic ester . If the ketene is disubstituted, the dimerisation product is a substituted cyclobutanedione. For monosubstituted ketenes, the dimerisation can afford either the ester or the diketone product. Ethenone is produced on a commercial scale by thermal dehydration of acetic acid .
Substituted ketenes can be prepared from acyl chlorides by an elimination reaction in which HCl is lost. In this reaction, a base, usually triethylamine , removes the acidic proton alpha to the carbonyl group, inducing the formation of the carbon-carbon double bond and the loss of a chloride ion. Ketenes can also be formed from α- diazoketones by the Wolff rearrangement , and from vinylene carbonate by treatment with phosphorus(V) sulfide and irradiation. [ 6 ] Another way to generate ketenes is through flash vacuum thermolysis (FVT) of 2- pyridylamines . Plüg and Wentrup developed a method in 1997 that improved on earlier FVT routes, producing ketenes from stable, moisture-insensitive precursors under comparatively mild conditions (480 °C). The N-pyridylamines are prepared via condensation of R- malonates with N-amino( pyridine ) using DCC as the coupling agent. [ 7 ] A more robust method for preparing ketenes is the carbonylation of metal-carbenes , with in situ reaction of the highly reactive ketenes thus produced with suitable reagents such as imines , amines , or alcohols . [ 8 ] This method is an efficient one‐pot tandem protocol in which the carbonylation of α‐diazocarbonyl compounds and a variety of N ‐tosylhydrazones, catalysed by Co(II)– porphyrin metalloradicals, leads to the formation of ketenes, which subsequently react with a variety of nucleophiles and imines to form esters , amides and β‐lactams . This system has a broad substrate scope and can be applied to various combinations of carbene precursors, nucleophiles and imines. [ 9 ] Ethenone can be produced through pyrolysis of acetone vapours over a hot filament in an apparatus that was eventually developed into the "ketene lamp" or "Hurd lamp" (named for Charles D. Hurd). [ 10 ] Due to their cumulated double bonds , ketenes are very reactive.
[ 11 ] By reaction with alcohols, carboxylic acid esters are formed. Ketenes react with carboxylic acids to form carboxylic acid anhydrides . Ketenes react with ammonia and amines to give the corresponding amides . By reaction with water, carboxylic acids are formed from ketenes. Enol esters are formed from ketenes with enolisable carbonyl compounds ; for example, ethenone reacts with acetone to form propen-2-yl acetate. At room temperature, ketene quickly dimerizes to diketene , but the ketene can be recovered by heating. Ketenes can react with alkenes , carbonyl compounds, carbodiimides and imines in a [2+2] cycloaddition ; one example is the synthesis of a β-lactam by the reaction of a ketene with an imine (see Staudinger synthesis ). [ 12 ] [ 13 ] Ketenes are generally very reactive, and participate in various ketene cycloadditions . One important process is the dimerization to give propiolactones . A specific example is the dimerization of the ketene of stearic acid to afford alkyl ketene dimers , which are widely used in the paper industry. [ 1 ] AKDs react with the hydroxyl groups of cellulose via an esterification reaction. Ketenes will also undergo [2+2] cycloaddition reactions with electron-rich alkynes to form cyclobutenones , or with carbonyl groups to form β- lactones . With imines , β-lactams are formed. This is the Staudinger synthesis, a facile route to this important class of compounds. With acetone , ketene reacts to give isopropenyl acetate . [ 1 ] A variety of hydroxylic compounds can add as nucleophiles, forming either enol or ester products. For example, a water molecule adds readily to ketene to give 1,1-dihydroxyethene, and acetic anhydride is produced by the reaction of acetic acid with ketene. Reactions between diols ( HO−R−OH ) and bis-ketenes ( O=C=CH−R'−CH=C=O ) yield polyesters with a repeat unit of ( −O−R−O−CO−R'−CO ).
Ethyl acetoacetate , an important starting material in organic synthesis, can be prepared by the reaction of diketene with ethanol . The reaction directly forms ethyl acetoacetate, and the yield is high when carried out under controlled conditions; this method is therefore used industrially.
https://en.wikipedia.org/wiki/Ketene
Ketenimines are a group of organic compounds sharing a common functional group with the general structure R 1 R 2 C=C=NR 3 . A ketenimine is a cumulated alkene and imine and is related to an allene and a ketene . The parent compound is ketenimine, CH 2 =C=NH. Work by Bane et al. investigates the rovibrational structure of the ν 8 and ν 12 bands in the high-resolution FTIR spectrum, complementing the earlier analysis of the pure rotational spectrum. This pair of Coriolis-coupled bands provides a rare example where intensity sharing between bands yields sufficient intensity for an otherwise invisible band (ν 12 ). [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Ketenimines
A ketenyl anion contains a C=C=O allene -like functional group, similar to ketene , with a negative charge on either the terminal carbon or the oxygen atom; its resonance structures interconvert by shifting a lone pair of electrons along the C-C-O unit. Owing to their reactivity, ketenes have served as sources of many organic compounds, despite the challenge of isolating them in crystalline form. Previously, such species were accessible only in the gas phase or as reactive intermediates, and ketene synthesis had to be carried out under extreme conditions (i.e., high temperature, low pressure). [ 1 ] [ 2 ] [ 3 ] The recently discovered stabilized ketenyl anions are easier to prepare than these earlier procedures allowed. A major feature of stabilized ketenyl anions is that they can be prepared from carbon monoxide (CO) reacting with main-group starting materials such as ylides , silylenes , and phosphinidenes , and then isolated for further steps. As CO becomes a more common carbon source for various types of synthesis, [ 4 ] this recent finding about stabilizing the ketenyl unit with main-group elements opens a variety of synthetic routes to desired products. Gessner et al. first revealed a synthetic route to a stabilized ketenyl anion using metalated ylides in 2022. [ 5 ] In their work, upon introduction of CO, the metalated ylide with a potassium counterion exchanges its phosphine group R for CO, a carbonylation of the ylide. Their isolated ketenyl anion K[PPh 2 (=S)CCO] is a stable solid for a week under an inert atmosphere, and its crystal structure was characterized. An alternative synthetic pathway to the ketenyl anion from the ylide, shown in Figure 2, involves sulfuration of the diphenylphosphine group, deprotonation at the carbon center, and CO substitution with loss of triphenylphosphine. This synthesis gave the product in 88% isolated yield.
In later studies, the selectivity of the carbonylation toward a particular ketenyl anion product could be tuned by changing the electron-withdrawing ability of a given leaving group and the Lewis acidity of the coordinated alkali metal cation. [ 6 ] In their example with an ylide bearing both a phosphine group and a tosyl group (Ts), Gessner et al. produced the ketenyl anion product more selectively by modifying those parameters, as shown in Figure 2. The more electron-withdrawing the R group, the more likely it is to leave in preference to the tosyl group. For example, changing the R group from cyclohexyl (Cy) to phenyl (Ph) favored the ketenyl anion product arising from loss of the R 1 group by 76%; the phenyl group is less electron-rich and less nucleophilic than the cyclohexyl group, and is more stable on its own. [ 7 ] As for the alkali metal cation trend, when a triphenylphosphine group is present, changing from M = Li to M = K favored phosphine loss by 9%. Although this is a small effect compared to the leaving-group effect, it reflects the Lewis acidity [ 8 ] of the metal cations: a more Lewis-acidic metal cation (Li > K in Lewis acidity) draws the tosyl group into interaction, increasing its leaving-group ability. Inoue et al. presented a synthetic route to a stabilized sila-ketenyl anion, the silicon analogue of the ketenyl anion. [ 9 ] They were motivated by recent reactivity studies of silylene and disilane species activating CO with isolable intermediates, hypothesizing that a sila-ketenyl anion could likewise be stabilized. [ 10 ] [ 11 ] [ 12 ] While Gessner et al . use ylides to accept CO, Inoue et al . use a silyl-substituted silylene anion, which undergoes CO insertion (carbonylation) at room temperature with loss of the silyl group. Liu et al. took another approach to stabilizing and isolating the ketenyl unit by using a carbene coordinated by a phosphinidene .
[ 13 ] A carbene coordinated by a 2,6-diisopropylphenyl (Dipp)-substituted phosphinidene and by dinitrogen (N 2 ) performs N 2 /CO ligand exchange. The starting material is similar to the N-heterocyclic carbenes with bulky substituents developed by Bertrand. [ 14 ] In their studies, coordination of the CO ligand to the NHC is concerted and thermodynamically favorable (-47.4 kcal/mol relative to the N 2 -coordinated carbene). The product is stable at room temperature under an inert atmosphere for a month, and no decomposition was observed on heating in THF at 80 °C for 12 hours. As shown in Figure 5, the ketenyl anion has two major resonance structures: the ketenyl form and the ynolate form. Because of these resonance structures, alkali metal cations can coordinate either at the central carbon atom or at the terminal oxygen atom, depending on the electronic structure. [ 5 ] [ 6 ] A series of structural analyses revealed that the ketenyl and ynolate structures contribute about evenly to the overall electronic structure of the ketenyl anion. In an example from Gessner's paper, the crystal structure of the ketenyl anion K[PPh 2 (=S)CCO] showed a C-C bond length of 1.245 Å and a C-O bond length of 1.215 Å. [ 5 ] Compared with Pyykkö's analysis of bond lengths, [ 15 ] the C-C bond is between a double bond and a triple bond, whereas the C-O bond is between a single bond and a double bond. In natural bond orbital (NBO) analysis, [ 16 ] [ 17 ] the Wiberg bond index is found to be 2.06 for the C-C bond and 1.72 for the C-O bond. These values likewise suggest both double- and triple-bond character for the C-C bond (reference range 1.20 - 1.34 Å) and both single- and double-bond character for the C-O bond (reference range 1.24 - 1.38 Å). This cumulated, allene-like bonding picture also applies to the other ketenyl anion compounds reported so far. Inoue's sila-ketenyl anion product, shown in Figure 3, had Wiberg bond indices of 1.68 for the Si-C bond and 1.76 for the C-O bond.
[ 9 ] These bond indices demonstrate that both the Si-C and C-O bonds have partial double-bond character, contributing to a Si=C=O structure. The ketenyl anion can dimerize in the solid state as the oxygen atoms interact with the alkali metal cations. This dimer can be broken up by adding M( 18-crown-6 ) (where M = alkali metal cation), allowing isolation of the monomeric ketenyl anion structure. [ 5 ] [ 9 ] Intrinsic bond orbitals (IBO) of the molecule K[PPh 2 (=S)CCO] reveal molecular orbitals describing the C-C and C-O π-bonds and a delocalized orbital on the oxygen atom. The stability of the ketenyl anion stems from the decrease in charge on the ketene carbon on going from the parent ketene to the ketenyl anion. In Gessner's study, the parent ketenyl anion [H-C= C =O] − has a smaller positive charge (+4.0 e) on C than the parent ketene [H 2 C= C =O] (+7.0 e on C ). [ 5 ] This drop in charge makes the ketenyl unit less ambiphilic, leading to a more stable compound. The advantage of using a ketenyl anion is that a desired compound can be synthesized selectively without concern about dimerization before the target product is formed. [ 21 ] In the ylide-derived ketenyl anion, an electrophile can be substituted in exchange for the metal to functionalize the ketene moiety in high yield. [ 5 ] Since the central carbon is negatively charged, this nucleophilicity enables substitution with a series of electrophilic compounds such as the triphenylmethyl group. Some ketenyl anions can react further with other compounds to form new functional groups. For example, after electrophilic substitution of the ketenyl anion with a triphenylmethyl group, treatment with water converts the C=O moiety into a carboxylic acid. The compounds reported by Gessner et al . were isolated as solids in more than 90% yield. Besides the central carbon, where a cation can coordinate, the other carbon atom and the terminal oxygen atom can also be functionalized by electrophilic substitution.
This reactivity allows the activation of chemical bonds such as S-S and C=O bonds and the formation of new C-S and C=C bonds. [ 5 ] These transformations require only CO and the substrates of interest, highlighting new synthetic pathways to organic compounds at room temperature instead of extreme conditions such as pyrolysis . [ 2 ] A stabilized ketenyl anion also undergoes dimerization with a disubstituted phosphine compound to form a heterocyclic product; [ 5 ] the proposed pathway is electrophilic substitution by the disubstituted phosphine compound followed by dimerization. For the phosphinidene-stabilized ketenyl compound, cleavage of C sp -H, C=N, and I-I bonds at room temperature has also been reported. [ 13 ] For the I 2 cleavage reaction, the proposed mechanism is cleavage of the bond at the central carbon and migration of I to the phosphorus atom.
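The bond-length reasoning used above (a C-C distance of 1.245 Å falling between double- and triple-bond reference values, a C-O distance of 1.215 Å between single- and double-bond values) can be sketched as a simple comparison against reference lengths. The reference values below are rough textbook covalent bond lengths chosen for illustration, not Pyykkö's exact figures:

```python
# Illustrative sketch: classify a measured bond length against assumed
# reference ranges, as in the analysis of K[PPh2(=S)CCO] described above.
# Reference lengths (Å) are approximate textbook values, not Pyykkö's data.
REF = {
    "C-C": {"single": 1.54, "double": 1.34, "triple": 1.20},
    "C-O": {"single": 1.43, "double": 1.21, "triple": 1.13},
}

def bond_character(bond: str, length: float) -> str:
    """Place a measured bond length within the reference single/double/triple ranges."""
    r = REF[bond]
    if length <= r["triple"]:
        return "triple"
    if length <= r["double"]:
        return "between double and triple"
    if length <= r["single"]:
        return "between single and double"
    return "single or longer"

cc = bond_character("C-C", 1.245)  # between double and triple, matching the text
co = bond_character("C-O", 1.215)  # between single and double, matching the text
```

Both classifications reproduce the qualitative conclusion drawn from the crystal structure: the anion sits between its ketenyl and ynolate resonance forms.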
https://en.wikipedia.org/wiki/Ketenyl_anion
The ketimine Mannich reaction is an asymmetric synthetic technique that uses differences in the starting materials to steer a Mannich reaction toward an enantiomerically enriched product through steric and electronic effects, via the formation of a ketimine group. Typically, this is done by reaction with proline or another nitrogen -containing heterocycle , so that the chirality of the product is controlled by that of the catalyst . This has been theorized [ 1 ] to result from suppression of the undesired (E)-isomer by preventing the ketone from accessing non-reactive tautomers. Generally, a Mannich reaction is the combination of an amine, a ketone bearing an acidic α-proton, and an aldehyde to create a condensed β-amino carbonyl product. It proceeds through attack of a suitable catalytic amine on the electron-poor carbon of the carbonyl compound, from which an imine is created. This then undergoes electrophilic addition with a compound containing an acidic proton (an enol). It is theoretically possible for either of the carbonyl-containing molecules to create diastereomers, but with catalysts that restrict addition from the enamine stage onward, it is possible to obtain a single product with limited purification steps; in some cases, as reported by List et al., [ 2 ] practical one-pot syntheses are possible. The process of selecting a carbonyl group gives the reaction a direct versus indirect distinction, where the indirect case uses pre-formed intermediates to restrict the reaction's pathway and the direct case does not. Ketimines select the reacting group and so circumvent the need for indirect pathways. It is well known that Lewis acids and bases can influence carbonyl activity, either by protonation of the oxygen or by deprotonation at the α-site, to influence electrophilicity at the carbon. In the former case, there is a stabilization effect and priming of a potential leaving group, through an equilibrium between a formal positive charge and an alcohol group.
In either case, the carbonyl carbon shows enhanced reactivity as its bonds are strained to equalize charge, seeking a nucleophile into which the building charge can discharge in the subsequent reaction. Potentiation in such reactions is therefore generally driven by how well a material can produce and stabilize a charge gradient through resonance or inductive effects in its substituents. Acid surfaces can catalyze these effects by stabilizing the distortion of the carbonyl's electron cloud, making the reaction sensitive to acid-base conditions, with acidic conditions typically favoring reaction by enhancing the carbonyl's leaving-group ability. Mannich reactions are named after their pioneer, Carl Mannich. After discovering the reaction in 1912, he found that the combination of a ketone, an aldehyde, and an amine consistently produced addition of the aldehyde onto the site originally bearing the acidic proton in the ketone. He theorized the mechanism to be a mixed aldol, featuring dehydration of an alcohol and Michael addition of the complex. The reaction suffered from a lack of specificity, and this problem persisted until 1997, when an asymmetric method was discovered independently by several researchers. Kobayashi et al. [ 3 ] used a Brønsted acid to demonstrate that the organic reaction could occur in an aqueous medium, and also found an enantiomeric excess of nearly 55%. It was hypothesized that one of the reactants must use the acid surface to change its electronic structure, since otherwise the mixture would be immiscible, though the study used a chiral Lewis acid as catalyst. Earlier, Hajos et al. [ 4 ] had demonstrated a method by which L-proline could be used in an aldol cyclization. Subsequent studies have focused on improving the catalyst or the materials through substituent effects, with controls based on sterics or restriction of charge sites expected to improve catalytic yields.
Alternatives to the catalyst are not widely explored, owing to the ubiquity of L-proline, its low cost, and its high selectivity, which have cemented its status as a catalytic standard. L-proline restricts the reaction through its five-membered ring, which favours the addition of another reactant in a fixed direction. List et al. [ 2 ] theorize that this restriction arises primarily from the carboxylic acid group next to the enamine attachment, which can stabilize the imine product. List also examined the role of the imine, finding its ratio to the aldehyde to be 1:1 by 1 H-NMR, indicating that the Michael addition, and not the proline complexation or, obviously, the aldehyde activation, is rate-determining. This is further confirmed by a Hammett study on para-substituted aromatic aldehydes, in which List confirmed a positive correlation between electron-withdrawing effect and increasing imine reactivity, a sign that the Michael addition proceeds with the former aldehyde as the electrophile. [ 2 ] Substituent studies have focused on increasing or decreasing steric or electronic influences on the materials to favor product formation. In 2012, the first non-aromatic ketiminoester was produced, with the capability of reduction into either syn- or anti-lactones by NaBH 4 . [ 5 ] Kano et al. demonstrated a 20:1 excess of the desired product, with electrophilic enhancement of the ketone by different flanking esters. BINOL phosphoric acid-catalyzed Mannich reaction (2019) Reddy and coworkers proposed a method to produce endocyclic N-acyl ketimines from a stable precursor, 3-aryl-3-hydroxyisoindolin-1-ones, in 2019. The reaction proceeds with high stereoselectivity at high temperature, the adjacent quaternary and stereogenic centers creating the chirality. [ 6 ] Meanwhile, if 3-hydroxy-3-pentylisoindoline-1-one is used, 95% of the enamide is generated instead of the Mannich product under the same mechanism.
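The Hammett analysis described above fits log 10 (k/k 0 ) = ρσ for a series of para substituents; a positive ρ confirms that electron-withdrawing groups accelerate the reaction. A minimal sketch of such a fit follows; the σ constants are standard Hammett values, but the rate data are fabricated for illustration only (not List's measurements):

```python
# Sketch of a Hammett fit: log10(k/k0) = rho * sigma.
# sigma values are standard Hammett para constants; the "rates" below are
# hypothetical, constructed only to demonstrate the fitting procedure.
sigma_para = {"OMe": -0.27, "H": 0.00, "Cl": 0.23, "NO2": 0.78}

# Hypothetical log relative rates consistent with a positive rho
# (electron-withdrawing groups accelerate, as the text reports).
log_rel_rate = {s: 2.0 * v for s, v in sigma_para.items()}

def fit_rho(sigmas, ys):
    """Least-squares slope through the origin: rho = sum(x*y) / sum(x*x)."""
    num = sum(x * y for x, y in zip(sigmas, ys))
    den = sum(x * x for x in sigmas)
    return num / den

xs = list(sigma_para.values())
ys = [log_rel_rate[s] for s in sigma_para]
rho = fit_rho(xs, ys)  # positive rho -> electron-poor aldehyde reacts faster
```

A positive fitted ρ is exactly the "positive correlation between withdrawing effect and increasing imine reactivity" that the Hammett study reports.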
Proline-catalyzed direct asymmetric Mannich reaction of 3-substituted-2H-1,4-benzoxazines (2013) In 2013, Wang and coworkers studied the use of 3,4-dihydro-2H-1,4-benzoxazines to produce N-heterocyclic products. [ 7 ] This was the first catalytic asymmetric Mannich reaction of 3,4-dihydro-2H-1,4-benzoxazines. The aromatic ring strain in 3,4-dihydro-2H-1,4-benzoxazines helps increase the reactivity of the C=N double bond and gives considerable yield even with no electron-withdrawing substituents. A similar mechanism operates in the wheat germ lipase-catalyzed reaction reported in 2016 by Guan, He and coworkers. [ 8 ] Copper-catalyzed asymmetric Mannich reaction of 2H-azirines with β-keto amides (2018) In 2018, Lin, Feng, and coworkers developed the first copper-catalyzed enantioselective Mannich reaction using 2H-azirines and β-ketoamides. [ 9 ] Racemic 2H-azirines were used; one enantiomer of the 2H-azirine reacts with the chiral Cu-enolate complex, which helps overcome its three neighbouring stereogenic centers and gives the proper products. Copper(I)-catalyzed asymmetric decarboxylative Mannich reaction of 2H-azirines (2019) In 2019, Yin and coworkers used 2H-azirines as both an electrophile and a base, deprotonating cyanoacetic acid so that a further decarboxylation could proceed. A proton-transfer strategy was also discussed, since the protonated 2H-azirine is highly electrophilic while the nonprotonated form does not react under exactly the same conditions. Various substituents on both the nucleophile and the electrophile gave high yields of enantiomer-enriched products. [ 10 ] Zn-ProPhenol catalyzed asymmetric Mannich reaction of 2H-azirines with alkynyl cycloalkyl ketones (2020) In 2020, Trost and coworkers proposed a Mannich reaction of 2H-azirines with alkynyl cycloalkyl ketones using a bimetallic Zn-ProPhenol complex as the catalyst.
The bimetallic Zn-ProPhenol complex is employed to activate both the nucleophile and the electrophile, since the compound possesses a Brønsted-basic site and a Lewis-acidic group at the chiral center at the same time. Hydrogen bonding between the azirine nitrogen and the carbonyl hydrogen may also contribute to the chirality centred at the nitrogen atom. In this case, an N-H bond acetylation product was also generated from the reaction. [ 11 ] Chiral phosphoric acid-catalyzed Mukaiyama–Mannich reaction of endocyclic N-acyl ketimines (2020) In 2020, Zhang, Ma, and coworkers used 3-hydroxyisoindolin-1-ones to produce endocyclic N-acyl ketimines in a chiral phosphoric acid-catalyzed Mukaiyama–Mannich reaction. Difluorinated silyl enol ethers are involved, first producing enantioenriched fluoroalkyl-functionalized isoindolones. From runs under different conditions, two features were discovered: first, hexafluoroisopropyl alcohol could enhance the reactivity and enantioselectivity; second, the reaction is better catalyzed by a catalyst bearing trifluoromethyl-group chiral barriers. [ 12 ] Isatin-derived ketimines commonly undergo catalytic asymmetric Mannich reactions to generate 3-substituted-3-aminooxindoles. This structure is found in various natural or biological molecules, and such reactions can lead to enantio-enriched products. Moreover, isatin-derived ketimines are comparatively reactive as electrophiles and thus have been widely used in various methods and reactions. Morimoto, Ohshima, and coworkers applied a decarboxylative Mannich reaction to N-unprotected isatin-derived ketimines in 2018. [ 13 ] The researchers obtained enantio-enriched chiral oxindoles bearing primary amines at the 3-position. In this case, substrates with fewer substituents are more likely to undergo the Mannich reaction in higher yield. Also, this route allows a great range of ketoacids to react with ketimines before decarboxylation to generate Mannich products.
Wolf and coworkers further developed a copper-catalyzed stereo-divergent Mannich reaction in 2020. [ 14 ] Isatin-derived ketimines first react with α-fluoro-α-arylnitriles to give a chiral cuprous keteniminate complex. BTMG then acts as a nucleophile, and the stereoselectivity is determined by the choice of chiral ligand: in their experiments, the Segphos–copper complex generated anti diastereomers while the Taniaphos complex gave syn diastereomers. Feng and coworkers used chiral N,N' -dioxide/Zn(II) complexes to catalyze the reaction of silyl enol ethers and ketimines in 2019. [ 15 ] The reaction starts with an α-alkenyl addition of the silyl enol ethers. The dioxides (compounds 21 and 22 in scheme 20) proved to give the best reaction conditions for a wide range of isatin-derived ketimines, generating products in high yield with high stereoselectivity. Pyrazolinone-derived ketimines were also found to be suitable substrates. The mechanism shows that the Mukaiyama–Mannich addition intermediates produce β-amino silyl enol ethers as products. Ester, perfluorinated alkyl, or alkyne groups are good electrophile-activating groups that can be used to form ketimine enolates with different nucleophiles. However, subtle differences in ketimine structure can significantly affect stereoselectivities and yields. Nevertheless, many medicinally and synthetically significant chiral structures can be created by creative design of modified ketimines. In 2016, a novel direct asymmetric Mannich reaction of ketiminoesters with thionolactones was proposed by Terada et al., using bis(guanidino)iminophosphorane as a chiral organosuperbase catalyst. (scheme 21) [ 16 ] In contrast to thionolactones, which are appropriate nucleophiles, lactones are not acidic enough to be enolized by the base catalyst, which prevents them from acting as nucleophiles. For the electrophiles, proper substituents on the benzoyl protecting group can improve yields and diastereoselectivity.
For instance, compared to a trifluoromethyl group on the ketiminoester, a methoxy substituent gives much lower yield and selectivity. In 2016, a direct asymmetric catalytic Mannich reaction of an alpha,beta-unsaturated gamma-butyrolactam with ketiminoesters was proposed by Kumagai et al. using soft Lewis acid/hard Brønsted base cooperative catalysis. (scheme 22) [ 17 ] As a result, alpha-addition products were synthesized from alpha,beta-unsaturated gamma-butyrolactams onto ketimines through asymmetric catalysis, a novel catalytic asymmetric reaction. The corresponding phosphinoyl-protected ketiminoester did not react without the cooperative soft Lewis acid/hard Brønsted base catalysis, which means the interaction between the copper catalyst (as soft Lewis acid) and the thiophosphinoyl protecting group of the ketimines (as soft Lewis base) is essential for this Mannich reaction to succeed. In 2017, a direct enantioselective Mannich reaction with an N-unprotected trifluoromethyl ketiminoester was proposed by Morimoto et al. (scheme 23) [ 18 ] The unprotected N-H ketimines alleviate problems caused by E/Z isomerism in asymmetric catalytic reactions, although they generally have lower stability. Morimoto et al. showed that the unprotected N-H ketimines are able to react with malonates, esters, oxindoles, and cyclic beta-keto-nitriles at different temperatures to give products whose diastereoselectivities can be controlled under asymmetric catalysis. In 2018, Zn-ProPhenol was used to catalyze the asymmetric Mannich reaction of butenolides with perfluoroalkyl alkynyl ketimines, as proposed by Trost et al. (scheme 24) [ 19 ] Several features of this reaction were reported: 1. The ketimines are more electrophilic as a result of the fluorinated groups. 2. The alkyne group has less steric bulk, and its high s character makes the ketimines more electrophilic. 3.
The steric difference between the alkynyl and perfluoroalkyl groups gives the ketimines in a single diastereomeric form, which is critical for high stereoselectivities. The reaction is broad in scope, allowing further contributions in the medicinal field and in synthetic chemistry. Chiral amines such as amino acid derivatives can catalyze many enantioselective transformations, although the enantiomers of amino acids are sometimes hard to access. In 2019, Lan et al. proposed a strategy to apply their chiral amine catalyst, with minimal modification, to the Mannich reaction of alkynyl ketiminoesters. (scheme 25) [ 20 ] Their application shows that catalyst 26 and MeCN can be replaced by 28 and dichloroethane for many alkynyl trifluoromethyl-, pyrazolinone-derived, and isatin-derived ketimines to give the same stereoselectivities and yields. It is hard to control E/Z selectivity for ketimines having two structurally similar substituents, which is why such ketimines are rarely used in asymmetric catalysis compared with ketimines that exist in a single diastereomeric form. In 2021, Kano et al. proposed a method to use alkynyl ketimines as synthetic equivalents of dialkyl ketimines, dialkyl ketimines being a typical example of isomeric ketimine mixtures. The alkynyl groups can be reduced by hydrogenation back to the corresponding alkyl groups. Moreover, single-diastereomer alkynyl alkyl N-protected ketimines can be obtained by exploiting the steric difference between the alkyl and alkynyl groups, which allows a chiral catalyst to control them stereoselectively and give the asymmetric Mannich product. Additionally, the small steric hindrance and high s character of the alkynyl group make alkynyl-substituted ketimines more reactive than their alkyl counterparts. Even though an additional hydrogenation step is needed, the advantages provided by the alkynyl group are still impressive.
To solve the problem that a single diastereomeric form of N-Boc alkyl alkynyl ketimines is difficult to access, Kano et al. proposed a synthesis of these ketimines (reactions 1 and 2), followed by chiral amine-catalyzed Mannich reactions (reaction 3). (scheme 26) [ 21 ] The chiral amine catalyzed the asymmetric Mannich reaction of aldehydes and (Z)-alkyl alkynyl ketimines. The method is stereodivergent: the phenylcyclopropane amine 29 gives syn products, while the proline catalyst gives anti products. After the hydrogenation step, they successfully obtained chiral amines substituted with two similar alkyl groups, which are hard to obtain as single enantiomers. Ketimine Mannich catalysis has been identified as a synthetic tool for primary alcohols. [ 2 ] A useful path to dihydroquinazolinone, a precursor to drugs with anti-viral and cardiovascular benefits, was also found by Liu et al. [ 22 ] using N-heterocyclic carbene-catalyzed [4+2] annulation of β-methyl enals.
https://en.wikipedia.org/wiki/Ketimine_Mannich_reaction
Ketoconazole , sold under the brand name Nizoral , among others, is an antiandrogen , antifungal , and antiglucocorticoid medication used to treat a number of fungal infections . [ 10 ] Applied to the skin it is used for fungal skin infections such as tinea , cutaneous candidiasis , pityriasis versicolor , dandruff , and seborrheic dermatitis . [ 11 ] Taken by mouth it is a less preferred option and recommended for only severe infections when other agents cannot be used. [ 10 ] Other uses include treatment of excessive male-patterned hair growth in women and Cushing's syndrome . [ 10 ] Common side effects when applied to the skin include redness. [ 11 ] Common side effects when taken by mouth include nausea , headache , and liver problems . [ 10 ] Liver problems may result in death or the need for a liver transplantation . [ 10 ] [ 12 ] Other severe side effects when taken orally include QT prolongation , adrenocortical insufficiency , and anaphylaxis . [ 10 ] [ 12 ] It is an imidazole and works by hindering the production of ergosterol required for the fungal cell membrane , thereby slowing growth. [ 10 ] Ketoconazole was patented in 1977 by Belgian pharmaceutical company Janssen , and came into medical use in 1981. [ 13 ] It is available as a generic medication and formulations that are applied to the skin are over the counter in the United Kingdom . [ 11 ] In 2022, it was the 175th most commonly prescribed medication in the United States, with more than 2 million prescriptions. [ 14 ] [ 15 ] The formulation that is taken by mouth was withdrawn in the European Union and in Australia in 2013, [ 16 ] [ 17 ] and in China in 2015. [ 18 ] In addition, its use was restricted in the United States and Canada in 2013. [ 17 ] Topically administered ketoconazole is usually prescribed for fungal infections of the skin and mucous membranes, such as athlete's foot , ringworm , candidiasis (yeast infection or thrush), jock itch , and tinea versicolor . 
[ 19 ] Topical ketoconazole is also used as a treatment for dandruff (seborrheic dermatitis of the scalp) and for seborrheic dermatitis on other areas of the body, perhaps acting in these conditions by suppressing levels of the fungus Malassezia furfur on the skin. [ 19 ] [ 20 ] [ 21 ] Ketoconazole has activity against many kinds of fungi that may cause human disease, such as Candida , Histoplasma , Coccidioides , and Blastomyces (although it is not active against Aspergillus ), chromomycosis and paracoccidioidomycosis . [ 22 ] [ 12 ] First made in 1977, [ 19 ] ketoconazole was the first orally-active azole antifungal medication. [ 22 ] However, ketoconazole has largely been replaced as a first-line systemic antifungal medication by other azole antifungal agents, such as fluconazole and/or itraconazole , because of ketoconazole's greater toxicity, poorer absorption, and more limited spectrum of activity. [ 22 ] [ 23 ] Ketoconazole is used orally in dosages of 200 to 400 mg per day in the treatment of superficial and deep fungal infections. [ 24 ] Ketoconazole shampoo in conjunction with an oral 5α-reductase inhibitor such as finasteride or dutasteride has been used off label to treat androgenic alopecia . It was speculated that antifungal properties of ketoconazole reduce scalp microflora and consequently may reduce follicular inflammation that contributes to alopecia. [ 25 ] Limited clinical studies suggest ketoconazole shampoo used either alone [ 26 ] [ 27 ] or in combination with other treatments [ 28 ] may be useful in reducing hair loss in some cases. [ 29 ] The side effects of ketoconazole are sometimes harnessed in the treatment of non-fungal conditions. While ketoconazole blocks the synthesis of the sterol ergosterol in fungi, in humans, at high dosages (>800 mg/day), it potently inhibits the activity of several enzymes necessary for the conversion of cholesterol to steroid hormones such as testosterone and cortisol . 
[ 22 ] [ 24 ] Specifically, ketoconazole has been shown to inhibit cholesterol side-chain cleavage enzyme , which converts cholesterol to pregnenolone , 17α-hydroxylase and 17,20-lyase , [ 24 ] which convert pregnenolone into androgens , and 11β-hydroxylase , which converts 11-deoxycortisol to cortisol . [ 30 ] All of these enzymes are mitochondrial cytochrome p450 enzymes. [ 31 ] Based on these antiandrogen and antiglucocorticoid effects, ketoconazole has been used with some success as a second-line treatment for certain forms of advanced prostate cancer [ 24 ] [ 32 ] and for the suppression of glucocorticoid synthesis in the treatment of Cushing's syndrome . [ 33 ] However, in the treatment of prostate cancer, concomitant glucocorticoid administration is needed to prevent adrenal insufficiency . [ 24 ] Ketoconazole has additionally been used, in lower dosages, to treat hirsutism and, in combination with a GnRH analogue , male-limited precocious puberty . [ 24 ] In any case, the risk of hepatotoxicity with ketoconazole limits its use in all of these indications, especially in those that are benign such as hirsutism. [ 24 ] Ketoconazole has been used to prevent the testosterone flare at the initiation of GnRH agonist therapy in men with prostate cancer. [ 34 ] Oral ketoconazole has various contraindications , such as concomitant use with certain other drugs due to known drug interactions . [ 5 ] Other contraindications of oral ketoconazole include liver disease , adrenal insufficiency , and known hypersensitivity to oral ketoconazole. [ 5 ] Vomiting, diarrhea, nausea, constipation, abdominal pain, upper abdominal pain, dry mouth, dysgeusia , dyspepsia , flatulence , and tongue discoloration may occur. [ 35 ] The drug may cause adrenal insufficiency , so the levels of the adrenocortical hormones should be monitored while taking it. [ 12 ] [ 35 ] Oral ketoconazole at a dosage range of 400 to 2,000 mg/day has been found to result in a rate of gynecomastia of 21%.
[ 36 ] In July 2013, the US Food and Drug Administration (FDA) issued a warning that taking ketoconazole by mouth can cause severe liver injuries and adrenal gland problems, namely adrenal insufficiency and worsening of other conditions related to the gland. [ 12 ] It recommends that oral tablets should not be a first-line treatment for any fungal infection. It should be used for the treatment of certain fungal infections, known as endemic mycoses, only when alternative antifungal therapies are not available or not tolerated. [ 12 ] It is contraindicated in people with acute or chronic liver disease . [ 12 ] Anaphylaxis after the first dose may occur. [ medical citation needed ] Other cases of hypersensitivity include urticaria . [ 10 ] [ 5 ] The topical formulations have not been associated with liver damage, adrenal problems, or drug interactions. These formulations include creams, shampoos, foams, and gels applied to the skin, unlike the ketoconazole tablets, which are taken by mouth. [ 12 ] Ketoconazole is categorized as pregnancy category C (Risk not ruled out) in the US. [ 37 ] Research in animals has shown it to cause teratogenesis when administered in high doses. [ 37 ] A subsequent trial in Europe failed to show a risk to infants of mothers receiving ketoconazole. [ 38 ] In the event of an overdose of oral ketoconazole, treatment should be supportive and based on symptoms . [ 5 ] Activated charcoal may be administered within the first hour following overdose of oral ketoconazole. [ 5 ] The concomitant use of the following medications is contraindicated with ketoconazole tablets: [ 5 ] [ 35 ] The following medications are not recommended with ketoconazole: [ 5 ] [ 35 ] Ritonavir (an antiretroviral medication ) is known to increase exposure to ketoconazole, so a reduction in ketoconazole dosage is recommended.
[ 5 ] There is also a list of drugs which significantly decrease systemic exposure to ketoconazole and drugs whose systemic exposure is increased by ketoconazole. [ 5 ] [ 35 ] As an antifungal, ketoconazole is structurally similar to imidazole , and interferes with the fungal synthesis of ergosterol , a constituent of fungal cell membranes , as well as certain enzymes . As with all azole antifungal agents, ketoconazole works principally by inhibiting the enzyme cytochrome P450 14α-demethylase (CYP51A1). [ 31 ] This enzyme participates in the sterol biosynthesis pathway that leads from lanosterol to ergosterol . Lower doses of fluconazole and itraconazole are required to kill fungi compared to ketoconazole, as they have been found to have a greater affinity for fungal cell membranes. Resistance to ketoconazole has been observed in a number of clinical fungal isolates, including Candida albicans . Experimentally, resistance usually arises as a result of mutations in the sterol biosynthesis pathway. Defects in the sterol 5-6 desaturase enzyme reduce the toxic effects of azole inhibition of the 14-alpha demethylation step. Multidrug-resistance (MDR) genes can also play a role in reducing cellular levels of the drug. As azole antifungals all act at the same point in the sterol pathway, resistant isolates are normally cross-resistant to all members of the azole family. [ 39 ] [ 40 ] As an antiandrogen , ketoconazole operates through at least two mechanisms of action. First, and most notably, high oral doses of ketoconazole (e.g. 400 mg three times per day) block both testicular and adrenal androgen biosynthesis, leading to a reduction in circulating testosterone levels. [ 24 ] [ 41 ] It produces this effect through inhibition of 17α-hydroxylase and 17,20-lyase , which are involved in the synthesis of steroids, including the precursors of testosterone .
[ 24 ] Due to its efficacy at reducing systemic androgen levels, ketoconazole has been used with some success as a treatment for androgen-dependent prostate cancer. [ 42 ] Second, ketoconazole is an androgen receptor antagonist , competing with androgens such as testosterone and dihydrotestosterone (DHT) for binding to the androgen receptor . This effect is thought to be quite weak however, even with high oral doses of ketoconazole. [ 43 ] Ketoconazole, along with miconazole , has been found to act as an antagonist of the glucocorticoid receptor . [ 44 ] [ 45 ] Ketoconazole is a racemic mixture consisting of cis -(2 S ,4 R )-(−) and cis -(2 R ,4 S )-(+) enantiomers. [ 9 ] The cis -(2 S ,4 R ) isomer was more potent in inhibiting progesterone 17α,20-lyase than its enantiomer ( IC 50 values of 0.05 and 2.38 μ M, respectively) and in inhibiting 11β-hydroxylase (IC 50 values of 0.152 and 0.608 μ M, respectively). Both isomers were relatively weak inhibitors of human placental aromatase . [ 8 ] Oral ketoconazole has been used clinically as a steroidogenesis inhibitor in men, women, and children at dosages of 200 to 1,200 mg/day. [ 46 ] [ 47 ] [ 48 ] Numerous small studies have investigated the effects of oral ketoconazole on hormone levels in humans. [ 49 ] It has been found in men to significantly decrease testosterone and estradiol levels and to significantly increase luteinizing hormone , progesterone , and 17α-hydroxyprogesterone levels, whereas levels of androstenedione , follicle-stimulating hormone , and prolactin were unaffected. [ 49 ] [ 50 ] [ 47 ] The ratio of testosterone to estradiol is also decreased during oral ketoconazole therapy in men. [ 47 ] Suppression of testosterone levels by ketoconazole is generally partial and has often been found to be transient. 
[ 49 ] Better effects on suppression of testosterone levels have been observed in men when ketoconazole is combined with a GnRH agonist to suppress the hypothalamic–pituitary–gonadal axis , which prevents compensatory upregulation of luteinizing hormone secretion and consequent activation of gonadal testosterone production. [ 47 ] In premenopausal women with polycystic ovary syndrome , ketoconazole has been found to significantly decrease levels of androstenedione and testosterone and significantly increase levels of 17α-hydroxyprogesterone and estradiol. [ 48 ] [ 51 ] Studies in postmenopausal women with breast cancer have found that ketoconazole significantly decreases androstenedione levels, slightly decreases estradiol levels, and does not affect estrone levels. [ 52 ] This indicates minimal inhibition of aromatase by ketoconazole in vivo in humans. [ 52 ] Ketoconazole has also been found to decrease levels of endogenous corticosteroids , such as cortisol , corticosterone , and aldosterone , as well as vitamin D . [ 53 ] [ 47 ] Ketoconazole has been found to displace dihydrotestosterone and estradiol from sex hormone-binding globulin in vitro , but this was not found to be relevant in vivo . [ 47 ] Ketoconazole has been found to inhibit the activity of the cation channel TRPM5 . [ 54 ] When administered orally, ketoconazole is best absorbed at highly acidic levels, so antacids or other causes of decreased stomach acid levels will lower the drug's absorption. Absorption can be increased by taking it with an acidic beverage, such as cola . [ 55 ] Ketoconazole is very lipophilic and tends to accumulate in fatty tissues. Ketoconazole is a synthetic imidazole . [ 56 ] [ 57 ] It is a nonsteroidal compound. [ 56 ] [ 57 ] It is a racemic mixture of two enantiomers , levoketoconazole ((2 S ,4 R )-(−)-ketoconazole) and dextroketoconazole ((2 R ,4 S )-(+)-ketoconazole). 
[ 56 ] [ 57 ] Levoketoconazole is under development for potential clinical use as a steroidogenesis inhibitor with better tolerability and less toxicity than ketoconazole. Other steroidogenesis inhibitors besides ketoconazole and levoketoconazole include the nonsteroidal compound aminoglutethimide and the steroidal compound abiraterone acetate . [ citation needed ] Ketoconazole was discovered in 1976 at Janssen Pharmaceuticals . [ 58 ] It was patented in 1977, [ 13 ] followed by introduction in the United States in July 1981. [ 17 ] [ 7 ] [ 59 ] [ 13 ] Following its introduction, ketoconazole was the only systemic antifungal available for almost a decade. [ 17 ] Ketoconazole was introduced as the prototypical medication of the imidazole group of antifungals. [ 60 ] Oral ketoconazole has been replaced with oral fluconazole or itraconazole for many mycoses . [ 60 ] Due to incidence of serious liver toxicity , the use of oral ketoconazole was suspended in France in July 2011, following review. [ 17 ] This event triggered an evaluation of oral ketoconazole throughout the rest of the European Union. [ 17 ] [ 61 ] In 2013, oral ketoconazole was withdrawn in the European Union and Australia, and strict restrictions were placed on the use of oral ketoconazole in the United States and Canada. [ 17 ] Oral ketoconazole is indicated for use in these countries when the indication is a severe or life-threatening systemic infection and alternatives are unavailable. [ 17 ] However, topical ketoconazole, which does not distribute systemically, is safe and widely used still. [ 17 ] Ketoconazole HRA was approved for use in the European Union for treatment of Cushing's syndrome in November 2013. [ 6 ] [ 62 ] Ketoconazole is the generic name of the drug and its INN Tooltip International Nonproprietary Name , USAN Tooltip United States Adopted Name , BAN Tooltip British Approved Name , and JAN Tooltip Japanese Accepted Name . 
[ 56 ] [ 57 ] [ 63 ] [ 64 ] Ketoconazole has been marketed under a large number of brand names. [ 56 ] [ 57 ] [ 63 ] [ 64 ] Ketoconazole is available widely throughout the world. [ 57 ] [ 64 ] In 2013, the European Medicines Agency 's Committee for Medicinal Products for Human Use (CHMP) recommended that a ban be imposed on oral ketoconazole for systemic use in humans throughout the European Union, after concluding that the risk of serious liver injury from systemic ketoconazole outweighs its benefits. [ 65 ] As of March 2019, oral levoketoconazole (developmental code name COR-003, tentative brand name Recorlev) is in phase III clinical trials for the treatment of Cushing's syndrome . [ 66 ] Oral levoketoconazole may have a lower risk of liver toxicity than oral ketoconazole. [ 67 ] Ketoconazole is sometimes prescribed as an antifungal by veterinarians for use in pets, often as unflavored tablets that may need to be cut to a smaller size for correct dosage. [ 68 ]
https://en.wikipedia.org/wiki/Ketoconazole
In organic chemistry , a ketone / ˈ k iː t oʊ n / is an organic compound with the structure R−C(=O)−R' , where R and R' can be a variety of carbon -containing substituents . Ketones contain a carbonyl group −C(=O)− (a carbon-oxygen double bond C=O). The simplest ketone is acetone (where R and R' are methyl ), with the formula (CH 3 ) 2 CO . Many ketones are of great importance in biology and industry. Examples include many sugars ( ketoses ), many steroids (e.g., testosterone ) and the solvent acetone . [ 1 ] The word ketone is derived from Aketon , an old German word for acetone . [ 2 ] [ 3 ] According to the rules of IUPAC nomenclature , ketone names are derived by changing the suffix -ane of the parent alkane to -anone . Typically, the position of the carbonyl group is denoted by a number, but traditional nonsystematic names are still generally used for the most important ketones, for example acetone and benzophenone . These nonsystematic names are considered retained IUPAC names, [ 4 ] although some introductory chemistry textbooks use systematic names such as "2-propanone" or "propan-2-one" for the simplest ketone ( C H 3 −C(= O )−CH 3 ) instead of "acetone". The derived names of ketones are obtained by writing separately the names of the two alkyl groups attached to the carbonyl group, followed by "ketone" as a separate word. Traditionally the names of the alkyl groups were written in order of increasing complexity, for example methyl ethyl ketone . However, according to the rules of IUPAC nomenclature , the alkyl groups are written alphabetically, for example ethyl methyl ketone . When the two alkyl groups are the same, the prefix "di-" is added before the name of the alkyl group. The positions of other groups are indicated by Greek letters , the α-carbon being the atom adjacent to the carbonyl group. Although used infrequently, oxo is the IUPAC nomenclature for the oxo group (=O) and is used as a prefix when the ketone does not have the highest priority.
Other prefixes, however, are also used. For some common chemicals (mainly in biochemistry), keto refers to the ketone functional group . The ketone carbon is often described as sp 2 hybridized , a description that covers both its electronic and molecular structure. Ketones are trigonal planar around the ketonic carbon, with C–C–O and C–C–C bond angles of approximately 120°. Ketones differ from aldehydes in that the carbonyl group (C=O) is bonded to two carbons within a carbon skeleton . In aldehydes, the carbonyl is bonded to one carbon and one hydrogen and is located at the end of a carbon chain. Ketones are also distinct from other carbonyl-containing functional groups , such as carboxylic acids , esters and amides . [ 5 ] The carbonyl group is polar because the electronegativity of the oxygen is greater than that of carbon. Thus, ketones are nucleophilic at oxygen and electrophilic at carbon. Because the carbonyl group interacts with water by hydrogen bonding , ketones are typically more soluble in water than the related methylene compounds. Ketones are hydrogen-bond acceptors. Ketones are not usually hydrogen-bond donors and cannot hydrogen-bond to themselves. Because of their inability to serve as both hydrogen-bond donors and acceptors, ketones tend not to "self-associate" and are more volatile than alcohols and carboxylic acids of comparable molecular weights . These factors relate to the pervasiveness of ketones in perfumery and as solvents. Ketones are classified on the basis of their substituents. One broad classification subdivides ketones into symmetrical and unsymmetrical derivatives, depending on the equivalency of the two organic substituents attached to the carbonyl center. Acetone and benzophenone ( (C 6 H 5 ) 2 CO ) are symmetrical ketones. Acetophenone (C 6 H 5 C(O)CH 3 ) is an unsymmetrical ketone. Many kinds of diketones are known, some with unusual properties.
The simplest is diacetyl (CH 3 C(O)C(O)CH 3 ) , once used as butter-flavoring in popcorn . The name acetylacetone (pentane-2,4-dione) is virtually a misnomer because this species exists mainly as the monoenol CH 3 C(O)CH=C(OH)CH 3 . Its enolate is a common ligand in coordination chemistry . Ketones containing alkene and alkyne units are often called unsaturated ketones. A widely used member of this class of compounds is methyl vinyl ketone , CH 3 C(O)CH=CH 2 , an α,β-unsaturated carbonyl compound . Many ketones are cyclic. The simplest class has the formula (CH 2 ) n CO , where n varies from 2 for cyclopropanone ( (CH 2 ) 2 CO ) to the tens. Larger derivatives exist. Cyclohexanone ( (CH 2 ) 5 CO ), a symmetrical cyclic ketone, is an important intermediate in the production of nylon . Isophorone , derived from acetone, is an unsaturated, asymmetrical ketone that is the precursor to other polymers . Muscone , 3-methylcyclopentadecanone, is an animal pheromone . Another cyclic ketone is cyclobutanone , having the formula (CH 2 ) 3 CO . An aldehyde differs from a ketone in that it has a hydrogen atom attached to its carbonyl group, making aldehydes easier to oxidize. Ketones do not have a hydrogen atom bonded to the carbonyl group, and are therefore more resistant to oxidation. They are oxidized only by powerful oxidizing agents which have the ability to cleave carbon–carbon bonds. Ketones (and aldehydes) absorb strongly in the infra-red spectrum near 1750 cm −1 , which is assigned to ν C=O ("carbonyl stretching frequency"). The energy of the peak is lower for aryl and unsaturated ketones. [ 6 ] Whereas 1 H NMR spectroscopy is generally not useful for establishing the presence of a ketone, 13 C NMR spectra exhibit signals somewhat downfield of 200 ppm depending on structure. Such signals are typically weak due to the absence of nuclear Overhauser effects .
Since aldehydes resonate at similar chemical shifts , multiple resonance experiments are employed to definitively distinguish aldehydes and ketones. Ketones give positive results in Brady's test , the reaction with 2,4-dinitrophenylhydrazine to give the corresponding hydrazone. Ketones may be distinguished from aldehydes by giving a negative result with Tollens' reagent or with Fehling's solution . Methyl ketones give positive results for the iodoform test . [ 7 ] Ketones also give positive results when treated with m -dinitrobenzene in the presence of dilute sodium hydroxide to give a violet coloration. Many methods exist for the preparation of ketones on industrial scale and in academic laboratories. Ketones are also produced in various ways by organisms; see the section on biochemistry below. In industry, the most important method probably involves oxidation of hydrocarbons , often with air. For example, a billion kilograms of cyclohexanone are produced annually by aerobic oxidation of cyclohexane . Acetone is prepared by air-oxidation of cumene . For specialized or small scale organic synthetic applications, ketones are often prepared by oxidation of secondary alcohols ( R 2 CH−OH + [O] → R 2 C=O + H 2 O ). Typical strong oxidants (source of "O" in the above reaction) include potassium permanganate or a Cr(VI) compound. Milder conditions make use of the Dess–Martin periodinane or the Moffatt–Swern methods. Many other methods have been developed; examples include: [ 8 ] Ketones that have at least one alpha-hydrogen undergo keto-enol tautomerization ; the tautomer is an enol . Tautomerization is catalyzed by both acids and bases. Usually, the keto form is more stable than the enol. This equilibrium allows ketones to be prepared via the hydration of alkynes . C−H bonds adjacent to the carbonyl in ketones are more acidic (p K a ≈ 20) than the C−H bonds in alkanes (p K a ≈ 50). This difference reflects resonance stabilization of the enolate ion that is formed upon deprotonation .
The relative acidity of the α-hydrogen is important in the enolization reactions of ketones and other carbonyl compounds. The acidity of the α-hydrogen also allows ketones and other carbonyl compounds to react as nucleophiles at that position, with either stoichiometric or catalytic base. Using very strong bases like lithium diisopropylamide (LDA, p K a of conjugate acid ~36) under non-equilibrating conditions (–78 °C, 1.1 equiv LDA in THF, ketone added to base), the less-substituted kinetic enolate is generated selectively, while conditions that allow for equilibration (higher temperature, base added to ketone, using weak or insoluble bases, e.g., CH 3 CH 2 ONa in CH 3 CH 2 OH , or NaH ) provide the more-substituted thermodynamic enolate . Ketones are also weak bases, undergoing protonation on the carbonyl oxygen in the presence of Brønsted acids . Ketonium ions (i.e., protonated ketones) are strong acids, with p K a values estimated to be somewhere between –5 and –7. [ 19 ] [ 20 ] Although acids encountered in organic chemistry are seldom strong enough to fully protonate ketones, the formation of equilibrium concentrations of protonated ketones is nevertheless an important step in the mechanisms of many common organic reactions, like the formation of an acetal, for example. Acids as weak as the pyridinium cation (as found in pyridinium tosylate) with a p K a of 5.2 are able to serve as catalysts in this context, despite the highly unfavorable equilibrium constant for protonation ( K eq < 10 −10 ). An important set of reactions follows from the susceptibility of the carbonyl carbon toward nucleophilic addition and the tendency for enolates to add to electrophiles. Nucleophilic additions include, in approximate order of their generality: [ 8 ] Ketones are cleaved by strong oxidizing agents and at elevated temperatures.
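The quoted equilibrium constant follows from simple pKa arithmetic: for a proton transfer, log 10 K eq equals the pKa of the acceptor's conjugate acid minus the pKa of the donor acid. A minimal sketch in Python using the values given above (the function name is this sketch's own):

```python
# For a proton transfer  HA + B -> A(-) + BH(+),
# Keq = Ka(HA) / Ka(BH+), i.e. log10(Keq) = pKa(BH+) - pKa(HA).
# The transfer is "uphill" when the acceptor's conjugate acid (BH+)
# is the stronger acid, i.e. has the lower pKa.

def proton_transfer_keq(pka_donor: float, pka_acceptor_conj_acid: float) -> float:
    """Equilibrium constant for transferring a proton from the donor acid HA
    to the base B, from the two relevant pKa values."""
    return 10 ** (pka_acceptor_conj_acid - pka_donor)

# Values from the text: pyridinium pKa ~5.2; ketonium pKa ~ -5 to -7.
for pka_ketonium in (-5.0, -7.0):
    keq = proton_transfer_keq(5.2, pka_ketonium)
    print(f"pKa(ketonium) = {pka_ketonium:5.1f}  ->  Keq = {keq:.1e}")
# Both results fall below 1e-10, consistent with the Keq < 10^-10 quoted above.
```

With the pKa of ketonium at −5, this gives K eq = 10 −10.2 , matching the "K eq < 10 −10 " statement in the text.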
Their oxidation involves carbon–carbon bond cleavage to afford a mixture of carboxylic acids having a smaller number of carbon atoms than the parent ketone. Ketones do not appear in standard amino acids , nucleic acids, or lipids. The formation of organic compounds in photosynthesis occurs via the ketone ribulose-1,5-bisphosphate . Many sugars are ketones, known collectively as ketoses . The best known ketose is fructose ; it mostly exists as a cyclic hemiketal , which masks the ketone functional group. Fatty acid synthesis proceeds via ketones. Acetoacetate is an intermediate in the Krebs cycle which releases energy from sugars and carbohydrates. [ 22 ] In medicine, acetone , acetoacetate, and beta-hydroxybutyrate are collectively called ketone bodies , generated from carbohydrates , fatty acids , and amino acids in most vertebrates , including humans. Ketone bodies are elevated in the blood ( ketosis ) after fasting, including a night of sleep; in both blood and urine in starvation ; in hypoglycemia , due to causes other than hyperinsulinism ; in various inborn errors of metabolism ; intentionally, when induced via a ketogenic diet ; and in ketoacidosis (usually due to diabetes mellitus). Although ketoacidosis is characteristic of decompensated or untreated type 1 diabetes , ketosis or even ketoacidosis can occur in type 2 diabetes in some circumstances as well. Ketones are produced on a massive scale in industry as solvents, polymer precursors, and pharmaceuticals. In terms of scale, the most important ketones are acetone , methyl ethyl ketone , and cyclohexanone . [ 23 ] They are also common in biochemistry, but less so than in organic chemistry in general. The combustion of hydrocarbons is an uncontrolled oxidation process that gives ketones as well as many other types of compounds. Although it is difficult to generalize on the toxicity of such a broad class of compounds, simple ketones are, in general, not highly toxic.
This characteristic is one reason for their popularity as solvents. Exceptions to this rule are the unsaturated ketones such as methyl vinyl ketone with LD 50 of 7 mg/kg (oral). [ 23 ]
https://en.wikipedia.org/wiki/Ketone
Ketonic decarboxylation (also known as decarboxylative ketonization ) is a type of organic reaction involving decarboxylation , converting two equivalents of a carboxylic acid ( R−C(=O)OH ) to a symmetric ketone ( R 2 C=O ). The reaction typically requires heat and a metal catalyst, and generally proceeds in low yields. It can be thought of as a decarboxylative Claisen condensation . Water and carbon dioxide are byproducts: 2 R−C(=O)OH → R 2 C=O + CO 2 + H 2 O. [ 1 ] Bases promote this reaction. The reaction mechanism is proposed to involve nucleophilic attack of the alpha-carbon of one acid group on the other carboxylic acid group, possibly as a concerted reaction with the decarboxylation. [ 1 ] The initial formation of an intermediate carbanion via decarboxylation of one of the acid groups prior to the nucleophilic attack is unlikely since the byproduct resulting from the carbanion's protonation by the acid has not been reported. [ 2 ] This reaction is different from oxidative decarboxylation , which proceeds through a radical mechanism and is characterised by a different product distribution in isotopic labeling experiments with two different carboxylic acids. With two different carboxylic acids the reaction has poor selectivity for the mixed product, unless one of the acids (for example, a small, volatile one) is used in large excess to minimise the loss of the other acid. The dry distillation of calcium acetate to give acetone was reported by Charles Friedel in 1858 [ 3 ] and until World War I ketonization was the premier commercial method for its production. [ 4 ] Ketonic decarboxylation of propanoic acid over a manganese(II) oxide catalyst in a tube furnace affords 3-pentanone . [ 5 ] 5-Nonanone , which is potentially of interest as a diesel fuel, can be produced from valeric acid . [ 6 ] Stearone is prepared by heating magnesium stearate . [ 7 ] The processing of biomass often produces chemicals that are too oxygen-rich. Ketonic decarboxylation is one strategy to shift the C/O ratio.
[ 8 ] Dozens or more metal oxides have been investigated as catalysts for the decarboxylation. Early work focused on the oxides of calcium and thorium. [ 9 ] Of commercial interest are related ketonizations using cerium(IV) oxide and manganese dioxide on alumina as the catalysts . The synthesis of 4-heptanone illustrates the production of the metal carboxylate in situ. Iron powder and butyric acid are converted to iron butyrate. Pyrolysis of that salt gives the ketone. [ 10 ] The intramolecular version of ketonic decarboxylation is often called the Ružička large-ring synthesis (or Ružička cyclization ), named for Lavoslav Ružička who developed the technique from prior methods that could synthesize small cyclic compounds from calcium salts of dicarboxylic acids . [ 11 ] It was the first direct route to cyclic compounds with more than 8 members and was used by Ružička to produce macrocyclic molecules with up to 34 carbon atoms. Targets for such reactions include the naturally occurring fragrances civetone and muscone . The method involved dry distillation of dibasic salts of a dicarboxylic acid, such as thorium, cerium, and yttrium salts, mixed with copper powder to improve heat transfer. This method was low-yielding for large ring sizes and was eventually supplanted by various methods using the high dilution principle . [ 12 ] A more conventional example of intramolecular ketonization is the conversion of adipic acid to cyclopentanone with barium hydroxide . [ 13 ] Ketonization can also refer to the conversion of some enols to the ketone. Such a conversion is the reverse of enolization . [ 14 ]
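The overall transformation 2 R−C(=O)OH → R 2 C=O + CO 2 + H 2 O can be sanity-checked by counting atoms on each side. A small Python sketch, using the acetic acid → acetone case from Friedel's 1858 report (the helper names are this sketch's own):

```python
import re
from collections import Counter

def parse_formula(formula: str) -> Counter:
    """Parse a simple molecular formula like 'C2H4O2' into element counts.
    Handles one- and two-letter element symbols with optional counts."""
    counts = Counter()
    for element, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[element] += int(num) if num else 1
    return counts

def side_counts(formulas):
    """Total element counts over one side of a chemical equation."""
    total = Counter()
    for f in formulas:
        total += parse_formula(f)
    return total

# Ketonic decarboxylation of acetic acid:
#   2 CH3COOH -> (CH3)2CO + CO2 + H2O
reactants = ["C2H4O2", "C2H4O2"]     # two acetic acid molecules
products = ["C3H6O", "CO2", "H2O"]   # acetone, carbon dioxide, water
assert side_counts(reactants) == side_counts(products)
print("balanced:", dict(side_counts(products)))
```

Both sides total C 4 H 8 O 4 , confirming that the stoichiometry in the article balances.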
https://en.wikipedia.org/wiki/Ketonic_decarboxylation
Ketosis is a metabolic state characterized by elevated levels of ketone bodies in the blood or urine. Physiological ketosis is a normal response to low glucose availability. In physiological ketosis, ketones in the blood are elevated above baseline levels, but the body's acid–base homeostasis is maintained. This contrasts with ketoacidosis , an uncontrolled production of ketones that occurs in pathologic states and causes a metabolic acidosis , which is a medical emergency. Ketoacidosis is most commonly the result of complete insulin deficiency in type 1 diabetes or late-stage type 2 diabetes . Ketone levels can be measured in blood, urine or breath and are generally between 0.5 and 3.0 millimolar (mM) in physiological ketosis, while ketoacidosis may cause blood concentrations greater than 10 mM. [ 1 ] Trace levels of ketones are always present in the blood and increase when blood glucose reserves are low and the liver shifts from primarily metabolizing carbohydrates to metabolizing fatty acids. [ 2 ] This occurs during states of increased fatty acid oxidation such as fasting, starvation, carbohydrate restriction, or prolonged exercise. When the liver rapidly metabolizes fatty acids into acetyl-CoA , some acetyl-CoA molecules can then be converted into ketone bodies: acetoacetate , beta-hydroxybutyrate , and acetone . [ 1 ] [ 2 ] These ketone bodies can function as an energy source as well as signalling molecules. [ 3 ] The liver itself cannot utilize these molecules for energy, so the ketone bodies are released into the blood for use by peripheral tissues including the brain. [ 2 ] When ketosis is induced by carbohydrate restriction, it is sometimes referred to as nutritional ketosis. A low-carbohydrate, moderate-protein diet that can lead to ketosis is called a ketogenic diet . Ketosis is well-established as a treatment for epilepsy and is also effective in treating type 2 diabetes. [ 4 ] Normal serum levels of ketone bodies are less than 0.5 mM.
Hyperketonemia is conventionally defined as levels in excess of 1 mM. [ 1 ] Physiological ketosis is the non-pathological (normal functioning) elevation of ketone bodies that can result from any state of increased fatty acid oxidation including fasting, prolonged exercise, or very low-carbohydrate diets such as the ketogenic diet . [ 5 ] In physiological ketosis, serum ketone levels generally remain below 3 mM. [ 1 ] Ketoacidosis is a pathological state of uncontrolled production of ketones that results in a metabolic acidosis , with serum ketone levels typically in excess of 3 mM. Ketoacidosis is most commonly caused by a deficiency of insulin in type 1 diabetes or late stage type 2 diabetes but can also be the result of chronic heavy alcohol use, salicylate poisoning , or isopropyl alcohol ingestion. [ 1 ] [ 2 ] Ketoacidosis causes significant metabolic derangements and is a life-threatening medical emergency. [ 2 ] Ketoacidosis is distinct from physiological ketosis as it requires failure of the normal regulation of ketone body production. [ 6 ] [ 5 ] Elevated blood ketone levels are most often caused by accelerated ketone production but may also be caused by consumption of exogenous ketones or precursors. When glycogen and blood glucose reserves are low, a metabolic shift occurs in order to save glucose for the brain which is unable to use fatty acids for energy. This shift involves increasing fatty acid oxidation and production of ketones in the liver as an alternate energy source for the brain as well as the skeletal muscles, heart, and kidney. [ 2 ] [ 3 ] Low levels of ketones are always present in the blood and increase under circumstances of low glucose availability. For example, after an overnight fast, 2–6% of energy comes from ketones and this increases to 30–40% after a 3-day fast. 
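The millimolar cut-offs above (trace levels below 0.5 mM, hyperketonemia above 1 mM, physiological ketosis generally below 3 mM, ketoacidosis typically above 3 mM) can be expressed as a rough banding. A purely illustrative Python sketch; the band labels are this sketch's own wording, not clinical definitions:

```python
def classify_serum_ketones(mmol_per_l: float) -> str:
    """Rough banding of serum ketone-body levels using the cut-offs quoted
    in the text. Illustrative only; not a diagnostic or clinical tool."""
    if mmol_per_l < 0.5:
        return "normal (trace ketones)"
    elif mmol_per_l < 1.0:
        return "mildly elevated"
    elif mmol_per_l <= 3.0:
        return "physiological ketosis / hyperketonemia"
    else:
        return "possible ketoacidosis range (>3 mM)"

for level in (0.2, 0.8, 2.5, 12.0):
    print(f"{level:5.1f} mM -> {classify_serum_ketones(level)}")
```

Note that the text's categories overlap (hyperketonemia is defined purely by concentration, while ketoacidosis also requires failed regulation), so any single-threshold banding like this is a simplification.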
[ 1 ] [ 2 ] The amount of carbohydrate restriction required to induce a state of ketosis is variable and depends on activity level, insulin sensitivity , genetics, age and other factors, but ketosis will usually occur when consuming less than 50 grams of carbohydrates per day for at least three days. [ 7 ] [ 8 ] Neonates, pregnant women and lactating women are populations that develop physiological ketosis especially rapidly in response to energetic challenges such as fasting or illness. This can progress to ketoacidosis in the setting of illness, although it occurs rarely. The propensity for ketone production in neonates is caused by their high-fat breast milk diet, disproportionately large central nervous system and limited liver glycogen. [ 1 ] [ 9 ] The precursors of ketone bodies include fatty acids from adipose tissue or the diet and ketogenic amino acids . [ 10 ] [ 11 ] The formation of ketone bodies occurs via ketogenesis in the mitochondrial matrix of liver cells. Fatty acids are released from adipose tissue by adipokine signaling when glucagon and epinephrine levels are high and insulin levels are low. High glucagon and low insulin correspond to times of low glucose availability such as fasting. [ 12 ] Fatty acids must be bound to coenzyme A to penetrate into the mitochondria. Once inside the mitochondrion, the bound fatty acids are used as fuel in cells predominantly through beta oxidation , which cleaves two carbons from the acyl-CoA molecule in every cycle to form acetyl-CoA . Acetyl-CoA can enter the citric acid cycle (also called the tricarboxylic acid cycle, or TCA cycle) by aldol condensation with oxaloacetate to form citric acid ; the cycle harvests a very high energy yield per carbon in the original fatty acid. [ 13 ] Acetyl-CoA can be metabolized through the TCA cycle in any cell, but it can also undergo ketogenesis in the mitochondria of liver cells. 
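Since each beta-oxidation cycle removes two carbons from the acyl-CoA chain as acetyl-CoA, the yield from an even-chain fatty acid follows from simple arithmetic. The sketch below is illustrative; the helper name is invented for the example.

```python
def beta_oxidation_yield(carbons: int) -> tuple:
    """Return (cycles, acetyl_coa) for an even-numbered saturated
    fatty acid: each cycle cleaves two carbons from the acyl-CoA
    chain, and the final cycle releases two acetyl-CoA at once,
    so an n-carbon chain gives n/2 acetyl-CoA over n/2 - 1 cycles."""
    if carbons < 4 or carbons % 2:
        raise ValueError("expects an even chain of at least 4 carbons")
    return carbons // 2 - 1, carbons // 2

# Palmitate (C16) undergoes 7 cycles and yields 8 acetyl-CoA:
print(beta_oxidation_yield(16))  # (7, 8)
```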
[ 1 ] When glucose availability is low, oxaloacetate is diverted away from the TCA cycle and is instead used to produce glucose via gluconeogenesis . This utilization of oxaloacetate in gluconeogenesis can make it unavailable to condense with acetyl-CoA, preventing entrance into the TCA cycle. In this scenario, energy can be harvested from acetyl-CoA through ketone production. In ketogenesis, two acetyl-CoA molecules condense to form acetoacetyl-CoA via thiolase . Acetoacetyl-CoA briefly combines with another acetyl-CoA via HMG-CoA synthase to form hydroxy-β-methylglutaryl-CoA . Hydroxy-β-methylglutaryl-CoA forms the ketone body acetoacetate via HMG-CoA lyase . Acetoacetate can then reversibly convert to another ketone body, D-β-hydroxybutyrate , via D-β-hydroxybutyrate dehydrogenase. Alternatively, acetoacetate can spontaneously degrade to a third ketone body (acetone) and carbon dioxide, although the process generates much greater concentrations of acetoacetate and D-β-hydroxybutyrate. The resulting ketone bodies cannot be used for energy by the liver, so they are exported from the liver to supply energy to the brain and peripheral tissues. In addition to fatty acids, deaminated ketogenic amino acids can also be converted into intermediates in the citric acid cycle and produce ketone bodies. [ 11 ] Ketone levels can be measured by testing urine, blood or breath. There are limitations in directly comparing these methods as they measure different ketone bodies. Urine testing is the most common method of testing for ketones. Urine test strips utilize a nitroprusside reaction with acetoacetate to give a semi-quantitative measure based on the color change of the strip. Although beta-hydroxybutyrate is the predominant circulating ketone, urine test strips only measure acetoacetate. Urinary ketones often correlate poorly with serum levels because of variability in excretion of ketones by the kidney, influence of hydration status, and renal function. 
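The enzymatic sequence just described (thiolase, then HMG-CoA synthase, then HMG-CoA lyase, then D-β-hydroxybutyrate dehydrogenase) can be laid out as data to make the substrate-product chain explicit. The representation below is purely illustrative and follows the text above.

```python
# Each step: (enzyme, substrates, product), following the pathway in the text.
KETOGENESIS = [
    ("thiolase",
     ("acetyl-CoA", "acetyl-CoA"), "acetoacetyl-CoA"),
    ("HMG-CoA synthase",
     ("acetoacetyl-CoA", "acetyl-CoA"), "HMG-CoA"),
    ("HMG-CoA lyase",
     ("HMG-CoA",), "acetoacetate"),
    ("D-beta-hydroxybutyrate dehydrogenase",
     ("acetoacetate",), "D-beta-hydroxybutyrate"),
]

# Print each reaction; the product of one step feeds the next.
for enzyme, substrates, product in KETOGENESIS:
    print(f"{' + '.join(substrates)} --[{enzyme}]--> {product}")
```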
[ 1 ] [ 8 ] Finger-stick ketone meters allow instant testing of beta-hydroxybutyrate levels in the blood, similar to glucometers . Beta-hydroxybutyrate levels in blood can also be measured in a laboratory. [ 1 ] Ketosis induced by a ketogenic diet is a long-accepted treatment for refractory epilepsy . [ 14 ] Ketosis can improve markers of metabolic syndrome through reduction in serum triglycerides , elevation in high-density lipoprotein (HDL) as well as increased size and volume of low-density lipoprotein (LDL) particles. These changes are consistent with an improved lipid profile despite potential increases in total cholesterol level. [ 7 ] [ 15 ] The safety of ketosis from low-carbohydrate diets is often called into question by clinicians, researchers and the media. [ 16 ] [ 17 ] [ 18 ] A common safety concern stems from the misunderstanding of the difference between physiological ketosis and pathologic ketoacidosis. [ 6 ] [ 7 ] There is also continued debate whether chronic ketosis is a healthy state or a stressor to be avoided. Some argue that humans evolved to avoid ketosis and should not be in ketosis long-term. [ 18 ] The counter-argument is that there is no physiological requirement for dietary carbohydrates, as adequate energy can be made via gluconeogenesis and ketogenesis indefinitely. [ 19 ] Alternatively, the switching between a ketotic and fed state has been proposed to have beneficial effects on metabolic and neurologic health. [ 3 ] The effects of sustaining ketosis for up to two years are known from studies of people following a strict ketogenic diet for epilepsy or type 2 diabetes; these include short-term adverse effects and potential long-term ones. [ 20 ] However, literature on the longer-term effects of intermittent ketosis is lacking. [ 20 ] Some medications require attention when in a state of ketosis, especially several classes of diabetes medication. 
SGLT2 inhibitor medications have been associated with cases of euglycemic ketoacidosis, a rare state of high ketones causing a metabolic acidosis with normal blood glucose levels. This usually occurs with missed insulin doses, illness, dehydration or adherence to a low-carbohydrate diet while taking the medication. [ 21 ] Additionally, medications used to directly lower blood glucose, including insulin and sulfonylureas , may cause hypoglycemia if they are not titrated prior to starting a diet that results in ketosis. [ 20 ] There may be side effects when changing over from glucose metabolism to fat metabolism. [ 22 ] These may include headache, fatigue, dizziness, insomnia, difficulty in exercise tolerance, constipation, and nausea, especially in the first days and weeks after starting a ketogenic diet. [ 20 ] Breath may develop a sweet, fruity odor via production of acetone, which is exhaled because of its high volatility. [ 7 ] Most reported adverse effects of long-term ketosis are in children, because of its longstanding acceptance as a treatment for pediatric epilepsy. These include compromised bone health, stunted growth, hyperlipidemia , and kidney stones . [ 23 ] Ketosis induced by a ketogenic diet should not be pursued by people with pancreatitis because of the high dietary fat content. Ketosis is also contraindicated in pyruvate carboxylase deficiency , porphyria , and other rare genetic disorders of fat metabolism . [ 24 ] In dairy cattle , ketosis commonly occurs during the first weeks after giving birth to a calf and is sometimes referred to as acetonemia . This is the result of an energy deficit when intake is inadequate to compensate for the increased metabolic demand of lactation. The elevated β-hydroxybutyrate concentrations can depress gluconeogenesis, feed intake and the immune system, as well as affect milk composition. [ 25 ] Point-of-care diagnostic tests can be useful to screen for ketosis in cattle. 
[ 26 ] In sheep , ketosis, evidenced by hyperketonemia with beta-hydroxybutyrate in blood over 0.7 mmol/L, is referred to as pregnancy toxemia . [ 27 ] [ 28 ] This may develop in late pregnancy in ewes bearing multiple fetuses and is associated with the considerable metabolic demands of the pregnancy. [ 29 ] [ 30 ] In ruminants, because most glucose in the digestive tract is metabolized by rumen organisms, glucose must be supplied by gluconeogenesis . [ 31 ] Pregnancy toxemia is most likely to occur in late pregnancy due to metabolic demand from rapid fetal growth and may be triggered by insufficient feed energy intake due to weather conditions, stress or other causes. [ 28 ] Prompt recovery may occur with natural parturition, Caesarean section or induced abortion. Prevention through appropriate feeding and other management is more effective than treatment of advanced stages of pregnancy toxemia. [ 32 ]
https://en.wikipedia.org/wiki/Ketosis
The Kettleman Hills Hazardous Waste Facility is a large (1,600-acre; 650-hectare) hazardous waste and municipal solid waste disposal facility, operated by Waste Management, Inc. The landfill is located at 35°57′45″N 120°00′37″W  /  35.9624°N 120.0102°W  / 35.9624; -120.0102  ( Kettleman Hills Hazardous Waste Facility ) , 3.5 mi (5.6 km) southwest of Kettleman City on State Route 41 in the western San Joaquin Valley , Kings County, California . Environmental justice health issues have been an ongoing concern for the community and waste facility. The Kettleman Hills Hazardous Waste Facility is frequently criticized for its alleged health threats by the local organization 'People for Clean Air and Water' ( El Pueblo para El Aire y Agua Limpio ), and by environmental groups such as Greenaction . Waste Management, Inc. counters that it is a significant local employer, and donates funds to the local community, including Kettleman City Elementary School. The facility manager, Bob Henry, pointed out in a 2007 newspaper interview that it is periodically inspected by as many as nine federal, state, and local agencies. In 2007, Maricela Mares-Alatorre, a leader of 'People for Clean Air and Water,' was quoted as saying: "Are we supposed to be happy that they're getting more trash? Donations don't buy you health." [ 1 ] In June 1975 Kings County, CA issued a permit to McKay Trucking for the disposal of oil field wastes in the Kettleman Hills area. Subsequently, in 1979, the land was bought by Chemical Waste Management, Inc., a subsidiary of Waste Management, Inc . [ 2 ] [ 3 ] In 1982, Chemical Waste Management, Inc. received a permit for the processing of PCBs (known carcinogens), and soon after a Class I waste disposal permit that allowed for the processing of nearly any type of hazardous waste. 
[ 3 ] At the time, the establishment of this facility and its repurposing to a full Class I hazardous waste disposal site went unnoticed by the residents of Kettleman City, less than 4 miles from the facility. [ 4 ] Community opposition and awareness of the facility emerged in the early 1980s, motivated in part by articles published in local newspapers about multimillion-dollar fines levied against Chemical Waste Management, Inc. for violating environmental laws. [ 4 ] Because of its low average socioeconomic standing and 95% Latino population, Kettleman City and the nearby waste disposal site attracted the attention of the environmental justice movement . Advocates argued that Chemical Waste Management, Inc. and the local government were practicing environmental racism , pointing out that the communities nearest to all three major waste disposal sites in California were poor and almost entirely non-white. [ 5 ] Environmental justice groups also cited the 1984 Cerrell Report by the California Waste Management Board that suggested placing waste disposal sites and incinerators primarily in the communities that would be least able to resist. [ 4 ] In 1988, a phone call from Greenpeace organizer Bradley Angel revealed to the Kettleman community that Chemical Waste Management, Inc. was planning to add a toxic waste incinerator to the disposal facility. Several residents attended a preliminary hearing on the incinerator and were concerned that a potential health risk would be placed so close to their town. [ 5 ] This hearing marked the beginning of a community-led grassroots environmental justice movement in Kettleman City, as well as the beginning of the ongoing conflict between the people of Kettleman City and Chemical Waste Management, Inc. In 1989, more than 200 residents from Kettleman City showed up to testify at the Kings County Planning Commission hearing on the incinerator project. 
[ 4 ] When the incinerator project was approved, the residents of Kettleman City filed a lawsuit against Chemical Waste Management, Inc., alleging that their environmental impact report did not comply with the California Environmental Quality Act . [ 6 ] A trial court ruled that Chemical Waste Management, Inc. had not sufficiently analyzed the environmental and health impacts of the incinerator, and had failed to provide a full Spanish version of their impacts report, preventing participation from Kettleman City residents. The company appealed the court's decision, but dropped the plans for the incinerator before an appeal decision had been reached. [ 6 ] Following the controversy over the incinerator, the waste disposal site continued to operate and reached its economic peak of operation during the early 1990s. [ 7 ] Although the people of Kettleman City continued to protest against Chemical Waste Management, there were no major environmental justice battles in Kettleman City during these years, apart from the continued criticism of activists that the location of hazardous waste exhibited structural racism . [ 8 ] The 1990s also saw the Kettleman Hills facility facing regular scrutiny and fines from the EPA and the TSCA enforcement committee. The EPA reports that the facility committed environmental violations every year between 1990 and 1995, as well as in 1998. [ 9 ] In 2004, a TSCA compliance inspection of the Kettleman Hills Facility reached the conclusion that there had been no monitoring of certain PCBs since 1995, and that the failure to dispose of PCBs within the legal time limit had gone unreported on two occasions. That same report concluded that from 2002-2003 there were 16 spills of hazardous waste, and six instances of waste being stored in unsealed containers. 
[ 3 ] The facility also underwent numerous permit expansions under Class I and Class III waste disposal during the mid to late 2000s, many of which were accompanied by public hearings that were well represented by Kettleman City community leaders. [ 3 ] The year 2009 marked the resurgence of the environmental justice movement in Kettleman City, as well as the renewal of the conflict between the people of Kettleman City and Chemical Waste Management, Inc. The environmental justice advocacy group Greenaction released a statement alleging a possible correlation between the presence of the waste disposal facility and the recent high number of birth defects in the region (14 cases in 2007–2008, five of which involved cleft lip and/or palate ). [ 10 ] [ 11 ] The residents of Kettleman City voiced opposition to Chemical Waste Management, Inc.'s facility with a community-led movement. [ 12 ] The Kings County Public Health Officer was skeptical, stating that he understood the community's concern, but believed the birth defects were a "statistical anomaly". However, the state promised to conduct a wide-ranging investigation into the high number of birth defects as well as into the water quality and the frequency of asthma and cancer. [ 10 ] Not long after the birth defects controversy began, Chemical Waste Management, Inc. applied for a permit to expand the facility, and was met with heavy resistance from the Kettleman City community during the 60-day comment period. [ 13 ] The Cal EPA granted them the permit to expand, pending the findings of the birth defects investigation, on December 21, 2009. [ 3 ] The initial findings of the Cal EPA and the California Department of Public Health (CDPH) were released in community meetings in early 2010. The report found the birth defect data inconclusive due to a small sample size, and no significant sources of environmental exposure were found in the city. 
[ 3 ] However, the pesticides found in homes warranted further investigation. [ 3 ] On January 20, soon after the initial findings had been released, Greenaction filed a lawsuit against the Kings County Board of Supervisors. Greenaction alleged that in light of the inconclusive and possibly flawed birth defect report, the approval of Chemical Waste Management, Inc.'s expansion permit violated the CEQA and state civil rights laws. [ 14 ] That same week, California Senator Barbara Boxer issued a statement saying that "the Kettleman Hills Hazardous Waste Facility should not be expanded until there is more conclusive results on the potential health impacts on the local community." [ 15 ] Within a week of the lawsuit, Governor Schwarzenegger ordered the Cal EPA and the CDPH to launch a full investigation into the correlation between the disposal facility and the health issues experienced by the residents of Kettleman City. The full 160-page report was released on November 22, 2010, and found that none of the possible exposures to hazardous materials could be correlated to an increase in birth defects. [ 16 ] The report's recommendations were to continue to pursue a safer source of drinking water for Kettleman City, as well as to continue investigation into the effects of the trace presence of contaminants. [ 16 ] A further update was released in 2011 stating that the level of birth defects in Kettleman City had dropped below their 2008 levels. [ 17 ] During the birth defects controversy, Chemical Waste Management, Inc. was fined $402,000 on November 29, 2011 for violating the TSCA. [ 9 ] The investigation concluded that they had been using improper disposal techniques for carcinogenic PCBs and other hazardous material wastes without treatment. [ 9 ] [ 18 ] Waste Management, Inc. was further ordered to set aside $600,000 for the purpose of physical and operational improvements to the facility. 
[ 18 ] On July 2, 2013, the California Department of Toxic Substances Control (DTSC) released a draft decision on a permit modification that would allow Waste Management, Inc. to increase the capacity of the hazardous waste landfill. This modification would add about 14 landfill acres and increase the capacity by 50 percent, or approximately 5 million cubic yards. The expansion would add an estimated 4.6 billion pounds of toxic waste to the site. [ 19 ] On May 21, 2014, the DTSC issued a final permit approving the company's planned expansion to allow an additional 5.2 million metric tons of capacity. [ 20 ] [ 21 ] [ 22 ] Appeals of the permit issuance were filed by Greenaction together with People for Clean Air and Water of Kettleman City and by the Center for Race, Poverty and the Environment. On October 14, 2014, the Department of Toxic Substances Control denied both appeals. Bradley Angel of Greenaction was quoted as saying that his group would continue to challenge the permit with a lawsuit and by filing administrative civil rights complaints in both state and federal courts. [ 23 ] Waste Management, Inc. was fined in March 2013 for failing to report more than 70 toxic waste spills at the Kettleman Hills facility to the DTSC, which the company claimed was irrelevant to its prospect for expansion. [ 19 ] [ 24 ] [ 25 ] It was later extended to October 11, 2013. [ 26 ] The Resource Conservation and Recovery Act (RCRA) was enacted in 1976 to govern the disposal of solid and hazardous waste. According to the Environmental Protection Agency , the goals of the RCRA are to focus on protecting human health and the environment from waste disposal as well as conserving natural resources and reducing waste generation. [ 27 ] The “cradle to grave” provision of Subtitle C established a management program which required a strict permitting and control process for the creation and disposal of hazardous waste. 
Under this program, treatment, storage and disposal facilities (TSDFs) must meet specific criteria in order to be permitted or be allowed interim status to continue operating while un-permitted. The Kettleman Hills Hazardous Waste Facility is governed under the RCRA and thus is subject to these permitting requirements. As a result, the Kettleman Hills Facility has been required to request a modification of its current permit from the California Department of Toxic Substances Control in order to expand the facility by the requested 14 acres to make room for the disposal of more waste. [ 28 ] The Toxic Substances Control Act (TSCA), passed in 1976, regulates the sale and use of chemicals in the U.S., with a particular focus on polychlorinated biphenyls (PCBs). Its effect on the Kettleman Hills Hazardous Waste Facility is similar to that of the RCRA: EPA permitting is required in order to make the requested 14-acre expansion into a site which allows for the dumping of PCBs. [ 29 ] During this permitting process, the TSCA PCB permit application requires the EPA to issue a public notice once a decision has been reached, hold a 60-day formal comment period for the public, and hold a formal public hearing. [ 30 ] In February 1994, Executive Order 12898 was implemented, which addressed issues of minority and low-income groups bearing the burden of disproportionate negative environmental and health effects. [ 31 ] It states that “each Federal Agency shall make achieving environmental justice part of its mission by identifying and addressing as appropriate, disproportionately high and adverse human health or environmental effects of its programs, policies, and activities on minority populations and low income populations.” [ 32 ] It also set forth a series of steps for assisting and aiding federal organizations in addressing environmental injustice. 
Requirements of the executive order, such as translating documents into the native language of a community, have been key issues for groups such as Greenaction in the dispute with the Kettleman Hills Hazardous Waste Facility. Issues of environmental justice are pertinent to the largely minority Kettleman City community, as studies show that Black respondents and those at lower educational levels, and to a lesser degree lower income levels, were significantly more likely to live within a mile of a polluting facility. [ 33 ] Greenaction , a San Francisco-based environmental justice organization, has been working with the local community to document the cases of infant deaths and believes there are issues of environmental injustice due to the city's demographics. [ 34 ] Within Kettleman City 's population that is 25 years or older, only 30% have completed high school or its equivalent, and 56.4% have less than a 9th grade education. [ 35 ] The majority of residents are from Mexico and are Spanish-speaking. In the 2000 Census , the median household income was $22,409 and 43.7% of the population was living below the poverty level. Compared to the US population, Kettleman City residents are younger, and are more likely to rent rather than own their homes. [ 36 ] Environmental justice is principally concerned with people's and communities' entitlement to equal protection of environmental and public health laws and regulations. [ 37 ] There are three separate perspectives on why environmental injustices exist: economic, sociopolitical, and racial. However, the three categories are not mutually exclusive, considering that, for example, economic motives may coincide with sociopolitical factors. [ 37 ] Economic explanations argue that industry is not intentionally discriminating against racial, ethnic, or poor minority groups; industry is only trying to maximize profits, and siting a new facility in areas where the land is cheap serves to maximize profits. 
Industrial labor pools and manufacturing materials tend to be cheaper in such areas as well. The racial and socioeconomic composition of a community may subsequently change with the addition of a waste facility. The ensuing negative health and environmental impacts result in a flight of affluent residents to more desirable neighborhoods, subsequently driving land values even lower. Thus, the depression of property values results in an influx of poor people and people of color as housing becomes more affordable. Sociopolitical explanations argue that industry and government seek the path of least resistance when siting new hazardous waste facilities. In an effort to avoid controversy, sites are located in areas where communities are least capable of mounting an opposition. More plainly, facilities are located in areas where communities are not able to politically organize and oppose the new facility. Further, communities without a high degree of pre-existing social capital as well as low levels of voting behavior, home ownership, wealth, and disposable income are more vulnerable to high concentrations of polluting facilities than other communities. [ 37 ] Racial discrimination explanations illustrate that although present-day siting decisions may be based on a rational desire to place new facilities in areas that have been zoned industrial, these wind up disproportionately in communities composed primarily of people of color because of past discriminatory decisions about where to draw industrial zones. Current decisions that may seem facially neutral may have discriminatory outcomes because of past discriminatory actions. [ 37 ] In 2008, Waste Management applied to expand its landfill for the purpose of storing polychlorinated biphenyl (PCB) waste. The United States Environmental Protection Agency (EPA) conducted an investigation of a variety of invertebrates, fish, reptiles, mammals, and plants in order to determine the environmental impact, if any. 
The EPA's July 2009 report found that two species may be affected by the expansion: the San Joaquin kit fox and the blunt-nosed leopard lizard . [ 38 ] Furthermore, in 1984, Waste Management discovered groundwater contamination underneath two formerly unlined ponds. Pond P-09 was lined in 1985 and cleanup began in 1988. Pond P-12 stopped receiving waste in 1985 and was closed in 1997. Groundwater cleanup began in 1985. Waste Management has 48 monitoring wells in the surrounding and impacted areas as of December 2012. The California Department of Toxic Substances Control (DTSC) ordered the facility to clean up spills of PCBs around the storage building, and as of February 2012, DTSC reported that the soil cleanup and removal was the final step in protecting human health and the environment as there was no longer a threat. [ 39 ] Under the direction of the Kings County Planning Agency, in March 2008 CH2M Hill , an independent engineering and consulting firm, prepared an impact report aimed at identifying and evaluating potentially significant adverse environmental impacts associated with the proposed expansion of the Kettleman Hills Facility. [ 40 ] On January 29, 2010, Governor Arnold Schwarzenegger directed the Cal/EPA and California Department of Public Health (CDPH) to assess the potential for a relationship between environmental contaminants and health defects. [ 41 ] This was spurred by community concerns regarding a recent outbreak of birth defects . The Chemical Waste Management Kettleman Hills facility was found not to be the single direct causal factor in the recent birth defects. The Cal/EPA tested the air, soil, and water at agricultural operations, the Kettleman Hills Hazardous Waste Facility, the Kettleman City Elementary School, and possible illegal dump sites for 27 pesticides , air pollutants , arsenic , lead , soil and soil gas contaminants . 
[ 42 ] The Cal/EPA found higher-than-normal levels of arsenic in tap and well water, low levels of lead in town and school wells, benzene in the air near a treatment unit that removes chemicals from well water, and high levels of a banned pesticide in one home's yard. [ 36 ] The community of Kettleman City has suspected negative health effects caused by the waste facility multiple times. [ 36 ] [ 43 ] In 2007, the surrounding community in Kettleman City voiced their concerns regarding potential health effects in response to Chemical Waste Management's application for a federal permit renewal in the same year. The permit was to allow the facility to continue storing and disposing of PCB waste at the facility. The EPA responded by ordering the facility to conduct a PCB congener study. The facility collected soil, vegetation, and air samples at the perimeter of the CWM Facility to be tested in a State-certified laboratory. The findings concluded that concentrations of PCB congeners measured in soil samples were 2,000 times below the EPA's risk-based residential clean-up levels. The risk of health impacts from soil, vegetation, and air was within an acceptable range, comparable to other rural areas without any known PCB activity. Concentrations of PCB congeners measured in soils, vegetation and air do not adversely affect ecological species, and there is no evidence that PCB congeners would migrate off-site at concentrations that would adversely affect the neighboring environment and communities. [ 44 ] In 2009, the community suspected birth defects were caused by the waste facility. [ 36 ] However, the CDPH has not determined the Kettleman waste facility to be the cause of health effects. [ 36 ] The Kings County Department of Public Health stated that it would continue its investigation but that its preliminary determination was "that to the extent that a cluster may exist, it is most likely a random event unrelated to any environmental exposure unique to Kettleman City." 
[ 45 ] Even though “scientifically rigorous studies of causes of human birth defects generally require evaluation of hundreds of birth defects or more", the CDPH's objectives were limited and largely focused on evaluating risk factors because of the fewer than a dozen cases of birth defects in Kettleman City. [ 36 ] The CDPH investigated a total of 11 eligible children born with major structural birth defects between 2007 and March 31, 2010 to mothers who had lived in the Kettleman City area during their pregnancies. Through a mixed-methods approach consisting of interviews and supplementary medical history reports, the CDPH did not find a specific cause or environmental exposure among the mothers that would explain the increase in the number of children born with birth defects. It was found that some children had multiple abnormalities, while others had single defects. All the birth defects represented different underlying conditions, but a few shared some features. [ 36 ] None of the mothers interviewed during the investigation used tobacco , alcohol , or other drugs that could cause the birth defects. [ 46 ] Furthermore, the mothers were all in good health and did not have suspect medical histories. [ 42 ] The conclusions of the investigation have been contested by Greenaction and the local community group "El Pueblo para El Aire y Agua Limpio" ("People for Clean Air and Water"). [ 47 ] Bradley Angel of Greenaction in Kettleman City argues that the investigation was not thorough enough, neglecting to test blood and tissue samples of those affected. [ 47 ] He also notes that the investigation did not adequately test for pesticides inside homes. [ 47 ] The reported defects were also evident in areas around California and elsewhere. The findings concluded that this, coupled with the lack of any shared unusual exposures, indicated the birth defects were not caused by a common factor. 
[ 43 ] A California Department of Public Health (CDPH) update for 2009-2011 concluded that the rates of birth defects in Kettleman City in 2010 and 2011 appear to be returning to the lower rates seen prior to 2008. CDPH reviewed the Birth Defects Registry data from Kettleman City and still did not find a common cause for the birth defects. [ 48 ] The issue in Kettleman City has resulted in much discussion over the financial implications for the county, city, and company. The Kettleman Hills Hazardous Waste Facility initially requested the expansion because its 10.7 million-cubic-yard site was nearly full. Lily Quiroa, a spokeswoman for Waste Management, Inc., has stated that as a result of the denial of expansion, “we have laid off more than two-thirds of our employees. There has been a big impact on our business here, and it has had an impact in the economy in this county." In 2012, it was reported that the company paid approximately $312,000 in fines as a result of not reporting waste spills that had occurred. [ 49 ] Aside from the company, the county also stands to gain $1.5 million in fees from the truckloads of waste deposited annually, as well as $17.5 million to the economy of the county from the facility's operation. The residents of Kettleman City have been promised financial benefits as well if the expansion is passed. Chemical Waste Management, Inc. has offered to provide numerous donations as well as pay off the remaining debt that the Kettleman City Community Services District owes on its water system, an estimated $552,000, which would allow the community to receive grant money from the state. [ 50 ] Other such examples of donations include $150,000 to create safe pedestrian crossing spaces for residents to use on Highway 41 . [ 49 ] Maricela Mares-Alatorre opposes such funding, however, stating that “I know Chem Waste likes to frame it as they’re being good neighbors, but the truth is that they’re buying good will. 
If you give me a choice between my good will and the health of the community, the health of my family, I’m going to choose the health of my family.” [ 51 ]
https://en.wikipedia.org/wiki/Kettleman_Hills_Hazardous_Waste_Facility
A ketyl group in organic chemistry is an anion radical that contains a group R 2 C − O • . It is the product of the one-electron reduction of a ketone. Another mesomeric structure has the radical position on carbon and the negative charge on oxygen. [ 1 ] Ketyls can be formed as radical anions by one-electron reduction of carbonyls with alkali metals . Sodium and potassium metal reduce benzophenone in THF solution to the soluble ketyl radical. Ketyls are also invoked as intermediates in the pinacol coupling reaction . The ketyl radical derived from the reaction of sodium and benzophenone is a common laboratory desiccant . Ketyls react quickly with water, peroxides, and oxygen. Thus, the deep purple coloration qualitatively indicates dry, peroxide-free, and oxygen-free conditions. The method of drying remains popular in many laboratories because it produces very pure solvent quickly. An alternative option for chemists interested only in water-free solvent is the use of molecular sieves . This is a much safer method than using an alkali metal still and produces solvent as dry as sodium ketyl (though not as dry as potassium, or potassium–sodium alloy), but it takes longer. [ 2 ] Sodium benzophenone ketyl reacts with oxygen to give sodium benzoate and sodium phenoxide . Potassium–benzophenone ketyl is used as a reductant for the preparation of organoiron compounds. [ 3 ] When excess alkali metal is present, benzophenone ketyl may be reduced to the ketone dianion, resulting in a color transformation from deep blue to purple: [ 4 ] : 899–900
https://en.wikipedia.org/wiki/Ketyl
In fluid dynamics , the Keulegan–Carpenter number , also called the period number , is a dimensionless quantity describing the relative importance of the drag forces over inertia forces for bluff objects in an oscillatory fluid flow , or similarly, for objects that oscillate in a fluid at rest. For small Keulegan–Carpenter numbers inertia dominates, while for large numbers the ( turbulence ) drag forces are important. The Keulegan–Carpenter number K C is defined as: [ 1 ] K C = V T / L , where V is the amplitude of the flow velocity oscillation (or, for an object oscillating in a fluid at rest, the amplitude of the object's velocity), T is the period of the oscillation, and L is a characteristic length scale of the object, for instance the diameter of a cylinder under wave loading. The Keulegan–Carpenter number is named after Garbis H. Keulegan (1890–1989) and Lloyd H. Carpenter. A closely related parameter, also often used for sediment transport under water waves , is the displacement parameter δ : [ 1 ] δ = A / L , with A the excursion amplitude of fluid particles in oscillatory flow and L a characteristic diameter of the sediment material. For sinusoidal motion of the fluid, A is related to V and T as A = VT/(2π) , and: δ = K C /(2π). The Keulegan–Carpenter number can be directly related to the Navier–Stokes equations , by looking at characteristic scales for the acceleration terms: the convective acceleration scales as V 2 / L , while the local (unsteady) acceleration scales as V / T . Dividing these two acceleration scales gives the Keulegan–Carpenter number. A somewhat similar parameter is the Strouhal number , in form equal to the reciprocal of the Keulegan–Carpenter number. The Strouhal number gives the vortex shedding frequency resulting from placing an object in a steady flow, so it describes the flow unsteadiness as a result of an instability of the flow downstream of the object. Conversely, the Keulegan–Carpenter number is related to the oscillation frequency of an unsteady flow into which the object is placed.
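The definitions above amount to simple ratios. A minimal sketch in Python (the function names and sample values are illustrative, not from the article; the formulas are K C = VT/L and δ = A/L with A = VT/(2π) for sinusoidal motion):

```python
import math

def keulegan_carpenter(V, T, L):
    """Keulegan-Carpenter number K_C = V*T/L, for velocity amplitude V,
    oscillation period T, and characteristic length L (e.g. a cylinder
    diameter)."""
    return V * T / L

def displacement_parameter(V, T, L):
    """Displacement parameter delta = A/L, using the sinusoidal-motion
    excursion amplitude A = V*T/(2*pi); equivalently K_C/(2*pi)."""
    return (V * T / (2 * math.pi)) / L

# Illustrative values: a 1 m/s velocity amplitude with an 8 s wave
# period past a 0.5 m diameter pile.
KC = keulegan_carpenter(1.0, 8.0, 0.5)         # 16.0 -> drag-dominated
delta = displacement_parameter(1.0, 8.0, 0.5)  # = K_C/(2*pi)
```

Note that for large K C (as here), the drag forces dominate, consistent with the regimes described above.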
https://en.wikipedia.org/wiki/Keulegan–Carpenter_number
The Kew Rule was used by some authors to determine the application of synonymous names in botanical nomenclature up to about 1906, [ 1 ] but was and still is contrary to codes of botanical nomenclature including the International Code of Nomenclature for algae, fungi, and plants . Index Kewensis , a publication that aimed to list all botanical names for seed plants at the ranks of species and genus , used the Kew Rule until its Supplement IV was published in 1913 (prepared 1906–1910). [ 1 ] The Kew Rule applied rules of priority in a more flexible way, so that when transferring a species to a new genus, there was no requirement to retain the epithet of the original species name, and future priority of the new name was counted from the time the species was transferred to the new genus. [ 2 ] The effect has been summarized as "nomenclature used by an established monographer or in a major publication should be adopted". [ 3 ] This is contrary to the modern article 11.4 of the Code of Nomenclature. [ 4 ] The first discussion in print of what was to become known as the Kew Rule appears to have occurred in 1877 between Henry Trimen and Alphonse Pyramus de Candolle . [ 5 ] Trimen did not think it was reasonable for older names discovered in the literature to destabilize the nomenclature that had been well accepted: [ 6 ] Probably all botanists are agreed that it is very desirable to retain when possible old specific names, but some of the best authors do not certainly consider themselves bound by any generally accepted rule in this matter. Still less will they be inclined to allow that a writer is at liberty, as M. de Candolle thinks, to reject the specific appellations made by an author whose genera are accepted, in favour of older ones in other genera. It will appear to such that to do this is to needlessly create in each case another synonym. 
The first botanical code of nomenclature that declared itself to be binding was the 1906 publication Règles internationales de la nomenclature botanique adoptées par le Congres International de Botanique de Vienne 1905 that followed from the 1905 International Botanical Congress . [ 5 ] The Kew Rule was outlawed by this code. The end of the Kew Rule brought about considerable upheaval in botanical nomenclature. Many new species names were coined to resurrect older epithets, for example, in 1917 Willis Jepson wrote: [ 7 ] "The plant so long known as Brodiaea grandiflora Smith ... [was] first published as Hookera coronaria Salisbury (1806). The correct name, then, is Brodiaea coronaria Jepson, n. comb. " Names that had previously been conserved to improve the stability of well-known plant names often now no longer required conservation, and other names that had been formed using the Kew Rule and had become well known, were illegitimate. The entire previous list of conserved and rejected names was consequently replaced in 1959 with a reworked list. [ 8 ] Previously overlooked botanical literature has continued to yield new examples of forgotten older names for more than 100 years since the Kew Rule was banished from the International Code of Nomenclature. [ 2 ]
https://en.wikipedia.org/wiki/Kew_Rule
A key is a component of a musical instrument , the purpose and function of which depends on the instrument. However, the term is most often used in the context of keyboard instruments , in which case it refers to the exterior part of the instrument that the player physically interacts with in the process of sound production. On instruments equipped with tuning machines, such as guitars or mandolins , a key is part of a tuning machine . It is a worm gear with a key-shaped end used to turn a cog, which, in turn, is attached to a post which winds the string. The key is used to make pitch adjustments to a string. With other instruments, zithers and drums , for example, a key is essentially a small wrench used to turn a tuning machine or lug. On woodwind instruments such as a flute or saxophone , keys are finger-operated levers used to open or close tone holes , the operation of which effectively shortens or lengthens the resonating tube of the instrument. By doing so, the player is able to physically manipulate the range of resonating sound frequencies that the tube, altered into various “effective” lengths by specific key configurations, is capable of producing. [ 1 ] The keys on the keyboard of a pipe organ also open and close various mechanical valves. However, rather than directly influencing the path the airflow takes within a single tube, the configuration of these valves instead determines through which of the numerous separate organ pipes, each of which is tuned to a specific note, the air stream flows. [ 2 ] The keys of an accordion direct the air flow from manually operated bellows across various tuned vibrating reeds. On other keyboard instruments , a key may be a lever which mechanically triggers a hammer to strike a group of strings, as on a piano , or an electric switch which energizes an audio oscillator, as on an electronic organ or a synthesizer .
Piano keys have often been made from ivory over the instrument's history, such that a common phrase for playing the piano has been to "tickle the ivories". [ 3 ] [ 4 ] [ 5 ] This article relating to musical instruments is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Key_(instrument)
Key Biodiversity Areas (KBA) are geographical regions that have been determined to be of international importance for biodiversity conservation, using globally standardized criteria published by the IUCN as part of a collaboration between scientists, conservation groups, and government bodies across the world. [ 1 ] The purpose of Key Biodiversity Areas is to identify regions that are in need of protection by governments or other agencies. [ 1 ] KBAs extend the Important Bird Area (IBA) concept to other taxonomic groups and are now being identified in many parts of the world. Examples of types of KBAs include Important Plant Areas (IPAs), Ecologically and Biologically Significant Areas (EBSAs) in the High Seas, Alliance for Zero Extinction (AZE) sites, Prime Butterfly Areas, Important Mammal Areas and Important Sites for Freshwater Biodiversity, with prototype criteria developed for freshwater molluscs and fish and for marine systems. The determination of KBAs often brings onto the conservation agenda sites that had not previously been identified as needing protection, due to the nature of the two non-exclusive criteria used to determine them: vulnerability and irreplaceability. [ 2 ] The KBA global standard [ 3 ] was published in 2016. The criteria for what can qualify as a KBA are one or more of the following: The KBA standard has been applied around the globe to over 16,000 areas covering a total of 21,000,000 km 2 , [ 6 ] which can be viewed in map form. [ 7 ] It is used by scientists to assess fragmentation and habitat loss in vulnerable areas, [ 8 ] [ 9 ] [ 10 ] [ 11 ] and is generally seen as an effective method of identifying areas in need of protection. [ 12 ] Some criticism involves the scale of KBAs, such as the use of global data to set parameters for single regions or ecosystems, as well as the lack of involvement of local governments and other authorities, especially in developing countries, in their implementation.
[ 13 ] Other issues raised include the defining of conservation strictly in terms of location, the naming of single species as important to the environment rather than the interconnectivity between species, [ 13 ] and the fact that the approach does not prioritize areas that are dense in biological diversity. [ 14 ] Some argue, however, that KBAs are meant to be a "focused response to a central problem in conservation" [ 15 ] rather than a catch-all solution. The criteria may also be too broad, as one analysis found that between 26% and 68% of all terrestrial land on Earth could be classified as a KBA. [ 14 ]
https://en.wikipedia.org/wiki/Key_Biodiversity_Area
Key Performance Parameters (KPPs) specify the critical performance goals in a United States Department of Defense (DoD) acquisition under the JCIDS process. [ 1 ] [ 2 ] The JCIDS intent for KPPs is to have a few stated measures such that if the acquisition product fails to meet any of them, the program is considered a failure, [ 3 ] per instruction CJCSI 3170.01H – Joint Capabilities Integration and Development System. The mandates require that 3 to 8 KPPs be specified for a United States Department of Defense major acquisition, known as Acquisition Category 1 or ACAT-I. The term is defined as "Performance attributes of a system considered critical to the development of an effective military capability. A KPP normally has a threshold representing the minimum acceptable value achievable at low-to-moderate risk, and an objective, representing the desired operational goal but at higher risk in cost, schedule, and performance. KPPs are contained in the Capability Development Document (CDD) and the Capability Production Document (CPD) and are included verbatim in the Acquisition Program Baseline (APB). KPPs are considered Measures of Performance (MOPs) by the operational test community." [ 4 ] Commentary notes that metrics must be chosen carefully, and that they are hard to define and apply throughout a project's life cycle. [ 5 ] It is also desirable that a program's KPPs avoid repetition and be applicable across different programs, such as fuel efficiency. [ 6 ] Higher numbers of KPPs are associated with program and schedule instability. [ 7 ]
https://en.wikipedia.org/wiki/Key_Performance_Parameters
In evolutionary biology , a key innovation , also known as an adaptive breakthrough or key adaptation , is a novel phenotypic trait that allows subsequent radiation and success of a taxonomic group. Typically these innovations bring new abilities that allow the taxa to rapidly diversify and invade niches that were not previously available. The phenomenon helps to explain how some taxa are much more diverse and have many more species than their sister taxa . The term was first used in 1949 by Alden H. Miller, who defined it as "key adjustments in the morphological and physiological mechanism which are essential to the origin of new major groups", [ 1 ] although a broader, contemporary definition holds that "a key innovation is an evolutionary change in individual traits that is causally linked to an increased diversification rate in the resulting clade". [ 2 ] The theory of key innovations has come under attack because it is hard to test in a scientific manner, but there is evidence to support the idea. [ 3 ] The mechanism by which a key innovation leads to taxonomic diversity is not certain, but several hypotheses have been suggested. [ 2 ] A key innovation may, by increasing the fitness of individuals of the species, make extinction less likely and allow the taxa to expand and speciate . [ 2 ] Latex and resin canals in plants are used to deter predators by releasing a sticky secretion when punctured, which can immobilise insects, and some contain toxic or foul-tasting substances. They have evolved independently approximately 40 times and are considered a key innovation. By increasing the plant's resistance to predation, the canals increase the species' fitness and allow it to escape being eaten, at least until the predator evolves an ability to overcome the defence.
During the period of resistance the plants are less likely to become extinct and can diversify and speciate, and as such taxa with latex and resin canals are more diverse than their canal-lacking sister taxa. [ 2 ] A key innovation may allow a species to invade a new region or niche and thus be freed from competition, allowing subsequent speciation and radiation . A classic example of this is the fourth cusp of mammalian molars , the hypocone , which allowed early mammalian ancestors to effectively digest their generalised diet. The precursors to this, the triconodont teeth of reptiles, were adapted for gripping and slicing rather than chewing. The evolution of the hypocone and flat molars later allowed animals to adapt to a herbivorous diet, as they could be used to break down tough plant matter through grinding. The evolution of this ability led to mammals being able to adapt to a huge variety of food sources, [ 4 ] and allowed early mammals to invade novel niches through the evolution of specialised herbivores, which experienced relative success during the middle Eocene . Specialising for a plant-based diet offered early herbivores sufficient resources to radiate, as energy was not lost to higher trophic levels and few competitors existed at the time. [ 4 ] A key innovation may result in reproductive isolation, whereby those individuals with the innovation no longer breed with those without. This can lead to rapid speciation as the two populations separate and accumulate mutations. The nectar spurs in Aquilegia , a diverse genus of flowering plant, are considered a key innovation for this reason. Nectar spurs aid in pollination by placing the nectar further from the stamen, ensuring that insect or bird pollinators pick up pollen as they access it. These led to rapid speciation within the genus, as plants and their pollinators can become specialised to each other, i.e.
a species of pollinator exclusively feeds from a species of plant, and thus plant populations could easily become reproductively isolated from one another. In addition, the shape and size of the nectar spur can evolve in response to pollinator adaptations, developing a co-evolutionary relationship. The genus Aquilegia has over 50 species. [ 3 ] As an evolutionary theory, the idea of key innovations has come under critical scrutiny because it is hard to test. Identification depends on finding a correlation between the innovation and increased diversity by comparing sister taxa, but this does not prove causality or isolate other causes of diversity, such as stochasticity or habitat, and it is possible to 'cherry pick' examples that fit the hypothesis. [ 5 ] In addition, the retrospective identification of key innovations offers little in terms of understanding the processes and pressures that resulted in the adaptation, and may identify a very complex evolutionary process as a single event. An example of this is the evolution of avian flight, which was identified as a key innovation in 1963 by Ernst Mayr . [ 6 ] However, separate evolutionary changes had to occur throughout the physiology of the avian ancestor, including the enlargement of the cerebellum and the enlargement and ossification of the sternum . These adaptations arose separately, and millions of years apart, [ 5 ] not in one step.
https://en.wikipedia.org/wiki/Key_innovation
Keyhole Markup Language ( KML ) is an XML notation for expressing geographic annotation and visualization within two-dimensional maps and three-dimensional Earth browsers. KML was developed for use with Google Earth , which was originally named Keyhole Earth Viewer. It was created by Keyhole, Inc , which was acquired by Google in 2004. KML became an international standard of the Open Geospatial Consortium in 2008. [ 1 ] [ 2 ] Google Earth was the first program able to view and graphically edit KML files, but KML support is now available in many GIS software applications, such as Marble , [ 3 ] QGIS , [ 4 ] and ArcGIS . [ 5 ] The KML file specifies a set of features (place marks, images, polygons, 3D models, textual descriptions, etc.) that can be displayed on maps in geospatial software implementing the KML encoding. Every place has a longitude and a latitude . Other data can make a view more specific, such as tilt, heading, or altitude, which together define a "camera view" along with a timestamp or timespan. KML shares some of the same structural grammar as Geography Markup Language (GML). Some KML information cannot be viewed in Google Maps or Mobile. [ 6 ] KML files are very often distributed as KMZ files, which are zipped KML files with a .kmz extension. The contents of a KMZ file are a single root KML document and optionally any overlays, images, icons, and COLLADA 3D models referenced in the KML including network-linked KML files. The root KML document by convention is a file named "doc.kml" at the root directory level, which is the file loaded upon opening. By convention the root KML document is at root level and referenced files are in subdirectories (e.g. images for overlay). [ 7 ] An example KML document is: The MIME type associated with KML is application/vnd.google-earth.kml+xml ; the MIME type associated with KMZ is application/vnd.google-earth.kmz . 
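The example KML document referred to above is not reproduced in this text. As a minimal sketch, a KML file containing a single Placemark point feature might look like the following (the name and description values are illustrative; the coordinates are the sample longitude/latitude pair used elsewhere in the article, in the longitude,latitude[,altitude] order KML requires):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Example placemark</name>
    <description>An illustrative point feature.</description>
    <Point>
      <!-- longitude,latitude,altitude in WGS84 decimal degrees / meters -->
      <coordinates>-77.03647,38.89763,0</coordinates>
    </Point>
  </Placemark>
</kml>
```

Saved as `doc.kml` and zipped with any referenced resources, this same document would form the root of a KMZ file as described above.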
For its reference system, KML uses 3D geographic coordinates: longitude, latitude, and altitude, in that order, with negative values for west, south, and below mean sea level. The longitude/latitude components (decimal degrees) are as defined by the World Geodetic System of 1984 (WGS84) . Altitude, the vertical component, is measured in meters from the WGS84 EGM96 Geoid vertical datum . If altitude is omitted from a coordinate string, e.g. (-77.03647, 38.89763) then the default value of 0 (approximately sea level) is assumed for the altitude component, i.e. (-77.03647, 38.89763, 0). A formal definition of the coordinate reference system (encoded as GML) used by KML is contained in the OGC KML 2.2 Specification. This definition references well-known EPSG CRS components. [ 8 ] The KML 2.2 specification was submitted to the Open Geospatial Consortium to assure its status as an open standard for all geobrowsers . In November 2007 a new KML 2.2 Standards Working Group was established within OGC to formalize KML 2.2 as an OGC standard. Comments were sought on the proposed standard until January 4, 2008, [ 9 ] and it became an official OGC standard on April 14, 2008. [ 10 ] The OGC KML Standards Working Group finished working on change requests to KML 2.2 and incorporated accepted changes into the KML 2.3 standard. [ 11 ] The official OGC KML 2.3 standard was published on August 4, 2015. [ 12 ]
https://en.wikipedia.org/wiki/Keyhole_Markup_Language
A Keynesian beauty contest is a beauty contest in which judges are rewarded for selecting the most popular faces among all judges, rather than those they may personally find the most attractive. This idea is often applied in financial markets, whereby investors could profit more by buying whichever stocks they think other investors will buy, rather than the stocks that have fundamentally the best value, because when other people buy a stock, they bid up the price, allowing an earlier investor to cash out with a profit, regardless of whether the price increases are supported by its fundamentals and theoretical arguments. The concept was developed by John Maynard Keynes and introduced in Chapter 12 of his work The General Theory of Employment, Interest and Money (1936) to explain price fluctuations in equity markets . Keynes described the action of rational agents in a market using an analogy based on a fictional newspaper contest, in which entrants are asked to choose the six most attractive faces from a hundred photographs. Those who picked the most popular faces are then eligible for a prize. A naive strategy would be to choose the face that, in the opinion of the entrant, is the most handsome. A more sophisticated contest entrant, wishing to maximize the chances of winning a prize, would think about what the majority perception of attractiveness is, and then make a selection based on some inference from their knowledge of public perceptions. This can be carried one step further to take into account the fact that other entrants would each have their own opinion of what public perceptions are. Thus the strategy can be extended to the next order and the next and so on, at each level attempting to predict the eventual outcome of the process based on the reasoning of other rational agents . "It is not a case of choosing those [faces] that, to the best of one's judgment, are really the prettiest, nor even those that average opinion genuinely thinks the prettiest. 
We have reached the third degree where we devote our intelligences to anticipating what average opinion expects the average opinion to be. And there are some, I believe, who practice the fourth, fifth and higher degrees." (Keynes, General Theory of Employment, Interest and Money , 1936). Keynes believed that similar behavior was at work within the stock market . This would have investors pricing shares not based on what they think an asset's fundamental value is, or even on what investors think other investors believe about the asset's value, but on what they think other investors believe is the average opinion about the value of the asset, or even higher-order assessments. In 2011, National Public Radio 's Planet Money tested the theory by having its listeners select the cutest of three animal videos. The listeners were broken into two groups. One selected the animal they thought was cutest, and the other selected the one they thought most participants would think was the cutest. The results showed significant differences between the groups. Fifty percent of the first group selected a video with a kitten, compared to seventy-six percent of the second selecting the same kitten video. Individuals in the second group were generally able to disregard their own preferences and accurately make a decision based on the expected preferences of others. The results were considered to be consistent with Keynes' theory. [ 1 ]
https://en.wikipedia.org/wiki/Keynesian_beauty_contest
Keystroke inference attacks are a class of privacy-invasive techniques that allow attackers to infer what a user is typing on a keyboard. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] The origins of keystroke inference attacks can be traced back to the mid-1980s, when academic interest first emerged in utilizing various emanations from devices to deduce their state. While keystroke inference attacks were not explicitly discussed during this period, the declassified introductory textbook on TEMPEST standards, NACSIM 5000, alluded to keyboards as potential sources of data leakage. [ 9 ] In 1998, academic papers explored defenses similar to those described in TEMPEST standards, suggesting that emissions from keyboards could be used to track keystrokes, though without practical demonstrations. In 2001, researchers discovered a timing side channel in the SSH protocol that could be exploited to leak keystroke data. [ 10 ] The concept gained more attention in 2002 when a Computerworld opinion piece described the "keyboard trick," in which recorded keyboard sounds were analyzed to reconstruct keystrokes, a technique the author claimed to have known since the 1980s. [ 11 ] [ 9 ] Formal academic research on sound-based keystroke detection began in 2004, with IBM researchers demonstrating that each keystroke produces a unique sound and developing an algorithm to translate these sounds into keystrokes. This work was refined in 2006 and 2009, enhancing the attack's reliability. [ 10 ] In 2009, Vuagnoux et al. revealed that modern keyboards emit electromagnetic signals that can be used to infer keystrokes. [ 1 ] This computer security article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Keystroke_inference_attack
The Kfar Monash Hoard is a hoard of metal objects dated to the Early Bronze Age (the third millennium BCE ), found in the spring of 1962 by the agriculturalist Zvi Yizhar in Kfar Monash , Israel . Kfar Monash is located 3.3 km south-east of Tel Hefer (Tell Ishbar) in the Plain of Sharon , or in modern terms 9 km/6 mi northeast of Netanya , [ 1 ] roughly along the Israeli coast between Netanya and Haifa . The Monash Hoard consists of: The crescentic axehead was found about 5 years later, some 200 m away. [ 2 ] As of June 2006, the Kfar Monash Hoard was on display in the Israel Museum . There have been conflicting ideas about the purpose of the 800 copper plates. Although they have been assumed to be scales of armor from an Egyptian army unit, as proposed by archaeologist Shmuel Yeivin , [ 3 ] recent reevaluations have disputed this claim. Archaeologist William A. Ward proposed that the scales were a means of barter or a reserve supply of metal from the Syro-Palestinian area. [ 4 ] Ward arrived at this conclusion through several pieces of evidence: the scales were not attached to any jacket, body armor was generally not used by the Egyptians until the New Kingdom, copper was still very rare, and the plates were too thin for body armor. Several metal objects similar to those in the Kfar Monash hoard were found in this general area of the Levant. They were subject to metallurgical analysis, and generally dated to the Early Bronze Age. For example, objects from Ashkelon-Afridar, and from Tell esh-Shuna (the Jordan Valley) were seen as similar. Also the axes from early EB I Yiftah’el are seen as relevant. [ 5 ] Kfar Monash objects were also dated, based on typological considerations, to EB IB, [ 6 ] similarly to the axes from Tel Beth Shean. [ 7 ] The study of the Kfar Monash hoard indicated that some of the objects were made of unalloyed copper. [ 8 ] The source of this unalloyed copper was determined to be most likely Wadi Feynan , in southern Jordan.
Such unalloyed copper was apparently mainly used for the production of tools. Other objects were made using a CuAsNi alloy. This is the copper-arsenic-nickel alloy that is especially characteristic of Chalcolithic period Arslantepe in Eastern Anatolia (the upper Euphrates region). Nevertheless, the adzes that were made of this alloy were determined to be of "an Egyptian type". [ 9 ] Objects from Arslantepe using such polymetallic ores are mainly ascribed to Level VIA (3400–3000 BCE), dating to the Uruk period . [ 5 ]
https://en.wikipedia.org/wiki/Kfar_Monash_Hoard
The KhAB-250 is the provisional name of an aerial bomb developed by the Soviet Air Force to deliver the chemical weapon sarin . [ 1 ] The KhAB-250's operational weight has been reported as both 333 lb (151 kg) [ 2 ] and 514 lb (233 kg). [ 1 ] The Tupolev Tu-22 could carry 24. [ 2 ] The bomb uses a burst charge to detonate on impact with the ground. It contains a payload of 108 lb (49 kg) of sarin. [ 1 ] The KhAB-250 was displayed at the Shikani Test and Proving Grounds in 1986 as a component of the then-current Soviet chemical arsenal. Contemporary analysts noted that it appeared relatively unsophisticated compared to Soviet conventional munitions of the same time frame. [ 1 ] The bomb was removed from service as a result of the Chemical Weapons Convention in the early 1990s.
https://en.wikipedia.org/wiki/KhAB-250
The KhAB-500 is the provisional name of a series of World War II -era aerial bombs developed by the Soviet Air Force to deliver chemical weapons . [ 1 ] KhAB-500s were typically filled with yperite (R-5) or phosgene (R-10). It was 17.7 in (450 mm) in diameter and about 94 in (2,400 mm) long. Its loaded weight was about 650 lb (290 kg), including roughly 375 lb (170 kg) of chemical agent and a 2.5–3.6 lb (1.1–1.6 kg) impact-fused burst charge. [ 1 ] Upon detonation, the KhAB-500 R-10 would create a hemispherical cloud of gas with a radius of 20–25 m (66–82 ft). In ideal weather conditions, the phosgene cloud could produce serious medical effects up to 500 m (1,600 ft) downwind. [ 1 ] The KhAB-500 was carried by Soviet Union era aircraft . [ 2 ] The bomb was removed from service as a result of the Chemical Weapons Convention in the early 1990s.
https://en.wikipedia.org/wiki/KhAB-500
The Khabarovsk war crimes trials were the Soviet hearings of twelve Japanese Kwantung Army officers and medical staff charged with the manufacture and use of biological weapons , and human experimentation, during World War II . The war crimes trials were held between 25 and 31 December 1949 in the Soviet industrial city of Khabarovsk (Хабаровск), the largest in the Russian Far East . The Soviet Union and the United States allegedly gathered data from the Unit after the fall of Japan. While twelve Unit 731 researchers arrested by Soviet forces were tried at the December 1949 Khabarovsk war crimes trials, they were sentenced only to terms of two to 25 years in Siberian labor camps, seemingly in exchange for the information they held. [ 1 ] Those captured by the US military were secretly given immunity , [ 2 ] a cover-up that reportedly extended to stipends for the perpetrators. The US was purported to have co-opted the researchers' bioweapons information and experience for use in its own warfare program (resembling Operation Paperclip ), as did the Soviet Union, which built its bioweapons facility in Sverdlovsk using documentation captured from the Unit in Manchuria. [ 1 ] [ 3 ] [ 4 ] During the trials, the accused, including Major General Kiyoshi Kawashima, testified that as early as 1941, some 40 members of Unit 731 air-dropped plague -contaminated fleas on Changde , China , causing epidemic plague outbreaks. [ 5 ] Judges found all twelve accused war criminals guilty, sentencing them to terms ranging from two to twenty-five years in labour camps. In 1956, those still serving their sentences were released and repatriated to Japan. [ 6 ] In 1950, the Soviet Union published official trial materials in English, titled Materials on the Trial of Former Servicemen of the Japanese Army Charged with Manufacturing and Employing Bacteriological Weapons . 
[ 7 ] These included documents from the preliminary investigation (the indictment , some of the documentary evidence, and some interrogation records), testimony from both the accused and witnesses , final pleas of the accused, some expert findings, and speeches from the state prosecutor and defense counsel , verbatim . Published by state-run Foreign Languages Publishing House , the Soviet publication has long been out of print. But in November 2015, Google Books determined it was now in the public domain and published a facsimile of it online, also offering it for sale as an ebook . [ 7 ] Speaking to the overall judicial integrity of the proceedings, bioethics expert Jing-Bao Nie said the following: Despite its strong ideological tone and many obvious shortcomings such as the lack of international participation, the trial established beyond reasonable doubt that the Japanese army had prepared and deployed bacteriological weapons and that Japanese researchers had conducted cruel experiments on living human beings . However, the trial, together with the evidence presented to the court and its major findings—which have proved remarkably accurate—was dismissed as communist propaganda and totally ignored in the West until the 1980s. [ 8 ] Historian Sheldon Harris described the trial in his history of Unit 731: Evidence introduced during the hearings was based on eighteen volumes of interrogations and documentary material gathered in investigations over the previous four years. Some of the volumes included more than four hundred pages of depositions.... Unlike the Moscow Show Trials of the 1930s, the Japanese confessions made in the Khabarovsk trial were based on fact and not the fantasy of their handlers. [ 9 ] Yet the very wealth of trial documentation that tended to confirm that the Khabarovsk proceedings were no mere show trial also led Harris to question the relatively light punishment meted out there. 
All of the defendants (aside from one who died in prison and another who committed suicide) had been freed by 1956, a mere seven years after the trial took place. [ 10 ] Chief trial translator Georgy Permyakov alleged that Soviet leader Joseph Stalin may have initially feared that Japan would execute Soviet prisoners of war if the Khabarovsk defendants were hanged . [ 10 ] But Harris also claimed that "the Soviets made a deal with the Japanese similar to the one completed by the Americans: Information [in exchange] for... extremely light sentences": [ 10 ] The Soviets and their successors never released the interrogation reports of the Japanese, some 18 volumes. This leads me to believe that the Japanese did arrange a deal, did yield some information, and the Soviets settled for the best goodies they could get. [ 10 ] Harris also noted other controversies unleashed by the trial, which linked Emperor Hirohito to the Japanese biological warfare program, as well as allegations that Japanese biological warfare experiments had also been conducted on Allied prisoners of war. One of the experts called upon by Soviet prosecutors during the trial, N. N. Zhukov-Verezhnikov, later served on the panel of scientists, led by Joseph Needham , investigating Chinese and North Korean allegations of US biological warfare in the Korean War . [ 11 ]
https://en.wikipedia.org/wiki/Khabarovsk_war_crimes_trials
Khabibullin's conjecture is a conjecture in mathematics related to Paley 's problem [ 1 ] for plurisubharmonic functions and to various extremal problems in the theory of entire functions of several variables. The conjecture was named after its proposer, B. N. Khabibullin. There are three versions of the conjecture: one in terms of logarithmically convex functions, one in terms of increasing functions, and one in terms of non-negative functions. The conjecture has implications in the study of complex functions and is related to Euler's Beta function . While the conjecture is known to hold under certain conditions, counterexamples have also been found. Khabibullin's conjecture (version 1, 1992). Let S {\displaystyle \displaystyle S} be a non-negative increasing function on the half-line [ 0 , + ∞ ) {\displaystyle [0,+\infty )} such that S ( 0 ) = 0 {\displaystyle \displaystyle S(0)=0} . Assume that S ( e x ) {\displaystyle \displaystyle S(e^{x})} is a convex function of x ∈ [ − ∞ , + ∞ ) {\displaystyle x\in [-\infty ,+\infty )} . Let λ ≥ 1 / 2 {\displaystyle \lambda \geq 1/2} , n ≥ 2 {\displaystyle n\geq 2} , and n ∈ N {\displaystyle n\in \mathbb {N} } . If then This statement of Khabibullin's conjecture completes his survey. [ 2 ] The product in the right hand side of the inequality ( 2 ) is related to Euler's Beta function B {\displaystyle \mathrm {B} } : For each fixed λ ≥ 1 / 2 {\displaystyle \lambda \geq 1/2} the function turns the inequalities ( 1 ) and ( 2 ) into equalities. Khabibullin's conjecture is valid for λ ≤ 1 {\displaystyle \lambda \leq 1} without the assumption of convexity of S ( e x ) {\displaystyle S(e^{x})} . Meanwhile, one can show that this conjecture is not valid without some convexity conditions for S {\displaystyle S} . In 2010, R. A. Sharipov showed that the conjecture fails in the case n = 2 {\displaystyle n=2} and λ = 2 {\displaystyle \lambda =2} . [ 3 ] Khabibullin's conjecture (version 2). 
Let h {\displaystyle \displaystyle h} be a non-negative increasing function on the half-line [ 0 , + ∞ ) {\displaystyle [0,+\infty )} and α > 1 / 2 {\displaystyle \alpha >1/2} . If then Khabibullin's conjecture (version 3). Let q {\displaystyle \displaystyle q} be a non-negative continuous function on the half-line [ 0 , + ∞ ) {\displaystyle [0,+\infty )} and α > 1 / 2 {\displaystyle \alpha >1/2} . If then
https://en.wikipedia.org/wiki/Khabibullin's_conjecture_on_integral_inequalities
Khaled A. Mahdi (Arabic: خالد مهدي; born 1 January 1970) held the position of Secretary-General of the General Secretariat of the Supreme Council for Planning and Development (GSSCPD) of Kuwait [ 1 ] from 2016 until 2024. [ 2 ] He succeeded Mr. Hashem Alrefee [ 3 ] (2014 - 2015) and Dr. Adel Alweqayan [ 4 ] (2007 - 2014). The GSSCPD steers the long-term Kuwait National Development Plan (KNDP), part of the vision for New Kuwait [ 5 ] by His Highness the Emir of Kuwait , Sabah Al-Ahmad Al-Jaber Al-Sabah , which "mobilizes development efforts across seven pillars with the aim of transforming Kuwait into a financial, cultural, and institutional leader in the region by 2035". [ 5 ] Mahdi was born on 1 January 1970. He received his Bachelor's degree in chemical engineering from the University of Toronto , Canada, and his Master's in the same field from the Illinois Institute of Technology , USA. Mahdi received his PhD in chemical engineering from Northwestern University , USA, where he specialized in applications of statistical mechanics to complex thermodynamic systems, working in the research group of Professor Monica Olvera de la Cruz . [ 6 ] Mahdi started his academic career working in the Water Resource Division of the Kuwait Institute for Scientific Research before moving to Kuwait University to become Associate Professor of Chemical Engineering, where he taught 30 different courses in general engineering and chemical engineering as well as management sciences. He received Kuwait University's Best Teaching Award in 2009 [ 7 ] among other honors and awards. Mahdi has authored or co-authored more than 75 journal articles, book chapters, conference papers, patents, and books in different fields. [ 8 ] His primary interest is the modeling of complex systems and process optimization. Mahdi and collaborators also established SYNERGY with Prof. Maytham Safar [ 9 ] of the Computer Engineering Department at Kuwait University. 
[ 8 ] He is a senior member of the American Institute of Chemical Engineers (AIChE) [ 10 ] and several other professional societies. Under Mahdi's directorship, the Secretariat has been keen "on implementing the Sustainable Development Goals (SDGs), or the 2030 Agenda, adopted by the UN in September 2015," [ 11 ] [ 12 ] and has already "achieved a considerable part of the Sustainable Development Goals" [ 13 ] for Post-2015. A key mission of the KNDP is to create stronger integration between Kuwait's public and private sectors, and in an interview with the Oxford Business Group as part of their Kuwait 2017 report, [ 14 ] Mahdi shared that "the government stated very clearly its intentions to transition its role in the economy from an operator to regulator, from wealth distribution to wealth creation, and to entrust the economy to the private sector." Additionally, Mahdi oversees the GSSCPD's engagement in development programming and support of the national development agenda, working with various sectors and segments of the population. [ 15 ] In Mahdi's own words about the implementation of the national plan, the Secretariat looks to build "a solid commercial and financial infrastructure, and this cements us as a financial and commercial hub", [ 16 ] through projects like the Knowledge Transfer and Small and Medium Enterprises (SME) expo, [ 17 ] the Knowledge Economy Forum [ 18 ] and the Kuwait Public Policy Center (KPPC). [ 19 ] Since 2016, Mahdi has served as the Secretary-General of the Supreme Council for Planning and Development in the Government of Kuwait. [ 1 ] Prior to this appointment, he was the Assistant Secretary General for Follow-up and Future Forecasting, besides being the Acting Assistant Secretary General for Planning. 
Mahdi sits on several government boards and higher committees, mainly the Public Authority for Industry (PAI), the Kuwait Institute for Scientific Research (KISR), the Public Authority for Civil Information (PACI), and the Public Authority for Housing and Welfare (PAHW). He is a member of the Supreme Council for Education and is on the Board of Trustees of the National Center for Education Development (NCED). He serves as a member of high-level committees such as the Economic and Fiscal Reform Monitoring Committee , the Humanitarian Foreign Aid Committee, the Fiscal Budgeting Framework, the Kuwait Demographic Disparities Committee, the Development of Kuwait Islands Committee, the Executive Committee of the Silk City, the Kuwait Master Plan 2040 Committee, the Kuwait Bond Debut Roadshow, and the Kuwait University Mega Projects Committee. He also serves on other government sub-committees and leading government work groups, and has spoken extensively at national and international conferences, such as the Kuwait Housing & Residential Development Forum [ 20 ] and Kuwait Infrastructure Week [ 21 ] in 2017. Mahdi chaired the Consultants Selection Committee during the period 2016-2017, [ 22 ] is the National Director of the Country Program Action Plan of the United Nations Development Programme in Kuwait, [ 23 ] and currently presides over the National Committee for the Implementation of Agenda 2030. [ 24 ] Mahdi chairs the national SDGs 2030 permanent standing committee and leads the vision of the state, “New Kuwait 2035”. He established the Kuwait Public Policy Center (KPPC) and its Nudge Unit, and also oversees research centers including the National Knowledge Economy Center, the National Sustainable Development Observatory and the National Development Research Center. He also led the initiative to produce Kuwait’s first Macro-Economic Model by collaborating with Oxford Economics. 
[ 25 ] He has also hosted IPIECA’s report “Mapping the Oil and Gas Industry to the SDGs: An Atlas” and the World Energy Outlook 2018 report of the IEO. Mahdi has also developed strategic partnerships with local, regional and international organizations to support knowledge sharing and development. [ 26 ] [ 19 ] In October 2018, Mahdi received the People First Leader GCC HR Award for his government services at the 6th Annual GOV HR Summit in Abu Dhabi, UAE. [ 26 ]
https://en.wikipedia.org/wiki/Khaled_A._Mahdi
Khalil Ahmad Qureshi ( Urdu : خليل احمد قريشى; HI , SI ) is a Pakistani physical chemist and professor of physical chemistry at the Punjab University . He has published notable papers in nuclear physical chemistry in international scientific journals, as well as contributing to the advancement of scientific applications of the civilian use of the fuel cycle . A native of Lahore , Qureshi attended the Punjab University to study chemistry, where he graduated with a BSc in chemistry. [ 1 ] For his higher studies , he went to the United Kingdom to attend Imperial College London . [ 1 ] He earned an MSc in Chemical Technology and worked towards gaining the DIC in physical metallurgy . [ 2 ] At the Imperial College , he joined the doctoral group led by Thomas West and David Craig . [ 3 ] He earned his PhD in physical chemistry under the supervision of David Craig , writing his thesis on Physico-chemical studies of the vapour deposition of Al 2 O 3 , in 1972. [ 3 ] He briefly taught physical chemistry at the London University before moving to Pakistan. Upon his return, he joined the Pakistan Atomic Energy Commission (PAEC) and took the professorship of nuclear chemistry at the Pakistan Institute of Nuclear Science and Technology (PINSTECH). [ 4 ] Subsequently, he joined the chemistry section of the clandestine atomic bomb project , led by fellow chemist Iqbal Hussain Qureshi . Munir Ahmad Khan , chairman of the PAEC , had him partially take over the "R-Labs" at PAEC to engage in research on chemical explosives. Initially, the research was concentrated on the development of HMX , a non-toxic explosive that was produced as a by-product of the RDX process. [ 5 ] In the 1970s, he founded the Metallurgical Laboratory (ML), to which he also moved the majority of the staff to undertake research in metallurgy. [ 6 ] He then led a team of chemists who supervised the physical conversion of UF 6 into solid metal before coating and machining the metal . 
[ 6 ] During this time, he also led research on using chemical and metallurgical industrial techniques and reduction furnaces to produce metal from highly enriched uranium . [ 6 ] Due to the sensitivity of the project and the concerns of fellow theorist Dr. AQ Khan , the program was moved to KRL in the 1980s. [ 6 ] While at PAEC , Qureshi joined the chemistry department of Quaid-e-Azam University as an associate professor . In the 1990s, he joined the Punjab University to teach a post-graduate course on physical chemistry . In the 2000s, he joined the Lahore University of Management Sciences 's School of Science and Engineering as director of engineering and safety. [ 2 ] Over the years, he became known for his strong scientific advocacy of the peaceful use of nuclear energy , safety, and security , following the Fukushima disaster . [ 7 ] A member of the Khwarizmi Science Society , he has lectured on safety issues regarding nuclear power and on topics in nuclear chemistry. [ 8 ] He has also authored numerous articles on chemical safety and security around the world in leading research journals. [ 9 ] In 2011, he lectured on physical chemistry and spoke about how nuclear technology was currently being used and the different ways of disposing of nuclear waste at the Forman Christian College University in Lahore. [ 10 ] He is the recipient of Pakistan 's highest honours – the Hilal-i-Imtiaz , bestowed in 2003, and the Sitara-e-Imtiaz , bestowed in 1999, by the Government of Pakistan. [ 11 ]
https://en.wikipedia.org/wiki/Khalil_Qureshi
The Kharasch addition is an organic reaction and a metal-catalysed free radical addition of CXCl 3 compounds (X = Cl, Br, H) to alkenes . [ 1 ] The reaction is used to append trichloromethyl or dichloromethyl groups to terminal alkenes . The method has attracted considerable interest, [ 2 ] but it is of limited value because of its narrow substrate scope and demanding conditions. [ 3 ] The reaction mechanism involves free radicals of the general formula CXCl 2 (X = Cl, H). For the precursors carbon tetrachloride and chloroform , the requisite radicals can arise by abstraction of a halogen atom by an electropositive metal. The addition proceeds in an anti-Markovnikov fashion . Early work linked the addition to olefin polymerization. [ 4 ] This addition is a step in a protocol known as atom transfer radical polymerization . [ 5 ] An example of the Kharasch addition is the synthesis of 1,1,3-trichloro-n-nonane from 1-octene and chloroform using an iron-based catalyst: [ 6 ] The reaction was discovered by Morris S. Kharasch in the 1940s. [ 7 ] [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Kharasch_addition
The Kharasch–Sosnovsky reaction is a method that uses a copper or cobalt salt as a catalyst to oxidize olefins at the allylic position with a peroxy ester (e.g. tert -butyl peroxybenzoate ) or a peroxide, forming allylic benzoates or alcohols via radical oxidation. [ 1 ] This method is noteworthy for being the first allylic functionalization to utilize first-row transition metals and has found numerous applications in chemical and total synthesis. [ 2 ] Chiral ligands can be used to render the reaction asymmetric , constructing chiral C–O bonds via C–H bond activation. [ 3 ] This is notable as asymmetric addition to allylic groups tends to be difficult due to the transition state being highly symmetric. The reaction is named after Morris S. Kharasch and George Sosnovsky, who first reported it in 1958. [ 4 ] Substituted oxazolines and thiazolines can be oxidized to the corresponding oxazoles and thiazoles via a modification of the classic reaction. [ 5 ] Although the mechanism of the Kharasch–Sosnovsky oxidation is not fully understood, its general aspects have been established. The reaction is known to proceed by a radical mechanism. Taking the most representative reaction as an example, most studies suggest that the perester undergoes homolytic dissociation upon coordination to a Cu(I) salt, leading to the formation of a Cu(II) complex and a tert -butoxyl radical. However, the mechanism of the Cu(II) to Cu(III) step remains unknown. Several mechanistic studies hypothesize that it can proceed through multiple steps to generate the key allyl–Cu(III) intermediate. [ 6 ] In the final step, C–O bond formation between the alkenyl group and the benzoate occurs through reductive elimination from the copper(III) complex. 
The last step, a reductive elimination of an organocopper(III) intermediate to regenerate the Cu(I) catalyst and form the product, is proposed to take place via a seven-membered ring transition state. [ citation needed ] In the original work on the Kharasch–Sosnovsky oxidation, Kharasch and Sosnovsky observed the selective formation of the branched product over the linear product with 1-octene in a ratio of 99:1. [ 1 ] It is notable that the reaction favors the thermodynamically less stable terminal alkene. Mechanistic investigations later suggested that the reaction proceeds through a 7-membered ring organocopper(III) species in a pericyclic reaction, resulting in an unrearranged terminal alkene product. Since the reaction usually generates a stereogenic center, multiple asymmetric variants of this transformation have been developed. [ 7 ] To achieve stereoselectivity, the most common strategy is to employ a bidentate chiral ligand, with the asymmetric formation of the benzoate relying on the interplay between the ligand and the Cu(III) center. Some examples of frequently used ligands are oxazolines , [ 8 ] pyridines , [ 9 ] and C3 symmetric oxazoles . [ 10 ] Since the early 20th century, the scientific community has been aware of the oxidation of allylic C-H bonds. This reactivity can be attributed to allylic and benzylic C-H bonds being weaker by approximately 16.4-16.7 kcal/mol [ 11 ] than a regular C-H bond. In the late 1950s, the Kharasch–Sosnovsky oxidation was developed. Since then, there have been multiple studies employing first-row transition metal (especially copper)-mediated reactions to install functional groups at the allylic position. One example is from Corey and co-workers' synthesis of oleanolic acid in 1993. [ 12 ] They employed the Kharasch–Sosnovsky oxidation in a novel manner to access an OBz intermediate. 
Initially, the vinylcyclopropane was treated with CuBr and tert -butyl perbenzoate, resulting in the abstraction of a hydrogen atom and the formation of an allylic radical. Subsequently, the allylic radical underwent a transformation through the homolytic cleavage of the cyclopropane ring, followed by the recombination of the resulting primary and benzoyloxy radicals. This unique combination of the Kharasch reaction and the Simmons−Smith cyclopropanation facilitated the introduction of the cyclopropyl group, enabling the efficient and stereoselective installation of an oxidized methyl group. Another example is from Mukaiyama 's Taxol synthesis in 1999. [ 13 ] Mukaiyama's group utilized the Kharasch reaction to introduce an oxidation on the Taxol C-ring. By treating the substrate with an excess of CuBr and tert -butyl perbenzoate, a mixture of two allylic bromides was obtained. After separating the two bromides, Mukaiyama and his colleagues were able to convert the side product into the desired one through isomerization using CuBr in MeCN at 50 °C. The efficient conversion of the relatively inert alkene to the reactive allylic bromide played a crucial role in the success of Mukaiyama's synthesis, as the allylic bromide served as the necessary component to construct the oxetane D ring. This catalysis article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kharasch–Sosnovsky_reaction
Kharitonov's theorem is a result used in control theory to assess the stability of a dynamical system when the physical parameters of the system are not known precisely. When the coefficients of the characteristic polynomial are known, the Routh–Hurwitz stability criterion can be used to check if the system is stable (i.e. if all roots have negative real parts). Kharitonov's theorem can be used in the case where the coefficients are only known to be within specified ranges. It provides a test of stability for a so-called interval polynomial , while Routh–Hurwitz is concerned with an ordinary polynomial . An interval polynomial is the family of all polynomials p ( s ) = a 0 + a 1 s + a 2 s 2 + ⋯ + a n s n {\displaystyle p(s)=a_{0}+a_{1}s+a_{2}s^{2}+\cdots +a_{n}s^{n}} where each coefficient a i ∈ R {\displaystyle a_{i}\in R} can take any value in the specified intervals a i ∈ [ l i , u i ] {\displaystyle a_{i}\in [l_{i},u_{i}]} . It is also assumed that the leading coefficient cannot be zero: 0 ∉ [ l n , u n ] {\displaystyle 0\notin [l_{n},u_{n}]} . An interval polynomial is stable (i.e. all members of the family are stable) if and only if the four so-called Kharitonov polynomials K 1 ( s ) = l 0 + l 1 s + u 2 s 2 + u 3 s 3 + l 4 s 4 + l 5 s 5 + ⋯ {\displaystyle K_{1}(s)=l_{0}+l_{1}s+u_{2}s^{2}+u_{3}s^{3}+l_{4}s^{4}+l_{5}s^{5}+\cdots } K 2 ( s ) = u 0 + u 1 s + l 2 s 2 + l 3 s 3 + u 4 s 4 + u 5 s 5 + ⋯ {\displaystyle K_{2}(s)=u_{0}+u_{1}s+l_{2}s^{2}+l_{3}s^{3}+u_{4}s^{4}+u_{5}s^{5}+\cdots } K 3 ( s ) = l 0 + u 1 s + u 2 s 2 + l 3 s 3 + l 4 s 4 + u 5 s 5 + ⋯ {\displaystyle K_{3}(s)=l_{0}+u_{1}s+u_{2}s^{2}+l_{3}s^{3}+l_{4}s^{4}+u_{5}s^{5}+\cdots } K 4 ( s ) = u 0 + l 1 s + l 2 s 2 + u 3 s 3 + u 4 s 4 + l 5 s 5 + ⋯ {\displaystyle K_{4}(s)=u_{0}+l_{1}s+l_{2}s^{2}+u_{3}s^{3}+u_{4}s^{4}+l_{5}s^{5}+\cdots } are stable. What is somewhat surprising about Kharitonov's result is that although in principle we are testing an infinite number of polynomials for stability, in fact we need to test only four. This we can do using Routh–Hurwitz or any other method. So it only takes four times more work to be informed about the stability of an interval polynomial than it takes to test one ordinary polynomial for stability. Kharitonov's theorem is useful in the field of robust control , which seeks to design systems that will work well despite uncertainties in component behavior due to measurement errors , changes in operating conditions, equipment wear and so on.
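As an illustration, the four-polynomial test can be carried out numerically. The sketch below (function names are invented for this example, not part of any standard library) builds the four Kharitonov vertex polynomials from the coefficient bounds and checks each with a simple Routh–Hurwitz array:

```python
# Sketch: stability of an interval polynomial via Kharitonov's theorem.

def kharitonov_polynomials(l, u):
    """The four Kharitonov polynomials for coefficients a_i in [l[i], u[i]],
    given in ascending order a_0..a_n. The choice of lower/upper bound
    repeats with period four in the index i."""
    patterns = [
        (0, 0, 1, 1),  # K1: l, l, u, u, ...
        (1, 1, 0, 0),  # K2: u, u, l, l, ...
        (0, 1, 1, 0),  # K3: l, u, u, l, ...
        (1, 0, 0, 1),  # K4: u, l, l, u, ...
    ]
    bounds = (l, u)
    return [[bounds[p[i % 4]][i] for i in range(len(l))] for p in patterns]

def routh_stable(asc):
    """Routh-Hurwitz test: True iff all roots lie in the open left
    half-plane. `asc` holds coefficients in ascending order; a zero pivot
    is (conservatively) reported as unstable."""
    desc = asc[::-1]                      # the Routh array uses descending order
    rows = [list(desc[0::2]), list(desc[1::2])]
    while len(rows[1]) < len(rows[0]):
        rows[1].append(0.0)
    for _ in range(len(desc) - 2):
        prev2, prev = rows[-2], rows[-1]
        if prev[0] == 0:
            return False
        rows.append([(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
                     for j in range(len(prev) - 1)] + [0.0])
    return all(r[0] > 0 for r in rows)

# Interval family around (s+1)(s+2)(s+3) = 6 + 11s + 6s^2 + s^3
l = [5.5, 10.0, 5.5, 1.0]
u = [6.5, 12.0, 6.5, 1.0]
stable = all(routh_stable(k) for k in kharitonov_polynomials(l, u))  # True
```

For this interval family all four vertex polynomials pass the Routh test, so by the theorem every polynomial in the family is stable, even though the family itself is infinite.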
https://en.wikipedia.org/wiki/Kharitonov's_theorem
A Kharitonov region is a concept in mathematics . It arises in the study of the stability of polynomials . Let D {\displaystyle D} be a simply-connected set in the complex plane and let P {\displaystyle P} be the polynomial family. D {\displaystyle D} is said to be a Kharitonov region if is a subset of P . {\displaystyle P.} Here, V T n {\displaystyle V_{T}^{n}} denotes the set of all vertex polynomials of complex interval polynomials ( T n ) {\displaystyle (T^{n})} and V S n {\displaystyle V_{S}^{n}} denotes the set of all vertex polynomials of real interval polynomials ( S n ) . {\displaystyle (S^{n}).} This polynomial -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Kharitonov_region
Kharosthi script ( Gāndhārī : 𐨑𐨪𐨆𐨮𐨿𐨛𐨁𐨌𐨫𐨁𐨤𐨁 , romanized: kharoṣṭhī lipi ), also known as the Gandhari script ( 𐨒𐨌𐨣𐨿𐨢𐨌𐨪𐨁𐨌𐨫𐨁𐨤𐨁 , gāndhārī lipi ), [ 1 ] was an ancient script originally developed in the Gandhara region of modern-day Pakistan [ 2 ] [ 3 ] between the 5th and 3rd centuries BCE. [ 4 ] [ 5 ] [ 6 ] It was used primarily by the people of Gandhara , as well as in various parts of South Asia and Central Asia , [ 7 ] and remained in use until it died out in its homeland around the 5th century CE. [ 7 ] It was also in use in Bactria , the Kushan Empire , Sogdia , and along the Silk Road . There is some evidence it may have survived until the 7th century in Khotan and Niya , both cities in East Turkestan . The name Kharosthi may derive from the Hebrew kharosheth , a Semitic word for writing, [ 8 ] or from Old Iranian *xšaθra-pištra , which means "royal writing". [ 9 ] The script was earlier also known as the Indo-Bactrian script , Kabul script and Arian-Pali . [ 10 ] [ 11 ] Scholars are not in agreement as to whether the Kharosthi script evolved gradually or was the deliberate work of a single inventor. An analysis of the script forms shows a clear dependency on the Aramaic alphabet but with extensive modifications. Kharosthi seems to be derived from a form of Aramaic used in administrative work during the reign of Darius the Great , rather than the monumental cuneiform used for public inscriptions. [ 8 ] One theory suggests that the Aramaic script arrived with the Achaemenid conquest of the Indus Valley around 500 BCE and evolved over the next 200+ years to reach its final form by the 3rd century BCE, when it appears in some of the Edicts of Ashoka . However, no intermediate forms have yet been found to confirm this evolutionary model, and rock and coin inscriptions from the 3rd century BCE onward show a unified and standard form. An inscription in Aramaic dating back to the 4th century BCE was found in Sirkap , testifying to the presence of the Aramaic script in present-day Pakistan. 
According to Sir John Marshall , this seems to confirm that Kharoshthi was later developed from Aramaic. [ 12 ] While the Brahmi script remained in use for centuries, Kharosthi seems to have been abandoned after the 2nd–3rd century AD. Because of the substantial differences between the Semitic-derived Kharosthi script and its successors, knowledge of Kharosthi may have declined rapidly once the script was supplanted by Brahmi-derived scripts, until its rediscovery by Western scholars in the 19th century. [ 8 ] The Kharosthi script was deciphered independently and almost simultaneously by James Prinsep (in 1835, published in the Journal of the Asiatic Society of Bengal , India) [ 13 ] and by Carl Ludwig Grotefend (in 1836, published in Blätter für Münzkunde , Germany), [ 14 ] with Grotefend "evidently not aware" of Prinsep's article, followed by Christian Lassen (1838). [ 15 ] They all used the bilingual coins of the Indo-Greek Kingdom (obverse in Greek, reverse in Pali , using the Kharosthi script). This in turn led to the reading of the Edicts of Ashoka , some of which were written in the Kharosthi script (the Major Rock Edicts at Mansehra and Shahbazgarhi ). [ 8 ] The study of the Kharosthi script was recently invigorated by the discovery of the Gandhāran Buddhist texts , a set of birch bark manuscripts written in Kharosthi, discovered near the Afghan city of Hadda just west of the Khyber Pass in Pakistan . The manuscripts were donated to the British Library in 1994. The entire set of British Library manuscripts is dated to the 1st century CE, although other collections from different institutions contain Kharosthi manuscripts from the 1st century BCE to the 3rd century CE, [ 16 ] [ 17 ] making them the oldest Buddhist manuscripts yet discovered. Kharosthi is mostly written right to left. Some variations in both the number and order of syllables occur in extant texts. [ citation needed ] The Kharosthi alphabet is also known as the arapacana alphabet, after the order in which its letters are recited, beginning a, ra, pa, ca, na. 
This alphabet was used in Gandharan Buddhism as a mnemonic for the Pañcaviṃśatisāhasrikā Prajñāpāramitā Sūtra , a series of verses on the nature of phenomena. A bar above a consonant ⟨ 𐨸 ⟩ can be used to indicate various modified pronunciations depending on the consonant, such as nasalization or aspiration. It is used with k, ṣ, g, c, j, n, m, ś, ṣ, s, and h. The cauda ⟨ 𐨹 ⟩ changes how consonants are pronounced in various ways, particularly fricativization . It is used with g, j, ḍ, t, d, p, y, v, ś, and s. The dot below ⟨ 𐨺 ⟩ is used with m and h, but its precise phonetic function is unknown. Kharosthi includes only one standalone vowel character, which is used for initial vowels in words. [ citation needed ] Other initial vowels use the a character modified by diacritics. Each syllable includes the short /a/ sound by default [ citation needed ] , with other vowels being indicated by diacritic marks. Long vowels are marked with the diacritic ⟨ 𐨌 ⟩ . An anusvara ⟨ 𐨎 ⟩ indicates nasalization of the vowel or a nasal segment following the vowel. A visarga ⟨ 𐨏 ⟩ indicates the unvoiced syllable-final /h/. It can also be used as a vowel length marker. A further diacritic, the double ring below ⟨ 𐨍 ⟩ appears with vowels -a and -u in some Central Asian documents, but its precise phonetic function is unknown. [ 22 ] Salomon has established that the vowel order is /a e i o u/, akin to Semitic scripts, rather than the usual vowel order for Indic scripts /a i u e o/. Nine Kharosthi punctuation marks have been identified: [ 20 ] Kharosthi included a set of numerals that are reminiscent of Roman numerals and Psalter Pahlavi Numerals. [ citation needed ] The system is based on an additive and a multiplicative principle, but does not have the subtractive feature used in the Roman numeral system. [ 23 ] The numerals, like the letters, are written from right to left. There is no zero and no separate signs for the digits 5–9. 
Numbers are written additively; for example, the number 1996 would be written as 𐩇𐩃𐩃𐩀𐩆𐩅𐩅𐩅𐩅𐩄𐩃𐩁 , that is, 𐩇 (1000) + 𐩃𐩃𐩀𐩆 ((4+4+1)×100) + 𐩅𐩅𐩅𐩅𐩄𐩃𐩁 (20+20+20+20+10+4+2). Kharosthi was added to the Unicode Standard in March 2005 with the release of version 4.1. The Unicode block for Kharosthi is U+10A00–U+10A5F:
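The additive and multiplicative scheme can be sketched with the Kharoshthi number signs encoded in Unicode (U+10A40 one, U+10A41 two, U+10A42 three, U+10A43 four, U+10A44 ten, U+10A45 twenty, U+10A46 hundred, U+10A47 thousand). The decomposition of units digits into fours plus a final one, two or three follows the 1996 example; the function names, and the assumption that this decomposition was applied uniformly, are illustrative only:

```python
# Kharoshthi number signs from the Unicode block U+10A00-U+10A5F.
ONE, TWO, THREE, FOUR = "\U00010A40", "\U00010A41", "\U00010A42", "\U00010A43"
TEN, TWENTY, HUNDRED, THOUSAND = "\U00010A44", "\U00010A45", "\U00010A46", "\U00010A47"

def units(n):
    """Write 0 <= n <= 9 additively: as many fours as fit, then 1/2/3."""
    s = FOUR * (n // 4)
    if n % 4:
        s += (ONE, TWO, THREE)[n % 4 - 1]
    return s

def below_hundred(n):
    """Write 0 <= n <= 99 with twenties, an optional ten, and units."""
    return TWENTY * (n // 20) + TEN * (n % 20 // 10) + units(n % 10)

def to_kharosthi(n):
    """Write 1 <= n <= 9999; hundreds and thousands take a prefixed
    multiplier (omitted when it is 1), as in the 1996 example."""
    s = ""
    for value, sign in ((1000, THOUSAND), (100, HUNDRED)):
        if n >= value:
            k = n // value
            s += (units(k) if k > 1 else "") + sign
            n %= value
    return s + below_hundred(n)

to_kharosthi(1996)  # '𐩇𐩃𐩃𐩀𐩆𐩅𐩅𐩅𐩅𐩄𐩃𐩁', i.e. 1000 + (4+4+1)x100 + 96
```

The output for 1996 reproduces the attested glyph sequence above; since the script runs right to left, a rendering engine lays the logical string out accordingly.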
https://en.wikipedia.org/wiki/Kharoṣṭhī_numerals
Khatib & Alami ( K&A ) ( Arabic : خطيب وعلمي) is an international multidisciplinary consultancy firm established in the early 1960s. The company provides services in architecture, engineering, urban and regional planning, transportation, water and environmental engineering, power and renewables, geotechnical and civil engineering, oil and gas, program management, and digital solutions. [ 1 ] The firm operates across 28 offices globally and employs over 6,000 professionals. [ 2 ] Its holding company is based in Singapore, with regional headquarters in Beirut and Riyadh , [ 3 ] and an executive office in Dubai . Khatib & Alami has contributed to various large-scale infrastructure and development projects, including transportation systems, masterplans, and public utilities. [ 4 ] In 2023, the company was ranked among the Top 50 International Design Firms and Top 10 in the Middle East by Engineering News-Record (ENR). [ 5 ] Khatib & Alami (K&A) was established in February 1964 by its founders, the late Prof. Mounir Khatib and the late Dr. Zuheir Alami . [ 6 ] Founded in Lebanon , the company began expanding its regional operations by the 1980s, establishing a presence in Saudi Arabia , the United Arab Emirates , Oman , and Bahrain . By the early 1990s, the firm had extended its services to additional countries, including Iraq , Belgium , Kazakhstan , and Tajikistan . In recent years, K&A has expanded into various African markets, offering engineering and architectural consultancy services across multiple continents. [ 7 ] In 2017, Dr. Najib Khatib was elected Chairman of the Board of Directors. Under the leadership of Dr. Khatib, K&A marked several milestones in 2020, including an extension of its largest PMO project in KSA and the renewal of contracts with Aramco and KSA’s Ministry of Housing. [ 8 ]
https://en.wikipedia.org/wiki/Khatib_and_Alami
Khimera is a software product from Kintech Lab intended for calculation of the kinetic parameters of microscopic processes, thermodynamic and transport properties of substances and their mixtures in gases, plasmas and also of heterogeneous processes. The development of a kinetic mechanism is a key stage of present-day technologies for the creation of hi-tech devices and processes in a wide range of fields, such as microelectronics, the chemical industry, and the design and optimization of combustion engines and power stations. Khimera with Chemical WorkBench , another software product from Kintech Lab, allows both the development of complex physical and chemical mechanisms and their validation. An essential feature of Khimera is its user-friendly [ citation needed ] interface for importing and utilizing the results of quantum-chemical calculations for estimating rate constants of elementary processes and thermodynamic and transport properties. Khimera incorporates up-to-date models of a wide range of elementary physicochemical processes; these models are of particular importance for hi-tech applications. The computation modules of Khimera allow one to calculate the kinetic parameters of elementary processes and thermodynamic and transport properties from the data on the molecular structures and properties obtained from quantum-chemical calculations or from an experiment. The molecular properties and the parameters of molecular interactions can be calculated using quantum-chemical software ( Gaussian , GAMESS , Jaguar , ADF ) and directly imported into Khimera in an automatic mode. The results of calculations can be presented visually and exported for further use in kinetic modeling and CFD packages.
https://en.wikipedia.org/wiki/Khimera
Khimprom ( Russian : Волгоградское открытое акционерное общество «Химпром» , formerly known as Plant 91 ) was a major producer of industrial and consumer chemical products based in Volgograd , Russia. [ 1 ] The company used to manufacture organophosphorus nerve agents, and as of 2013 still produced dual-use chemicals. [ 2 ] The plant was established in 1931. [ 3 ] The plant began production of sarin in 1959, and soman in 1967; production of both was officially ended before 1987. It was claimed that the plant manufactured 5 to 10 tons of binary nerve agent in 1991 as part of the Foliant research program, which was subsequently field-tested at the Ust'yurt plateau , Uzbekistan . [ 4 ] In the post-Soviet era, the plant manufactured phosphorus oxychloride , and a range of phosphorus- and fluorine-containing compounds. [ 2 ] The company's financial situation grew worse in the late 2000s, and it was officially declared bankrupt in 2012. [ 3 ] Production at the plant was fully discontinued in 2014. [ 5 ] In January 2015, layoffs began as the enterprise was being liquidated. [ 3 ] At the same time, projects were launched to restore the environmental damage caused by the plant during decades of chemical production. [ 3 ] As of May 2018, the local government is in talks with the Japan-based Marubeni to build a modern methanol plant on the Khimprom site. [ 3 ] The organization was liquidated on December 27, 2019. [ 6 ] As of 2024, the bankruptcy trustee is Inna Valeryevna Chertkova. [ 6 ]
https://en.wikipedia.org/wiki/Khimprom_(Volgograd)
Khimprom Novocheboksarsk ( Russian : ПАО «Химпром» ) is a chemicals-producing company based in Novocheboksarsk , Russia. It is part of Orgsintez Group ( Renova ). [ 2 ] The Novocheboksarsk Khimprom Production Association is a giant facility whose Production Facility No. 3 manufactured chemical agents between 1972 and 1987. The plant is now making preparations to destroy chemical weapons and agents while continuing to produce household chemicals and fertilizers. [ 3 ] The company used to manufacture organophosphorus nerve agents, and as of 2013 still produced dual-use chemicals. [ 4 ] It produced Soviet V-gas until 1987, and still manufactures phosphorus oxychloride, phosphorus trichloride, and dimethyl phosphite, as well as phosphorus-based insecticides, herbicides and dyestuffs. [ 4 ] As of June 2022, the company listed the following chemical compounds as being in production at the time: [ 5 ] A range of chemical flotation agents for froth flotation processes: [ b ] Source: [ 6 ] This Russian corporation or company article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Khimprom_Novocheboksarsk
In number theory , Khinchin's constant is a mathematical constant related to the simple continued fraction expansions of many real numbers . In particular, Aleksandr Yakovlevich Khinchin proved that for almost all real numbers x , the coefficients a i of the continued fraction expansion of x have a finite geometric mean that is independent of the value of x. It is known as Khinchin's constant and denoted by K 0 . That is, for x = [ a 0 ; a 1 , a 2 , ... ] {\displaystyle x=[a_{0};a_{1},a_{2},\dots ]} it is almost always true that lim n → ∞ ( a 1 a 2 . . . a n ) 1 / n = K 0 {\displaystyle \lim _{n\rightarrow \infty }\left(a_{1}a_{2}...a_{n}\right)^{1/n}=K_{0}} The decimal value of Khinchin's constant is given by: K 0 ≈ 2.685452001 … {\displaystyle K_{0}\approx 2.685452001\ldots } Although almost all numbers satisfy this property, it has not been proven for any real number not specifically constructed for the purpose. Numbers whose continued fraction expansions apparently do have this property (based on empirical data) include: Among the numbers x whose continued fraction expansions are known not to have this property are: Khinchin is sometimes spelled Khintchine (the French transliteration of Russian Хинчин) in older mathematical literature. Khinchin's constant can be given by the following infinite product: K 0 = ∏ r = 1 ∞ ( 1 + 1 r ( r + 2 ) ) log 2 ⁡ r {\displaystyle K_{0}=\prod _{r=1}^{\infty }\left(1+{\frac {1}{r(r+2)}}\right)^{\log _{2}r}} This implies: Khinchin's constant may also be expressed as a rational zeta series in the form [ 1 ] or, by peeling off terms in the series, where N is an integer, held fixed, and ζ( s , n ) is the complex Hurwitz zeta function . Both series are strongly convergent, as ζ( n ) − 1 approaches zero quickly for large n . An expansion may also be given in terms of the dilogarithm : There exist a number of integrals related to Khinchin's constant: [ 2 ] The proof presented here was arranged by Czesław Ryll-Nardzewski [ 3 ] and is much simpler than Khinchin's original proof which did not use ergodic theory . Since the first coefficient a 0 of the continued fraction of x plays no role in Khinchin's theorem and since the rational numbers have Lebesgue measure zero, we are reduced to the study of irrational numbers in the unit interval , i.e., those in I = [ 0 , 1 ] ∖ Q {\displaystyle I=[0,1]\setminus \mathbb {Q} } . 
These numbers are in bijection with infinite continued fractions of the form [0; a 1 , a 2 , ...], which we simply write [ a 1 , a 2 , ...], where a 1 , a 2 , ... are positive integers . Define a transformation T : I → I by T ( x ) = 1 x − ⌊ 1 x ⌋ . {\displaystyle T(x)={\frac {1}{x}}-\left\lfloor {\frac {1}{x}}\right\rfloor .} The transformation T is called the Gauss–Kuzmin–Wirsing operator . For every Borel subset E of I , we also define the Gauss–Kuzmin measure of E μ ( E ) = 1 ln ⁡ 2 ∫ E d x 1 + x . {\displaystyle \mu (E)={\frac {1}{\ln 2}}\int _{E}{\frac {dx}{1+x}}.} Then μ is a probability measure on the σ -algebra of Borel subsets of I . The measure μ is equivalent to the Lebesgue measure on I , but it has the additional property that the transformation T preserves the measure μ . Moreover, it can be proved that T is an ergodic transformation of the measurable space I endowed with the probability measure μ (this is the hard part of the proof). The ergodic theorem then says that for any μ - integrable function f on I , the average value of f ( T k x ) {\displaystyle f\left(T^{k}x\right)} is the same for almost all x {\displaystyle x} : lim n → ∞ 1 n ∑ k = 0 n − 1 f ( T k x ) = ∫ I f d μ {\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\sum _{k=0}^{n-1}f\left(T^{k}x\right)=\int _{I}f\,d\mu } Applying this to the function defined by f ([ a 1 , a 2 , ...]) = ln( a 1 ), we obtain that for almost all [ a 1 , a 2 , ...] in I as n → ∞. Taking the exponential on both sides, we obtain to the left the geometric mean of the first n coefficients of the continued fraction, and to the right Khinchin's constant. The Khinchin constant can be viewed as the first in a series of the Hölder means of the terms of continued fractions. Given an arbitrary series { a n }, the Hölder mean of order p of the series is given by K p = lim n → ∞ [ 1 n ∑ k = 1 n a k p ] 1 / p . {\displaystyle K_{p}=\lim _{n\to \infty }\left[{\frac {1}{n}}\sum _{k=1}^{n}a_{k}^{p}\right]^{1/p}.} When the { a n } are the terms of a continued fraction expansion, the constants are given by This is obtained by taking the p -th mean in conjunction with the Gauss–Kuzmin distribution . This is finite when p < 1 {\displaystyle p<1} . The arithmetic average diverges: lim n → ∞ 1 n ∑ k = 1 n a k = K 1 = + ∞ {\displaystyle \lim _{n\to \infty }{\frac {1}{n}}\sum _{k=1}^{n}a_{k}=K_{1}=+\infty } , and so the coefficients grow arbitrarily large: lim sup n a n = + ∞ {\displaystyle \limsup _{n}a_{n}=+\infty } . 
The value for K 0 is obtained in the limit of p → 0. The harmonic mean ( p = −1) is K − 1 ≈ 1.74540566 … {\displaystyle K_{-1}\approx 1.74540566\ldots } Many well-known numbers, such as π , the Euler–Mascheroni constant γ, and Khinchin's constant itself, based on numerical evidence, [ 4 ] [ 5 ] [ 2 ] are thought to be among the numbers for which the limit lim n → ∞ ( a 1 a 2 . . . a n ) 1 / n {\displaystyle \lim _{n\rightarrow \infty }\left(a_{1}a_{2}...a_{n}\right)^{1/n}} converges to Khinchin's constant. However, none of these limits have been rigorously established. In fact, it has not been proven for any real number that was not specifically constructed for that exact purpose. [ 6 ] The algebraic properties of Khinchin's constant itself, e.g. whether it is a rational, algebraic irrational , or transcendental number, are also not known. [ 2 ]
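The empirical evidence mentioned above can be reproduced in a few lines. The sketch below extracts continued-fraction coefficients of π from its double-precision value (exact as a binary fraction, so only the first dozen or so coefficients are trustworthy) and takes their geometric mean; the helper `cf_coefficients` is illustrative, and nothing here is a proof.

```python
# Empirical illustration of Khinchin's constant K0 ~ 2.685...:
# geometric mean of continued-fraction coefficients of pi.
import math
from fractions import Fraction

def cf_coefficients(x, n_terms):
    """First n_terms coefficients a_1, a_2, ... (a_0 is dropped,
    as in Khinchin's theorem) of the continued fraction of x."""
    x = Fraction(x)              # exact value of the float input
    x -= math.floor(x)           # discard a_0
    coeffs = []
    while len(coeffs) < n_terms and x != 0:
        x = 1 / x
        a = math.floor(x)
        coeffs.append(a)
        x -= a
    return coeffs

terms = cf_coefficients(math.pi, 10)   # [7, 15, 1, 292, 1, 1, 1, 2, 1, 3]
geo_mean = math.exp(sum(math.log(a) for a in terms) / len(terms))
print(terms, geo_mean)
```

With only ten terms the mean is still noisy (the large coefficient 292 skews it upward, to about 3.4); much longer expansions, computed with arbitrary precision, drift toward K 0 ≈ 2.685.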
https://en.wikipedia.org/wiki/Khinchin's_constant
The Khintchine inequality is a result in probability, also frequently used in analysis, bounding the expectation of a weighted sum of Rademacher random variables with square-summable weights. It is named after Aleksandr Khinchin and spelled in multiple ways in the Latin alphabet. It states that for each p ∈ ( 0 , ∞ ) {\displaystyle p\in (0,\infty )} there exist constants A p , B p > 0 {\displaystyle A_{p},B_{p}>0} depending only on p {\displaystyle p} such that for every sequence x = ( x 1 , x 2 , … ) ∈ ℓ 2 {\displaystyle x=(x_{1},x_{2},\dots )\in \ell ^{2}} , and i.i.d. Rademacher random variables ϵ 1 , ϵ 2 , … {\displaystyle \epsilon _{1},\epsilon _{2},\dots } , A p ≤ E [ | ∑ n = 1 ∞ ϵ n x n | p ] 1 / p ‖ x ‖ 2 ≤ B p . {\displaystyle A_{p}\leq {\frac {\mathbb {E} \left[\left|\sum _{n=1}^{\infty }\epsilon _{n}x_{n}\right|^{p}\right]^{1/p}}{\|x\|_{2}}}\leq B_{p}.} As a particular case, consider N {\displaystyle N} complex numbers x 1 , … , x N ∈ C {\displaystyle x_{1},\dots ,x_{N}\in \mathbb {C} } , which can be pictured as vectors in a plane. Now sample N {\displaystyle N} random signs ϵ 1 , … , ϵ N ∈ { − 1 , + 1 } {\displaystyle \epsilon _{1},\dots ,\epsilon _{N}\in \{-1,+1\}} , with equal independent probability. The inequality states that | ∑ i ϵ i x i | ≈ | x 1 | 2 + ⋯ + | x N | 2 {\displaystyle {\Big |}\sum _{i}\epsilon _{i}x_{i}{\Big |}\approx {\sqrt {|x_{1}|^{2}+\cdots +|x_{N}|^{2}}}} with a bounded error. Let { ε n } n = 1 N {\displaystyle \{\varepsilon _{n}\}_{n=1}^{N}} be i.i.d. random variables with P ( ε n = ± 1 ) = 1 2 {\displaystyle P(\varepsilon _{n}=\pm 1)={\frac {1}{2}}} for n = 1 , … , N {\displaystyle n=1,\ldots ,N} , i.e., a sequence with Rademacher distribution . Let 0 < p < ∞ {\displaystyle 0<p<\infty } and let x 1 , … , x N ∈ C {\displaystyle x_{1},\ldots ,x_{N}\in \mathbb {C} } . Then for some constants A p , B p > 0 {\displaystyle A_{p},B_{p}>0} depending only on p {\displaystyle p} (see Expected value for notation). 
More succinctly, ( E ⁡ | ∑ n = 1 N ε n x n | p ) 1 / p ∈ [ A p , B p ] {\displaystyle \left(\operatorname {E} \left|\sum _{n=1}^{N}\varepsilon _{n}x_{n}\right|^{p}\right)^{1/p}\in [A_{p},B_{p}]} for any sequence x {\displaystyle x} with unit ℓ 2 {\displaystyle \ell ^{2}} norm. The sharp values of the constants A p , B p {\displaystyle A_{p},B_{p}} were found by Haagerup (Ref. 2; see Ref. 3 for a simpler proof). It is a simple matter to see that A p = 1 {\displaystyle A_{p}=1} when p ≥ 2 {\displaystyle p\geq 2} , and B p = 1 {\displaystyle B_{p}=1} when 0 < p ≤ 2 {\displaystyle 0<p\leq 2} . Haagerup found that where p 0 ≈ 1.847 {\displaystyle p_{0}\approx 1.847} and Γ {\displaystyle \Gamma } is the Gamma function . One may note in particular that B p {\displaystyle B_{p}} matches exactly the moments of a normal distribution . The uses of this inequality are not limited to applications in probability theory . One example of its use in analysis is the following: if we let T {\displaystyle T} be a linear operator between two L p spaces L p ( X , μ ) {\displaystyle L^{p}(X,\mu )} and L p ( Y , ν ) {\displaystyle L^{p}(Y,\nu )} , 1 < p < ∞ {\displaystyle 1<p<\infty } , with bounded norm ‖ T ‖ < ∞ {\displaystyle \|T\|<\infty } , then one can use Khintchine's inequality to show that for some constant C p > 0 {\displaystyle C_{p}>0} depending only on p {\displaystyle p} and ‖ T ‖ {\displaystyle \|T\|} . [ 1 ] For the case of Rademacher random variables, Pawel Hitczenko showed [ 2 ] that the sharpest version is: where b = ⌊ p ⌋ {\displaystyle b=\lfloor p\rfloor } , and A {\displaystyle A} and B {\displaystyle B} are universal constants independent of p {\displaystyle p} . Here we assume that the x i {\displaystyle x_{i}} are non-negative and non-increasing.
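For a small vector, the inequality can be verified exactly by enumerating all 2^N Rademacher sign patterns rather than sampling. The sketch below uses the sharp constants quoted above: A 2 = B 2 = 1 forces equality at p = 2, and Haagerup's sharp A 1 is 2^(−1/2); the example vector and the helper name `pth_moment` are illustrative choices, not part of the original result.

```python
# Exact brute-force check of the Khintchine inequality for N = 4,
# enumerating every Rademacher sign pattern.
import itertools
import math

x = [1.0, 2.0, 3.0, 4.0]                       # arbitrary example vector
l2_norm = math.sqrt(sum(v * v for v in x))     # ||x||_2 = sqrt(30)

def pth_moment(p):
    """(E |sum_n eps_n x_n|^p)^(1/p), averaged over all 2^N patterns."""
    total = sum(abs(sum(e * v for e, v in zip(eps, x))) ** p
                for eps in itertools.product((-1, 1), repeat=len(x)))
    return (total / 2 ** len(x)) ** (1 / p)

# p = 2: cross terms cancel, so the moment equals ||x||_2 exactly.
assert math.isclose(pth_moment(2), l2_norm)
# p = 1: the moment lies between 2**-0.5 * ||x||_2 and ||x||_2.
assert l2_norm / math.sqrt(2) <= pth_moment(1) <= l2_norm
print(pth_moment(1), pth_moment(2))            # 4.5, sqrt(30)
```

At p = 2 the equality E(Σ ε n x n )² = Σ x n ² holds because the cross terms E[ε i ε j ] vanish for i ≠ j, which is why the p = 2 case needs no constants at all.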
https://en.wikipedia.org/wiki/Khintchine_inequality
Khristianovich Institute of Theoretical and Applied Mechanics of the Siberian Branch of the RAS , ITAM SB RAS ( Russian : Институт теоретической и прикладной механики имени С. А. Христиановича СО РАН ) is a research institute in Akademgorodok of Novosibirsk , Russia. It was founded in 1957 by Sergey Khristianovich , who also became its first director. [ 1 ] From 1957, the institute was located within the territory of SibNIA , where it was engaged in the creation of a supersonic wind tunnel until 1960. [ 2 ] In 1991, the International Center for Aerophysical Research (ICAR) was established at the institute. [ 3 ] In 1997, the institute became a member of the International Supersonic Tunnel Association (STAI). [ 4 ] [ 3 ] In 2005, the institute was named after Sergey Khristianovich. [ 5 ] As of 2023, three scientists at the institute had been arrested on suspicion of treason for sharing hypersonic technology. [ 6 ] The main directions of scientific research are physical-chemical mechanics, aerogasdynamics, mathematical modeling in mechanics, and the mechanics of rigid bodies, deformation, and fracture. [ 7 ]
https://en.wikipedia.org/wiki/Khristianovich_Institute_of_Theoretical_and_Applied_Mechanics
The Kinetic Simulation Algorithm Ontology (KiSAO) supplies information about existing algorithms available for the simulation of systems biology models, their characterization and interrelationships. KiSAO is part of the BioModels.net project and of the COMBINE initiative. [ 1 ] KiSAO consists of three main branches: algorithms, their characteristics, and their parameters. The elements of the algorithm branch are linked to the characteristic and parameter branches using has characteristic and has parameter relationships, respectively. The algorithm branch itself is hierarchically structured using relationships which denote that the descendant algorithms were derived from, or specify, more general ancestors.
https://en.wikipedia.org/wiki/KiSAO
The K i Database (or K i DB ) is a public domain database of published binding affinities ( K i ) of drugs and chemical compounds for receptors , neurotransmitter transporters , ion channels , and enzymes . The resource is maintained by the University of North Carolina at Chapel Hill and is funded by the NIMH Psychoactive Drug Screening Program and by a gift from the Heffter Research Institute . As of April 2010 , the database had data for 7 449 compounds at 738 different receptors and, as of 27 April 2018 , 67 696 K i values. The Ki database has data useful for both chemical biology and chemogenetics . This pharmacology -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Ki_Database
A Kibble balance (formerly known as a watt balance ) is an electromechanical measuring instrument that measures the weight of a test object very precisely by the electric current and voltage needed to produce a compensating force. It is a metrological instrument that can realize the definition of the kilogram unit of mass based on fundamental constants . [ 1 ] [ 2 ] It was originally known as a watt balance because the weight of the test mass is proportional to the product of current and voltage, which is measured in watts . In June 2016, two months after the death of its inventor, Bryan Kibble , metrologists of the Consultative Committee for Units of the International Committee for Weights and Measures agreed to rename the device in his honor. [ 3 ] [ 4 ] Prior to 2019, the definition of the kilogram was based on a physical object known as the International Prototype of the Kilogram (IPK). After considering alternatives , in 2013 the General Conference on Weights and Measures (CGPM) agreed on accuracy criteria for replacing this definition with one based on the use of a Kibble balance. After these criteria had been achieved, the CGPM voted unanimously on November 16, 2018, to change the definition of the kilogram and several other units , effective May 20, 2019, to coincide with World Metrology Day . [ 3 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] There is also a method called the joule balance . All methods that use the fixed numerical value of the Planck constant are sometimes called Planck balances. The Kibble balance is a more accurate version of the ampere balance , an early current measuring instrument in which the force between two current-carrying coils of wire is measured and then used to calculate the magnitude of the current. The Kibble balance operates in the opposite sense; the current in the coils is set very precisely by the Planck constant , and the force between the coils is used to measure the weight of a test kilogram mass. 
Then the mass is calculated from the weight by accurately measuring the local Earth's gravity (the net acceleration combining gravitational and centrifugal effects) with a gravimeter . Thus the mass of the object is defined in terms of a current and a voltage — allowing the device to "measure mass without recourse to the IPK ( International Prototype Kilogram ) or any physical object". [ 9 ] The principle that is used in the Kibble balance was proposed by Bryan Kibble (1938–2016) of the UK National Physical Laboratory (NPL) in 1975 for measurement of the gyromagnetic ratio . [ 10 ] In 1978 the Mark I watt balance was built at the NPL with Ian Robinson and Ray Smith. [ 11 ] [ 12 ] It operated until 1988. [ 13 ] The main weakness of the ampere balance method is that the result depends on the accuracy with which the dimensions of the coils are measured. The Kibble balance uses an extra calibration step to cancel the effect of the geometry of the coils, removing the main source of uncertainty. This extra step involves moving the force coil through a known magnetic flux at a known speed. This was made possible by the adoption of conventional values of the von Klitzing constant and Josephson constant , which are used throughout the world for voltage and resistance calibration. Using these principles, in 1990 Bryan Kibble and Ian Robinson invented the Kibble Mark II balance, which uses a circular coil and operates in vacuum conditions . [ 14 ] Bryan Kibble worked with Ian Robinson and Janet Belliss to build this Mark II version of the balance. This design allowed for measurements accurate enough for use in the redefinition of the SI unit of mass: the kilogram. [ 15 ] The Kibble balance originating from the National Physical Laboratory was transferred to the National Research Council of Canada (NRC) in 2009, where scientists from the two labs continued to refine the instrument. 
[ 16 ] In 2014, NRC researchers published the most accurate measurement of the Planck constant at that time, with a relative uncertainty of 1.8 × 10 −8 . [ 17 ] A final paper by NRC researchers was published in May 2017, presenting a measurement of the Planck constant with an uncertainty of only 9.1 parts per billion, the measurement with the least uncertainty to that date. [ 18 ] Other Kibble balance experiments are conducted in the US National Institute of Standards and Technology (NIST), the Swiss Federal Office of Metrology (METAS) in Berne, the International Bureau of Weights and Measures (BIPM) near Paris and Laboratoire national de métrologie et d’essais (LNE) in Trappes , France. [ 19 ] A conducting wire of length L {\displaystyle L} that carries an electric current I {\displaystyle I} perpendicular to a magnetic field of strength B {\displaystyle B} experiences a Lorentz force equal to the product of these variables. In the Kibble balance, the current is varied so that this force counteracts the weight w {\displaystyle w} of a mass m {\displaystyle m} to be measured. This principle is derived from the ampere balance. w {\displaystyle w} is given by the mass m {\displaystyle m} multiplied by the local gravitational acceleration g {\displaystyle g} . Thus, B I L = m g {\displaystyle BIL=mg} . The Kibble balance avoids the problems of measuring B {\displaystyle B} and L {\displaystyle L} in a second calibration step. The same wire (in practice, a coil) is moved through the same magnetic field at a known speed v {\displaystyle v} . By Faraday's law of induction , a potential difference U {\displaystyle U} is generated across the ends of the wire, which equals B L v {\displaystyle BLv} . Thus U = B L v {\displaystyle U=BLv} . The unknown product B L {\displaystyle BL} can be eliminated from the equations to give m = U I / ( g v ) {\displaystyle m={\frac {UI}{gv}}} . With U {\displaystyle U} , I {\displaystyle I} , g {\displaystyle g} , and v {\displaystyle v} accurately measured, this gives an accurate value for m {\displaystyle m} . 
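Numerically, the two calibration steps combine as follows: the weighing mode fixes the current that balances the weight, the moving mode fixes the induced voltage, and the geometric factor BL cancels, leaving the mass as m = U·I / (g·v). Every value in this sketch is a hypothetical round number chosen for illustration, not data from a real instrument.

```python
# Sketch of the Kibble-balance working equation m = U*I / (g*v).
# All numeric values below are hypothetical.

BL = 250.0                # geometric factor B*L in T*m (assumed; it cancels)
g = 9.81                  # local gravitational acceleration, m/s^2
true_mass = 1.0           # kg, the mass the instrument should recover

# "Weighing" mode: the current balancing the weight, from B*I*L = m*g.
I = true_mass * g / BL    # amperes

# "Moving" mode: the voltage induced at coil velocity v, U = B*L*v.
v = 0.002                 # coil velocity, m/s
U = BL * v                # volts

# Combining the two modes eliminates BL entirely:
m = U * I / (g * v)
print(m)                  # recovers the 1.0 kg test mass
```

The point of the two-mode trick is visible in the code: BL appears in both intermediate quantities but drops out of the final quotient.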
Both sides of the equation have the dimensions of power , measured in watts in the International System of Units; hence the original name "watt balance". The product B L {\displaystyle BL} , also called the geometric factor, is not trivially equal in both calibration steps. The geometric factor is only constant under certain stability conditions on the coil. [ 1 ] The Kibble balance is constructed so that the mass to be measured and the wire coil are suspended from one side of a balance scale, with a counterbalance mass on the other side. The system operates by alternating between two modes: "weighing" and "moving". The entire mechanical subsystem operates in a vacuum chamber to remove the effects of air buoyancy. [ 20 ] While "weighing", the system measures I {\displaystyle I} , by controlling the current in the coil to keep the electromagnetic force on the coil balanced with the force of gravity. Coil position and velocity measurement circuitry uses an interferometer together with a precision clock input to determine the velocity and control the current needed to maintain it. The required current is measured, using an ammeter comprising a Josephson junction voltage standard and an integrating voltmeter. While "moving", the system measures U {\displaystyle U} and v {\displaystyle v} , by ceasing to provide current to the coil. This allows the counterbalance to pull the coil (and mass) upward through the magnetic field, which causes a voltage difference across the coil. The velocity measurement circuitry measures the speed v {\displaystyle v} of movement of the coil. This voltage is measured, using the same voltage standard and integrating voltmeter. A typical Kibble balance measures U {\displaystyle U} , I {\displaystyle I} , and v {\displaystyle v} , but does not measure the local gravitational acceleration g {\displaystyle g} , because g {\displaystyle g} does not vary rapidly with time. 
Instead, g {\displaystyle g} is measured in the same laboratory using a highly accurate and precise gravimeter . In addition, the balance depends on a highly accurate and precise frequency reference such as an atomic clock to compute voltage and current. Thus, the precision and accuracy of the mass measurement depends on the Kibble balance, the gravimeter, and the clock. Like the early atomic clocks, the early Kibble balances were one-of-a-kind experimental devices and were large, expensive, and delicate. As of 2019, work is underway to produce standardized devices at prices that permit use in any metrology laboratory that requires high-precision measurement of mass. [ 21 ] As well as large Kibble balances, microfabricated or MEMS watt balances (now called Kibble balances) have been demonstrated [ 22 ] since around 2003. These are fabricated on single silicon dies similar to those used in microelectronics and accelerometers, and are capable of measuring small forces in the nanonewton to micronewton range traceably to the SI-defined physical constants via electrical and optical measurements. Due to their small scale, MEMS Kibble balances typically use electrostatic rather than the inductive forces used in larger instruments. Lateral and torsional [ 23 ] variants have also been demonstrated, with the main application (as of 2019) being in the calibration of the atomic force microscope . Accurate measurements by several teams will enable their results to be averaged and so reduce the experimental error. [ 24 ] Accurate measurements of electric current and potential difference are made in conventional electrical units (rather than SI units), which are based on fixed " conventional values " of the Josephson constant and the von Klitzing constant , K J-90 {\displaystyle K_{\text{J-90}}} and R K-90 {\displaystyle R_{\text{K-90}}} respectively. The current Kibble balance experiments are equivalent to measuring the value of the conventional watt in SI units. 
From the definition of the conventional watt, this is equivalent to measuring the value of the product K J 2 R K {\displaystyle K_{\text{J}}^{2}R_{\text{K}}} in SI units instead of its fixed value in conventional electrical units: The importance of such measurements is that they are also a direct measurement of the Planck constant h {\displaystyle h} : h = 4 K J 2 R K {\displaystyle h={\frac {4}{K_{\text{J}}^{2}R_{\text{K}}}}} The principle of the electronic kilogram relies on the value of the Planck constant, which is as of 2019 an exact value. This is similar to the metre being defined by the speed of light . With the constant defined exactly, the Kibble balance is not an instrument to measure the Planck constant, but is instead an instrument to measure mass: Gravity and the nature of the Kibble balance, which oscillates test masses up and down against the local gravitational acceleration g , are exploited so that mechanical power is compared against electrical power, which is the square of voltage divided by electrical resistance. However, g varies significantly—by nearly 1%—depending on where on the Earth's surface the measurement is made (see Earth's gravity ). There are also slight seasonal variations in g at a location due to changes in underground water tables, and larger semimonthly and diurnal changes due to tidal distortions in the Earth's shape ( earth tide and polar motion ) caused by the Moon and the Sun. Although g is not a term in the definition of the kilogram, it is crucial in the process of measurement of the kilogram when relating energy to power in a Kibble balance. Accordingly, g must be measured with at least as much precision and accuracy as are the other terms, so measurements of g must also be traceable to fundamental constants of nature. For the most precise work in mass metrology, g is measured using dropping-mass absolute gravimeters that contain an iodine-stabilised helium–neon laser interferometer . The fringe-signal , frequency-sweep output from the interferometer is measured with a rubidium atomic clock . 
Since this type of dropping-mass gravimeter derives its accuracy and stability from the constancy of the speed of light as well as the innate properties of helium, neon, and rubidium atoms, the 'gravity' term in the delineation of an all-electronic kilogram is also measured in terms of invariants of nature—and with very high precision. For instance, in the basement of the NIST's Gaithersburg facility in 2009, when measuring the gravity acting upon Pt‑10Ir test masses (which are denser, smaller, and have a slightly lower center of gravity inside the Kibble balance than stainless steel masses), the measured value was typically within 8 ppb of 9.801 016 44 m/s 2 . [ 25 ] [ 26 ] [ 27 ]
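The link between the conventional electrical constants and the Planck constant discussed above can be checked directly: since K J = 2e/h and R K = h/e², the product K J ² R K equals 4/h. The sketch below verifies this with the exact 2019 SI defining values of e and h.

```python
# Verify h = 4 / (K_J^2 * R_K) using the exact 2019 SI values.
e = 1.602176634e-19        # elementary charge, C (exact by definition)
h = 6.62607015e-34         # Planck constant, J*s (exact by definition)

K_J = 2 * e / h            # Josephson constant, Hz/V (~4.836e14)
R_K = h / e ** 2           # von Klitzing constant, ohm (~25812.8)

product = K_J ** 2 * R_K   # algebraically equal to 4/h
print(4 / product)         # recovers the Planck constant
```

This is why a Kibble balance measurement of the conventional watt amounted, before 2019, to a measurement of h, and after 2019 amounts to a realization of the kilogram.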
https://en.wikipedia.org/wiki/Kibble_balance
The Kibble–Zurek mechanism ( KZM ) describes the non-equilibrium dynamics and the formation of topological defects in a system which is driven through a continuous phase transition at finite rate. It is named after Tom W. B. Kibble , who pioneered the study of domain structure formation through cosmological phase transitions in the early universe , and Wojciech H. Zurek , who related the number of defects it creates to the critical exponents of the transition and to its rate—to how quickly the critical point is traversed. Based on the formalism of spontaneous symmetry breaking , Tom Kibble developed the idea for the primordial fluctuations of a two-component scalar field like the Higgs field . [ 1 ] [ 2 ] If a two-component scalar field switches from the isotropic and homogeneous high-temperature phase to the symmetry-broken stage during cooling and expansion of the very early universe (shortly after Big Bang ), the order parameter necessarily cannot be the same in regions which are not connected by causality. Regions are not connected by causality if they are separated far enough (at the given age of the universe ) that they cannot "communicate" even with the speed of light . This implies that the symmetry cannot be broken globally. The order parameter will take different values in causally disconnected regions, and the domains will be separated by domain walls after further evolution of the universe . Depending on the symmetry of the system and the symmetry of the order parameter, different types of topological defects like monopoles, vortices or textures can arise. It was debated for quite a while if magnetic monopoles might be residuals of defects in the symmetry-broken Higgs field. [ 3 ] Up to now, defects like this have not been observed within the event horizon of the visible universe. 
This is one of the main reasons (besides the isotropy of the cosmic background radiation and the flatness of spacetime ) why nowadays an inflationary expansion of the universe is postulated. During the exponentially fast expansion within the first 10 −30 second after the Big Bang, all possible defects were diluted so strongly that they lie beyond the event horizon. Today, the two-component primordial scalar field is usually named inflaton . Wojciech Zurek pointed out that the same ideas play a role in the phase transition from normal fluid helium to superfluid helium . [ 4 ] [ 5 ] [ 6 ] The analogy between the Higgs field and superfluid helium is given by the two-component order parameter; superfluid helium is described via a macroscopic quantum mechanical wave function with global phase. In helium, the two components of the order parameter are magnitude and phase (or real and imaginary part) of the complex wave function. Defects in superfluid helium are given by vortex lines, where the coherent macroscopic wave function disappears within the core. Those lines are high-symmetry residuals within the symmetry broken phase. It is characteristic of a continuous phase transition that the energy difference between ordered and disordered phase disappears at the transition point. This implies that fluctuations between both phases will become arbitrarily large. Not only do the spatial correlation lengths diverge for these critical phenomena , but fluctuations between both phases also become arbitrarily slow in time, described by the divergence of the relaxation time . If a system is cooled at any non-zero rate (e.g. linearly) through a continuous phase transition, the time to reach the transition will eventually become shorter than the correlation time of the critical fluctuations. At this time, the fluctuations are too slow to follow the cooling rate; the system has fallen out of equilibrium and ceases to be adiabatic. 
A "fingerprint" of critical fluctuations is taken at this fall-out time, and the longest length scale of the domain structure is frozen out. The further evolution of the system is then determined by this length scale. For very fast cooling rates, the system falls out of equilibrium very early, far away from the transition, and the domain size will be small. For very slow rates, the system falls out of equilibrium in the vicinity of the transition, when the length scale of critical fluctuations is large, so the domain size will be large, too. [ footnote 1 ] The inverse of this length scale can be used as an estimate of the density of topological defects, and it obeys a power law in the quench rate. This prediction is universal, and the power exponent is given in terms of the critical exponents of the transition. Consider a system that undergoes a continuous phase transition at the critical value λ = λ c = 0 {\displaystyle \lambda =\lambda _{c}=0} of a control parameter. The theory of critical phenomena states that, as the control parameter is tuned closer and closer to its critical value, the correlation length ξ {\displaystyle \xi } and the relaxation time τ {\displaystyle \tau } of the system tend to diverge algebraically with the critical exponent ν {\displaystyle \nu } as ξ ∼ λ − ν , τ ∼ λ − z ν , {\displaystyle \xi \sim \lambda ^{-\nu },\qquad \tau \sim \lambda ^{-z\nu },} respectively, where z {\displaystyle z} is the dynamic exponent which relates spatial with temporal critical fluctuations. The Kibble–Zurek mechanism describes the nonadiabatic dynamics resulting from driving a high-symmetry (i.e. disordered) phase λ ≪ 0 {\displaystyle \lambda \ll 0} to a broken-symmetry (i.e. ordered) phase at λ ≫ 0 {\displaystyle \lambda \gg 0} .
If the control parameter varies linearly in time, λ ( t ) = v t {\displaystyle \lambda (t)=vt} , equating the time to the critical point to the relaxation time yields the freeze-out time t ¯ {\displaystyle {\bar {t}}} , t ¯ = [ λ ( t ¯ ) ] − z ν ⇒ t ¯ ∼ v − z ν / ( 1 + z ν ) . {\displaystyle {\bar {t}}=[\lambda ({\bar {t}})]^{-z\nu }\Rightarrow {\bar {t}}\sim v^{-z\nu /(1+z\nu )}.} This time scale is the intersection point of the blue and the red curve in the figure: for a linear quench, the distance to the transition measures both the time remaining to reach the critical point (red curve) and the deviation of the control parameter from its critical value (blue curve). As the system approaches the critical point, it freezes as a result of the critical slowing down and falls out of equilibrium. Adiabaticity is lost around − t ¯ {\displaystyle -{\bar {t}}} and is restored in the broken-symmetry phase after + t ¯ {\displaystyle +{\bar {t}}} . The correlation length at this time provides a length scale for coherent domains, ξ ¯ ≡ ξ [ λ ( t ¯ ) ] ∼ v − ν / ( 1 + z ν ) . {\displaystyle {\bar {\xi }}\equiv \xi [\lambda ({\bar {t}})]\sim v^{-\nu /(1+z\nu )}.} The size of the domains in the broken-symmetry phase is set by ξ ¯ {\displaystyle {\bar {\xi }}} . The density of defects follows immediately if d {\displaystyle d} is the dimension of the system, using ρ ∼ ξ ¯ − d . {\displaystyle \rho \sim {\bar {\xi }}^{-d}.} The Kibble–Zurek mechanism generally applies to spontaneous symmetry breaking scenarios where a global symmetry is broken. For gauge symmetries, defect formation can arise through the Kibble–Zurek mechanism and the flux trapping mechanism proposed by Hindmarsh and Rajantie. [ 7 ] [ 8 ] In 2005, it was shown that KZM also describes the dynamics across a quantum phase transition .
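The power laws above can be packaged into a short numerical helper. The following Python sketch (an illustration added here, with mean-field values used only as an example) returns the exponents governing the freeze-out time, the frozen correlation length, and the defect density as functions of the quench rate v:

```python
def kz_exponents(nu, z, d):
    """Kibble-Zurek exponents with respect to the quench rate v:
    t_bar ~ v**t_exp, xi_bar ~ v**xi_exp, defect density rho ~ v**rho_exp."""
    denom = 1.0 + z * nu
    t_exp = -z * nu / denom        # freeze-out time exponent
    xi_exp = -nu / denom           # frozen correlation-length exponent
    rho_exp = d * nu / denom       # rho ~ xi_bar**(-d) => density exponent
    return t_exp, xi_exp, rho_exp

# Example: mean-field exponents nu = 1/2, z = 2 in d = 3 dimensions
# give a defect density scaling rho ~ v**(3/4).
t_exp, xi_exp, rho_exp = kz_exponents(nu=0.5, z=2.0, d=3)
```

Faster quenches (larger v) thus freeze out earlier, leaving smaller domains and a higher defect density.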
[ 9 ] [ 10 ] [ 11 ] [ 12 ] In 2008 spontaneous vortices were observed in the formation of atomic Bose-Einstein condensates, consistent with the Kibble-Zurek mechanism. [ 13 ] The mechanism also applies in the presence of inhomogeneities, [ 14 ] ubiquitous in condensed matter experiments, to both classical [ 15 ] [ 16 ] [ 17 ] and quantum phase transitions, [ 18 ] [ 19 ] and even in optics. [ 20 ] A variety of experiments have been reported that can be described by the Kibble–Zurek mechanism. [ 21 ] A review by T. Kibble discusses the significance and limitations of various experiments (until 2007). [ 22 ] A system in which structure formation can be visualized directly is a colloidal monolayer which forms a hexagonal crystal in two dimensions. The phase transition is described by the so-called Kosterlitz–Thouless–Halperin–Nelson–Young theory , where translational and orientational symmetry are broken by two Kosterlitz–Thouless transitions . The corresponding topological defects are dislocations and disclinations in two dimensions. The latter are nothing but the monopoles of the high-symmetry phase within the six-fold director field of crystal axes. A special feature of Kosterlitz–Thouless transitions is the exponential divergence of correlation times and lengths (instead of algebraic ones). This leads to a transcendental equation for the freeze-out time, which can be solved numerically. The figure shows a comparison of the Kibble–Zurek scaling with algebraic and exponential divergences. The data illustrate that the Kibble–Zurek mechanism also works for transitions of the Kosterlitz–Thouless universality class. [ 23 ]
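With an exponential divergence, the freeze-out condition (equating the time to the transition with the relaxation time) indeed becomes transcendental. A minimal sketch of solving such an equation by bisection follows; the specific functional form tau = exp(b / sqrt(lambda)) and the parameter values b, v are illustrative assumptions, not taken from the article:

```python
import math

# Illustrative sketch: freeze-out condition t = tau(lambda(t)) for an
# exponential (Kosterlitz-Thouless-like) divergence of the relaxation time,
# tau = exp(b / sqrt(lambda)), with a linear quench lambda(t) = v * t.

def freeze_out_time(b, v, t_lo=1.0, t_hi=1e12, tol=1e-12):
    """Solve t = exp(b / sqrt(v * t)) by bisection.  f(t) = t - tau is
    monotonically increasing in t, so the bracketed root is unique."""
    f = lambda t: t - math.exp(b / math.sqrt(v * t))
    assert f(t_lo) < 0.0 < f(t_hi), "root must be bracketed"
    while t_hi - t_lo > tol * t_hi:
        mid = 0.5 * (t_lo + t_hi)
        if f(mid) < 0.0:
            t_lo = mid
        else:
            t_hi = mid
    return 0.5 * (t_lo + t_hi)

t_bar = freeze_out_time(b=1.0, v=0.01)   # slower quenches freeze out later
```

The same bisection works for any monotonic divergence, algebraic or exponential, which is how the two cases in the figure can be compared numerically.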
https://en.wikipedia.org/wiki/Kibble–Zurek_mechanism
KickStart International is a nonprofit social enterprise headquartered in Nairobi , Kenya . KickStart designs and mass-markets climate-smart irrigation technology to smallholder farmers in sub-Saharan Africa , in order to enable a transition from subsistence agriculture to commercial irrigated agriculture. Donor funds are used to design the irrigation pumps, establish supply chains, demonstrate and promote the pumps, and educate farmers on the benefits and methods of small-scale irrigation. [ 1 ] Food supply across sub-Saharan Africa is highly unstable due to its unpredictable climate and water reserves. [ 2 ] Only 6% of Africa's cultivated land is irrigated, limiting the volume of crops that can be grown out of season, but increased access to irrigation systems stands to increase food productivity by up to 50%. [ 3 ] KickStart was founded in 1991 by Dr. Martin Fisher and Nick Moon. Fisher first went to Kenya on a Fulbright Fellowship to study the Appropriate Technology Movement , where he met Moon, who was in Kenya with the Voluntary Service Overseas (VSO) . The two worked closely together on a variety of development interventions, including building rural water systems, constructing schools, and creating job training programs. Out of frustration with traditional development models, Fisher and Moon developed an alternative model for poverty alleviation. [ 4 ] Their model was based on a five-step process to develop, launch and promote simple money-making tools that poor entrepreneurs could use to create their own profitable businesses. [ 5 ] [ 6 ] Together, they founded ApproTEC, which later became KickStart International in 2005. Starting in 1998, KickStart began developing a line of manually operated irrigation pumps, designed to enable farmers to easily pull water from a river, pond, or shallow well, and pressurize it through a hose pipe to reach their crops. 
Through this small-scale technological intervention, farmers can harvest their crops year-round, facilitating a transition from rain-fed subsistence farming to year-round commercial irrigated agriculture. [ 7 ] The MoneyMaker Max can pressurize water to a total height of 50 feet (15 m), pushing it through a hose pipe as far as 200 metres (660 ft), and can irrigate as much as two acres (0.81 ha) of land. [ 8 ] KickStart has received the following awards: Schwab Foundation's Outstanding Social Entrepreneurs (2003), [ 9 ] US State Department "Innovation Award for the Empowerment of Women and Girls" (2012), [ 10 ] Forbes Magazine Impact 30 List - World's leading social entrepreneurs (2011), [ 11 ] Lemelson-MIT Award for Sustainability (2008), Social Capitalist Award Fast Company Magazine & the Monitor Group (2008), [ 12 ] Skoll Award for Social Entrepreneurship (2005), [ 13 ] Gleitsman Award of Achievement (2003). [ 14 ]
https://en.wikipedia.org/wiki/KickStart_International
The kicked rotator , also spelled as kicked rotor , is a paradigmatic model for both Hamiltonian chaos (the study of chaos in Hamiltonian systems ) and quantum chaos . It describes a freely rotating stick (with moment of inertia I {\displaystyle I} ) in an inhomogeneous "gravitation like" field that is periodically switched on in short pulses. The model is described by the Hamiltonian H = p θ 2 2 I + K cos ⁡ θ ∑ n = − ∞ ∞ δ ( t T − n ) {\displaystyle {\mathcal {H}}={\frac {p_{\theta }^{2}}{2I}}+K\cos \theta \sum _{n=-\infty }^{\infty }\delta \left({\frac {t}{T}}-n\right)} where θ ∈ [ 0 , 2 π ] {\displaystyle \theta \in [0,2\pi ]} is the angular position of the stick ( θ = π {\displaystyle \theta =\pi } corresponds to the position of the rotator at rest), p θ {\displaystyle p_{\theta }} is the conjugated momentum of θ {\displaystyle \theta } , K {\displaystyle \textstyle K} is the kicking strength, T {\displaystyle T} is the kicking period and δ {\displaystyle \textstyle \delta } is the Dirac delta function . The equations of motion of the kicked rotator read d θ d t = ∂ H ∂ p = p I and d p d t = − ∂ H ∂ θ = K sin ⁡ θ ∑ n = − ∞ ∞ δ ( t T − n ) {\displaystyle {\frac {\mathrm {d} \theta }{\mathrm {d} t}}={\frac {\partial {\mathcal {H}}}{\partial p}}={\frac {p}{I}}\quad {\text{and}}\quad {\frac {\mathrm {d} p}{\mathrm {d} t}}=-{\frac {\partial {\mathcal {H}}}{\partial \theta }}=K\sin \theta \sum _{n=-\infty }^{\infty }\delta \left({\frac {t}{T}}-n\right)} These equations show that between two consecutive kicks, the rotator simply moves freely: the momentum p {\displaystyle p} is conserved and the angular position grows linearly in time. On the other hand, during each kick the momentum abruptly jumps by a quantity K T sin ⁡ θ {\displaystyle KT\sin \theta } , where θ {\displaystyle \theta } is the angular position near the kick.
The kicked rotator dynamics can thus be described by the discrete map [ 1 ] p n + 1 = p n + K T sin ⁡ θ n and θ n + 1 = θ n + T I p n + 1 {\displaystyle p_{n+1}=p_{n}+KT\sin \theta _{n}\quad {\text{and}}\quad \theta _{n+1}=\theta _{n}+{\frac {T}{I}}p_{n+1}} where θ n {\displaystyle \theta _{n}} and p n {\displaystyle p_{n}} are the canonical coordinates at time t = n T − {\displaystyle t=nT^{-}} , just before the n {\displaystyle n} -th kick. It is usually more convenient to introduce the dimensionless momentum p → p / I T {\textstyle p\rightarrow p/{\frac {I}{T}}} , time t → t / T {\textstyle t\rightarrow t/T} and kicking strength K → K / I T 2 {\textstyle K\rightarrow K/{\frac {I}{T^{2}}}} to reduce the dynamics to the single-parameter map p n + 1 = p n + K sin ⁡ θ n and θ n + 1 = θ n + p n + 1 {\displaystyle p_{n+1}=p_{n}+K\sin \theta _{n}\quad {\text{and}}\quad \theta _{n+1}=\theta _{n}+p_{n+1}} known as the Chirikov standard map , with the caveat that p n {\displaystyle p_{n}} is not periodic as in the standard map. However, one can directly see that two rotators with the same initial angular position θ 0 {\displaystyle \theta _{0}} but dimensionless momenta p 0 {\displaystyle p_{0}} and p 0 + 2 π l {\textstyle p_{0}+2\pi l} (with l {\displaystyle l} an arbitrary integer) will have exactly the same stroboscopic dynamics, but with dimensionless momentum shifted at any time by 2 π l {\textstyle 2\pi l} (this is why stroboscopic phase portraits of the kicked rotator are usually displayed in a single momentum cell p ∈ [ − π , π ] {\textstyle p\in [-\pi ,\pi ]} ). The kicked rotator is a prototype model used to illustrate the transition from integrability to chaos in Hamiltonian systems and in particular the Kolmogorov–Arnold–Moser theorem .
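The single-parameter map is straightforward to iterate numerically. A minimal sketch in plain Python (initial conditions chosen only for illustration):

```python
import math

# Minimal sketch: iterate the dimensionless kicked-rotator (Chirikov standard)
# map  p_{n+1} = p_n + K*sin(theta_n),  theta_{n+1} = theta_n + p_{n+1}.

def standard_map(theta, p, K, n_kicks):
    """Return (theta, p) after n_kicks; theta is wrapped to [0, 2*pi) while
    p is left unwrapped so momentum transport remains visible."""
    for _ in range(n_kicks):
        p = p + K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
    return theta, p

# Integrable limit K = 0: the rotator moves freely and p is exactly conserved.
_, p_free = standard_map(theta=1.0, p=0.7, K=0.0, n_kicks=100)
```

Recording (theta, p) at every iterate for many initial conditions produces the usual stroboscopic phase portraits, with p folded into a single momentum cell.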
In the limit K = 0 {\displaystyle K=0} , the system describes the free motion of the rotator, the momentum is conserved (the system is integrable ) and the corresponding trajectories are straight lines in the ( θ , p ) {\displaystyle (\theta ,p)} plane (phase space), that is, tori. For a small but non-vanishing perturbation K {\displaystyle K} , instabilities and chaos start to develop. Only quasi-periodic orbits (represented by invariant tori in phase space) remain stable, while other orbits become unstable. For larger K {\displaystyle K} , invariant tori are eventually destroyed by the perturbation. For the value K = K c ≈ 0.971635 … {\displaystyle K=K_{c}\approx 0.971635\dots } , the last invariant torus connecting θ = − π {\displaystyle \theta =-\pi } and θ = π {\displaystyle \theta =\pi } in phase space is destroyed. For K > K c {\displaystyle K>K_{c}} , chaotic unstable orbits are no longer constrained by invariant tori in the momentum direction and can explore the full phase space. For K ≫ K c {\displaystyle K\gg K_{c}} , the particle typically moves over a large distance after each kick, which strongly modifies the amplitude and sign of the following kick. At long enough times, the particle has thus been subjected to a series of kicks with quasi-random amplitudes. This quasi-random walk is responsible for a diffusion process in the momentum direction ⟨ ( Δ p n ) 2 ⟩ = 2 D cl n {\displaystyle \langle (\Delta p_{n})^{2}\rangle =2D_{\text{cl}}n} (where the average runs over different initial conditions). More precisely, after n {\displaystyle n} kicks, the momentum p n {\displaystyle p_{n}} of a particle with initial momentum p 0 {\displaystyle p_{0}} reads p n = p 0 + K ∑ i = 0 n − 1 sin ⁡ θ i {\textstyle p_{n}=p_{0}+K\sum _{i=0}^{n-1}\sin \theta _{i}} [ 2 ] (obtained by iterating n {\displaystyle n} times the standard map).
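This diffusive growth can be checked numerically by iterating the standard map for an ensemble of random initial angles and comparing the momentum spread with the quasi-linear value D_cl = K²/4. A sketch (K, ensemble size and kick number are illustrative choices; the result is statistical):

```python
import math, random

# Illustrative sketch: ensemble estimate of chaotic momentum diffusion in the
# standard map, compared with the quasi-linear estimate D_cl = K**2 / 4,
# i.e. <(Delta p)^2> ~ 2 * D_cl * n.  K is large enough (K >> K_c) that the
# dynamics is globally chaotic.

random.seed(0)
K, n_kicks, n_particles = 5.14, 200, 2000

total = 0.0
for _ in range(n_particles):
    theta = random.uniform(0.0, 2.0 * math.pi)
    p = 0.0
    for _ in range(n_kicks):
        p += K * math.sin(theta)
        theta = (theta + p) % (2.0 * math.pi)
    total += p * p

measured = total / n_particles              # <(Delta p)^2> after n_kicks
predicted = 2.0 * (K**2 / 4.0) * n_kicks    # quasi-linear estimate
ratio = measured / predicted                # order unity in the chaotic regime
```

Residual deviations of the ratio from one reflect the correlation corrections (Bessel-function terms) to the quasi-linear estimate.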
Assuming that the kicks are random and uncorrelated in time, the spreading of the momentum distribution reads ⟨ ( Δ p ) 2 ⟩ = ⟨ ( p n − p 0 ) 2 ⟩ = K 2 ∑ i = 0 n − 1 ⟨ sin 2 θ i ⟩ + K 2 ∑ i ≠ j ⟨ sin ⁡ θ i sin ⁡ θ j ⟩ ≈ K 2 ∑ i = 0 n − 1 ⟨ sin 2 θ i ⟩ = 1 2 K 2 n {\displaystyle \left\langle {(\Delta p)}^{2}\right\rangle =\left\langle {(p_{n}-p_{0})}^{2}\right\rangle =K^{2}\sum _{i=0}^{n-1}\left\langle {\sin }^{2}\theta _{i}\right\rangle +K^{2}\sum _{i\neq j}^{}\left\langle \sin \theta _{i}\sin \theta _{j}\right\rangle \approx K^{2}\sum _{i=0}^{n-1}\left\langle {\sin }^{2}\theta _{i}\right\rangle ={\frac {1}{2}}K^{2}n} The classical diffusion coefficient in the momentum direction is then given to first approximation by D cl = K 2 4 {\textstyle D_{\text{cl}}={\frac {K^{2}}{4}}} . Corrections coming from the neglected correlation terms can actually be taken into account, leading to the improved expression [ 3 ] D cl = K 2 4 [ 1 − 2 J 2 ( K ) + 2 J 2 2 ( K ) ] {\displaystyle D_{\text{cl}}={\frac {K^{2}}{4}}[1-2J_{2}(K)+2J_{2}^{2}(K)]} where J 2 {\textstyle J_{2}} is the Bessel function of the first kind. The dynamics of the quantum kicked rotator (with wave function | ψ ( t ) ⟩ {\displaystyle |\psi (t)\rangle } ) is governed by the time-dependent Schrödinger equation with [ θ ^ , p ^ ] = i ℏ {\displaystyle [{\hat {\theta }},{\hat {p}}]=i\hbar } (or equivalently ⟨ θ | p ^ | ψ ⟩ = − i ℏ ∂ ψ ∂ θ {\textstyle \langle \theta |{\hat {p}}|\psi \rangle =-i\hbar {\frac {\partial \psi }{\partial \theta }}} ). As for the classical dynamics, a stroboscopic point of view can be adopted by introducing the time propagator over a kicking period U ^ {\displaystyle {\hat {U}}} (that is, the Floquet operator ) so that | ψ ( t + T ) ⟩ = U ^ | ψ ( t ) ⟩ {\displaystyle |\psi (t+T)\rangle ={\hat {U}}|\psi (t)\rangle } .
After a careful integration of the time-dependent Schrödinger equation, one finds that U ^ {\displaystyle {\hat {U}}} can be written as the product of two operators U ^ = exp ⁡ [ − i p ^ 2 T 2 I ℏ ] exp ⁡ [ − i K T ℏ cos ⁡ θ ^ ] {\displaystyle {\hat {U}}=\exp \left[-i{\frac {{\hat {p}}^{2}T}{2I\hbar }}\right]\exp \left[-i{\frac {KT}{\hbar }}\cos {\hat {\theta }}\right]} We recover the classical interpretation: the dynamics of the quantum kicked rotor between two kicks is the succession of a free propagation during a time T {\displaystyle T} followed by a short kick. This simple expression of the Floquet operator U ^ {\displaystyle {\hat {U}}} (a product of two operators, one diagonal in the momentum basis, the other diagonal in the angular position basis) makes it easy to solve numerically for the evolution of a given wave function using the split-step method . Because of the periodic boundary conditions at θ = ± π {\displaystyle \theta =\pm \pi } , any wave function | ψ ⟩ {\displaystyle |\psi \rangle } can be expanded in a discrete momentum basis | l ⟩ {\textstyle |l\rangle } (with p = l ℏ {\displaystyle p=l\hbar } , l {\displaystyle l} an integer; see Bloch's theorem ), so that ⟨ θ | ψ ⟩ = 1 2 π ∑ l = − ∞ ∞ e i l θ ⟨ l | ψ ⟩ {\displaystyle \langle \theta |\psi \rangle ={\frac {1}{\sqrt {2\pi }}}\sum _{l=-\infty }^{\infty }e^{il\theta }\langle l|\psi \rangle } . Using this relation with the above expression of U ^ {\displaystyle {\hat {U}}} , we find the recursion relation [ 4 ] ⟨ l | ψ ( t + T ) ⟩ = exp ⁡ ( − i l 2 ℏ T 2 I ) ∑ m = − ∞ ∞ ( − i ) m − l J m − l ( K T ℏ ) ⟨ m | ψ ( t ) ⟩ {\displaystyle \langle l|\psi (t+T)\rangle =\exp \left(-i{\frac {l^{2}\hbar T}{2I}}\right)\sum _{m=-\infty }^{\infty }(-i)^{m-l}J_{m-l}\left({\frac {KT}{\hbar }}\right)\langle m|\psi (t)\rangle } where J n {\displaystyle \textstyle {J}_{n}} is a Bessel function of the first kind. It has been discovered [ 1 ] that the classical diffusion is suppressed in the quantum kicked rotator. It was later understood [ 5 ] [ 6 ] [ 7 ] [ 8 ] that this is a manifestation of a quantum dynamical localization effect that parallels Anderson localization .
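Dynamical localization can be seen directly with the split-step scheme described above. The sketch below (dimensionless units with T = I = 1; the effective Planck constant, kick strength and number of kicks are illustrative choices) evolves an initial p = 0 state and monitors the momentum spread:

```python
import numpy as np

# Illustrative sketch: split-step evolution of the quantum kicked rotator.
# One period applies the kick phase exp(-i*(K/hbar)*cos(theta)) in angle
# space, then the free phase exp(-i*hbar*l**2/2) in the discrete momentum
# basis p = hbar*l; the FFT switches between the two representations.

N, K, hbar, n_kicks = 2048, 5.0, 1.0, 500
l = np.fft.fftfreq(N, d=1.0 / N)             # integer momentum indices l
theta = 2.0 * np.pi * np.arange(N) / N

kick = np.exp(-1j * (K / hbar) * np.cos(theta))
free = np.exp(-0.5j * hbar * l**2)

psi = np.zeros(N, dtype=complex)             # wave function, momentum basis
psi[0] = 1.0                                 # start at rest, p = 0
for _ in range(n_kicks):
    psi = np.fft.fft(np.fft.ifft(psi) * kick) * free

prob = np.abs(psi)**2                        # momentum distribution, norm ~ 1
p2 = float(np.sum(prob * (hbar * l)**2))     # <p^2> after n_kicks
# Dynamical localization: p2 saturates far below the classical diffusive
# estimate 2 * (K**2 / 4) * n_kicks.
```

The resulting momentum distribution is exponentially localized around p = 0 rather than a spreading Gaussian, which is the numerical signature of dynamical localization.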
There is a general argument [ 9 ] [ 10 ] that leads to an estimate of the breaktime t ∗ {\displaystyle t^{*}} of the diffusive behavior in terms of the classical diffusion coefficient D c l {\displaystyle D_{cl}} . The associated localization scale in momentum is therefore D c l t ∗ {\displaystyle \textstyle {\sqrt {D_{cl}t^{*}}}} . The quantum kicked rotor can actually formally be related to the Anderson tight-binding model , a celebrated Hamiltonian that describes electrons in a disordered lattice with lattice site states | n ⟩ {\displaystyle |n\rangle } , where Anderson localization takes place (in one dimension) H ^ = ∑ n ε n | n ⟩ ⟨ n | + ∑ n ≠ m t n − m | n ⟩ ⟨ m | {\displaystyle {\hat {H}}=\sum _{n}\varepsilon _{n}|n\rangle \langle n|+\sum _{n\neq m}t_{n-m}|n\rangle \langle m|} where the ε n {\displaystyle \varepsilon _{n}} are random on-site energies, and the t n − m {\displaystyle t_{n-m}} are the hopping amplitudes between sites n {\displaystyle n} and m {\displaystyle m} . In the quantum kicked rotator it can be shown [ 11 ] that the plane waves | p ⟩ {\displaystyle |p\rangle } with quantized momentum p = n ℏ {\displaystyle p=n\hbar } play the role of the lattice site states. The full mapping to the Anderson tight-binding model goes as follows (for a given eigenstate of the Floquet operator, with quasi-energy ω {\displaystyle \omega } ) t n = − ∫ − π π d x 2 π tan ⁡ [ K cos ⁡ ( x ) / 2 ] e − i x n and ε n = tan ⁡ ( ω / 2 − n 2 / 4 ) {\displaystyle t_{n}=-\int _{-\pi }^{\pi }{\frac {\mathrm {d} x}{2\pi }}\tan[K\cos(x)/2]\mathrm {e} ^{-ixn}\quad {\text{and}}\quad \varepsilon _{n}=\tan(\omega /2-n^{2}/4)} Dynamical localization in the quantum kicked rotator then actually takes place in the momentum basis. If noise is added to the system, the dynamical localization is destroyed, and diffusion is induced. [ 12 ] [ 13 ] [ 14 ] This is somewhat similar to hopping conductance.
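The hopping amplitudes of this mapping can be evaluated numerically. A sketch (K and the grid size are illustrative; K < π keeps the tangent finite over the integration range):

```python
import numpy as np

# Illustrative sketch: hopping amplitudes of the Anderson-model mapping,
#   t_n = -(1/2pi) * integral_{-pi}^{pi} tan(K*cos(x)/2) * exp(-i*x*n) dx,
# approximated by a Riemann sum on a uniform periodic grid (spectrally
# accurate for this smooth periodic integrand).

def hopping(K, n, M=4096):
    """Riemann-sum approximation of the hopping amplitude t_n."""
    x = -np.pi + 2.0 * np.pi * np.arange(M) / M
    integrand = np.tan(K * np.cos(x) / 2.0) * np.exp(-1j * x * n)
    return complex(-integrand.mean())

K = 2.0
t = [hopping(K, n) for n in range(5)]
# The integrand is even in x, so each t_n is real; the antisymmetry
# tan(K*cos(x + pi)/2) = -tan(K*cos(x)/2) makes even-n amplitudes vanish,
# and |t_n| decays rapidly with the remaining odd n.
```

The rapid decay of the hopping amplitudes with distance is what makes the tight-binding analogy, and hence the localization argument, effective.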
The proper analysis requires figuring out how the dynamical correlations that are responsible for the localization effect are diminished. Recall that, in the convention where ⟨ ( Δ p ) 2 ⟩ = D c l n {\displaystyle \langle (\Delta p)^{2}\rangle =D_{cl}n} , the diffusion coefficient is D c l ≈ K 2 / 2 {\displaystyle D_{cl}\approx K^{2}/2} , because the change ( p ( t ) − p ( 0 ) ) {\displaystyle (p(t)-p(0))} in the momentum is the sum of quasi-random kicks K sin ⁡ ( x ( n ) ) {\displaystyle K\sin(x(n))} . An exact expression for D c l {\displaystyle D_{cl}} is obtained by calculating the "area" of the correlation function C ( n ) = ⟨ sin ⁡ ( x ( n ) ) sin ⁡ ( x ( 0 ) ) ⟩ {\displaystyle C(n)=\langle \sin(x(n))\sin(x(0))\rangle } , namely the sum D = K 2 ∑ C ( n ) {\displaystyle D=K^{2}\sum C(n)} . Note that C ( 0 ) = 1 / 2 {\displaystyle C(0)=1/2} . The same calculation recipe holds also in the quantum mechanical case, and also if noise is added. In the quantum case, without the noise, the area under C ( n ) {\displaystyle C(n)} is zero (due to long negative tails), while with the noise a practical approximation is C ( n ) ↦ C ( n ) e − t / t c {\displaystyle C(n)\mapsto C(n)e^{-t/t_{c}}} where the coherence time t c {\displaystyle t_{c}} is inversely proportional to the intensity of the noise. Consequently, a finite noise-induced diffusion coefficient is obtained. The problem of the quantum kicked rotator with dissipation (due to coupling to a thermal bath) has also been considered. A subtle issue is how to introduce an interaction that respects the angle periodicity of the position coordinate x {\displaystyle x} and is still spatially homogeneous. In the first works, [ 15 ] [ 16 ] a quantum-optic type interaction was assumed that involves a momentum-dependent coupling. Later, [ 17 ] a way to formulate a purely position-dependent coupling, as in the Caldeira–Leggett model, was found, which can be regarded as an early version of the DLD model . The first experimental realizations of the quantum kicked rotator were achieved by the group of Mark G. Raizen [ 18 ] [ 19 ] in 1995, later followed by the Auckland group, [ 20 ] and have encouraged a renewed interest in the theoretical analysis. In this kind of experiment, a sample of cold atoms provided by a magneto-optical trap interacts with a pulsed standing wave of light. The light being detuned with respect to the atomic transitions, the atoms experience a space-periodic conservative force . Hence, the angular dependence is replaced by a dependence on position in the experimental approach. Sub-millikelvin cooling is necessary to obtain quantum effects: because of the Heisenberg uncertainty principle , the de Broglie wavelength, i.e. the atomic wavelength, can become comparable to the light wavelength. For further information, see [ 21 ] . Thanks to this technique, several notable phenomena have been investigated.
https://en.wikipedia.org/wiki/Kicked_rotator