Statistical finance [1] is the application of econophysics [2] to financial markets. Instead of the normative roots of finance, it uses a positivist framework. It includes exemplars from statistical physics with an emphasis on emergent or collective properties of financial markets. Empirically observed stylized facts are the starting point for this approach to understanding financial markets. Statistical finance focuses on three areas. Financial econometrics also has a focus on the first two of these three areas. However, there is almost no overlap or interaction between the community of statistical finance researchers (who typically publish in physics journals) and the community of financial econometrics researchers (who typically publish in economics journals). Behavioural finance attempts to explain price anomalies in terms of the biased behaviour of individuals, and is mostly concerned with the agents themselves and to a lesser degree with the aggregation of agent behaviour. Statistical finance is concerned with emergent properties arising from systems with many interacting agents, and as such attempts to explain price anomalies in terms of collective behaviour. Emergent properties are largely independent of the uniqueness of individual agents because they depend on the nature of the interactions of the agents rather than on the agents themselves. This approach has drawn strongly on ideas arising from complex systems, phase transitions, criticality, self-organized criticality, non-extensivity (see Tsallis entropy), q-Gaussian models, and agent-based models (see agent-based model), as these are known to be able to recover some of the phenomenology of financial market data, the stylized facts, in particular the long-range memory and scaling due to long-range interactions. Within the subject, describing financial markets blindly in terms of models of statistical physics has been argued to be flawed, because it has transpired that these models do not fully correspond to what we now know about real financial markets. First, traders create largely noise, not long-range correlations among themselves, except when they all buy or all sell, such as during a popular IPO or during a crash. A market is not at an equilibrium critical point, and the resulting non-equilibrium market must reflect details of traders' interactions (universality applies only to a very limited class of bifurcations, and the market does not sit at a bifurcation). Even if the notion of a thermodynamic equilibrium is considered not at the level of the agents but in terms of collections of instruments, stable configurations are not observed. The market does not 'self-organize' into a stable statistical equilibrium; rather, markets are unstable. Although markets could be 'self-organizing' in the sense used by finite-time singularity models, these models are difficult to falsify. Although complex systems have never been defined in a broad sense, financial markets do satisfy a reasonable criterion for being considered complex adaptive systems. [3] The Tsallis doctrine has been called into question, as it is apparently a special case of Markov dynamics, which casts doubt on the very notion of a "non-linear Fokker–Planck equation". In addition, the standard 'stylized facts' of financial markets, fat tails, scaling, and universality, are not observed in real FX markets even if they are observed in equity markets.
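As a small illustration of how such stylized facts are measured in practice, the following Python sketch computes two of them, fat tails (excess kurtosis of returns) and the autocorrelation of absolute returns, from a purely synthetic price series; numpy is assumed, the data are placeholders, and the numbers carry no empirical content.

import numpy as np

# Hypothetical price series; a real study would use actual market data.
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.01 * rng.standard_t(df=3, size=2500)))

r = np.diff(np.log(prices))  # log returns

# Fat tails: excess kurtosis well above 0, the Gaussian reference value.
excess_kurtosis = ((r - r.mean()) ** 4).mean() / r.var() ** 2 - 3

# Volatility clustering / long-range memory would show up as a slowly decaying
# autocorrelation of absolute returns; for the iid placeholder series above it
# will be close to zero, whereas real market data typically show it clearly.
def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

acf_abs_returns = {lag: round(autocorr(np.abs(r), lag), 3) for lag in (1, 5, 20, 60)}
print(round(float(excess_kurtosis), 2), acf_abs_returns)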
From outside the subject, the approach has been considered by many as a dangerous view of finance, and it has drawn criticism from some heterodox economists on several grounds. [4] In response to these criticisms there are claims of a general maturing of these non-traditional empirical approaches to finance. [5] This defense of the subject does not endorse the use of physics metaphors, but it does defend the alternative empirical approach of "econophysics" itself. Some of the key data claims have been questioned in terms of the methods of data analysis used. [6] Some of the ideas arising from the nonlinear sciences and statistical physics have been helpful in shifting our understanding of financial markets, and may yet be found useful, but the particular application of stochastic analysis to the specific models useful in finance appears to be unique to finance as a subject. There is much lacking in this approach to finance, yet it would appear that the canonical approach to finance, based on optimization of individual behaviour given information and preferences, with assumptions made to allow aggregation in equilibrium, is even more problematic. It has been suggested that what is required is a change in mindset within finance and economics that moves the field towards the methods of natural science. [7] Perhaps finance needs to be thought of more as an observational science, where markets are observed in the same way as the observable universe in cosmology or the observable ecosystems in the environmental sciences. Here local principles can be uncovered by local experiments, but meaningful global experiments are difficult to envision as feasible without reproducing the system being observed. The required science becomes one based largely on pluralism (see scientific pluralism, the view that some phenomena observed in science require multiple explanations to account for their nature), as in most sciences that deal with complexity, rather than a single unified mathematical framework that is to be verified by experiment. See Econophysics bibliography and textbooks.
https://en.wikipedia.org/wiki/Statistical_finance
Statistical fluctuations are fluctuations in quantities derived from many identical random processes. They are fundamental and unavoidable. It can be shown that the relative fluctuations decrease in inverse proportion to the square root of the number of identical processes. Statistical fluctuations are responsible for many results of statistical mechanics and thermodynamics, including phenomena such as shot noise in electronics. When a number of random processes occur, it can be shown that the outcomes fluctuate (vary in time) and that the fluctuations are inversely proportional to the square root of the number of processes. The average of fluctuations over a statistical ensemble is always zero, as they are defined as deviations from the mean. [1] To characterize the intensity of fluctuations, several statistical measures are used. The variance is the most common measure of fluctuation intensity; it is defined as the average of the squared deviations from the mean. [2] The root mean square (RMS) fluctuation is the square root of the variance and provides a measure of the typical magnitude of fluctuations. As a familiar example, if a fair coin is tossed many times and the number of heads and tails counted, the ratio of heads to tails will be very close to 1 (about as many heads as tails); but after only a few throws, outcomes with a significant excess of heads over tails, or vice versa, are common, and if an experiment with a few throws is repeated over and over, the outcomes will fluctuate a great deal. An electric current so small that not many electrons are involved, flowing through a p-n junction, is susceptible to statistical fluctuations, as the actual number of electrons per unit time (the current) will fluctuate; this produces detectable and unavoidable electrical noise known as shot noise.
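A minimal numerical sketch of this inverse-square-root behaviour for the coin example, assuming numpy (the toss counts and number of repetitions are arbitrary):

import numpy as np

rng = np.random.default_rng(1)
for n_tosses in (10, 100, 10_000, 1_000_000):
    # Repeat the n-toss experiment 1000 times and measure the relative
    # fluctuation of the heads fraction about its mean of 0.5.
    heads_fraction = rng.binomial(n_tosses, 0.5, size=1000) / n_tosses
    relative_fluct = heads_fraction.std() / heads_fraction.mean()
    print(n_tosses, round(float(relative_fluct), 5), round(1 / np.sqrt(n_tosses), 5))
# The measured relative fluctuation tracks 1/sqrt(N): multiplying the number
# of tosses by 100 shrinks it by roughly a factor of 10.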
https://en.wikipedia.org/wiki/Statistical_fluctuations
A statistical hypothesis test is a method of statistical inference used to decide whether the data provide sufficient evidence to reject a particular hypothesis. A statistical hypothesis test typically involves a calculation of a test statistic. Then a decision is made, either by comparing the test statistic to a critical value or, equivalently, by evaluating a p-value computed from the test statistic. Roughly 100 specialized statistical tests are in use and noteworthy. [1] [2] While hypothesis testing was popularized early in the 20th century, early forms were used in the 1700s. The first use is credited to John Arbuthnot (1710), [3] followed by Pierre-Simon Laplace (1770s), in analyzing the human sex ratio at birth; see § Human sex ratio. Paul Meehl has argued that the epistemological importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment. [4] An examination of the origins of the latter practice may therefore be useful: 1778: Pierre Laplace compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus, the null hypothesis in this case is that the birthrates of boys and girls should be equal, given "conventional wisdom". [5] 1900: Karl Pearson develops the chi-squared test to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of fives and sixes in the Weldon dice throw data. [6] 1904: Karl Pearson develops the concept of "contingency" in order to determine whether outcomes are independent of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox). [7] The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the principle of indifference that led Fisher and others to dismiss the use of "inverse probabilities". [8] Modern significance testing is largely the product of Karl Pearson (p-value, Pearson's chi-squared test), William Sealy Gosset (Student's t-distribution), and Ronald Fisher ("null hypothesis", analysis of variance, "significance test"), while hypothesis testing was developed by Jerzy Neyman and Egon Pearson (son of Karl). Ronald Fisher began his life in statistics as a Bayesian (Zabell 1992), but soon grew disenchanted with the subjectivity involved (namely use of the principle of indifference when determining prior probabilities), and sought to provide a more "objective" approach to inductive inference. [9] Fisher emphasized rigorous experimental design and methods to extract a result from few samples assuming Gaussian distributions. Neyman (who teamed with the younger Pearson) emphasized mathematical rigor and methods to obtain more results from many samples and a wider range of distributions. Modern hypothesis testing is an inconsistent hybrid of the Fisher vs Neyman/Pearson formulation, methods and terminology developed in the early 20th century.
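Pearson's goodness-of-fit idea can be illustrated in a few lines of code; the sketch below tests hypothetical die-roll counts against the uniform distribution predicted by theory, in the spirit of (though not reproducing) the Weldon dice analysis. The counts are invented and scipy is assumed to be available.

from scipy.stats import chisquare

# Hypothetical counts of the six faces in 600 rolls of a die; the null
# hypothesis is the theoretical uniform distribution of 100 per face.
observed = [92, 105, 110, 88, 97, 108]
expected = [100] * 6
statistic, p_value = chisquare(observed, f_exp=expected)
print(statistic, p_value)   # a large p-value means no evidence against uniformity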
Fisher popularized the "significance test". He required a null-hypothesis (corresponding to a population frequency distribution) and a sample. His (now familiar) calculations determined whether to reject the null-hypothesis or not. Significance testing did not utilize an alternative hypothesis so there was no concept of a Type II error (false negative). The p -value was devised as an informal, but objective, index meant to help a researcher determine (based on other knowledge) whether to modify future experiments or strengthen one's faith in the null hypothesis. [ 10 ] Hypothesis testing (and Type I/II errors) was devised by Neyman and Pearson as a more objective alternative to Fisher's p -value, also meant to determine researcher behaviour, but without requiring any inductive inference by the researcher. [ 11 ] [ 12 ] Neyman & Pearson considered a different problem to Fisher (which they called "hypothesis testing"). They initially considered two simple hypotheses (both with frequency distributions). They calculated two probabilities and typically selected the hypothesis associated with the higher probability (the hypothesis more likely to have generated the sample). Their method always selected a hypothesis. It also allowed the calculation of both types of error probabilities. Fisher and Neyman/Pearson clashed bitterly. Neyman/Pearson considered their formulation to be an improved generalization of significance testing (the defining paper [ 11 ] was abstract ; Mathematicians have generalized and refined the theory for decades [ 13 ] ). Fisher thought that it was not applicable to scientific research because often, during the course of the experiment, it is discovered that the initial assumptions about the null hypothesis are questionable due to unexpected sources of error. He believed that the use of rigid reject/accept decisions based on models formulated before data is collected was incompatible with this common scenario faced by scientists and attempts to apply this method to scientific research would lead to mass confusion. [ 14 ] The dispute between Fisher and Neyman–Pearson was waged on philosophical grounds, characterized by a philosopher as a dispute over the proper role of models in statistical inference. [ 15 ] Events intervened: Neyman accepted a position in the University of California, Berkeley in 1938, breaking his partnership with Pearson and separating the disputants (who had occupied the same building). World War II provided an intermission in the debate. The dispute between Fisher and Neyman terminated (unresolved after 27 years) with Fisher's death in 1962. Neyman wrote a well-regarded eulogy. [ 16 ] Some of Neyman's later publications reported p -values and significance levels. [ 17 ] The modern version of hypothesis testing is generally called the null hypothesis significance testing (NHST) [ 18 ] and is a hybrid of the Fisher approach with the Neyman-Pearson approach. In 2000, Raymond S. Nickerson wrote an article stating that NHST was (at the time) "arguably the most widely used method of analysis of data collected in psychological experiments and has been so for about 70 years" and that it was at the same time "very controversial". [ 18 ] This fusion resulted from confusion by writers of statistical textbooks (as predicted by Fisher) beginning in the 1940s [ 19 ] (but signal detection , for example, still uses the Neyman/Pearson formulation). Great conceptual differences and many caveats in addition to those mentioned above were ignored. 
Neyman and Pearson provided the stronger terminology, the more rigorous mathematics and the more consistent philosophy, but the subject taught today in introductory statistics has more similarities with Fisher's method than theirs. [20] Sometime around 1940, [19] authors of statistical textbooks began combining the two approaches by using the p-value in place of the test statistic (or data) to test against the Neyman–Pearson "significance level". Hypothesis testing and philosophy intersect. Inferential statistics, which includes hypothesis testing, is applied probability. Both probability and its application are intertwined with philosophy. Philosopher David Hume wrote, "All knowledge degenerates into probability." Competing practical definitions of probability reflect philosophical differences. The most common application of hypothesis testing is in the scientific interpretation of experimental data, which is naturally studied by the philosophy of science. Fisher and Neyman opposed the subjectivity of probability. Their views contributed to the objective definitions. The core of their historical disagreement was philosophical. Many of the philosophical criticisms of hypothesis testing are discussed by statisticians in other contexts, particularly correlation does not imply causation and the design of experiments. Hypothesis testing is of continuing interest to philosophers. [15] [21] Statistics is increasingly being taught in schools, with hypothesis testing being one of the elements taught. [22] [23] Many conclusions reported in the popular press (from political opinion polls to medical studies) are based on statistics. Some writers have stated that statistical analysis of this kind allows for thinking clearly about problems involving mass data, as well as the effective reporting of trends and inferences from said data, but caution that writers for a broad public should have a solid understanding of the field in order to use the terms and concepts correctly. [24] [25] An introductory college statistics class places much emphasis on hypothesis testing, perhaps half of the course. Such fields as literature and divinity now include findings based on statistical analysis (see the Bible Analyzer). An introductory statistics class teaches hypothesis testing as a cookbook process. Hypothesis testing is also taught at the postgraduate level. Statisticians learn how to create good statistical test procedures (like z, Student's t, F and chi-squared). Statistical hypothesis testing is considered a mature area within statistics, [26] but a limited amount of development continues. An academic study states that the cookbook method of teaching introductory statistics leaves no time for history, philosophy or controversy. Hypothesis testing has been taught as a received, unified method. Surveys showed that graduates of the class were filled with philosophical misconceptions (on all aspects of statistical inference) that persisted among instructors. [27] While the problem was addressed more than a decade ago, [28] and calls for educational reform continue, [29] students still graduate from statistics classes holding fundamental misconceptions about hypothesis testing. [30] Ideas for improving the teaching of hypothesis testing include encouraging students to search for statistical errors in published papers, teaching the history of statistics and emphasizing the controversy in a generally dry subject. [31] Raymond S.
Nickerson commented: The debate about NHST has its roots in unresolved disagreements among major contributors to the development of theories of inferential statistics on which modern approaches are based. Gigerenzer et al. (1989) have reviewed in considerable detail the controversy between R. A. Fisher on the one hand and Jerzy Neyman and Egon Pearson on the other as well as the disagreements between both of these views and those of the followers of Thomas Bayes. They noted the remarkable fact that little hint of the historical and ongoing controversy is to be found in most textbooks that are used to teach NHST to its potential users. The resulting lack of an accurate historical perspective and understanding of the complexity and sometimes controversial philosophical foundations of various approaches to statistical inference may go a long way toward explaining the apparent ease with which statistical tests are misused and misinterpreted. [18] The typical steps involved in performing a frequentist hypothesis test in practice are, in outline: state the null and alternative hypotheses and the statistical assumptions; choose an appropriate test statistic and derive its distribution under the null hypothesis; select a significance level α; compute the test statistic from the data; and either compare it to the critical value or compute the p-value and compare that to α, rejecting the null hypothesis when the result falls in the critical region (equivalently, when the p-value is at most α). The difference between the two processes (the critical-value formulation and the p-value formulation) can be seen in the radioactive suitcase example (below): the former report is adequate, while the latter gives a more detailed explanation of the data and the reason why the suitcase is being checked. Not rejecting the null hypothesis does not mean the null hypothesis is "accepted" per se (though Neyman and Pearson used that word in their original writings; see the Interpretation section). The processes described here are perfectly adequate for computation. They seriously neglect design-of-experiments considerations. [33] [34] It is particularly critical that appropriate sample sizes be estimated before conducting the experiment. The phrase "test of significance" was coined by statistician Ronald Fisher. [35] When the null hypothesis is true and statistical assumptions are met, the probability that the p-value will be less than or equal to the significance level α is at most α. This ensures that the hypothesis test maintains its specified false positive rate (provided that statistical assumptions are met). [36] The p-value is the probability that a test statistic at least as extreme as the one obtained would occur under the null hypothesis. At a significance level of 0.05, a fair coin would be expected to (incorrectly) reject the null hypothesis (that it is fair) in 1 out of 20 tests on average. The p-value does not provide the probability that either the null hypothesis or its opposite is correct (a common source of confusion). [37] If the p-value is less than the chosen significance threshold (equivalently, if the observed test statistic is in the critical region), then we say the null hypothesis is rejected at the chosen level of significance. If the p-value is not less than the chosen significance threshold (equivalently, if the observed test statistic is outside the critical region), then the null hypothesis is not rejected at the chosen level of significance. In the "lady tasting tea" example (below), Fisher required the lady to properly categorize all of the cups of tea to justify the conclusion that the result was unlikely to result from chance. His test revealed that if the lady was effectively guessing at random (the null hypothesis), there was a 1.4% chance that the observed results (perfectly ordered tea) would occur. Statistics are helpful in analyzing most collections of data.
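The 1.4% figure quoted above can be reproduced with a short combinatorial calculation; a minimal sketch using only the Python standard library (the cup counts are the ones given in the example):

from math import comb

# 8 cups, 4 with milk poured first; under the null hypothesis (pure guessing)
# every choice of 4 cups as "milk first" is equally likely.
ways_all_correct = comb(4, 4) * comb(4, 0)   # exactly one fully correct selection
total_ways = comb(8, 4)                      # 70 equally likely selections
print(ways_all_correct / total_ways)         # 1/70 ≈ 0.014, i.e. about 1.4%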
This is equally true of hypothesis testing which can justify conclusions even when no scientific theory exists. In the Lady tasting tea example, it was "obvious" that no difference existed between (milk poured into tea) and (tea poured into milk). The data contradicted the "obvious". Real world applications of hypothesis testing include: [ 38 ] Statistical hypothesis testing plays an important role in the whole of statistics and in statistical inference . For example, Lehmann (1992) in a review of the fundamental paper by Neyman and Pearson (1933) says: "Nevertheless, despite their shortcomings, the new paradigm formulated in the 1933 paper, and the many developments carried out within its framework continue to play a central role in both the theory and practice of statistics and can be expected to do so in the foreseeable future". Significance testing has been the favored statistical tool in some experimental social sciences (over 90% of articles in the Journal of Applied Psychology during the early 1990s). [ 39 ] Other fields have favored the estimation of parameters (e.g. effect size ). Significance testing is used as a substitute for the traditional comparison of predicted value and experimental result at the core of the scientific method . When theory is only capable of predicting the sign of a relationship, a directional (one-sided) hypothesis test can be configured so that only a statistically significant result supports theory. This form of theory appraisal is the most heavily criticized application of hypothesis testing. "If the government required statistical procedures to carry warning labels like those on drugs, most inference methods would have long labels indeed." [ 40 ] This caution applies to hypothesis tests and alternatives to them. The successful hypothesis test is associated with a probability and a type-I error rate. The conclusion might be wrong. The conclusion of the test is only as solid as the sample upon which it is based. The design of the experiment is critical. A number of unexpected effects have been observed including: A statistical analysis of misleading data produces misleading conclusions. The issue of data quality can be more subtle. In forecasting for example, there is no agreement on a measure of forecast accuracy. In the absence of a consensus measurement, no decision based on measurements will be without controversy. Publication bias: Statistically nonsignificant results may be less likely to be published, which can bias the literature. Multiple testing: When multiple true null hypothesis tests are conducted at once without adjustment, the overall probability of Type I error is higher than the nominal alpha level. [ 41 ] Those making critical decisions based on the results of a hypothesis test are prudent to look at the details rather than the conclusion alone. In the physical sciences most results are fully accepted only when independently confirmed. The general advice concerning statistics is, "Figures never lie, but liars figure" (anonymous). The following definitions are mainly based on the exposition in the book by Lehmann and Romano: [ 36 ] A statistical hypothesis test compares a test statistic ( z or t for examples) to a threshold. The test statistic (the formula found in the table below) is based on optimality. For a fixed level of Type I error rate, use of these statistics minimizes Type II error rates (equivalent to maximizing power). 
Several technical terms describe tests in terms of such optimality. Bootstrap-based resampling methods can be used for null hypothesis testing. A bootstrap creates numerous simulated samples by randomly resampling (with replacement) the original, combined sample data, assuming the null hypothesis is correct. The bootstrap is very versatile, as it is distribution-free and does not rely on restrictive parametric assumptions, but rather on empirical approximate methods with asymptotic guarantees. Traditional parametric hypothesis tests are more computationally efficient but make stronger structural assumptions. In situations where computing the probability of the test statistic under the null hypothesis is hard or impossible (due, perhaps, to inconvenience or lack of knowledge of the underlying distribution), the bootstrap offers a viable method for statistical inference. [43] [44] [45] [46] The earliest use of statistical hypothesis testing is generally credited to the question of whether male and female births are equally likely (null hypothesis), which was addressed in the 1700s by John Arbuthnot (1710), [47] and later by Pierre-Simon Laplace (1770s). [48] Arbuthnot examined birth records in London for each of the 82 years from 1629 to 1710, and applied the sign test, a simple non-parametric test. [49] [50] [51] In every year, the number of males born in London exceeded the number of females. Considering more male or more female births as equally likely, the probability of the observed outcome is 0.5^82, or about 1 in 4,836,000,000,000,000,000,000,000; in modern terms, this is the p-value. Arbuthnot concluded that this is too small to be due to chance and must instead be due to divine providence: "From whence it follows, that it is Art, not Chance, that governs." In modern terms, he rejected the null hypothesis of equally likely male and female births at the p = 1/2^82 significance level. Laplace considered the statistics of almost half a million births. The statistics showed an excess of boys compared to girls. [5] He concluded by calculation of a p-value that the excess was a real, but unexplained, effect. [52] In a famous example of hypothesis testing, known as the Lady tasting tea, [53] Dr. Muriel Bristol, a colleague of Fisher, claimed to be able to tell whether the tea or the milk was added first to a cup. Fisher proposed to give her eight cups, four of each variety, in random order. One could then ask what the probability was for her getting the number she got correct, but just by chance. The null hypothesis was that the Lady had no such ability. The test statistic was a simple count of the number of successes in selecting the 4 cups. The critical region was the single case of 4 successes out of 4 possible, based on a conventional probability criterion (< 5%). A pattern of 4 successes corresponds to 1 out of 70 possible combinations (p ≈ 1.4%). Fisher asserted that no alternative hypothesis was (ever) required. The lady correctly identified every cup, [54] which would be considered a statistically significant result. A statistical test procedure is comparable to a criminal trial; a defendant is considered not guilty as long as his or her guilt is not proven. The prosecutor tries to prove the guilt of the defendant. Only when there is enough evidence for the prosecution is the defendant convicted.
At the start of the procedure, there are two hypotheses, H0: "the defendant is not guilty", and H1: "the defendant is guilty". The first one, H0, is called the null hypothesis. The second one, H1, is called the alternative hypothesis. It is the alternative hypothesis that one hopes to support. The hypothesis of innocence is rejected only when an error is very unlikely, because one does not want to convict an innocent defendant. Such an error is called an error of the first kind (i.e., the conviction of an innocent person), and the occurrence of this error is controlled to be rare. As a consequence of this asymmetric behaviour, an error of the second kind (acquitting a person who committed the crime) is more common. A criminal trial can be regarded as either or both of two decision processes: guilty vs not guilty, or evidence vs a threshold ("beyond a reasonable doubt"). In one view, the defendant is judged; in the other view the performance of the prosecution (which bears the burden of proof) is judged. A hypothesis test can be regarded as either a judgment of a hypothesis or as a judgment of evidence. A person (the subject) is tested for clairvoyance. They are shown the back face of a randomly chosen playing card 25 times and asked which of the four suits it belongs to. The number of hits, or correct answers, is called X. As we try to find evidence of their clairvoyance, for the time being the null hypothesis is that the person is not clairvoyant. [55] The alternative is: the person is (more or less) clairvoyant. If the null hypothesis is valid, the only thing the test person can do is guess. For every card, the probability (relative frequency) of any single suit appearing is 1/4. If the alternative is valid, the test subject will predict the suit correctly with probability greater than 1/4. We will call the probability of guessing correctly p. The hypotheses, then, are H0: p = 1/4 and H1: p > 1/4. When the test subject correctly predicts all 25 cards, we will consider them clairvoyant, and reject the null hypothesis. Thus also with 24 or 23 hits. With only 5 or 6 hits, on the other hand, there is no cause to consider them so. But what about 12 hits, or 17 hits? What is the critical number, c, of hits, at which point we consider the subject to be clairvoyant? How do we determine the critical value c? With the choice c = 25 (i.e. we only accept clairvoyance when all cards are predicted correctly) we are more critical than with c = 10. In the first case almost no test subjects will be recognized as clairvoyant; in the second case, a certain number will pass the test. In practice, one decides how critical one will be. That is, one decides how often one accepts an error of the first kind, a false positive, or Type I error. With c = 25 the probability of such an error is (1/4)^25, and hence very small. The probability of a false positive is the probability of randomly guessing correctly all 25 times. Being less critical, with c = 10, gives a false positive probability of P(X ≥ 10 | p = 1/4). Thus, c = 10 yields a much greater probability of a false positive. Before the test is actually performed, the maximum acceptable probability of a Type I error (α) is determined. Typically, values in the range of 1% to 5% are selected. (If the maximum acceptable error rate is zero, an infinite number of correct guesses is required.) Depending on this Type I error rate, the critical value c is calculated.
For example, if we select an error rate of 1%, c is calculated from the requirement that P(X ≥ c | H0) ≤ 0.01. From all the numbers c with this property, we choose the smallest, in order to minimize the probability of a Type II error, a false negative. For the above example, we select c = 13. Statistical hypothesis testing is a key technique of both frequentist inference and Bayesian inference, although the two types of inference have notable differences. Statistical hypothesis tests define a procedure that controls (fixes) the probability of incorrectly deciding that a default position (null hypothesis) is incorrect. The procedure is based on how likely it would be for a set of observations to occur if the null hypothesis were true. This probability of making an incorrect decision is not the probability that the null hypothesis is true, nor the probability that any specific alternative hypothesis is true. This contrasts with other possible techniques of decision theory in which the null and alternative hypothesis are treated on a more equal basis. One naïve Bayesian approach to hypothesis testing is to base decisions on the posterior probability, [56] [57] but this fails when comparing point and continuous hypotheses. Other approaches to decision making, such as Bayesian decision theory, attempt to balance the consequences of incorrect decisions across all possibilities, rather than concentrating on a single null hypothesis. A number of other approaches to reaching a decision based on data are available via decision theory and optimal decisions, some of which have desirable properties. Hypothesis testing, though, is a dominant approach to data analysis in many fields of science. Extensions to the theory of hypothesis testing include the study of the power of tests, i.e. the probability of correctly rejecting the null hypothesis given that it is false. Such considerations can be used for the purpose of sample size determination prior to the collection of data. An example of Neyman–Pearson hypothesis testing (or null hypothesis statistical significance testing) can be made by a change to the radioactive suitcase example. If the "suitcase" is actually a shielded container for the transportation of radioactive material, then a test might be used to select among three hypotheses: no radioactive source present, one present, two (all) present. The test could be required for safety, with actions required in each case. The Neyman–Pearson lemma of hypothesis testing says that a good criterion for the selection of hypotheses is the ratio of their probabilities (a likelihood ratio). A simple method of solution is to select the hypothesis with the highest probability for the Geiger counts observed. The typical result matches intuition: few counts imply no source, many counts imply two sources and intermediate counts imply one source. Notice also that there are usually problems with proving a negative. Null hypotheses should be at least falsifiable. Neyman–Pearson theory can accommodate both prior probabilities and the costs of actions resulting from decisions. [58] The former allows each test to consider the results of earlier tests (unlike Fisher's significance tests). The latter allows the consideration of economic issues (for example) as well as probabilities. A likelihood ratio remains a good criterion for selecting among hypotheses. The two forms of hypothesis testing are based on different problem formulations.
The original test is analogous to a true/false question; the Neyman–Pearson test is more like multiple choice. In the view of Tukey [59] the former produces a conclusion on the basis of only strong evidence while the latter produces a decision on the basis of available evidence. While the two tests seem quite different both mathematically and philosophically, later developments led to the opposite claim. Consider many tiny radioactive sources. The hypotheses become 0, 1, 2, 3, ... grains of radioactive sand. There is little distinction between none or some radiation (Fisher) and 0 grains of radioactive sand versus all of the alternatives (Neyman–Pearson). The major Neyman–Pearson paper of 1933 [11] also considered composite hypotheses (ones whose distribution includes an unknown parameter). An example proved the optimality of the (Student's) t-test, "there can be no better test for the hypothesis under consideration" (p. 321). Neyman–Pearson theory was proving the optimality of Fisherian methods from its inception. Fisher's significance testing has proven a popular, flexible statistical tool in application, with little mathematical growth potential. Neyman–Pearson hypothesis testing is claimed as a pillar of mathematical statistics, [60] creating a new paradigm for the field. It also stimulated new applications in statistical process control, detection theory, decision theory and game theory. Both formulations have been successful, but the successes have been of a different character. The dispute over formulations is unresolved. Science primarily uses Fisher's (slightly modified) formulation as taught in introductory statistics. Statisticians study Neyman–Pearson theory in graduate school. Mathematicians are proud of uniting the formulations. Philosophers consider them separately. Learned opinions deem the formulations variously competitive (Fisher vs Neyman), incompatible [9] or complementary. [13] The dispute has become more complex since Bayesian inference has achieved respectability. The terminology is inconsistent. Hypothesis testing can mean any mixture of two formulations that both changed with time. Any discussion of significance testing vs hypothesis testing is doubly vulnerable to confusion. Fisher thought that hypothesis testing was a useful strategy for performing industrial quality control; however, he strongly disagreed that hypothesis testing could be useful for scientists. [10] Hypothesis testing provides a means of finding test statistics used in significance testing. [13] The concept of power is useful in explaining the consequences of adjusting the significance level and is heavily used in sample size determination (a short computational sketch follows below). The two methods remain philosophically distinct. [15] They usually (but not always) produce the same mathematical answer. The preferred answer is context dependent. [13] While the existing merger of Fisher and Neyman–Pearson theories has been heavily criticized, modifying the merger to achieve Bayesian goals has been considered. [61] Criticism of statistical hypothesis testing fills volumes. [62] [63] [64] [65] [66] [67] Much of the criticism can be summarized by a few recurring issues. Critics and supporters are largely in factual agreement regarding the characteristics of null hypothesis significance testing (NHST): while it can provide critical information, it is inadequate as the sole tool for statistical analysis. Successfully rejecting the null hypothesis may offer no support for the research hypothesis.
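A minimal sketch of the power and sample-size reasoning mentioned above, continuing the card-guessing example (null guessing probability 1/4, significance level 1%); the alternative value p = 1/2 and the 80% target power are illustrative assumptions, and scipy is assumed to be available:

from scipy.stats import binom

p0, p1, alpha, target_power = 0.25, 0.5, 0.01, 0.8

def critical_value(n):
    # Smallest c with P(X >= c) <= alpha when X ~ Binomial(n, p0).
    for c in range(n + 2):
        if binom.sf(c - 1, n, p0) <= alpha:
            return c

print(critical_value(25))   # reproduces the critical value c = 13 quoted earlier

# Sample size determination: the smallest n whose 1%-level test has at least
# 80% power against the assumed alternative p1 = 1/2.
for n in range(10, 200):
    c = critical_value(n)
    power = binom.sf(c - 1, n, p1)   # P(X >= c) under the alternative
    if power >= target_power:
        print(n, c, round(float(power), 3))
        break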
The continuing controversy concerns the selection of the best statistical practices for the near-term future given the existing practices. However, adequate research design can minimize this issue. Critics would prefer to ban NHST completely, forcing a complete departure from those practices, [78] while supporters suggest a less absolute change. [ citation needed ] Controversy over significance testing, and its effects on publication bias in particular, has produced several results. The American Psychological Association has strengthened its statistical reporting requirements after review, [79] medical journal publishers have recognized the obligation to publish some results that are not statistically significant to combat publication bias, [80] and a journal (Journal of Articles in Support of the Null Hypothesis) has been created to publish such results exclusively. [81] Textbooks have added some cautions, [82] and increased coverage of the tools necessary to estimate the size of the sample required to produce significant results. Few major organizations have abandoned use of significance tests, although some have discussed doing so. [79] For instance, in 2023, the editors of the Journal of Physiology "strongly recommend the use of estimation methods for those publishing in The Journal" (meaning the magnitude of the effect size, to allow readers to judge whether a finding has practical, physiological, or clinical relevance, and confidence intervals to convey the precision of that estimate), saying "Ultimately, it is the physiological importance of the data that those publishing in The Journal of Physiology should be most concerned with, rather than the statistical significance." [83] A unifying position of critics is that statistics should not lead to an accept-reject conclusion or decision, but to an estimated value with an interval estimate; this data-analysis philosophy is broadly referred to as estimation statistics. Estimation statistics can be accomplished with either frequentist [84] or Bayesian methods. [85] [86] Critics of significance testing have advocated basing inference less on p-values and more on confidence intervals for effect sizes for importance, prediction intervals for confidence, replications and extensions for replicability, and meta-analyses for generality. [87] But none of these suggested alternatives inherently produces a decision. Lehmann said that hypothesis testing theory can be presented in terms of conclusions/decisions, probabilities, or confidence intervals: "The distinction between the ... approaches is largely one of reporting and interpretation." [26] Bayesian inference is one proposed alternative to significance testing. (Nickerson cited 10 sources suggesting it, including Rozeboom (1960)). [18] For example, Bayesian parameter estimation can provide rich information about the data from which researchers can draw inferences, while using uncertain priors that exert only minimal influence on the results when enough data is available. Psychologist John K. Kruschke has suggested Bayesian estimation as an alternative for the t-test [85] and has also contrasted Bayesian estimation for assessing null values with Bayesian model comparison for hypothesis testing. [86] Two competing models/hypotheses can be compared using Bayes factors. [88] Bayesian methods could be criticized for requiring information that is seldom available in the cases where significance testing is most heavily used.
Neither the prior probabilities nor the probability distribution of the test statistic under the alternative hypothesis is often available in the social sciences. [18] Advocates of a Bayesian approach sometimes claim that the goal of a researcher is most often to objectively assess the probability that a hypothesis is true based on the data they have collected. [89] [90] Neither Fisher's significance testing nor Neyman–Pearson hypothesis testing can provide this information, and neither claims to. The probability that a hypothesis is true can only be derived from the use of Bayes' theorem, which was unsatisfactory to both the Fisher and Neyman–Pearson camps due to the explicit use of subjectivity in the form of the prior probability. [11] [91] Fisher's strategy is to sidestep this with the p-value (an objective index based on the data alone) followed by inductive inference, while Neyman–Pearson devised their approach of inductive behaviour.
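As a concrete sketch of the Bayesian alternatives discussed above, the code below computes a posterior for a binomial proportion under a flat prior and a Bayes factor comparing a point null (θ = 0.5) with a uniform-prior alternative; the counts are hypothetical and scipy is assumed to be available.

from scipy.integrate import quad
from scipy.stats import beta, binom

k, n = 61, 100   # hypothetical number of successes out of n trials

# Posterior for the success probability under a flat Beta(1, 1) prior.
posterior = beta(k + 1, n - k + 1)
print(posterior.mean(), posterior.interval(0.95))   # mean and 95% credible interval

# Bayes factor BF01: marginal likelihood under H0 (theta = 0.5) divided by the
# marginal likelihood under H1 (theta uniform on [0, 1]).
m0 = binom.pmf(k, n, 0.5)
m1, _ = quad(lambda t: binom.pmf(k, n, t), 0.0, 1.0)
print(m0 / m1)   # values below 1 favour the alternative, above 1 favour the null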
https://en.wikipedia.org/wiki/Statistical_hypothesis_test
Statistical inference is the process of using data analysis to infer properties of an underlying probability distribution. [1] Inferential statistical analysis infers properties of a population, for example by testing hypotheses and deriving estimates. It is assumed that the observed data set is sampled from a larger population. Inferential statistics can be contrasted with descriptive statistics. Descriptive statistics is solely concerned with properties of the observed data, and it does not rest on the assumption that the data come from a larger population. In machine learning, the term inference is sometimes used instead to mean "make a prediction, by evaluating an already trained model"; [2] in this context inferring properties of the model is referred to as training or learning (rather than inference), and using a model for prediction is referred to as inference (instead of prediction); see also predictive inference. Statistical inference makes propositions about a population, using data drawn from the population with some form of sampling. Given a hypothesis about a population, for which we wish to draw inferences, statistical inference consists of (first) selecting a statistical model of the process that generates the data and (second) deducing propositions from the model. [3] Konishi and Kitagawa state "The majority of the problems in statistical inference can be considered to be problems related to statistical modeling". [4] Relatedly, Sir David Cox has said, "How [the] translation from subject-matter problem to statistical model is done is often the most critical part of an analysis". [5] The conclusion of a statistical inference is a statistical proposition. [6] Statistical propositions take a number of common forms. Any statistical inference requires some assumptions. A statistical model is a set of assumptions concerning the generation of the observed data and similar data. Descriptions of statistical models usually emphasize the role of population quantities of interest, about which we wish to draw inference. [7] Descriptive statistics are typically used as a preliminary step before more formal inferences are drawn. [8] Statisticians distinguish between three levels of modeling assumptions: fully parametric, semi-parametric, and non-parametric. Whatever level of assumption is made, correctly calibrated inference, in general, requires these assumptions to be correct; i.e. that the data-generating mechanisms really have been correctly specified. Incorrect assumptions of 'simple' random sampling can invalidate statistical inference. [10] More complex semi- and fully parametric assumptions are also cause for concern. For example, incorrectly assuming the Cox model can in some cases lead to faulty conclusions. [11] Incorrect assumptions of normality in the population also invalidate some forms of regression-based inference. [12] The use of any parametric model is viewed skeptically by most experts in sampling human populations: "most sampling statisticians, when they deal with confidence intervals at all, limit themselves to statements about [estimators] based on very large samples, where the central limit theorem ensures that these [estimators] will have distributions that are nearly normal." [13] In particular, a normal distribution "would be a totally unrealistic and catastrophically unwise assumption to make if we were dealing with any kind of economic population."
[ 13 ] Here, the central limit theorem states that the distribution of the sample mean "for very large samples" is approximately normally distributed, if the distribution is not heavy-tailed. Given the difficulty in specifying exact distributions of sample statistics, many methods have been developed for approximating these. With finite samples, approximation results measure how close a limiting distribution approaches the statistic's sample distribution : For example, with 10,000 independent samples the normal distribution approximates (to two digits of accuracy) the distribution of the sample mean for many population distributions, by the Berry–Esseen theorem . [ 14 ] Yet for many practical purposes, the normal approximation provides a good approximation to the sample-mean's distribution when there are 10 (or more) independent samples, according to simulation studies and statisticians' experience. [ 14 ] Following Kolmogorov's work in the 1950s, advanced statistics uses approximation theory and functional analysis to quantify the error of approximation. In this approach, the metric geometry of probability distributions is studied; this approach quantifies approximation error with, for example, the Kullback–Leibler divergence , Bregman divergence , and the Hellinger distance . [ 15 ] [ 16 ] [ 17 ] With indefinitely large samples, limiting results like the central limit theorem describe the sample statistic's limiting distribution if one exists. Limiting results are not statements about finite samples, and indeed are irrelevant to finite samples. [ 18 ] [ 19 ] [ 20 ] However, the asymptotic theory of limiting distributions is often invoked for work with finite samples. For example, limiting results are often invoked to justify the generalized method of moments and the use of generalized estimating equations , which are popular in econometrics and biostatistics . The magnitude of the difference between the limiting distribution and the true distribution (formally, the 'error' of the approximation) can be assessed using simulation. [ 21 ] The heuristic application of limiting results to finite samples is common practice in many applications, especially with low-dimensional models with log-concave likelihoods (such as with one-parameter exponential families ). For a given dataset that was produced by a randomization design, the randomization distribution of a statistic (under the null-hypothesis) is defined by evaluating the test statistic for all of the plans that could have been generated by the randomization design. In frequentist inference, the randomization allows inferences to be based on the randomization distribution rather than a subjective model, and this is important especially in survey sampling and design of experiments. [ 22 ] [ 23 ] Statistical inference from randomized studies is also more straightforward than many other situations. [ 24 ] [ 25 ] [ 26 ] In Bayesian inference , randomization is also of importance: in survey sampling , use of sampling without replacement ensures the exchangeability of the sample with the population; in randomized experiments, randomization warrants a missing at random assumption for covariate information. [ 27 ] Objective randomization allows properly inductive procedures. [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] Many statisticians prefer randomization-based analysis of data that was generated by well-defined randomization procedures. 
[33] (However, it is true that in fields of science with developed theoretical knowledge and experimental control, randomized experiments may increase the costs of experimentation without improving the quality of inferences. [34] [35]) Similarly, results from randomized experiments are recommended by leading statistical authorities as allowing inferences with greater reliability than do observational studies of the same phenomena. [36] However, a good observational study may be better than a bad randomized experiment. The statistical analysis of a randomized experiment may be based on the randomization scheme stated in the experimental protocol and does not need a subjective model. [37] [38] However, at any time, some hypotheses cannot be tested using objective statistical models, which accurately describe randomized experiments or random samples. In some cases, such randomized studies are uneconomical or unethical. It is standard practice to refer to a statistical model, e.g., linear or logistic models, when analyzing data from randomized experiments. [39] However, the randomization scheme guides the choice of a statistical model. It is not possible to choose an appropriate model without knowing the randomization scheme. [23] Seriously misleading results can be obtained analyzing data from randomized experiments while ignoring the experimental protocol; common mistakes include forgetting the blocking used in an experiment and confusing repeated measurements on the same experimental unit with independent replicates of the treatment applied to different experimental units. [40] Model-free techniques provide a complement to model-based methods, which employ reductionist strategies of reality-simplification. The former combine, evolve, ensemble and train algorithms that adapt dynamically to the contextual affinities of a process and learn the intrinsic characteristics of the observations. [41] [42] For example, model-free simple linear regression can be set up in either of two ways. In either case, the model-free randomization inference for features of the common conditional distribution D_x(.) relies on some regularity conditions, e.g. functional smoothness. For instance, the population feature of interest, the conditional mean μ(x) = E(Y | X = x), can be consistently estimated via local averaging or local polynomial fitting, under the assumption that μ(x) is smooth. Also, relying on asymptotic normality or resampling, we can construct confidence intervals for the population feature, in this case the conditional mean μ(x). [43] Different schools of statistical inference have become established. These schools—or "paradigms"—are not mutually exclusive, and methods that work well under one paradigm often have attractive interpretations under other paradigms. Bandyopadhyay and Forster describe four paradigms: the classical (or frequentist) paradigm, the Bayesian paradigm, the likelihoodist paradigm, and the Akaikean Information Criterion-based paradigm. [44] The classical (or frequentist) paradigm calibrates the plausibility of propositions by considering (notional) repeated sampling of a population distribution to produce datasets similar to the one at hand.
By considering the dataset's characteristics under repeated sampling, the frequentist properties of a statistical proposition can be quantified—although in practice this quantification may be challenging. One interpretation of frequentist inference (or classical inference) is that it is applicable only in terms of frequency probability; that is, in terms of repeated sampling from a population. However, the approach of Neyman [45] develops these procedures in terms of pre-experiment probabilities. That is, before undertaking an experiment, one decides on a rule for coming to a conclusion such that the probability of being correct is controlled in a suitable way: such a probability need not have a frequentist or repeated sampling interpretation. In contrast, Bayesian inference works in terms of conditional probabilities (i.e. probabilities conditional on the observed data), compared to the marginal (but conditioned on unknown parameters) probabilities used in the frequentist approach. The frequentist procedures of significance testing and confidence intervals can be constructed without regard to utility functions. However, some elements of frequentist statistics, such as statistical decision theory, do incorporate utility functions. [ citation needed ] In particular, frequentist developments of optimal inference (such as minimum-variance unbiased estimators, or uniformly most powerful testing) make use of loss functions, which play the role of (negative) utility functions. Loss functions need not be explicitly stated for statistical theorists to prove that a statistical procedure has an optimality property. [46] However, loss functions are often useful for stating optimality properties: for example, median-unbiased estimators are optimal under absolute value loss functions, in that they minimize expected loss, and least squares estimators are optimal under squared error loss functions, in that they minimize expected loss. While statisticians using frequentist inference must choose for themselves the parameters of interest and the estimators/test statistics to be used, the absence of explicit utilities and prior distributions has helped frequentist procedures to become widely viewed as 'objective'. [47] The Bayesian calculus describes degrees of belief using the 'language' of probability; beliefs are positive, integrate to one, and obey probability axioms. Bayesian inference uses the available posterior beliefs as the basis for making statistical propositions. [48] There are several different justifications for using the Bayesian approach. Many informal Bayesian inferences are based on "intuitively reasonable" summaries of the posterior. For example, the posterior mean, median and mode, highest posterior density intervals, and Bayes factors can all be motivated in this way. While a user's utility function need not be stated for this sort of inference, these summaries do all depend (to some extent) on stated prior beliefs, and are generally viewed as subjective conclusions. (Methods of prior construction which do not require external input have been proposed but not yet fully developed.) Formally, Bayesian inference is calibrated with reference to an explicitly stated utility, or loss function; the 'Bayes rule' is the one which maximizes expected utility, averaged over the posterior uncertainty. Formal Bayesian inference therefore automatically provides optimal decisions in a decision-theoretic sense.
Given assumptions, data and utility, Bayesian inference can be made for essentially any problem, although not every statistical inference need have a Bayesian interpretation. Analyses which are not formally Bayesian can be (logically) incoherent ; a feature of Bayesian procedures which use proper priors (i.e. those integrable to one) is that they are guaranteed to be coherent . Some advocates of Bayesian inference assert that inference must take place in this decision-theoretic framework, and that Bayesian inference should not conclude with the evaluation and summarization of posterior beliefs. Likelihood-based inference is a paradigm used to estimate the parameters of a statistical model based on observed data. Likelihoodism approaches statistics by using the likelihood function , denoted as L(x|θ), which quantifies the probability of observing the given data x , assuming a specific set of parameter values θ . In likelihood-based inference, the goal is to find the set of parameter values that maximizes the likelihood function, or equivalently, maximizes the probability of observing the given data. The process of likelihood-based inference usually involves the following steps: The Akaike information criterion (AIC) is an estimator of the relative quality of statistical models for a given set of data. Given a collection of models for the data, AIC estimates the quality of each model, relative to each of the other models. Thus, AIC provides a means for model selection . AIC is founded on information theory : it offers an estimate of the relative information lost when a given model is used to represent the process that generated the data. (In doing so, it deals with the trade-off between the goodness of fit of the model and the simplicity of the model.) The minimum description length (MDL) principle has been developed from ideas in information theory [ 49 ] and the theory of Kolmogorov complexity . [ 50 ] The MDL principle selects statistical models that maximally compress the data; inference proceeds without assuming counterfactual or non-falsifiable "data-generating mechanisms" or probability models for the data, as might be done in frequentist or Bayesian approaches. However, if a "data generating mechanism" does exist in reality, then according to Shannon 's source coding theorem it provides the MDL description of the data, on average and asymptotically. [ 51 ] In minimizing description length (or descriptive complexity), MDL estimation is similar to maximum likelihood estimation and maximum a posteriori estimation (using maximum-entropy Bayesian priors ). However, MDL avoids assuming that the underlying probability model is known; the MDL principle can also be applied without assumptions that e.g. the data arose from independent sampling. [ 51 ] [ 52 ] The MDL principle has been applied in communication-coding theory in information theory , in linear regression , [ 52 ] and in data mining . [ 50 ] The evaluation of MDL-based inferential procedures often uses techniques or criteria from computational complexity theory . [ 53 ] Fiducial inference was an approach to statistical inference based on fiducial probability , also known as a "fiducial distribution". In subsequent work, this approach has been called ill-defined, extremely limited in applicability, and even fallacious.
[ 54 ] [ 55 ] However this argument is the same as that which shows [ 56 ] that a so-called confidence distribution is not a valid probability distribution and, since this has not invalidated the application of confidence intervals , it does not necessarily invalidate conclusions drawn from fiducial arguments. An attempt was made to reinterpret Fisher's early work on the fiducial argument as a special case of an inference theory using upper and lower probabilities . [ 57 ] Developing ideas of Fisher and of Pitman from 1938 to 1939, [ 58 ] George A. Barnard developed "structural inference" or "pivotal inference", [ 59 ] an approach using invariant probabilities on group families . Barnard reformulated the arguments behind fiducial inference on a restricted class of models on which "fiducial" procedures would be well-defined and useful. Donald A. S. Fraser developed a general theory for structural inference [ 60 ] based on group theory and applied this to linear models. [ 61 ] The theory formulated by Fraser has close links to decision theory and Bayesian statistics and can provide optimal frequentist decision rules if they exist. [ 62 ] The topics below are usually included in the area of statistical inference . Predictive inference is an approach to statistical inference that emphasizes the prediction of future observations based on past observations. Initially, predictive inference was based on observable parameters and it was the main purpose of studying probability , [ citation needed ] but it fell out of favor in the 20th century due to a new parametric approach pioneered by Bruno de Finetti . The approach modeled phenomena as a physical system observed with error (e.g., celestial mechanics ). De Finetti's idea of exchangeability —that future observations should behave like past observations—came to the attention of the English-speaking world with the 1974 translation from French of his 1937 paper, [ 63 ] and has since been propounded by such statisticians as Seymour Geisser . [ 64 ]
https://en.wikipedia.org/wiki/Statistical_inference
In mathematics , a statistical manifold is a Riemannian manifold , each of whose points is a probability distribution . Statistical manifolds provide a setting for the field of information geometry . The Fisher information metric provides a metric on these manifolds. Following this definition, the log-likelihood function is a differentiable map and the score is an inclusion . [ 1 ] The family of all normal distributions can be thought of as a 2-dimensional parametric space parametrized by the expected value μ and the variance σ² > 0. Equipped with the Riemannian metric given by the Fisher information matrix, it is a statistical manifold with a geometry modeled on hyperbolic space . One way of picturing the manifold is to infer the parametric equations via the Fisher information rather than starting from the likelihood function. A simple example of a statistical manifold, taken from physics, would be the canonical ensemble : it is a one-dimensional manifold, with the temperature T serving as the coordinate on the manifold. For any fixed temperature T , one has a probability space: so, for a gas of atoms, it would be the probability distribution of the velocities of the atoms. As one varies the temperature T , the probability distribution varies. Another simple example, taken from medicine, would be the probability distribution of patient outcomes, in response to the quantity of medicine administered. That is, for a fixed dose, some patients improve, and some do not: this is the base probability space. If the dosage is varied, then the probability of outcomes changes. Thus, the dosage is the coordinate on the manifold. To be a smooth manifold , one would have to measure outcomes in response to arbitrarily small changes in dosage; this is not a practically realizable example, unless one has a pre-existing mathematical model of dose-response where the dose can be arbitrarily varied. Let X be an orientable manifold , and let (X, Σ, μ) be a measure space. Equivalently, let (Ω, F, P) be a probability space with Ω = X , sigma algebra F = Σ , and probability P = μ . The statistical manifold S ( X ) of X is defined as the space of all measures μ on X (with the sigma-algebra Σ held fixed). Note that this space is infinite-dimensional; it is commonly taken to be a Fréchet space . The points of S ( X ) are measures. Rather than dealing with an infinite-dimensional space S ( X ), it is common to work with a finite-dimensional submanifold , defined by considering a set of probability distributions parameterized by some smooth, continuously varying parameter θ . That is, one considers only those measures that are selected by the parameter. If the parameter θ is n -dimensional, then, in general, the submanifold will be as well. All finite-dimensional statistical manifolds can be understood in this way. [ clarification needed ]
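As a small numerical sketch of the Fisher information metric on the normal family described above, the code below estimates the metric at one point of the manifold from the score function and compares it with the closed form in (μ, σ) coordinates; the particular parameter values and the Monte Carlo sample size are arbitrary choices for illustration.

    import numpy as np

    def score(x, mu, sigma):
        """Gradient of the log-density of N(mu, sigma^2) with respect to (mu, sigma)."""
        d_mu = (x - mu) / sigma**2
        d_sigma = (x - mu) ** 2 / sigma**3 - 1.0 / sigma
        return np.stack([d_mu, d_sigma])

    mu, sigma = 0.5, 1.5
    rng = np.random.default_rng(3)
    x = rng.normal(mu, sigma, size=200_000)

    s = score(x, mu, sigma)
    fisher_mc = s @ s.T / x.size          # Monte Carlo estimate of E[score score^T]
    fisher_exact = np.array([[1 / sigma**2, 0.0],
                             [0.0, 2 / sigma**2]])
    print(fisher_mc.round(3))
    print(fisher_exact)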
https://en.wikipedia.org/wiki/Statistical_manifold
When a business or regulator uses limited funds to take an action that saves a limited number of lives, instead of an alternative action that would save more lives, this decision is sometimes called statistical murder . This phrase is currently primarily a term of political advocacy, used to draw attention to unwise decision making that either is not the most effective available or is potentially even harmful. This phrase is a diffuse neologism . The phrase originated in the early 1990s with Professor John D. Graham, a tenured professor of policy and decision sciences at Harvard University 's School of Public Health and director of the Harvard Center for Risk Analysis. [1] The phrase appears in the Congressional Record in February 1995, where he is quoted thus: "John Graham, a Harvard professor, who said, 'Sound science means saving the most lives and achieving the most ecological protection with our scarce budgets. Without sound science, we are engaging in a form of "statistical murder," where we squander our resources on phantom risks when our families continue to be endangered by real risks." [ 1 ] In 2001 he was appointed the head of the U.S. Office of Information and Regulatory Affairs in the Office of Management and Budget by George W. Bush , making him the top regulator for the United States. [ 2 ] Because the analysis underlying the term was controversial among those interested in U.S. government policy, the Senate confirmation process for the nomination made the term more widely known. To show that something is statistical murder requires that a comparative risk analysis be done on the available alternatives. This is akin to a cost-benefit analysis but does not entail the translation of lives and health into dollars. However, if other types of benefits [ which? ] are also to be evaluated, the comparative risk analysis approach may not be viable, so a cost-benefit analysis must be done. [ 3 ] Additionally, the concept implies that the inefficiently spent resources could in fact be transferred to a more effective alternative. This requires that regulators and policy makers with budgetary authority at least allow such transfers and preferably use cost-benefit analysis to plan the budgeting. [ 4 ] This was not the practice at the time the phrase was coined, and has not yet become standard practice in the U.S. Some people object to the required analysis because they believe it is always wrong to put a financial value on human life. They would have no objection to a risk assessment because it only measures lives lost. However, with this limitation it also cannot value any effects other than the number of human lives lost - including non-fatal human diseases, effects on non-human species, and effects on human activities and enjoyment. It is quite possible to make errors in the statistics used to do the analysis, and in 2002 Richard Parker, a law professor at the University of Connecticut, argued that all the widely published studies suffered from unacceptable flaws. [ 5 ] An alternative view, taken by some policy analysts, is that it is not sufficient to look solely at outcomes; perceptions of risk also matter. If a risk is perceived to be significant, but is in fact insignificant, it may nonetheless be appropriate to respond in some way to that risk. Proponents of this view suggest using an expected utility calculation instead. [ 6 ]
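As a toy illustration of the comparative risk analysis described above, the short sketch below compares two ways of spending the same budget by cost per life saved; the option names, budgets and lives-saved figures are entirely hypothetical.

    # Hypothetical comparative risk analysis: two ways to spend the same $50M budget.
    options = {
        "option A (e.g. intersection upgrades)": {"cost_millions": 50, "lives_saved": 10},
        "option B (e.g. trace contaminant removal)": {"cost_millions": 50, "lives_saved": 2},
    }
    for name, o in options.items():
        print(name, "costs", o["cost_millions"] / o["lives_saved"], "million dollars per life saved")
    # Choosing option B over option A forgoes 8 expected lives for the same spending;
    # that gap is the kind of inefficiency the phrase "statistical murder" is meant to highlight.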
https://en.wikipedia.org/wiki/Statistical_murder
In physics , statistical mechanics is a mathematical framework that applies statistical methods and probability theory to large assemblies of microscopic entities. Sometimes called statistical physics or statistical thermodynamics , its applications include many problems in a wide variety of fields such as biology , [ 1 ] neuroscience , [ 2 ] computer science , [ 3 ] [ 4 ] information theory [ 5 ] and sociology . [ 6 ] Its main purpose is to clarify the properties of matter in aggregate, in terms of physical laws governing atomic motion. [ 7 ] [ 8 ] Statistical mechanics arose out of the development of classical thermodynamics , a field for which it was successful in explaining macroscopic physical properties—such as temperature , pressure , and heat capacity —in terms of microscopic parameters that fluctuate about average values and are characterized by probability distributions . [ 9 ] : 1–4 While classical thermodynamics is primarily concerned with thermodynamic equilibrium , statistical mechanics has been applied in non-equilibrium statistical mechanics to the issues of microscopically modeling the speed of irreversible processes that are driven by imbalances. [ 9 ] : 3 Examples of such processes include chemical reactions and flows of particles and heat. The fluctuation–dissipation theorem is the basic knowledge obtained from applying non-equilibrium statistical mechanics to study the simplest non-equilibrium situation of a steady state current flow in a system of many particles. [ 9 ] : 572–573 In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica which laid the basis for the kinetic theory of gases . In this work, Bernoulli posited the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. [ 10 ] The founding of the field of statistical mechanics is generally credited to three physicists: In 1859, after reading a paper on the diffusion of molecules by Rudolf Clausius , Scottish physicist James Clerk Maxwell formulated the Maxwell distribution of molecular velocities, which gave the proportion of molecules having a certain velocity in a specific range. [ 11 ] This was the first-ever statistical law in physics. [ 12 ] Maxwell also gave the first mechanical argument that molecular collisions entail an equalization of temperatures and hence a tendency towards equilibrium. [ 13 ] Five years later, in 1864, Ludwig Boltzmann , a young student in Vienna, came across Maxwell's paper and spent much of his life developing the subject further. Statistical mechanics was initiated in the 1870s with the work of Boltzmann, much of which was collectively published in his 1896 Lectures on Gas Theory . [ 14 ] Boltzmann's original papers on the statistical interpretation of thermodynamics, the H-theorem , transport theory , thermal equilibrium , the equation of state of gases, and similar subjects, occupy about 2,000 pages in the proceedings of the Vienna Academy and other societies. Boltzmann introduced the concept of an equilibrium statistical ensemble and also investigated for the first time non-equilibrium statistical mechanics, with his H -theorem . The term "statistical mechanics" was coined by the American mathematical physicist J. Willard Gibbs in 1884. [ 15 ] According to Gibbs, the term "statistical", in the context of mechanics, i.e. 
statistical mechanics, was first used by the Scottish physicist James Clerk Maxwell in 1871: "In dealing with masses of matter, while we do not perceive the individual molecules, we are compelled to adopt what I have described as the statistical method of calculation, and to abandon the strict dynamical method, in which we follow every motion by the calculus." "Probabilistic mechanics" might today seem a more appropriate term, but "statistical mechanics" is firmly entrenched. [ 17 ] Shortly before his death, Gibbs published in 1902 Elementary Principles in Statistical Mechanics , a book which formalized statistical mechanics as a fully general approach to address all mechanical systems—macroscopic or microscopic, gaseous or non-gaseous. [ 18 ] Gibbs' methods were initially derived in the framework of classical mechanics ; however, they were of such generality that they were found to adapt easily to the later quantum mechanics , and still form the foundation of statistical mechanics to this day. [ 19 ] In physics, two types of mechanics are usually examined: classical mechanics and quantum mechanics . For both types of mechanics, the standard mathematical approach is to consider two concepts: Using these two concepts, the state at any other time, past or future, can in principle be calculated. There is however a disconnect between these laws and everyday life experiences, as we do not find it necessary (nor even theoretically possible) to know exactly at a microscopic level the simultaneous positions and velocities of each molecule while carrying out processes at the human scale (for example, when performing a chemical reaction). Statistical mechanics fills this disconnect between the laws of mechanics and the practical experience of incomplete knowledge, by adding some uncertainty about which state the system is in. Whereas ordinary mechanics only considers the behaviour of a single state, statistical mechanics introduces the statistical ensemble , which is a large collection of virtual, independent copies of the system in various states. The statistical ensemble is a probability distribution over all possible states of the system. In classical statistical mechanics, the ensemble is a probability distribution over phase points (as opposed to a single phase point in ordinary mechanics), usually represented as a distribution in a phase space with canonical coordinate axes. In quantum statistical mechanics, the ensemble is a probability distribution over pure states and can be compactly summarized as a density matrix . As is usual for probabilities, the ensemble can be interpreted in different ways: [ 18 ] These two meanings are equivalent for many purposes, and will be used interchangeably in this article. However the probability is interpreted, each state in the ensemble evolves over time according to the equation of motion. Thus, the ensemble itself (the probability distribution over states) also evolves, as the virtual systems in the ensemble continually leave one state and enter another. The ensemble evolution is given by the Liouville equation (classical mechanics) or the von Neumann equation (quantum mechanics). These equations are simply derived by the application of the mechanical equation of motion separately to each virtual system contained in the ensemble, with the probability of the virtual system being conserved over time as it evolves from state to state. One special class of ensemble is those ensembles that do not evolve over time.
These ensembles are known as equilibrium ensembles and their condition is known as statistical equilibrium . Statistical equilibrium occurs if, for each state in the ensemble, the ensemble also contains all of its future and past states with probabilities equal to the probability of being in that state. (By contrast, mechanical equilibrium is a state with a balance of forces that has ceased to evolve.) The study of equilibrium ensembles of isolated systems is the focus of statistical thermodynamics. Non-equilibrium statistical mechanics addresses the more general case of ensembles that change over time, and/or ensembles of non-isolated systems. The primary goal of statistical thermodynamics (also known as equilibrium statistical mechanics) is to derive the classical thermodynamics of materials in terms of the properties of their constituent particles and the interactions between them. In other words, statistical thermodynamics provides a connection between the macroscopic properties of materials in thermodynamic equilibrium , and the microscopic behaviours and motions occurring inside the material. Whereas statistical mechanics proper involves dynamics, here the attention is focused on statistical equilibrium (steady state). Statistical equilibrium does not mean that the particles have stopped moving ( mechanical equilibrium ), rather, only that the ensemble is not evolving. A sufficient (but not necessary) condition for statistical equilibrium with an isolated system is that the probability distribution is a function only of conserved properties (total energy, total particle numbers, etc.). [ 18 ] There are many different equilibrium ensembles that can be considered, and only some of them correspond to thermodynamics. [ 18 ] Additional postulates are necessary to motivate why the ensemble for a given system should have one form or another. A common approach found in many textbooks is to take the equal a priori probability postulate . [ 19 ] This postulate states that, for an isolated system with an exactly known energy and exactly known composition, the system can be found with equal probability in any microstate consistent with that knowledge. The equal a priori probability postulate therefore provides a motivation for the microcanonical ensemble described below. There are various arguments in favour of the equal a priori probability postulate: Other fundamental postulates for statistical mechanics have also been proposed. [ 10 ] [ 21 ] [ 22 ] For example, recent studies show that the theory of statistical mechanics can be built without the equal a priori probability postulate. [ 21 ] [ 22 ] One such formalism is based on the fundamental thermodynamic relation together with the following set of postulates: [ 21 ] where the third postulate can be replaced by the following: [ 22 ] There are three equilibrium ensembles with a simple form that can be defined for any isolated system bounded inside a finite volume. [ 18 ] These are the most often discussed ensembles in statistical thermodynamics. In the macroscopic limit (defined below) they all correspond to classical thermodynamics. For systems containing many particles (the thermodynamic limit ), all three of the ensembles listed above tend to give identical behaviour. It is then simply a matter of mathematical convenience which ensemble is used. [ 9 ] : 227 The Gibbs theorem about equivalence of ensembles [ 23 ] was developed into the theory of concentration of measure phenomenon, [ 24 ] which has applications in many areas of science, from functional analysis to methods of artificial intelligence and big data technology.
[ 25 ] Important cases where the thermodynamic ensembles do not give identical results include: In these cases the correct thermodynamic ensemble must be chosen as there are observable differences between these ensembles not just in the size of fluctuations, but also in average quantities such as the distribution of particles. The correct ensemble is that which corresponds to the way the system has been prepared and characterized—in other words, the ensemble that reflects the knowledge about that system. [ 19 ] Once the characteristic state function for an ensemble has been calculated for a given system, that system is 'solved' (macroscopic observables can be extracted from the characteristic state function). Calculating the characteristic state function of a thermodynamic ensemble is not necessarily a simple task, however, since it involves considering every possible state of the system. While some hypothetical systems have been exactly solved, the most general (and realistic) case is too complex for an exact solution. Various approaches exist to approximate the true ensemble and allow calculation of average quantities. There are some cases which allow exact solutions. Although some problems in statistical physics can be solved analytically using approximations and expansions, most current research utilizes the large processing power of modern computers to simulate or approximate solutions. A common approach to statistical problems is to use a Monte Carlo simulation to yield insight into the properties of a complex system . Monte Carlo methods are important in computational physics , physical chemistry , and related fields, and have diverse applications including medical physics , where they are used to model radiation transport for radiation dosimetry calculations. [ 27 ] [ 28 ] [ 29 ] The Monte Carlo method examines just a few of the possible states of the system, with the states chosen randomly (with a fair weight). As long as these states form a representative sample of the whole set of states of the system, the approximate characteristic function is obtained. As more and more random samples are included, the errors are reduced to an arbitrarily low level. Many physical phenomena involve quasi-thermodynamic processes out of equilibrium, for example: All of these processes occur over time with characteristic rates. These rates are important in engineering. The field of non-equilibrium statistical mechanics is concerned with understanding these non-equilibrium processes at the microscopic level. (Statistical thermodynamics can only be used to calculate the final result, after the external imbalances have been removed and the ensemble has settled back down to equilibrium.) In principle, non-equilibrium statistical mechanics could be mathematically exact: ensembles for an isolated system evolve over time according to deterministic equations such as Liouville's equation or its quantum equivalent, the von Neumann equation . These equations are the result of applying the mechanical equations of motion independently to each state in the ensemble. These ensemble evolution equations inherit much of the complexity of the underlying mechanical motion, and so exact solutions are very difficult to obtain. Moreover, the ensemble evolution equations are fully reversible and do not destroy information (the ensemble's Gibbs entropy is preserved). In order to make headway in modelling irreversible processes, it is necessary to consider additional factors besides probability and reversible mechanics. 
Non-equilibrium mechanics is therefore an active area of theoretical research as the range of validity of these additional assumptions continues to be explored. A few approaches are described in the following subsections. One approach to non-equilibrium statistical mechanics is to incorporate stochastic (random) behaviour into the system. Stochastic behaviour destroys information contained in the ensemble. While this is technically inaccurate (aside from hypothetical situations involving black holes , a system cannot in itself cause loss of information), the randomness is added to reflect that information of interest becomes converted over time into subtle correlations within the system, or to correlations between the system and environment. These correlations appear as chaotic or pseudorandom influences on the variables of interest. By replacing these correlations with randomness proper, the calculations can be made much easier. The Boltzmann transport equation and related approaches are important tools in non-equilibrium statistical mechanics due to their extreme simplicity. These approximations work well in systems where the "interesting" information is immediately (after just one collision) scrambled up into subtle correlations, which essentially restricts them to rarefied gases. The Boltzmann transport equation has been found to be very useful in simulations of electron transport in lightly doped semiconductors (in transistors ), where the electrons are indeed analogous to a rarefied gas. Another important class of non-equilibrium statistical mechanical models deals with systems that are only very slightly perturbed from equilibrium. With very small perturbations, the response can be analysed in linear response theory . A remarkable result, as formalized by the fluctuation–dissipation theorem , is that the response of a system when near equilibrium is precisely related to the fluctuations that occur when the system is in total equilibrium. Essentially, a system that is slightly away from equilibrium—whether put there by external forces or by fluctuations—relaxes towards equilibrium in the same way, since the system cannot tell the difference or "know" how it came to be away from equilibrium. [ 30 ] : 664 This provides an indirect avenue for obtaining numbers such as ohmic conductivity and thermal conductivity by extracting results from equilibrium statistical mechanics. Since equilibrium statistical mechanics is mathematically well defined and (in some cases) more amenable for calculations, the fluctuation–dissipation connection can be a convenient shortcut for calculations in near-equilibrium statistical mechanics. A few of the theoretical tools used to make this connection include: An advanced approach uses a combination of stochastic methods and linear response theory . As an example, one approach to compute quantum coherence effects ( weak localization , conductance fluctuations ) in the conductance of an electronic system is the use of the Green–Kubo relations, with the inclusion of stochastic dephasing by interactions between various electrons by use of the Keldysh method. [ 31 ] [ 32 ] The ensemble formalism can be used to analyze general mechanical systems with uncertainty in knowledge about the state of a system. Ensembles are also used in: Statistical physics explains and quantitatively describes superconductivity , superfluidity , turbulence , collective phenomena in solids and plasma , and the structural features of liquid . 
It underlies modern astrophysics and the virial theorem . In solid state physics, statistical physics aids the study of liquid crystals , phase transitions , and critical phenomena . Many experimental studies of matter are entirely based on the statistical description of a system. These include the scattering of cold neutrons , X-rays , visible light , and more. Statistical physics also plays a role in materials science, nuclear physics, astrophysics, chemistry, biology and medicine (e.g. study of the spread of infectious diseases). [ citation needed ] Analytical and computational techniques derived from the statistical physics of disordered systems can be extended to large-scale problems, including machine learning, e.g., to analyze the weight space of deep neural networks . [ 33 ] Statistical physics is thus finding applications in the area of medical diagnostics . [ 34 ] Quantum statistical mechanics is statistical mechanics applied to quantum mechanical systems . In quantum mechanics, a statistical ensemble (probability distribution over possible quantum states ) is described by a density operator S , which is a non-negative, self-adjoint , trace-class operator of trace 1 on the Hilbert space H describing the quantum system. This can be shown under various mathematical formalisms for quantum mechanics . One such formalism is provided by quantum logic . [ citation needed ]
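As a minimal numerical sketch of the density-operator description just mentioned, the following code builds a mixed state for a two-level system and checks the trace-one, self-adjoint and non-negative properties; the particular ensemble (a 70/30 mixture of two pure states) is an arbitrary choice for illustration.

    import numpy as np

    # Density matrix for a qubit ensemble: 70% in state |0>, 30% in |+> = (|0> + |1>)/sqrt(2).
    ket0 = np.array([1.0, 0.0])
    ket_plus = np.array([1.0, 1.0]) / np.sqrt(2)
    rho = 0.7 * np.outer(ket0, ket0) + 0.3 * np.outer(ket_plus, ket_plus)

    print(np.trace(rho))                         # 1.0: trace one
    print(np.allclose(rho, rho.conj().T))        # True: self-adjoint
    print(np.all(np.linalg.eigvalsh(rho) >= 0))  # True: non-negative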
https://en.wikipedia.org/wiki/Statistical_physics
In protein structure prediction , statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB). The original method to obtain such potentials is the quasi-chemical approximation , due to Miyazawa and Jernigan. [ 2 ] It was later followed by the potential of mean force (statistical PMF [ Note 1 ] ), developed by Sippl. [ 3 ] Although the obtained scores are often considered as approximations of the free energy —thus referred to as pseudo-energies —this physical interpretation is incorrect. [ 4 ] [ 5 ] Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences. [ 6 ] Possible features to which a pseudo-energy can be assigned include: The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials . For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids . The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the PDB ). Many textbooks present the statistical PMFs as proposed by Sippl [ 3 ] as a simple consequence of the Boltzmann distribution , as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice. The Boltzmann distribution, applied to a specific pair of amino acids, is given by P(r) = exp(−F(r)/kT) / Z, where r is the distance, k is the Boltzmann constant , T is the temperature and Z is the partition function , with Z = ∫ exp(−F(r)/kT) dr. The quantity F(r) is the free energy assigned to the pairwise system. Simple rearrangement results in the inverse Boltzmann formula , which expresses the free energy F(r) as a function of P(r): F(r) = −kT ln P(r) − kT ln Z. To construct a PMF, one then introduces a so-called reference state with a corresponding distribution Q_R and partition function Z_R, and calculates the following free energy difference: ΔF(r) = −kT ln [ P(r) / Q_R(r) ] − kT ln [ Z / Z_R ]. The reference state typically results from a hypothetical system in which the specific interactions between the amino acids are absent. The second term involving Z and Z_R can be ignored, as it is a constant. In practice, P(r) is estimated from the database of known protein structures, while Q_R(r) typically results from calculations or simulations. For example, P(r) could be the conditional probability of finding the Cβ atoms of a valine and a serine at a given distance r from each other, giving rise to the free energy difference ΔF.
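A minimal sketch of the inverse-Boltzmann construction above, using synthetic stand-ins for the database and reference-state statistics; the distance grid, the made-up distributions and the choice kT = 1 are illustrative assumptions rather than real PDB-derived numbers.

    import numpy as np

    kT = 1.0  # work in units of kT

    # Synthetic stand-ins: p_obs plays the role of P(r), the observed distance
    # distribution for one residue pair type, and q_ref the role of Q_R(r),
    # a reference state in which the specific interaction is absent.
    r = np.linspace(2.0, 15.0, 200)
    q_ref = r**2 / np.trapz(r**2, r)                        # volume-like reference state
    p_obs = q_ref * np.exp(-0.8 * ((r - 5.0) / 1.0) ** 2)   # enrich contacts near 5 angstroms
    p_obs /= np.trapz(p_obs, r)

    # Inverse Boltzmann / PMF: Delta F(r) = -kT ln [ P(r) / Q_R(r) ]
    pmf = -kT * np.log(p_obs / q_ref)
    print(r[np.argmin(pmf)])   # the most favourable (lowest pseudo-energy) distance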
The total free energy difference of a protein, ΔF_T, is then claimed to be the sum of all the pairwise free energies: ΔF_T = Σ_{i<j} ΔF(r_ij | a_i, a_j) = −kT Σ_{i<j} ln [ P(r_ij | a_i, a_j) / Q_R(r_ij | a_i, a_j) ], where the sum runs over all amino acid pairs a_i, a_j (with i < j) and r_ij is their corresponding distance. In many studies Q_R does not depend on the amino acid sequence . [ 7 ] Intuitively, it is clear that a low value for ΔF_T indicates that the set of distances in a structure is more likely in proteins than in the reference state. However, the physical meaning of these statistical PMFs has been widely disputed since their introduction. [ 4 ] [ 5 ] [ 8 ] [ 9 ] The main issues are: In response to the issue regarding the physical validity, the first justification of statistical PMFs was attempted by Sippl. [ 10 ] It was based on an analogy with the statistical physics of liquids. For liquids, the potential of mean force is related to the radial distribution function g(r), which is given by [ 11 ] g(r) = P(r) / Q_R(r), where P(r) and Q_R(r) are the respective probabilities of finding two particles at a distance r from each other in the liquid and in the reference state. For liquids, the reference state is clearly defined; it corresponds to the ideal gas, consisting of non-interacting particles. The two-particle potential of mean force W(r) is related to g(r) by W(r) = −kT ln g(r). According to the reversible work theorem, the two-particle potential of mean force W(r) is the reversible work required to bring two particles in the liquid from infinite separation to a distance r from each other. [ 11 ] Sippl justified the use of statistical PMFs—a few years after he introduced them for use in protein structure prediction—by appealing to the analogy with the reversible work theorem for liquids. For liquids, g(r) can be experimentally measured using small angle X-ray scattering ; for proteins, P(r) is obtained from the set of known protein structures, as explained in the previous section. However, as Ben-Naim wrote in a publication on the subject: [ 5 ] [...] the quantities, referred to as "statistical potentials," "structure based potentials," or "pair potentials of mean force", as derived from the protein data bank (PDB), are neither "potentials" nor "potentials of mean force," in the ordinary sense as used in the literature on liquids and solutions. Moreover, this analogy does not solve the issue of how to specify a suitable reference state for proteins. In the mid-2000s, authors started to combine multiple statistical potentials, derived from different structural features, into composite scores . [ 12 ] For that purpose, they used machine learning techniques, such as support vector machines (SVMs). Probabilistic neural networks (PNNs) have also been applied for the training of a position-specific distance-dependent statistical potential.
[ 13 ] In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential. [ 14 ] The resulting method, named AlphaFold , won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by producing the most accurate structure prediction for 25 out of 43 free modelling domains . Baker and co-workers [ 15 ] justified statistical PMFs from a Bayesian point of view and used these insights in the construction of the coarse grained ROSETTA energy function. According to Bayesian probability calculus, the conditional probability P(X | A) of a structure X, given the amino acid sequence A, can be written as P(X | A) ∝ P(A | X) P(X): P(X | A) is proportional to the product of the likelihood P(A | X) times the prior P(X). By assuming that the likelihood can be approximated as a product of pairwise probabilities, and applying Bayes' theorem , the likelihood can be written as P(A | X) ≈ ∏_{i<j} P(a_i, a_j | r_ij) ∝ ∏_{i<j} [ P(r_ij | a_i, a_j) / P(r_ij) ], where the product runs over all amino acid pairs a_i, a_j (with i < j), and r_ij is the distance between amino acids i and j. Obviously, the negative of the logarithm of the expression has the same functional form as the classic pairwise distance statistical PMFs, with the denominator playing the role of the reference state. This explanation has two shortcomings: it relies on the unfounded assumption that the likelihood can be expressed as a product of pairwise probabilities, and it is purely qualitative . Hamelryck and co-workers [ 6 ] later gave a quantitative explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey and named probability kinematics . This variant of Bayesian thinking (sometimes called " Jeffrey conditioning ") allows updating a prior distribution based on new information on the probabilities of the elements of a partition on the support of the prior. From this point of view, (i) it is not necessary to assume that the database of protein structures—used to build the potentials—follows a Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise distances, and (iii) the reference ratio is determined by the prior distribution. Expressions that resemble statistical PMFs naturally result from the application of probability theory to solve a fundamental problem that arises in protein structure prediction: how to improve an imperfect probability distribution Q(X) over a first variable X using a probability distribution P(Y) over a second variable Y, with Y = f(X). [ 6 ] Typically, X and Y are fine and coarse grained variables, respectively.
For example, Q(X) could concern the local structure of the protein, while P(Y) could concern the pairwise distances between the amino acids. In that case, X could for example be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles). In order to combine the two distributions, such that the local structure will be distributed according to Q(X), while the pairwise distances will be distributed according to P(Y), the following expression is needed: P(X) = Q(X) P(Y) / Q(Y), where Q(Y) is the distribution over Y implied by Q(X). The ratio in the expression corresponds to the PMF. Typically, Q(X) is brought in by sampling (typically from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF. This explanation is quantitative, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse grained variables. It also provides a rigorous definition of the reference state, which is implied by Q(X). Conventional applications of pairwise distance statistical PMFs usually lack two necessary features to make them fully rigorous: the use of a proper probability distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously defined by Q(X). Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading . Many differently parameterized statistical potentials have been shown to successfully identify the native state structure from an ensemble of decoy or non-native structures. [ 16 ] Statistical potentials are not only used for protein structure prediction , but also for modelling the protein folding pathway. [ 17 ] [ 18 ]
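To illustrate how such a potential is used to assess a structural model, here is a minimal toy sketch; the residue alphabet, the distance binning, the random pseudo-energy table and the coordinates are all fabricated for illustration, whereas a real application would use a table derived from the PDB as described above.

    import numpy as np

    def score_structure(coords, seq, pmf_table, bins):
        """Total pseudo-energy: the sum over residue pairs of Delta F(r_ij | a_i, a_j).
        pmf_table[a_i][a_j] holds binned pseudo-energies for that amino-acid pair."""
        total = 0.0
        n = len(seq)
        for i in range(n):
            for j in range(i + 1, n):
                r_ij = np.linalg.norm(coords[i] - coords[j])
                k = np.searchsorted(bins, r_ij) - 1
                if 0 <= k < len(bins) - 1:          # ignore distances outside the table
                    total += pmf_table[seq[i]][seq[j]][k]
        return total

    # Toy inputs: two residue types, random pseudo-energies and random coordinates.
    rng = np.random.default_rng(5)
    bins = np.linspace(2.0, 15.0, 14)
    table = {a: {b: rng.normal(size=len(bins) - 1) for b in "AV"} for a in "AV"}
    coords = rng.uniform(0.0, 20.0, size=(6, 3))
    print(score_structure(coords, "AVAVAA", table, bins))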
https://en.wikipedia.org/wiki/Statistical_potential
Statistical proof is the rational demonstration of degree of certainty for a proposition , hypothesis or theory that is used to convince others subsequent to a statistical test of the supporting evidence and the types of inferences that can be drawn from the test scores. Statistical methods are used to increase the understanding of the facts and the proof demonstrates the validity and logic of inference with explicit reference to a hypothesis, the experimental data , the facts, the test, and the odds . Proof has two essential aims: the first is to convince and the second is to explain the proposition through peer and public review. [ 1 ] The burden of proof rests on the demonstrable application of the statistical method, the disclosure of the assumptions, and the relevance that the test has with respect to a genuine understanding of the data relative to the external world. There are adherents to several different statistical philosophies of inference, such as Bayes' theorem versus the likelihood function , or positivism versus critical rationalism . These methods of reasoning have direct bearing on statistical proof and its interpretations in the broader philosophy of science. [ 1 ] [ 2 ] A common demarcation between science and non-science is the hypothetico-deductive proof of falsification developed by Karl Popper , which is a well-established practice in the tradition of statistics. Other modes of inference, however, may include the inductive and abductive modes of proof. [ 3 ] Scientists do not use statistical proof as a means to attain certainty, but to falsify claims and explain theory. Science cannot achieve absolute certainty, nor is it a continuous march toward an objective truth as the vernacular (as opposed to the scientific) meaning of the term "proof" might imply. Statistical proof offers a kind of proof of a theory's falsity and the means to learn heuristically through repeated statistical trials and experimental error. [ 2 ] Statistical proof also has applications in legal matters with implications for the legal burden of proof . [ 4 ] There are two kinds of axioms : 1) conventions that are taken as true and that should be avoided because they cannot be tested, and 2) hypotheses. [ 5 ] Proof in the theory of probability was built on four axioms developed in the late 17th century: The preceding axioms provide the statistical proof and basis for the laws of randomness, or objective chance, from which modern statistical theory has advanced. Experimental data, however, can never prove that the hypothesis (h) is true, but relies on an inductive inference by measuring the probability of the hypothesis relative to the empirical data. The proof is in the rational demonstration of using the logic of inference , math , testing , and deductive reasoning of significance . [ 1 ] [ 2 ] [ 6 ] The term proof descended from its Latin roots (provable, probable, probare L.) meaning to test . [ 7 ] [ 8 ] Hence, proof is a form of inference by means of a statistical test. Statistical tests are formulated on models that generate probability distributions . Examples of probability distributions might include the binomial , normal , or Poisson distributions , which give exact descriptions of variables that behave according to natural laws of random chance . When a statistical test is applied to samples of a population, the test determines if the sample statistics are significantly different from the assumed null-model .
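A minimal sketch of such a test against a null model, using a fabricated coin-flipping example (the counts and the null model are chosen purely for illustration), estimates by simulation how often the null model produces data at least as extreme as what was observed.

    import numpy as np

    # Null model: a fair coin, Binomial(n=100, p=0.5). Observed: 61 heads in 100 flips.
    rng = np.random.default_rng(6)
    observed_heads = 61
    null_draws = rng.binomial(n=100, p=0.5, size=100_000)

    # Two-sided tail probability: how often the null model lands at least as far from the
    # expected 50 heads as the observation did. A small value counts against the null model.
    p_value = np.mean(np.abs(null_draws - 50) >= abs(observed_heads - 50))
    print(p_value)   # roughly 0.03-0.04 for this example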
True values of a population, which are unknowable in practice, are called parameters of the population. Researchers sample from populations, which provide estimates of the parameters, to calculate the mean or standard deviation. If the entire population is sampled, then the sample statistic mean and distribution will converge with the parametric distribution. [ 9 ] Using the scientific method of falsification, the probability value at which the sample statistic is judged to diverge from the null-model by more than can be explained by chance alone is set prior to the test. Most statisticians set the prior probability value at 0.05 or 0.1, which means that if the sample statistics diverge from the parametric model more than 5 (or 10) times out of 100, then the discrepancy is unlikely to be explained by chance alone and the null-hypothesis is rejected. Statistical models provide exact outcomes of the parametric values and estimates of the sample statistics. Hence, the burden of proof rests in the sample statistics that provide estimates of a statistical model. Statistical models contain the mathematical proof of the parametric values and their probability distributions. [ 10 ] [ 11 ] Bayesian statistics are based on a different philosophical approach for proof of inference . The mathematical formula for Bayes's theorem is: Pr[Parameter | Data] = Pr[Data | Parameter] × Pr[Parameter] / Pr[Data]. The formula is read as the probability of the parameter (or hypothesis = h, as used in the notation on axioms ) "given" the data (or empirical observation), where the vertical bar refers to "given". The right hand side of the formula calculates the prior probability of a statistical model (Pr[Parameter]) with the likelihood (Pr[Data | Parameter]) to produce a posterior probability distribution of the parameter (Pr[Parameter | Data]). The posterior probability is the likelihood that the parameter is correct given the observed data or sample statistics. [ 12 ] Hypotheses can be compared using Bayesian inference by means of the Bayes factor, which is the ratio of the posterior odds to the prior odds. It provides a measure of whether the data have increased or decreased the likelihood of one hypothesis relative to another. [ 13 ] The statistical proof is the Bayesian demonstration that one hypothesis has a higher (weak, strong, positive) likelihood. [ 13 ] There is considerable debate about whether the Bayesian method aligns with Karl Popper's method of proof of falsification, where some have suggested that "...there is no such thing as "accepting" hypotheses at all. All that one does in science is assign degrees of belief..." [ 14 ] : 180 According to Popper, hypotheses that have withstood testing and have yet to be falsified are not verified but corroborated . Some researchers have suggested that Popper's quest to define corroboration on the premise of probability put his philosophy in line with the Bayesian approach. In this context, the likelihood of one hypothesis relative to another may be an index of corroboration, not confirmation, and thus statistically proven through rigorous objective standing. [ 6 ] [ 15 ] "Where gross statistical disparities can be shown, they alone may in a proper case constitute prima facie proof of a pattern or practice of discrimination."
[ nb 1 ] : 271 Statistical proof in a legal proceeding can be sorted into three categories of evidence: Statistical proof was not regularly applied in decisions concerning United States legal proceedings until the mid-1970s, following a landmark jury discrimination case in Castaneda v. Partida . The US Supreme Court ruled that gross statistical disparities constitute " prima facie proof" of discrimination, resulting in a shift of the burden of proof from plaintiff to defendant. Since that ruling, statistical proof has been used in many other cases on inequality, discrimination, and DNA evidence. [ 4 ] [ 17 ] [ 18 ] However, there is not a one-to-one correspondence between statistical proof and the legal burden of proof. "The Supreme Court has stated that the degrees of rigor required in the fact finding processes of law and science do not necessarily correspond." [ 18 ] : 1533 In an example of a death row sentence ( McCleskey v. Kemp [ nb 2 ] ) concerning racial discrimination, the petitioner, a black man named McCleskey, was charged with the murder of a white police officer during a robbery. Expert testimony for McCleskey introduced a statistical proof showing that "defendants charged with killing white victims were 4.3 times as likely to receive a death sentence as charged with killing blacks.". [ 19 ] : 595 Nonetheless, the statistical evidence was insufficient "to prove that the decisionmakers in his case acted with discriminatory purpose." [ 19 ] : 596 It was further argued that there were "inherent limitations of the statistical proof", [ 19 ] : 596 because it did not refer to the specifics of the individual. Despite the statistical demonstration of an increased probability of discrimination, the legal burden of proof (it was argued) had to be examined on a case-by-case basis. [ 19 ]
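Returning to the Bayes factor described earlier in this section, here is a toy numerical sketch of how a ratio of likelihoods turns prior odds into posterior odds; the coin-flip data and the two simple hypotheses are invented solely for illustration.

    from math import comb

    # Compare two simple hypotheses for the same data (61 heads in 100 flips):
    # h0: success probability p = 0.5   versus   h1: p = 0.6.
    def binomial_likelihood(k, n, p):
        return comb(n, k) * p**k * (1 - p) ** (n - k)

    k, n = 61, 100
    bayes_factor = binomial_likelihood(k, n, 0.6) / binomial_likelihood(k, n, 0.5)

    # With equal prior odds, the posterior odds equal the Bayes factor.
    posterior_odds = bayes_factor * 1.0
    print(bayes_factor)                            # about 11, favouring h1
    print(posterior_odds / (1 + posterior_odds))   # posterior probability of h1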
https://en.wikipedia.org/wiki/Statistical_proof
A statistical syllogism (or proportional syllogism or direct inference ) is a non- deductive syllogism . It argues, using inductive reasoning , from a generalization true for the most part to a particular case. Statistical syllogisms may use qualifying words like "most", "frequently", "almost never", "rarely", etc., or may have a statistical generalization as one or both of their premises. For example: Premise 1 (the major premise) is a generalization , and the argument attempts to draw a conclusion from that generalization. In contrast to a deductive syllogism, the premises logically support or confirm the conclusion rather than strictly implying it: it is possible for the premises to be true and the conclusion false, but it is not likely. General form: In the abstract form above, F is called the "reference class" and G is the "attribute class" and I is the individual object. So, in the earlier example, "(things that are) taller than 26 inches" is the attribute class and "people" is the reference class. Unlike many other forms of syllogism, a statistical syllogism is inductive , so when evaluating this kind of argument it is important to consider how strong or weak it is, along with the other rules of induction (as opposed to deduction ). In the above example, if 99% of people are taller than 26 inches, then the probability of the conclusion being true is 99%. Two dicto simpliciter fallacies can occur in statistical syllogisms. They are " accident " and " converse accident ". Faulty generalization fallacies can also affect any argument premise that uses a generalization. A problem with applying the statistical syllogism in real cases is the reference class problem : given that a particular case I is a member of very many reference classes F, in which the proportion of attribute G may differ widely, how should one decide which class to use in applying the statistical syllogism? The importance of the statistical syllogism was urged by Henry E. Kyburg, Jr. , who argued that all statements of probability could be traced to a direct inference. For example, when taking off in an airplane, our confidence (but not certainty) that we will land safely is based on our knowledge that the vast majority of flights do land safely. The widespread use of confidence intervals in statistics is often justified using a statistical syllogism, in such words as " Were this procedure to be repeated on multiple samples, the calculated confidence interval (which would differ for each sample) would encompass the true population parameter 90% of the time." [ 1 ] The inference from what would mostly happen in multiple samples to the confidence we should have in the particular sample involves a statistical syllogism. [ 2 ] One person who argues that statistical syllogism is more of a probability is Donald Williams. [ 3 ] Ancient writers on logic and rhetoric approved arguments from "what happens for the most part". For example, Aristotle writes "that which people know to happen or not to happen, or to be or not to be, mostly in a particular way, is likely, for example, that the envious are malevolent or that those who are loved are affectionate." [ 4 ] [ 5 ] The ancient Jewish law of the Talmud used a "follow the majority" rule to resolve cases of doubt. [ 5 ] : 172–5 From the invention of insurance in the 14th century, insurance rates were based on estimates (often intuitive) of the frequencies of the events insured against, which involves an implicit use of a statistical syllogism. 
John Venn pointed out in 1876 that this leads to a reference class problem of deciding in what class containing the individual case to take frequencies in. He writes, “It is obvious that every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things”, leading to problems with how to assign probabilities to a single case, for example the probability that John Smith, a consumptive Englishman aged fifty, will live to sixty-one. [ 6 ] In the 20th century, clinical trials were designed to find the proportion of cases of disease cured by a drug, in order that the drug can be applied confidently to an individual patient with the disease. The statistical syllogism was used by Donald Cary Williams and David Stove in their attempt to give a logical solution to the problem of induction . They put forward the argument, which has the form of a statistical syllogism: If the population is, say, a large number of balls which are black or white but in an unknown proportion, and one takes a large sample and finds they are all white, then it is likely, using this statistical syllogism, that the population is all or nearly all white. That is an example of inductive reasoning. [ 7 ] Statistical syllogisms may be used as legal evidence but it is usually believed that a legal decision should not be based solely on them. For example, in L. Jonathan Cohen 's "gatecrasher paradox", 499 tickets to a rodeo have been sold and 1000 people are observed in the stands. The rodeo operator sues a random attendee for non-payment of the entrance fee. The statistical syllogism: is a strong one, but it is felt to be unjust to burden a defendant with membership of a class, without evidence that bears directly on the defendant. [ 8 ]
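As a small simulation of the confidence-interval syllogism quoted earlier (most intervals constructed this way cover the true parameter, so we place corresponding confidence in the one at hand), the sketch below uses an invented population mean, sample size and interval recipe purely for illustration.

    import numpy as np

    rng = np.random.default_rng(7)
    true_mean, n, trials = 5.0, 50, 20_000
    hits = 0
    for _ in range(trials):
        sample = rng.normal(true_mean, 2.0, size=n)
        half_width = 1.645 * sample.std(ddof=1) / np.sqrt(n)   # ~90% normal-theory interval
        lo, hi = sample.mean() - half_width, sample.mean() + half_width
        hits += lo <= true_mean <= hi
    print(hits / trials)   # close to 0.90: most intervals cover the true mean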
https://en.wikipedia.org/wiki/Statistical_syllogism
The following terms are used by electrical engineers in statistical signal processing studies instead of typical statistician's terms. In other engineering fields, particularly mechanical engineering , uncertainty analysis examines systematic and random components of variations in measurements associated with physical experiments.
https://en.wikipedia.org/wiki/Statisticians'_and_engineers'_cross-reference_of_statistical_terms
A status message is a function of some instant messaging applications whereby a user may post a message that appears automatically to other users if they attempt to make contact. A status message can tell other contacts the user's current status, such as being busy or what the user is currently doing. [ 1 ] It is analogous to the voice message in an answering machine or voice mail system. However, status messages may be displayed even if the person is present. They are often updated much more frequently than messages in answering machines, and thus may serve as a means of instant, limited "publication" or indirect communication. In most instant messengers, the available status is denoted by a green dot , while the busy status is denoted by a red dot . Whereas answering machine or voice mail messages often have a generic greeting to leave a message, status messages more often contain a description of where the person is at the moment or what they are doing. Because most instant messaging clients indicate to users when their online contacts are away before they send a message, away messages are more often meant to be read in lieu of sending a message rather than as a response. Away messages are not to be confused with idle messages , which are automatic replies sent when the messaging client has determined that the replier is not at his or her computer. In the XMPP protocol for instant messaging, the status of a user is signalled by an element called presence . This provides a variety of functions, including the option to subscribe to the status so that the recipient is continuously updated with changes in status. [ 2 ]
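As a rough illustration of how XMPP carries this information, a presence stanza can hold both an availability value and a free-text status message. The sketch below builds such a stanza with Python's standard library; the "dnd" (do not disturb) value and the status text are example data, and real clients add addressing attributes and send the stanza over an XML stream.

    import xml.etree.ElementTree as ET

    # Minimal sketch of an XMPP <presence> stanza carrying a status message.
    presence = ET.Element("presence")
    ET.SubElement(presence, "show").text = "dnd"             # availability value
    ET.SubElement(presence, "status").text = "In a meeting"  # status message text
    print(ET.tostring(presence, encoding="unicode"))
    # <presence><show>dnd</show><status>In a meeting</status></presence>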
https://en.wikipedia.org/wiki/Status_message_(instant_messaging)
Statutory interpretation is the process by which courts interpret and apply legislation . Some amount of interpretation is often necessary when a case involves a statute . Sometimes the words of a statute have a plain and a straightforward meaning, but in many cases, there is some ambiguity in the words of the statute that must be resolved by the judge. To find the meanings of statutes, judges use various tools and methods of statutory interpretation, including traditional canons of statutory interpretation, legislative history, and purpose. In common law jurisdictions, the judiciary may apply rules of statutory interpretation both to legislation enacted by the legislature and to delegated legislation such as administrative agency regulations . Statutory interpretation first became significant in common law systems, of which historically England is the exemplar. In Roman and civil law, a statute (or code) guides the magistrate, but there is no judicial precedent. In England, Parliament historically failed to enact a comprehensive code of legislation, which is why it was left to the courts to develop the common law; and having decided a case and given reasons for the decision , the decision would become binding on later courts. Accordingly, a particular interpretation of a statute would also become binding, and it became necessary to introduce a consistent framework for statutory interpretation. In the construction (interpretation) of statutes, the principal aim of the court must be to carry out the "intention of Parliament", and the English courts developed three main rules (plus some minor ones) to assist them in the task. These were: the mischief rule , the literal rule , and the golden rule . Statutes may be presumed to incorporate certain components, as Parliament is "presumed" to have intended their inclusion. [ 1 ] For example: Where legislation and case law are in conflict, there is a presumption that legislation takes precedence insofar as there is any inconsistency. In the United Kingdom this principle is known as parliamentary sovereignty ; but while Parliament has exclusive competence to legislate, the courts (mindful of their historic role of having developed the entire system of common law) retain sole competence to interpret statutes. The age old process of application of the enacted law has led to the formulation of certain rules of interpretation. According to Cross, "Interpretation is the process by which the courts determine the meaning of a statutory provision for the purpose of applying it to the situation before them", [ 6 ] while Salmond calls it "the process by which the courts seek to ascertain the meaning of the legislature through the medium of authoritative forms in which it is expressed". [ 7 ] Interpretation of a particular statute depends upon the degree of creativity applied by the judges or the court in the reading of it, employed to achieve some stated end. It is often mentioned that common law statutes can be interpreted by using the Golden Rule, the Mischief Rule or the Literal Rule. However, according to Francis Bennion , author of texts on statutory interpretation, [ 8 ] there are no such simple devices to elucidate complex statutes, "[i]nstead there are a thousand and one interpretative criteria ". [ 9 ] A statute is an edict of a legislature, [ 10 ] and the conventional way of interpreting a statute is to seek the "intention" of its maker of framer. 
It is the judicature's duty to act upon the true intention of the legislature or the mens or sententia legis. The courts have to objectively determine the interpretation with guidance furnished by the accepted principles. [ 11 ] If a statutory provision is open to more than one interpretation the court has to choose that interpretation which represents the true intention of the legislature. [ 12 ] [ 13 ] The function of the courts is only to expound and not to legislate. [ 14 ] Federal jurisdictions may presume that either federal or local government authority prevails in the absence of a defined rule. In Canada , there are areas of law where provincial governments and the federal government have concurrent jurisdiction. In these cases the federal law is held to be paramount. However, in areas where the Canadian constitution is silent, the federal government does not necessarily have superior jurisdiction. Rather, an area of law that is not expressly mentioned in Canada's Constitution will have to be interpreted to fall under either the federal residual jurisdiction found in the preamble of s. 91—known as the Peace, Order and Good Government clause—or the provinces' residual jurisdiction of "Property and Civil Rights" under s. 92(13) of the 1867 Constitution Act. This contrasts with other federal jurisdictions, notably the United States and Australia , where it is presumed that if legislation is not enacted pursuant to a specific provision of the federal Constitution , the states will have authority over the relevant matter in their respective jurisdictions, unless the states' definitions of their statutes conflict with federally established or recognized rights. The judiciary interprets how legislation should apply in a particular case as no legislation unambiguously and specifically addresses all matters. Legislation may contain uncertainties for a variety of reasons. Therefore, the court must try to determine how a statute should be enforced. This requires statutory construction . It is a tenet of statutory construction that the legislature is supreme (assuming constitutionality) when creating law and that the court is merely an interpreter of the law. Nevertheless, in practice, by performing the construction the court can make sweeping changes in the operation of the law. Moreover, courts must also often view a case's statutory context . While cases occasionally focus on a few key words or phrases, judges may occasionally turn to viewing a case as a whole in order to gain deeper understanding. The totality of the language of a particular case allows the Justices presiding to better consider their rulings when it comes to these key words and phrases. [ 18 ] Statutory interpretation is the process by which a court looks at a statute and determines what it means. A statute, which is a bill or law passed by the legislature, imposes obligations and rules on the people. Although the legislature makes the statute, it may be open to interpretation and have ambiguities. Statutory interpretation is the process of resolving those ambiguities and deciding how a particular bill or law will apply in a particular case. Assume, for example, that a statute mandates that all motor vehicles travelling on a public roadway must be registered with the Department of Motor Vehicles (DMV). If the statute does not define the term "motor vehicles", then that term will have to be interpreted if questions arise in a court of law.
A person driving a motorcycle might be pulled over and the police may try to fine him if his motorcycle is not registered with the DMV. If that individual argued to the court that a motorcycle is not a "motor vehicle", then the court would have to interpret the statute to determine what the legislature meant by "motor vehicle" and whether or not the motorcycle fell within that definition and was covered by the statute. There are numerous rules of statutory interpretation. The first and most important rule is the rule dealing with the statute's plain language. This rule essentially states that the statute means what it says. If, for example, the statute says "motor vehicles", then the court is most likely to construe that the legislation is referring to the broad range of motorised vehicles normally required to travel along roadways and not "aeroplanes" or "bicycles" even though aeroplanes are vehicles propelled by a motor and bicycles may be used on a roadway. In Australia and in the United States, the courts have consistently stated that the text of the statute is used first, and it is read as it is written, using the ordinary meaning of the words of the statute. Below are various quotes on this topic from US courts: It is presumed that a statute will be interpreted so as to be internally consistent. A particular section of the statute shall not be divorced from the rest of the act. The ejusdem generis (or eiusdem generis , Latin for "of the same kind") rule applies to resolve the problem of giving meaning to groups of words where one of the words is ambiguous or inherently unclear. The rule states that where "general words follow enumerations of particular classes or persons or things, the general words shall be construed as applicable only to persons or things of the same general nature or kind as those enumerated". [ 19 ] A statute shall not be interpreted so as to be inconsistent with other statutes. Where there is an apparent inconsistency, the judiciary will attempt to provide a harmonious interpretation. [ example needed ] Legislative bodies themselves may try to influence or assist the courts in interpreting their laws by placing into the legislation itself statements to that effect. These provisions have many different names, but are typically noted as: In most legislatures internationally, these provisions of the bill simply give the legislature's goals and desired effects of the law, and are considered non-substantive and non-enforceable in and of themselves. [ 21 ] [ 22 ] However in the case of the European Union, a supranational body, the recitals in Union legislation must specify the reasons the operative provisions were adopted, and if they do not, the legislation is void. [ 23 ] This has been interpreted by the courts as giving them a role in statutory interpretation with Klimas, Tadas and Vaiciukaite explaining "recitals in EC law are not considered to have independent legal value, but they can expand an ambiguous provision's scope. They cannot, however, restrict an unambiguous provision's scope, but they can be used to determine the nature of a provision, and this can have a restrictive effect." [ 23 ] Also known as canons of construction, canons give common sense guidance to courts in interpreting the meaning of statutes. Most canons emerge from the common law process through the choices of judges. Critics [ who? ] of the use of canons argue that the canons constrain judges and limit the ability of the courts to legislate from the bench . Proponents [ who? 
] argue that a judge always has a choice between competing canons that lead to different results, so judicial discretion is only hidden through the use of canons, not reduced. These canons can be divided into two major groups: Textual canons are rules of thumb for understanding the words of the text. Some of the canons are still known by their traditional Latin names. Substantive canons instruct the court to favor interpretations that promote certain values or policy results. Deference canons instruct the court to defer to the interpretation of another institution, such as an administrative agency or Congress. These canons reflect an understanding that the judiciary is not the only branch of government entrusted with constitutional responsibility. The avoidance canon was discussed in Bond v. United States when the defendant placed toxic chemicals on surfaces frequently touched by a friend. [ 54 ] The statute in question made using a chemical weapon a crime; however, the separation of power between states and the federal government would be infringed upon if the Supreme Court interpreted the statute to extend to local crimes. [ 55 ] Therefore, the Court utilized the canon of constitutional avoidance and decided to "read the statute more narrowly, to exclude the defendant's conduct". [ 56 ] The application of this rule in the United Kingdom is not entirely clear. The literal meaning rule – that if "Parliament's meaning is clear, that meaning is binding no matter how absurd the result may seem" [ 59 ] – has a tension with the "golden rule", permitting courts to avoid absurd results in cases of ambiguity. At times, courts are not "concerned with what parliament intended, but simply with what it has said in the statute". [ 60 ] Different judges have different views. In Nothman v. London Borough of Barnet , Lord Denning in the Court of Appeal attacked "those who adopt the strict literal and grammatical construction of the words", saying that "[t]he literal method is now completely out-of-date [and] replaced by the ... 'purposive' approach". [ 61 ] On appeal against Denning's decision, however, Lord Russell in the House of Lords "disclaim[ed] the sweeping comments of Lord Denning". [ 62 ] For jurisprudence in the United States, "an absurdity is not mere oddity. The absurdity bar is high, as it should be. The result must be preposterous, one that 'no reasonable person could intend ' ". [ 63 ] [ 64 ] Moreover, the avoidance applies only when "it is quite impossible that Congress could have intended the result ... and where the alleged absurdity is so clear as to be obvious to most anyone". [ 65 ] "To justify a departure from the letter of the law upon that ground, the absurdity must be so gross as to shock the general moral or common sense", [ 66 ] with an outcome "so contrary to perceived social values that Congress could not have 'intended' it". [ 67 ] Critics of the use of canons argue that canons impute some sort of "omniscience" to the legislature, suggesting that it is aware of the canons when constructing the laws. In addition, it is argued that the canons give credence to judges who want to construct the law a certain way, imparting a false sense of justification to their otherwise arbitrary process. In a classic article, Karl Llewellyn argued that every canon had a "counter-canon" that would lead to the opposite interpretation of the statute.
[ 68 ] [ 69 ] Some scholars argue that interpretive canons should be understood as an open set, despite conventional assumptions that traditional canons capture all relevant language generalizations. Empirical evidence, for example, suggests that ordinary people readily incorporate a "nonbinary gender canon" and "quantifier domain restriction canon" in the interpretation of legal rules. [ 70 ] Other scholars argue that the canons should be reformulated as "canonical" or archetypical queries helping to direct genuine inquiry rather than purporting to somehow help provide answers in themselves. [ 71 ] The French philosopher Montesquieu (1689–1755) believed that courts should act as "the mouth of the law", but soon it was found that some interpretation is inevitable. Following the German scholar Friedrich Carl von Savigny (1779–1861), the four main interpretation methods are the grammatical (literal), the systematic, the historical, and the teleological interpretation. It is controversial [ citation needed ] whether there is a hierarchy between interpretation methods. Germans prefer a "grammatical" (literal) interpretation, because the statutory text has a democratic legitimation, and "sensible" interpretations are risky, in particular in view of German history. "Sensible" means different things to different people. The modern, common-law perception that courts actually make law is very different. In a German perception, courts can only further develop law ( Rechtsfortbildung ). All of the above methods may seem reasonable. The freedom of interpretation varies by area of law. Criminal law and tax law must be interpreted very strictly, and never to the disadvantage of citizens, [ citation needed ] but liability law requires more elaborate interpretation, because here (usually) both parties are citizens. Here the statute may even be interpreted contra legem in exceptional cases, if otherwise a patently unreasonable result would follow. The interpretation of international treaties is governed by another treaty, the Vienna Convention on the Law of Treaties , notably Articles 31–33. Some states (such as the United States) are not parties to the treaty, but recognize that the Convention is, at least in part, merely a codification of customary international law. The rule set out in the Convention is essentially that the text of a treaty is decisive unless it either leaves the meaning ambiguous or obscure, or leads to a result that is manifestly absurd or unreasonable. Recourse to "supplementary means of interpretation", such as the preparatory works, also known by the French designation of travaux préparatoires , is allowed only in that case. Within the United States, purposivism and textualism are the two most prevalent methods of statutory interpretation. [ 72 ] Also recognized is the intentionalist theory, which prioritizes and considers sources beyond the text. "Purposivists often focus on the legislative process, taking into account the problem that Congress was trying to solve by enacting the disputed law and asking how the statute accomplished that goal." [ 73 ] Purposivists believe in reviewing the processes surrounding the power of the legislative body as stated in the constitution as well as the rationale that a "reasonable person conversant with the circumstances underlying enactment would suppress the mischief and advance the remedy". [ 74 ] Purposivists would understand statutes by examining "how Congress makes its purposes known, through text and reliable accompanying materials constituting legislative history."
[ 75 ] [ 76 ] "In contrast to purposivists, textualists focus on the words of a statute, emphasizing text over any unstated purpose." [ 77 ] Textualists believe that everything which the courts need in deciding on cases are enumerated in the text of legislative statutes. In other words, if any other purpose was intended by the legislature then it would have been written within the statutes and since it is not written, it implies that no other purpose or meaning was intended. By looking at the statutory structure and hearing the words as they would sound in the mind of a skilled, objectively reasonable user of words, [ 78 ] textualists believe that they would respect the constitutional separation of power and best respect legislative supremacy . [ 74 ] Critiques of modern textualism on the United States Supreme Court abound. [ 79 ] [ 80 ] Intentionalists refer to the specific intent of the enacting legislature on a specific issue. Intentionalists can also focus on general intent. It is important to note that private motives do not eliminate the common goal that the legislature carries. This theory differs from others mainly on the types of sources that will be considered. Intentional theory seeks to refer to as many different sources as possible to consider the meaning or interpretation of a given statute. This theory is adjacent to a contextualist theory, which prioritizes the use of context to determine why a legislature enacted any given statute.
https://en.wikipedia.org/wiki/Statutory_interpretation
The Staudinger reaction is a chemical reaction in which an organic azide reacts with a phosphine or phosphite to produce an iminophosphorane . [ 1 ] [ 2 ] The reaction was discovered by and named after Hermann Staudinger . [ 3 ] The reaction follows this stoichiometry: R3P + R′N3 → R3P=NR′ + N2. The Staudinger reduction is conducted in two steps. First, the iminophosphorane-forming reaction is conducted by treating the azide with the phosphine. The intermediate, e.g. triphenylphosphine phenylimide , is then subjected to hydrolysis to produce a phosphine oxide and an amine : R3P=NR′ + H2O → R3P=O + R′NH2. The overall conversion is a mild method of reducing an azide to an amine. Triphenylphosphine or tributylphosphine are most commonly used, yielding triphenylphosphine oxide or tributylphosphine oxide as a side product in addition to the desired amine. An example of a Staudinger reduction is the organic synthesis of the pinwheel compound 1,3,5-tris(aminomethyl)-2,4,6-triethylbenzene. [ 4 ] The reaction mechanism centers around the formation of an iminophosphorane through nucleophilic addition of the aryl or alkyl phosphine at the terminal nitrogen atom of the organic azide and expulsion of diatomic nitrogen . The iminophosphorane is then hydrolyzed in the second step to the amine and a phosphine oxide byproduct. Of interest in chemical biology is the Staudinger ligation , which has been called one of the most important bioconjugation methods. [ 5 ] Two versions of the Staudinger ligation have been developed. Both begin with the classic iminophosphorane reaction. In the classical Staudinger ligation, the organophosphorus compound becomes incorporated into the nascent amide. [ 6 ] Typically, appended to the organophosphorus component are reporter groups such as fluorophores. In the traceless Staudinger ligation, the organophosphorus group dissociates, giving a phosphorus-free peptide or bioconjugate. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Staudinger_reaction
The Staudinger synthesis , also called the Staudinger ketene-imine cycloaddition, is a chemical synthesis in which an imine 1 reacts with a ketene 2 through a non-photochemical [2+2] cycloaddition to produce a β-lactam 3 . [ 1 ] The reaction carries particular importance in the synthesis of β-lactam antibiotics . [ 2 ] The Staudinger synthesis should not be confused with the Staudinger reaction , a phosphine or phosphite reaction used to reduce azides to amines. Reviews on the mechanism, stereochemistry, and applications of the reaction have been published. [ 3 ] [ 4 ] [ 5 ] The reaction was discovered in 1907 by the German chemist Hermann Staudinger . [ 6 ] The reaction did not attract interest until the 1940s, when the structure of penicillin was elucidated. The β-lactam moiety of the first synthetic penicillin was constructed using this cycloaddition, [ 7 ] and it remains a valuable tool in synthetic organic chemistry. The first step is a nucleophilic attack by the imine nitrogen on the carbonyl carbon to generate a zwitterionic intermediate. Electron-donating groups on the imine facilitate this step, while electron-withdrawing groups impede the attack. [ 8 ] The second step is either an intramolecular nucleophilic ring closure or a conrotatory electrocyclic ring closure . [ 9 ] The second step is different from typical electrocyclic ring closures as predicted by the Woodward–Hoffmann rules . Under photochemical and microwave conditions the intermediate's 4π-electron system cannot undergo a disrotatory ring closure to form the β-lactam, possibly because the two double bonds are not coplanar. [ 10 ] Some products of the Staudinger synthesis differ from those predicted by the torquoelectronic model. [ 11 ] In addition, the electronic structure of the transition state differs from that of other conrotatory ring closures. [ 11 ] There is evidence from computational studies on model systems that in the gas phase the mechanism is concerted. [ 5 ] The stereochemistry of the Staudinger synthesis can be difficult to predict because either step can be rate-determining . [ 12 ] If the ring closure step is rate-determining, stereochemical predictions based on torquoselectivity are reliable. [ 12 ] Other factors that affect the stereochemistry include the initial geometry of the imine. Generally, (E)-imines form cis β-lactams while (Z)-imines form trans β-lactams. [ 5 ] Other substituents affect the stereochemistry as well. Ketenes with strong electron-donating substituents mainly produce cis β-lactams, while ketenes with strong electron-withdrawing substituents generally produce trans β-lactams. The ketene substituent affects the transition state by either speeding up or slowing down the progress towards the β-lactam. A slower reaction allows for the isomerization of the imine, which generally results in a trans product. [ 11 ] Reviews on asymmetric induction of the Staudinger synthesis, including the use of organic and organometallic catalysts, have been published. [ 1 ] [ 5 ] [ 13 ] The imine can be replaced by an olefin to produce a cyclobutanone , by a carbonyl compound to produce a β-lactone , or by a carbodiimide to produce a 4-imino-β-lactam . [ 1 ] The Staudinger synthesis and its variations are all ketene cycloadditions . In 2014, Doyle and coworkers reported a one-pot, multicomponent Staudinger synthesis of β-lactams from azides and two diazo compounds. The reaction occurs by a rhodium acetate-catalyzed reaction between the aryldiazoacetate and the organic azide to form an imine.
A Wolff rearrangement of the diazoacetoacetate enone forms a stable ketene, which reacts with the imine to form a stable β-lactam compound. The solvent used for this reaction is dichloromethane (DCM) and the solution needs to rest for 3 hours at room temperature. The yield of the reaction is about 99%. [ 14 ] The reaction with sulfenes instead of ketenes, leading to β-sultams , is called the Sulfa-Staudinger cycloaddition . An example of the Sulfa-Staudinger cycloaddition is the reaction of benzylidenemethylamine with ethanesulfonyl chloride to give a β-sultam. Tetrahydrofuran (THF) was used as the solvent for this reaction, and the solution needed to rest for 24 hours. [ 15 ]
https://en.wikipedia.org/wiki/Staudinger_synthesis
Staurostoma is a genus of cnidarians of the family Laodiceidae . [ 2 ] The genus contains two described and currently accepted species. [ 2 ] [ 1 ]
https://en.wikipedia.org/wiki/Staurostoma
Staurostoma falklandica is a species of jellyfish first discovered in 1907 by the Scottish Antarctic Expedition aboard the S.S. Scotia in Stanley Harbour, Falkland Islands . [ 2 ] Staurostoma falklandica is very similar to the related White cross jellyfish , with the distinguishing feature being the much more diminutive second set of tentacles. [ 3 ] : 236 It has a thin umbrella, measuring 90 mm in diameter, with a stomach extending in four radiating arms across it. The mouth is the same length as the stomach, and its edges are a complicated series of folds. The gonads are along the edge of the stomach in deeper folds. [ 3 ] : 235 There are several hundred principal tentacles closely packed round the edge of the bell. In between each pair of tentacles is a much smaller tentacle, similar in shape. Between the smaller and larger tentacles is a cordylus (sensory club). [ 4 ] [ 3 ] : 236 Staurostoma falklandica is a marine species which inhabits the southern hemisphere near Antarctica. Observations have been made in Chile, Argentina, Australia and New Zealand. [ 2 ]
https://en.wikipedia.org/wiki/Staurostoma_falklandica
A stave is a narrow length of wood with a slightly bevelled edge to form the sides of barrels , tanks, tubs , vats and pipelines , originally handmade by coopers . [ 1 ] Staves have been used in the construction of large holding tanks and penstocks at hydro power developments . [ 2 ] They are also used in the construction of certain musical instruments with rounded bodies or backs. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Stave_(wood)
Stavros Avramidis (born in Kavala , Greece, in 1958) [ 1 ] is a Greek Canadian wood scientist and professor at the University of British Columbia in Canada, who is an elected fellow ( FIAWS ) and president of the International Academy of Wood Science for the period 2023–2026. [ 2 ] [ 3 ] Avramidis was born in Kavala , Greece , on April 6, 1958, and grew up in Thessaloniki . He attended the Department of Forestry at the Aristotle University of Thessaloniki and received his university degree in 1981. Following that, he pursued research-based postgraduate studies (1982–1983; M.S. in the area of composite products) and doctoral studies (1983–1986) in the United States at the State University of New York College of Environmental Science and Forestry , [ 4 ] in the area of biopolymer physics under the guidance of John F. Siau. [ 5 ] Avramidis began his academic career in 1987 in Canada at the University of British Columbia as an assistant professor at the Department of Wood Science in the Faculty of Forestry. He was appointed associate professor in 1993 and full professor in 1998. Avramidis has served as the Head of the UBC Department of Wood Science for two consecutive terms, from 2016 to the present. [ 6 ] Avramidis's research team has presented research work on the physical and drying properties of wood. His applied research addresses practical issues in the Canadian wood industry related to energy optimization and upgrading production methods, using acoustic, electrical, and optical techniques, as well as radio wave methods, simulation, and artificial intelligence. [ 7 ] [ 8 ] Avramidis, along with his colleagues, has authored over 250 scientific articles and more than 100 industrial studies, and his research work had received almost 3,000 citations in the Scopus database as of July 2024. [ 9 ] In 2012, Avramidis was selected as a member of the editorial board of the journal Wood Material Science and Engineering . [ 10 ] He has also been a member of the editorial boards of Holzforschung , Drying Technology , Wood Research , European Journal of Wood and Wood Products and Maderas. Ciencia y tecnologia . [ 11 ] [ 12 ] [ 13 ] In 2020, his name was included in the Mendeley Data rankings, published in the journal PLoS Biology , [ 14 ] for the international impact of his years-long research in wood drying . In 2022, Avramidis received the Ternryd Award 2022 from the Swedish Linnaeus Academy Research Foundation [ 15 ] for his research in wood science. [ 16 ] In June 2023, Avramidis was elected president of the International Academy of Wood Science , [ 17 ] for the years 2023–2026. In October 2023, a refereed meta-research study conducted by John Ioannidis and his team at Stanford University included Avramidis in the Elsevier Data 2022, where he was placed in the top 2% of researchers in the area of wood physics. [ 18 ] In August 2024, Avramidis acquired the same international distinction for his research work in wood science ( Elsevier Data 2023 ; career data). [ 19 ]
https://en.wikipedia.org/wiki/Stavros_Avramidis
Ste5 is a MAPK scaffold protein involved in the mating of yeast . The active complex is formed by interactions with the MAPK Fus3 , the MAPK kinase (MAPKK) Ste7, and the MAPKK kinase Ste11. After the induction of mating by an appropriate mating pheromone (either a-factor or α-factor), Ste5 and its associated proteins are recruited to the membrane . Ste4 helps to recruit Ste5 but is not required for the attachment of Ste5 to the membrane, which depends on a pleckstrin homology domain as well as an amphipathic alpha-helical domain in the amino terminus. [ 1 ] During mating, Fus3 MAPK and Ptc1 phosphatase compete to control 4 phosphorylation sites on the Ste5 scaffold. When all 4 sites have been dephosphorylated by Ptc1, Fus3 is released and becomes active. [ 2 ] Ste5 plays 2 main roles in the mating signal pathway. Ste5 oligomerization is very important for stable membrane recruitment. In one model, the activation of the pathway occurs at the same time that Ste5 is converted from a less active, closed form of Ste5 to an active Ste5 dimer that can bind to the beta-gamma subunit of the heterotrimeric G-protein and form a lattice for the MAPK cascade to assemble on. [ 6 ] Not only does Ste5 contribute to propagation of the pheromone signal, but it is also involved in the downregulation of signalling. It stimulates autophosphorylation of Fus3, which results in phosphorylation of Ste5, causing a downregulation in signalling. [ 7 ] Ste5 also catalytically unlocks Fus3 (but not its homologue Kss1) for phosphorylation by Ste7. Both this catalytically active Ste5 domain and Ste7 are required for full Fus3 activation, which explains why Fus3 is activated by only the mating pathway, and remains inactive during other pathways which also utilize Ste7. [ 8 ] Ste5 can be localized to the cytoplasm, mating projection tip, nucleus, and plasma membrane. [ 9 ] Ste5 is involved in several biological processes. [ 9 ]
https://en.wikipedia.org/wiki/Ste5
Organizations: A steady-state economy is an economy made up of a constant stock of physical wealth (capital) and a constant population size. In effect, such an economy does not grow in the course of time. [ 1 ] : 366–369 [ 2 ] : 545 [ 3 ] [ 4 ] The term usually refers to the national economy of a particular country, but it is also applicable to the economic system of a city, a region, or the entire world . Early in the history of economic thought , classical economist Adam Smith of the 18th century developed the concept of a stationary state of an economy: Smith believed that any national economy in the world would sooner or later settle in a final state of stationarity . [ 5 ] : 78 Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly . [ 6 ] : 303 [ 7 ] : 32f [ 8 ] : 85 As Daly's concept of a steady-state includes the ecological analysis of natural resource flows through the economy, his concept differs from the original classical concept of a stationary state . One other difference is that Daly recommends immediate political action to establish the steady-state economy by imposing permanent government restrictions on all resource use, whereas economists of the classical period believed that the final stationary state of any economy would evolve by itself without any government intervention. [ 9 ] : 295f [ 10 ] : 55f Critics of the steady-state economy usually object to it by arguing that resource decoupling , technological development , and the operation of market mechanisms are capable of overcoming resource scarcity, pollution, or population overshoot . Proponents of the steady-state economy, on the other hand, maintain that these objections remain insubstantial and mistaken — and that the need for a steady-state economy is becoming more compelling every day. [ 11 ] [ 12 ] [ 13 ] [ 8 ] : 148–155 A steady-state economy is not to be confused with economic stagnation . Whereas a steady-state economy is established as the result of deliberate political action, economic stagnation is the unexpected and unwelcome failure of a growth economy . An ideological contrast to the steady-state economy is formed by the concept of a post-scarcity economy . Since the 1970s, the concept of a steady-state economy has been associated mainly with the work of leading ecological economist Herman Daly — to such an extent that even his boldest critics recognize the prominence of his work. [ 14 ] : 167 [ 7 ] : 32 [ 15 ] : 9 Herman Daly defines his concept of a steady-state economy as an economic system made up of a constant stock of physical wealth (capital) and a constant stock of people (population), both stocks to be maintained by a flow of natural resources through the system. The first component, the constant stocks, is similar to the concept of the stationary state , originally used in classical economics; the second component, the flow of natural resources, is a new ecological feature , presently also used in the academic discipline of ecological economics. The durability of both of the constant stocks is to be maximized: The more durable the stock of capital is, the smaller the flow of natural resources is needed to maintain the stock; likewise, a 'durable' population means a population enjoying a high life expectancy — something desirable by itself — maintained by a low birth rate and an equally low death rate. Taken together, higher durability translates into better ecology in the system as a whole. 
[ 16 ] : 14–19 Daly's concept of a steady-state economy is based on the vision that man's economy is an open subsystem embedded in a finite natural environment of scarce resources and fragile ecosystems. The economy is maintained by importing valuable natural resources at the input end and exporting valueless waste and pollution at the output end in a constant and irreversible flow. Any subsystem of a finite nongrowing system must itself at some point also become nongrowing and start maintaining itself in a steady-state as far as possible. This vision is opposed to mainstream neoclassical economics , where the economy is represented by an isolated and circular model with goods and services exchanging endlessly between companies and households, without exhibiting any physical contact with the natural environment. [ 17 ] : xiii In the early 2010s, reviewers sympathetic towards Daly's concept of a steady-state economy passed the concurrent judgement that although his concept remains beyond what is politically feasible at present, there is room for mainstream thinking and collective action to approach the concept in the future. [ 2 ] : 549 [ 18 ] : 84 [ 8 ] : 83 In 2022, a study (chapters 4–5) described degrowth toward a steady-state economy as possible and probably positive. The study ends with the words: "The case for a transition to a steady-state economy with low throughput and low emissions, initially in the high-income economies and then in rapidly growing economies, needs more serious attention and international cooperation." [ 19 ] For centuries, economists and other scholars have considered matters of natural resource scarcity and limits to growth, from the early classical economists in the 18th and 19th centuries down to the ecological concerns that emerged in the second half of the 20th century and developed into the formation of ecological economics as an independent academic subdiscipline in economics . From Adam Smith and onwards, economists in the classical period of economic theorising described the general development of society in terms of a contrast between the scarcity of arable agricultural land on the one hand, and the growth of population and capital on the other hand. The incomes from gross production were distributed as rents, profits and wages among landowners, capitalists and labourers respectively, and these three classes were incessantly engaged in the struggle for increasing their own share. The accumulation of capital (net investments) would sooner or later come to an end as the rate of profit fell to a minimum or to nil . At that point, the economy would settle in a final stationary state with a constant population size and a constant stock of capital. [ 16 ] : 3 [ 9 ] : 295 Adam Smith's magnum opus on The Wealth of Nations , published in 1776, laid the foundation of classical economics in Britain. Smith thereby disseminated and established a concept that has since been a cornerstone in economics throughout most of the world: In a liberal capitalist society , provided with a stable institutional and legal framework, an ' invisible hand ' will ensure that the enlightened self-interest of all members of society will contribute to the growth and prosperity of society as a whole, thereby leading to an 'obvious and simple system of natural liberty'. [ 5 ] : 349f, 533f Smith was convinced of the beneficial effect of the enlightened self-interest on the wealth of nations; but he was less certain this wealth would grow forever.
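The stock-flow logic running through this description (a constant stock maintained by a throughput of resources, with more durable stocks needing less throughput) can be made concrete with a small back-of-the-envelope sketch. The figures and the simple "throughput equals stock divided by average lifetime" relation below are an added illustration under that assumption, not a calculation taken from Daly's texts.

    # Illustrative sketch with made-up numbers: in a steady state the capital
    # stock is constant, so the matter-energy throughput needed to maintain it
    # is roughly the stock divided by the average lifetime (durability) of its items.
    capital_stock = 1_000_000  # physical artefacts held constant over time

    for average_lifetime_years in (10, 20, 50):
        replacement_throughput = capital_stock / average_lifetime_years
        print(f"average lifetime {average_lifetime_years:>2} years -> "
              f"required throughput {replacement_throughput:,.0f} units per year")

    # Doubling the durability of the stock halves the resource throughput
    # required to maintain the same constant stock.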
Smith observed that any country in the world found itself in either a 'progressive', a 'stationary', or a 'declining' state: Although England was wealthier than its North American colonies, wages were higher in the latter place as wealth in North America was growing faster than in England; hence, North America was in the 'cheerful and hearty' progressive state. In China, on the other hand, wages were low, the condition of poor people was scantier than in any nation in Europe, and more marriages were contracted here because the 'horrid' killing of newborn babies was permitted and even widely practised; hence, China was in the 'dull' stationary state, although it did not yet seem to be declining. In nations situated in the 'melancholic' declining state, the higher ranks of society would fall down and settle for occupation amid the lower ranks, while the lowest ranks would either subsist on a miserable and insufficient wage, resort to begging or crime, or slide into starvation and early death. Bengal and some other English settlements in the East Indies possibly found themselves in this state, Smith reckoned. [ 5 ] : 59–68 Smith pointed out that as wealth was growing in any nation, the rate of profit would tend to fall and investment opportunities would diminish. In a nation that had thereby reached this 'full complement of riches', society would finally settle in a stationary state with a constant stock of people and capital. In an 18th-century anticipation of The Limits to Growth ( see below ), Smith described the state as follows: In a country which had acquired that full complement of riches which the nature of its soil and climate, and its situation with respect to other countries, allowed it to acquire; which could, therefore, advance no further, and which was not going backwards, both the wages of labour and the profits of stock would probably be very low. In a country fully peopled in proportion to what either its territory could maintain or its stock employ, the competition for employment would necessarily be so great as to reduce the wages of labour to what was barely sufficient to keep up the number of labourers, and, the country being already fully peopled, that number could never be augmented. In a country fully stocked in proportion to all the business it had to transact, as great a quantity of stock would be employed in every particular branch as the nature and extent of the trade would admit. The competition, therefore, would everywhere be as great, and consequently the ordinary profit as low as possible. [ 5 ] : 78 According to Smith, Holland seemed to be approaching this stationary state, although at a much higher level than in China. Smith believed the laws and institutions of China prevented this country from achieving the potential wealth its soil, climate and situation might have admitted of. [ 5 ] : 78f Smith was unable to provide any contemporary examples of a nation in the world that had in fact reached the full complement of riches and thus had settled in stationarity, because, as he conjectured, "... perhaps no country has ever yet arrived at this degree of opulence." [ 5 ] : 78 In the early 19th century, David Ricardo was the leading economist of the day and the champion of British laissez-faire liberalism . He is known today for his free trade principle of comparative advantage , and for his formulation of the controversial labor theory of value . Ricardo replaced Adam Smith's empirical reasoning with abstract principles and deductive argument . 
This new methodology would later become the norm in economics as a science. [ 9 ] : 135f In Ricardo's times, Britain's trade with the European continent was somewhat disrupted during the Napoleonic Wars that had raged since 1803. The Continental System brought into effect a large-scale embargo against British trade, whereby the nation's food supply came to rely heavily on domestic agriculture to the benefit of the landowning classes. When the wars ended with Napoleon's final defeat in 1815, the landowning classes dominating the British parliament had managed to tighten the existing Corn Laws in order to retain their monopoly status on the home market during peacetime. The controversial Corn Laws were a protectionist two-sided measure of subsidies on corn exports and tariffs on corn imports. The tightening was opposed by both the capitalist and the labouring classes, as the high price of bread effectively reduced real profits and real wages in the economy. So was the political setting when Ricardo published his treatise On the Principles of Political Economy and Taxation in 1817. [ 20 ] : 6–10 According to Ricardo, the limits to growth were ever present due to scarcity of arable agricultural land in the country. In the wake of the wartime period, the British economy seemed to be approaching the stationary state as population was growing, plots of land with lower fertility were put into agricultural use, and the rising rents of the rural landowning class were crowding out the profits of the urban capitalists. This was the broad outline of Ricardo's controversial land rent theory . Ricardo believed that the only way for Britain to avoid the stationary state was to increase her volume of international trade : The country should export more industrial products and start importing cheap agricultural products from abroad in turn. However, this course of development was impeded by the Corn Laws that seemed to be hampering both the industrialisation and the internationalization of the British economy. In the 1820s, Ricardo and his followers – Ricardo himself died in 1823 – directed much of their fire at the Corn Laws in order to have them repealed, and various other free trade campaigners borrowed indiscriminately from Ricardo's doctrines to suit their agenda. [ 20 ] : 202f The Corn Laws were not repealed before 1846. In the meantime, the British economy kept growing, a fact that effectively undermined the credibility and thrust of Ricardian economics in Britain; [ 20 ] : 223 but Ricardo had by now established himself as the first stationary state theorist in the history of economic thought. [ 9 ] : 88f Ricardo's preoccupation with class conflict anticipated the work of Karl Marx ( see below ). John Stuart Mill was the leading economist, philosopher and social reformer in mid-19th century Britain. His economics treatise on the Principles of Political Economy , published in 1848, attained status as the standard textbook in economics throughout the English-speaking world until the turn of the century. [ 9 ] : 179 A champion of classical liberalism , Mill believed that an ideal society should allow all individuals to pursue their own good without any interference from others or from government. [ 21 ] Also a utilitarian philosopher , Mill regarded the 'Greatest Happiness Principle' as the ultimate ideal for a harmonious society: As the means of making the nearest approach to this ideal, utility would enjoin, first, that laws and social arrangements should place the happiness ... 
of every individual, as nearly as possible in harmony with the interest of the whole; and secondly, that education and opinion, which have so vast a power over human character, should so use that power as to establish in the mind of every individual an indissoluble association between his own happiness and the good of the whole; ... [ 22 ] : 19 Mill's concept of the stationary state was strongly coloured by these ideals. [ 16 ] : 16 [ 9 ] : 213 Mill conjectured that the stationary state of society was not too far away in the future: It must always have been seen, more or less distinctly, by political economists, that the increase of wealth is not boundless; that at the end of what they term the progressive state lies the stationary state, that all progress in wealth is but a postponement of this, and that each step in advance is an approach to it. We have now been led to recognize that this ultimate goal is at all times near enough to be fully in view; that we are always on the verge of it, and that, if we have not reached it long ago, it is because the goal itself flies before us. [ 23 ] : 592 Contrary to both Smith and Ricardo before him, Mill took an optimistic view on the future stationary state. Mill could not "... regard the stationary state of capital and wealth with the unaffected aversion so generally manifested toward it by political economists of the old school." [ 23 ] : 593 Instead, Mill attributed many important qualities to this future state, he even believed the state would bring about "... a very considerable improvement on our present condition." [ 23 ] : 593 According to Mill, the stationary state was at one and the same time inevitable, necessary and desirable: It was inevitable , because the accumulation of capital would bring about a falling rate of profit that would diminish investment opportunities and hamper further accumulation; it was also necessary , because mankind had to learn how to reduce its size and its level of consumption within the boundaries set by nature and by employment opportunities; finally, the stationary state was desirable , as it would ease the introduction of public income redistibution schemes, create more equality and put an end to man's ruthless struggle to get by — instead, the human spirit would be liberated to the benefit of more elevated social and cultural activities, 'the graces of life'. [ 23 ] : 592–596 Hence, Mill was able to express all of his liberal ideals for mankind through his concept of the stationary state. [ 16 ] : 14f [ 9 ] : 213 It has been argued that Mill essentially made a quality-of-life argument for the stationary state. [ 18 ] : 79 When the influence of John Stuart Mill and his Principles declined, the classical-liberalist period of economic theorising came to an end. By the turn of the 19th century, Marxism and neoclassical economics had emerged to dominate economics: Taken together, it has been argued that "... if Judeo-Christian monotheism took nature out of religion, Anglo-American economists (after about 1880) took nature out of economics." [ 10 ] : xx Almost one century later, Herman Daly has reintegrated nature into economics in his concept of a steady-state economy ( see below ). John Maynard Keynes was the paradigm founder of modern macroeconomics , and is widely considered today to be the most influential economist of the 20th century. Keynes rejected the basic tenet of classical economics that free markets would lead to full employment by themselves . 
Consequently, he recommended government intervention to stimulate aggregate demand in the economy, a macroeconomic policy now known as Keynesian economics . Keynes also believed that capital accumulation would reach saturation at some point in the future. In his essay from 1930 on The Economic Possibilities of Our Grandchildren , Keynes ventured to look one hundred years ahead into the future and predict the standard of living in the 21st century. Writing at the beginning of the Great Depression , Keynes rejected the prevailing "bad attack of economic pessimism" of his own time and foresaw that by 2030, the grandchildren of his generation would live in a state of abundance, where saturation would have been reached. People would find themselves liberated from such economic activities as saving and capital accumulation, and be able to get rid of 'pseudo-moral principles' — avarice, exaction of interest, love of money — that had characterized capitalistic societies so far. Instead, people would devote themselves to the true art of life, to live "wisely and agreeably and well." Mankind would finally have solved "the economic problem," that is, the struggle for existence. [ 29 ] [ 30 ] : 2, 11 The similarity between John Stuart Mill's concept of the stationary state ( see above ) and Keynes's predictions in this essay has been noted. [ 30 ] : 15 It has been argued that although Keynes was right about future growth rates, he underestimated the inequalities prevailing today, both within and across countries. He was also wrong in predicting that greater wealth would induce more leisure spent; in fact, the reverse trend seems to be true. [ 30 ] : 3–6 In his magnum opus on The General Theory of Employment, Interest and Money , Keynes looked only one generation ahead into the future and predicted that state intervention balancing aggregate demand would by then have caused capital accumulation to reach the point of saturation. The marginal efficiency of capital as well as the rate of interest would both be brought down to zero, and — if population was not increasing rapidly — society would finally "... attain the conditions of a quasi-stationary community where change and progress would result only from changes in technique, taste, population and institutions ..." [ 31 ] : 138f Keynes believed this development would bring about the disappearance of the rentier class, something he welcomed: Keynes argued that rentiers incurred no sacrifice for their earnings, and their savings did not lead to productive investments unless aggregate demand in the economy was sufficiently high. "I see, therefore, the rentier aspect of capitalism as a transitional phase which will disappear when it has done its work." [ 31 ] : 237 The economic expansion following World War II took place while mainstream economics largely neglected the importance of natural resources and environmental constraints in the development. Addressing this discrepancy, ecological concerns emerged in academia around 1970. Later on, these concerns developed into the formation of ecological economics as an academic subdiscipline in economics. After the ravages of World War II , the industrialised part of the world experienced almost three decades of unprecedented and prolonged economic expansion. This expansion — known today as the Post–World War II economic expansion — was brought about by international financial stability, low oil prices and ever increasing labour productivity in manufacturing. 
During the era, all the advanced countries who founded — or later joined — the OECD enjoyed robust and sustained growth rates as well as full employment. In the 1970s, the expansion ended with the 1973 oil crisis , resulting in the 1973–75 recession and the collapse of the Bretton Woods monetary system . Throughout this era, mainstream economics — dominated by both neoclassical economics and Keynesian economics — developed theories and models where natural resources and environmental constraints were neglected. [ 27 ] : 46f [ 32 ] : 3f Conservation issues related specifically to agriculture and forestry were left to specialists in the subdiscipline of environmental economics at the margins of the mainstream. As the theoretical framework of neoclassical economics — namely general equilibrium theory — was uncritically adopted and maintained by even environmental economics, this subdiscipline was rendered largely unable to consider important issues of concern to environmental policy. [ 33 ] : 416–422 In the years around 1970, the widening discrepancy between an ever-growing world economy on the one hand, and a mainstream economics discipline not taking into account the importance of natural resources and environmental constraints on the other hand, was finally addressed — indeed, challenged — in academia by a few unorthodox economists and researchers. [ 6 ] : 296–298 During the short period of time from 1966 to 1972, four works were published addressing the importance of natural resources and the environment to human society: Taken together, these four works were seminal in bringing about the formation of ecological economics later on. [ 6 ] : 301–305 Although most of the theoretical and foundational work behind ecological economics was in place by the early 1970s, a long gestation period elapsed before this new academic subdiscipline in economics was properly named and institutionalized. Ecological economics was formally founded in 1988 as the culmination of a series of conferences and meetings through the 1980s, where key scholars interested in the ecology-economy interdependency were interacting with each other. The most important people involved in the establishment were Herman Daly and Robert Costanza from the US; AnnMari Jansson from Sweden; and Juan Martínez-Alier from Spain (Catalonia). [ 6 ] : 308–310 Since 1989, the discipline has been organised in the International Society for Ecological Economics that publishes the journal of Ecological Economics . When the ecological economics subdiscipline was established, Herman Daly's 'preanalytic vision' of the economy was widely shared among the members who joined in: The human economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment), and any subsystem of a fixed nongrowing system must itself at some point also become nongrowing. Indeed, it has been argued that the subdiscipline itself was born out of frustration with the unwillingness of the established disciplines to accept this vision. [ 48 ] : 266 However, ecological economics has since been overwhelmed by the influence and domination of neoclassical economics and its everlasting free market orthodoxy . This development has been deplored by activistic ecological economists as an 'incoherent', 'shallow' and overly 'pragmatic' slide. [ 49 ] [ 50 ] [ 51 ] In the 1970s, Herman Daly became the world's leading proponent of a steady-state economy. [ 18 ] : 81f Throughout his career, Daly published several books and articles on the subject. 
[ 16 ] [ 17 ] [ 52 ] : 117–124 [ 53 ] He also helped to found the Center for the Advancement of the Steady-State Economy (CASSE). [ 54 ] He received several prizes and awards in recognition of his work. [ 55 ] According to two independent comparative studies of American Daly's steady-state economics versus the later, competing school of degrowth from continental Europe, no differences of analytical substance exist between the two schools; only, Daly's bureaucratic — or even technocratic — top-down management of the economy fares badly with the more radical grassroots appeal of degrowth, as championed by French political scientist Serge Latouche ( see below ). [ 2 ] : 549 [ 8 ] : 146–148 The premise underlying Daly's concept of a steady-state economy is that the economy is an open subsystem of a finite and non-growing ecosystem (earth's natural environment). The economy is maintained by importing low-entropy matter-energy (resources) from nature; these resources are put through the economy, being transformed and manufactured into goods along the way; eventually, the throughput of matter-energy is exported to the environment as high-entropy waste and pollution. Recycling of material resources is possible, but only by using up some energy resources as well as an additional amount of other material resources; and energy resources, in turn, cannot be recycled at all, but are dissipated as waste heat . Out of necessity, then, any subsystem of a fixed nongrowing system must itself at some point also become nongrowing. [ 17 ] : xiii Daly argues that nature has provided basically two sources of wealth at man's disposal, namely a stock of terrestrial mineral resources and a flow of solar energy . An 'asymmetry' between these two sources of wealth exist in that we may — within some practical limits — extract the mineral stock at a rate of our own choosing (that is, rapidly), whereas the flow of solar energy is reaching earth at a rate beyond human control. Since the Sun will continue to shine on earth at a fixed rate for billions of years to come, it is the terrestrial mineral stock — and not the Sun — that constitutes the crucial scarcity factor regarding man's economic future. [ 17 ] : 21f Daly points out that today's global ecological problems are rooted in man's historical record: Until the Industrial Revolution that took place in Britain in the second half of the 18th century, man lived within the limits imposed by what Daly terms a 'solar-income budget': The Palaeolithic tribes of hunter-gatherers and the later agricultural societies of the Neolithic and onwards subsisted primarily — though not exclusively — on earth's biosphere , powered by an ample supply of renewable energy, received from the Sun. The Industrial Revolution changed this situation completely, as man began extracting the terrestrial mineral stock at a rapidly increasing rate. The original solar-income budget was thereby broken and supplemented by the new, but much scarcer source of wealth. Mankind still lives in the after-effect of this revolution. Daly cautions that more than two hundred years of worldwide industrialisation is now confronting mankind with a range of problems pertaining to the future existence and survival of our species: The entire evolution of the biosphere has occurred around a fixed point — the constant solar-energy budget. Modern man is the only species to have broken the solar-income budget constraint, and this has thrown him out of equilibrium with the rest of the biosphere. 
Natural cycles have become overloaded, and new materials have been produced for which no natural cycles exist. Not only is geological capital being depleted, but the basic life-support services of nature are impaired in their functioning by too large a throughput from the human sector. [ 17 ] : 23 Following the work of Nicholas Georgescu-Roegen , Daly argues that the laws of thermodynamics restrict all human technologies and apply to all economic systems: Entropy is the basic physical coordinate of scarcity. Were it not for entropy, we could burn the same gallon of gasoline over and over, and our capital stock would never wear out. Technology is unable to rise above the basic laws of physics, so there is no question of ever 'inventing' a way to recycle energy. [ 17 ] : 36 This view on the role of technology in the economy was later termed 'entropy pessimism' ( see below ). [ 56 ] : 116 In Daly's view, mainstream economists tend to regard natural resource scarcity as only a relative phenomenon, while human needs and wants are granted absolute status: It is believed that the price mechanism and technological development (however defined) is capable of overcoming any scarcity ever to be faced on earth; it is also believed that all human wants could and should be treated alike as absolutes, from the most basic necessities of life to the extravagant and insatiable craving for luxuries. Daly terms this belief 'growthmania', which he finds pervasive in modern society. In opposition to the dogma of growthmania, Daly submits that "... there is such a thing as absolute scarcity, and there is such a thing as purely relative and trivial wants". [ 17 ] : 41 Once it is recognised that scarcity is imposed by nature in an absolute form by the laws of thermodynamics and the finitude of earth; and that some human wants are only relative and not worthy of satisfying; then we are all well on the way to the paradigm of a steady-state economy, Daly concludes. Consequently, Daly recommends that a system of permanent government restrictions on the economy is established as soon as possible, a steady-state economy. Whereas the classical economists believed that the final stationary state would settle by itself as the rate of profit fell and capital accumulation came to an end ( see above ), Daly wants to create the steady-state politically by establishing three institutions of the state as a superstructure on top of the present market economy: The purpose of these three institutions is to stop and prevent further growth by combining what Daly calls "a nice reconciliation of efficiency and equity" and providing "the ecologically necessary macrocontrol of growth with the least sacrifice in terms of microlevel freedom and variability." [ 17 ] : 69 Among the generation of his teachers, Daly ranks Nicholas Georgescu-Roegen and Kenneth E. Boulding as the two economists he has learned the most from. [ 17 ] : xvi However, both Georgescu-Roegen and Boulding have assessed that a steady-state economy may serve only as a temporary societal arrangement for mankind when facing the long-term issue of global mineral resource exhaustion : Even with a constant stock of people and capital, and a minimised (yet constant) flow of resources put through the world economy, earth's mineral stock will still be exhausted, although at a slower rate than is presently the situation ( see below ). 
[ 1 ] : 366–369 [ 37 ] : 165–167 Responding specifically to the criticism levelled at him by Georgescu-Roegen , Daly concedes that a steady-state economy will serve only to postpone, and not to prevent, the inevitable mineral resource exhaustion: "A steady-state economy cannot last forever, but neither can a growing economy, nor a declining economy". [ 16 ] : 369 A frank and committed Protestant , Daly further argues that ... the steady-state economy is based on the assumption that creation will have an end — that it is finite temporally as well as spatially. ... Only God can raise any part of his creation out of time and into eternity . As mere stewards of creation, all we can do is to avoid wasting the limited capacity of creation to support present and future life. [ 16 ] : 370 Later, several other economists in the field have agreed that not even a steady-state economy can last forever on earth. [ 57 ] : 90f [ 58 ] : 105–107 [ 59 ] : 270 [ 2 ] : 548 [ 60 ] : 37 In 2021, a study examined whether the current situation confirms the predictions of the book Limits to Growth . It concluded that global GDP will begin to decline within about ten years; if this does not come about through a deliberate transition, it will come about through ecological disaster. [ 61 ] The world's mounting ecological problems have stimulated interest in the concept of a steady-state economy. Since the 1990s, most metrics have provided evidence that the volume of the world economy already far exceeds critical global limits to economic growth. [ 62 ] According to the ecological footprint measure , Earth's carrying capacity — that is, Earth's long-term capacity to sustain human populations and consumption levels — was exceeded by some 30 percent in 1995. By 2018, this figure had increased to some 70 percent. [ 63 ] [ 64 ] In 2020, a multinational team of scientists published a study arguing that overconsumption is the biggest threat to sustainability and that a drastic change in lifestyle is necessary to resolve the ecological crisis. According to one of the authors, Julia Steinberger: "To protect ourselves from the worsening climate crisis, we must reduce inequality and challenge the notion that riches, and those who possess them, are inherently good." The research was published on the site of the World Economic Forum (WEF). Klaus Schwab , founder and former chairman of the WEF, calls for a " great reset of capitalism". [ 65 ] In effect, mankind is confronted by an ecological crisis in which humans are living outside of planetary boundaries , with significant effects on human health and wellbeing . The significant impact of human activities on Earth's ecosystems has motivated some geologists to propose that the present epoch be named the anthropocene . [ 66 ] The following issues have raised much concern worldwide: Air pollution emanating from motor vehicles and industrial plants is damaging public health and increasing mortality rates. The concentration of carbon dioxide and other greenhouse gases in the atmosphere is the apparent source of global warming and climate change . Extreme regional weather patterns and rising sea levels caused by warming degrade living conditions in many — if not all — parts of the world. The warming already poses a security threat to many nations and works as a so-called 'threat multiplier' to geo-political stability.
Even worse, the loss of Arctic permafrost may be triggering a massive release of methane and other greenhouse gases from thawing soils in the region, thereby overwhelming political action to counter climate change. If critical temperature thresholds are crossed, Earth's climate may transit from an 'icehouse' to a 'greenhouse' state for the first time in 34 million years. One of the most commonly proposed solutions to the climate crisis is a transition to renewable energy, but renewable energy has environmental impacts of its own. Proponents of theories such as degrowth , the steady-state economy and the circular economy present these impacts as evidence that technological measures alone are not enough to achieve sustainability and that consumption must also be limited. [ 67 ] [ 68 ] [ 69 ] In 2019, a new report, "Plastic and Climate", was published. According to the report, plastic would contribute greenhouse gases equivalent to 850 million tons of carbon dioxide (CO 2 ) to the atmosphere in 2019. On current trends, annual emissions will grow to 1.34 billion tons by 2030; by 2050, plastic could account for 56 billion tons of greenhouse gas emissions, as much as 14 percent of Earth's remaining carbon budget, in addition to the harm done to phytoplankton . [ 70 ] The report states that only solutions involving a reduction in consumption can solve the problem, while other measures, such as biodegradable plastic, ocean cleanup and the use of renewable energy in the plastics industry, can do little and in some cases may even worsen it. [ 71 ] Another report, covering all the environmental and health effects of plastic, reaches the same conclusion. [ 72 ] Non-renewable mineral reserves are currently extracted at high and unsustainable rates from Earth's crust . Remaining reserves are likely to become ever more costly to extract in the near future , and will reach depletion at some point. The era of relatively peaceful economic expansion that has prevailed globally since World War II may be interrupted by unexpected supply shocks or simply be succeeded by the peaking depletion paths of oil and other valuable minerals . In 2020, for the first time, the rate of use of natural resources exceeded 110 billion tons per year. [ 73 ] Economist Jason Hickel has written critically about the ideology of green growth, the idea that capitalism and economic systems can keep expanding in a way that is compatible with the planet's ecology. This stands in contrast to no-growth, or degrowth, economics, in which the sustainability and stability of the economy are prioritized over the uncontrolled profit of those in power. Models of community development have found that failing to account for sustainability in the early stages leads to failure in the long term. These models contradict green-growth theory and do not support ideas about an ever-expanding use of natural resources. [ 74 ] Additionally, those living in poorer areas tend to be exposed to higher levels of toxins and pollutants as a result of systematic environmental racism . [ 75 ] Increasing natural resources and increasing local involvement in their distribution are potential solutions to alleviate pollution and address poverty in these areas. [ 75 ] Use of renewable resources in excess of their replenishment rates is undermining ecological stability worldwide. Between 2000 and 2012, deforestation resulted in the loss of forest equivalent to some 14 percent of Earth's original forest cover.
Tropical rainforests have been subject to deforestation at a rapid pace for decades — especially in west and central Africa and in Brazil — mostly due to subsistence farming, population pressure, and urbanization . Population pressures also strain the world's soil systems , leading to land degradation , mostly in developing countries. Global erosion rates on conventional cropland are estimated to exceed soil creation rates by more than ten times. Widespread overuse of groundwater results in water deficits in many countries. By 2025, water scarcity could impact the living conditions of two-thirds of the world's population. The destructive impact of human activity on wildlife habitats worldwide is accelerating the extinction of rare species , thereby substantially reducing Earth's biodiversity . The natural nitrogen cycle is heavily overloaded by industrial nitrogen fixation and use , thereby disrupting most known types of ecosystems . The accumulating plastic debris in the oceans decimates aquatic life. Ocean acidification due to the excess concentration of carbon dioxide in the atmosphere is resulting in coral bleaching and impedes shell-bearing organisms . Arctic sea ice decline caused by global warming is endangering the polar bear . In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services . The report was finalised in Paris. These mounting concerns have prompted an increasing number of academics and other writers — beside Herman Daly — to point to limits to economic growth, and to question — and even oppose — the prevailing ideology of infinite economic growth. [ 62 ] [ 78 ] [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 11 ] [ 83 ] [ 84 ] [ 28 ] [ 85 ] [ 86 ] [ 12 ] [ 13 ] [ 87 ] [ 88 ] [ 89 ] [ 45 ] [ 60 ] [ 90 ] [ 91 ] [ 92 ] [ 93 ] [ 94 ] [ 95 ] In September 2019, one day before the Global Climate Strike on 20 September 2019 , an article was published in The Guardian that summarizes a large body of research and argues that limiting consumption is necessary for saving the biosphere. [ 96 ] Beyond the reasons linked to resource depletion and the carrying capacity of the ecological system, there are other reasons to limit consumption: overconsumption harms the well-being of those who consume too much. Over the same period in which humanity's ecological footprint exceeded the sustainable level and GDP more than tripled from 1950 onwards, one measure of well-being, the genuine progress indicator , has been falling since 1978. This is one of the reasons for pursuing a steady-state economy. [ 97 ] In some cases, reducing consumption can raise living standards. In Costa Rica , GDP is about a quarter of that in many countries of Western Europe and North America, yet people live longer and better. An American study found that above an income of $75,000, further increases in income do not increase well-being. To measure well-being better, the New Economics Foundation has launched the Happy Planet Index . [ 98 ] The food industry is a large sector of consumption, responsible for 37% of global greenhouse-gas emissions, [ 99 ] and studies show that people waste a fifth of food products through disposal or overconsumption alone. By the time food reaches the consumer, 9% (160 million tons) goes uneaten and 10% is lost to overconsumption, meaning that consumers eat more than their calorie requirements.
When consumers take in too much, this not only reflects losses at the production stage (and overproduction) but also entails overconsumption of energy and protein, with harmful effects on the body such as obesity . [ 100 ] A report from the Lancet commission says the same. The experts write: "Until now, undernutrition and obesity have been seen as polar opposites of either too few or too many calories [...] In reality, they are both driven by the same unhealthy, inequitable food systems , underpinned by the same political economy that is single-focused on economic growth, and ignores the negative health and equity outcomes. Climate change has the same story of profits and power". [ 101 ] Obesity was already a medical problem in ancient Rome for people who overconsumed food and worked too little, and its impact grew slowly through history. [ 102 ] As of 2012, mortality from obesity was three times higher than mortality from hunger, [ 103 ] reaching 2.8 million people per year by 2017. [ 104 ] Cycling reduces greenhouse gas emissions [ 105 ] while at the same time reducing the effects of a sedentary lifestyle. [ 106 ] As of 2002, a sedentary lifestyle claimed 2 million lives per year. The World Health Organization stated that "60 to 85% of people in the world—from both developed and developing countries—lead sedentary lifestyles, making it one of the more serious yet insufficiently addressed public health problems of our time." [ 107 ] By 2012, according to a study published in The Lancet , the number had reached 5.3 million. [ 108 ] Reducing the use of screens can help fight many diseases, [ 109 ] among them depression , [ 110 ] [ 111 ] the leading cause of disability globally. [ 112 ] It can also lower greenhouse gas emissions: as of 2018, 3.7% of global emissions came from digital technologies, more than from aviation , and the figure is expected to reach 8% by 2025, equal to the emissions from cars . [ 113 ] [ 114 ] Reducing light pollution can reduce greenhouse-gas emissions and improve health. [ 115 ] [ 116 ] In September 2019, one day before the Global Climate Strike on 20 September 2019 , an article was published in The Guardian summarizing much research and arguing that limiting consumption is also necessary for the health of the overconsumers themselves: it can increase empathy , improve contact with other people, and more. [ 96 ] The concept of a steady-state economy is connected to other concepts that can broadly be classed under ecological economics and anti-consumerism , because it serves as the final target of those concepts: these ideologies are not calling for poverty but aim to reach the level of consumption that is best for people and the environment. [ 117 ] [ 118 ] The Center for the Advancement of the Steady State Economy (CASSE) defines a steady-state economy not merely as an economy with some constant level of consumption, but as an economy in which the best possible level of consumption is maintained constantly. To define what that level is, it considers not only ecology, but also living standards. The Center writes: "In cases where the benefits of growth outweigh the costs (for example, where people are not consuming enough to meet their needs), growth or redistribution of resources may be required. In cases where the size of the economy has surpassed the carrying capacity of the ecosystems that contain it (a condition known as overshoot), degrowth may be required before establishing a steady state economy that can be maintained over the long term".
[ 119 ] In February 2020, the same organization proposed the slogan "Degrowth Toward a Steady State Economy" because it can unite degrowthers and steady staters. The statement mentions that "[i]n 2018 the nascent DegrowUS adopted the mission statement, "Our mission is a democratic and just transition to a smaller, steady state economy in harmony with nature, family, and community." [ 120 ] In his article on Economic de-growth vs. steady-state economy , Christian Kerschner has integrated the strategy of the declining state, or degrowth, with Herman Daly's concept of the steady-state economy, to the effect that degrowth should be considered a path taken by the rich industrialized countries leading towards a globally equitable steady-state economy. This ultra- egalitarian path will then make ecological room for poorer countries to catch up and combine into a final world steady-state, maintained at some internationally agreed upon intermediate and 'optimum' level of activity for some period of time — although not forever. Kerschner admits that this goal of a world steady-state may remain unattainable in the foreseeable future, but such seemingly unattainable goals could stimulate visions about how to better approach them. [ 2 ] : 548 [ 121 ] : 229 [ 8 ] : 142–146 In 1977, Leopold Kohr published a book titled The Overdeveloped Nations: The Diseconomies Of Scale , dealing primarily with overconsumption. [ 122 ] The book is the basis for the theory of overdevelopment , which holds that the global north (the "rich" countries) is too developed, increasing humanity's ecological impact and creating problems in both overdeveloped and underdeveloped countries. [ 123 ] Several conceptual and ideological disagreements presently exist concerning the steady-state economy in particular and the dilemma of growth in general. The following issues are considered below: the role of technology; resource decoupling and the rebound effect; a declining-state economy; the possibility of having capitalism without growth; and the possibility of pushing some of the terrestrial limits into outer space. In 2019, a study was published presenting an overview of the attempts to achieve constant economic growth without environmental destruction, and of their results. It shows that as of 2019 those attempts had not been successful; it does not give a clear answer about future attempts. [ 124 ] Herman Daly's approach to these issues is presented throughout the text. Technology is usually defined as the application of the scientific method in the production of goods or in other social achievements. Historically, technology has mostly been developed and implemented in order to improve labour productivity and increase living standards. In economics, disagreement presently exists regarding the role of technology when considering its dependency on natural resources: from the ecological point of view, it has been suggested that the disagreement boils down to a matter of teaching some elementary physics to the uninitiated neoclassical economists and other technological optimists . [ 25 ] : 15–19 [ 128 ] : 106–109 [ 42 ] : 80f [ 8 ] : 116–118 From the neoclassical point of view, leading growth theorist and Nobel Prize laureate Robert Solow has defended his much criticised position by replying in 1997 that 'elementary physics' has not by itself prevented growth in the industrialized countries so far.
[ 129 ] : 134f Resource decoupling occurs when economic activity becomes less intensive ecologically: A declining input of natural resources is needed to produce one unit of output on average, measured by the ratio of total natural resource consumption to gross domestic product (GDP). Relative resource decoupling occurs when natural resource consumption declines on a ceteris paribus assumption — that is, all other things being equal. Absolute resource decoupling occurs when natural resource consumption declines, even while GDP is growing. [ 11 ] : 67f In the history of economic thought, William Stanley Jevons was the first economist of some standing to analyse the occurrence of resource decoupling, although he did not use this term. In his 1865 book on The Coal Question , Jevons argued that an increase in energy efficiency would by itself lead to more , not less, consumption of energy: Due to the income effect of the lowered energy expenditures, people would be rendered better off and demand even more energy, thereby outweighing the initial gain in efficiency. This mechanism is known today as the Jevons paradox or the rebound effect . Jevons's analysis of this seeming paradox formed part of his general concern that Britain's industrial supremacy in the 19th century would soon be set back by the inevitable exhaustion of the country's coal mines, whereupon the geopolitical balance of power would tip in favour of countries abroad possessing more abundant mines. [ 25 ] : 160–163 [ 7 ] : 40f [ 42 ] : 64f In 2009, two separate studies were published that — among other things — addressed the issues of resource decoupling and the rebound effect: German scientist and politician Ernst Ulrich von Weizsäcker published Factor Five: Transforming the Global Economy through 80% Improvements in Resource Productivity , co-authored with a team of researchers from The Natural Edge Project . [ 130 ] British ecological economist Tim Jackson published Prosperity Without Growth , drawing extensively from an earlier report authored by him for the UK Sustainable Development Commission . [ 11 ] Consider each in turn: Herman Daly has argued that the best way to increase natural resource efficiency (decouple) and to prevent the occurrence of any rebound effects is to impose quantitative restrictions on resource use by establishing a cap and trade system of quotas , managed by a government agency. Daly believes this system features a unique triple advantage: [ 17 ] : 61–64 For all its merits, Daly himself points to the existence of physical, technological and practical limitations to how much efficiency and recycling can be achieved by this proposed system. [ 17 ] : 77–80 The idea of absolute decoupling ridding the economy as a whole of any dependence on natural resources is ridiculed polemically by Daly as 'angelizing GDP': It would work only if we ascended to become angels ourselves. [ 17 ] : 118 A declining-state economy is an economy made up of a declining stock of physical wealth (capital) or a declining population size, or both. A declining-state economy is not to be confused with a recession : Whereas a declining-state economy is established as the result of deliberate political action, a recession is the unexpected and unwelcome failure of a growing or a steady economy. Proponents of a declining-state economy generally believe that a steady-state economy is not far-reaching enough for the future of mankind. 
Some proponents may even reject modern civilization as such, either partly or completely, whereby the concept of a declining-state economy begins bordering on the ideology of anarcho-primitivism , on radical ecological doomsaying or on some variants of survivalism . Romanian American economist Nicholas Georgescu-Roegen was the teacher and mentor of Herman Daly and is presently considered the main intellectual figure influencing the degrowth movement that formed in France and Italy in the early 2000s. In his paradigmatic magnum opus on The Entropy Law and the Economic Process , Georgescu-Roegen argues that the carrying capacity of earth — that is, earth's capacity to sustain human populations and consumption levels — is bound to decrease sometime in the future as earth's finite stock of mineral resources is presently being extracted and put to use ; and consequently, that the world economy as a whole is heading towards an inevitable future collapse . [ 35 ] In effect, Georgescu-Roegen points out that the arguments advanced by Herman Daly in support of his steady-state economy apply with even greater force in support of a declining-state economy: When the overall purpose is to ration and stretch mineral resource use for as long time into the future as possible, zero economic growth is more desirable than growth is, true; but negative growth is better still! [ 1 ] : 366–369 Instead of Daly's steady-state economics, Georgescu-Roegen proposed his own so-called 'minimal bioeconomic program', featuring restrictions even more severe than those propounded by his former student Daly (see above) . [ 1 ] : 374–379 [ 131 ] : 150–153 [ 8 ] : 142–146 American political advisor Jeremy Rifkin , French champion of the degrowth movement Serge Latouche and Austrian degrowth theorist Christian Kerschner — who all take their cue from Georgescu-Roegen's work — have argued in favour of declining-state strategies. Consider each in turn: Herman Daly on his part is not opposed to the concept of a declining-state economy; but he does point out that the steady-state economy should serve as a preliminary first step on a declining path, once the optimal levels of population and capital have been properly defined. However, this first step is an important one: [T]he first issue remains to stop the momentum of growth and to learn to run a stable economy at historically given initial conditions. ... But we cannot go into reverse without first coming to a stop. Step one is to achieve a steady-state economy at existing or nearby levels. Step two is to decide whether the optimum level is greater or less than present levels. ... My own judgment on these issues lead me to think we have overshot the optimum." [ 17 ] : 52 Daly concedes that it is 'difficult, probably impossible' to define such optimum levels; [ 17 ] : 52 even more, in his final analysis Daly agrees with his teacher and mentor Georgescu-Roegen that no defined optimum will be able to last forever ( see above ). [ 16 ] : 369 Several radical critics of capitalism have questioned the possibility of ever imposing a steady-state or a declining-state (degrowth) system as a superstructure on top of capitalism. [ 7 ] [ 134 ] [ 135 ] [ 86 ] : 97–100 [ 15 ] : 45–51 [ 136 ] [ 137 ] Taken together, these critics point to the following growth dynamics inherent in capitalism: — In short: There is no end to the systemic and ecologically harmful growth dynamics in modern capitalism, radical critics assert. 
Fully aware of the massive growth dynamics of capitalism, Herman Daly on his part poses the rhetorical question whether his concept of a steady-state economy is essentially capitalistic or socialistic . He provides the following answer (written in 1980): The growth versus steady-state debate really cuts across the old left - right rift, and we should resist any attempt to identify either growth or steady-state with either left or right, for two reasons. First, it will impose a logical distortion on the issue. Second, it will obscure the emergence of a third way, which might form a future synthesis of socialism and capitalism into a steady-state economy and eventually into a fully just and sustainable society. [ 16 ] : 367 Daly concludes by inviting all (most) people — both liberal supporters of and radical critics of capitalism — to join him in his effort to develop a steady-state economy. [ 16 ] : 367 Ever since the beginning of the modern Space Age in the 1950s, some space advocates have pushed for space habitation , frequently in the form of colonization , arguing among other reasons that it could alleviate human overpopulation and overconsumption and mitigate the human impact on the environment on Earth. In the 1970s, physicist and space activist Gerard K. O'Neill developed an ambitious plan to build human settlements in outer space to solve the problems of overpopulation and limits to growth on earth without recourse to political repression. According to O'Neill's vision, mankind could — and indeed should — expand on this man-made frontier to many times the current world population and generate large amounts of new wealth in space. Herman Daly countered O'Neill's vision by arguing that a space colony would become subject to much harsher limits to growth — and hence, would have to be secured and managed with much more care and discipline — than a steady-state economy on large and resilient earth. Although the number of individual colonies supposedly could be increased without end, living conditions in any one particular colony would become very restricted nonetheless. Therefore, Daly concluded: "The alleged impossibility of a steady-state on earth provides a poor intellectual launching pad for space colonies." [ 16 ] : 369 By the 2010s, O'Neill's old vision of space colonisation had long since been turned upside down in many places: instead of dispatching colonists from earth to live in remote space settlements, some ecology-minded space advocates conjecture that resources could be mined from asteroids in space and transported back to earth for use here. This new vision has the same double advantage of (partly) mitigating ecological pressures on earth's limited mineral reserves while also boosting exploration and colonisation of space. The building up of industrial infrastructure in space would be required for the purpose, as well as the establishment of a complete supply chain up to the level of self-sufficiency and then beyond, eventually developing into a permanent extraterrestrial source of wealth providing an adequate return on investment for stakeholders. In the future, such an 'exo-economy' (off-planet economy) could possibly even serve as the first step towards mankind's cosmic ascension to a 'Type II' civilisation on the hypothetical Kardashev scale , in case such an ascension is ever accomplished.
[ 138 ] [ 139 ] [ 140 ] However, it is as yet uncertain whether an off-planet economy of the type specified will develop in due time to match both the volume and the output mix needed to fully replace earth's dwindling mineral reserves . Sceptics like Herman Daly and others point to the exorbitant earth-to-orbit launch costs of any space mission, the inaccurate identification of target asteroids suitable for mining, and the difficulties of remote in situ ore extraction as obvious barriers to success: investing a lot of terrestrial resources in order to recover only a few resources from space in return is not worthwhile in any case, regardless of the scarcities, technologies and other mission parameters involved in the venture. In addition, even if an off-planet economy could somehow be established at some future point, one long-term predicament would then loom large regarding the continuous mining and transportation of massive volumes of materials from space back to earth: how to keep that volume flowing on a steady and permanent basis in the face of the astronomically long distances and time scales ever present in space. In the worst of cases, all of these obstacles could forever prevent any substantial pushing of limits into outer space — and then limits to growth on earth will remain the only limits of concern throughout mankind's entire span of existence. [ 141 ] : 24 [ 42 ] : 81–83 [ 142 ] [ 143 ] [ 144 ] Today, a steady-state economy has not been implemented officially by any state, but some existing measures limit growth or imply a steady per-capita level of consumption of certain products. Some countries have also adopted measures of success alternative to gross domestic product .
https://en.wikipedia.org/wiki/Steady-state_economy
In cosmology , the steady-state model or steady-state theory was an alternative to the Big Bang theory. In the steady-state model, the density of matter in the expanding universe remains unchanged due to a continuous creation of matter, thus adhering to the perfect cosmological principle , a principle that says that the observable universe is always the same at any time and any place. A static universe , where space is not expanding, also obeys the perfect cosmological principle, but it cannot explain astronomical observations consistent with expansion of space. From the 1940s to the 1960s, the astrophysical community was divided between supporters of the Big Bang theory and supporters of the steady-state theory. The steady-state model is now rejected by most cosmologists , astrophysicists , and astronomers . [ 1 ] The observational evidence points to a hot Big Bang cosmology with a finite age of the universe , which the steady-state model does not predict. [ 2 ] Cosmological expansion was originally seen through observations by Edwin Hubble . Theoretical calculations also showed that the static universe , as modeled by Albert Einstein (1917), was unstable. The modern Big Bang theory, first advanced by Father Georges Lemaître , is one in which the universe has a finite age and has evolved over time through cooling, expansion, and the formation of structures through gravitational collapse. On the other hand, the steady-state model says while the universe is expanding, it nevertheless does not change its appearance over time (the perfect cosmological principle ). E.g., the universe has no beginning and no end. This required that matter be continually created in order to keep the universe's density from decreasing. Influential papers on the topic of a steady-state cosmology were published by Hermann Bondi , Thomas Gold , and Fred Hoyle in 1948. [ 3 ] [ 4 ] Similar models had been proposed earlier by William Duncan MacMillan , among others. [ 5 ] It is now known that Albert Einstein considered a steady-state model of the expanding universe, as indicated in a 1931 manuscript, many years before Hoyle, Bondi and Gold. However, Einstein abandoned the idea. [ 6 ] Problems with the steady-state model began to emerge in the 1950s and 60s – observations supported the idea that the universe was in fact changing. Bright radio sources ( quasars and radio galaxies ) were found only at large distances (therefore could have existed only in the distant past due to the effects of the speed of light on astronomy), not in closer galaxies. Whereas the Big Bang theory predicted as much, the steady-state model predicted that such objects would be found throughout the universe, including close to our own galaxy. By 1961, statistical tests based on radio-source surveys [ 7 ] provided strong evidence against the steady-state model. Some proponents like Halton Arp insist that the radio data were suspect. [ 1 ] : 384 Gold and Hoyle (1959) [ 8 ] considered that matter that is newly created exists in a region that is denser than the average density of the universe. This matter then may radiate and cool faster than the surrounding regions, resulting in a pressure gradient. This gradient would push matter into an over-dense region and result in a thermal instability and emit a large amount of plasma. However, Gould and Burbidge (1963) [ 9 ] realized that the thermal bremsstrahlung radiation emitted by such a plasma would exceed the amount of observed X-rays . 
Therefore, in the steady-state cosmological model, thermal instability does not appear to be important in the formation of galaxy-sized masses. [ 10 ] In 1964 the cosmic microwave background radiation was discovered as predicted by the Big Bang theory. The steady-state model attempted to explain the microwave background radiation as the result of light from ancient stars that has been scattered by galactic dust. However, the cosmic microwave background level is very even in all directions, making it difficult to explain how it could be generated by numerous point sources, and the microwave background radiation does not show the polarization characteristic of scattering. Furthermore, its spectrum is so close to that of an ideal black body that it could hardly be formed by the superposition of contributions from a multitude of dust clumps at different temperatures as well as at different redshifts . Steven Weinberg wrote in 1972: "The steady state model does not appear to agree with the observed d L versus z relation or with source counts ... In a sense, this disagreement is a credit to the model; alone among all cosmologies, the steady state model makes such definite predictions that it can be disproved even with the limited observational evidence at our disposal. The steady state model is so attractive that many of its adherents still retain hope that the evidence against it will eventually disappear as observations improve. However, if the cosmic microwave radiation ... is really black-body radiation, it will be difficult to doubt that the universe has evolved from a hotter denser early stage." [ 11 ] Since this discovery, the Big Bang theory has been considered to provide the best explanation of the origin of the universe. In most astrophysical publications, the Big Bang is implicitly accepted and is used as the basis of more complete theories. [ 12 ] : 388 Quasi-steady-state cosmology (QSS) was proposed in 1993 by Fred Hoyle, Geoffrey Burbidge , and Jayant V. Narlikar as a new incarnation of the steady-state ideas meant to explain additional features unaccounted for in the initial proposal. The model suggests pockets of creation occurring over time within the universe, sometimes referred to as minibangs, mini-creation events, or little bangs . [ 13 ] After the observation of an accelerating universe , further modifications of the model were made. [ 14 ] The Planck particle is a hypothetical black hole whose Schwarzschild radius is approximately the same as its Compton wavelength ; the evaporation of such a particle has been evoked as the source of light elements in an expanding steady-state universe. [ 15 ] Astrophysicist and cosmologist Ned Wright has pointed out flaws in the model. [ 16 ] These first comments were soon rebutted by the proponents. [ 17 ] Wright and other mainstream cosmologists reviewing QSS have pointed out new flaws and discrepancies with observations left unexplained by proponents. [ 18 ]
https://en.wikipedia.org/wiki/Steady-state_model
In biochemistry , steady state refers to the maintenance of constant internal concentrations of molecules and ions in the cells and organs of living systems. [ 1 ] Living organisms remain at a dynamic steady state where their internal composition at both cellular and gross levels are relatively constant, but different from equilibrium concentrations. [ 1 ] A continuous flux of mass and energy results in the constant synthesis and breakdown of molecules via chemical reactions of biochemical pathways . [ 1 ] Essentially, steady state can be thought of as homeostasis at a cellular level. [ 1 ] Metabolic regulation achieves a balance between the rate of input of a substrate and the rate that it is degraded or converted, and thus maintains steady state. [ 1 ] The rate of metabolic flow, or flux, is variable and subject to metabolic demands. [ 1 ] However, in a metabolic pathway, steady state is maintained by balancing the rate of substrate provided by a previous step and the rate that the substrate is converted into product, keeping substrate concentration relatively constant. [ 1 ] Thermodynamically speaking, living organisms are open systems, meaning that they constantly exchange matter and energy with their surroundings. [ 1 ] A constant supply of energy is required for maintaining steady state, as maintaining a constant concentration of a molecule preserves internal order and thus is entropically unfavorable. [ 1 ] When a cell dies and no longer utilizes energy, its internal composition will proceed toward equilibrium with its surroundings. [ 1 ] In some occurrences, it is necessary for cells to adjust their internal composition in order to reach a new steady state. [ 1 ] Cell differentiation, for example, requires specific protein regulation that allows the differentiating cell to meet new metabolic requirements. [ 1 ] The concentration of ATP must be kept above equilibrium level so that the rates of ATP-dependent biochemical reactions meet metabolic demands. A decrease in ATP will result in a decreased saturation of enzymes that use ATP as substrate, and thus a decreased reaction rate . [ 1 ] The concentration of ATP is also kept higher than that of AMP , and a decrease in the ATP/AMP ratio triggers AMPK to activate cellular processes that will return ATP and AMP concentrations to steady state. [ 1 ] In one step of the glycolysis pathway catalyzed by PFK-1, the equilibrium constant of reaction is approximately 1000, but the steady state concentration of products (fructose-1,6-bisphosphate and ADP) over reactants (fructose-6-phosphate and ATP) is only 0.1, indicating that the ratio of ATP to AMP remains in a steady state significantly above equilibrium concentration. Regulation of PFK-1 maintains ATP levels above equilibrium. [ 1 ] In the cytoplasm of hepatocytes , the steady state ratio of NADP + to NADPH is approximately 0.1 while that of NAD + to NADH is approximately 1000, favoring NADPH as the main reducing agent and NAD + as the main oxidizing agent in chemical reactions. [ 2 ] Blood glucose levels are maintained at a steady state concentration by balancing the rate of entry of glucose into the blood stream (i.e. by ingestion or released from cells) and the rate of glucose uptake by body tissues. [ 1 ] Changes in the rate of input will be met with a change in consumption, and vice versa, so that blood glucose concentration is held at about 5 mM in humans. 
[ 1 ] A change in blood glucose levels triggers the release of insulin or glucagon, which stimulates the liver to release glucose into the bloodstream or take up glucose from the bloodstream in order to return glucose levels to steady state. [ 1 ] Pancreatic beta cells, for example, increase oxidative metabolism as a result of a rise in blood glucose concentration, triggering secretion of insulin. [ 3 ] Glucose levels in the brain are also maintained at steady state, and glucose delivery to the brain relies on the balance between the flux across the blood brain barrier and uptake by brain cells. [ 4 ] In teleosts , a drop of blood glucose levels below that of steady state decreases the intracellular-extracellular gradient in the bloodstream, limiting glucose metabolism in red blood cells. [ 5 ] Blood lactate levels are also maintained at steady state. At rest or low levels of exercise, the rate of lactate production in muscle cells and consumption in muscle or blood cells allows lactate to remain in the body at a certain steady state concentration. If a higher level of exercise is sustained, however, blood lactate levels will increase before becoming constant, indicating that a new steady state of elevated concentration has been reached. Maximal lactate steady state (MLSS) refers to the maximum constant concentration of lactate reached during sustained high-intensity activity. [ 6 ] Metabolic regulation of nitrogen-containing molecules, such as amino acids, is also kept at steady state. [ 2 ] The amino acid pool, which describes the level of amino acids in the body, is maintained at a relatively constant concentration by balancing the rate of input (i.e. from dietary protein ingestion, production of metabolic intermediates) and the rate of depletion (i.e. from formation of body proteins, conversion to energy-storage molecules). [ 2 ] Amino acid concentration in lymph node cells, for example, is kept at steady state with active transport as the primary source of entry, and diffusion as the source of efflux . [ 7 ] One main function of plasma and cell membranes is to maintain asymmetric concentrations of inorganic ions in order to maintain an ionic steady state different from electrochemical equilibrium . [ 8 ] In other words, there is a differential distribution of ions on either side of the cell membrane - that is, the amount of ions on either side is not equal and therefore a charge separation exists. [ 8 ] However, ions move across the cell membrane such that a constant resting membrane potential is achieved; this is ionic steady state. [ 8 ] In the pump-leak model of cellular ion homeostasis, energy is utilized to actively transport ions against their electrochemical gradient . [ 9 ] The maintenance of this steady state gradient, in turn, is used to do electrical and chemical work , when it is dissipated through the passive movement of ions across the membrane. [ 9 ] In cardiac muscle, ATP is used to actively transport sodium ions out of the cell through a membrane ATPase . [ 10 ] Electrical excitation of the cell results in an influx of sodium ions into the cell, temporarily depolarizing the cell. [ 10 ] To restore the steady state electrochemical gradient, the ATPase removes sodium ions from the cell and returns potassium ions to it. [ 10 ] When an elevated heart rate is sustained, causing more depolarizations, sodium levels in the cell increase until becoming constant, indicating that a new steady state has been reached. [ 10 ] Steady-states can be stable or unstable.
A steady-state is unstable if a small perturbation in one or more of the concentrations results in the system diverging from its state. In contrast, if a steady-state is stable, any perturbation will relax back to the original steady state. Further details can be found on the page Stability theory . The following provides a simple example of computing the steady state given a simple mathematical model. Consider the open chemical system composed of two reactions with rates \(v_1\) and \(v_2\): \( X_o \xrightarrow{v_1} S_1 \xrightarrow{v_2} X_1 \). We will assume that the chemical species \(X_o\) and \(X_1\) are fixed external species and \(S_1\) is an internal chemical species that is allowed to change. Fixing the boundary species ensures that the system can reach a steady state. If we assume simple irreversible mass-action kinetics , the differential equation describing the concentration of \(S_1\) is given by: \( \frac{dS_1}{dt} = k_1 X_o - k_2 S_1 \). To find the steady state, the differential equation is set to zero and rearranged to solve for \(S_1\): \( S_1 = \frac{k_1 X_o}{k_2} \). This is the steady-state concentration of \(S_1\). The stability of this system can be determined by making a perturbation \(\delta S_1\) in \(S_1\). This can be expressed as: \( \frac{d(S_1 + \delta S_1)}{dt} = k_1 X_o - k_2 (S_1 + \delta S_1) \). Note that the perturbation \(\delta S_1\) will elicit a change in the rate of change. At steady state \( k_1 X_o - k_2 S_1 = 0 \), therefore the rate of change of \(S_1\) as a result of this perturbation is: \( \frac{d\,\delta S_1}{dt} = -k_2\,\delta S_1 \). This shows that the perturbation \(\delta S_1\) decays exponentially, hence the system is stable.
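As a quick numerical check of the example above, the following Python sketch (with arbitrary rate constants chosen for illustration, not part of the original text) integrates the rate equation from a perturbed starting value and shows it relaxing back to the analytical steady state \( k_1 X_o / k_2 \):

```python
# Minimal sketch of the two-reaction open system described above.
# k1, k2 and Xo are illustrative values, not taken from the article.
k1, k2 = 2.0, 0.5      # mass-action rate constants
Xo = 1.0               # fixed (boundary) species concentration

S1_ss = k1 * Xo / k2   # analytical steady state: 4.0

# Euler integration of dS1/dt = k1*Xo - k2*S1,
# starting from a perturbed value to illustrate stability.
S1 = S1_ss + 1.0       # perturbation of +1.0
dt = 0.01
for _ in range(2000):  # integrate out to t = 20
    S1 += (k1 * Xo - k2 * S1) * dt

print(f"steady state = {S1_ss:.4f}, after relaxation = {S1:.4f}")
# The perturbation decays roughly as exp(-k2*t), so S1 returns to ~4.0.
```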
https://en.wikipedia.org/wiki/Steady_state_(biochemistry)
In chemistry , a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state , i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance ). A simple example of such a system is the case of a bathtub with the tap running but with the drain unplugged: after a certain time, the water flows in and out at the same rate, so the water level (the state variable Volume) stabilizes and the system is in a steady state. The steady state concept is different from chemical equilibrium . Although both may create a situation where a concentration does not change, in a system at chemical equilibrium, the net reaction rate is zero ( products transform into reactants at the same rate as reactants transform into products), while no such limitation exists in the steady state concept. Indeed, there does not have to be a reaction at all for a steady state to develop. The term steady state is also used to describe a situation where some, but not all, of the state variables of a system are constant. For such a steady state to develop, the system does not have to be a flow system. Therefore, such a steady state can develop in a closed system where a series of chemical reactions take place. Literature in chemical kinetics usually refers to this case, calling it steady state approximation . In simple systems the steady state is approached by state variables gradually decreasing or increasing until they reach their steady state value. In more complex systems state variables might fluctuate around the theoretical steady state either forever (a limit cycle ) or gradually coming closer and closer. It theoretically takes an infinite time to reach steady state, just as it takes an infinite time to reach chemical equilibrium. Both concepts are, however, frequently used approximations because of the substantial mathematical simplifications these concepts offer. Whether or not these concepts can be used depends on the error the underlying assumptions introduce. So, even though a steady state, from a theoretical point of view, requires constant drivers (e.g. constant inflow rate and constant concentrations in the inflow), the error introduced by assuming steady state for a system with non-constant drivers may be negligible if the steady state is approached fast enough (relatively speaking). The steady state approximation , [ 1 ] occasionally called the stationary-state approximation or Bodenstein 's quasi-steady state approximation , involves setting the rate of change of a reaction intermediate in a reaction mechanism equal to zero so that the kinetic equations can be simplified by setting the rate of formation of the intermediate equal to the rate of its destruction. In practice it is sufficient that the rates of formation and destruction are approximately equal, which means that the net rate of variation of the concentration of the intermediate is small compared to the formation and destruction, and the concentration of the intermediate varies only slowly, similar to the reactants and products (see the equations and the green traces in the figures below). [ citation needed ] Its use facilitates the resolution of the differential equations that arise from rate equations , which lack an analytical solution for most mechanisms beyond the simplest ones. The steady state approximation is applied, for example, in Michaelis-Menten kinetics . 
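To illustrate that last point, a standard steady-state derivation of the Michaelis-Menten rate law (a textbook sketch with conventional notation such as \([\mathrm{E}]_T\) for total enzyme, not reproduced from this article) runs as follows for the scheme \( \mathrm{E} + \mathrm{S} \underset{k_{-1}}{\overset{k_1}{\rightleftharpoons}} \mathrm{ES} \overset{k_2}{\rightarrow} \mathrm{E} + \mathrm{P} \): setting
\[ \frac{d[\mathrm{ES}]}{dt} = k_1[\mathrm{E}][\mathrm{S}] - (k_{-1} + k_2)[\mathrm{ES}] = 0, \qquad [\mathrm{E}] = [\mathrm{E}]_T - [\mathrm{ES}], \]
gives
\[ [\mathrm{ES}] = \frac{[\mathrm{E}]_T[\mathrm{S}]}{K_M + [\mathrm{S}]}, \qquad K_M = \frac{k_{-1} + k_2}{k_1}, \]
so the rate is \( v = k_2[\mathrm{ES}] = \frac{V_{\max}[\mathrm{S}]}{K_M + [\mathrm{S}]} \) with \( V_{\max} = k_2[\mathrm{E}]_T \).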
As an example, the steady state approximation will be applied to two consecutive, irreversible, homogeneous first-order reactions in a closed system. (For heterogeneous reactions, see reactions on surfaces .) This model corresponds, for example, to a series of nuclear decays like 239 U → 239 Np → 239 Pu . If the rate constants for the reactions A → B → C are k 1 and k 2 , combining the rate equations with a mass balance for the system yields three coupled differential equations: for species A, \( \frac{d[\mathrm{A}]}{dt} = -k_1[\mathrm{A}] \); for species B, \( \frac{d[\mathrm{B}]}{dt} = k_1[\mathrm{A}] - k_2[\mathrm{B}] \); and for species C, \( \frac{d[\mathrm{C}]}{dt} = k_2[\mathrm{B}] \). The analytical solutions for these equations (supposing that initial concentrations of every substance except for A are zero) are: [ 2 ] \( [\mathrm{A}] = [\mathrm{A}]_0 e^{-k_1 t} \), \( [\mathrm{B}] = [\mathrm{A}]_0 \frac{k_1}{k_2 - k_1}\left(e^{-k_1 t} - e^{-k_2 t}\right) \) and \( [\mathrm{C}] = [\mathrm{A}]_0 \left(1 + \frac{k_1 e^{-k_2 t} - k_2 e^{-k_1 t}}{k_2 - k_1}\right) \). If the steady state approximation is applied, then the derivative of the concentration of the intermediate is set to zero, which reduces the second differential equation to an algebraic equation that is much easier to solve. Therefore, \( \frac{d[\mathrm{C}]}{dt} = k_1[\mathrm{A}] \), so that \( [\mathrm{C}] = [\mathrm{A}]_0\left(1 - e^{-k_1 t}\right) \). Since \( [\mathrm{B}] = \frac{k_1}{k_2}[\mathrm{A}] = \frac{k_1}{k_2}[\mathrm{A}]_0 e^{-k_1 t} \), the concentration of the reaction intermediate B changes with the same time constant as [A] and is not in a steady state in that sense. The analytical and approximated solutions should now be compared in order to decide when it is valid to use the steady state approximation. The analytical solution transforms into the approximate one when \( k_2 \gg k_1 \), because then \( e^{-k_2 t} \ll e^{-k_1 t} \) and \( k_2 - k_1 \approx k_2 \). Therefore, it is valid to apply the steady state approximation only if the second reaction is much faster than the first ( k 2 / k 1 > 10 is a common criterion), because that means that the intermediate forms slowly and reacts readily, so its concentration stays low. Plots of the concentrations of A, B and C calculated from the analytical solution illustrate the two cases. When the first reaction is faster, it is not valid to assume that the variation of [B] is very small, because [B] is neither low nor close to constant: first A transforms into B rapidly and B accumulates because it disappears slowly. As the concentration of A decreases, its rate of transformation decreases; at the same time the rate of reaction of B into C increases as more B is formed, so a maximum of [B] is reached at \( t = \frac{\ln(k_1/k_2)}{k_1 - k_2} \) for \( k_1 \neq k_2 \), and at \( t = \frac{1}{k_1} \) for \( k_1 = k_2 \). From then on the concentration of B decreases. When the second reaction is faster, after a short induction period during which the steady state approximation does not apply, the concentration of B remains low (and more or less constant in an absolute sense) because its rates of formation and disappearance are almost equal and the steady state approximation can be used.
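The comparison described above can be reproduced numerically. The following Python sketch (with illustrative rate constants chosen so that k2 >> k1; none of the values come from the article) evaluates the analytical solution for C against the steady-state approximation:

```python
import numpy as np

# Illustrative values with k2 >> k1, so the approximation should hold.
k1, k2, A0 = 1.0, 20.0, 1.0
t = np.linspace(0.0, 5.0, 6)

A = A0 * np.exp(-k1 * t)
B_exact = A0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))
C_exact = A0 - A - B_exact                     # mass balance

B_ssa = (k1 / k2) * A                          # d[B]/dt set to zero
C_ssa = A0 * (1.0 - np.exp(-k1 * t))           # reduced equation from the text

for ti, ce, cs in zip(t, C_exact, C_ssa):
    print(f"t={ti:4.1f}  C_exact={ce:.4f}  C_ssa={cs:.4f}")
# With k2/k1 = 20 the two columns agree closely; rerunning with k2 < k1
# shows the approximation breaking down, as described above.
```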
The equilibrium approximation can sometimes be used in chemical kinetics to yield similar results to the steady state approximation. It consists in assuming that the intermediate arrives rapidly at chemical equilibrium with the reactants. For example, Michaelis-Menten kinetics can be derived assuming equilibrium instead of steady state. Normally the requirements for applying the steady state approximation are laxer: the concentration of the intermediate only needs to be low and more or less constant (as seen, this has to do only with the rates at which it appears and disappears), but it is not required to be at equilibrium. The reaction H 2 + Br 2 → 2 HBr has the following mechanism: Br 2 → 2 Br (initiation, rate constant k 1 ); Br + H 2 → HBr + H (propagation, k 2 ); H + Br 2 → HBr + Br (propagation, k 3 ); H + HBr → H 2 + Br (inhibition, k 4 ); 2 Br → Br 2 (termination, k 5 ). The rates of the individual species are: $\frac{d[\mathrm{HBr}]}{dt} = k_2[\mathrm{Br}][\mathrm{H_2}] + k_3[\mathrm{H}][\mathrm{Br_2}] - k_4[\mathrm{H}][\mathrm{HBr}]$, $\frac{d[\mathrm{H_2}]}{dt} = -k_2[\mathrm{Br}][\mathrm{H_2}] + k_4[\mathrm{H}][\mathrm{HBr}]$, $\frac{d[\mathrm{Br_2}]}{dt} = -k_1[\mathrm{Br_2}] - k_3[\mathrm{H}][\mathrm{Br_2}] + k_5[\mathrm{Br}]^2$, $\frac{d[\mathrm{H}]}{dt} = k_2[\mathrm{Br}][\mathrm{H_2}] - k_3[\mathrm{H}][\mathrm{Br_2}] - k_4[\mathrm{H}][\mathrm{HBr}]$, and $\frac{d[\mathrm{Br}]}{dt} = 2k_1[\mathrm{Br_2}] - k_2[\mathrm{Br}][\mathrm{H_2}] + k_3[\mathrm{H}][\mathrm{Br_2}] + k_4[\mathrm{H}][\mathrm{HBr}] - 2k_5[\mathrm{Br}]^2$. These equations cannot be solved independently, because each one contains concentrations that change with time. For example, the first equation contains the concentrations of [Br], [H 2 ] and [Br 2 ] (among others), which depend on time, as can be seen in their respective equations. To solve the rate equations the steady state approximation can be used. The reactants of this reaction are H 2 and Br 2 , the intermediates are H and Br, and the product is HBr. For solving the equations, the rates of the intermediates are set to 0 in the steady state approximation: $\frac{d[\mathrm{H}]}{dt} = 0$ and $\frac{d[\mathrm{Br}]}{dt} = 0$. From the reaction rate of H, $k_2[\mathrm{Br}][\mathrm{H_2}] - k_3[\mathrm{H}][\mathrm{Br_2}] - k_4[\mathrm{H}][\mathrm{HBr}] = 0$, so the reaction rate of Br can be simplified to $2k_1[\mathrm{Br_2}] - 2k_5[\mathrm{Br}]^2 = 0$, which gives $[\mathrm{Br}] = \sqrt{k_1[\mathrm{Br_2}]/k_5}$. The reaction rate of HBr can also be simplified, changing $k_2[\mathrm{Br}][\mathrm{H_2}] - k_4[\mathrm{H}][\mathrm{HBr}]$ to $k_3[\mathrm{H}][\mathrm{Br_2}]$, since both values are equal, so that $\frac{d[\mathrm{HBr}]}{dt} = 2k_3[\mathrm{H}][\mathrm{Br_2}]$. The concentration of H can be isolated from the steady-state equation for H: $[\mathrm{H}] = \frac{k_2[\mathrm{Br}][\mathrm{H_2}]}{k_3[\mathrm{Br_2}] + k_4[\mathrm{HBr}]}$. The concentration of this intermediate is small and changes with time like the concentrations of reactants and product. It is inserted into the simplified differential equation for HBr to give $\frac{d[\mathrm{HBr}]}{dt} = \frac{2k_2 k_3[\mathrm{H_2}][\mathrm{Br_2}]\sqrt{k_1[\mathrm{Br_2}]/k_5}}{k_3[\mathrm{Br_2}] + k_4[\mathrm{HBr}]}$. Simplifying the equation leads to $\frac{d[\mathrm{HBr}]}{dt} = \frac{2k_2\sqrt{k_1/k_5}\,[\mathrm{H_2}][\mathrm{Br_2}]^{1/2}}{1 + \frac{k_4}{k_3}\frac{[\mathrm{HBr}]}{[\mathrm{Br_2}]}}$. The experimentally observed rate is $\frac{d[\mathrm{HBr}]}{dt} = \frac{k'[\mathrm{H_2}][\mathrm{Br_2}]^{1/2}}{1 + k''\frac{[\mathrm{HBr}]}{[\mathrm{Br_2}]}}$. The experimental rate law is the same as the rate obtained with the steady state approximation, with $k' = 2k_2\sqrt{k_1/k_5}$ and $k'' = k_4/k_3$.
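As a numerical cross-check of the steady-state derivation above, the following Python sketch (with arbitrary, hypothetical rate constants and concentrations) evaluates the HBr formation rate once directly from the mechanism using the steady-state intermediate concentrations, and once from the compact rate law with k' = 2k2(k1/k5)^1/2 and k'' = k4/k3; the two values agree:

import math

# Sketch of the HBr rate law obtained from the steady-state approximation.
# All rate constants and concentrations are arbitrary example values.
k1, k2, k3, k4, k5 = 1e-3, 1e2, 1e4, 1e3, 1e2
H2, Br2, HBr = 0.10, 0.05, 0.01          # mol/L (hypothetical)

# Steady-state concentrations of the intermediates:
Br = math.sqrt(k1 * Br2 / k5)                        # from d[Br]/dt = 0 (using d[H]/dt = 0)
H = k2 * Br * H2 / (k3 * Br2 + k4 * HBr)             # from d[H]/dt = 0

# Rate of HBr formation from the mechanism, using the steady-state intermediates:
rate_mechanism = k2 * Br * H2 + k3 * H * Br2 - k4 * H * HBr

# The same rate in the compact experimental form, with k' = 2*k2*sqrt(k1/k5) and k'' = k4/k3:
k_prime = 2 * k2 * math.sqrt(k1 / k5)
k_dprime = k4 / k3
rate_law = k_prime * H2 * math.sqrt(Br2) / (1 + k_dprime * HBr / Br2)

print(rate_mechanism, rate_law)   # the two expressions agree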
https://en.wikipedia.org/wiki/Steady_state_(chemistry)
In electronics , steady state is an equilibrium condition of a circuit or network that occurs once the effects of transients are no longer important. Steady state is reached (attained) after the transient (initial, oscillating or turbulent) state has subsided. During steady state, a system is in relative stability. The term sinusoidal steady state emphasizes that sine waves of essentially infinite duration may exist, as long as their amplitude and frequency remain constant. Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. A periodic steady-state solution is also a prerequisite for small-signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process. Steady state calculation methods can be sorted into time-domain algorithms (time-domain sensitivities, shooting) and frequency-domain algorithms ( harmonic balance ). Harmonic balance methods are the best choice for most microwave circuits excited with sinusoidal signals (e.g. mixers, power amplifiers). Time-domain methods can be further divided into one-step methods (time-domain sensitivities) and iterative methods (shooting methods). One-step methods require derivatives to compute the steady state; whenever those are not readily available, iterative methods come into focus.
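As a rough illustration of the iterative (shooting) idea mentioned above, the following Python sketch searches for the periodic steady state of a sinusoidally driven RC low-pass circuit. The component values are hypothetical, and the closed-form sensitivity used in the Newton update is a convenience available here only because the example circuit is linear; a general simulator would estimate it numerically:

import math

# Shooting search for the periodic steady state of dv/dt = (Vs*sin(w*t) - v)/(R*C).
R, C = 1e3, 1e-6          # ohms, farads (hypothetical component values)
Vs, f = 1.0, 1e3          # source amplitude (V) and frequency (Hz)
T = 1.0 / f               # one period of the excitation
steps = 10000
dt = T / steps

def simulate_one_period(v0):
    """Integrate the circuit over one period starting from v(0) = v0."""
    v, t = v0, 0.0
    for _ in range(steps):
        v += dt * (Vs * math.sin(2 * math.pi * f * t) - v) / (R * C)
        t += dt
    return v

v0 = 0.0
for it in range(10):
    vT = simulate_one_period(v0)
    residual = vT - v0                       # zero at the periodic steady state
    if abs(residual) < 1e-12:
        break
    sensitivity = math.exp(-T / (R * C))     # dv(T)/dv(0) for this linear circuit
    v0 -= residual / (sensitivity - 1.0)     # Newton update on the period map

print(f"periodic steady-state initial condition v(0) = {v0:.6f} V after {it + 1} iterations")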
https://en.wikipedia.org/wiki/Steady_state_(electronics)
Steam-assisted gravity drainage ( SAGD ; "Sag-D") is an enhanced oil recovery technology for producing heavy crude oil and bitumen . It is an advanced form of steam stimulation in which a pair of horizontal wells are drilled into the oil reservoir , one a few metres above the other. High pressure steam is continuously injected into the upper wellbore to heat the oil and reduce its viscosity , causing the heated oil to drain into the lower wellbore, where it is pumped out. Dr. Roger Butler, an engineer at Imperial Oil from 1955 to 1982, invented the steam assisted gravity drainage (SAGD) process in the 1970s. Butler "developed the concept of using horizontal pairs of wells and injected steam to develop certain deposits of bitumen considered too deep for mining". [ 1 ] [ 2 ] In 1983 Butler became director of technical programs for the Alberta Oil Sands Technology and Research Authority (AOSTRA), [ 1 ] [ 3 ] a crown corporation created by Alberta Premier Lougheed to promote new technologies for oil sands and heavy crude oil production. AOSTRA quickly supported SAGD as a promising innovation in oil sands extraction technology. [ 2 ] Steam-assisted gravity drainage (SAGD) and cyclic steam stimulation (CSS), a form of steam injection, are the two commercially applied primary thermal recovery processes used in the oil sands [ 4 ] in geological formation sub-units, such as the Grand Rapids Formation, Clearwater Formation, McMurray Formation, General Petroleum Sand and Lloydminster Sand of the Mannville Group , a stratigraphic range in the Western Canadian Sedimentary Basin . Steam-assisted gravity drainage is one of the two primary extraction techniques in Alberta's oil sands, the other being strip-mining. While strip-mining is limited to deposits near the surface, SAGD is better suited to the larger, deeper deposits that surround the shallow ones. Much of the future growth of production in the Canadian oil sands is expected to come from SAGD. One assessment of water use noted that "Petroleum from the Canadian oil sands extracted via surface mining techniques can consume 20 times more water than conventional oil drilling. As a specific example of an underlying data weakness, this figure excludes the increasingly important steam-assisted gravity drainage technique (SAGD) method." [ 5 ] : 9 SAGD emissions are equivalent to those of the steam flood projects which have long been used to produce heavy oil in California's Kern River Oil Field and elsewhere around the world. [ 6 ] The SAGD process of heavy oil or bitumen production is an enhancement of the steam injection techniques originally developed to produce heavy oil from the Kern River Oil Field of California. [ 7 ] The key to all steam flooding processes is to deliver heat to the producing formation to reduce the viscosity of the heavy oil and enable it to move toward the producing well. The cyclic steam stimulation (CSS) process developed for the California heavy oil fields was able to produce oil from some portions of the Alberta oil sands, such as the Cold Lake oil sands , but did not work as well to produce bitumen from the heavier and deeper deposits in the Athabasca oil sands and Peace River oil sands , where the majority of Alberta's oil sands reserves lie. To produce these much larger reserves, the SAGD process was developed, primarily by Dr. Roger Butler [ 8 ] of Imperial Oil with the assistance of the Alberta Oil Sands Technology and Research Authority and industry partners.
[ 9 ] The SAGD process is estimated by the National Energy Board to be economic when oil prices are at least US$30 to $35 per barrel. [ 10 ] In the SAGD process, two parallel horizontal oil wells are drilled in the formation , one about 4 to 6 metres above the other. The upper well injects steam, and the lower one collects the heated crude oil or bitumen that flows down due to gravity, plus recovered water from the condensation of the injected steam. The basis of the SAGD process is that thermal communication is established with the reservoir so that the injected steam forms a "steam chamber". The heat from the steam reduces the viscosity of the heavy crude oil or bitumen, which allows it to flow down into the lower wellbore. Because of their low density compared to the heavy crude oil below, the steam and associated gas tend to rise in the steam chamber, filling the void space left by the oil and ensuring that steam is not produced at the lower production well. Associated gas forms, to a certain extent, an insulating heat blanket above (and around) the steam. [ 11 ] Oil and water flow by a countercurrent, gravity-driven drainage into the lower wellbore. The condensed water and crude oil or bitumen are recovered to the surface by pumps such as progressive cavity pumps that work well for moving high-viscosity fluids with suspended solids. [ 12 ] Sub-cool is the difference between the saturation temperature (boiling point) of water at the producer pressure and the actual temperature at the same place where the pressure is measured. The higher the liquid level above the producer, the lower the temperature and the higher the sub-cool. However, real-life reservoirs are invariably heterogeneous, so it is extremely difficult to achieve a uniform sub-cool along the entire horizontal length of a well. As a consequence, many operators, when faced with uneven or stunted steam chamber development, allow a small quantity of steam to enter the producer to keep the bitumen in the entire wellbore hot, and hence its viscosity low, with the added benefit of transferring heat to colder parts of the reservoir along the wellbore. Another variation, sometimes called Partial SAGD, is used when operators deliberately circulate steam in the producer following a long shut-in period or as a startup procedure. Although a high value of sub-cool is desirable from a thermal-efficiency standpoint, as it generally implies reduced steam injection rates, it also results in slightly reduced production due to the correspondingly higher viscosity and lower mobility of the cooler bitumen. Another drawback of very high sub-cool is the possibility of steam pressure eventually not being enough to sustain steam chamber development above the injector, sometimes resulting in collapsed steam chambers where condensed steam floods the injector and precludes further development of the chamber. Continuous operation of the injection and production wells at approximately reservoir pressure eliminates the instability problems that plague all high-pressure and cyclic steam processes, and SAGD produces a smooth, even production that can be as high as 70% to 80% of oil in place in suitable reservoirs. The process is relatively insensitive to shale streaks and other vertical barriers to steam and fluid flow because, as the rock is heated, differential thermal expansion allows steam and fluids to gravity flow through to the production well.
This allows recovery rates of 60% to 70% of oil in place, even in formations with many thin shale barriers. Thermally, SAGD is generally twice as efficient as the older CSS process, and it results in far fewer wells being damaged by the high pressures associated with CSS. Combined with the higher oil recovery rates achieved, this means that SAGD is much more economic than cyclic steam processes where the reservoir is reasonably thick. [ 13 ] The gravity drainage idea was originally conceived by Dr. Roger Butler, an engineer for Imperial Oil, in the 1970s. [ 1 ] [ 2 ] In 1975 Imperial Oil transferred Butler from Sarnia, Ontario to Calgary, Alberta to head their heavy oil research effort. He tested the concept with Imperial Oil in 1980, in a pilot at Cold Lake which featured one of the first horizontal wells in the industry, with vertical injectors. In 1974, Premier of Alberta Peter Lougheed created the Alberta Oil Sands Technology and Research Authority (AOSTRA) as an Alberta crown corporation to promote the development and use of new technology for oil sands and heavy crude oil production, and enhanced recovery of conventional crude oil. Its first facility was owned and operated by ten industrial participants and received ample government support (Deutsch and McLennan 2005), [ 2 ] including from the Alberta Heritage Savings Trust Fund . [ 14 ] [ 15 ] [ 16 ] One of AOSTRA's main targets was finding suitable technologies for the part of the Athabasca oil sands that could not be recovered using conventional surface mining technologies. [ 2 ] In 1984, AOSTRA initiated the Underground Test Facility in the Athabasca oil sands, located between the MacKay River and the Devon River west of the Syncrude plant, as an in-situ SAGD bitumen recovery facility. [ 2 ] [ 17 ] It was here that their first test of twin (horizontal) SAGD wells took place, proving the feasibility of the concept and briefly achieving positive cash flow in 1992 at a production rate of about 2,000 barrels per day (320 m 3 /d) from three well pairs. The Foster Creek plant in Alberta, Canada, built in 1996 and operated by Cenovus Energy , was the first commercial steam-assisted gravity drainage (SAGD) project, and by 2010 Foster Creek "became the largest commercial SAGD project in Alberta to reach royalty payout status." [ 17 ] [ 18 ] The original UTF SAGD wells were drilled horizontally from a tunnel in the limestone underburden, accessed with vertical mine shafts . The concept coincided with the development of directional drilling techniques that allowed companies to drill horizontal wells accurately, cheaply and efficiently, to the point that it became hard to justify drilling a conventional vertical well any more. With the low cost of drilling horizontal well pairs, and the very high recovery rates of the SAGD process (up to 60% of the oil in place), SAGD is economically attractive to oil companies. At Foster Creek, Cenovus has employed its patented [ 19 ] 'wedge well' technology to recover residual resources bypassed by regular SAGD operations; this improves the total recovery rate of the operation.
The 'wedge well' technology accesses the residual bitumen that is bypassed in regular SAGD operations by drilling an infill well between two established operating SAGD well pairs once the SAGD steam chambers have matured to the point where they have merged and are in fluid communication; what is then left to recover in that reservoir area between the operating SAGD well pairs is a 'wedge' of residual, bypassed oil. Wedge well technology has been shown to improve overall recovery rates by 5%–10% at a reduced capital cost, since less steam is required once the steam chambers mature to the point where they are in fluid communication. Typically at this stage in the recovery process, also commonly known as the 'blow down' phase, [ 20 ] the injected steam is replaced with a non-condensable gas such as methane, further reducing production costs. [ 21 ] This technology was not at first commercially viable; it became so with the increased oil prices of the 2000s. While traditional drilling methods were prevalent up until the 1990s, the high crude prices of the 21st century have encouraged more unconventional methods (such as SAGD) of extracting crude oil. The Canadian oil sands have many SAGD projects in progress, since this region is home to one of the largest deposits of bitumen in the world ( Canada and Venezuela have the world's largest deposits). The SAGD process allowed the Alberta Energy Resources Conservation Board (ERCB) to increase the province's proven oil reserves to 179 billion barrels, which raised Canada's oil reserves to the third highest in the world after Venezuela and Saudi Arabia and approximately quadrupled North American oil reserves. As of 2011, the oil sands reserves stood at around 169 billion barrels. SAGD, a thermal recovery process, consumes large quantities of water and natural gas. One assessment noted that "Petroleum from the Canadian oil sands extracted via surface mining techniques can consume 20 times more water than conventional oil drilling", but, as a specific example of an underlying data weakness, that figure excluded the increasingly important SAGD method, and by 2011 there was still inadequate data on the amount of water used in SAGD. [ 5 ] : 4 Evaporators can treat the SAGD produced water to produce high quality freshwater for reuse in SAGD operations. [ 22 ] However, evaporators produce a high volume of blowdown waste which requires further management. [ 22 ] As in all thermal recovery processes, the cost of steam generation is a major part of the cost of oil production. Historically, natural gas has been used as a fuel for Canadian oil sands projects, due to the presence of large stranded gas reserves in the oil sands area. However, with the building of natural gas pipelines to outside markets in Canada and the United States, the price of gas has become an important consideration. The fact that natural gas production in Canada has peaked and is now declining is also a problem. Other sources of heat are under consideration, notably gasification of the heavy fractions of the produced bitumen to produce syngas , use of the nearby (and massive) deposits of coal , or even building nuclear reactors to supply the heat.
A source of large amounts of fresh and brackish water, together with large water-recycling facilities, is required in order to create the steam for the SAGD process. Water use and management are a frequent topic of debate. As of 2008, American petroleum production (not limited to SAGD) generated over 5 billion gallons of produced water every day. [ 23 ] [ 24 ] The concern about using large amounts of water has little to do with the proportion of water used and more to do with the quality of the water. Traditionally, close to 70 million cubic metres of the water used in the SAGD process was fresh surface water. There has been a significant reduction in fresh water use as of 2010, when approximately 18 million cubic metres were used. To offset the drastic reduction in fresh water use, however, industry has begun to significantly increase the volume of saline groundwater involved. This, as well as other, more general water-saving techniques, has allowed surface water usage by oil sands operations to decrease by more than threefold since production first began. [ 25 ] Relying upon gravity drainage, SAGD also requires comparatively thick and homogeneous reservoirs, and so is not suitable for all heavy-oil production areas. By 2009 the two commercially applied primary thermal recovery processes, steam-assisted gravity drainage (SAGD) and cyclic steam stimulation (CSS), were used in oil sands production in the Clearwater and Lower Grand Rapids Formations in the Cold Lake area of Alberta. [ 4 ] Canadian Natural Resources employs cyclic steam or "huff and puff" technology to develop bitumen resources. This technology requires only one wellbore; steam is injected to fracture and heat the formation prior to the production phase. First, steam is injected above the formation fracture point for several weeks or months, mobilizing the cold bitumen; the well is then shut in for several weeks or months to allow the steam to soak into the formation. Then the flow in the injection well is reversed, producing oil through the same wellbore. The injection and production phases together comprise one cycle. Steam is re-injected to begin a new cycle when oil production rates fall below a critical threshold due to the cooling of the reservoir. [ 26 ] Cyclic steam stimulation also has a number of CSS follow-up or enhancement processes, including Pressure Up and Blow Down (PUBD), Mixed Well Steam Drive and Drainage (MWSDD), Vapor Extraction (Vapex), Liquid Addition to Steam for Enhanced Recovery of Bitumen (LASER), and HPCSS-assisted SAGD and hybrid processes. [ 4 ] "Roughly 35 per cent of all in situ production in the Alberta oil sands uses a technique called high pressure cyclic steam stimulation (HPCSS), which cycles between two phases: first, steam is injected into an underground oil sands deposit to fracture and heat the formation to soften the bitumen just like CSS does, excepting at even higher pressures; then, the cycle switches to production where the resulting hot mixture of bitumen and steam (called a "bitumen emulsion") is pumped up to the surface through the same well, again just like CSS, until the resulting pressure drop slows production to an uneconomical stage. The process is then repeated multiple times." [ 27 ] An Alberta Energy Regulator (AER) news release explained the difference between high pressure cyclic steam stimulation (HPCSS) and steam assisted gravity drainage (SAGD): "HPCSS has been used in oil recovery in Alberta for more than 30 years.
The method involves injecting high-pressure steam, well above the ambient reservoir pressure, into a reservoir over a prolonged period of time. As heat softens the bitumen and water dilutes and separates the bitumen from the sand, the pressure creates fractures, cracks and openings through which the bitumen can flow back into the steam-injector wells. HPCSS differs from steam assisted gravity drainage (SAGD) operations where steam is continuously injected at lower pressures without fracturing the reservoir and uses gravity drainage as the primary recovery mechanism." [ 28 ] In some cases satellite radar InSAR techniques were used to monitor surface deformation associated with the oil extraction. [ 29 ] In the Clearwater Formation near Cold Lake, Alberta, high pressure cyclic steam stimulation (HPCSS) is used. [ 4 ] There are both horizontal and vertical wells, and injection is at fracture pressure. Horizontal wells are spaced 60 m to 180 m apart, and vertical wells at 2 to 8 acre spacing. Development can occur in as little as 7 m of net pay, generally in areas with no or minimal bottom water or top gas. The cumulative steam–oil ratio (CSOR) is 3.3 to 4.5, and the ultimate recovery is predicted at 15 to 35%. [ 4 ] The SAGD thermal recovery method is also used in the Clearwater and Lower Grand Rapids Formations, with horizontal well pairs 700 to 1000 m long, operating pressures of 3 to 5 MPa (the Burnt Lake SAGD project was started at a higher operating pressure, close to the dilation pressure), 75 m to 120 m well spacing, and development in as little as 10 m of net pay, in areas with or without bottom water. The CSOR is 2.8 to 4.0 (at 100% steam quality), and the predicted ultimate recovery is 45% to 55%. [ 4 ] Canadian Natural Resources Limited's (CNRL) Primrose and Wolf Lake in situ oil sands project near Cold Lake, Alberta in the Clearwater Formation , operated by CNRL subsidiary Horizon Oil Sands , uses high pressure cyclic steam stimulation (HPCSS). [ 4 ] Alternative enhanced oil recovery mechanisms include VAPEX (Vapor Assisted Petroleum Extraction), the Electro-Thermal Dynamic Stripping Process (ET-DSP) , and ISC (In Situ Combustion). VAPEX, a "gravity-drainage process that uses vapourized solvents rather than steam to displace or produce heavy oil and reduce its viscosity", was also invented by Butler. [ 30 ] ET-DSP is a patented process that uses electricity to heat oil sands deposits to mobilize bitumen, allowing production using simple vertical wells. ISC uses oxygen to generate heat that diminishes oil viscosity; this heat, alongside the carbon dioxide generated from the heavy crude oil, displaces oil toward production wells. One ISC approach is called THAI, for Toe to Heel Air Injection. The THAI facility in Saskatchewan was purchased in 2017 by Proton Technologies Canada Inc., which has demonstrated separation of pure hydrogen at this site. Proton's goal is to leave the carbon in the ground and extract only the hydrogen from hydrocarbons. [ 30 ] eMSAGP ("enhanced Modified Steam and Gas Push") is a process patented by MEG Energy [ 31 ] and developed in partnership with Cenovus. [ 32 ] It is a modification of SAGP designed to improve the thermal efficiency of SAGD by utilizing additional producers located midway between adjacent SAGD well pairs, at the elevation of the SAGD producers. These additional producers, commonly referred to as "infill" wells, are an integral part of the eMSAGP recovery system.
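As a small illustrative sketch (in Python, with entirely hypothetical operating figures) of two quantities used above, the producer sub-cool and the cumulative steam–oil ratio (CSOR) can be estimated as follows; the saturation temperatures are approximate steam-table values:

# All operating figures below are hypothetical example values.
# Approximate saturation temperatures of water (standard steam-table values):
saturation_table = {2.0: 212.4, 3.0: 233.9, 4.0: 250.4, 5.0: 263.9}   # MPa -> degrees C

def saturation_temperature(p_mpa):
    """Linear interpolation in the small steam-table excerpt above."""
    pressures = sorted(saturation_table)
    for p_lo, p_hi in zip(pressures, pressures[1:]):
        if p_lo <= p_mpa <= p_hi:
            t_lo, t_hi = saturation_table[p_lo], saturation_table[p_hi]
            return t_lo + (t_hi - t_lo) * (p_mpa - p_lo) / (p_hi - p_lo)
    raise ValueError("pressure outside the tabulated range")

producer_pressure = 3.5          # MPa, hypothetical operating pressure
measured_temperature = 230.0     # degrees C, hypothetical temperature at the producer
subcool = saturation_temperature(producer_pressure) - measured_temperature
print(f"sub-cool = {subcool:.1f} C")

# CSOR: cumulative steam injected (cold-water equivalent) per unit of oil produced.
steam_injected_m3 = 1.5e6        # hypothetical cumulative volumes
oil_produced_m3 = 0.5e6
print(f"CSOR = {steam_injected_m3 / oil_produced_m3:.1f}")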
https://en.wikipedia.org/wiki/Steam-assisted_gravity_drainage
The steam digester or bone digester (also known as Papin's digester ) is a high-pressure cooker invented by French physicist Denis Papin in 1679. It is a device for extracting fats from bones in a high-pressure steam environment, which also renders them brittle enough to be easily ground into bone meal . It is the forerunner of the autoclave and the domestic pressure cooker . [ 1 ] The steam-release valve, which was invented for Papin's digester following various explosions of the earlier models, inspired the development of the piston-and-cylinder steam engine . [ 2 ] The artificial vacuum was first produced in 1643 by Italian scientist Evangelista Torricelli and further developed by German scientist Otto von Guericke with his Magdeburg hemispheres . Guericke's demonstration was documented by Gaspar Schott , in a book that was read by Robert Boyle . Boyle and his assistant Robert Hooke improved Guericke's air pump design and built their own. From this, through various experiments, they formulated what is called Boyle's law , which states that the volume of a body of an ideal gas is inversely proportional to its pressure. Later, Jacques Charles formulated Charles' Law , which states that the volume of a gas at a constant pressure is proportional to its temperature. Boyle's and Charles' Laws were combined into the ideal gas law . Based on these concepts, in 1679 Boyle's associate Denis Papin built a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically moving up and down, Papin conceived the idea of a piston and cylinder engine. He did not, however, follow through with his design. In 1697, independently of Papin's designs, engineer Thomas Savery built the world's first steam engine. [ 3 ] By 1712 an improved design based on Papin's ideas was developed by Thomas Newcomen . [ 4 ] Boyle speaks of Papin as having gone to England in the hope of finding a place in which he could satisfactorily pursue his favorite studies. Boyle himself had already been long engaged in the study of pneumatics, and had been especially interested in the investigations which had been original with Guericke. He admitted young Papin into his laboratory, and the two philosophers worked together at these attractive problems. Papin probably invented his "Digester" while in England, and it was first described in a brochure written in English, under the title "The New Digester." It was subsequently published in Paris. This was a vessel with a safety valve, which could be tightly closed by a lid and a screw. Food can be cooked along with water in the vessel when the vessel is heated, and the vessel's internal temperature can be raised as far as the pressure inside the vessel will safely permit. The maximum pressure is limited by a weight placed on the safety valve lever. If the pressure exceeds this limit, the safety valve will be forced open and steam will escape until the pressure drops sufficiently for the weight to close the valve again. It is probable that this essential attachment to the steam boiler had previously been used for other purposes; but Papin is given the credit of having first made use of it to control the pressure of steam.
[ 5 ] In 1787, Antoine Lavoisier , in his Elements of Chemistry , refers to "Papin's digester" as an example of an environment in which high pressure prevents evaporation, explaining that the pressure caused by evaporation of the fluid prevents further evaporation. [ 6 ]
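As a rough sketch of how the weighted safety-valve lever described above limits the pressure in a digester, the following Python fragment (all dimensions, weights and the lever ratio are hypothetical) estimates the gauge pressure at which the valve lifts and a lower bound on the water temperature that can be reached:

import math

# The valve lifts when the steam force on the valve orifice exceeds the moment
# exerted by the weight on the lever.  All figures are hypothetical examples.
orifice_diameter = 0.010      # m, hypothetical valve orifice
weight_mass = 2.0             # kg, hypothetical weight on the lever
lever_ratio = 4.0             # (weight distance from pivot) / (valve distance from pivot)

area = math.pi * (orifice_diameter / 2) ** 2
relief_gauge_pressure = weight_mass * 9.81 * lever_ratio / area   # Pa above atmospheric
relief_absolute = relief_gauge_pressure + 101_325                 # Pa

# Approximate boiling temperature of water at that pressure (steam-table values):
sat_table = [(101_325, 100.0), (200_000, 120.2), (300_000, 133.5),
             (400_000, 143.6), (500_000, 151.8)]
boiling = next(t for p, t in reversed(sat_table) if relief_absolute >= p)

print(f"valve lifts at about {relief_gauge_pressure / 1e5:.1f} bar gauge; "
      f"water can be held to at least {boiling} C before the valve opens")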
https://en.wikipedia.org/wiki/Steam_digester
Steam distillation is a separation process that consists of distilling water together with other volatile and non-volatile components. The steam from the boiling water carries the vapor of the volatiles to a condenser ; both are cooled and return to the liquid or solid state, while the non-volatile residues remain behind in the boiling container. If, as is usually the case, the volatiles are not miscible with water, they will spontaneously form a distinct phase after condensation, allowing them to be separated by decantation or with a separatory funnel . [ 1 ] Steam distillation can be used when the boiling point of the substance to be extracted is higher than that of water, and the starting material cannot be heated to that temperature because of decomposition or other unwanted reactions. It may also be useful when the amount of the desired substance is small compared to that of the non-volatile residues. It is often used to separate volatile essential oils from plant material; [ 2 ] for example, to extract limonene (boiling point 176 °C) from orange peels . Steam distillation once was a popular laboratory method for purification of organic compounds, but it has been replaced in many such uses by vacuum distillation and supercritical fluid extraction . It is however much simpler and more economical than those alternatives, and remains important in certain industrial sectors. [ 3 ] In the simplest form, water distillation or hydrodistillation , the water is mixed with the starting material in the boiling container. In direct steam distillation , the starting material is suspended above the water in the boiling flask, supported by a metal mesh or perforated screen. In dry steam distillation , the steam from a boiler is forced to flow through the starting material in a separate container. The latter variant allows the steam to be heated above the boiling point of water (thus becoming superheated steam ), for more efficient extraction. [ 4 ] Steam distillation is used in many of the recipes given in the Kitāb al-Taraffuq fī al-ʿiṭr ('Book of Gentleness on Perfume'), also known as the Kitāb Kīmiyāʾ al-ʿiṭr wa-l-taṣʿīdāt ('Book of the Chemistry of Perfume and Distillations'), attributed to the early Arabic philosopher al-Kindi ( c. 801 –873). [ 5 ] Steam distillation was also used by the Persian philosopher and physician Avicenna (980–1037) to produce essential oils by adding water to rose petals and distilling the mixture. [ 6 ] The process was also used by al-Dimashqi (1256–1327) to produce rose water on a large scale. [ 7 ] Every substance has some vapor pressure even below its boiling point, so in theory it could be distilled at any temperature by collecting and condensing its vapors. However, ordinary distillation below the boiling point is not practical because a layer of vapor-rich air would form over the liquid, and evaporation would stop as soon as the partial pressure of the vapor in that layer reached the vapor pressure. The vapor would then flow to the condenser only by diffusion, which is an extremely slow process. Simple distillation is generally done by boiling the starting material because, once its vapor pressure exceeds atmospheric pressure, that still, vapor-rich layer of air is disrupted and there is a significant and steady flow of vapor from the boiling flask to the condenser. In steam distillation, that positive flow is provided by steam from boiling water, rather than by the boiling of the substances of interest. The steam carries with it the vapors of the latter.
The substance of interest does not need to be miscible with water or soluble in it. It suffices that it has significant vapor pressure at the steam's temperature. If the water forms an azeotrope with the substances of interest, the boiling point of the mixture may be lower than the boiling point of water. For example, bromobenzene boils at 156 °C (at normal atmospheric pressure), but a mixture with water boils at 95 °C. [ 8 ] However, the formation of an azeotrope is not necessary for steam distillation to work. Steam distillation is often employed in the isolation of essential oils , for use in perfumes , for example. In this method, steam is passed through the plant material containing the desired oils. Eucalyptus oil , camphor oil and orange oil are obtained by this method on an industrial scale. [ 2 ] Steam distillation is a means of purifying fatty acids, e.g. from tall oils . [ 9 ] Steam distillation is sometimes used in the chemical laboratory . Illustrative is a classic preparation of bromobiphenyl, where steam distillation is used first to remove the excess benzene and subsequently to purify the brominated product. [ 10 ] In one preparation of benzophenone , steam is employed first to recover unreacted carbon tetrachloride and subsequently to hydrolyze the intermediate benzophenone dichloride into benzophenone, which is in fact not steam distilled. [ 11 ] In one preparation of a purine , steam distillation is used to remove volatile benzaldehyde from the nonvolatile product. [ 12 ] On a lab scale, steam distillations are carried out using steam generated outside the system and piped through the mixture to be purified. [ 14 ] [ 1 ] Steam can also be generated in-situ using a Clevenger-type apparatus. [ 15 ] The Likens-Nickerson apparatus simultaneously performs steam distillation and extraction. It is typically used to isolate target organic compounds for further analysis. [ 16 ]
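The steam-distillation condition described above can be made quantitative: two immiscible liquids co-distil at the temperature where their pure-component vapor pressures sum to the ambient pressure. In the Python sketch below, the Antoine constants for water are standard textbook values (pressure in mmHg, temperature in °C), while the constants for the "organic" component are placeholder values for a hypothetical, sparingly volatile compound rather than data for any real substance:

# Find the co-distillation temperature where p_water(T) + p_organic(T) = ambient pressure.
def p_water(t):
    return 10 ** (8.07131 - 1730.63 / (233.426 + t))       # Antoine equation for water, mmHg

def p_organic(t):
    return 10 ** (7.0 - 1500.0 / (220.0 + t))               # hypothetical Antoine constants

ambient = 760.0   # mmHg

# Bisection for the co-distillation temperature between 50 C and 100 C.
lo, hi = 50.0, 100.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if p_water(mid) + p_organic(mid) < ambient:
        lo = mid
    else:
        hi = mid

t_codistil = 0.5 * (lo + hi)
mole_ratio = p_organic(t_codistil) / p_water(t_codistil)    # organic : water in the vapor
print(f"co-distillation temperature = {t_codistil:.1f} C (below water's 100 C boiling point)")
print(f"vapor composition = {mole_ratio:.2f} mol organic per mol water")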
https://en.wikipedia.org/wiki/Steam_distillation
A steam explosion is an explosion caused by violent boiling or flashing of water or ice into steam , occurring when water or ice is either superheated , rapidly heated by fine hot debris produced within it, or heated by the interaction of molten metals (as in a fuel–coolant interaction, or FCI, of molten nuclear-reactor fuel rods with water in a nuclear reactor core following a core-meltdown ). Steam explosions are instances of explosive boiling . Pressure vessels, such as pressurized water (nuclear) reactors , that operate above atmospheric pressure can also provide the conditions for a steam explosion. The water changes from a solid or liquid to a gas with extreme speed, increasing dramatically in volume (a rough numerical illustration of this expansion appears below). A steam explosion sprays steam and boiling-hot water and the hot medium that heated it in all directions (if not otherwise confined, e.g. by the walls of a container), creating a danger of scalding and burning. Steam explosions are not normally chemical explosions , although a number of substances react chemically with steam (for example, zirconium and superheated graphite (impure carbon, C) react with steam to give off hydrogen (H 2 ), which may burn or explode violently when mixed with air (O 2 ) to form water (H 2 O)), so that chemical explosions and fires may follow. Some steam explosions appear to be special kinds of boiling liquid expanding vapor explosion (BLEVE), and rely on the release of stored superheat. But many large-scale events, including foundry accidents, show evidence of an energy-release front propagating through the material (see description of FCI below), where the forces create fragments and mix the hot phase into the cold volatile one; and the rapid heat transfer at the front sustains the propagation. High steam generation rates can occur under other circumstances, such as boiler-drum failure, or at a quench front (for example when water re-enters a hot dry boiler). Though potentially damaging, they are usually less energetic than events in which the hot ("fuel") phase is molten and so can be finely fragmented within the volatile ("coolant") phase. Some examples follow: Steam explosions are naturally produced by certain volcanoes , especially stratovolcanoes , and are a major cause of human fatalities in volcanic eruptions. They are often encountered where hot lava meets sea water or ice. Such an occurrence is also called a littoral explosion . A dangerous steam explosion can also be created when liquid water or ice encounters hot, molten metal. As the water explodes into steam, it splashes the burning hot liquid metal along with it, causing an extreme risk of severe burns to anyone located nearby and creating a fire hazard. When a pressurized container such as the waterside of a steam boiler ruptures, it is always followed by some degree of steam explosion. A common operating temperature and pressure for a marine boiler is around 950 psi (6,600 kPa) and 850 °F (454 °C) at the outlet of the superheater. A steam boiler has an interface of steam and water in the steam drum, which is where the water is finally evaporating due to the heat input, usually oil-fired burners. When a water tube fails for any of a variety of reasons, the water in the boiler expands out of the opening into the furnace area, which is only a few psi above atmospheric pressure. This will likely extinguish all fires, and the escaping mixture expands over the large surface area on the sides of the boiler.
To decrease the likelihood of a devastating explosion, boilers have gone from the " fire-tube " designs, where the heat was added by passing hot gases through tubes in a body of water, to " water-tube " boilers that have the water inside the tubes and the furnace area around the tubes. Old "fire-tube" boilers often failed due to poor build quality or lack of maintenance (such as corrosion of the fire tubes, or fatigue of the boiler shell due to constant expansion and contraction). A failure of fire tubes forces large volumes of high pressure, high temperature steam back down the fire tubes in a fraction of a second and often blows the burners off the front of the boiler, whereas a failure of the pressure vessel surrounding the water would lead to a complete evacuation of the boiler's contents in a large steam explosion. On a marine boiler, this would certainly destroy the ship's propulsion plant and possibly the corresponding end of the ship. Tanks containing crude oil and certain commercial oil cuts, such as some diesel oils and kerosene , may be subject to boilover , an extremely hazardous situation in which a water layer under an open-top tank pool fire starts boiling, which results in a significant increase in fire intensity accompanied by violent expulsion of burning fluid to the surrounding areas. In many cases, the underlying water layer is superheated , in which case part of it goes through explosive boiling. When this happens, the abruptness of the expansion further enhances the expulsion of blazing fuel. [ 1 ] [ 2 ] [ 3 ] Events of this general type are also possible if the fuel and fuel elements of a water-cooled nuclear reactor gradually melt. The mixture of molten core structures and fuel is often referred to as "corium". If such corium comes into contact with water, vapour explosions may occur from the violent interaction between the molten fuel (corium) and the water coolant. Such explosions are seen to be fuel–coolant interactions (FCI). [ 4 ] [ 5 ] The severity of a steam explosion based on fuel-coolant interaction (FCI) depends strongly on the so-called premixing process, which describes the mixing of the melt with the surrounding water-steam mixture. In general, water-rich premixtures are considered more favorable than steam-rich environments in terms of steam explosion initiation and strength. The theoretical maximum strength of a steam explosion from a given mass of molten corium, which can never be achieved in practice, corresponds to an optimal distribution of the melt as molten corium droplets of a certain size, each surrounded by a suitable volume of water; in principle, this volume follows from the maximum possible mass of water that could be vaporized by instantaneous heat exchange between a molten droplet fragmenting in a shock wave and the surrounding water. On the basis of this very conservative assumption, calculations for alpha-mode containment failure were carried out by Theofanous. [ 6 ] However, these optimal conditions used for conservative estimates do not occur in the real world. For one thing, the entire molten reactor core will never be in the premixture, but only a part of it, e.g. a jet of molten corium impinging on a water pool in the lower plenum of the reactor, fragmenting there by ablation and thereby allowing the formation of a premixture in the vicinity of the melt jet falling through the water pool.
Alternatively, the melt may arrive as a thick jet at the bottom of the lower plenum, where it forms a pool of melt overlaid by a pool of water. In this case, a premixing zone can form at the interface between the melt pool and the water pool. In both cases, it is clear that far from the entire molten reactor inventory is involved in premixing, but rather only a small percentage. Further limitations arise from the saturated nature of the water in the reactor, i.e., water with appreciable subcooling is not present there. When a fragmenting melt jet penetrates such water, this leads to increasing evaporation and an increasing steam content in the premixture, which, above a content of about 70% in the water/steam mixture, prevents the explosion altogether or at least limits its strength. Another counter-effect is the solidification of the molten particles, which depends, among other things, on the diameter of the molten particles; that is, small particles solidify faster than larger ones. Furthermore, the models for instability growth at interfaces between flowing media (e.g. Kelvin-Helmholtz, Rayleigh-Taylor, Conte-Miles, ...) show a correlation between particle size after fragmentation and the ratio of the density of the fragmenting medium (water-vapor mixture) to the density of the fragmented medium, which can also be demonstrated experimentally. In the case of corium (density ~ 8000 kg/m³), much smaller droplets (~ 3–4 mm) result than when alumina (Al2O3), with a density just under half that of corium, is used as a corium simulant, giving droplet sizes in the range of 1–2 cm. Jet fragmentation experiments conducted at JRC ISPRA under typical reactor conditions, with masses of molten corium up to 200 kg and melt jet diameters of 5–10 cm in pools of saturated water up to 2 m deep, produced steam explosions only when Al2O3 was used as the corium simulant. Despite various efforts on the part of the experimenters, it was never possible to trigger a steam explosion in the corium experiments in FARO. If a steam explosion occurs in a confined tank of water due to rapid heating of the water, the pressure wave and rapidly expanding steam can cause severe water hammer . This was the mechanism that, in Idaho, USA, in January 1961, caused the SL-1 nuclear reactor vessel to jump over 9 feet (2.7 m) in the air when operator error led to a criticality accident that destroyed the reactor; in the case of SL-1, the fuel and fuel elements vaporized from instantaneous overheating. The 1986 Chernobyl nuclear disaster in the Soviet Union was feared to cause a major steam explosion (and resulting Europe -wide nuclear fallout ) if the lava -like nuclear fuel melted through the reactor 's basement and came into contact with residual fire-fighting water and groundwater . The threat was averted by frantic tunneling underneath the reactor in order to pump out water and reinforce the underlying soil with concrete . In a nuclear meltdown , the most severe outcome of a steam explosion is early containment building failure. Two possibilities are the ejection at high pressure of molten fuel into the containment, causing rapid heating; or an in-vessel steam explosion causing ejection of a missile (such as the upper head ) into, and through, the containment.
Less dramatic but still significant is that the molten mass of fuel and reactor core melts through the floor of the reactor building and reaches ground water ; a steam explosion might occur, but the debris would probably be contained, and would in fact, being dispersed, probably be more easily cooled. See WASH-1400 for details. Molten aluminium produces a strong exothermic reaction with water, which is observed in some building fires. [ 7 ] [ 8 ] In a more domestic setting, steam explosions can be a result of trying to extinguish burning oil with water, in a process called slopover . When oil in a pan is on fire, the natural impulse may be to extinguish it with water; however, doing so will cause the hot oil to superheat the water. The resulting steam will disperse upwards and outwards rapidly and violently in a spray also containing the ignited oil. The correct method to extinguish such fires is to use either a damp cloth or a tight lid on the pan; both methods deprive the fire of oxygen , and the cloth also cools it down. Alternatively, a non-volatile purpose designed fire retardant agent or simply a fire blanket can be used. Steam explosive biorefinement is an industrial application to valorize biomass. It involves pressurizing biomass with steam at up to 3 MPa (30 atmospheres) and instantaneously releasing the pressure to produce the desired transformation in the biomass. An industrial application of the concept has been shown for a paper fiber project. [ 9 ] [ 10 ] A water vapor explosion creates a high volume of gas without producing environmentally harmful leftovers. The controlled explosion of water has been used for generating steam in power stations and in modern types of steam turbines . Newer steam engines use heated oil to force drops of water to explode and create high pressure in a controlled chamber. The pressure is then used to run a turbine or a converted combustion engine. Hot oil and water explosions are becoming particularly popular in concentrated solar generators, because the water can be separated from the oil in a closed loop without any external energy. Water explosion is considered to be environmentally friendly if the heat is generated by a renewable resource. A cooking technique called flash boiling uses a small amount of water to quicken the process of boiling. For example, this technique can be used to melt a slice of cheese onto a hamburger patty. The cheese slice is placed on top of the meat on a hot surface such as a frying pan, and a small quantity of cold water is thrown onto the surface near the patty. A vessel (such as a pot or frying-pan cover) is then used to quickly seal the steam-flash reaction, dispersing much of the steamed water on the cheese and patty. This results in a large release of heat, transferred via vaporized water condensing back into a liquid (a principle also used in refrigerator and freezer production). Internal combustion engines may use flash-boiling to aerosolize the fuel. [ 11 ]
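As a rough closing illustration of the volume change that drives all of these phenomena, the Python sketch below (using approximate saturation densities at 100 °C and 1 atm) estimates the expansion when liquid water flashes to steam:

# Rough illustration of why flashing water is so destructive: the volume increase
# when liquid water becomes steam at atmospheric pressure.  Densities are
# approximate textbook values at 100 C and 1 atm.
rho_liquid = 958.0    # kg/m^3, saturated liquid water at 100 C
rho_steam = 0.598     # kg/m^3, saturated steam at 100 C and 1 atm

mass = 1.0            # kg of water, as an example
v_liquid = mass / rho_liquid
v_steam = mass / rho_steam

print(f"1 kg of water: {v_liquid * 1000:.2f} L as liquid -> {v_steam * 1000:.0f} L as steam")
print(f"expansion factor of roughly {v_steam / v_liquid:.0f}x")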
https://en.wikipedia.org/wiki/Steam_explosion
A steam hammer , also called a drop hammer , is an industrial power hammer driven by steam that is used for tasks such as shaping forgings and driving piles. Typically the hammer is attached to a piston that slides within a fixed cylinder , but in some designs the hammer is attached to a cylinder that slides along a fixed piston. The concept of the steam hammer was described by James Watt in 1784, but it was not until 1840 that the first working steam hammer was built to meet the needs of forging increasingly large iron or steel components. In 1843 there was an acrimonious dispute between François Bourdon of France and James Nasmyth of Britain over who had invented the machine. Bourdon had built the first working machine, but Nasmyth claimed it was built from a copy of his design. Steam hammers proved to be invaluable in many industrial processes. Technical improvements gave greater control over the force delivered, greater longevity, greater efficiency and greater power. A steam hammer built in 1891 by the Bethlehem Iron Company delivered a 125-ton blow. In the 20th century steam hammers were gradually displaced in forging by mechanical and hydraulic presses, but some are still in use. Compressed air power hammers, descendants of the early steam hammers, are still manufactured. A single-acting steam hammer is raised by the pressure of steam injected into the lower part of a cylinder and drops under gravity when the pressure is released. With the more common double-acting steam hammer, steam is also used to push the ram down, giving a more powerful blow at the die. [ 1 ] The weight of the ram may range from 225 to 22,500 kg (500 to 50,000 lb). [ 2 ] The piece being worked is placed between a bottom die resting on an anvil block and a top die attached to the ram (hammer). [ 3 ] Hammers are subject to repeated concussion, which could cause fracturing of cast iron components. The early hammers were therefore made from a number of parts bolted together. This made it cheaper to replace broken parts, and also gave it a degree of elasticity that made fractures less likely. [ 4 ] A steam hammer may have one or two supporting frames. The single frame design lets the operator move around the dies more easily, while the double frame can support a more powerful hammer. The frame(s) and the anvil block are mounted on wooden beams that protect the concrete foundations by absorbing the shock. [ 3 ] Deep foundations are needed, but a large steam drop hammer will still shake the building that holds it. This may be solved with a counterblow steam hammer, in which two converging rams drive the top and bottom dies together. The upper ram is driven down and the lower ram is pulled or driven up. These hammers produce a large impact and can make large forgings. [ 5 ] They can be installed with smaller foundations than anvil hammers of similar force. [ 6 ] Counterblow hammers are not often used in the United States, but are common in Europe. [ 7 ] With some early steam hammers an operator moved the valves by hand, controlling each blow. With others the valve action was automatic, allowing for rapid repetitive hammering. Automatic hammers could give an elastic blow, where steam cushioned the piston towards the end of the down stroke, or a dead blow with no cushioning. The elastic blow gave a quicker rate of hammering, but less force than the dead blow. [ 8 ] Machines were built that could run in either mode according to the job requirement. 
[ 9 ] The force of the blow could be controlled by varying the amount of steam introduced to cushion the blow. [ 10 ] A modern air/steam hammer can deliver up to 300 blows per minute. [ 11 ] The possibility of a steam hammer was noted by James Watt (1736–1819) in his 28 April 1784 patent for an improved steam engine. [ 12 ] Watt described "Heavy Hammers or Stampers, for forging or stamping iron, copper, or other metals, or other matters without the intervention of rotative motions or wheels, by fixing the Hammer or Stamper to be so worked, either directly to the piston or piston rod of the engine." [ 13 ] Watt's design had the cylinder at one end of a wooden beam and the hammer at the other. The hammer did not move vertically, but in the arc of a circle. [ 14 ] On 6 June 1806 W. Deverell, engineer of Surrey, filed a patent for a steam-powered hammer or stamper. The hammer would be welded to a piston rod contained in a cylinder. Steam from a boiler would be let in under the piston, raising it and compressing the air above it. The steam would then be released and the compressed air would force the piston down. [ 13 ] In August 1827 John Hague was awarded a patent for a method of working cranes and tilt-hammers driven by a piston in an oscillating cylinder where air power supplied the motive force. A partial vacuum was made in one end of a long cylinder by an air pump worked by a steam engine or some other power source, and atmospheric pressure drove the piston into that end of the cylinder. When a valve was reversed, the vacuum was formed in the other end and the piston forced in the opposite direction. [ 15 ] Hague made a hammer to this design for planishing frying pans. Many years later, when discussing the advantages of air over steam for delivering power, it was recalled that Hague's air hammer "worked with such an extraordinary rapidity that it was impossible to see where the hammer was in working, and the effect was seemed more like giving one continuous pressure." However, it was not possible to regulate the force of the blows. [ 16 ] It seems probable that the Scottish Engineer James Nasmyth (1808–1890) and his French counterpart François Bourdon (1797–1865) reinvented the steam hammer independently in 1839, both trying to solve the same problem of forging shafts and cranks for the increasingly large steam engines used in locomotives and paddle boats. [ 17 ] In Nasmyth's 1883 "autobiography", written by Samuel Smiles , he described how the need arose for a paddle shaft for Isambard Kingdom Brunel 's new transatlantic steamer SS Great Britain , with a 30 inches (760 mm) diameter shaft, larger than any that had been previously forged. He came up with his steam hammer design, making a sketch dated 24 November 1839, but the immediate need disappeared when the practicality of screw propellers was demonstrated and the Great Britain was converted to that design. Nasmyth showed his design to all visitors. [ 18 ] Bourdon came up with the idea of what he called a "Pilon" in 1839 and made detailed drawings of his design, which he also showed to all engineers who visited the works at Le Creusot owned by the brothers Adolphe and Eugène Schneider . [ 18 ] However, the Schneiders hesitated to build Bourdon's radical new machine. Bourdon and Eugène Schneider visited the Nasmyth works in England in the middle of 1840, where they were shown Nasmyth's sketch. This confirmed the feasibility of the concept to Schneider. 
[ 17 ] In 1840 Bourdon built the first steam hammer in the world at the Schneider & Cie works at Le Creusot. It weighed 2,500 kilograms (5,500 lb) and lifted to a height of 2 metres (6 ft 7 in). The Schneiders patented the design in 1841. [ 19 ] Nasmyth visited Le Creusot in April 1842. By his account, Bourdon took him to the forge department so he might, as he said, "see his own child". Nasmyth said "there it was, in truth–a thumping child of my brain!" [ 18 ] After returning from France in 1842 Nasmyth built his first steam hammer in his Patricroft foundry in Manchester , England , adjacent to the (then new) Liverpool and Manchester Railway and the Bridgewater Canal . [ 20 ] In 1843 a dispute broke out between Nasmyth and Bourdon over priority of invention of the steam hammer. Nasmyth, an excellent publicist, managed to convince many people that he was the first. [ 21 ] Nasmyth's first steam hammer, described in his patent of 9 December 1842, was built for the Low Moor Works at Bradford. They rejected the machine, but on 18 August 1843 accepted an improved version with a self-acting gear. [ 22 ] Robert Wilson (1803–1882), who had also invented the screw propeller and was manager of Nasmyth's Bridgewater works, invented the self-acting motion that made it possible to adjust the force of the blow delivered by the hammer – a critically important improvement. [ 23 ] An early writer said of Wilson's gear, "... I would be prouder to say that I was the inventor of that motion, then to say I had commanded a regiment at Waterloo..." [ 22 ] Nasmyth's steam hammers could now vary the force of the blow across a wide range. Nasmyth was fond of breaking an egg placed in a wineglass without breaking the glass, followed by a blow that shook the building. [ 20 ] By 1868 engineers had introduced further improvements to the original design. John Condie's steam hammer, built for Fulton in Glasgow, had a stationary piston and a moving cylinder to which the hammer was attached. The piston was hollow, and was used to deliver steam to the cylinder and then remove it. The hammer weighed 6.5 tons with a stroke of 7.5 feet (2.3 m). [ 24 ] Condie steam hammers were used to forge the shafts of Isambard Kingdom Brunel's SS Great Eastern . [ 25 ] A high-speed compressed-air hammer was described in The Mechanics' Magazine in 1865, a variant of the steam hammer for use where steam power was not available or a very dry environment was required. [ 26 ] The Bowling Ironworks steam hammers had the steam cylinder bolted to the back of the hammer, thus reducing the height of the machine. [ 24 ] These were designed by John Charles Pearce, who took out a patent for his steam hammer design several years before Nasmyth's patent expired. [ 27 ] Marie-Joseph Farcot of Paris proposed a number of improvements, including an arrangement so the steam acted from above, increasing the striking force, improved valve arrangements, and the use of springs and material to absorb the shock and prevent breakage. [ 24 ] [ 28 ] John Ramsbottom invented a duplex hammer, with two rams moving horizontally towards a forging placed between them. [ 29 ] Using the same principles of operation, Nasmyth developed a steam-powered pile-driving machine . At its first use at Devonport , a dramatic contest was carried out. His engine drove a pile in four and a half minutes, compared with the twelve hours that the conventional method required. [ 30 ] It was soon found that a hammer with a relatively short fall height was more effective than a taller machine.
The shorter machine could deliver many more blows in a given time, driving the pile faster even though each blow was smaller. It also caused less damage to the pile. [ 31 ] Riveting machines designed by Garforth and Cook were based on the steam hammer. [ 32 ] The catalog for the Great Exhibition held in London in 1851 said of Garforth's design, "With this machine, one man and three boys can rivet with perfect ease, and in the firmest manner, at the rate of six rivets per minute, or three hundred and sixty per hour." [ 33 ] Other variants included crushers to help extract iron ore from quartz and a hammer to drive holes in the rock of a quarry to hold gunpowder charges. [ 32 ] An 1883 book on modern steam practice said The direct application of steam to forging hammers is beyond question the greatest improvement that has ever been made in forging machinery; not only has it simplified the operations that were carried on before its invention, but it has added many branches, and extended the art of forging, to purposes that could never have been attained except by the steam hammer. ... The steam hammer ... seems to be so perfectly adapted to fill the different conditions of power hammering that there seems nothing left to be desired... [ 34 ] Schneider & Co. built 110 steam hammers between 1843 and 1867 with different sizes and strike rates, but trending towards ever larger machines to handle the demands of large cannon, engine shafts and armor plate, with steel increasingly used in place of wrought iron. In 1861 the "Fritz" steam hammer came into operation at the Krupp works in Essen , Germany. With a 50-ton blow, for many years it was the most powerful in the world. [ 35 ] There is a story that the Fritz steam hammer took its name from a machinist named Fritz whom Alfred Krupp presented to the Emperor William when he visited the works in 1877. Krupp told the emperor that Fritz had such perfect control of the machine that he could let the hammer drop without harming an object placed on the center of the block. The Emperor immediately put his watch, which was studded with diamonds, on the block and motioned Fritz to start the hammer. When the machinist hesitated, Krupp told him "Fritz let fly!" He did as he was told, the watch was unharmed, and the emperor gave Fritz the watch as a gift. Krupp had the words "Fritz let fly!" engraved on the hammer. [ 36 ] The Schneiders eventually saw a need for a hammer of colossal proportions. [ 35 ] The Creusot steam hammer was a giant steam hammer built in 1877 by Schneider and Co. in the French industrial town of Le Creusot . With the ability to deliver a blow of up to 100 tons, the Creusot hammer was the largest and most powerful in the world. [ 37 ] A wooden replica was built for the Exposition Universelle (1878) in Paris. In 1891 the Bethlehem Iron Company of the United States purchased patent rights from Schneider and built a steam hammer of almost identical design but capable of delivering a 125-ton blow. [ 37 ] Eventually the great steam hammers became obsolete, displaced by hydraulic and mechanical presses. The presses applied force slowly and at a uniform rate, ensuring that the internal structure of the forging was uniform, without hidden internal flaws. [ 38 ] They were also cheaper to operate, not requiring steam to be blown off, and much cheaper to build, not requiring huge strong foundations. The 1877 Creusot steam hammer now stands as a monument in the Creusot town square. 
[ 38 ] An original Nasmyth hammer stands facing his foundry buildings (now a "business park"). A larger Nasmyth & Wilson steam hammer stands in the campus of the University of Bolton . Steam hammers continue to be used for driving piles into the ground. [ 1 ] Steam supplied by a circulating steam generator is more efficient than air. [ 39 ] However, today compressed air is often used rather than steam. [ 31 ] As of 2013 manufacturers continued to sell air/steam pile-driving hammers. [ 40 ] Forging services suppliers also continue to use steam hammers of varying sizes based on classical designs. [ 41 ]
https://en.wikipedia.org/wiki/Steam_hammer
Steam injection is an increasingly common method of extracting heavy crude oil . Used commercially since the 1960s, [ 1 ] it is considered an enhanced oil recovery (EOR) method and is the main type of thermal stimulation of oil reservoirs. There are several different forms of the technology, with the two main ones being Cyclic Steam Stimulation and Steam Flooding. Both are most commonly applied to oil reservoirs which are relatively shallow and which contain crude oils that are very viscous at the temperature of the native underground formation. Steam injection is widely used in the San Joaquin Valley of California (US), the Lake Maracaibo area of Venezuela , and the oil sands of northern Alberta , Canada. [ 1 ] Another contributing factor that enhances oil production during steam injection is related to near-wellbore cleanup. In this case, steam reduces the viscosity that ties paraffins and asphaltenes to the rock surfaces, while steam distillation of crude oil light ends creates a small solvent bank that can miscibly remove trapped oil. [ 2 ] Cyclic steam stimulation, also known as the Huff and Puff method, consists of 3 stages: injection, soaking, and production. Steam is first injected into a well for a certain amount of time to heat the oil in the surrounding reservoir. Cyclic steam stimulation has been reported to recover approximately 20% of the original oil in place (OOIP), compared to steam assisted gravity drainage, which has been reported to recover over 50% of OOIP. It is quite common for wells to be produced in the cyclic steam manner for a few cycles before being put on a steam flooding regime with other wells. The mechanism proceeds through cycles of steam injection, soak, and oil production. First, steam is injected into a well at a temperature of 300 to 340 °C for a period of weeks to months. Next, the well is allowed to sit for days to weeks to allow heat to soak into the formation. Finally, the hot oil is pumped out of the well for a period of weeks or months. Once the production rate falls off, the well is put through another cycle of injection, soak and production. This process is repeated until the cost of injecting steam becomes higher than the money made from producing oil. [ 3 ] The CSS method has the advantage that recovery factors are around 20 to 25% and the disadvantage that the cost to inject steam is high. Canadian Natural Resources "employs cyclic steam or 'huff and puff' technology to develop bitumen resources". This technology requires one well bore, and the production consists of the injection and production phases. First steam is "injected for several weeks, mobilizing cold bitumen". Then the flow "on the injection well is reversed producing oil through the same injection well bore. The injection and production phases together comprise one cycle. Steam is re-injected to begin a new cycle when oil production rates fall below a critical threshold due to the cooling of the reservoir. Artificial lift method of production may be used at this stage. After a few cycles, it may not be economical to produce by the huff and puff method. Steam flooding is then considered for further oil recovery if other conditions are favorable. It has been observed that recovery from huff and puff can be achieved up to 30% and from steam flooding recovery can be up to 50%" ( CNRL 2013 ). [ 4 ] In a steam flood, sometimes known as a steam drive, some wells are used as steam injection wells and other wells are used for oil production.
Two mechanisms are at work to improve the amount of oil recovered. The first is to heat the oil to higher temperatures and thereby decrease its viscosity so that it more easily flows through the formation toward the producing wells. A second mechanism is physical displacement, in a manner similar to water flooding , in which the oil is pushed toward the production wells. While more steam is needed for this method than for the cyclic method, it is typically more effective at recovering a larger portion of the oil. A form of steam flooding popular in the Alberta oil sands is steam assisted gravity drainage (SAGD), in which two horizontal wells are drilled, one a few meters above the other, and steam is injected into the upper one, reducing the bitumen's viscosity to the point where gravity pulls it down into the producing well. In 2011 Laricina Energy combined solvent injection with steam injection in a process called solvent cyclic steam-assisted gravity drainage (SC-SAGD) ( Canadian Association of Petroleum Producers CAPP 2009 ). [ 5 ] Laricina claims that combining solvents with steam reduces the overall steam oil ratio for recovery by 30%. An alternative to surface-generated steam is downhole steam generation, which reduces heat loss and generates high-quality steam in the reservoir, allowing more heavy oil and oil sands production at a faster rate. Downhole steam generators were first proposed by the major oil companies in the early 1960s. Over the last 50 years multiple downhole steam technologies have been developed, such as the DOE and Sandia downhole combustion system known as Project Deep Steam, which was field tested in Long Beach, CA in 1982, but was a failure. The only downhole steam generator that has proved successful is branded as eSteam [ citation needed ] .
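The economic cut-off mentioned above (cycling stops once the cost of injecting steam exceeds the revenue from the oil produced) can be sketched numerically using the steam-to-oil ratio. The snippet below is only a minimal illustration of that break-even logic; the function name, oil rate, prices, steam cost and SOR values are hypothetical assumptions, not field data.

```python
# Minimal sketch of the cyclic-steam economic cut-off described above.
# All numbers are hypothetical illustrations, not field data.

def cycle_is_economic(oil_rate_bbl_day: float,
                      steam_oil_ratio: float,
                      oil_price_usd_bbl: float = 70.0,
                      steam_cost_usd_bbl: float = 8.0) -> bool:
    """Return True while a huff-and-puff cycle still pays for its steam.

    steam_oil_ratio: barrels of water (as steam) injected per barrel of oil
    produced; typical cyclic steam stimulation values are roughly 3 to 8.
    """
    revenue_per_day = oil_rate_bbl_day * oil_price_usd_bbl
    steam_cost_per_day = oil_rate_bbl_day * steam_oil_ratio * steam_cost_usd_bbl
    return revenue_per_day > steam_cost_per_day

# As the reservoir cools between cycles, the effective SOR climbs and the
# cycle eventually fails the test, at which point steam flooding may be
# considered instead.
for sor in (3, 5, 8, 10):
    print(sor, cycle_is_economic(oil_rate_bbl_day=200, steam_oil_ratio=sor))
```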
https://en.wikipedia.org/wiki/Steam_injection_(oil_industry)
Steam reforming or steam methane reforming (SMR) is a method for producing syngas ( hydrogen and carbon monoxide ) by reaction of hydrocarbons with water. Commonly, natural gas is the feedstock. The main purpose of this technology is often hydrogen production , although syngas has multiple other uses such as production of ammonia or methanol . The reaction is represented by this equilibrium: [ 1 ] $\mathrm{CH_4} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO} + 3\,\mathrm{H_2}$ The reaction is strongly endothermic ($\Delta H_{SR} = 206\ \mathrm{kJ/mol}$). Hydrogen produced by steam reforming is termed 'grey' hydrogen when the waste carbon dioxide is released to the atmosphere and 'blue' hydrogen when the carbon dioxide is (mostly) captured and stored geologically—see carbon capture and storage . Zero carbon 'green' hydrogen is produced by thermochemical water splitting , using solar thermal, low- or zero-carbon electricity or waste heat, [ 2 ] or electrolysis , using low- or zero-carbon electricity. Zero carbon emissions 'turquoise' hydrogen is produced by one-step methane pyrolysis of natural gas. [ 3 ] Steam reforming of natural gas produces most of the world's hydrogen. Hydrogen is used in the industrial synthesis of ammonia and other chemicals. [ 4 ] Steam reforming reaction kinetics, in particular using nickel - alumina catalysts, have been studied in detail since the 1950s. [ 5 ] [ 6 ] [ 7 ] The purpose of pre-reforming is to break down higher hydrocarbons such as propane , butane or naphtha into methane (CH4), which allows for more efficient reforming downstream. The name-giving reaction is the steam reforming (SR) reaction and is expressed by the equation: [ 1 ] $[1]\qquad \mathrm{CH_4} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO} + 3\,\mathrm{H_2} \qquad \Delta H_{SR} = 206\ \mathrm{kJ/mol}$ Via the water-gas shift reaction (WGSR), additional hydrogen is released by reaction of water with the carbon monoxide generated according to equation [1]: $[2]\qquad \mathrm{CO} + \mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + \mathrm{H_2} \qquad \Delta H_{WGSR} = -41\ \mathrm{kJ/mol}$ Some additional reactions occurring within steam reforming processes have been studied. [ 6 ] [ 7 ] Commonly the direct steam reforming (DSR) reaction is also included: $[3]\qquad \mathrm{CH_4} + 2\,\mathrm{H_2O} \rightleftharpoons \mathrm{CO_2} + 4\,\mathrm{H_2} \qquad \Delta H_{DSR} = 165\ \mathrm{kJ/mol}$ As these reactions by themselves are highly endothermic (apart from WGSR, which is mildly exothermic), a large amount of heat needs to be added to the reactor to keep a constant temperature. Optimal SMR reactor operating conditions lie within a temperature range of 800 °C to 900 °C at medium pressures of 20-30 bar. [ 8 ] High excess of steam is required, expressed by the (molar) steam-to-carbon (S/C) ratio. Typical S/C ratio values lie within the range 2.5:1 - 3:1. [ 8 ] The reaction is conducted in multitubular packed bed reactors, a subtype of the plug flow reactor category. These reactors consist of an array of long and narrow tubes [ 10 ] which are situated within the combustion chamber of a large industrial furnace , providing the necessary energy to keep the reactor at a constant temperature during operation.
Furnace designs vary; depending on the burner configuration, they are typically categorized as top-fired, bottom-fired, or side-fired. A notable design is the Foster-Wheeler terrace wall reformer. Inside the tubes, a mixture of steam and methane is put into contact with a nickel catalyst. [ 10 ] Catalysts with a high surface-area-to-volume ratio are preferred because of diffusion limitations due to the high operating temperature . Examples of catalyst shapes used are spoked wheels, gear wheels, and rings with holes ( see: Raschig rings ). Additionally, these shapes have a low pressure drop, which is advantageous for this application. [ 11 ] Steam reforming of natural gas is 65–75% efficient. [ 12 ] The United States produces 9–10 million tons of hydrogen per year, mostly with steam reforming of natural gas. [ 13 ] The worldwide ammonia production, using hydrogen derived from steam reforming, was 144 million tonnes in 2018. [ 14 ] The energy consumption has been reduced from 100 GJ/tonne of ammonia in 1920 to 27 GJ by 2019. [ 15 ] Globally, almost 50% of hydrogen is produced via steam reforming. [ 9 ] It is currently the least expensive method for hydrogen production available in terms of its capital cost. [ 16 ] In an effort to decarbonise hydrogen production, carbon capture and storage (CCS) methods are being implemented within the industry, which have the potential to remove up to 90% of the CO2 produced from the process. [ 16 ] Despite this, implementation of this technology remains problematic, costly, and increases the price of the produced hydrogen significantly. [ 16 ] [ 17 ] Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas . The reaction takes place in a single chamber where the methane is partially oxidized. The reaction is exothermic. When the ATR uses carbon dioxide, the H2:CO ratio produced is 1:1; when the ATR uses steam, the H2:CO ratio produced is 2.5:1. The outlet temperature of the syngas is between 950–1100 °C and the outlet pressure can be as high as 100 bar. [ 18 ] In addition to reactions [1] – [3], ATR introduces the following reaction: [ 19 ] $[4]\qquad \mathrm{CH_4} + 0.5\,\mathrm{O_2} \rightleftharpoons \mathrm{CO} + 2\,\mathrm{H_2} \qquad \Delta H_{R} = -24.5\ \mathrm{kJ/mol}$ The main difference between SMR and ATR is that SMR only uses air for combustion as a heat source to create steam, while ATR uses purified oxygen. The advantage of ATR is that the H2:CO ratio can be varied, which can be useful for producing specialty products. Due to the exothermic nature of some of the additional reactions occurring within ATR, the process can essentially be performed at a net enthalpy of zero (ΔH = 0). [ 20 ] Partial oxidation (POX) occurs when a sub-stoichiometric fuel-air mixture is partially combusted in a reformer, creating hydrogen-rich syngas. POX is typically much faster than steam reforming and requires a smaller reactor vessel. POX produces less hydrogen per unit of the input fuel than steam reforming of the same fuel. [ 21 ] The capital cost of steam reforming plants is considered prohibitive for small to medium size applications. The costs for these elaborate facilities do not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi (14–40 bar) with outlet temperatures in the range of 815 to 925 °C.
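Because the direct steam reforming reaction [3] is the sum of reactions [1] and [2], its enthalpy must equal the sum of theirs (Hess's law). The short sketch below checks that consistency and tallies the nominal hydrogen yield per mole of methane, using only the ΔH values quoted above; it is an illustrative calculation, not a process simulation.

```python
# Hess's-law consistency check on the reaction enthalpies quoted above
# (values in kJ/mol, as given in the text).
dH_SR = 206.0    # [1] CH4 + H2O   -> CO  + 3 H2
dH_WGSR = -41.0  # [2] CO  + H2O   -> CO2 +   H2
dH_DSR = 165.0   # [3] CH4 + 2 H2O -> CO2 + 4 H2

# Reaction [3] is the sum of [1] and [2], so the enthalpies must add up.
assert abs((dH_SR + dH_WGSR) - dH_DSR) < 1e-9

# Nominal hydrogen yield per mole of methane if both steps run to completion:
h2_from_SR = 3
h2_from_WGSR = 1
print("H2 per CH4 (SR + WGSR):", h2_from_SR + h2_from_WGSR)  # -> 4
```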
Flared gas and vented volatile organic compounds (VOCs) are known problems in the offshore industry and in the on-shore oil and gas industry, since both release greenhouse gases into the atmosphere. [ 22 ] Reforming for combustion engines utilizes steam reforming technology for converting waste gases into a source of energy. [ 23 ] Reforming for combustion engines is based on steam reforming, where non-methane hydrocarbons ( NMHCs ) of low quality gases are converted to synthesis gas (H2 + CO) and finally to methane (CH4), carbon dioxide (CO2) and hydrogen (H2) - thereby improving the fuel gas quality (methane number). [ 24 ] There is also interest in the development of much smaller units based on similar technology to produce hydrogen as a feedstock for fuel cells . [ 25 ] Small-scale steam reforming units to supply fuel cells are currently the subject of research and development, typically involving the reforming of methanol , but other fuels are also being considered such as propane , gasoline , autogas , diesel fuel , and ethanol . [ 26 ] [ 27 ] The reformer–fuel-cell system is still being researched, but in the near term systems would continue to run on existing fuels, such as natural gas, gasoline or diesel. However, there is an active debate about whether using these fuels to make hydrogen is beneficial while global warming is an issue. Fossil fuel reforming does not eliminate carbon dioxide release into the atmosphere but reduces the carbon dioxide emissions and nearly eliminates carbon monoxide emissions as compared to the burning of conventional fuels, due to increased efficiency and fuel cell characteristics. [ 28 ] However, by turning the release of carbon dioxide into a point source rather than distributed release, carbon capture and storage becomes a possibility, which would prevent the release of carbon dioxide to the atmosphere, while adding to the cost of the process. The cost of hydrogen production by reforming fossil fuels depends on the scale at which it is done, the capital cost of the reformer, and the efficiency of the unit, so that whilst it may cost only a few dollars per kilogram of hydrogen at an industrial scale, it could be more expensive at the smaller scale needed for fuel cells. [ 29 ] [ self-published source? ] There are several challenges associated with this technology:
https://en.wikipedia.org/wiki/Steam_reforming
A steam rupture occurs within a pressurized system of supercritical water when the pressure exceeds the design specification plus its safety margin. A steam rupture can occur in any high-temperature pressurized system, including, but not limited to: automobile cooling systems , stationary power plants, mobile power plants, steam driven tools (such as some trip hammers ), and even the delivery systems for application processes such as cleaning and fabric fullering.
https://en.wikipedia.org/wiki/Steam_rupture
A steam shovel is a large steam-powered excavating machine designed for lifting and moving material such as rock and soil . It is the earliest type of power shovel or excavator . [ citation needed ] Steam shovels played a major role in public works in the 19th and early 20th century, being key to the construction of railroads and the Panama Canal . The development of simpler, cheaper diesel , gasoline and electric shovels caused steam shovels to fall out of favor in the 1930s. Grimshaw of Boulton & Watt devised the first steam-powered excavator in 1796. [ 1 ] In 1833 William Brunton patented another steam-powered excavator which he provided further details on in 1836. [ 2 ] [ 3 ] The steam shovel was invented by William Otis , who received a patent for his design in 1839. The first machines were known as 'partial-swing', since the boom could not rotate through 360 degrees. They were built on a railway chassis , on which the boiler and movement engines were mounted. The shovel arm and driving engines were mounted at one end of the chassis, which accounts for the limited swing. Bogies with flanged wheels were fitted, and power was taken to the wheels by a chain drive to the axles. Temporary rail tracks were laid by workers where the shovel was expected to work, and repositioned as required. Steam shovels became more popular in the latter half of the nineteenth century. Originally configured with chain hoists , the advent of steel cable in the 1870s allowed for easier rigging to the winches. Later machines were supplied with caterpillar tracks , obviating the need for rails. The full-swing, 360° revolving shovel was developed in England in 1884, and became the preferred format for these machines. Expanding railway networks (in the US and the UK) fostered a demand for steam shovels. The extensive mileage of railways, and corresponding volume of material to be moved, forced the technological leap. As a result, steam shovels became commonplace. American manufacturers included the Marion Steam Shovel Company founded in 1884, the Bucyrus Company and the Erie Shovel Company, now owned by Caterpillar. The booming cities in North America used shovels to dig foundations and basements for the early skyscrapers . One hundred and two steam shovels worked in the decade-long dig of the Panama Canal across the Isthmus of Panama . Of these, seventy-seven were built by Bucyrus; [ 4 ] the remainder were Marion shovels. These machines 'moved mountains' in their labors. The shovel crews would race to see who could move the most dirt. [ 5 ] Steam shovels assisted mining operations: the iron mines of Minnesota, the copper mines of Chile and Montana, placer mines of the Klondike – all had earth-moving equipment. With the burgeoning open-pit mines – first in Bingham Canyon , Utah – shovels became prominent. The shovels removed hillsides. As a result, steam shovels were used globally from Australia to Russia to coal mines in China. Shovels were used for construction, road and quarry work. Steam shovels became widely used in the 1920s in the road-building programs in North America. Thousands of miles of State Highways were built in this era, together with factories and many docks, ports, buildings, and grain elevators. During the 1930s steam shovels were supplanted by simpler, cheaper diesel-powered excavating shovels that were the forerunners of those in use today. Open-pit mines were electrified at this time. 
Only after the Second World War , with the advent of robust high-pressure hydraulic hoses, did the more versatile hydraulic excavators take pre-eminence over the cable-hoisting winch shovels. Many steam shovels remained at work on the railways of developing nations until diesel engines supplanted them. Most have since been scrapped. Large, multi-ton mining shovels still use the cable-lift shovel arrangement. In the 1950s and 1960s, Marion Shovel built massive stripping shovels for coal operations in the Eastern US. Shovels of note were the Marion 360, the Marion 5900, and the largest shovel ever built, Marion 6360 The Captain – with a 180-cubic-yard (140 m 3 ) bucket – while Bucyrus constructed one of the most famous monsters: the Big Brutus , the largest still in existence. The GEM of Egypt (GEM standing for "Giant Excavating Machine" and Egypt referring to the Egypt Valley in Belmont County , eastern Ohio where it was first employed), which operated from 1967 to 1988, was of comparable size. It has since been dismantled. [ 6 ] Although these big machines are still called steam shovels , they are more correctly known as power shovels since they use electricity to power their winches. A steam shovel consists of: The shovel has several individual operations: it can raise or luff the boom, extend the dipper stick with the boom or crowd engine, and raise or lower the dipper stick. Some shovels can rotate the platform on which the bulk of the machine is mounted on a turntable above its truck, similar to a modern excavator, while others, particularly those with longer bodies, have a turntable at the base of the boom, and rotate the boom. When digging at a rock face, the operator simultaneously raises and extends the dipper stick to fill the bucket with material. When the bucket is full, the shovel is rotated to load a railway car or motor truck. The locking pin on the bucket flap is released and the load drops away. The operator lowers the dipper stick, the bucket mouth self-closes, the pin relocks automatically and the process repeats. Steam shovels usually had at least a three-man crew: engineer, fireman and ground man. There was much jockeying to do to move shovels: rails and timber blocks to move; cables and block purchases to attach; chains and slings to rig; and so on. On soft ground, shovels used timber mats to help steady and level the ground. The early models were not self-propelled, rather they would use the boom to manoeuvre themselves. North American manufacturers: European manufacturers: Most steam shovels have been scrapped, although a few reside in industrial museums and private collections. The world's largest intact steam shovel is a Marion machine, dating from either 1906 or 1911, located in Le Roy, New York . It was listed on the National Register of Historic Places in 2008. [ 7 ] Dating from 1909, this machine – Ruston's called it a 'crane navvy' [ 8 ] – is the oldest surviving steam navvy in the world. [ 9 ] It was originally used at a chalk pit at Arlesey , in Bedfordshire , England. After the pit was closed, the steam navvy was simply abandoned and 'lost' as the pit became flooded with water. By the mid-1970s, the area had become a local beauty spot, known as The Blue Lagoon (from chemicals from the quarry colouring the water), and after long periods of drought, the top of the rusty navvy could be seen protruding from the water. Ruston & Hornsby expert Ray Hooley heard of its existence, and organised the difficult task of rescuing it from the water-filled pit. 
[ 10 ] Hooley arranged for its complete restoration to working order by apprentices at the Ruston-Bucyrus works. Subsequently it passed into the care of the Museum of Lincolnshire Life . [ 11 ] The museum was unable to make full use of the machine, and, not being stored under cover, its condition deteriorated. In 2011, Ray Hooley donated the machine to the Vintage Excavator Trust at Threlkeld Quarry and Mining Museum in Cumbria. It was moved to the quarry in 2011, [ 9 ] and (as of 2013) full restoration is once again under way. [ 12 ] Twenty-five Bucyrus Model 50-B steam shovels were sent to the Panama Canal to build bridges, roads, and drains and remove the huge quantities of soil and rock cut from the canal bed. All the shovels but one were scrapped at Panama. The survivor was shipped back to California and then brought to Denver. In the early 1950s, it was transported to Rollinsville by Roy and Russell Durand, who operated it at the Lump Gulch Placer, six miles south of Nederland, Colorado , until 1978. This steam shovel is one of two (the other at the Western Minnesota Steam Thresher's Reunion in Rollag, MN) remaining operational Bucyrus Model 50-Bs, [ 13 ] and is preserved at the Nederland Mining Museum . Roots of Motive Power in Willits, CA has also acquired a 50-B and operates it for the public once a year at their Steam Festival in early September. Two shovels sit abandoned in Zamora, California , north of Sacramento beside I 5.
https://en.wikipedia.org/wiki/Steam_shovel
Steam stripping is a process used in petroleum refineries and petrochemical plants to remove volatile contaminants, such as hydrocarbons and other volatile organic compounds (VOCs), from wastewater . [ 1 ] [ 2 ] It typically consists of passing a stream of superheated steam through the wastewater. This method is effective when the volatile compounds have lower boiling points than water or have limited solubility in water.
https://en.wikipedia.org/wiki/Steam_stripping
The steam to oil ratio (SOR) is a measure of the water and energy consumption related to oil production in cyclic steam stimulation and steam assisted gravity drainage. SOR is the ratio of the amount of steam injected to the amount of oil produced. Typical values are three to eight for cyclic steam stimulation and two to five for steam assisted gravity drainage. This means that two to eight barrels of water, converted into steam, are used to produce one barrel of oil .
https://en.wikipedia.org/wiki/Steam_to_oil_ratio
A steamroller (or steam roller ) is a form of road roller – a type of heavy construction machinery used for leveling surfaces, such as roads or airfields – that is powered by a steam engine . The leveling/flattening action is achieved through a combination of the size and weight of the vehicle and the rolls : the smooth wheels and the large cylinder or drum fitted in place of treaded road wheels. The majority of steam rollers are outwardly similar to traction engines as many traction engine manufacturers later produced rollers based on their existing designs, and the patents owned by certain roller manufacturers tended to influence the general arrangements used by others. The key difference between the two vehicles is that on a roller the main roll replaces the front wheels and axle that would be fitted to a traction engine, and the driving wheels are smooth-tired. The word steamroller frequently refers to road rollers in general, regardless of the method of propulsion. [ 1 ] Before about 1850, the word steamroller meant a fixed machine for rolling and curving steel plates for boilers and ships. From then on, it also meant a mobile device for flattening ground. [ 2 ] An early steamroller was patented by Louis Lemoine in France in 1859 and demonstrated sometime before February 1861. [ 3 ] In Britain, a 30-ton steamroller was designed in 1863 by William Clark and partner W.F. Batho. [ 4 ] [ 5 ] Having failed to impress the British municipal road authorities it was transferred to Kolkata where it continued to work. [ 5 ] The company Aveling and Porter was the first to successfully sell the product commercially and subsequently became the largest manufacturer in Britain. [ 4 ] In 1866 they produced a prototype roller with 3-foot wide (90 cm) rollers fitted to the rear of a standard 12 nominal-horsepower - traction engine . This experimental machine was described by local papers as 'the world's first steamroller' and it caused a public spectacle. In 1867, the steam road roller was patented and the company began production of the first practical steam roller – the new machine's rollers were mounted at the front instead of the back and it weighed in excess of 30 tons. It was tested on the Military Road in Chatham , Star Hill in Rochester and in Hyde Park , London and the machine proved a huge success. Within a year, they were being exported around the world, including to France, India and the United States. A New York City chief engineer said of one of these, that "in one day's rolling at a cost of 10 dollars, as much work was accomplished as in two days' rolling with a 7 ton roller drawn by eight horses at a cost of 20 dollars a day." [ 6 ] The heavier rollers were found to be hard to handle and the weight of the machines was reduced to around 10 tons. [ 4 ] Aveling and Porter refined their product continuously over the following decades, introducing fully steerable front rollers and compound steam engines at the 1881 Royal Agricultural Show . The move to asphalt for road construction resulted in the demand for steamrollers that could rapidly reverse so they could roll the tar while still hot. [ 7 ] Machines that could do this were introduced in the first decade of the 20th century. [ 7 ] Production ended around 1950. 
[ 8 ] The majority of rollers were of the same basic 3-roll configuration, gear-driven, with two large smooth wheels (rolls) at the back and a single wide roll at the front (in actuality, the wide roll usually consisted of two narrower rolls on the same axle, to make steering easier). However, there was also a distinctive variant, the "tandem", which had two wide rolls, one front, one rear. Those made by Robey & Co used their standard steam wagon engine and pistol boiler fitted in a girder frame with rolls and a chain drive to produce a quick-reversing roller suitable for modern road surfaces such as tarmacadam and bituminous asphalt . [ 9 ] A number of Robey & Co. tandem rollers were modified to make a further variant, the tri-tandem, which was a tandem with a third roll, mounted directly behind the rear one. Robey supplied the parts, but the modification was undertaken by Goodes of Royston. [ 9 ] Ten tandem and two tri-tandem Robey rollers survive in preservation, [ 10 ] and one of the tri-tandems is known to have been used to construct parts of the M1 motorway . A variation of the basic configuration was the "convertible": an engine which could be either a steam roller or a traction engine and could be changed from one form to the other in a relatively short time – i.e. , less than half a day. Convertible engines were liked by local authorities, since the same machine could be used for haulage in the winter and road-mending in the summer. Although most steam roller designs are derived from traction engines, and were manufactured by the same companies, there are a number of features that set them apart. The most obvious difference is in the wheels. Traction engines were generally built with large fabricated spoked steel wheels with wide rims. Those intended for road use would have continuous solid rubber tyres bolted around the rims, to improve traction on tarmac. Engines intended for agricultural use would have a series of strakes bolted diagonally across the rims, like the tread on a modern pneumatic tractor tyre, and the wheels were typically wider to spread the load more evenly. Steam rollers, on the other hand, had smooth rear wheels and a roller at the front. The roller consisted of a pair of adjacent wide cylinders supported at both ends. This replaced the separate wheels and axle of a traction engine. In the conventional arrangement, the front roller is mounted centrally, forward of the chimney. In order to allow enough clearance from the boiler (and hence a larger front roll), the smokebox is extended forward substantially at the top to incorporate a support plate on which to mount the bearing for the roller assembly. This gives the distinctive, hooded look to the front of a steam roller. It also necessitates a different design of smokebox door – it has to hinge up or down, rather than opening sideways, due to the limited access available. Access to the boiler tubes for cleaning is limited and the brush usually has to be inserted through the small gap between the top of the roll and the fork. The front and rear rolls were usually fitted with scraper bars . As the vehicle moved along, these removed any surface material that had become stuck to the roll, to prevent a build-up of material and ensure a flat finish was maintained. Some steam rollers were fitted with a scarifier mounted on the tender box at the rear. They could be swung down to road level and used to rip up the old surface before a road was remade. 
Another accessory was a tar sprayer – a bar mounted on the back of the roller. This was not a common fixture. Britain was a major exporter of steam rollers over the years, with the firm of Aveling and Porter probably being the most famous and the most prolific. Many other traction engine manufacturers built steam rollers, but after Aveling and Porter, the most popular were Marshall, Sons & Co. , John Fowler & Co. , and Wallis & Steevens . In America, the Buffalo-Springfield Roller Company [ d ] was a large builder. J. I. Case made a roller variant of their farm engines, but had a small market share. Other nations had makers including the Czechs, Swiss, Swedes, Germans (notably Kemna ) and Dutch which produced steam rollers. In the UK, a number of companies owned fleets of steam rollers and contracted them out to local authorities. Many were still in use into the 1960s, and part of the M1 motorway was made using steam rollers. [ 11 ] A few steam rollers were being used for road maintenance in the early 1970s, and this may go some way to explaining why diesel-powered rollers are still colloquially known as steam rollers today. Many steam rollers are preserved in working order, and can be seen in operation during special live steam festivals, where operating scale models may also be displayed. At some of the UK steam fairs and rallies , demonstrations of road building using the old techniques, tools and machines are re-enacted by 'Road Gangs' in authentic dress. Steam rollers feature prominently in these demonstrations. The annual Great Dorset Steam Fair has a section dedicated to road-making machinery, including a line-up of working steam rollers. A number of steamrollers ended their working lives in children's playgrounds to provide something for children to play on. [ 12 ] [ 13 ] Two popular American bands were named after steamrollers, Buffalo Springfield and Mannheim Steamroller . Parni Valjak (trans. Steamroller ) is the name of the popular Croatian and Yugoslav rock band, and the group has used the name Steam Roller on their English language releases. [ 14 ] Two different steamrollers appear as prominent characters in the Thomas & Friends television series; George and Buster, both of whom are based on the Aveling-Barford R class design. British steeplejack and engineering enthusiast Fred Dibnah was known as a national institution in Great Britain for the conservation of steam rollers and traction engines. The first engine he restored to working order was an Aveling & Porter steam roller, registration no. DM3079. Built in 1912, it was a 10-ton slide-valve, single-cylinder, 4-shaft, road roller. [ 15 ] Originally named "Allison", after his first wife, Fred renamed the engine "Betsy" (his mother's name) following his divorce – Fred's view being "wives may change but your mother remains your mother!" This roller was featured in many of Fred's early television programmes. It may still be seen at steam rallies in Britain and was in steam at the Great Dorset Steam Fair in 2011. Author Terry Pratchett instructed his collaborator Neil Gaiman that anything Pratchett had been working on at the time of his death should be destroyed by a steamroller. Pratchett's daughter and literary executor Rhianna Pratchett also stated that she had no desire to try to finish her father's work or continue the Discworld franchise without him. Accordingly, Pratchett's assistant Rob Wilkins brought Pratchett's computer hard drive to the Great Dorset Steam Fair , where a steamroller was driven over it. 
[ 16 ] The steamroller has a strong symbolism of an irresistible, onward-pushing force. The Imperial Russian Army was nicknamed "steamroller" during World War One , as it was huge in size, and Russia initiated the war with an offensive. The "Russian Steamroller" is one of the personifications of Russia , along with the Russian bear , double-headed eagle and Mat Zemlya .
https://en.wikipedia.org/wiki/Steamroller
A steam–electric power station is a power station in which the electric generator is steam -driven: water is heated, evaporates , and spins a steam turbine which drives an electric generator. After it passes through the turbine, the steam is condensed in a condenser . The greatest variation in the design of steam–electric power plants is due to the different fuel sources. Almost all coal , nuclear , geothermal , solar thermal electric power plants, waste incineration plants as well as many natural gas power plants are steam–electric. Natural gas is frequently combusted in gas turbines as well as boilers . The waste heat from a gas turbine can be used to raise steam, in a combined cycle plant that improves overall efficiency. Worldwide, most electric power is produced by steam–electric power plants. [ 1 ] The only widely used alternatives are photovoltaics , direct mechanical power conversion as found in hydroelectric and wind turbine power as well as some more exotic applications like tidal power or wave power and finally some forms of geothermal power plants. [ 2 ] Niche applications for methods like betavoltaics or chemical power conversion (including electrochemistry ) are only of relevance in batteries and atomic batteries . Fuel cells are a proposed alternative for a future hydrogen economy . Reciprocating steam engines have been used for mechanical power sources since the 18th Century, with notable improvements being made by James Watt . The very first commercial central electrical generating stations in New York and London, in 1882, also used reciprocating steam engines. As generator sizes increased, eventually turbines took over due to higher efficiency and lower cost of construction. By the 1920s any central station larger than a few thousand kilowatts would use a turbine prime mover. The efficiency of a conventional steam–electric power plant, defined as energy produced by the plant divided by the heating value of the fuel consumed by it, is typically 33 to 48%, limited as all heat engines are by the laws of thermodynamics (See: Carnot cycle ). The rest of the energy must leave the plant in the form of heat. This waste heat can be removed by cooling water or in cooling towers . ( cogeneration uses the waste heat for district heating ). An important class of steam power plants is associated with desalination facilities, which are typically found in desert countries with large supplies of natural gas . In these plants freshwater and electricity are equally important products. Since the efficiency of the plant is fundamentally limited by the ratio of the absolute temperatures of the steam at turbine input and output, efficiency improvements require use of higher temperature, and therefore higher pressure, steam. Historically, other working fluids such as mercury have been experimentally used in a mercury vapour turbine power plant, since these can attain higher temperatures than water at lower working pressures. However, poor heat transfer properties and the obvious hazard of toxicity have ruled out mercury as a working fluid. Another option is using a supercritical fluid as a working fluid. Supercritical fluids behave similar to gases in some respects and similar to liquids in others. Supercritical water or supercritical carbon dioxide can be heated to much higher temperatures than are achieved in conventional steam cycles thus allowing for higher thermal efficiency . 
However, these substances need to be kept at high pressures (above the critical pressure ) to maintain supercriticality, and there are issues with corrosion. [ 3 ] [ 4 ] Steam–electric power plants use a surface condenser cooled by water circulating through tubes. The steam which was used to turn the turbine is exhausted into the condenser and is condensed as it comes in contact with the tubes full of cool circulating water. The condensed steam, commonly referred to as condensate , is withdrawn from the bottom of the condenser. [ 5 ] [ 6 ] [ 7 ] [ 8 ] For best efficiency, the temperature in the condenser must be kept as low as practical in order to achieve the lowest possible pressure in the condensing steam. Since the condenser temperature can almost always be kept significantly below 100 °C, where the vapor pressure of water is much less than atmospheric pressure, the condenser generally works under vacuum . Thus leaks of non-condensable air into the closed loop must be prevented. Plants operating in hot climates may have to reduce output if their source of condenser cooling water becomes warmer; unfortunately this usually coincides with periods of high electrical demand for air conditioning . If a good source of cooling water is not available, cooling towers may be used to reject waste heat to the atmosphere. A large river or lake can also be used as a heat sink for cooling the condensers; temperature rises in naturally occurring waters may have undesirable ecological effects, but may also incidentally improve yields of fish in some circumstances. [ citation needed ] In the case of a conventional steam–electric power plant using a drum boiler , the surface condenser removes the latent heat of vaporization from the steam as it changes states from vapor to liquid. The condensate pump then pumps the condensate water through a feedwater heater , which raises the temperature of the water by using extraction steam from various stages of the turbine. [ 5 ] [ 6 ] Preheating the feedwater reduces the irreversibilities involved in steam generation and therefore improves the thermodynamic efficiency of the system. [ 9 ] This reduces plant operating costs and also helps to avoid thermal shock to the boiler metal when the feedwater is introduced back into the steam cycle. Once this water is inside the boiler or steam generator , the process of adding the latent heat of vaporization begins. The boiler transfers energy to the water by the chemical reaction of burning some type of fuel. The water enters the boiler through a section in the convection pass called the economizer . From the economizer, it passes to the steam drum, from where it goes down the downcomers to the lower inlet water wall headers. From the inlet headers, the water rises through the waterwalls. Some of it is turned into steam due to the heat being generated by the burners located on the front and rear waterwalls (typically). From the waterwalls, the water/steam mixture enters the steam drum and passes through a series of steam and water separators and then dryers inside the steam drum . The steam separators and dryers remove water droplets from the steam, since liquid water carried over into the turbine can produce destructive erosion of the turbine blades. The separated water returns down the downcomers and the cycle through the waterwalls is repeated. This process is known as natural circulation . Geothermal plants need no boiler since they use naturally occurring steam sources.
Heat exchangers may be used where the geothermal steam is very corrosive or contains excessive suspended solids. Nuclear plants also boil water to raise steam, either directly passing the working steam through the reactor or else using an intermediate heat exchanger. After the steam is conditioned by the drying equipment inside the drum, it is piped from the upper drum area into an elaborate set up of tubing in different areas of the boiler, the areas known as superheater and reheater. The steam vapor picks up energy and is superheated above the saturation temperature. The superheated steam is then piped through the main steam lines to the valves of the high-pressure turbine.
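As noted above, efficiency is fundamentally limited by the ratio of the absolute steam temperatures at turbine inlet and condenser, i.e. the Carnot bound. The sketch below evaluates that bound for illustrative temperatures; the specific figures are assumptions chosen for the example, not data from any particular plant.

```python
# Carnot upper bound on thermal efficiency, eta = 1 - T_cold / T_hot,
# with temperatures in kelvin. Illustrative values only.

def carnot_efficiency(t_hot_c: float, t_cold_c: float) -> float:
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return 1.0 - t_cold_k / t_hot_k

# e.g. superheated steam at ~540 C and a condenser held at ~35 C under vacuum
print(f"{carnot_efficiency(540, 35):.2%}")   # roughly 62% ideal bound
# Real steam-electric plants reach roughly 33-48%, well below this limit,
# because of irreversibilities the Carnot analysis ignores.
```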
https://en.wikipedia.org/wiki/Steam–electric_power_station
Stearoyl-CoA is a coenzyme involved in the metabolism of fatty acids . [ 1 ] Stearoyl-CoA is an 18-carbon long fatty acyl-CoA chain that participates in an unsaturation reaction. The reaction is catalyzed by the enzyme stearoyl-CoA desaturase , which is located in the endoplasmic reticulum . [ 2 ] It forms a cis-double bond between the ninth and tenth carbons within the chain to form the product oleoyl-CoA. [ 3 ]
https://en.wikipedia.org/wiki/Stearoyl-CoA
Stearoylethanolamide ( SEA ) is an endocannabinoid neurotransmitter . [ 1 ] Stearoylethanolamide (C20H41NO2; 18:0), also called N-(octadecanoyl)ethanolamine, is an N-acylethanolamine and the ethanolamide of octadecanoic acid (C18H36O2; 18:0) and ethanolamine (MEA: C2H7NO), and is functionally related to octadecanoic acid. [ 2 ] Levels of SEA correlate with changes in pain intensity, indicating that changes in SEA may reflect the pain-reduction effects of IPRP. [ 3 ]
https://en.wikipedia.org/wiki/Stearoylethanolamide
Stears is a market intelligence company for investing in Africa, with headquarters in Lagos, Abuja, and London. [ 1 ] Initially established as a media publication called Stears Business , the company was founded in 2017 by Preston Ideh, Abdul Abdulrahim, Foluso Ogunlana, and Michael Famoroti, who met at the London School of Economics and Imperial College in the United Kingdom . [ 2 ] Stears has become a provider of subscription-based data collection tools and analysis services for investing in Africa. [ 3 ] [ 4 ] Stears has provided bespoke content around specific issues such as market entry, country analysis, and the digital economy for international organisations such as the United Nations Development Programme and the Foreign Commonwealth and Development Office , as well as for knowledge workers. [ 5 ] This provides data for the work of analysts, portfolio managers, researchers, and economists. In 2022, Stears raised $3.3 million in funding from MaC Venture Capital, Serena Ventures (the investment firm of retired tennis star Serena Williams ), [ 6 ] Melo 7 Tech Partners, Omidyar Group's Laminate Fund and Cascador. [ 7 ]
https://en.wikipedia.org/wiki/Stears_(company)
Steatocrit or acid steatocrit is a simple, rapid, inexpensive, and reliable gravimetric method to determine steatorrhea . It is a qualitative test that can be used when other methods are impractical. An elevated steatocrit is indicative of fat malabsorption resulting in steatorrhea . This generally results from pancreatic exocrine insufficiency but can also occur with severe small bowel disease (e.g. celiac disease ), liver diseases such as Primary Biliary Cirrhosis , or medications that inhibit fat absorption such as orlistat .
https://en.wikipedia.org/wiki/Steatocrit
Steatosis , also called fatty change , is abnormal retention of fat ( lipids ) within a cell or organ. [ 1 ] Steatosis most often affects the liver – the primary organ of lipid metabolism – where the condition is commonly referred to as fatty liver disease . Steatosis can also occur in other organs, including the kidneys, heart, and muscle. [ 2 ] When the term is not further specified (as, for example, in 'cardiac steatosis'), it is assumed to refer to the liver. [ 3 ] Risk factors associated with steatosis are varied, and may include diabetes mellitus , protein malnutrition , hypertension, [ 4 ] cell toxins , obesity, [ 5 ] anoxia , [ 2 ] and sleep apnea . [ 6 ] Steatosis reflects an impairment of the normal processes of synthesis and elimination of triglyceride fat. Excess lipid accumulates in vesicles that displace the cytoplasm . When the vesicles are large enough to distort the nucleus , the condition is known as macrovesicular steatosis; otherwise, the condition is known as microvesicular steatosis. While not particularly detrimental to the cell in mild cases, large accumulations can disrupt cell constituents, and in severe cases the cell may even burst. No single mechanism leading to steatosis exists; rather, a varied multitude of pathologies disrupt normal lipid movement through the cell and cause accumulation. [ 7 ] These mechanisms can be separated based on whether they ultimately cause an oversupply of lipid which can not be removed quickly enough (i.e., too much in), or whether they cause a failure in lipid breakdown (i.e., not enough used). [ citation needed ] Failure of lipid metabolism can also lead to the mechanisms which would normally utilise or remove lipids becoming impaired, resulting in the accumulation of unused lipids in the cell. Certain toxins, such as alcohols, carbon tetrachloride , aspirin , and diphtheria toxin , interfere with cellular machinery involved in lipid metabolism. In those with Gaucher's disease , the lysosomes fail to degrade lipids and steatosis arises from the accumulation of glycolipids . Protein malnutrition, such as that seen in kwashiorkor , results in a lack of precursor apoproteins within the cell, therefore unused lipids which would normally participate in lipoprotein synthesis begin to accumulate. [ citation needed ] Macrovesicular steatosis is the more common form of fatty degeneration and may be caused by oversupply of lipids due to obesity, obstructive sleep apnea (OSA), [ 8 ] insulin resistance, or alcoholism . Nutrient malnutrition may also cause the mobilisation of fat from adipocytes and create a local oversupply in the liver where lipid metabolism occurs. Excess alcohol over a long period of time can induce steatosis. The breakdown of large amounts of ethanol in alcoholic drinks produces large amounts of chemical energy in the form of NADH , signalling to the cell to inhibit the breakdown of fatty acids (which also produces energy) and simultaneously increase the synthesis of fatty acids . This "false sense of energy" results in more lipid being created than is needed. [ citation needed ] Microvesicular steatosis is characterized by small intracytoplasmic fat vacuoles (liposomes) which accumulate within hepatocytes. 
[ 9 ] It is associated with a wide variety of conditions, including alcoholism , drug toxicity of several medications, delta hepatitis (in South America and Central Africa), sudden childhood death, congenital defects of fatty acid beta oxidation , cholesterol ester storage disease , Wolman disease and Alpers syndrome . [ 10 ] Histologically , steatosis is physically apparent as lipid within membrane bound liposomes of parenchymal cells. [ 2 ] When this tissue is fixed and stained to be better viewed under a microscope, the lipid is usually dissolved by the solvents used to prepare the sample. As such, samples prepared this way will appear to have empty holes (or vacuoles) within the cells where the lipid has been cleared. Special lipid stains, such as Sudan stains and osmium tetroxide , are able to retain and show up lipid droplets, hence indicating the presence of lipids more conclusively. Other intracellular accumulations, such as water or glycogen , can also appear as clear vacuoles, therefore it becomes necessary to use stains to better determine what substance is accumulating. [ citation needed ] Grossly, steatosis causes organ enlargement and lightening in colour. [ 2 ] This is due to the high lipid content increasing the organ's volume and becoming visible to the unaided eye. In severe cases, the organ may become vastly enlarged, greasy, and yellow in appearance. [ citation needed ] On X-ray computed tomography (CT), the increased fat component will decrease the density of the liver tissue, making the image less bright. Typically the densities of the spleen and liver are roughly equivalent. In steatosis, there is a difference between the density and brightness of the two organs, with the liver appearing darker. [ 12 ] On ultrasound, fat is more echogenic (capable of reflecting sound waves). The combination of liver steatosis being dark on CT and bright on ultrasound is sometimes known as the flip flop sign. [ citation needed ] On magnetic resonance imaging , multiecho gradient echo images can be used to determine the percent fat fraction of the liver. [ 13 ] The different resonance frequencies of water and fat make this technique very sensitive and accurate. Acquiring echoes under "in phase" and "out of phase" conditions (pertaining to the relative phases of the fat and water proton contingents) makes it possible to obtain a signal proportional to the sum of the water and fat contingents, or a signal proportional to the water minus the fat contingent. These signal intensities are then algebraically combined into a percent fat. More recent techniques take into account experimental noise, signal decay and the spectroscopic properties of fat. Numerous validation studies have demonstrated excellent correlations between the steatosis level quantified at MRI and the steatosis levels semi-quantitatively and quantitatively determined on liver biopsies (reference methods). [ citation needed ] Several MRI vendors offer automated calculation of percent fat with acquisition sequences no longer than a single breath hold. [ citation needed ] On abdominal ultrasonography , steatosis is seen as a hyperechoic liver as compared to the normal kidney.
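The in-phase / out-of-phase combination described above can be written out directly: if the in-phase signal is approximately water plus fat and the out-of-phase signal is water minus fat, the fat signal fraction is (IP - OP) / (2 * IP). The snippet below is a simplified sketch of that two-point calculation; it ignores the noise, signal decay and multi-peak fat spectrum that the more recent techniques account for, and the voxel values are invented for illustration.

```python
import numpy as np

def fat_fraction(in_phase: np.ndarray, out_phase: np.ndarray) -> np.ndarray:
    """Simplified two-point estimate of the fat signal fraction.

    Assumes in_phase ~ water + fat and out_phase ~ water - fat, ignoring
    noise, T2* decay and the multi-peak fat spectrum handled by modern
    sequences.
    """
    fat = (in_phase - out_phase) / 2.0
    return np.clip(fat / np.maximum(in_phase, 1e-9), 0.0, 1.0)

# Hypothetical per-voxel signal intensities (arbitrary units)
ip = np.array([100.0, 100.0, 100.0])
op = np.array([100.0, 60.0, 20.0])
print(fat_fraction(ip, op))  # -> [0.0, 0.2, 0.4], i.e. 0%, 20%, 40% fat
```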
In Bristol University's study Children of the 90s, 2.5% of 4,000 people born in 1991 and 1992 were found by ultrasound scanning at the age of 18 to have non-alcoholic fatty liver disease; five years later, transient elastography (FibroScan) found over 20% to have the fatty deposits of steatosis on the liver, indicating non-alcoholic fatty liver disease; half of those were classified as severe. The scans also found that 2.4% had the liver scarring of fibrosis, which can lead to cirrhosis. [ 14 ]
https://en.wikipedia.org/wiki/Steatosis
In mathematics – more specifically, in functional analysis and numerical analysis – Stechkin's lemma is a result about the ℓ q norm of the tail of a sequence, when the whole sequence is known to have finite ℓ p norm. Here, the term "tail" means those terms in the sequence that are not among the N largest terms, for an arbitrary natural number N. Stechkin's lemma is often useful when analysing best-N-term approximations to functions in a given basis of a function space. The result was originally proved by Stechkin in the case q = 2.

Let 0 < p < q < ∞ and let I be a countable index set. Let (a_i)_{i ∈ I} be any sequence indexed by I, and for N ∈ ℕ let I_N ⊂ I be the indices of the N largest terms of the sequence (a_i)_{i ∈ I} in absolute value. Then

( Σ_{i ∈ I \ I_N} |a_i|^q )^{1/q} ≤ C_{p,q} N^{−r} ( Σ_{i ∈ I} |a_i|^p )^{1/p}, where r = 1/p − 1/q

and C_{p,q} > 0 is a constant depending only on p and q. Thus, Stechkin's lemma controls the ℓ q norm of the tail of the sequence (a_i)_{i ∈ I} (and hence the ℓ q norm of the difference between the sequence and its approximation using its N largest terms) in terms of the ℓ p norm of the full sequence and a rate of decay in N.

For the proof, we assume without loss of generality that the sequence (a_i)_{i ∈ I} is sorted so that |a_{i+1}| ≤ |a_i| for all i, and we set I = ℕ for notational convenience. The argument first reformulates the statement of the lemma, then observes an elementary bound for each dyadic block of indices indexed by d ∈ ℕ, uses this bound to estimate the blocks of the tail separately, and finally combines these estimates by ℓ p norm equivalence; putting all these ingredients together completes the proof.
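A short, self-contained derivation of the inequality in the form stated above can be given directly from the sorting assumption; this is a minimal sketch that yields the constant C_{p,q} = 1, and it is not the dyadic-block argument outlined in the proof above.

```latex
% Elementary derivation of Stechkin's lemma with constant 1 (sketch).
% Assume |a_1| \ge |a_2| \ge \dots and 0 < p < q < \infty.
\begin{align*}
  |a_N|^p &\le \frac{1}{N}\sum_{i=1}^{N} |a_i|^p
            \le \frac{1}{N}\,\|a\|_{\ell^p}^p
  &&\Longrightarrow\quad |a_N| \le N^{-1/p}\,\|a\|_{\ell^p},\\[4pt]
  \sum_{i>N} |a_i|^q &= \sum_{i>N} |a_i|^{\,q-p}\,|a_i|^{p}
            \le |a_N|^{\,q-p} \sum_{i>N} |a_i|^{p}
            \le N^{-(q-p)/p}\,\|a\|_{\ell^p}^{\,q-p}\,\|a\|_{\ell^p}^{p},\\[4pt]
  \Big(\sum_{i>N} |a_i|^q\Big)^{1/q}
           &\le N^{-\left(\frac{1}{p}-\frac{1}{q}\right)}\,\|a\|_{\ell^p}
            = N^{-r}\,\|a\|_{\ell^p},
  \qquad r = \tfrac{1}{p}-\tfrac{1}{q}.
\end{align*}
```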
https://en.wikipedia.org/wiki/Stechkin's_lemma
A metallic bridge is a bridge with a structure made of metal, typically iron , cast iron , or steel . The first metallic bridge was constructed from cast iron in England. Known as the Iron Bridge, it was built in 1779 by Abraham Darby III over the River Severn at Coalbrookdale . The bridge has a span of 30.5 metres (100 ft) and a total length of 60 metres (200 ft), standing 30 metres (98 ft) above the river. [ 1 ] In France, the first metallic bridge was the Pont des Arts in Paris, constructed in 1803 by Louis-Alexandre de Cessart and Jacques Dillon. The pinnacle of cast iron bridges was reached with the Pont du Carrousel , built in Paris in 1834 by Antoine-Rémy Polonceau. [ 2 ] Suspension bridges made of iron began to develop in the United States in 1810. [ 3 ] The widespread use of metallic bridges grew with advancements in steel production techniques, coinciding with the expansion of railway networks. This golden age of metallic bridges continued until World War I , despite the emergence of reinforced concrete in France by 1898. [ 4 ] The steels used in bridge construction are low-alloy iron-carbon alloys . For aesthetic or safety reasons, other steel types, such as Corten steel or stainless steel , may be used. [ 5 ] For safety, steel in bridges is designed to operate well below its yield strength. Material fatigue limits stresses to approximately half the yield strength, around 120 megapascals (17,000 psi) for mild steel and 180 megapascals (26,000 psi) for high-strength steel. Fatigue strength is a critical factor in structural calculations. [ 5 ] Other factors, such as temperature, stress corrosion cracking , and performance in saline environments, also influence material selection. [ 6 ] Steel profiles used in bridges include: Common profiles include angle iron , U-shaped beams, and T-beams. [ 7 ] Steel assembly methods include bolting, riveting, and welding. [ 2 ] Bolts and rivets secure components through clamping force. Bolts, installed cold, are used for temporary assemblies or in cases where rivets are unsuitable. A bolt consists of a forged head, a threaded shank, and a movable nut screwed onto the threaded portion. [ 2 ] Rivets, installed hot, were historically the primary assembly method in structural steelwork . A rivet has a factory-made head and a shank; the second head is formed by forging the protruding shank while hot, creating a strong clamping force upon cooling. [ 7 ] Welding joins steel by melting and fusing components using coated steel rods (electrodes) that melt under the high temperature of an electric arc . Modern metallic bridges are typically welded, with rivets largely obsolete. Bolts remain in use for emergency bridges, which are assembled rapidly from prefabricated parts. [ 6 ] Metallic beams typically have an I-shaped profile, though U-shaped or box-section profiles are used when height is limited. [ 5 ] Solid web beams consist of one or more vertical webs and horizontal flanges (or wings ) on either side. These beams can be hot-rolled ( I-beams for smaller sizes) or assembled cold from flat plates through welding (welded reconstituted beams, or PRSs) or, historically, riveting with angle irons. [ 2 ] Flanges, with or without angle irons, form the beam’s chords in welded or rolled structures. [ 2 ] Truss beams, or triangulated beams, consist of chords connected not by a web but by vertical or inclined bars forming a triangulated framework. The arrangement of bars varies depending on the triangulation system used. 
[ 6 ] Several truss systems are in common use. Riveted connections were standard before welding became prevalent. Both straight beam and truss bridges used rivets. A typical truss connection includes vertical and horizontal members made of angle irons and plates riveted together, with inclined members using U-shaped beams. Cover plates, or gussets, are added at joints to enhance rigidity. [ 7 ] Modern metallic connections typically involve welding, as seen in solid web beams. A transverse beam, or cross-girder, is welded to a longitudinal beam, or stringer. Vertical stiffeners, often terminating in a gusset, reinforce the assembly. [ 6 ] Depending on the beam structure, these include single box girder bridges (with voussoirs), twin-girder bridges, ribbed bridges, lenticular bridges, and truss bridges. [ 3 ] In a suspension bridge, the beam is called the stiffening girder, typically made of a metallic truss. [ 3 ] Three parameters define a suspension bridge. For small to medium spans, the relationship between span L and sag f is generally f = L/9. [ 8 ]
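As a quick illustration of the span-to-sag rule of thumb quoted above (f = L/9), the following sketch computes the sag for a few hypothetical spans; the function name and the example spans are illustrative only, and real designs fix the sag from detailed structural analysis.

```python
def suspension_sag(span_m: float, ratio: float = 9.0) -> float:
    """Rule-of-thumb sag f = L / 9 for small to medium suspension spans."""
    return span_m / ratio

for span in (90.0, 180.0, 270.0):  # hypothetical spans in metres
    print(f"span {span:6.1f} m  ->  sag {suspension_sag(span):5.1f} m")
```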
https://en.wikipedia.org/wiki/Steel_bridge
A steel building is a metal structure fabricated with steel for the internal support and for exterior cladding, as opposed to steel framed buildings which generally use other materials for floors, walls, and external envelope. Steel buildings are used for a variety of purposes including storage, work spaces and living accommodation. They are classified into specific types depending on how they are used. Steel provides several advantages over other building materials, such as wood : Some common types of steel buildings are "straight-walled" and "arch," or Nissen or Quonset hut . [ 5 ] Further, the structural type may be classed as clear span or multiple span. A clear span building does not have structural supports (e.g. columns) in the interior occupied space. Straight-walled and arch type refer to the outside shape of the building. More generally, these are both structural arch forms if they rely on a rigid frame structure. However, curved roof structures are typically associated with the arch term. Steel arch buildings may be cost efficient for specific applications. They are commonly used in the agricultural industry. Straight-walled buildings provide more usable space when compared to arch buildings. They are also easier to blend into existing architecture. Straight-walled buildings are commonly used for commercial, industrial, and many other occupancy types. Clear span refers to the internal construction. Clear span steel buildings utilize large overhead support beams, thus reducing the need for internal supporting columns. Clear span steel buildings tend to be less cost efficient than structures with interior columns. However, other practical considerations may influence the selection of framing style such as an occupancy where interior structural obstructions are undesirable (e.g. aircraft hangars or sport arenas). [ 6 ] Long Bay buildings are designed for use in bay spans of over 35'. They use prefabricated metal frames combined with conventional joists to provide larger openings and clearances in buildings. Uniports, originating in the UK in 1965 after being pioneered by Alfred Booth & Co in 1948, have been widely deployed worldwide, including in Arctic Canada, the African jungle, and the Kuwait desert. They are easily assembled with basic tools, suitable for unskilled labor, and can be swiftly disassembled and relocated. Uniports offer versatility, allowing for extensions and connections, along with insulation and partition options. Compact and efficient, up to 60 Mark 1 Uniports can be packed into a single 20-foot container. Building portions that are shop assembled prior to shipment to site are commonly referenced as prefabricated . The smaller steel buildings tend to be prefabricated or simple enough to be constructed by anyone. Prefabrication offers the benefits of being less costly than traditional methods and is more environmentally friendly (since no waste is produced on-site). [ 7 ] The larger steel buildings require skilled construction workers, such as ironworkers , to ensure proper and safe assembly. [ 8 ] There are five main types of structural components that make up a steel frame - tension members, compression members, bending members, combined force members and their connections. Tension members are usually found as web and chord members in trusses and open web steel joists. Ideally tension members carry tensile forces, or pulling forces, only and its end connections are assumed to be pinned. 
Pin connections prevent any moment (rotation) or shear forces from being applied to the member. Compression members are also known as columns, struts, or posts. They are vertical members, or web and chord members in trusses and joists, that are in compression (being squashed). Bending members are also known as beams, girders, joists, spandrels, purlins, lintels, and girts. Each of these members has its own structural application, but typically bending members carry bending moments and shear forces as primary loads, and axial forces and torsion as secondary loads. Combined force members are commonly known as beam-columns and are subjected to both bending and axial compression. Connections are what bring the entire building together: they join these members and must ensure that they function as one unit. [ 9 ]
https://en.wikipedia.org/wiki/Steel_building
Steel casing pipe, also known as encasement pipe, is most commonly used in underground construction to protect utility lines of various types from damage. Such damage might occur from natural causes or human activity. Steel casing pipe is used in different types of horizontal underground boring, where the pipe is jacked into an augered hole in segments and then connected together by welding or by threaded and coupled ends, or by other proprietary pipe connectors such as interference-fit interlocking push-on joints. The steel casing pipe can also be set up and welded into a "ribbon" and then directionally pulled through a previously drilled hole under highways, railroads, lakes and rivers. [ 1 ] Steel casing pipe protects one or more of various types of utilities, such as water mains, gas pipes, electrical power cables, fiber-optic cables, etc. The utility lines that are run through the steel casing pipe are most commonly mounted and spaced within it by using "casing spacers", which are made of various materials, including stainless steel or carbon steel and the more economical plastic versions. The ends of a steel casing pipe "run" are normally sealed with "casing end seals", which can be of the "pull-on" or "wrap-around" rubber varieties. Steel casing pipe is also used in the construction of deep foundations.

Steel casing pipe generally has no specific specifications, other than the need for the material to be extremely straight and round. In some areas ASTM specifications may be required by project engineers. The specification most commonly called for is ASTM A139 Grade B. This specification gives parameters for the minimum yield and tensile strength of the steel pipe being used for casing, and tolerances for straightness and concentricity. Steel casing pipe is often specified as ASTM A-252, a structural-grade material that does not require hydrostatic testing and has less stringent inspection requirements; it usually costs less than other grades such as A-53, [ 2 ] A-139 or API 5L. Used natural gas line pipe is also used as casing on many projects because it is often reclaimed in very good condition and can offer significant cost savings when compared to new steel pipe. Used pipe is unlikely to have any testing data associated with it and is generally used when the only required specification is a given diameter and wall thickness of steel casing pipe.
https://en.wikipedia.org/wiki/Steel_casing_pipe
A steel catenary riser (SCR) is a common method of connecting a subsea pipeline to a deepwater floating or fixed oil production platform. SCRs are used to transfer fluids such as oil, gas, injection water, etc. between the platforms and the pipelines. In the offshore industry the word catenary is used as an adjective or noun with a meaning wider than its historical meaning in mathematics. Thus, an SCR that uses a rigid steel pipe with considerable bending stiffness is described as a catenary. That is because, at the scale of ocean depths, the bending stiffness of a rigid pipe has little effect on the shape of the suspended span of an SCR. The shape assumed by the SCR is controlled mainly by weight, buoyancy and hydrodynamic forces due to currents and waves. The shape of the SCR is well approximated by stiffened catenary equations. [ 1 ] In preliminary considerations, in spite of the use of conventional rigid steel pipe, the shape of the SCR can also be approximated with the use of ideal catenary equations, [ 2 ] when some further loss of accuracy is acceptable. Ideal catenary equations have historically been used to describe the shape of a chain suspended between points in space. A chain has by definition zero bending stiffness, and chains described by the ideal catenary equations have infinitesimally short links.

SCRs were invented by Dr. Carl G. Langner P.E., NAE, who described an SCR together with a flexible joint used to accommodate angular deflections of the top region of the SCR relative to a support platform, as the platform and the SCR move in currents and waves. [ 3 ] SCRs involve unsupported pipe spans thousands of feet long. Complex dynamics and hydrodynamics, including vortex-induced vibrations (VIVs), and the physics of pipe interaction with the seabed are involved; these are demanding on the materials used to build the SCR pipe. Dr. Langner had carried out years of analytical and design work before an application for his US patent was filed. That work started before 1969, and it was reflected in internal Shell documents, which are confidential, but a patent on an early 'Bare Foot' SCR design was issued. [ 4 ] VIVs are predominantly controlled with the use of devices attached to the SCR pipe. These can be, for example, VIV suppression devices such as helicoidal strakes or fairings, [ 5 ] which considerably reduce VIV amplitudes. [ 6 ] The development of VIV prediction engineering programs, such as the SHEAR7 program, is an ongoing process that originated in cooperation between MIT and Shell Exploration & Production [ 7 ] in parallel with the development of the SCR concept, and with SCR development in mind. [ 8 ]

The rigid pipe of the SCR forms a catenary between its hang-off point on the floating or rigid platform and the seabed. [ 9 ] A free-hanging SCR assumes a shape roughly similar to the letter 'J'. The catenary of a Steel Lazy Wave Riser (SLWR) in fact consists of at least three catenary segments. The top and the seabed segments of the catenary are negatively buoyant (they have submerged weight), and their curvatures 'bulge' towards the seabed. The middle segment has buoyant material attached along its entire length, so that the ensemble of the steel pipe and the buoyancy is positively buoyant. Accordingly, the curvature of the buoyant segment 'bulges' upwards (an inverted catenary), and its shape can also be well approximated with the same stiffened or ideal catenary equations. The positively and negatively buoyant segments are tangent to each other at the points where they join.
The overall catenary shape of the SLWR has inflection points at those locations. SLWRs were first installed on a turret moored FPSO offshore Brazil (BC-10, Shell) in 2009, [ 10 ] even though Lazy Wave configuration flexible risers had been in a wide use for several decades beforehand. The deepest application of Lazy Wave SCRs (SLWRs) is at present on the Stones turret-moored FPSO (Shell), which is moored in 9,500 feet water depth in the Gulf of Mexico . [ 11 ] The Stones FPSO turret features a disconnectable buoy, so that the vessel with the crew can be disconnected from the buoy supporting the SLWRs, and moved to a suitable shelter before an arrival of a hurricane. The SCR pipe and a short segment of pipe lying on the seabed use 'dynamic' pipe, i.e. steel pipe having slightly greater wall thickness than the pipeline wall thickness, in order to sustain dynamic bending and steel material fatigue associated in the touch-down zone of the SCR. Beyond that the SCR is typically extended with a rigid pipeline, but use of a flexible pipeline is also feasible. [ 12 ] [ 13 ] The risers are typically 8-12 inches in diameter and operate at a pressure of 2000-5000 psi. [ 14 ] Designs beyond those ranges of pipe sizes and operating pressures are also feasible. Free hanging SCRs were first used by Shell on the Auger tension leg platform (TLP) [ 15 ] in 1994 which was moored in 872 m of water. [ 16 ] Proving to Shell that the SCR concept was technically sound for use on the Auger TLP was a major achievement of Dr. Carl G. Langner. It was a technological leap. The acceptance of the SCR concept by the entire Offshore Industry followed relatively quickly. SCRs have performed reliably on oil and gas fields all over the world since their first Auger installation.
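As a rough illustration of the ideal-catenary approximation mentioned above, the following sketch evaluates the classical catenary shape y = a·(cosh(x/a) − 1) for a free-hanging span. The parameter values are hypothetical, and a real SCR analysis would use stiffened-catenary or finite-element models that include bending stiffness, current and wave loads.

```python
import math

def catenary_profile(horizontal_tension_n: float, weight_per_length_n_m: float,
                     x_m: float) -> float:
    """Ideal catenary elevation y(x) = a*(cosh(x/a) - 1), with a = H / w.

    H is the horizontal tension component and w the submerged weight per unit
    length; bending stiffness is neglected, as in the ideal-catenary
    approximation discussed in the text.
    """
    a = horizontal_tension_n / weight_per_length_n_m
    return a * (math.cosh(x_m / a) - 1.0)

# Hypothetical values: 500 kN horizontal tension, 1 kN/m submerged weight.
for x in (0.0, 200.0, 400.0, 600.0):
    print(f"x = {x:6.1f} m  ->  y = {catenary_profile(5.0e5, 1.0e3, x):8.1f} m")
```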
https://en.wikipedia.org/wiki/Steel_catenary_riser
Steel Design, or more specifically, Structural Steel Design, is an area of structural engineering used to design steel structures. These structures include schools, houses, bridges, commercial centers, tall buildings, warehouses, aircraft, ships and stadiums. The design and use of steel frames are commonly employed in the design of steel structures. More advanced structures include steel plates and shells. In structural engineering, a structure is a body or combination of rigid bodies in space that together form a system capable of supporting loads and resisting moments. The effects of loads and moments on structures are determined through structural analysis. A steel structure is composed of structural members that are made of steel, usually with standard cross-sectional profiles and standards of chemical composition and mechanical properties. The depth of steel beams used in the construction of bridges is usually governed by the maximum moment, and the cross-section is then verified for shear strength near supports and for lateral torsional buckling (by determining the distance between transverse members connecting adjacent beams). Steel column members must be verified as adequate to prevent buckling after axial and moment requirements are met.

There are currently two common methods of steel design: the Allowable Strength Design (ASD) method and the Load and Resistance Factor Design (LRFD) method. Both use a strength, or ultimate-level, design approach. [ 1 ]

For ASD, the required strength, R_a, is determined from the following load combinations (according to the AISC SCM, 13th ed.): [ 2 ]

D + F
D + H + F + L + T
D + H + F + (Lr or S or R)
D + H + F + 0.75(L + T) + 0.75(Lr or S or R)
D + H + F ± (0.6W or 0.7E)
D + H + F + (0.75W or 0.7E) + 0.75L + 0.75(Lr or S or R)
0.6D + 0.6W
0.6D ± 0.7E

where D is the dead load, F the fluid load, T the self-straining load, L the live load, H the load due to lateral earth pressure or ground water pressure, Lr the roof live load, S the snow load, R the rain load, W the wind load, and E the earthquake load. Special provisions exist for accounting for flood loads and atmospheric ice loads, i.e. D_i and W_i. Note that Allowable Strength Design is NOT equivalent to Allowable Stress Design, as governed by the AISC 9th Edition; Allowable Strength Design still uses a strength, or ultimate-level, design approach.

For LRFD, the required strength, R_u, is determined from the following factored load combinations:

1.4(D + F)
1.2(D + F + T) + 1.6(L + H) + 0.5(Lr or S or R)
1.2D + 1.6(Lr or S or R) + (L or 0.8W)
1.2D + 1.0W + L + 0.5(Lr or S or R)
1.2D ± 1.0E + L + 0.2S
0.9D + 1.6W + 1.6H
0.9D + 1.6H ± (1.6W or 1.0E)

where the letters for the loads are the same as for ASD. A worked illustration of how these combinations are evaluated is sketched below.

The American Institute of Steel Construction (AISC) publishes the Steel Construction Manual (SCM), which is currently in its 16th edition. Structural engineers use this manual in analyzing and designing various steel structures. Some of the chapters of the book are as follows. The Canadian Institute of Steel Construction publishes the "CISC Handbook of Steel Construction". CISC is a national industry organization representing the structural steel, open-web steel joist and steel plate fabrication industries in Canada. It serves the same purpose as the AISC manual, but conforms with Canadian standards.
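Referring back to the LRFD load combinations listed above, a small sketch can show how the governing required strength R_u is obtained by evaluating each combination and taking the maximum. This is illustrative only: it uses a reduced subset of the listed combinations, ignores the "or"/"±" alternatives and directional effects, and the load values are hypothetical.

```python
# Illustrative evaluation of a few of the LRFD load combinations listed above.
# Loads are nominal effects in consistent units (e.g. kN on a member); the
# values and the reduced combination set are hypothetical.
loads = {"D": 100.0, "F": 0.0, "T": 0.0, "L": 60.0, "H": 0.0,
         "Lr": 20.0, "S": 0.0, "R": 0.0, "W": 40.0, "E": 0.0}

combinations = {
    "1.4(D + F)":                    lambda q: 1.4 * (q["D"] + q["F"]),
    "1.2(D+F+T) + 1.6(L+H) + 0.5Lr": lambda q: 1.2 * (q["D"] + q["F"] + q["T"])
                                               + 1.6 * (q["L"] + q["H"]) + 0.5 * q["Lr"],
    "1.2D + 1.6Lr + L":              lambda q: 1.2 * q["D"] + 1.6 * q["Lr"] + q["L"],
    "1.2D + 1.0W + L + 0.5Lr":       lambda q: 1.2 * q["D"] + 1.0 * q["W"] + q["L"] + 0.5 * q["Lr"],
    "0.9D + 1.6W + 1.6H":            lambda q: 0.9 * q["D"] + 1.6 * q["W"] + 1.6 * q["H"],
}

results = {name: combo(loads) for name, combo in combinations.items()}
governing = max(results, key=results.get)
for name, value in results.items():
    print(f"{name:32s} -> {value:7.1f}")
print(f"Governing combination: {governing} (R_u = {results[governing]:.1f})")
```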
https://en.wikipedia.org/wiki/Steel_design
Steel frame is a building technique with a "skeleton frame" of vertical steel columns and horizontal I-beams, constructed in a rectangular grid to support the floors, roof and walls of a building, which are all attached to the frame. The development of this technique made the construction of the skyscraper possible. [ 1 ] The steel frame displaced its predecessor, the iron frame, in the early 20th century. [ 2 ]

The rolled steel "profile" or cross section of steel columns takes the shape of the letter "Ɪ". The two wide flanges of a column are thicker and wider than the flanges on a beam, to better withstand compressive stress in the structure. Square and round tubular sections of steel can also be used, often filled with concrete. Steel beams are connected to the columns with bolts and threaded fasteners, and were historically connected by rivets. The central "web" of the steel I-beam is often wider than a column web to resist the higher bending moments that occur in beams. Wide sheets of steel deck can be used to cover the top of the steel frame as a "form" or corrugated mold, below a thick layer of concrete and steel reinforcing bars. Another popular alternative is a floor of precast concrete flooring units with some form of concrete topping. Often in office buildings, the final floor surface is provided by some form of raised flooring system, with the void between the walking surface and the structural floor being used for cables and air handling ducts.

The frame needs to be protected from fire because steel softens at high temperature and this can cause the building to partially collapse. In the case of the columns this is usually done by encasing them in some form of fire-resistant structure such as masonry, concrete or plasterboard. The beams may be cased in concrete or plasterboard, or sprayed with a coating to insulate them from the heat of the fire, or they can be protected by a fire-resistant ceiling construction. Asbestos was a popular material for fireproofing steel structures up until the early 1970s, before the health risks of asbestos fibres were fully understood. The exterior "skin" of the building is anchored to the frame using a variety of construction techniques and following a huge variety of architectural styles. Bricks, stone, reinforced concrete, architectural glass, sheet metal and simply paint have been used to cover the frame to protect the steel from the weather. [ 3 ]

Cold-formed steel frames are also known as lightweight steel framing (LSF). Thin sheets of galvanized steel can be cold-formed into steel studs for use as a structural or non-structural building material for both external and partition walls in residential, commercial and industrial construction projects. The dimension of the room is established with a horizontal track that is anchored to the floor and ceiling to outline each room. The vertical studs are arranged in the tracks, usually spaced 16 inches (410 mm) apart, and fastened at the top and bottom. The typical profiles used in residential construction are the C-shaped stud and the U-shaped track, along with a variety of other profiles. Framing members are generally produced in a thickness of 12 to 25 gauge. Heavy gauges, such as 12 and 14 gauge, are commonly used when axial loads (parallel to the length of the member) are high, such as in load-bearing construction.
Medium-heavy gauges, such as 16 and 18 gauge, are commonly used when there are no axial loads but heavy lateral loads (perpendicular to the member) such as exterior wall studs that need to resist hurricane-force wind loads along coasts. Light gauges, such as 25 gauge, are commonly used where there are no axial loads and very light lateral loads such as in interior construction where the members serve as framing for demising walls between rooms. The wall finish is anchored to the two flange sides of the stud, which varies from 1 + 1 ⁄ 4 to 3 inches (32 to 76 mm) thick, and the width of web ranges from 1 + 5 ⁄ 8 to 14 inches (41 to 356 mm). Rectangular sections are removed from the web to provide access for electrical wiring. Steel mills produce galvanized sheet steel, the base material for the manufacture of cold-formed steel profiles. Sheet steel is then roll-formed into the final profiles used for framing. The sheets are zinc coated (galvanized) to increase protection against oxidation and corrosion . Steel framing provides excellent design flexibility due to the high strength-to-weight ratio of steel, which allows it to span over long distances, and also resist wind and earthquake loads. Steel-framed walls can be designed to offer excellent thermal and acoustic properties – one of the specific considerations when building using cold-formed steel is that thermal bridging can occur across the wall system between the outside environment and interior conditioned space. Thermal bridging can be protected against by installing a layer of externally fixed insulation along the steel framing – typically referred to as a 'thermal break'. The spacing between studs is typically 16 inches on center for home exterior and interior walls depending on designed loading requirements. In office suites the spacing is 24 inches (610 mm) on center for all walls except for elevator and staircase wells. Hot Formed frames, also known as hot-rolled steel frames, are engineered from steel that undergoes a complex manufacturing process known as hot rolling. During this procedure, steel members are heated to temperatures above the steel’s recrystallization temperature (1700˚F).This process serves to refine the grain structure of the steel and align its crystalline lattice. It is then passed through precision rollers to achieve the desired frame profiles. [ 4 ] The distinctive feature of hot formed frames is their substantial beam thickness and larger dimensions, making them more robust compared to their cold rolled counterparts. This inherent strength makes them particularly well-suited for application in larger structures, as they show minimal deformation when subjected to substantial loads. While it is true that hot rolled steel members often have a higher initial cost per component when compared to cold rolled steel, their cost-efficiency becomes increasingly evident when used in the construction of larger structures. This is because hot rolled steel frames require fewer components to span equivalent distances, leading to economic advantages in bigger projects. The use of steel instead of iron for structural purposes was initially slow. The first iron-framed building, Ditherington Flax Mill , had been built in 1797, but it was not until the development of the Bessemer process in 1855 that steel production was made efficient enough for steel to be a widely used material. 
Cheap steels, which had high tensile and compressive strengths and good ductility, were available from about 1870, but wrought and cast iron continued to satisfy most of the demand for iron-based building products, due mainly to problems of producing steel from alkaline ores. These problems, caused principally by the presence of phosphorus, were solved by Sidney Gilchrist Thomas in 1879. It was not until 1880 that an era of construction based on reliable mild steel began. By that date the quality of steels being produced had become reasonably consistent. [ 5 ] The Home Insurance Building , completed in 1885, was the first to use skeleton frame construction, completely removing the load bearing function of its masonry cladding. In this case the iron columns are merely embedded in the walls, and their load carrying capacity appears to be secondary to the capacity of the masonry, particularly for wind loads. In the United States, the first steel framed building was the Rand McNally Building in Chicago, erected in 1890. The Royal Insurance Building in Liverpool designed by James Francis Doyle in 1895 (erected 1896–1903) was the first to use a steel frame in the United Kingdom. [ 6 ]
https://en.wikipedia.org/wiki/Steel_frame
Steel plate construction is a method of rapidly constructing heavy reinforced concrete items. The method was developed in Korea in 2004 at a steel fabricator. [ citation needed ] Each assembly has two parallel steel plates joined by welded stringers or tie bars. The assemblies are then moved to the job site and placed with a crane. Finally, the space between the plate walls is filled with concrete. [ 1 ] The method provides excellent strength because the steel is on the outside, where tensile forces are often greatest. Construction with this method is accomplished roughly twice as fast as with other methods of reinforced concrete construction, because building the assemblies at specialized off-site fabrication facilities avoids tying rebar and constructing forms on-site. [ 2 ] Because of the rapid construction time, the cost of large-scale projects can be significantly decreased when this method is used. The method is of special interest for rapidly constructing nuclear power plants, which use large reinforced concrete structures and typically have long construction times with high costs. [ 3 ]
https://en.wikipedia.org/wiki/Steel_plate_construction
A steel plate shear wall (SPSW) consists of steel infill plates bounded by boundary elements. [ 1 ] Its behavior is analogous to a vertical plate girder cantilevered from its base. Similar to plate girders, the SPW system optimizes component performance by taking advantage of the post-buckling behavior of the steel infill panels. An SPW frame can be idealized as a vertical cantilever plate girder, in which the steel plates act as the web, the columns act as the flanges and the cross beams represent the transverse stiffeners. The theory that governs plate girder design should not be used directly in the design of SPW structures, since the relatively high bending strength and stiffness of the beams and columns have a significant effect on the post-buckling behavior.

Capacity design of structures means controlling failure in a building by pre-selecting localized ductile fuses (or weak links) to act as the primary locations for energy dissipation when the building is subjected to extreme loading. The structure is designed such that all inelastic action (or damage) occurs at these critical locations (the fuses), which are designed to behave in a ductile and stable manner. Conversely, all other structural elements are protected against failure or collapse by limiting the load transfer to these elements to the yield capacity of the fuses. In SPSWs, the infill plates are meant to serve as the fuse elements. When damaged during an extreme loading event, they can be replaced at a reasonable cost, restoring the full integrity of the building.

In general, SPWs are categorized based on their performance, the selection of structural and load-bearing systems, and the presence of perforations or stiffeners (Table 1). A significant amount of valuable research has been performed on the static and dynamic behavior of SPSWs. Much research has been conducted not only to help determine the behavior, response and performance of SPWs under cyclic and dynamic loading, but also to help advance analysis and design methodologies for the engineering community. The pioneering work of Kulak and co-investigators at the University of Alberta in Canada led to a simplified method for analyzing a thin unstiffened SPSW – the strip model. [ 2 ] This model is incorporated in Chapter 20 of the most recent Canadian Steel Design Standard [ 3 ] (CAN/CSA S16-01) [ 4 ] and the National Earthquake Hazard Reduction Program (NEHRP) provisions in the US. Table 1. Categorization of steel plate walls based on performance characteristics and expectations. [ 1 ]

In the past two decades the steel plate shear wall (SPSW), also known as the steel plate wall (SPW), has been used in a number of buildings in Japan and North America as part of the lateral force resisting system. In earlier days, SPSWs were treated like vertically oriented plate girders and design procedures tended to be very conservative. Web buckling was prevented through extensive stiffening or by selecting an appropriately thick web plate, until more information became available on the post-buckling characteristics of web plates. Although plate girder theory seems appropriate for the design of an SPW structure, a very important difference is the relatively high bending strength and stiffness of the beams and columns that form the boundary elements of the wall. These members are expected to have a significant effect on the overall behaviour of a building incorporating this type of system, and several researchers have focused on this aspect of SPWs.
The energy dissipating qualities of the web plate under extreme cyclic loading has raised the prospect of using SPSWs as a promising alternative to conventional systems in high-risk seismic regions. A further benefit is that the diagonal tension field of the web plate acts like a diagonal brace in a braced frame and thus completes the truss action, which is known to be an efficient means to control wind drift. From a designer's point of view, steel plate walls have become a very attractive alternative to other steel systems, or to replace reinforced concrete elevator cores and shear wall. In comparative studies it has been shown that the overall costs of a building can be reduced significantly when considering the following advantages: [ 5 ] In comparison with conventional bracing systems, steel panels have the advantage of being a redundant, continuous system exhibiting relatively stable and ductile behaviour under severe cyclic loading (Tromposch and Kulak, 1987). This benefit along with the high stiffness of the plates acting like tension braces to maintain stability, strongly qualifies the SPW as an ideal energy dissipation system in high risk seismic regions, while providing an efficient system to reduce lateral drift. Thus, some of the advantages of using SPWs compared with conventional bracing systems are as follows: A steel plate shear element consists of steel infill plates bounded by a column-beam system. When these infill plates occupy each level within a framed bay of a structure, they constitute an SPW. Its behaviour is analogous to a vertical plate girder cantilevered from its base. Similar to plate girders, the SPW system optimizes component performance by taking advantage of the post-buckling behaviour of the steel infill panels. An SPW frame can be idealized as a vertical cantilever plate girder, in which the steel plates act as the web, the columns act as the flanges and the cross beams1 represent the transverse stiffeners. The theory that governs the design of plate girders for buildings proposed by Basler in 1960, [ 6 ] [ 7 ] should not be used in the design of SPW structures since the relatively high bending strength and stiffness of the beams and columns are expected to have a significant effect on the post-buckling behaviour. However, Basler's theory could be used as a basis to derive an analytical model for SPW systems. Designers pioneering the use of SPWs did not have much experience nor existing data to rely upon. Typically, web plate design failed to consider post-buckling behaviour under shear, thus ignoring the advantage of the tension field and its added benefits for drift control and shear resistance. Furthermore, the inelastic deformation capacity of this highly redundant system had not been utilized, also ignoring the significant energy dissipation capability that is of great importance for buildings in high-risk seismic zones. One of the first researchers to investigate the behaviour of SPWs more closely was Kulak at the University of Alberta . Since the early 1980s, his team conducted both analytical and experimental research focused on developing design procedures suitable for drafting design standards (Driver et al., 1997, Thorburn et al., 1983, Timler and Kulak, 1983, and Tromposch and Kulak, 1987). [ 8 ] Recent research in the United States by Astaneh (2001) supports the assertion by Canadian academia that unstiffened plate, post-buckling behaviour acts as a capable shear resisting system. 
There are two different modelling techniques: The strip model represents shear panels as a series of inclined strip elements, capable of transmitting tension forces only, and oriented in the same direction as the average principal tensile stresses in the panel. By replacing a plate panel with struts, the resulting steel structure can be analyzed using currently available commercial computer analysis software. Research conducted at the University of British Columbia by Rezai et al. (1999) showed that the strip model is significantly incompatible and inaccurate for a wide range of SPW arrangements. The strip model is limited mostly to SPSWs with thin plates (low critical buckling capacity) and certain ratios. [ 9 ] In the development of this model, no solution has been provided for a perforated SPSW, shear walls with thick steel plates and shear walls with stiffeners. The strip model concept, although appropriate for practical analysis of thin plates, is not directly applicable to other types of plates. Moreover, its implementations have yet to be incorporated in commonly used commercial computer analysis software. In order to overcome this limitation, a general method was developed for the analysis and design of SPWs within different configurations, including walls with or without openings, with thin or thick plates, and with or without stiffeners. [ 10 ] This method considers the behavior of the steel plate and frame separately, and accounts for the interaction of these two elements, which leads to a more rational engineering design of an SPSW system. However, this model has serious shortcomings when the flexural behavior of an SPSW needs to be properly accounted for, such as the case of a slender tall building. Modified Plate-Frame Interaction (M-PFI) model is based upon an existing shear model originally presented by Roberts and Sabouri-Ghomi (1992). Sabouri-Ghomi, Ventura and Kharrazi (2005) further refined the model and named it the Plate-Frame Interaction (PFI) model. In this paper, the PFI analytical model is then further enhanced by ‘modifying’ the load-displacement diagram to include the effect of overturning moments on the SPW response, hence the given name of the M-PFI model. [ 11 ] [ 12 ] [ 13 ] The method also addresses bending and shear interactions of the plastic ultimate capacity of steel panels, as well as bending and shear interactions of the ultimate yield strength for each individual component, that is the steel plate and surrounding frame. Saeed Tabatabaei and Roberts (1991 and 1992), Roberts and Sabouri-Ghomi (1991 and 1992), and Berman and Bruneau (2005)
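The strip model described above replaces the infill plate by a set of parallel, tension-only strips inclined at the tension-field angle. The following sketch shows one way such a discretization might be set up as input to a frame-analysis program; the inclination angle is treated as a given input rather than computed from a code expression, the equal-spacing geometry is an assumption, and all names and values are hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Strip:
    area_mm2: float    # cross-sectional area assigned to the tension-only strip
    angle_deg: float   # inclination measured from the column (vertical) axis

def discretize_panel(width_mm: float, height_mm: float, thickness_mm: float,
                     alpha_deg: float, n_strips: int = 10) -> list[Strip]:
    """Represent a thin infill plate as n tension-only strips at angle alpha.

    Each strip receives an equal tributary width of plate measured
    perpendicular to the strip direction; compression capacity is ignored,
    consistent with the tension-field idealization described in the text.
    """
    alpha = math.radians(alpha_deg)
    # Extent of the panel measured perpendicular to the strip direction.
    perpendicular_extent = width_mm * math.cos(alpha) + height_mm * math.sin(alpha)
    strip_spacing = perpendicular_extent / n_strips
    strip_area = strip_spacing * thickness_mm
    return [Strip(area_mm2=strip_area, angle_deg=alpha_deg) for _ in range(n_strips)]

# Hypothetical 3000 x 3000 x 5 mm panel with an assumed 42-degree tension field.
strips = discretize_panel(3000.0, 3000.0, 5.0, alpha_deg=42.0)
print(len(strips), "strips, each", round(strips[0].area_mm2, 1), "mm^2 at", strips[0].angle_deg, "deg")
```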
https://en.wikipedia.org/wiki/Steel_plate_shear_wall
Steen Rasmussen (born 7 July 1955) is a Danish physicist mainly working in the areas of artificial life and complex systems. He is currently a professor in physics and a center director at the University of Southern Denmark, as well as an external research professor at the Santa Fe Institute. His formal training was at the Technical University of Denmark (1985 PhD in physics of complex systems) and the University of Copenhagen (philosophy). He spent 20 years as a researcher at Los Alamos National Laboratory (1988-2007), the last five years as leader of the Self-Organized Systems team. He has been part of the Santa Fe Institute since 1988.

The main scientific effort of Rasmussen has since 2001 been to explore, understand and construct a transition from nonliving to living materials. Bridging this gap requires an interdisciplinary scientific effort, which is why he has assembled, sponsored and led research teams in the US, across Europe and in Denmark. He became a scientific team leader in 2002 at Los Alamos National Laboratory, USA. [ 1 ] He has since held research leadership positions at the Santa Fe Institute, the University of Copenhagen and the University of Southern Denmark. In 2004 he represented Los Alamos National Laboratory scientifically in cofounding, together with primarily European scientific institutions, the European Centre for Living Technology in Venice, Italy, where he later served as Chairman of the Science Board. Since late 2007 he has been the director of the Center for Fundamental Living Technology at the University of Southern Denmark.

Rasmussen has for many years been actively engaged in the public discourse regarding science and society, and on this background he founded The Initiative for Science, Society and Policy (ISSP) in 2009. ISSP is currently funded by two Danish universities and has a Director, five Science Focus Leaders and a Science Board. In 2018 he received the Lifetime Achievement Award from the International Society for Artificial Life (ISAL).
https://en.wikipedia.org/wiki/Steen_Rasmussen_(physicist)
Steensen Varming is an engineering firm headquartered in Copenhagen, Denmark. It was founded there by Niels Steensen and Jørgen Varming in 1933. The firm specialised in civil, structural and building services engineering. During the 20th century, the practice grew beyond Denmark, and new offices were established in Australia (Steensen Varming Australia ‐ 1973), the United Kingdom (Steensen Varming Mulcahy ‐ 1957) and Ireland (Varming Mulcahy Reilly Associates ‐ 1947). [ 1 ] Jørgen Varming was the son of a prominent Danish architect, Kristoffer Varming; Jørgen studied engineering at the University of Newcastle. [ 2 ]

Steensen and Varming were chosen by the Danish architect Jørn Utzon as the mechanical consulting engineers for the Sydney Opera House in Sydney in 1957. The Australian branch of Steensen & Varming Australia (later to be known as Steensen Varming) was led by Vagn Prestmark, a partner from the Danish Steensen & Varming firm. [ 3 ] [ 4 ] Prestmark established Steensen Varming in Australia in 1957, and the company was permanently established in Australia in 1973. [ 5 ] Steensen & Varming was not well known in Australia prior to the Sydney Opera House; it was, however, well established in Europe, with offices in Dublin, Belfast, London, Edinburgh and Copenhagen, and employed over 500 people by 1973. [ 6 ] When Utzon resigned from the Sydney Opera House in 1966, Steensen & Varming continued as the mechanical consultants, ultimately delivering the design, documentation, contract administration and detailed site supervision of all mechanical, hydraulic and fire protection services, including the controls and supervisory system.

Steensen Varming's best-known contribution to the Sydney Opera House was the design for the water heat pump system. The architects and engineers agreed that constructing a boiler chimney stack or a cooling tower would not be in keeping with the design of the Opera House, which ruled out the two normal approaches for large-scale air conditioning. Steensen Varming provided the design solution by using a heat pump system, which used water from the harbour as the cooling agent. [ 5 ] There were three main considerations which led to the design of the Opera House air conditioning as a heat pump system: the availability of the waters of Sydney Harbour as a heat source and sink, the aesthetics, and the savings that could be achieved with a water-to-water heat pump. Three pumps draw water from Circular Quay; the water is filtered to remove debris, passes through tubes, and is discharged into the harbour on the opposite side of the Opera House. Fresh water circulates between the heat exchanger shells and the shells of the condenser and evaporators of three centrifugal chiller / heat pump sets. [ 7 ] The design innovation and technical expertise demonstrated in this landmark project subsequently led to the awarding of other projects in Australia to the Steensen Varming practice. [ 5 ]

The engineering construction of the Sydney Opera House was featured in a National Geographic / BBC production hosted by Richard Hammond called Engineering Connections. The programme aired in Australia on 13 March 2010. Part of the documentary featured the seawater heat rejection system originally designed by Steensen Varming; assistance on the documentary was provided by Steensen Varming, who acted as technical liaison to the production team.
[ 8 ] Steensen Varming was the first Australian organisation to win an Award of Excellence from the International Association of Lighting Designers for the lighting of the Ian Thorpe Aquatic Centre , Sydney. The Ian Thorpe Aquatic Centre was one of the last architectural designs by the architect Harry Seidler and was completed in 2008. [ 9 ] The Sydney Mint was recently named as one of 30 projects that have reshaped the built environment since 1978. "The refurbishment project is an example of the Integration of services systems (by Steensen Varming), to provide a modern, functional headquarters while minimising the impact on the heritage and archaeological fabric of a site." [ 10 ]
https://en.wikipedia.org/wiki/Steensen_Varming
Steering ratio refers to the ratio between the turn of the steering wheel (in degrees) or handlebars and the turn of the wheels (in degrees). [ 1 ] The steering ratio is the ratio of the number of degrees of turn of the steering wheel to the number of degrees the wheel(s) turn as a result. In motorcycles, delta tricycles and bicycles, the steering ratio is always 1:1, because the handlebars are fixed directly to the steering of the front wheel. A steering ratio of x:y means that turning the steering wheel x degree(s) causes the wheel(s) to turn y degree(s). In most passenger cars, the ratio is between 12:1 and 20:1. For example, if one and a half turns of the steering wheel, 540 degrees, cause the inner and outer wheels to turn 35 and 30 degrees respectively (the two differ because of Ackermann steering geometry), the ratio is then 540:((35+30)/2) = 16.6:1.

A higher steering ratio means that the steering wheel must be turned more to get the wheels to turn a given amount, but less effort is needed to turn the steering wheel. A lower steering ratio means that the steering wheel is turned less to get the wheels to turn a given amount, but the steering wheel is harder to turn. Larger and heavier vehicles will often have a higher steering ratio, which makes the steering wheel easier to turn. If a truck had a low steering ratio, it would be very hard to turn the steering wheel. In normal and lighter cars, the wheels are easier to turn, so the steering ratio does not have to be as high. In race cars the ratio is typically very low, because the vehicle must respond to steering input much faster than in normal cars; the steering wheel is therefore harder to turn.

Variable-ratio steering is a system that uses different ratios along the rack in a rack and pinion steering system. At the center of the rack, the spaces between the teeth are smaller, and the spacing becomes larger as the pinion moves down the rack. In the middle of the rack there is a higher ratio, and the ratio becomes lower as the steering wheel is turned towards lock. That makes the steering less sensitive when the steering wheel is close to its center position, and makes it harder for the driver to oversteer at high speeds. As the steering wheel is turned towards lock, the wheels begin to react more to steering input.

A steering quickener is used to modify the steering ratio of a factory-installed steering system, which in turn modifies the response time and overall handling of the vehicle. When a steering quickener is employed in an automobile, the driver can turn the steering wheel through a smaller angle than with the factory-installed steering system to turn the vehicle through the same distance. [ 2 ] On the other hand, the steering effort needed will greatly increase. If the automobile is equipped with power steering, overloading the power steering pump can also be a concern. [ 3 ]
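The worked example above (540 degrees of steering-wheel input producing road-wheel angles of 35 and 30 degrees) can be reproduced with a small sketch; the function name is illustrative only.

```python
def steering_ratio(steering_wheel_deg: float, inner_wheel_deg: float,
                   outer_wheel_deg: float) -> float:
    """Overall steering ratio: steering-wheel angle over the mean road-wheel angle.

    The inner and outer wheels turn by different amounts because of Ackermann
    steering geometry, so their average is used for the road-wheel angle.
    """
    mean_road_wheel_deg = (inner_wheel_deg + outer_wheel_deg) / 2.0
    return steering_wheel_deg / mean_road_wheel_deg

# Example from the text: 1.5 turns (540 deg) giving 35 deg inner, 30 deg outer.
print(f"{steering_ratio(540.0, 35.0, 30.0):.1f}:1")  # -> 16.6:1
```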
https://en.wikipedia.org/wiki/Steering_ratio
A steering wheel (also called a driving wheel, a hand wheel, or simply wheel) is a type of steering control in vehicles. Steering wheels are used in most modern land vehicles, including all mass-production automobiles, buses, light and heavy trucks, as well as tractors and tanks. The steering wheel is the part of the steering system that the driver manipulates; the rest of the steering system responds to such driver inputs. This can be through direct mechanical contact, as in recirculating ball or rack and pinion steering gears, with or without the assistance of hydraulic power steering (HPS), or, as in some modern production cars, with the help of computer-controlled motors, known as electric power steering.

Near the start of the 18th century, many sea vessels appeared using the ship's wheel design. However, historians are unclear about when that approach to steering was first used. [ 1 ] The first automobiles were steered with a tiller, but in 1894, Alfred Vacheron took part in the Paris–Rouen race with a Panhard 4 hp model which he had fitted with a steering wheel. [ 2 ] That is believed to be one of the earliest employments of the principle. [ 3 ] From 1898, the Panhard et Levassor cars were equipped as standard with steering wheels. Charles Rolls introduced the first car in Britain fitted with a steering wheel when he imported a 6 hp Panhard from France in 1898. [ 4 ] Arthur Constantin Krebs replaced the tiller with an inclined steering wheel for the Panhard car he designed for the 1898 Paris–Amsterdam–Paris race, which ran 7–13 July 1898. [ 5 ] In 1898, Thomas B. Jeffery and his son, Charles T. Jeffery, developed two advanced experimental cars featuring a front-mounted engine and a steering wheel mounted on the left-hand side. [ 6 ] However, the early automaker adopted a more "conventional" rear-engine and tiller-steering layout for its first mass-produced Ramblers in 1902. [ 6 ] The following year, the Rambler Model E was largely unchanged, except that it came equipped with a tiller early in the year that was changed to a steering wheel by the end of 1903. [ 7 ] By 1904, all Ramblers featured steering wheels. [ 8 ] Within a decade, the steering wheel had entirely replaced the tiller in automobiles. At the insistence of Thomas B. Jeffery, the driver's position was also moved to the left-hand side of the car during the 1903 Rambler production. [ 9 ] Most other car makers began offering cars with left-hand drive in 1910. [ 10 ] Soon after, most cars in the US converted to left-hand drive.

Steering wheels for passenger automobiles are generally circular. They are mounted to the steering column by a hub connected to the outer ring of the steering wheel by one or more spokes (single-spoke wheels being a relatively rare exception). Other types of vehicles may use a modified circular design, a butterfly shape, or some other shape, such as a yoke. [ 12 ] On some Tesla models, the steering control is through a rectangle-shaped yoke with rounded edges and two pistol grips. [ 13 ] The C8 Corvette includes a square-type steering wheel with rounded corners, described as a 'squircle'. [ 14 ] The objective of the flat bottom is to ease driver egress, while the flattened top enhances the line of sight when driving. [ 14 ] General Motors applied for a US patent for a modular steering control that can be updated with components or changed in shape, ranging from a traditional circle to a yoke.
[ 15 ] In countries where cars must drive on the left side of the road , the steering wheel is typically on the right side of the car (right-hand drive or RHD); the converse applies in countries where cars drive on the right side of the road (left-hand drive or LHD). In addition to its use in steering, the steering wheel is the usual location for a button to activate the car's horn . Modern automobiles may have other controls, such as cruise control , audio system, and telephone controls, as well as paddle-shifters , built into the steering wheel to minimize the extent to which the driver must take their hands off the wheel. The steering wheels were rigid and mounted on non-collapsible steering columns . This arrangement increased the risk of impaling the driver in case of a severe crash. The first collapsible steering column was invented in 1934 but was never successfully marketed. [ 16 ] By 1956, Ford came out with a safety steering wheel that was set high above the post with spokes that would flex, [ 17 ] but the column was still rigid. In 1968, United States regulations ( FMVSS Standard No. 204) were implemented concerning the acceptable rearward movement of the steering wheel in case of a crash. [ 18 ] Collapsible steering columns were required to meet that standard. Before this invention, the Citroën DS incorporated a curved and off-center single-spoke steering wheel designed to deflect the driver from the steering column in case of a crash. [ 19 ] Power steering affords the driver reduced effort to steer the car. Modern power steering has almost universally relied on a hydraulic system, although electrical systems are steadily replacing this technology. Mechanical power steering systems were introduced, such as on 1953 Studebakers . [ 20 ] However, hydraulically assisted systems have prevailed. While other methods of steering passenger cars have resulted from experiments, for example, the "wrist-twist" steering of the 1965 Mercury Park Lane concept car was controlled by two 5-inch (127 mm) rings, [ 21 ] none have yet been deployed as successfully as the conventional large steering wheel. Passenger automobile regulations implemented by the U.S. Department of Transportation required the locking of steering wheel rotation (or transmission locked in "park") to hinder motor vehicle theft ; in most vehicles, this is accomplished when the ignition key is removed from the ignition lock. See steering lock . [ 22 ] The driver's seat and steering wheel are centrally located on certain high-performance sports cars, such as the McLaren F1 , and most single-seat racing cars. As drivers may continuously have their hands on the steering wheel for many hours, these are designed with ergonomics in mind. However, the most crucial concern is that the driver can effectively convey torque to the steering system, especially in vehicles without power steering or in the rare event of a loss of steering assist. A typical design for circular steering wheels is a steel or magnesium rim with a plastic or rubberized grip molded over and around it. Some drivers purchase vinyl or textile steering wheel covers to enhance grip and comfort or simply as decoration. Another device used to make steering easier is the brodie knob . A similar device in aircraft is the yoke . Water vessels not steered from a stern-mounted tiller are directed with the ship's wheel , which may have inspired the concept of the steering wheel. 
The steering wheel has persisted over other user interfaces because driving requires precise feedback, which a large control interface provides. [ 23 ] Early Formula One cars used steering wheels taken directly from road cars. They were normally made from wood. Without interior cabin packaging constraints, they tended to be made with as large a diameter as possible to reduce the effort needed to turn. As cars grew progressively lower and drivers' areas more compact throughout the 1960s and 1970s, steering wheels became smaller to fit into the interior space. [ 24 ] The number of spokes in the steering wheel has varied over time. Most early cars had four-spoke steering wheels. [ 25 ] A Banjo steering wheel was an option in early automobiles. [ 26 ] It predates power steering. The wire spokes acted as a buffer or absorber between the driver's hands and the vibration transmitted from the road surfaces. Most had three or four spokes, each made of four or five wires, hence the name "Banjo". Edward James Lobdell developed the original tilt wheel in the early 1900s. [ 27 ] A 7-position tilt wheel was introduced by the Saginaw Division of General Motors in 1963 for all passenger car divisions except Chevrolet, which received the tilt wheel in 1964. [ 28 ] This tilt wheel was also supplied to the other US automakers (except Ford). [ 29 ] Originally a luxury option on cars, the tilt function helps to adjust the steering wheel by moving the wheel through an arc in an up and down motion. Tilt steering wheels rely upon a ratchet joint located in the steering column just below the steering wheel. The wheel can be adjusted upward or downward by disengaging the ratchet lock while the steering column remains stationary below the joint. Some designs place the pivot slightly forward along the column, allowing for a fair amount of vertical movement of the steering wheel with slight actual tilt. In contrast, other designs place the pivot almost inside the steering wheel, allowing adjustment of the angle of the steering wheel with nearly no change in its height. An adjustable steering column allows the steering wheel height to be adjusted with only a small, useful change in tilt. Most of these systems work with compression locks or electric motors instead of ratchet mechanisms; the latter may be capable of moving to a memorized position when a given driver uses the car or automatically moving up and forward to ease egress. Many pre-war British cars offered telescoping steering wheels that required loosening a locknut before adjustment, many using the Douglas ASW (Adjustable Steering Wheel). [ 27 ] [ 30 ] In 1949, the Jaguar XK120 introduced a new steering wheel supplied by Bluemel that was driver-adjustable by loosening a sleeve around the column by hand. [ 31 ] The 1955–1957 Ford Thunderbird had a similar design with 3 inches (76 mm) of total travel. [ 32 ] [ 33 ] In 1956, the travel was restricted to 2 inches (51 mm). A patent was filed regarding a telescoping steering wheel in July 1942 by Bernard Maurer of the Saginaw Steering Gear Division of General Motors (now Nexteer Automotive ). Nevertheless, GM would not offer a telescoping wheel of its own until the debut of the optional telescopic wheel on the 1965 Corvette and Corvair , and the optional tilt/telescope wheel on 1965 Cadillacs . The GM column was released by twisting a locking ring surrounding the center hub and offered a 3-inch (76 mm) range of adjustment.
A swing-away steering column was introduced in the 1961 Ford Thunderbird and made available on other Ford products during the 1960s. It allowed the steering wheel to move 9 inches (229 mm) to the right when the transmission selector was in the "park" position, making the driver's exit and entry easier. [ 27 ] [ 34 ] A tilt-away wheel was introduced by Ford in 1967 after updates to Federal Motor Vehicle Safety Standards requirements. Though it was an update to the swing-away steering wheel, which did not meet the revised safety standards, it offered more limited movement than its predecessor but added convenience through an automatic pop-over function. [ 35 ] Some steering wheels can be mounted on a detachable or a quick-release hub. The steering wheel can be removed without using tools by pressing a button. The system is often found in narrow-spaced racing cars to facilitate the driver getting in and out, as well as in other cars as an anti-theft device. [ 36 ] The quick-release connector is often brand-specific, with some makes being interchangeable. The most common mounting pattern is 6×70 mm, [ 37 ] which denotes a bolt circle pattern with six bolts placed along a circle 70 mm in diameter. [ 38 ] Other examples of common bolt patterns are 3×1.75 in (44.45 mm), 5×2.75 in (69.85 mm), 6×74 mm and 6×2.75 in (69.85 mm). [ 37 ] The quick release itself is often proprietary. [ citation needed ] The steering wheel should be turned with deliberate movements of the hand and wrist, and the repeated motions involved should be performed with care to protect the hands and wrists from strain. "Proper posture of the hand-arm system while using hand tools is essential. As a rule, the wrist should not be bent, but must be kept straight to avoid overexertion of tissues like tendons and tendon sheaths and compression of nerves and blood vessels." [ 39 ] Turning the steering wheel while the vehicle is stationary is called dry steering . It is generally advised to avoid dry steering as it strains the steering mechanism and causes undue wear to the tires. The first button added to the steering wheel was a switch to activate the car's electric horn . Traditionally located on the steering wheel hub or center pad, the horn switch was sometimes placed on the spokes or activated via a decorative horn ring, which obviated the necessity of moving a hand away from the rim. Electrical connections are made via a slip ring . A further development, the Rim Blow steering wheel, integrated the horn switch into the steering wheel rim. In 1966, Ford offered the Highway Pilot Speed Control option, with rocker switches mounted on the steering wheel pad, on its Thunderbird . [ 40 ] Uniquely, the Thunderbird also lightly applied the brakes and illuminated the stop lamps when the Retard switch was continuously depressed with the cruise control on, but not engaged. In 1974, Lincoln added two rocker switches on the steering wheel to activate various cruise control functions on the Continental and Continental Mark IV. [ 41 ] In 1988, Pontiac offered a steering wheel with 12 buttons controlling various audio functions on the Trans-Am, [ 42 ] 6000 STE and Bonneville . In the 1990s, a proliferation of new buttons began to appear on automobile steering wheels. Remote or alternate adjustments could include vehicle audio , telephone, and voice control navigation. Scroll wheels or buttons are often used to set volume levels or page through menus and change radio stations or audio tracks.
These controls can use universal interfaces, [ 43 ] either wired or wireless. Game controllers designed to look and feel like a steering wheel, intended for use in racing games , are available for arcade cabinets , personal computers, and console games. An early example is the Telstar Arcade , which featured a wheel in 1977 for use in the Road Race game that came packaged with it. [ 44 ] Some modern video game steering wheels employ haptic technology to simulate the feedback a real driver feels from a steering wheel, as well as buttons to allow for more inputs.
https://en.wikipedia.org/wiki/Steering_wheel
In glaciology and civil engineering , Stefan's equation (or Stefan's formula ) describes the dependence of ice-cover thickness on the temperature history. It says in particular that the expected ice accretion is proportional to the square root of the number of degree days below freezing. It is named for Slovenian physicist Josef Stefan .
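As a minimal numerical sketch of that square-root dependence (the coefficient below is a hypothetical, site-dependent value, not something specified by Stefan's equation itself):

```python
import math

def ice_thickness_cm(freezing_degree_days, alpha=2.0):
    """Estimate ice-cover thickness (cm) from accumulated freezing degree days.

    Implements the proportionality described above: thickness grows with the
    square root of the degree days below freezing. `alpha` (cm per sqrt(degC*day))
    is a hypothetical site-dependent coefficient; in practice it is calibrated
    for local snow cover and ice properties.
    """
    return alpha * math.sqrt(freezing_degree_days)

# Example: 100 degC*day of accumulated frost gives 20 cm with alpha = 2.0
print(f"{ice_thickness_cm(100):.1f} cm")
```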
https://en.wikipedia.org/wiki/Stefan's_equation
In thermodynamics , Stefan's formula says that the specific surface energy at a given interface is determined by the respective enthalpy difference Δ H ∗ {\displaystyle \Delta H^{*}} . σ = γ 0 ( Δ H ∗ N A 1 / 3 V m 2 / 3 ) , {\displaystyle \sigma =\gamma _{0}\left({\frac {\Delta H^{*}}{N_{\text{A}}^{1/3}V_{\text{m}}^{2/3}}}\right),} where σ is the specific surface energy , N A is the Avogadro constant , γ 0 {\displaystyle \gamma _{0}} is a steric dimensionless coefficient, and V m is the molar volume . [ 1 ] [ better source needed ]
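A minimal numerical sketch of the formula follows; the enthalpy difference, molar volume, and steric coefficient used below are placeholder values chosen only to illustrate the unit handling, not data for any particular substance.

```python
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def surface_energy(delta_H, V_m, gamma_0=1.0):
    """sigma = gamma_0 * delta_H / (N_A**(1/3) * V_m**(2/3)).

    delta_H : enthalpy difference at the interface, J/mol
    V_m     : molar volume, m^3/mol
    gamma_0 : dimensionless steric coefficient
    Returns the specific surface energy in J/m^2.
    """
    return gamma_0 * delta_H / (N_A ** (1 / 3) * V_m ** (2 / 3))

# Placeholder inputs: delta_H = 40 kJ/mol, V_m = 18e-6 m^3/mol, gamma_0 = 1
print(f"{surface_energy(4.0e4, 18e-6):.3f} J/m^2")
```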
https://en.wikipedia.org/wiki/Stefan's_formula
Stefan Bringezu is a German environmental scientist. He has conducted pioneering research in the field of material flow analysis [ 1 ] [ 2 ] and derived policy-relevant indicators of resource use, [ 3 ] which contributed to statistical standards in the EU , [ 4 ] OECD , [ 5 ] and UNEP , [ 6 ] and to environmental footprinting across scales. [ 7 ] [ 8 ] He was selected as an inaugural member of the International Panel for Sustainable Resource Management (now the International Resource Panel [ 9 ] ) and lead-coordinated three of its reports. [ 10 ] [ 11 ] [ 12 ] He was scientific director of the Center for Environmental Systems Research at Kassel University, Germany. [ 13 ]
https://en.wikipedia.org/wiki/Stefan_Bringezu
Stefan Karlsson (born Iceland, 31 May 1950) is a Professor of Molecular Medicine and Gene Therapy at the Lund Stem Cell Center, in the Department of Laboratory Medicine, Lund University , Sweden. [ 1 ] He is recognized for significant contributions to the fields of gene therapy and hematopoietic stem cell biology and in 2009 was awarded the Tobias Prize [ 2 ] by The Royal Swedish Academy of Sciences . Karlsson's field of research is the regulation of hematopoietic stem cells and development of stem cell expansion protocols and gene therapy . He performed postdoctoral studies with Professor Arthur W. Nienhuis (1983-1986) at the National Heart Lung and Blood Institute (NHLBI) at the National Institutes of Health (NIH), Bethesda Maryland, where he received a Fogarty International Research Collaboration Award. He served as Chief of the Molecular and Medical Genetics Section, Developmental and Metabolic Neurology Branch (DMNB), National Institute for Neurological Disorders and Stroke (NINDS) , NIH , 1988-1996. In 1995 he was recruited as a full professor to Lund University , Sweden, where he founded the Division of Molecular Medicine and Gene Therapy [ 3 ] and has been head of this division for twenty years. [ 4 ] He is also a founding and current member of the Lund Stem Cell Center, [ 5 ] since 2003, and director of the Hemato-Linné Strategic Research Environment [ 6 ] (2006-2016) funded through a 10-year Linnaeus grant from the Swedish Research Council . [ 7 ] Dr. Karlsson also holds a consultant physician position at Skåne University Hospital, [ 8 ] Sweden. Karlsson's early research focused on retroviral vector based gene correction of hematopoietic cells from monogenetic disorders, such as Gaucher’s disease [ 9 ] and hemoglobinopathies . [ 10 ] The results of these studies led to the first gene therapy clinical trial for the treatment of Gaucher’s disease (1995). [ 11 ] He has also developed lentiviral vectors for gene correction of hematopoietic stem cells, [ 12 ] and more recently developed preclinical gene therapy models for Gaucher’s disease [ 13 ] and Diamond Blackfan anemia. [ 14 ] An equal component of his research has been in the field of hematopoietic stem cell biology, where Dr. Karlsson focused on studying the mechanisms of hematopoietic stem cell expansion and maintenance with major contributions to understanding the role of Tgf-beta [ 15 ] and more recently Cripto. [ 16 ]
https://en.wikipedia.org/wiki/Stefan_Karlsson_(professor)
Stefan Lorenz Sorgner is a German metahumanist philosopher, [ 1 ] [ 2 ] a Nietzsche scholar, [ 3 ] [ 4 ] [ 5 ] a philosopher of music [ 6 ] [ 7 ] and an authority in the field of ethics of emerging technologies . [ 8 ] [ 9 ] [ 10 ] Sorgner was born on 15 October 1973 in Wetzlar (Germany). He studied philosophy at King's College London (BA), the University of Durham (MA by thesis; examiners: David E. Cooper , Durham; David Owen, Southampton), the Justus-Liebig-Universität Gießen and the Friedrich-Schiller-Universität Jena (Dr. phil.; Examiners: Wolfgang Welsch , Jena; Gianni Vattimo , Turin). [ 11 ] He taught philosophy and ethics at the Universities of Giessen , Jena , Erfurt and Erlangen . [ 12 ] Currently, he teaches at a US liberal arts college, John Cabot University . [ 13 ] Sorgner is a member of several editorial and advisory boards. [ 12 ] In issue 20(1) of the Journal of Evolution and Technology , Sorgner's article "Nietzsche, the Overhuman, and Transhumanism" was published. In it, he shows that there are significant similarities between Nietzsche's concept of the overhuman and the concept of the posthuman according to the view of some transhumanists. [ 14 ] Sorgner is in explicit controversy with Nick Bostrom , who is keen to differentiate his type of transhumanism from Nietzsche's philosophy. [ 15 ] Sorgner's interpretation brought about a response both among Nietzsche scholars and among transhumanists. The editors of the Journal of Evolution and Technology dedicated a special issue to the question concerning the relationship between transhumanism , Nietzsche and European posthumanist philosophies ( posthumanism ). Vol. 21 Issue 1, January 2010 of the Journal of Evolution and Technology was entitled "Nietzsche and European Posthumanisms", [ 16 ] and it included other responses to Sorgner's article, for example by Max More [ 17 ] and Michael Hauskeller . [ 18 ] Due to the intense debate, the editors of the journal decided to give Sorgner the chance to react to the articles. [ 19 ] In vol. 21 Issue 2 – October 2010, Sorgner replied to the various responses in his article "Beyond Humanism: Reflections on Trans- and Posthumanism". [ 20 ] Going back to Bostrom's criticism of Nietzsche in the reply to his critics, Sorgner also dealt with the views of Jürgen Habermas , who had also identified a similarity between Nietzsche and transhumanism, but for reasons opposite to Sorgner's and at odds with Bostrom's observations. Sorgner had argued that Nietzsche's philosophy could be shared by transhumanists due to its progressive aspect regarding man's freedom to self-overcome and pursue self-betterment. According to Habermas, who rejected all procedures of genetic enhancement, [ 21 ] transhumanism was unacceptable due to the danger that a new "Nietzschean-elite" could impose a "liberal eugenics", which would be essentially "fascist". Sorgner criticized Habermas, accusing him of being just "rhetorically gifted", and claimed that Habermas knew "exactly what he was doing – that an effective way to bring about negative reactions to human biotechnological procedures in the reader would be to identify those measures with procedures undertaken in Nazi Germany". [ 20 ] Sorgner also criticized what Habermas had said about the difference between education and genetic engineering. According to Habermas, genetic manipulation would be very different from education due to its irreversibility. 
[ 22 ] Sorgner disputed both that the outcomes of education could always be modified by children, and that genetic modifications were always irreversible, as demonstrated by developments, above all, in the field of epigenetics . [ 20 ] Sorgner also put forward some aspects of his own philosophical position, which was strongly influenced by his teacher Gianni Vattimo . [ 23 ] Sorgner accepts Vattimo's "weak thought" ( Italian : "pensiero debole"), but criticises Vattimo's understanding of the history of the "weakening of Being ". [ 24 ] [ 25 ] As an alternative, Sorgner suggests a this-worldly, naturalist and perspectivist interpretation of the world, a view that he explained in more detail in his 2010 monograph Menschenwürde nach Nietzsche: Die Geschichte eines Begriffs (Human dignity after Nietzsche: history of a concept). [ 26 ] Sorgner regards " nihilism ", as described by Nietzsche, "entirely a gain": [ 27 ] "This also means that the dominant concept of human dignity, from the perspective of perspectivism, has no higher status in terms of knowing the truth in correspondence to reality than the conceptions of Adolf Hitler or Pol Pot ". [ 28 ] After bioethicists and transhumanists discussed the relationship between Nietzsche and transhumanism, the debate was taken up by some leading Nietzsche scholars. Keith Ansell-Pearson , Paul Loeb and Babette Babich wrote responses in the journal The Agonist , which is published by the Nietzsche Circle (New York). [ 29 ] Sorgner's perspectivist "metahumanism" [ 30 ] and in particular his Menschenwürde nach Nietzsche were dealt with in a symposium organised by the "Nietzsche Forum Munich", which had been co-founded by Thomas Mann . [ 31 ] Leading German philosophers, e.g. Annemarie Pieper , responded to Sorgner's radical suggestions concerning the need to revise the prevalent conception of human dignity at this event. In May 2013, the weekly newspaper Die Zeit published an interview with Sorgner in which several of his suggestions concerning human dignity, emerging technologies and trans- and posthumanism were summarized. [ 32 ] In Autumn 2014, an essay collection entitled Umwertung der Menschenwürde ( Transvaluation of human dignity), edited by Beatrix Vogel, was published by Alber Verlag, in which leading international theologians, philosophers, and ethicists wrote critical replies to Sorgner's suggestions concerning the notion of "human dignity". [ 25 ] Sorgner has been an invited and keynote speaker at many important events and conferences, e.g. phil.cologne [ de ] , [ 33 ] TED , [ 34 ] and the World Humanities Forum, ICISTS-KAIST. [ 10 ] According to Rainer Zimmermann of the "Identity Foundation", a recently set-up German private think tank, Sorgner is "Germany's leading post- and transhumanist philosopher" ("Deutschlands führender post- und transhumanistischer Philosoph"). [ 35 ] In 2021, Sorgner published We Have Always Been Cyborgs , [ 36 ] in which the author argues that since one can define a "cyborg" as "a governed, a steered organism", [ 37 ] then "we have always been cyborgs". The kind of transhumanism proposed by Sorgner relies above all on what he calls "carbon-based transhuman technologies", that is, gene editing , genetic engineering and gene selection, which he refers to as mankind's "most important scientific invention". 
[ 38 ] Since, for him, gene modification is "structurally analogous to traditional parental education", [ 39 ] from an ethical point of view we should not use different moral criteria for "traditional" education and for genetic engineering, if the latter is aimed at achieving the greatest good for humanity. For the same reason, according to Sorgner, all ethical reservations advanced so far against moral enhancement disappear. [ 40 ]
https://en.wikipedia.org/wiki/Stefan_Lorenz_Sorgner
Stefan adhesion is the normal stress (force per unit area) acting between two discs when their separation is attempted. Stefan's law governs the flow of a viscous fluid between two solid parallel plates and thus the forces acting when the plates are brought together or separated. The force F {\displaystyle F} resulting at distance h {\displaystyle h} between two parallel circular disks of radius R {\displaystyle R} , immersed in a Newtonian fluid with viscosity η {\displaystyle \eta } , at time t {\displaystyle t} , depends on the rate of change of separation d h d t {\displaystyle {\frac {dh}{dt}}} : F = 3 π η R 4 2 h 3 d h d t {\displaystyle F={\frac {3\pi \eta R^{4}}{2h^{3}}}\,{\frac {dh}{dt}}} . Stefan adhesion is mentioned in conjunction with bioadhesion by mucus -secreting animals. Nevertheless, most such systems violate the assumptions of the equation. [ 1 ] In addition, these systems are much more complex when the fluid is non-Newtonian or inertial effects are relevant (high flow rate).
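A short numerical sketch of that relation (the viscosity, disk radius, gap, and separation rate below are hypothetical illustration values):

```python
import math

def stefan_force(eta, R, h, dh_dt):
    """Force between two parallel circular disks separating in a Newtonian fluid,
    F = 3*pi*eta*R**4 / (2*h**3) * dh/dt (the squeeze-film form quoted above).

    eta: viscosity (Pa*s), R: disk radius (m), h: separation (m), dh_dt: separation rate (m/s).
    """
    return 3.0 * math.pi * eta * R**4 / (2.0 * h**3) * dh_dt

# Hypothetical example: a honey-like fluid (10 Pa*s), 1 cm disks, 1 mm gap,
# separated at 1 mm/s; prints roughly 0.47 N.
print(f"{stefan_force(10.0, 0.01, 1e-3, 1e-3):.2f} N")
```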
https://en.wikipedia.org/wiki/Stefan_adhesion
The Stefan flow , occasionally called Stefan's flow , is a transport phenomenon concerning the movement of a chemical species by a flowing fluid (typically in the gas phase ) that is induced to flow by the production or removal of the species at an interface . Any process that adds the species of interest to or removes it from the flowing fluid may cause the Stefan flow, but the most common processes include evaporation , condensation , chemical reaction , sublimation , ablation , adsorption , absorption , and desorption . It was named after the Slovenian physicist, mathematician, and poet Josef Stefan for his early work on calculating evaporation rates. The Stefan flow is distinct from diffusion as described by Fick's law , but diffusion almost always also occurs in multi-species systems that are experiencing the Stefan flow. [ 1 ] In systems undergoing one of the species addition or removal processes mentioned previously, the addition or removal generates a mean flow in the flowing fluid as the fluid next to the interface is displaced by the production or removal of additional fluid by the processes occurring at the interface. The transport of the species by this mean flow is the Stefan flow. When concentration gradients of the species are also present, diffusion transports the species relative to the mean flow. The total transport rate of the species is then given by a summation of the Stefan flow and diffusive contributions. An example of the Stefan flow occurs when a droplet of liquid evaporates in air. In this case, the vapor /air mixture surrounding the droplet is the flowing fluid, and liquid/vapor boundary of the droplet is the interface. As heat is absorbed by the droplet from the environment , some of the liquid evaporates into vapor at the surface of the droplet, and flows away from the droplet as it is displaced by additional vapor evaporating from the droplet. This process causes the flowing medium to move away from the droplet at some mean speed that is dependent on the evaporation rate and other factors such as droplet size and composition. In addition to this mean flow, a concentration gradient must exist in the neighborhood of the droplet (assuming an isolated droplet) since the flowing medium is mostly air far from the droplet and mostly vapor near the droplet. This gradient causes Fickian diffusion that transports the vapor away from the droplet and the air towards it, with respect to the mean flow. Thus, in the frame of the droplet, the flow of vapor away from the droplet is faster than for the pure Stefan flow, since diffusion is working in the same direction as the mean flow. However, the flow of air away from the droplet is slower than the pure Stefan flow, since diffusion is working to transport air back towards the droplet against the Stefan flow. Such flow from evaporating droplets is important in understanding the combustion of liquid fuels such as diesel in internal combustion engines, and in the design of such engines. The Stefan flow from evaporating droplets and subliming ice particles also plays prominently in meteorology as it influences the formation and dispersion of clouds and precipitation.
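To make the combination of Stefan flow and diffusion described above concrete for the evaporation example (species A = vapour, species B = air, with no net flux of B through the interface), the standard one-dimensional film relation from transport-phenomena textbooks, quoted here for illustration rather than from this article, is N A = J A + x A ( N A + N B ) {\displaystyle N_{A}=J_{A}+x_{A}(N_{A}+N_{B})} with N B = 0 {\displaystyle N_{B}=0} ; combining this with Fick's law J A = − c D A B d x A / d z {\displaystyle J_{A}=-cD_{AB}\,dx_{A}/dz} gives N A = − c D A B 1 − x A d x A d z {\displaystyle N_{A}=-{\frac {cD_{AB}}{1-x_{A}}}\,{\frac {dx_{A}}{dz}}} , where the factor 1 / ( 1 − x A ) {\displaystyle 1/(1-x_{A})} is the enhancement of the purely diffusive flux contributed by the Stefan flow.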
https://en.wikipedia.org/wiki/Stefan_flow
In chemical engineering , a Stefan tube is a device that was devised by Josef Stefan in 1874. [ 1 ] It is often used for measuring diffusion coefficients . [ 1 ] [ 2 ] It comprises a vertical tube, over the top of which a gas flows and at the bottom of which is a pool of volatile liquid that is maintained in a constant-temperature bath. [ 1 ] [ 3 ] [ 4 ] The liquid in the pool evaporates , diffuses through the gas above it in the tube, and is carried away by the gas flow over the tube mouth at the top. [ 1 ] [ 3 ] One then measures the fall in the level of the liquid in the tube. [ 4 ] The tube conventionally has a narrow diameter, in order to suppress convection . [ 4 ] The way that a Stefan tube is modelled, mathematically, is very similar to how one can model the diffusion of perfume fragrance molecules from (say) a drop of perfume on skin or clothes, evaporating up through the air to a person's nose. There are some differences between the models. However, they turn out to have little effect on results at highly dilute vapour concentrations. [ 5 ] In the analysis of the system, various assumptions are made. The liquid, conventionally denoted A , is neither soluble in the gas in the tube, conventionally denoted B , nor reacts with it. [ 3 ] The decrease in volume of the liquid A and increase in volume of the gas B over time can be ignored for the purposes of solving the equations that describe the behaviour, and an assumption can be made that the instantaneous flux at any time is the steady state value. [ 4 ] [ 2 ] There are no radial or circumferential components to the concentration gradients , resulting from convection or turbulence caused by excessively vigorous flow at the upper mouth of the tube, and the diffusion can thus be treated as a simple one-dimensional flow in the vertical direction. [ 1 ] [ 6 ] The mole fraction of A at the upper mouth of the tube is zero, as a consequence of the gas flow. [ 2 ] At the interface between A and B the flux of B is zero (because it is insoluble in A ) and the mole fraction is the equilibrium value. [ 6 ] [ 4 ] The flux of B , denoted N B , is thus zero throughout the tube, [ 4 ] its diffusive flux downward (along its concentration gradient) is balanced by its convective flux upward caused by A . [ 3 ] [ 6 ] Applying these assumptions, the system can be modelled using Fick's laws of diffusion [ 1 ] or as Maxwell–Stefan diffusion . [ 6 ]
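A minimal numerical sketch of the model those assumptions lead to is given below. It uses the standard quasi-steady solution for diffusion of A through stagnant B, with the total concentration from the ideal-gas law; the diffusion coefficient, tube length, interface mole fraction, temperature, and pressure are hypothetical illustration values, not data from the text.

```python
import math

R_GAS = 8.314  # J/(mol*K)

def stefan_tube_flux(D_AB, L, x_A_surface, T, P):
    """Quasi-steady molar flux of A up a Stefan tube (mol m^-2 s^-1).

    Diffusion of A through stagnant B over path length L, with mole fraction
    x_A_surface at the liquid interface and ~0 at the tube mouth:
        N_A = (c * D_AB / L) * ln(1 / (1 - x_A_surface)),  where c = P / (R*T).
    """
    c = P / (R_GAS * T)  # total molar concentration, mol/m^3
    return c * D_AB / L * math.log(1.0 / (1.0 - x_A_surface))

# Hypothetical example: D_AB = 2.6e-5 m^2/s, 10 cm diffusion path,
# interface mole fraction 0.1, at 298 K and 1 atm.
print(f"{stefan_tube_flux(2.6e-5, 0.10, 0.10, 298.0, 101325.0):.3e} mol m^-2 s^-1")
```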
https://en.wikipedia.org/wiki/Stefan_tube
Stefanie Barz is a German physicist and Professor of Quantum Information and Technology at the University of Stuttgart working in the field of photonic quantum technology . [ 1 ] Barz studied at the Johannes Gutenberg University Mainz , with a stay at the KTH Royal Institute of Technology within the Erasmus Programme . [ 2 ] She earned her PhD from the University of Vienna under the supervision of Anton Zeilinger , working on various aspects of photonic quantum computing , including 'blind' quantum computing using entangled photons, where the sender knows the initial state of entanglement while companies in control of data processing do not, making it impossible for them to decode the information without destroying it. [ 3 ] [ 4 ] [ 5 ] [ 6 ] For her dissertation she was awarded the Laudimaxima Prize of the University of Vienna, [ 2 ] [ 7 ] and her work has been featured in New Scientist and covered by the BBC and NBC . [ 5 ] [ 8 ] In 2013 Barz was awarded the Maria Schaumayer Prize [ 9 ] and the Loschmidt Prize. [ 10 ] From 2014 to 2017, Barz was a Marie Skłodowska-Curie Fellow as well as a Millard and Lee Alexander Fellow at Christ Church College at the University of Oxford. She worked with Ian Walmsley on three-photon interference, [ 11 ] [ 12 ] [ 13 ] during which she created integrated photon sources, fibre components and waveguide circuits. [ 14 ] Barz was appointed full professor at the University of Stuttgart in 2017. [ 15 ] The research focus of her group is on the study of single photons and quantum states of light, covering both fundamental quantum physics and aspects of integrated photonic technology, with the ultimate goal of applying fundamental advances in quantum computing, quantum communication, and quantum networking. [ 10 ] [ 16 ] [ 17 ] In 2018 she was awarded a multi-million-euro grant to work on quantum technologies using silicon-based photonics, [ 18 ] and since 2022 she has led the PhotonQ project, [ 19 ] which aims to realize a photonic quantum processor. Since 2022, Barz has been Director of the Center for Integrated Quantum Science and Technology (IQST), the joint quantum centre of the Universities of Stuttgart and Ulm and the Max Planck Institute for Solid State Research . [ 20 ] She also serves on the Strategic Advisory Board of QuantERA, a network of quantum technology researchers, [ 21 ] on the executive board of the CZS Center QPhoton, [ 22 ] and on the advisory board of QuantumBW, an innovation initiative bundling quantum technology expertise in Baden-Württemberg . [ 23 ]
https://en.wikipedia.org/wiki/Stefanie_Barz
The Stefan–Boltzmann law , also known as Stefan's law , describes the intensity of the thermal radiation emitted by matter in terms of that matter's temperature . It is named for Josef Stefan , who empirically derived the relationship, and Ludwig Boltzmann who derived the law theoretically. For an ideal absorber/emitter or black body , the Stefan–Boltzmann law states that the total energy radiated per unit surface area per unit time (also known as the radiant exitance ) is directly proportional to the fourth power of the black body's temperature, T : M ∘ = σ T 4 . {\displaystyle M^{\circ }=\sigma \,T^{4}.} The constant of proportionality , σ {\displaystyle \sigma } , is called the Stefan–Boltzmann constant . It has the value In the general case, the Stefan–Boltzmann law for radiant exitance takes the form: M = ε M ∘ = ε σ T 4 , {\displaystyle M=\varepsilon \,M^{\circ }=\varepsilon \,\sigma \,T^{4},} where ε {\displaystyle \varepsilon } is the emissivity of the surface emitting the radiation. The emissivity is generally between zero and one. An emissivity of one corresponds to a black body. The radiant exitance (previously called radiant emittance ), M {\displaystyle M} , has dimensions of energy flux (energy per unit time per unit area), and the SI units of measure are joules per second per square metre (J⋅s −1 ⋅m −2 ), or equivalently, watts per square metre (W⋅m −2 ). [ 2 ] The SI unit for absolute temperature , T , is the kelvin (K). To find the total power , P {\displaystyle P} , radiated from an object, multiply the radiant exitance by the object's surface area, A {\displaystyle A} : P = A ⋅ M = A ε σ T 4 . {\displaystyle P=A\cdot M=A\,\varepsilon \,\sigma \,T^{4}.} Matter that does not absorb all incident radiation emits less total energy than a black body. Emissions are reduced by a factor ε {\displaystyle \varepsilon } , where the emissivity , ε {\displaystyle \varepsilon } , is a material property which, for most matter, satisfies 0 ≤ ε ≤ 1 {\displaystyle 0\leq \varepsilon \leq 1} . Emissivity can in general depend on wavelength , direction, and polarization . However, the emissivity which appears in the non-directional form of the Stefan–Boltzmann law is the hemispherical total emissivity , which reflects emissions as totaled over all wavelengths, directions, and polarizations. [ 3 ] : 60 The form of the Stefan–Boltzmann law that includes emissivity is applicable to all matter, provided that matter is in a state of local thermodynamic equilibrium (LTE) so that its temperature is well-defined. [ 3 ] : 66n, 541 (This is a trivial conclusion, since the emissivity, ε {\displaystyle \varepsilon } , is defined to be the quantity that makes this equation valid. What is non-trivial is the proposition that ε ≤ 1 {\displaystyle \varepsilon \leq 1} , which is a consequence of Kirchhoff's law of thermal radiation . [ 4 ] : 385 ) A so-called grey body is a body for which the spectral emissivity is independent of wavelength, so that the total emissivity, ε {\displaystyle \varepsilon } , is a constant. [ 3 ] : 71 In the more general (and realistic) case, the spectral emissivity depends on wavelength. The total emissivity, as applicable to the Stefan–Boltzmann law, may be calculated as a weighted average of the spectral emissivity, with the blackbody emission spectrum serving as the weighting function . It follows that if the spectral emissivity depends on wavelength then the total emissivity depends on the temperature, i.e., ε = ε ( T ) {\displaystyle \varepsilon =\varepsilon (T)} . 
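As a quick numerical illustration of the law in its general form (the surface area, emissivity, and temperature below are arbitrary example inputs; the constant values are the standard SI ones):

```python
import math

# Stefan-Boltzmann constant from the defining constants (2019 SI exact values)
k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s
sigma = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)  # ~5.670e-8 W m^-2 K^-4

def radiant_exitance(T, emissivity=1.0):
    """M = eps * sigma * T**4, in W/m^2."""
    return emissivity * sigma * T**4

def radiated_power(area, T, emissivity=1.0):
    """P = A * eps * sigma * T**4, in W."""
    return area * radiant_exitance(T, emissivity)

# Example: a 1 m^2 grey surface (emissivity 0.9) at 300 K
print(f"sigma = {sigma:.4e} W m^-2 K^-4")
print(f"M = {radiant_exitance(300, 0.9):.1f} W/m^2")
print(f"P = {radiated_power(1.0, 300, 0.9):.1f} W")
```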
[ 3 ] : 60 However, if the dependence on wavelength is small, then the dependence on temperature will be small as well. Wavelength- and subwavelength-scale particles, [ 5 ] metamaterials , [ 6 ] and other nanostructures [ 7 ] are not subject to ray-optical limits and may be designed to have an emissivity greater than 1. In national and international standards documents, the symbol M {\displaystyle M} is recommended to denote radiant exitance ; a superscript circle (°) indicates a term relating to a black body. [ 2 ] (A subscript "e" is added when it is important to distinguish the energetic ( radiometric ) quantity radiant exitance , M e {\displaystyle M_{\mathrm {e} }} , from the analogous human vision ( photometric ) quantity, luminous exitance , denoted M v {\displaystyle M_{\mathrm {v} }} . [ 8 ] ) In common usage, the symbol used for radiant exitance (often called radiant emittance ) varies among different texts and in different fields. The Stefan–Boltzmann law may be expressed as a formula for radiance as a function of temperature. Radiance is measured in watts per square metre per steradian (W⋅m −2 ⋅sr −1 ). The Stefan–Boltzmann law for the radiance of a black body is: [ 9 ] : 26 [ 10 ] L Ω ∘ = M ∘ π = σ π T 4 . {\displaystyle L_{\Omega }^{\circ }={\frac {M^{\circ }}{\pi }}={\frac {\sigma }{\pi }}\,T^{4}.} The Stefan–Boltzmann law expressed as a formula for radiation energy density is: [ 11 ] w e ∘ = 4 c M ∘ = 4 c σ T 4 , {\displaystyle w_{\mathrm {e} }^{\circ }={\frac {4}{c}}\,M^{\circ }={\frac {4}{c}}\,\sigma \,T^{4},} where c {\displaystyle c} is the speed of light. In 1864, John Tyndall presented measurements of the infrared emission by a platinum filament and the corresponding color of the filament. [ 12 ] [ 13 ] [ 14 ] [ 15 ] The proportionality to the fourth power of the absolute temperature was deduced by Josef Stefan (1835–1893) in 1877 on the basis of Tyndall's experimental measurements, in the article Über die Beziehung zwischen der Wärmestrahlung und der Temperatur ( On the relationship between thermal radiation and temperature ) in the Bulletins from the sessions of the Vienna Academy of Sciences. [ 16 ] A derivation of the law from theoretical considerations was presented by Ludwig Boltzmann (1844–1906) in 1884, drawing upon the work of Adolfo Bartoli . [ 17 ] Bartoli in 1876 had derived the existence of radiation pressure from the principles of thermodynamics . Following Bartoli, Boltzmann considered an ideal heat engine using electromagnetic radiation instead of an ideal gas as working matter. The law was almost immediately experimentally verified. Heinrich Weber in 1888 pointed out deviations at higher temperatures, but perfect accuracy within measurement uncertainties was confirmed up to temperatures of 1535 K by 1897. [ 18 ] The law, including the theoretical prediction of the Stefan–Boltzmann constant as a function of the speed of light , the Boltzmann constant and the Planck constant , is a direct consequence of Planck's law as formulated in 1900. The Stefan–Boltzmann constant, σ , is derived from other known physical constants : σ = 2 π 5 k 4 15 c 2 h 3 {\displaystyle \sigma ={\frac {2\pi ^{5}k^{4}}{15c^{2}h^{3}}}} where k is the Boltzmann constant , h is the Planck constant , and c is the speed of light in vacuum . 
[ 19 ] [ 4 ] : 388 As of the 2019 revision of the SI , which establishes exact fixed values for k , h , and c , the Stefan–Boltzmann constant is exactly: σ = [ 2 π 5 ( 1.380 649 × 10 − 23 ) 4 15 ( 2.997 924 58 × 10 8 ) 2 ( 6.626 070 15 × 10 − 34 ) 3 ] W m 2 ⋅ K 4 {\displaystyle \sigma =\left[{\frac {2\pi ^{5}\left(1.380\ 649\times 10^{-23}\right)^{4}}{15\left(2.997\ 924\ 58\times 10^{8}\right)^{2}\left(6.626\ 070\ 15\times 10^{-34}\right)^{3}}}\right]\,{\frac {\mathrm {W} }{\mathrm {m} ^{2}{\cdot }\mathrm {K} ^{4}}}} Thus, σ ≈ 5.670 374 419 × 10 −8 W⋅m −2 ⋅K −4 . Prior to this, the value of σ {\displaystyle \sigma } was calculated from the measured value of the gas constant . [ 20 ] The numerical value of the Stefan–Boltzmann constant is different in other systems of units. With his law, Stefan also determined the temperature of the Sun 's surface. [ 22 ] He inferred from the data of Jacques-Louis Soret (1827–1890) [ 23 ] that the energy flux density from the Sun is 29 times greater than the energy flux density of a certain warmed metal lamella (a thin plate). A round lamella was placed at such a distance from the measuring device that it would be seen at the same angular diameter as the Sun. Soret estimated the temperature of the lamella to be approximately 1900 °C to 2000 °C. Stefan surmised that 1/3 of the energy flux from the Sun is absorbed by the Earth's atmosphere , so he took for the correct Sun's energy flux a value 3/2 times greater than Soret's value, namely 29 × 3/2 = 43.5. Precise measurements of atmospheric absorption were not made until 1888 and 1904. The temperature Stefan took for the lamella was the median of the previous estimates, 1950 °C, or 2200 K on the absolute thermodynamic scale. As 2.57⁴ ≈ 43.5, it follows from the law that the temperature of the Sun is 2.57 times greater than the temperature of the lamella, so Stefan got a value of 5430 °C or 5700 K. This was the first sensible value for the temperature of the Sun. Before this, values ranging from as low as 1800 °C to as high as 13 000 000 °C [ 24 ] were claimed. The lower value of 1800 °C was determined by Claude Pouillet (1790–1868) in 1838 using the Dulong–Petit law . [ 25 ] [ 26 ] Pouillet also took just half the value of the Sun's correct energy flux. The temperature of stars other than the Sun can be approximated using a similar means by treating the emitted energy as black body radiation. [ 27 ] So: L = 4 π R 2 σ T 4 {\displaystyle L=4\pi R^{2}\sigma T^{4}} where L is the luminosity , σ is the Stefan–Boltzmann constant, R is the stellar radius and T is the effective temperature . This formula can then be rearranged to calculate the temperature: T = L 4 π R 2 σ 4 {\displaystyle T={\sqrt[{4}]{\frac {L}{4\pi R^{2}\sigma }}}} or alternatively the radius: R = L 4 π σ T 4 {\displaystyle R={\sqrt {\frac {L}{4\pi \sigma T^{4}}}}} The same formulae can also be simplified to compute the parameters relative to the Sun: L L ⊙ = ( R R ⊙ ) 2 ( T T ⊙ ) 4 T T ⊙ = ( L L ⊙ ) 1 / 4 ( R ⊙ R ) 1 / 2 R R ⊙ = ( T ⊙ T ) 2 ( L L ⊙ ) 1 / 2 {\displaystyle {\begin{aligned}{\frac {L}{L_{\odot }}}&=\left({\frac {R}{R_{\odot }}}\right)^{2}\left({\frac {T}{T_{\odot }}}\right)^{4}\\[1ex]{\frac {T}{T_{\odot }}}&=\left({\frac {L}{L_{\odot }}}\right)^{1/4}\left({\frac {R_{\odot }}{R}}\right)^{1/2}\\[1ex]{\frac {R}{R_{\odot }}}&=\left({\frac {T_{\odot }}{T}}\right)^{2}\left({\frac {L}{L_{\odot }}}\right)^{1/2}\end{aligned}}} where R ⊙ {\displaystyle R_{\odot }} is the solar radius , and so forth. 
They can also be rewritten in terms of the surface area A and radiant exitance M ∘ {\displaystyle M^{\circ }} : L = A M ∘ M ∘ = L A A = L M ∘ {\displaystyle {\begin{aligned}L&=AM^{\circ }\\[1ex]M^{\circ }&={\frac {L}{A}}\\[1ex]A&={\frac {L}{M^{\circ }}}\end{aligned}}} where A = 4 π R 2 {\displaystyle A=4\pi R^{2}} and M ∘ = σ T 4 . {\displaystyle M^{\circ }=\sigma T^{4}.} With the Stefan–Boltzmann law, astronomers can easily infer the radii of stars. The law is also met in the thermodynamics of black holes in so-called Hawking radiation . Similarly we can calculate the effective temperature of the Earth T ⊕ by equating the energy received from the Sun and the energy radiated by the Earth, under the black-body approximation (Earth's own production of energy being small enough to be negligible). The luminosity of the Sun, L ⊙ , is given by: L ⊙ = 4 π R ⊙ 2 σ T ⊙ 4 {\displaystyle L_{\odot }=4\pi R_{\odot }^{2}\sigma T_{\odot }^{4}} At Earth, this energy is passing through a sphere with a radius of a 0 , the distance between the Earth and the Sun, and the irradiance (received power per unit area) is given by E ⊕ = L ⊙ 4 π a 0 2 {\displaystyle E_{\oplus }={\frac {L_{\odot }}{4\pi a_{0}^{2}}}} The Earth has a radius of R ⊕ , and therefore has a cross-section of π R ⊕ 2 {\displaystyle \pi R_{\oplus }^{2}} . The radiant flux (i.e. solar power) absorbed by the Earth is thus given by: Φ abs = π R ⊕ 2 × E ⊕ {\displaystyle \Phi _{\text{abs}}=\pi R_{\oplus }^{2}\times E_{\oplus }} Because the Stefan–Boltzmann law uses a fourth power, it has a stabilizing effect on the exchange and the flux emitted by Earth tends to be equal to the flux absorbed, close to the steady state where: 4 π R ⊕ 2 σ T ⊕ 4 = π R ⊕ 2 × E ⊕ = π R ⊕ 2 × 4 π R ⊙ 2 σ T ⊙ 4 4 π a 0 2 {\displaystyle {\begin{aligned}4\pi R_{\oplus }^{2}\sigma T_{\oplus }^{4}&=\pi R_{\oplus }^{2}\times E_{\oplus }\\&=\pi R_{\oplus }^{2}\times {\frac {4\pi R_{\odot }^{2}\sigma T_{\odot }^{4}}{4\pi a_{0}^{2}}}\\\end{aligned}}} T ⊕ can then be found: T ⊕ 4 = R ⊙ 2 T ⊙ 4 4 a 0 2 T ⊕ = T ⊙ × R ⊙ 2 a 0 = 5780 K × 6.957 × 10 8 m 2 × 1.495 978 707 × 10 11 m ≈ 279 K {\displaystyle {\begin{aligned}T_{\oplus }^{4}&={\frac {R_{\odot }^{2}T_{\odot }^{4}}{4a_{0}^{2}}}\\T_{\oplus }&=T_{\odot }\times {\sqrt {\frac {R_{\odot }}{2a_{0}}}}\\&=5780\;{\rm {K}}\times {\sqrt {6.957\times 10^{8}\;{\rm {m}} \over 2\times 1.495\ 978\ 707\times 10^{11}\;{\rm {m}}}}\\&\approx 279\;{\rm {K}}\end{aligned}}} where T ⊙ is the temperature of the Sun, R ⊙ the radius of the Sun, and a 0 is the distance between the Earth and the Sun. This gives an effective temperature of 6 °C on the surface of the Earth, assuming that it perfectly absorbs all emission falling on it and has no atmosphere. The Earth has an albedo of 0.3, meaning that 30% of the solar radiation that hits the planet gets scattered back into space without absorption. The effect of albedo on temperature can be approximated by assuming that the energy absorbed is multiplied by 0.7, but that the planet still radiates as a black body (the latter by definition of effective temperature , which is what we are calculating). This approximation reduces the temperature by a factor of 0.7 1/4 , giving 255 K (−18 °C; −1 °F). [ 28 ] [ 29 ] The above temperature is Earth's as seen from space, not ground temperature but an average over all emitting bodies of Earth from surface to high altitude. 
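A short numerical check of the estimate just derived (a sketch using the solar temperature, solar radius, and Earth–Sun distance quoted above, together with the 0.3 albedo):

```python
T_sun = 5780.0       # K, solar surface temperature used above
R_sun = 6.957e8      # m, solar radius
a0 = 1.495978707e11  # m, Earth-Sun distance

def earth_effective_temperature(albedo=0.0):
    """Black-body effective temperature of Earth, T = T_sun * sqrt(R_sun / (2*a0)),
    reduced by (1 - albedo)**0.25 when a fraction of the sunlight is reflected."""
    T_no_albedo = T_sun * (R_sun / (2.0 * a0)) ** 0.5
    return T_no_albedo * (1.0 - albedo) ** 0.25

print(f"{earth_effective_temperature():.0f} K")     # ~279 K, fully absorbing Earth
print(f"{earth_effective_temperature(0.3):.0f} K")  # ~255 K, with albedo 0.3
```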
Because of the greenhouse effect , the Earth's actual average surface temperature is about 288 K (15 °C; 59 °F), which is higher than the 255 K (−18 °C; −1 °F) effective temperature, and even higher than the 279 K (6 °C; 43 °F) temperature that a black body would have. In the above discussion, we have assumed that the whole surface of the earth is at one temperature. Another interesting question is to ask what the temperature of a blackbody surface on the earth would be assuming that it reaches equilibrium with the sunlight falling on it. This of course depends on the angle of the sun on the surface and on how much air the sunlight has gone through. When the sun is at the zenith and the surface is horizontal, the irradiance can be as high as 1120 W/m 2 . [ 30 ] The Stefan–Boltzmann law then gives a temperature of T = ( 1120 W/m 2 σ ) 1 / 4 ≈ 375 K {\displaystyle T=\left({\frac {1120{\text{ W/m}}^{2}}{\sigma }}\right)^{1/4}\approx 375{\text{ K}}} or 102 °C (216 °F). (Above the atmosphere, the result is even higher: 394 K (121 °C; 250 °F).) We can think of the earth's surface as "trying" to reach equilibrium temperature during the day, but being cooled by the atmosphere, and "trying" to reach equilibrium with starlight and possibly moonlight at night, but being warmed by the atmosphere. The fact that the energy density of the box containing radiation is proportional to T 4 {\displaystyle T^{4}} can be derived using thermodynamics. [ 31 ] [ 15 ] This derivation uses the relation between the radiation pressure p and the internal energy density u {\displaystyle u} , a relation that can be shown using the form of the electromagnetic stress–energy tensor . This relation is: p = u 3 . {\displaystyle p={\frac {u}{3}}.} Now, from the fundamental thermodynamic relation d U = T d S − p d V , {\displaystyle dU=T\,dS-p\,dV,} we obtain the following expression, after dividing by d V {\displaystyle dV} and fixing T {\displaystyle T} : ( ∂ U ∂ V ) T = T ( ∂ S ∂ V ) T − p = T ( ∂ p ∂ T ) V − p . {\displaystyle \left({\frac {\partial U}{\partial V}}\right)_{T}=T\left({\frac {\partial S}{\partial V}}\right)_{T}-p=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p.} The last equality comes from the following Maxwell relation : ( ∂ S ∂ V ) T = ( ∂ p ∂ T ) V . {\displaystyle \left({\frac {\partial S}{\partial V}}\right)_{T}=\left({\frac {\partial p}{\partial T}}\right)_{V}.} From the definition of energy density it follows that U = u V {\displaystyle U=uV} where the energy density of radiation only depends on the temperature, therefore ( ∂ U ∂ V ) T = u ( ∂ V ∂ V ) T = u . {\displaystyle \left({\frac {\partial U}{\partial V}}\right)_{T}=u\left({\frac {\partial V}{\partial V}}\right)_{T}=u.} Now, the equality is u = T ( ∂ p ∂ T ) V − p , {\displaystyle u=T\left({\frac {\partial p}{\partial T}}\right)_{V}-p,} after substitution of ( ∂ U ∂ V ) T . {\displaystyle \left({\frac {\partial U}{\partial V}}\right)_{T}.} Meanwhile, the pressure is the rate of momentum change per unit area. Since the momentum of a photon is the same as the energy divided by the speed of light, u = T 3 ( ∂ u ∂ T ) V − u 3 , {\displaystyle u={\frac {T}{3}}\left({\frac {\partial u}{\partial T}}\right)_{V}-{\frac {u}{3}},} where the factor 1/3 comes from the projection of the momentum transfer onto the normal to the wall of the container. 
Since the partial derivative ( ∂ u ∂ T ) V {\displaystyle \left({\frac {\partial u}{\partial T}}\right)_{V}} can be expressed as a relationship between only u {\displaystyle u} and T {\displaystyle T} (if one isolates it on one side of the equality), the partial derivative can be replaced by the ordinary derivative. After separating the differentials the equality becomes d u 4 u = d T T , {\displaystyle {\frac {du}{4u}}={\frac {dT}{T}},} which leads immediately to u = A T 4 {\displaystyle u=AT^{4}} , with A {\displaystyle A} as some constant of integration. The law can be derived by considering a small flat black body surface radiating out into a half-sphere. This derivation uses spherical coordinates , with θ as the zenith angle and φ as the azimuthal angle; and the small flat blackbody surface lies on the xy-plane, where θ = π / 2 . The intensity of the light emitted from the blackbody surface is given by Planck's law , I ( ν , T ) = 2 h ν 3 c 2 1 e h ν / ( k T ) − 1 , {\displaystyle I(\nu ,T)={\frac {2h\nu ^{3}}{c^{2}}}{\frac {1}{e^{h\nu /(kT)}-1}},} where The quantity I ( ν , T ) A cos ⁡ θ d ν d Ω {\displaystyle I(\nu ,T)~A\cos \theta ~d\nu ~d\Omega } is the power radiated by a surface of area A through a solid angle d Ω in the frequency range between ν and ν + dν . The Stefan–Boltzmann law gives the power emitted per unit area of the emitting body, P A = ∫ 0 ∞ I ( ν , T ) d ν ∫ cos ⁡ θ d Ω {\displaystyle {\frac {P}{A}}=\int _{0}^{\infty }I(\nu ,T)\,d\nu \int \cos \theta \,d\Omega } Note that the cosine appears because black bodies are Lambertian (i.e. they obey Lambert's cosine law ), meaning that the intensity observed along the sphere will be the actual intensity times the cosine of the zenith angle. To derive the Stefan–Boltzmann law, we must integrate d Ω = sin ⁡ θ d θ d φ {\textstyle d\Omega =\sin \theta \,d\theta \,d\varphi } over the half-sphere and integrate ν {\displaystyle \nu } from 0 to ∞. P A = ∫ 0 ∞ I ( ν , T ) d ν ∫ 0 2 π d φ ∫ 0 π / 2 cos ⁡ θ sin ⁡ θ d θ = π ∫ 0 ∞ I ( ν , T ) d ν {\displaystyle {\begin{aligned}{\frac {P}{A}}&=\int _{0}^{\infty }I(\nu ,T)\,d\nu \int _{0}^{2\pi }\,d\varphi \int _{0}^{\pi /2}\cos \theta \sin \theta \,d\theta \\&=\pi \int _{0}^{\infty }I(\nu ,T)\,d\nu \end{aligned}}} Then we plug in for I : P A = 2 π h c 2 ∫ 0 ∞ ν 3 e h ν k T − 1 d ν {\displaystyle {\frac {P}{A}}={\frac {2\pi h}{c^{2}}}\int _{0}^{\infty }{\frac {\nu ^{3}}{e^{\frac {h\nu }{kT}}-1}}\,d\nu } To evaluate this integral, do a substitution, u = h ν k T d u = h k T d ν {\displaystyle {\begin{aligned}u&={\frac {h\nu }{kT}}\\[6pt]du&={\frac {h}{kT}}\,d\nu \end{aligned}}} which gives: P A = 2 π h c 2 ( k T h ) 4 ∫ 0 ∞ u 3 e u − 1 d u . {\displaystyle {\frac {P}{A}}={\frac {2\pi h}{c^{2}}}\left({\frac {kT}{h}}\right)^{4}\int _{0}^{\infty }{\frac {u^{3}}{e^{u}-1}}\,du.} The integral on the right is standard and goes by many names: it is a particular case of a Bose–Einstein integral , the polylogarithm , or the Riemann zeta function ζ ( s ) {\displaystyle \zeta (s)} . The value of the integral is Γ ( 4 ) ζ ( 4 ) = π 4 15 {\displaystyle \Gamma (4)\zeta (4)={\frac {\pi ^{4}}{15}}} (where Γ ( s ) {\displaystyle \Gamma (s)} is the Gamma function ), giving the result that, for a perfect blackbody surface: M ∘ = σ T 4 , σ = 2 π 5 k 4 15 c 2 h 3 = π 2 k 4 60 ℏ 3 c 2 . {\displaystyle M^{\circ }=\sigma T^{4}~,~~\sigma ={\frac {2\pi ^{5}k^{4}}{15c^{2}h^{3}}}={\frac {\pi ^{2}k^{4}}{60\hbar ^{3}c^{2}}}.} Finally, this proof started out only considering a small flat surface. 
However, any differentiable surface can be approximated by a collection of small flat surfaces. So long as the geometry of the surface does not cause the blackbody to reabsorb its own radiation, the total energy radiated is just the sum of the energies radiated by each surface; and the total surface area is just the sum of the areas of each surface—so this law holds for all convex blackbodies, too, so long as the surface has the same temperature throughout. The law extends to radiation from non-convex bodies by using the fact that the convex hull of a black body radiates as though it were itself a black body. The total energy density U can be similarly calculated, except the integration is over the whole sphere and there is no cosine, and the energy flux (U c) should be divided by the velocity c to give the energy density U : U = 1 c ∫ 0 ∞ I ( ν , T ) d ν ∫ d Ω {\displaystyle U={\frac {1}{c}}\int _{0}^{\infty }I(\nu ,T)\,d\nu \int \,d\Omega } Thus ∫ 0 π / 2 cos ⁡ θ sin ⁡ θ d θ {\textstyle \int _{0}^{\pi /2}\cos \theta \sin \theta \,d\theta } is replaced by ∫ 0 π sin ⁡ θ d θ {\textstyle \int _{0}^{\pi }\sin \theta \,d\theta } , giving an extra factor of 4. Thus, in total: U = 4 c σ T 4 {\displaystyle U={\frac {4}{c}}\,\sigma \,T^{4}} The product 4 c σ {\displaystyle {\frac {4}{c}}\sigma } is sometimes known as the radiation constant or radiation density constant . [ 32 ] [ 33 ] The Stefan–Boltzmann law can be expressed as [ 34 ] M ∘ = σ T 4 = N p h o t ⟨ E p h o t ⟩ {\displaystyle M^{\circ }=\sigma \,T^{4}=N_{\mathrm {phot} }\,\langle E_{\mathrm {phot} }\rangle } where the flux of photons, N p h o t {\displaystyle N_{\mathrm {phot} }} , is given by N p h o t = π ∫ 0 ∞ B ν h ν d ν {\displaystyle N_{\mathrm {phot} }=\pi \int _{0}^{\infty }{\frac {B_{\nu }}{h\nu }}\,\mathrm {d} \nu } N p h o t = ( 1.5205 × 10 15 photons ⋅ s − 1 ⋅ m − 2 ⋅ K − 3 ) ⋅ T 3 {\displaystyle N_{\mathrm {phot} }=\left({1.5205\times 10^{15}}\;{\textrm {photons}}{\cdot }{\textrm {s}}^{-1}{\cdot }{\textrm {m}}^{-2}{\cdot }\mathrm {K} ^{-3}\right)\cdot T^{3}} and the average energy per photon, ⟨ E phot ⟩ {\displaystyle \langle E_{\textrm {phot}}\rangle } , is given by ⟨ E phot ⟩ = π 4 30 ζ ( 3 ) k T = ( 3.7294 × 10 − 23 J ⋅ K − 1 ) ⋅ T . {\displaystyle \langle E_{\textrm {phot}}\rangle ={\frac {\pi ^{4}}{30\,\zeta (3)}}k\,T=\left({3.7294\times 10^{-23}}\mathrm {J} {\cdot }\mathrm {K} ^{-1}\right)\cdot T\,.} Marr and Wilkin (2012) recommend that students be taught about ⟨ E phot ⟩ {\displaystyle \langle E_{\textrm {phot}}\rangle } instead of being taught Wien's displacement law , and that the above decomposition be taught when the Stefan–Boltzmann law is taught. [ 34 ]
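The numerical constants quoted in this section can be reproduced directly from the formulas above; the sketch below evaluates the Bose–Einstein integral from the derivation and the mean-photon-energy and photon-flux coefficients (standard values of the physical constants are assumed):

```python
import math
from scipy.integrate import quad
from scipy.special import zeta

k = 1.380649e-23    # Boltzmann constant, J/K
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

# Integral from the derivation: int_0^inf u^3/(e^u - 1) du = pi^4/15 ~ 6.4939
# (the upper limit 700 is effectively infinity for this integrand)
integral, _ = quad(lambda u: u**3 / math.expm1(u) if u > 0 else 0.0, 0, 700)
print(integral, math.pi**4 / 15)

sigma = 2 * math.pi**5 * k**4 / (15 * c**2 * h**3)

# Mean photon energy per kelvin: pi^4 / (30*zeta(3)) * k ~ 3.7294e-23 J/K
mean_E_per_K = math.pi**4 / (30 * zeta(3)) * k
print(mean_E_per_K)

# Photon flux coefficient: sigma / mean_E_per_K ~ 1.5205e15 photons s^-1 m^-2 K^-3
print(sigma / mean_E_per_K)
```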
https://en.wikipedia.org/wiki/Stefan–Boltzmann_constant
Steffensen's inequality is an integral inequality in real analysis , named after Johan Frederik Steffensen . [ 1 ]
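For reference, the standard textbook form of the inequality (stated here in its usual form rather than quoted from the stub above) is: if f {\displaystyle f} is non-increasing on [ a , b ] {\displaystyle [a,b]} , g {\displaystyle g} is integrable with 0 ≤ g ( t ) ≤ 1 {\displaystyle 0\leq g(t)\leq 1} for all t {\displaystyle t} , and λ = ∫ a b g ( t ) d t {\displaystyle \lambda =\int _{a}^{b}g(t)\,dt} , then ∫ b − λ b f ( t ) d t ≤ ∫ a b f ( t ) g ( t ) d t ≤ ∫ a a + λ f ( t ) d t {\displaystyle \int _{b-\lambda }^{b}f(t)\,dt\leq \int _{a}^{b}f(t)g(t)\,dt\leq \int _{a}^{a+\lambda }f(t)\,dt} .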
https://en.wikipedia.org/wiki/Steffensen's_inequality
The Steglich esterification is a variation of an esterification with dicyclohexylcarbodiimide as a coupling reagent and 4-dimethylaminopyridine as a catalyst . The reaction was first described by Wolfgang Steglich in 1978. [ 1 ] It is an adaptation of an older method for the formation of amides by means of DCC (dicyclohexylcarbodiimide) and 1-hydroxybenzotriazole (HOBT). [ 2 ] [ 3 ] This reaction generally takes place at room temperature . A variety of polar aprotic solvents can be used. [ 4 ] Because the reaction is mild, esters can be obtained that are inaccessible through other methods, for instance esters of the sensitive 2,4-dihydroxybenzoic acid. A characteristic is the formal uptake of the water generated in the reaction by DCC, which forms the urea compound dicyclohexylurea (DCU). In the reaction mechanism, DCC first activates the carboxylic acid to an O -acyl intermediate, which then reacts with the alcohol to give the ester. With amines , the reaction proceeds without problems to the corresponding amides because amines are more nucleophilic . If the esterification is slow, a side-reaction occurs, diminishing the final yield or complicating purification of the product. This side-reaction is a 1,3-rearrangement of the O -acyl intermediate to an N -acylurea, which is unable to further react with the alcohol. DMAP suppresses this side reaction by acting as an acyl transfer reagent.
https://en.wikipedia.org/wiki/Steglich_esterification
In decision theory and estimation theory , Stein's example (also known as Stein's phenomenon or Stein's paradox ) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error ) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University , who discovered the phenomenon in 1955. [ 1 ] An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let θ {\displaystyle {\boldsymbol {\theta }}} be a vector consisting of n ≥ 3 {\displaystyle n\geq 3} unknown parameters. To estimate these parameters, a single measurement X i {\displaystyle X_{i}} is performed for each parameter θ i {\displaystyle \theta _{i}} , resulting in a vector X {\displaystyle \mathbf {X} } of length n {\displaystyle n} . Suppose the measurements are known to be independent , Gaussian random variables , with mean θ {\displaystyle {\boldsymbol {\theta }}} and variance 1, i.e., X ∼ N ( θ , I n ) {\displaystyle \mathbf {X} \sim {\mathcal {N}}({\boldsymbol {\theta }},\mathbf {I} _{n})} . Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as θ ^ = X {\displaystyle {\hat {\boldsymbol {\theta }}}=\mathbf {X} } , which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function . A commonly used risk function is the mean squared error , defined as E [ ‖ θ − θ ^ ‖ 2 ] {\displaystyle \mathbb {E} [\|{\boldsymbol {\theta }}-{\hat {\boldsymbol {\theta }}}\|^{2}]} . Surprisingly, it turns out that the "ordinary" decision rule is suboptimal ( inadmissible ) in terms of mean squared error when n ≥ 3 {\displaystyle n\geq 3} . In other words, in the setting discussed here, there exist alternative estimators which always achieve lower mean squared error, no matter what the value of θ {\displaystyle {\boldsymbol {\theta }}} is. For a given θ {\displaystyle {\boldsymbol {\theta }}} one could obviously define a perfect "estimator" which is always just θ {\displaystyle {\boldsymbol {\theta }}} , but this estimator would be bad for other values of θ {\displaystyle {\boldsymbol {\theta }}} . The estimators of Stein's paradox are, for a given θ {\displaystyle {\boldsymbol {\theta }}} , better than the "ordinary" decision rule X {\displaystyle \mathbf {X} } for some X {\displaystyle \mathbf {X} } but necessarily worse for others. It is only on average that they are better. 
More accurately, an estimator θ ^ 1 {\displaystyle {\hat {\boldsymbol {\theta }}}_{1}} is said to dominate another estimator θ ^ 2 {\displaystyle {\hat {\boldsymbol {\theta }}}_{2}} if, for all values of θ {\displaystyle {\boldsymbol {\theta }}} , the risk of θ ^ 1 {\displaystyle {\hat {\boldsymbol {\theta }}}_{1}} is lower than, or equal to, the risk of θ ^ 2 {\displaystyle {\hat {\boldsymbol {\theta }}}_{2}} , and if the inequality is strict for some θ {\displaystyle {\boldsymbol {\theta }}} . An estimator is said to be admissible if no other estimator dominates it, otherwise it is inadmissible . Thus, Stein's example can be simply stated as follows: The "ordinary" decision rule of the mean of a multivariate Gaussian distribution is inadmissible under mean squared error risk. Many simple, practical estimators achieve better performance than the "ordinary" decision rule. The best-known example is the James–Stein estimator , which shrinks X {\displaystyle \mathbf {X} } towards a particular point (such as the origin) by an amount inversely proportional to the distance of X {\displaystyle \mathbf {X} } from that point. For a sketch of the proof of this result, see Proof of Stein's example . An alternative proof is due to Larry Brown : he proved that the ordinary estimator for an n {\displaystyle n} -dimensional multivariate normal mean vector is admissible if and only if the n {\displaystyle n} -dimensional Brownian motion is recurrent. [ 2 ] Since the Brownian motion is not recurrent for n ≥ 3 {\displaystyle n\geq 3} , the MLE is not admissible for n ≥ 3 {\displaystyle n\geq 3} . For any particular value of θ {\displaystyle {\boldsymbol {\theta }}} the new estimator will improve at least one of the individual mean square errors E [ ( θ i − θ ^ i ) 2 ] . {\displaystyle \mathbb {E} [(\theta _{i}-{\hat {\theta }}_{i})^{2}].} This is not hard − for instance, if θ {\displaystyle {\boldsymbol {\theta }}} is between −1 and 1, and σ = 1 {\displaystyle \sigma =1} , then an estimator that linearly shrinks X {\displaystyle \mathbf {X} } towards 0 by 0.5 (i.e., sign ⁡ ( X i ) max ( | X i | − 0.5 , 0 ) {\displaystyle \operatorname {sign} (X_{i})\max(|X_{i}|-0.5,0)} , soft thresholding with threshold 0.5 {\displaystyle 0.5} ) will have a lower mean square error than X {\displaystyle \mathbf {X} } itself. But there are other values of θ {\displaystyle {\boldsymbol {\theta }}} for which this estimator is worse than X {\displaystyle \mathbf {X} } itself. The trick of the Stein estimator, and others that yield the Stein paradox, is that they adjust the shift in such a way that there is always (for any θ {\displaystyle {\boldsymbol {\theta }}} vector) at least one X i {\displaystyle X_{i}} whose mean square error is improved, and its improvement more than compensates for any degradation in mean square error that might occur for another θ ^ i {\displaystyle {\hat {\theta }}_{i}} . The trouble is that, without knowing θ {\displaystyle {\boldsymbol {\theta }}} , you don't know which of the n {\displaystyle n} mean square errors are improved, so you can't use the Stein estimator only for those parameters. An example of the above setting occurs in channel estimation in telecommunications, for instance, because different factors affect overall channel performance. Stein's example is surprising, since the "ordinary" decision rule is intuitive and commonly used. 
In fact, numerous methods for estimator construction, including maximum likelihood estimation , best linear unbiased estimation , least squares estimation and optimal equivariant estimation , all result in the "ordinary" estimator. Yet, as discussed above, this estimator is suboptimal. To demonstrate the unintuitive nature of Stein's example, consider the following real-world example. Suppose we are to estimate three unrelated parameters, such as the US wheat yield for 1993, the number of spectators at the Wimbledon tennis tournament in 2001, and the weight of a randomly chosen candy bar from the supermarket. Suppose we have independent Gaussian measurements of each of these quantities. Stein's example now tells us that we can get a better estimate (on average) for the vector of three parameters by simultaneously using the three unrelated measurements. At first sight it appears that somehow we get a better estimator for US wheat yield by measuring some other unrelated statistics such as the number of spectators at Wimbledon and the weight of a candy bar. However, we have not obtained a better estimator for US wheat yield by itself, but we have produced an estimator for the vector of the means of all three random variables, which has a reduced total risk. This occurs because the cost of a bad estimate in one component of the vector is compensated by a better estimate in another component. Also, a specific set of the three estimated mean values obtained with the new estimator will not necessarily be better than the ordinary set (the measured values). It is only on average that the new estimator is better. The risk function of the decision rule d ( x ) = x {\displaystyle d(\mathbf {x} )=\mathbf {x} } is Now consider the decision rule where α = n − 2 {\displaystyle \alpha =n-2} . We will show that d ′ {\displaystyle d'} is a better decision rule than d {\displaystyle d} . The risk function is — a quadratic in α {\displaystyle \alpha } . We may simplify the middle term by considering a general "well-behaved" function h : x ↦ h ( x ) ∈ R {\displaystyle h:\mathbf {x} \mapsto h(\mathbf {x} )\in \mathbb {R} } and using integration by parts . For 1 ≤ i ≤ n {\displaystyle 1\leq i\leq n} , for any continuously differentiable h {\displaystyle h} growing sufficiently slowly for large x i {\displaystyle x_{i}} we have: Therefore, (This result is known as Stein's lemma .) Now, we choose If h {\displaystyle h} met the "well-behaved" condition (it doesn't, but this can be remedied—see below), we would have and so Then returning to the risk function of d ′ {\displaystyle d'} : This quadratic in α {\displaystyle \alpha } is minimized at α = n − 2 {\displaystyle \alpha =n-2} , giving which of course satisfies R ( θ , d ′ ) < R ( θ , d ) . {\displaystyle R(\theta ,d')<R(\theta ,d).} making d {\displaystyle d} an inadmissible decision rule. It remains to justify the use of This function is not continuously differentiable, since it is singular at x = 0 {\displaystyle \mathbf {x} =0} . However, the function is continuously differentiable, and after following the algebra through and letting ε → 0 {\displaystyle \varepsilon \to 0} , one obtains the same result.
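As a numerical sanity check of the derivation sketched above, the following snippet (illustrative; θ and the variable names are arbitrary) compares a direct simulation of the risk of d′(x) = (1 − (n − 2)/‖x‖²) x with the quantity n − (n − 2)² E[1/‖X‖²] that the quadratic in α yields at its minimum α = n − 2; the two estimates should agree and both fall below n.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.array([1.0, 0.0, -1.0, 2.0])   # arbitrary illustrative parameters, n = 4
n, trials = theta.size, 500_000

X = theta + rng.standard_normal((trials, n))
norm_sq = np.sum(X ** 2, axis=1)

# Direct simulation of the risk of d'(x) = (1 - (n - 2)/||x||^2) x.
d_prime = (1.0 - (n - 2) / norm_sq)[:, None] * X
risk_direct = np.mean(np.sum((d_prime - theta) ** 2, axis=1))

# Risk value obtained from the quadratic in alpha at alpha = n - 2:
# R(theta, d') = n - (n - 2)^2 * E[1 / ||X||^2].
risk_formula = n - (n - 2) ** 2 * np.mean(1.0 / norm_sq)

print(risk_direct, risk_formula)   # both below n = 4, and close to each other
```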
https://en.wikipedia.org/wiki/Stein's_example
Stein's lemma , named in honor of Charles Stein , is a theorem of probability theory that is of interest primarily because of its applications to statistical inference — in particular, to James–Stein estimation and empirical Bayes methods — and its applications to portfolio choice theory . [ 1 ] The theorem gives a formula for the covariance of one random variable with the value of a function of another, when the two random variables are jointly normally distributed . Note that the name "Stein's lemma" is also commonly used [ 2 ] to refer to a different result in the area of statistical hypothesis testing , which connects the error exponents in hypothesis testing with the Kullback–Leibler divergence . This result is also known as the Chernoff–Stein lemma [ 3 ] and is not related to the lemma discussed in this article. Suppose X is a normally distributed random variable with expectation μ and variance σ 2 . Further suppose g is a differentiable function for which the two expectations E ⁡ ( g ( X ) ( X − μ ) ) {\displaystyle \operatorname {E} (g(X)(X-\mu ))} and E ⁡ ( g ′ ( X ) ) {\displaystyle \operatorname {E} (g'(X))} both exist. (The existence of the expectation of any random variable is equivalent to the finiteness of the expectation of its absolute value .) Then In general, suppose X and Y are jointly normally distributed. Then For a general multivariate Gaussian random vector ( X 1 , . . . , X n ) ∼ N ( μ , Σ ) {\displaystyle (X_{1},...,X_{n})\sim {\mathcal {N}}(\mu ,\Sigma )} it follows that Similarly, when μ = 0 {\displaystyle \mu =0} , E ⁡ [ ∂ i g ( X ) ] = E ⁡ [ g ( X ) ( Σ − 1 X ) i ] , E ⁡ [ ∂ i ∂ j g ( X ) ] = E ⁡ [ g ( X ) ( ( Σ − 1 X ) i ( Σ − 1 X ) j − Σ i j − 1 ) ] {\displaystyle \operatorname {E} [\partial _{i}g(X)]=\operatorname {E} [g(X)(\Sigma ^{-1}X)_{i}],\quad \operatorname {E} [\partial _{i}\partial _{j}g(X)]=\operatorname {E} [g(X)((\Sigma ^{-1}X)_{i}(\Sigma ^{-1}X)_{j}-\Sigma _{ij}^{-1})]} Stein's lemma can be used to stochastically estimate gradient: ∇ E ϵ ∼ N ( 0 , I ) ⁡ ( g ( x + Σ 1 / 2 ϵ ) ) = Σ − 1 / 2 E ϵ ∼ N ( 0 , I ) ⁡ ( g ( x + Σ 1 / 2 ϵ ) ϵ ) ≈ Σ − 1 / 2 1 N ∑ i = 1 N g ( x + Σ 1 / 2 ϵ i ) ϵ i {\displaystyle \nabla \operatorname {E} _{\epsilon \sim {\mathcal {N}}(0,I)}{\bigl (}g(x+\Sigma ^{1/2}\epsilon ){\bigr )}=\Sigma ^{-1/2}\operatorname {E} _{\epsilon \sim {\mathcal {N}}(0,I)}{\bigl (}g(x+\Sigma ^{1/2}\epsilon )\epsilon {\bigr )}\approx \Sigma ^{-1/2}{\frac {1}{N}}\sum _{i=1}^{N}g(x+\Sigma ^{1/2}\epsilon _{i})\epsilon _{i}} where ϵ 1 , … , ϵ N {\displaystyle \epsilon _{1},\dots ,\epsilon _{N}} are IID samples from the standard normal distribution N ( 0 , I ) {\displaystyle {\mathcal {N}}(0,I)} . This form has applications in Stein variational gradient descent [ 4 ] and Stein variational policy gradient . [ 5 ] The univariate probability density function for the univariate normal distribution with expectation 0 and variance 1 is Since ∫ x exp ⁡ ( − x 2 / 2 ) d x = − exp ⁡ ( − x 2 / 2 ) {\displaystyle \int x\exp(-x^{2}/2)\,dx=-\exp(-x^{2}/2)} we get from integration by parts : The case of general variance σ 2 {\displaystyle \sigma ^{2}} follows by substitution . Isserlis' theorem is equivalently stated as E ⁡ ( X 1 f ( X 1 , … , X n ) ) = ∑ i = 1 n Cov ⁡ ( X 1 , X i ) E ⁡ ( ∂ X i f ( X 1 , … , X n ) ) . 
{\displaystyle \operatorname {E} (X_{1}f(X_{1},\ldots ,X_{n}))=\sum _{i=1}^{n}\operatorname {Cov} (X_{1},X_{i})\operatorname {E} (\partial _{X_{i}}f(X_{1},\ldots ,X_{n})).} where ( X 1 , … X n ) {\displaystyle (X_{1},\dots X_{n})} is a zero-mean multivariate normal random vector. Suppose X is in an exponential family , that is, X has the density Suppose this density has support ( a , b ) {\displaystyle (a,b)} where a , b {\displaystyle a,b} could be − ∞ , ∞ {\displaystyle -\infty ,\infty } and as x → a or b {\displaystyle x\rightarrow a{\text{ or }}b} , exp ⁡ ( η ′ T ( x ) ) h ( x ) g ( x ) → 0 {\displaystyle \exp(\eta 'T(x))h(x)g(x)\rightarrow 0} where g {\displaystyle g} is any differentiable function such that E | g ′ ( X ) | < ∞ {\displaystyle E|g'(X)|<\infty } or exp ⁡ ( η ′ T ( x ) ) h ( x ) → 0 {\displaystyle \exp(\eta 'T(x))h(x)\rightarrow 0} if a , b {\displaystyle a,b} finite. Then The derivation is same as the special case, namely, integration by parts. If we only know X {\displaystyle X} has support R {\displaystyle \mathbb {R} } , then it could be the case that E | g ( X ) | < ∞ and E | g ′ ( X ) | < ∞ {\displaystyle E|g(X)|<\infty {\text{ and }}E|g'(X)|<\infty } but lim x → ∞ f η ( x ) g ( x ) ≠ 0 {\displaystyle \lim _{x\rightarrow \infty }f_{\eta }(x)g(x)\not =0} . To see this, simply put g ( x ) = 1 {\displaystyle g(x)=1} and f η ( x ) {\displaystyle f_{\eta }(x)} with infinitely spikes towards infinity but still integrable. One such example could be adapted from f ( x ) = { 1 x ∈ [ n , n + 2 − n ) 0 otherwise {\displaystyle f(x)={\begin{cases}1&x\in [n,n+2^{-n})\\0&{\text{otherwise}}\end{cases}}} so that f {\displaystyle f} is smooth. Extensions to elliptically-contoured distributions also exist. [ 6 ] [ 7 ] [ 8 ]
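Both the univariate identity and the gradient-estimation application can be checked by simulation. The sketch below is illustrative only (the choice g = tanh and the values of μ, σ, and x are arbitrary): it verifies E[g(X)(X − μ)] = σ² E[g′(X)] and, in one dimension, compares the Stein gradient estimate with a finite-difference estimate computed on the same samples.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.5, 2.0                      # arbitrary illustrative values
X = mu + sigma * rng.standard_normal(1_000_000)

g = np.tanh                               # any differentiable g with the required expectations
g_prime = lambda x: 1.0 / np.cosh(x) ** 2

# Stein's lemma: E[g(X)(X - mu)] = sigma^2 * E[g'(X)].
lhs = np.mean(g(X) * (X - mu))
rhs = sigma ** 2 * np.mean(g_prime(X))
print(lhs, rhs)                           # approximately equal

# One-dimensional version of the stochastic gradient estimate:
# d/dx E[g(x + sigma*eps)] ~= (1/sigma) * mean(g(x + sigma*eps) * eps).
x, eps = 0.3, rng.standard_normal(1_000_000)
grad_est = np.mean(g(x + sigma * eps) * eps) / sigma
grad_fd = (np.mean(g(x + 1e-4 + sigma * eps)) - np.mean(g(x - 1e-4 + sigma * eps))) / 2e-4
print(grad_est, grad_fd)                  # approximately equal
```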
https://en.wikipedia.org/wiki/Stein's_lemma
Stein Aerts is a Belgian bio-engineer and computational biologist. He leads the Laboratory of Computational Biology [ 1 ] at VIB and KU Leuven ( University of Leuven ), and is director of VIB.AI, the VIB Center for AI & Computational Biology. He has received several accolades for his research into the workings of the genomic regulatory code. [ 2 ] Aerts was born and raised in Heusden-Zolder , Belgium , where he completed his secondary education at Heilig-Hart College. [ 3 ] He obtained a Master's degree in Bioscience Engineering (Molecular Biology) from the University of Leuven , and subsequently combined a job as Assistant IT Project Leader at Janssen Pharmaceutica with advanced studies in Applied Computer Science at the University of Brussels . He obtained a PhD in Engineering (Bioinformatics), working at the Department of Electrical Engineering ESAT-SCD at the University of Leuven . [ 4 ] Aerts completed his postdoctoral training working on the genomics of gene regulation in the fruit fly model Drosophila melanogaster in the lab of Bassem Hassan at VIB in Leuven , including a research visit at the Developmental Biology Institute of Marseille , Luminy (IBDML), in France , with Denis Thierry and Carl Herrmann. In 2009, Aerts was appointed assistant professor at the University of Leuven , where he is now full professor , and heads the Laboratory of Computational Biology at the KU Leuven Department of Human Genetics. In 2016, he was also appointed a VIB group leader. Aerts teaches several courses, including Introduction to Bioinformatics , Bioinformatics: Structural and Comparative Genomics , Bioinformatics and Systems Biology: Sequence, Structure & Evolution , and Bioinformatics and Systems Biology: Expression, Regulation and Networks at the University of Leuven . His research focuses on deciphering the genomic regulatory code, using a combination of single-cell, machine-learning , and high-throughput experimental approaches. [ 5 ] Aerts' research interests in regulatory genomics and gene regulatory networks cover a wide range of experimental and computational approaches, applied in the context of neuronal development, neurodegeneration, and cancer. [ 6 ] [ 7 ] During his PhD research, Aerts invented one of the first bioinformatics algorithms for the prediction of genomic enhancers (ModuleSearcher) [ 8 ] and developed several bioinformatics tools for the analysis of cis-regulatory sequences (TOUCAN) [ 9 ] [ 10 ] [ 11 ] and for gene prioritisation (Endeavour). [ 12 ] Other scientific contributions include new bioinformatics methods for the analysis of single-cell gene regulatory networks, namely iRegulon, [ 13 ] SCENIC [ 14 ] and cisTopic ; [ 15 ] a new experimental technique for massively parallel enhancer reporter assays (CHEQ-seq); and deep learning implementations for enhancer modelling (DeepMEL [ 16 ] and DeepFlyBrain [ 17 ] ). Aerts co-founded the Fly Cell Atlas consortium [ 18 ] and generated a single-cell atlas of the ageing Drosophila brain. [ 19 ] In 2022, the consortium announced the completion of a single-nucleus transcriptomic atlas of the adult fruit fly, [ 20 ] [ 21 ] which they hope will serve as a valuable resource for the research community and as a reference for studies of gene function at single-cell resolution. [ 22 ] The generation of cell and tissue atlases helps researchers to study biological processes, not only in flies but also in modeling human diseases at a whole-organism level with cell-type resolution. 
Aerts is also part of a pan-European research consortium called LifeTime, which aims to track, understand and target human cells during the onset and progression of complex diseases, and to analyse their response to therapy at single-cell resolution. [ 23 ] As an advocate for open science, Aerts deposits the data and methods developed by his team on open repositories, or makes them freely available as open source software and databases. [ 24 ] MendelCraft, a Minecraft mod developed by the Aerts lab, is a video game designed to teach children about DNA , genetics , and the laws of Mendel , by allowing them to cross and clone different breeds of virtual chickens. [ 25 ]
https://en.wikipedia.org/wiki/Stein_Aerts
Stein Bjornar Jacobsen (born 1950) [ 1 ] is a Norwegian-American geochemist who works within cosmochemistry . Hailing from Drammen , he finished a cand.mag. degree at the University of Oslo before studying geology in California with a Rotary grant. [ 2 ] Jacobsen became a professor of geochemistry at Harvard University . [ 3 ] He was inducted into the Norwegian Academy of Science and Letters in 1994. [ 1 ] In 2009 he was inducted into the American Academy of Arts and Sciences , mainly for using "the distribution of long-lived and extinct radioisotopes to date the formation of the earth's core and to define the effects of core separation on the early history of the core-mantle-crust system". [ 4 ]
https://en.wikipedia.org/wiki/Stein_Jacobsen
The Steinhaus longimeter , patented by Hugo Steinhaus , is an instrument used to measure the lengths of curves on maps. It is a transparent sheet of three grids, turned against each other by 30 degrees, each consisting of parallel lines spaced at equal distances of 3.82 mm . The measurement is done by counting crossings of the curve with grid lines. The number of crossings is the approximate length of the curve in millimetres. The design of the Steinhaus longimeter can be seen as an application of the Crofton formula , according to which the length of a curve is proportional to the expected number of times it is crossed by a random line. [ 1 ]
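The Crofton-formula principle behind the longimeter can be illustrated numerically: a grid of parallel lines with spacing d, placed at a uniformly random orientation and offset, crosses a curve of length L an expected 2L/(πd) times, so πd/2 times the average crossing count estimates L. The sketch below is an illustration of that principle only, not a model of the exact three-grid Steinhaus design or its calibration; the curve, spacing, and variable names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Curve: a semicircular arc of radius 10, true length = pi * 10 ~= 31.4159.
t = np.linspace(0.0, np.pi, 2000)
curve = np.column_stack([10 * np.cos(t), 10 * np.sin(t)])

d = 3.82                  # grid spacing, in the same (arbitrary) units as the curve
placements = 20_000
crossings = np.empty(placements)

for k in range(placements):
    angle = rng.uniform(0.0, np.pi)      # random grid orientation
    offset = rng.uniform(0.0, d)         # random grid offset
    # Signed coordinate of each vertex along the grid normal direction.
    s = curve @ np.array([np.cos(angle), np.sin(angle)])
    # Crossings of each polyline segment = number of grid lines it passes.
    cells = np.floor((s - offset) / d)
    crossings[k] = np.sum(np.abs(np.diff(cells)))

# Crofton estimate: E[crossings] = 2 L / (pi d)  =>  L ~= (pi d / 2) * mean(crossings).
print(np.pi * d / 2 * crossings.mean())  # close to pi * 10
```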
https://en.wikipedia.org/wiki/Steinhaus_longimeter
In the mathematical field of real analysis , the Steinhaus theorem states that the difference set of a set of positive measure contains an open neighbourhood of zero. It was first proved by Hugo Steinhaus . [ 1 ] Let A be a Lebesgue-measurable set on the real line such that the Lebesgue measure of A is not zero. Then the difference set A − A = { a − b : a , b ∈ A } contains an open neighbourhood of the origin. The general version of the theorem, first proved by André Weil , [ 2 ] states that if G is a locally compact group , and A ⊂ G a subset of positive (left) Haar measure , then A A −1 = { a b −1 : a , b ∈ A } contains an open neighbourhood of unity. The theorem can also be extended to nonmeagre sets with the Baire property . A corollary of this theorem is that any measurable proper subgroup of ( R , + ) {\displaystyle (\mathbb {R} ,+)} is of measure zero.
https://en.wikipedia.org/wiki/Steinhaus_theorem
In mathematics , Steinhaus–Moser notation is a notation for expressing certain large numbers . It is an extension (devised by Leo Moser ) of Hugo Steinhaus 's polygon notation. [ 1 ] etc.: n written in an ( m + 1 )-sided polygon is equivalent to "the number n inside n nested m -sided polygons". In a series of nested polygons, they are associated inward. The number n inside two triangles is equivalent to n n inside one triangle, which is equivalent to n n raised to the power of n n . Steinhaus defined only the triangle, the square, and the circle , which is equivalent to the pentagon defined above. Steinhaus defined: Moser's number is the number represented by "2 in a megagon". Megagon is here the name of a polygon with "mega" sides (not to be confused with the polygon with one million sides ). Alternative notations: A mega, ②, is already a very large number, since ② = square(square(2)) = square(triangle(triangle(2))) = square(triangle(2 2 )) = square(triangle(4)) = square(4 4 ) = square(256) = triangle(triangle(triangle(...triangle(256)...))) [256 triangles] = triangle(triangle(triangle(...triangle(256 256 )...))) [255 triangles] ~ triangle(triangle(triangle(...triangle(3.2317 × 10 616 )...))) [255 triangles] ... Using the other notation: mega = M ( 2 , 1 , 5 ) = M ( 256 , 256 , 3 ) {\displaystyle M(2,1,5)=M(256,256,3)} With the function f ( x ) = x x {\displaystyle f(x)=x^{x}} we have mega = f 256 ( 256 ) = f 258 ( 2 ) {\displaystyle f^{256}(256)=f^{258}(2)} where the superscript denotes a functional power , not a numerical power. We have (note the convention that powers are evaluated from right to left): Similarly: etc. Thus: Rounding more crudely (replacing the 257 at the end by 256), we get mega ≈ 256 ↑ ↑ 257 {\displaystyle 256\uparrow \uparrow 257} , using Knuth's up-arrow notation . After the first few steps the value of n n {\displaystyle n^{n}} is each time approximately equal to 256 n {\displaystyle 256^{n}} . In fact, it is even approximately equal to 10 n {\displaystyle 10^{n}} (see also approximate arithmetic for very large numbers ). Using base 10 powers we get: ... It has been proven that in Conway chained arrow notation , and, in Knuth's up-arrow notation , Therefore, Moser's number, although incomprehensibly large, is vanishingly small compared to Graham's number : [ 2 ]
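The first steps of the mega computation above can be reproduced with arbitrary-precision integers. The sketch below is illustrative (the function names triangle and square are ad hoc): it checks that square(2) = 256 and that the next value in the expansion, 256 256 , has 617 digits and leading digits 32317, matching the approximation quoted above; the remaining 255 nested triangles are far beyond explicit evaluation.

```python
def triangle(n: int) -> int:
    """n in a triangle: n**n."""
    return n ** n

def square(n: int) -> int:
    """n in a square: n inside n nested triangles."""
    value = n
    for _ in range(n):
        value = triangle(value)
    return value

assert triangle(2) == 4
assert square(2) == 256          # mega = square(square(2)) = square(256)

# square(256) = 256 inside 256 nested triangles; only the first step is printable:
step1 = triangle(256)            # 256**256
print(len(str(step1)), str(step1)[:5])   # 617 digits, leading digits 32317 (~3.2317e616)
```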
https://en.wikipedia.org/wiki/Steinhaus–Moser_notation
In mathematics , the Stein–Strömberg theorem or Stein–Strömberg inequality is a result in measure theory concerning the Hardy–Littlewood maximal operator . The result is foundational in the study of the problem of differentiation of integrals . The result is named after the mathematicians Elias M. Stein and Jan-Olov Strömberg . Let λ n denote n - dimensional Lebesgue measure on n -dimensional Euclidean space R n and let M denote the Hardy–Littlewood maximal operator: for a function f : R n → R , Mf : R n → R is defined by Mf ( x ) = sup r > 0 ( 1 / λ n ( B r ( x ) ) ) ∫ B r ( x ) | f ( y ) | dλ n ( y ) , where B r ( x ) denotes the open ball of radius r with center x . Then, for each p > 1, there is a constant C p > 0 such that, for all natural numbers n and functions f ∈ L p ( R n ; R ), ‖ Mf ‖ L p ≤ C p ‖ f ‖ L p . In general, a maximal operator M is said to be of strong type ( p , p ) if ‖ Mf ‖ L p ≤ C p , n ‖ f ‖ L p for all f ∈ L p ( R n ; R ). Thus, the Stein–Strömberg theorem is the statement that the Hardy–Littlewood maximal operator is of strong type ( p , p ) uniformly with respect to the dimension n .
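The definition can be made concrete with a small discrete analogue. The sketch below is an illustrative one-dimensional discretisation added here, not part of the theorem; the function name and grid are arbitrary. It approximates the centred maximal function on a grid and evaluates the L p norms appearing in the strong-type inequality; the dimension-independence asserted by the theorem is of course not visible in one dimension.

```python
import numpy as np

def maximal_function(f: np.ndarray) -> np.ndarray:
    """Discrete 1-D approximation of the centred Hardy-Littlewood maximal function:
    Mf(x_i) = max over half-widths r of the average of |f| over {x_{i-r}, ..., x_{i+r}}."""
    n = len(f)
    absf = np.abs(f)
    Mf = np.zeros(n)
    for i in range(n):
        for r in range(n):
            lo, hi = max(0, i - r), min(n, i + r + 1)
            Mf[i] = max(Mf[i], absf[lo:hi].mean())
    return Mf

x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
f = np.where(np.abs(x) < 1.0, 1.0, 0.0)   # indicator of the interval (-1, 1)
Mf = maximal_function(f)

# Strong type (p, p): ||Mf||_p <= C ||f||_p for p > 1; Stein-Strömberg says the constant
# can be taken independent of the dimension (not visible in this 1-D illustration).
p = 2.0
print((np.sum(np.abs(f) ** p) * dx) ** (1 / p), (np.sum(np.abs(Mf) ** p) * dx) ** (1 / p))
```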
https://en.wikipedia.org/wiki/Stein–Strömberg_theorem
Stella Ifeanyi Smith is a Nigerian medical scientist with interests in molecular biology and biotechnology. Smith joined the Nigeria Institute of Medical Research in 1988, and was made director of research in 2013. [ 1 ] As of March 2025, she had 5,749 citations on Google Scholar . [ 2 ] She was elected a Fellow of the African Academy of Sciences in 2022. [ 3 ] Born January 28, 1965, Smith obtained her first degree in microbiology from the University of Ilorin in 1986. Thereafter, she obtained a master's degree in medical microbiology from the University of Lagos . She completed her doctorate at the same institution in 1996. [ 4 ] In 2001, she conducted a study of 459 diarrhoeal patients in Lagos, from whom isolates of Shigella spp. and Escherichia coli were obtained. From her findings, she recommended that ampicillin, tetracycline, co-trimoxazole, and streptomycin be avoided in the first-line treatment of shigellosis, as the resistance levels observed in the study were of concern. She identified nalidixic acid, ciprofloxacin and ofloxacin as safer alternatives. [ 5 ] Her most cited work on Google Scholar was published in 2006, in which she examined the antibacterial effect of edible plant extracts on Escherichia coli O157:H7. In that work, four different plants ( Entada africana (bark), Terminalia avicennoides (bark), Mitragyna stipulosa (bark), and Lannea acida (stem bark)) were tested as ethanol and aqueous extracts, using the agar diffusion method, against ten strains of E. coli O157:H7 (EHEC). The results varied depending on the combination used for the test. [ 6 ]
https://en.wikipedia.org/wiki/Stella_Ifeanyi_Smith
Stellar archaeology is the study of the early history of the universe, based on the composition of its earliest stars. [ 1 ] By examining the chemical abundances of the earliest stars in the universe (metal-poor Population II stars), insights are gained into their earlier, metal-free Population III progenitors. This sheds light on such processes as galaxy formation and evolution , early star formation, nucleosynthesis in stars and supernovae , and the formation processes of the galactic halo . [ 2 ] [ 3 ] The field has already discovered that the Milky Way cannibalizes surrounding dwarf galaxies , giving it a youthful appearance. [ 4 ]
https://en.wikipedia.org/wiki/Stellar_archaeology
Stellar chemistry is the study of chemical composition of astronomical objects; stars in particular, hence the name stellar chemistry. The significance of stellar chemical composition is an open ended question at this point. Some research asserts that a greater abundance of certain elements (such as carbon, sodium, silicon, and magnesium) in the stellar mass are necessary for a star's inner solar system to be habitable over long periods of time. [ 1 ] [ 2 ] The hypothesis being that the "abundance of these elements make the star cooler and cause it to evolve more slowly, thereby giving planets in its habitable zone more time to develop life as we know it." [ 1 ] Stellar abundance of oxygen also appears to be critical to the length of time newly developed planets exist in a habitable zone around their host star. [ 2 ] Researchers postulate that if our own sun had a lower abundance of oxygen, the Earth would have ceased to "live" in a habitable zone a billion years ago, long before complex organisms had the opportunity to evolve. [ 1 ] Other research is being or has been done in numerous areas relating to the chemical nature of stars. The formation of stars is of particular interest. Research published in 2009 presents spectroscopic observations of so-called "young stellar objects" viewed in the Large Magellanic Cloud with the Spitzer Space Telescope . This research suggests that water, or, more specifically, ice, plays a large role in the formation of these eventual stars [ 3 ] Others are researching much more tangible ideas relating to stars and chemistry. Research published in 2010 studied the effects of a strong stellar flare on the atmospheric chemistry of an Earth-like planet orbiting an M dwarf star , specifically, the M dwarf AD Leonis . This research simulated the effects an observed flare produced by AD Leonis on April 12, 1985 would have on a hypothetical Earth-like planet. After simulating the effects of both UV radiation and protons on the hypothetical planet's atmosphere, the researchers concluded that "flares may not present a direct hazard for life on the surface of an orbiting habitable planet. Given that AD Leo[nis] is one of the most magnetically active M dwarfs known, this conclusion should apply to planets around other M dwarfs with lower levels of chromospheric activity." [ 4 ]
https://en.wikipedia.org/wiki/Stellar_chemistry
In astronomy , stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines . Each line indicates a particular chemical element or molecule , with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere , although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature. Most stars are currently classified under the Morgan–Keenan (MK) system using the letters O , B , A , F , G , K , and M , a sequence from the hottest ( O type) to the coolest ( M type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g., A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with three classes for other stars that do not fit in the classical system: W , S and C . Some stellar remnants or objects of deviating mass have also been assigned letters: D for white dwarfs and L , T and Y for brown dwarfs (and exoplanets ). In the MK system, a luminosity class is added to the spectral class using Roman numerals . This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants , class I for supergiants , class II for bright giants , class III for regular giants , class IV for subgiants , class V for main-sequence stars , class sd (or VI ) for subdwarfs , and class D (or VII ) for white dwarfs . The full spectral class for the Sun is then G2V, indicating a main-sequence star with a surface temperature around 5,800 K. The conventional colour description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colours combined appear white, the actual apparent colours the human eye would observe are far lighter than the conventional colour descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colours within the spectrum can be misleading. Excluding colour-contrast effects in dim light, in typical viewing conditions there are no green, cyan, indigo, or violet stars. "Yellow" dwarfs such as the Sun are white, "red" dwarfs are a deep shade of yellow/orange, and "brown" dwarfs do not literally appear brown, but hypothetically would appear dim red or grey/black to a nearby observer. The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class (from the older Harvard spectral classification, which did not include luminosity [ 1 ] ) and a luminosity class using Roman numerals as explained below, forming the star's spectral type. Other modern stellar classification systems , such as the UBV system , are based on color indices —the measured differences in three or more color magnitudes . [ 2 ] Those numbers are given labels such as "U−V" or "B−V", which represent the colors passed by two standard filters (e.g. U ltraviolet, B lue and V isual). 
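The structure of a simple MK spectral type (temperature letter, numeric subdivision, Roman-numeral luminosity class) can be made explicit with a small parser. The sketch below is illustrative only: the function parse_spectral_type and its regular expression are ad hoc additions, and real spectral types carry many prefixes and suffixes (peculiarity codes, ranges, slash types) that it does not handle.

```python
import re

LUMINOSITY = {
    "0": "hypergiant", "Ia+": "hypergiant", "I": "supergiant", "II": "bright giant",
    "III": "giant", "IV": "subgiant", "V": "main sequence",
    "VI": "subdwarf", "VII": "white dwarf",
}

def parse_spectral_type(code: str) -> dict:
    """Split a simple MK spectral type such as 'G2V' or 'O9.7I' into its components."""
    m = re.fullmatch(r"([OBAFGKM])(\d(?:\.\d)?)?(Ia\+|VII|VI|V|IV|III|II|I|0)?", code)
    if m is None:
        raise ValueError(f"not a simple MK type: {code!r}")
    letter, subdivision, lum = m.groups()
    return {
        "temperature class": letter,                      # hottest O ... coolest M
        "subdivision": float(subdivision) if subdivision else None,   # 0 hottest, 9 coolest
        "luminosity class": lum,
        "luminosity description": LUMINOSITY.get(lum),
    }

print(parse_spectral_type("G2V"))    # the Sun: class G, subdivision 2, main sequence
print(parse_spectral_type("O9.7I"))  # an O9.7 supergiant
```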
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon , who re-ordered and simplified the prior alphabetical system by Draper (see History ). Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K , whereas more-evolved stars – in particular, newly-formed white dwarfs – can have surface temperatures above 100,000 K. [ 3 ] Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest. A common mnemonic for remembering the order of the spectral type letters, from hottest to coolest, is " O h, B e A F ine G uy/ G irl: K iss M e!", or another one is " O ur B right A stronomers F requently G enerate K iller M nemonics!". [ 12 ] The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. [ 13 ] The Sun is classified as G2. [ 14 ] The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature ) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. [ 15 ] In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra. [ 16 ] Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. [ 17 ] Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals. The Yerkes spectral classification , also called the MK, or Morgan-Keenan (alternatively referred to as the MKK, or Morgan-Keenan-Kellman) [ 18 ] [ 19 ] system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan , Philip C. Keenan , and Edith Kellman from Yerkes Observatory . [ 20 ] This two-dimensional ( temperature and luminosity ) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity , which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification , or MK , [ 21 ] which remains in use today. Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. 
Therefore, differences in the spectrum can be interpreted as luminosity effects and a luminosity class can be assigned purely from examination of the spectrum. A number of different luminosity classes are distinguished, as listed in the table below. [ 22 ] Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications. In these cases, two special symbols are used: For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant. Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence). Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs. Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly less luminous than typical may be given a luminosity class of IIIb, while a luminosity class IIIa indicates a star slightly brighter than a typical giant. [ 32 ] A sample of extreme V stars with strong absorption in He II λ4686 spectral lines have been given the Vz designation. An example star is HD 93129 B . [ 33 ] Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum. [ 34 ] For example, 59 Cygni is listed as spectral type B1.5Vnne, [ 41 ] indicating a spectrum with the general classification B1.5V, as well as very broad absorption lines and certain emission lines. The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved. During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra, shown in the table below. [ 42 ] [ 43 ] [ 44 ] In the late 1890s, this classification began to be superseded by the Harvard classification, which is discussed in the remainder of this article. [ 45 ] [ 46 ] [ 47 ] The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes and the proposed neutron star classes. In the 1880s, the astronomer Edward C. Pickering began to make a survey of stellar spectra at the Harvard College Observatory , using the objective-prism method. A first result of this work was the Draper Catalogue of Stellar Spectra , published in 1890. Williamina Fleming classified most of the spectra in this catalogue and was credited with classifying over 10,000 featured stars and discovering 10 novae and more than 200 variable stars. [ 54 ] With the help of the Harvard Computers , especially Williamina Fleming , the first iteration of the Henry Draper catalogue was devised to replace the Roman-numeral scheme established by Angelo Secchi. [ 55 ] The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. Also, the letter Q was used for stars not fitting into any other class. 
[ 51 ] [ 52 ] Fleming worked with Pickering to differentiate 17 different classes based on the intensity of hydrogen spectral lines, which causes variation in the wavelengths emanated from stars and results in variation in color appearance. The spectra in class A tended to produce the strongest hydrogen absorption lines while spectra in class O produced virtually no visible lines. The lettering system displayed the gradual decrease in hydrogen absorption in the spectral classes when moving down the alphabet. This classification system was later modified by Annie Jump Cannon and Antonia Maury to produce the Harvard spectral classification scheme. [ 54 ] [ 56 ] In 1897, another astronomer at Harvard, Antonia Maury , placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I–XXII. [ 57 ] [ 58 ] Because the 22 Roman numeral groupings did not account for additional variations in spectra, three additional divisions were made to further specify differences: Lowercase letters were added to differentiate relative line appearance in spectra; the lines were defined as: [ 59 ] Antonia Maury published her own stellar classification catalogue in 1897 called "Spectra of Bright Stars Photographed with the 11 inch Draper Telescope as Part of the Henry Draper Memorial", which included 4,800 photographs and Maury's analyses of 681 bright northern stars. This was the first instance in which a woman was credited for an observatory publication. [ 60 ] In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one fifth of the way from F to G, and so on. [ 61 ] [ 62 ] Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. [ 63 ] [ 64 ] This is essentially the modern form of the Harvard classification system. This system was developed through the analysis of spectra on photographic plates, which could convert light emanated from stars into a readable spectrum. [ 65 ] A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. [ 66 ] [ 67 ] [ 68 ] This notation system is still sometimes seen on modern spectra. [ 69 ] The stellar classification system is taxonomic , based on type specimens , similar to classification of species in biology : The categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features. [ 70 ] Stars are often referred to as early or late types. "Early" is a synonym for hotter , while "late" is a synonym for cooler . Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term would therefore refer to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" being perhaps K0, K1, K2 and K3. "Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9. 
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number. This obscure terminology is a hold-over from a late nineteenth century model of stellar evolution , which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism , which is now known to not apply to main-sequence stars . If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism provided ages of the Sun that were much smaller than what is observed in the geologic record , and was rendered obsolete by the discovery that stars are powered by nuclear fusion . [ 71 ] The terms "early" and "late" were carried over, beyond the demise of the model they were based on. O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars. [ c ] [ 11 ] Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult. O-type spectra formerly were defined by the ratio of the strength of the He II λ4541 relative to that of He I λ4471, where λ is the radiation wavelength . Spectral type O7 was defined to be the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Due to this, the modern definition uses the ratio of the nitrogen line N IV λ4058 to N III λλ4634-40-42. [ 72 ] O-type stars have dominant lines of absorption and sometimes emission for He II lines, prominent ionized ( Si IV, O III, N III, and C III) and neutral helium lines, strengthening from O5 to O9, and prominent hydrogen Balmer lines , although not as strong as in later types. Higher-mass O-type stars do not retain extensive atmospheres due to the extreme velocity of their stellar wind , which may reach 2,000 km/s. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence . When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. [ 73 ] The MKK scheme was extended to O9.7 in 1971 [ 74 ] and O4 in 1978, [ 75 ] and new classification schemes that add types O2, O3, and O3.5 have subsequently been introduced. [ 76 ] Example spectral standards: [ 70 ] B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars . The transition from class O to class B was originally defined to be the point at which the He II λ4541 disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. 
For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid-B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471. [ 72 ] These stars tend to be found in their originating OB associations , which are associated with giant molecular clouds . The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion . About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars . [ c ] [ 11 ] B-type stars are relatively uncommon and the closest is Regulus, at around 80 light years. [ 77 ] Massive yet non- supergiant stars known as Be stars have been observed to show one or more Balmer lines in emission, with the hydrogen -related electromagnetic radiation series projected out by the stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds , high surface temperatures, and significant attrition of stellar mass as the objects rotate at a curiously rapid rate. [ 78 ] Objects known as B[e] stars – or B(e) stars for typographic reasons – possess distinctive neutral or low ionisation emission lines that are considered to have forbidden mechanisms , undergoing processes not normally allowed under current understandings of quantum mechanics . Example spectral standards: [ 70 ] A-type stars are among the more common naked eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals ( Fe II, Mg II, Si II) at a maximum at A5. The presence of Ca II lines is notably strengthening by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars, [ c ] [ 11 ] which includes 9 stars within 15 parsecs. [ 79 ] Example spectral standards: [ 70 ] F-type stars have strengthening spectral lines H and K of Ca II. Neutral metals ( Fe I, Cr I) beginning to gain on ionized metal lines by late F. Their spectra are characterized by the weaker hydrogen lines and ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars, [ c ] [ 11 ] including 1 star Procyon A within 20 ly. [ 80 ] Example spectral standards: [ 70 ] [ 81 ] [ 82 ] [ 83 ] [ 84 ] G-type stars, including the Sun , [ 14 ] have prominent spectral lines H and K of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CN molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood. There are 21 G-type stars within 10pc. [ c ] [ 11 ] Class G contains the "Yellow Evolutionary Void". [ 85 ] Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the unstable yellow supergiant class. Example spectral standards: [ 70 ] K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood. 
[ c ] [ 11 ] There are also giant K-type stars, which range from hypergiants like RW Cephei , to giants and supergiants , such as Arcturus , whereas orange dwarfs , like Alpha Centauri B, are main-sequence stars. They have extremely weak hydrogen lines, if those are present at all, and mostly neutral metals ( Mn I, Fe I, Si I). By late K, molecular bands of titanium oxide become present. Mainstream theories (those rooted in lower harmful radioactivity and star longevity) would thus suggest such stars have the optimal chances of heavily evolved life developing on orbiting planets (if such life is directly analogous to Earth's) due to a broad habitable zone yet much lower harmful periods of emission compared to those with the broadest such zones. [ 86 ] [ 87 ] Example spectral standards: [ 70 ] Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars. [ c ] [ f ] [ 11 ] However, class M main-sequence stars ( red dwarfs ) have such low luminosities that none are bright enough to be seen with the unaided eye, unless under exceptional conditions. The brightest-known M class main-sequence star is Lacaille 8760 , class M0V, with magnitude 6.7 (the limiting magnitude for typical naked-eye visibility under good conditions being typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found. Although most class M stars are red dwarfs, most of the largest-known supergiant stars in the Milky Way are class M stars, such as VY Canis Majoris , VV Cephei , Antares , and Betelgeuse . Furthermore, some larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5. The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum , especially TiO ) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M. Example spectral standards: [ 70 ] A number of new spectral types have been taken into use from newly discovered types of stars. [ 88 ] Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen. Once included as type O stars, the Wolf–Rayet stars of class W [ 90 ] or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon, and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds , thereby directly exposing their hot helium shells. Class WR is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers). [ 40 ] WR spectra range is listed below: [ 91 ] [ 92 ] Although the central stars of most planetary nebulae (CSPNe) show O-type spectra, [ 93 ] around 10% are hydrogen-deficient and show WR spectra. [ 94 ] These are low-mass stars and to distinguish them from the massive Wolf–Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN]. The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL") [ 72 ] ). There is a secondary group found with these spectra, a cooler, "intermediate" group designated "Ofpe/WN9". 
[ 72 ] These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation of the evolutionary difference from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If * /WN5-7, which are even hotter than the original "slash" stars. [ 95 ] They are O stars with strong magnetic fields. Designation is Of?p. [ 72 ] The new spectral types L, T, and Y were created to classify infrared spectra of cool stars. This includes both red dwarfs and brown dwarfs that are very faint in the visible spectrum . [ 96 ] Brown dwarfs , stars that do not undergo hydrogen fusion , cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and will cool through the L, T, and Y spectral classes, faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between spectral types ' effective temperature and luminosity for some masses and ages of different L-T-Y types, no distinct temperature or luminosity values can be given. [ 10 ] Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared . Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra. [ 97 ] [ 98 ] [ 99 ] Due to low surface gravity in giant stars, TiO - and VO -bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for these L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis while in the height of its luminous red nova eruption. Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K (277 and 1,027 °C; 530 and 1,880 °F). Their emission peaks in the infrared . Methane is prominent in their spectra. [ 97 ] [ 98 ] Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than what was previously conjectured. It is theorized that these proplyds are in a race with each other. The first one to form will become a protostar , which are very violent objects and will disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are quite invisible to us. [ 100 ] Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. [ 101 ] Although such dwarfs have been modelled [ 102 ] and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE) [ 88 ] [ 103 ] [ 104 ] [ 105 ] [ 106 ] there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2. [ 107 ] The spectra of these prospective Y objects display absorption around 1.55 micrometers . 
[ 108 ] Delorme et al. have suggested that this feature is due to absorption from ammonia , and that this should be taken as the indicative feature for the T-Y transition. [ 108 ] [ 109 ] In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. [ 107 ] However, this feature is difficult to distinguish from absorption by water and methane , [ 108 ] and other authors have stated that the assignment of class Y0 is premature. [ 110 ] The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650 , is a > Y2 dwarf with an effective temperature originally estimated around 300 K , the temperature of the human body. [ 103 ] [ 104 ] [ 111 ] Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714 with an approximate temperature of 250 K, and a mass just seven times that of Jupiter. [ 112 ] The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass (although they cool to become planets), which means that Y class objects straddle the 13 Jupiter mass deuterium -fusion limit that marks the current IAU division between brown dwarfs and planets. [ 107 ] Young brown dwarfs have low surface gravities because they have larger radii and lower masses compared to the field stars of similar spectral type. These sources are marked by a letter beta ( β ) for intermediate surface gravity and gamma ( γ ) for low surface gravity. Indication for low surface gravity are weak CaH, K I and Na I lines, as well as strong VO line. [ 115 ] Alpha ( α ) stands for normal surface gravity and is usually dropped. Sometimes an extremely low surface gravity is denoted by a delta ( δ ). [ 117 ] The suffix "pec" stands for peculiar. The peculiar suffix is still used for other features that are unusual and summarizes different properties, indicative of low surface gravity, subdwarfs and unresolved binaries. [ 118 ] The prefix sd stands for subdwarf and only includes cool subdwarfs. This prefix indicates a low metallicity and kinematic properties that are more similar to halo stars than to disk stars. [ 114 ] Subdwarfs appear bluer than disk objects. [ 119 ] The red suffix describes objects with red color, but an older age. This is not interpreted as low surface gravity, but as a high dust content. [ 116 ] [ 117 ] The blue suffix describes objects with blue near-infrared colors that cannot be explained with low metallicity. Some are explained as L+T binaries, others are not binaries, such as 2MASS J11263991−5003550 and are explained with thin and/or large-grained clouds. [ 117 ] Carbon-stars are stars whose spectra indicate production of carbon – a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C. The giants among those stars are presumed to produce this carbon themselves, but some stars in this class are double stars, whose odd atmosphere is suspected of having been transferred from a companion that is now a white dwarf, when the companion was a carbon-star. Originally classified as R and N stars, these are also known as carbon stars . These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. 
The old R and N classes ran parallel to the normal classification system from roughly mid-G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C–J-type stars, which are characterized by the strong presence of molecules of 13 CN in addition to those of 12 CN . [ 120 ] A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses: Class S stars form a continuum between class M stars and carbon stars. Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C 2 bands. [ 121 ] Class S stars have excess amounts of zirconium and other elements produced by the s-process , and have more similar carbon and oxygen abundances to class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars. The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum. The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more-recent but less-common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5. In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch . The class D (for Degenerate ) is the modern classification used for white dwarfs—low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere. The white dwarf types are as follows: [ 122 ] [ 123 ] The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/ T eff , where T eff is the effective surface temperature , measured in kelvins . Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9.(For example DA1.5 for IK Pegasi B) [ 122 ] [ 124 ] Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above. 
A different set of spectral peculiarity symbols is used for white dwarfs than for other types of stars. [ 122 ] Luminous blue variables (LBVs) are rare, massive and evolved stars that show unpredictable and sometimes dramatic variations in their spectra and brightness. During their "quiescent" states, they are usually similar to B-type stars, although with unusual spectral lines. During outbursts, they are more similar to F-type stars, with significantly lower temperatures. Many papers treat LBV as its own spectral type. [ 125 ] [ 126 ] Finally, the classes P and Q are left over from the system developed by Cannon for the Henry Draper Catalogue . They are occasionally used for certain objects not associated with a single star: type P objects are stars within planetary nebulae (typically young white dwarfs or hydrogen-poor M giants); type Q objects are novae . [ citation needed ] Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs , and as can be seen from the radically different classification scheme for class D, stellar remnants are difficult to fit into the MK system. The Hertzsprung–Russell diagram, which the MK system is based on, is observational in nature, so these remnants cannot easily be plotted on the diagram, if they can be placed at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram. [ 127 ] A classification system for neutron stars using Roman numerals has been proposed: type I for less massive neutron stars with low cooling rates, type II for more massive neutron stars with higher cooling rates, and a proposed type III for more massive neutron stars (possible exotic star candidates) with higher cooling rates. [ 128 ] The more massive a neutron star is, the higher the neutrino flux it carries. These neutrinos carry away so much heat energy that after only a few years the temperature of an isolated neutron star falls from the order of billions to only around a million kelvin. This proposed neutron star classification system is not to be confused with the earlier Secchi spectral classes and the Yerkes luminosity classes. Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N. While humans may eventually be able to colonize any kind of stellar habitat, this section will address the probability of life arising around other stars. Stability, luminosity, and lifespan are all factors in stellar habitability. Humans know of only one star that hosts life, the G-class Sun, a star with an abundance of heavy elements and low variability in brightness. The Solar System is also unlike many stellar systems in that it only contains one star (see Habitability of binary star systems ). Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life is limited by a few factors.
Of the main-sequence star types, stars with more than 1.5 times the mass of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of the Sun (spectral type M) are likely to tidally lock planets within their habitable zone, and present other problems as well (see Habitability of red dwarf systems ). [ 129 ] While there are many problems facing life on red dwarfs, many astronomers continue to model these systems due to their sheer numbers and longevity. For these reasons, NASA's Kepler Mission is searching for habitable planets around nearby main-sequence stars that are less massive than spectral type A but more massive than type M, making dwarf stars of types F, G, and K the most probable hosts for life. [ 129 ]
https://en.wikipedia.org/wiki/Stellar_classification
A corona (pl.: coronas or coronae) is the outermost layer of a star 's atmosphere . It is a hot but relatively dim region of plasma populated by intermittent coronal structures such as prominences , coronal loops , and helmet streamers . The Sun 's corona lies above the chromosphere and extends millions of kilometres into outer space. Coronal light is typically obscured by diffuse sky radiation and glare from the solar disk, but can be easily seen by the naked eye during a total solar eclipse or with a specialized coronagraph . [ 1 ] Spectroscopic measurements indicate strong ionization in the corona and a plasma temperature in excess of 1 000 000 kelvins , [ 2 ] much hotter than the surface of the Sun, known as the photosphere . The word corona ( Latin for 'crown') is in turn derived from Ancient Greek κορώνη ( korṓnē ) 'garland, wreath'. In 1724, French-Italian astronomer Giacomo F. Maraldi recognized that the aura visible during a solar eclipse belongs to the Sun, not to the Moon . [ 3 ] In 1809, Spanish astronomer José Joaquín de Ferrer coined the term 'corona'. [ 4 ] Based on his own observations of the 1806 solar eclipse at Kinderhook (New York), de Ferrer also proposed that the corona was part of the Sun and not of the Moon. English astronomer Norman Lockyer identified in the Sun's chromosphere the first element then unknown on Earth, which was named helium (from Greek helios 'sun'). French astronomer Jules Janssen noted, after comparing his readings between the 1871 and 1878 eclipses, that the size and shape of the corona changes with the sunspot cycle . [ 5 ] In 1930, Bernard Lyot invented the "coronograph" (now "coronagraph"), which allows viewing the corona without a total eclipse. In the 1980s, American astronomer Eugene Parker proposed that the solar corona might be heated by myriad tiny 'nanoflares', miniature brightenings resembling solar flares that would occur all over the surface of the Sun. The high temperature of the Sun's corona gives it unusual spectral features, which led some in the 19th century to suggest that it contained a previously unknown element, " coronium ". Instead, these spectral features have since been explained by highly ionized iron (Fe-XIV, or Fe 13+ ). Bengt Edlén , following the work of Walter Grotrian in 1939, first identified the coronal spectral lines in 1940 (observed since 1869) as transitions from low-lying metastable levels of the ground configuration of highly ionised metals (the green Fe-XIV line from Fe 13+ at 5 303 Å , but also the red Fe-X line from Fe 9+ at 6 374 Å ). [ 2 ] The solar corona has three recognized and distinct sources of light that occupy the same volume: the "F-corona" (for "Fraunhofer"), the "K-corona" (for "Kontinuierlich"), and the "E-corona" (for "emission"). [ 6 ] The "F-corona" is named for the Fraunhofer spectrum of absorption lines in ordinary sunlight, which are preserved by reflection off small material objects. The F-corona is faint near the Sun itself, but drops in brightness only gradually far from the Sun, extending far across the sky and becoming the zodiacal light . The F-corona is recognized to arise from small dust grains orbiting the Sun; these form a tenuous cloud that extends through much of the Solar System . The "K-corona" is named for the fact that its spectrum is a continuum, with no major spectral features. It is sunlight that is Thomson-scattered by free electrons in the hot plasma of the Sun's outer atmosphere.
The continuum nature of the spectrum arises from Doppler broadening of the Sun's Fraunhofer absorption lines in the reference frame of the (hot and therefore fast-moving) electrons. Although the K-corona is a phenomenon of the electrons in the plasma, the term is frequently used to describe the plasma itself (as distinct from the dust that gives rise to the F-corona). The "E-corona" is the component of the corona with an emission-line spectrum, either inside or outside the wavelength band of visible light. It is a phenomenon of the ion component of the plasma, as individual ions are excited by collision with other ions or electrons, or by absorption of ultraviolet light from the Sun. The Sun's corona is much hotter (by a factor from 150 to 450) than the visible surface of the Sun: the corona's temperature is 1 to 3 million kelvin compared to the photosphere 's average temperature – around 5 800 kelvin . The corona is far less dense than the photosphere, and produces about one-millionth as much visible light. The corona is separated from the photosphere by the relatively shallow chromosphere . The exact mechanism by which the corona is heated is still the subject of some debate, but likely possibilities include episodic energy releases from the pervasive magnetic field and magnetohydrodynamic waves from below. The outer edges of the Sun's corona are constantly being transported away, creating the "open" magnetic flux entrained in the solar wind . The corona is not always evenly distributed across the surface of the Sun. During periods of quiet, the corona is more or less confined to the equatorial regions, with coronal holes covering the polar regions. However, during the Sun's active periods, the corona is evenly distributed over the equatorial and polar regions, though it is most prominent in areas with sunspot activity. The solar cycle spans approximately 11 years, from one solar minimum to the following minimum. Since the solar magnetic field is continually wound up due to the faster rotation of mass at the Sun's equator ( differential rotation ), sunspot activity is more pronounced at solar maximum where the magnetic field is more twisted. Associated with sunspots are coronal loops , loops of magnetic flux , upwelling from the solar interior. The magnetic flux pushes the hotter photosphere aside, exposing the cooler plasma below, thus creating the relatively dark sun spots. High-resolution X-ray images of the Sun's corona photographed by Skylab in 1973, by Yohkoh in 1991–2001, and by subsequent space-based instruments revealed the structure of the corona to be quite varied and complex, leading astronomers to classify various zones on the coronal disc. [ 7 ] [ 8 ] [ 9 ] Astronomers usually distinguish several regions, [ 10 ] as described below. Active regions are ensembles of loop structures connecting points of opposite magnetic polarity in the photosphere, the so-called coronal loops. They generally distribute in two zones of activity, which are parallel to the solar equator. The average temperature is between two and four million kelvin, while the density goes from 10 9 to 10 10 particles per cubic centimetre. 
Active regions involve all the phenomena directly linked to the magnetic field, which occur at different heights above the Sun's surface: [ 10 ] sunspots and faculae occur in the photosphere; spicules , Hα filaments and plages in the chromosphere; prominences in the chromosphere and transition region; and flares and coronal mass ejections (CMEs) happen in the corona and chromosphere. If flares are very violent, they can also perturb the photosphere and generate a Moreton wave . In contrast, quiescent prominences are large, cool, dense structures which are observed as dark, "snake-like" Hα ribbons (appearing like filaments) on the solar disc. Their temperature is about 5 000 – 8 000 K , and so they are usually considered chromospheric features. In 2013, images from the High Resolution Coronal Imager revealed never-before-seen "magnetic braids" of plasma within the outer layers of these active regions. [ 11 ] Coronal loops are the basic structures of the magnetic solar corona. These loops are the closed-magnetic-flux cousins of the open magnetic flux that can be found in coronal holes and the solar wind. Loops of magnetic flux well up from the solar body and fill with hot solar plasma. [ 12 ] Due to the heightened magnetic activity in these coronal loop regions, coronal loops can often be precursors of solar flares and CMEs. The solar plasma that feeds these structures is heated from under 6 000 K to well over 10^6 K on its way from the photosphere, through the transition region, and into the corona. Often, the solar plasma fills these loops from one foot point and drains from another ( siphon flow due to a pressure difference, [ 13 ] or asymmetric flow due to some other driver). When the plasma rises from the foot points towards the loop top, as always occurs during the initial phase of a compact flare, it is defined as chromospheric evaporation. When the plasma rapidly cools and falls toward the photosphere, it is called chromospheric condensation. There may also be symmetric flow from both loop foot points, causing a build-up of mass in the loop structure. The plasma may cool rapidly in this region (due to a thermal instability), forming dark filaments visible against the solar disk or prominences seen off the Sun's limb . Coronal loops may have lifetimes on the order of seconds (in the case of flare events), minutes, hours or days. Where there is a balance in loop energy sources and sinks, coronal loops can last for long periods of time and are known as steady-state or quiescent coronal loops. Coronal loops are very important to our understanding of the current coronal heating problem . Coronal loops are strongly radiating sources of plasma and are therefore easy to observe with instruments such as TRACE . The coronal heating problem remains open because these structures are observed remotely, with many ambiguities present (i.e., contributions to the radiation from along the line of sight ). In-situ measurements are required before a definitive answer can be reached, but due to the high plasma temperatures in the corona, in-situ measurements are, at present, impossible. NASA's Parker Solar Probe will approach the Sun very closely, allowing more direct observations. Large-scale structures are very long arcs which can cover over a quarter of the solar disk but contain plasma less dense than in the coronal loops of the active regions. They were first detected in the June 8, 1968, flare observation during a rocket flight.
[ 14 ] The large-scale structure of the corona changes over the 11-year solar cycle and becomes particularly simple during the minimum period, when the magnetic field of the Sun is nearly dipolar (plus a quadrupolar component). The interconnections of active regions are arcs connecting zones of opposite magnetic field in different active regions. Significant variations of these structures are often seen after a flare. [ 15 ] Some other features of this kind are helmet streamers – large, cap-like coronal structures with long, pointed peaks that usually overlie sunspots and active regions. Coronal streamers are considered to be sources of the slow solar wind. [ 15 ] Filament cavities are zones which look dark in the X-rays and are above the regions where Hα filaments are observed in the chromosphere. They were first observed in the two 1970 rocket flights which also detected coronal holes . [ 14 ] Filament cavities are cooler clouds of plasma suspended above the Sun's surface by magnetic forces. The regions of intense magnetic field look dark in images because they are empty of hot plasma. In fact, the sum of the magnetic pressure and plasma pressure must be constant everywhere in the heliosphere in order to have an equilibrium configuration: where the magnetic field is higher, the plasma must be cooler or less dense. The plasma pressure $p$ can be calculated from the equation of state of a perfect gas, $p = n k_{\mathrm{B}} T$, where $n$ is the particle number density , $k_{\mathrm{B}}$ the Boltzmann constant and $T$ the plasma temperature. It is evident from the equation that the plasma pressure is lowered when the plasma temperature decreases with respect to the surrounding regions or when the zone of intense magnetic field empties of plasma. The same physical effect renders sunspots apparently dark in the photosphere. [ citation needed ] Bright points are small active regions found on the solar disk. X-ray bright points were first detected on April 8, 1969, during a rocket flight. [ 14 ] The fraction of the solar surface covered by bright points varies with the solar cycle. They are associated with small bipolar regions of the magnetic field. Their average temperature ranges from 1.1 MK to 3.4 MK. The variations in temperature are often correlated with changes in the X-ray emission. [ 16 ] Coronal holes are unipolar regions which look dark in the X-rays since they do not emit much radiation. [ 17 ] These are wide zones of the Sun where the magnetic field is unipolar and opens towards interplanetary space. The high-speed solar wind arises mainly from these regions. In UV images of the coronal holes, some small structures, similar to elongated bubbles, are often seen as if they were suspended in the solar wind; these are the coronal plumes. More precisely, they are long thin streamers that project outward from the Sun's north and south poles. [ 18 ] The solar regions which are not part of active regions and coronal holes are commonly identified as the quiet Sun. The equatorial region has a faster rotation speed than the polar zones. The result of the Sun's differential rotation is that the active regions always arise in two bands parallel to the equator, and their extension increases during the periods of maximum of the solar cycle, while they almost disappear during each minimum. Therefore, the quiet Sun always coincides with the equatorial zone and its surface is less active during the maximum of the solar cycle.
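A rough numerical sketch of the pressure balance described above for filament cavities and sunspots follows. The density and temperature are taken from the active-region values quoted earlier, while the magnetic field strength is an assumption made only for this illustration (it is not quoted in this article).

```python
# Sketch of the coronal pressure balance: plasma pressure n*k_B*T versus
# magnetic pressure B^2/(2*mu_0). Input values are assumed for illustration.
import math

k_B = 1.380649e-23            # Boltzmann constant, J/K
mu_0 = 4 * math.pi * 1e-7     # vacuum permeability, T m/A

n = 1e16    # particle number density, m^-3 (active-region order of magnitude)
T = 3e6     # plasma temperature, K (within the 2-4 MK range quoted above)
B = 1e-3    # magnetic field strength, tesla (10 gauss, assumed)

p_plasma = n * k_B * T             # ~0.4 Pa
p_magnetic = B**2 / (2 * mu_0)     # ~0.4 Pa for the assumed field

print(f"plasma pressure   ~ {p_plasma:.2f} Pa")
print(f"magnetic pressure ~ {p_magnetic:.2f} Pa")
# Where B is stronger, n*T must be lower for the total pressure to stay constant,
# which is why filament cavities and sunspots appear dark.
```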
Approaching the minimum of the solar cycle (also named the butterfly cycle), the extension of the quiet Sun increases until it covers the whole disk surface, excluding some bright points over the disk and the poles, where there are coronal holes. The Alfvén surface is the boundary separating the corona from the solar wind, defined as the surface where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal. [ 19 ] [ 20 ] Researchers were unsure exactly where the Alfvén critical surface of the Sun lay. Based on remote images of the corona, estimates had put it somewhere between 10 and 20 solar radii from the surface of the Sun. On April 28, 2021, during its eighth flyby of the Sun, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it had penetrated the Alfvén surface. [ 21 ] The dynamics of the main coronal structures present a picture as varied as the one already described for the coronal features themselves, since the different structures evolve on very different timescales. Studying coronal variability in all its complexity is not easy, because the evolution times of the different structures can vary considerably, from seconds to several months; the typical sizes of the regions where coronal events take place vary in a similar way. Flares take place in active regions and are characterized by a sudden increase of the radiative flux emitted from small regions of the corona. They are very complex phenomena, visible at different wavelengths; they involve several zones of the solar atmosphere and many physical effects, both thermal and non-thermal, and sometimes large-scale reconnection of the magnetic field lines with expulsion of material. Flares are impulsive phenomena, with an average duration of about 15 minutes, though the most energetic events can last several hours. Flares produce a large and rapid increase of density and temperature. An emission in white light is only seldom observed: usually, flares are only seen at extreme UV wavelengths and in the X-rays, typical of chromospheric and coronal emission. In the corona, the morphology of flares is described by observations in the UV, soft and hard X-rays, and in Hα wavelengths, and is very complex. However, two kinds of basic structures can be distinguished: [ 22 ] As for temporal dynamics, three different phases are generally distinguished, whose durations are not comparable. The durations of those periods depend on the range of wavelengths used to observe the event. Sometimes a phase preceding the flare can also be observed, usually called the "pre-flare" phase. Often accompanying large solar flares and prominences are coronal mass ejections (CMEs). These are enormous emissions of coronal material and magnetic field that travel outward from the Sun at up to 3000 km/s, [ 24 ] containing roughly 10 times the energy of the solar flare or prominence that accompanies them. Some larger CMEs can propel hundreds of millions of tons of material into interplanetary space at roughly 1.5 million kilometers an hour. [ citation needed ]
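As a back-of-the-envelope illustration of the CME figures quoted above, the sketch below converts a mass and speed of the stated order into a kinetic energy. The specific numbers chosen within the quoted ranges are assumptions made only for this example.

```python
# Rough kinetic-energy estimate for a large CME, using figures of the order
# quoted in the text: "hundreds of millions of tons" at "roughly 1.5 million km/h".
mass_kg = 5e11                       # 500 million metric tons (assumed, within the quoted range)
speed_m_s = 1.5e6 * 1000 / 3600      # 1.5 million km/h converted to m/s (~417 km/s)

kinetic_energy_J = 0.5 * mass_kg * speed_m_s**2
print(f"speed  ~ {speed_m_s/1000:.0f} km/s")
print(f"energy ~ {kinetic_energy_J:.1e} J")   # a few times 10^22 J for these inputs
```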
Coronal stars are ubiquitous among the stars in the cool half of the Hertzsprung–Russell diagram . [ 25 ] These coronae can be detected using X-ray telescopes . Some stellar coronae, particularly in young stars, are much more luminous than the Sun's. For example, FK Comae Berenices is the prototype for the FK Com class of variable star . These are giants of spectral types G and K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous ( L x ≥ 10^32 erg·s^−1 , or 10^25 W) and the hottest known, with dominant temperatures up to 40 MK. [ 25 ] The astronomical observations carried out with the Einstein Observatory by Giuseppe Vaiana and his group [ 26 ] showed that F-, G-, K- and M-type stars have chromospheres and often coronae much like the Sun's. O- and B-type stars , which do not have surface convection zones, show strong X-ray emission. However, these stars do not have coronae; instead, their outer stellar envelopes emit this radiation during shocks due to thermal instabilities in rapidly moving gas blobs. A-type stars also lack convection zones, but they do not emit at UV and X-ray wavelengths. Thus they appear to have neither chromospheres nor coronae. The matter in the external part of the solar atmosphere is in the state of plasma , at very high temperature (a few million kelvin) and at very low density (of the order of 10^15 particles/m^3). According to the definition of plasma, it is a quasi-neutral ensemble of particles which exhibits a collective behaviour. The composition is similar to that in the Sun's interior, mainly hydrogen, but with much greater ionization of its heavier elements than that found in the photosphere. Heavier metals, such as iron, are partially ionized and have lost most of their outer electrons. The ionization state of a chemical element depends strongly on the temperature and is regulated by the Saha equation in the lowest atmosphere, but by collisional equilibrium in the optically thin corona. Historically, the presence of the spectral lines emitted from highly ionized states of iron allowed determination of the high temperature of the coronal plasma, revealing that the corona is much hotter than the internal layers of the chromosphere. The corona behaves like a gas which is very hot but very light at the same time: the pressure in the corona is usually only 0.1 to 0.6 Pa in active regions, while on Earth the atmospheric pressure is about 100 kPa, roughly a million times higher than the coronal pressure. However, it is not properly a gas, because it is made of charged particles, basically protons and electrons, moving at different velocities. Supposing that they have the same kinetic energy on average (by the equipartition theorem ), electrons, with a mass roughly 1,800 times smaller than that of protons, acquire higher velocities. Metal ions are always slower. This fact has important physical consequences both for radiative processes (which are very different from photospheric radiative processes) and for thermal conduction. Furthermore, the presence of electric charges induces the generation of electric currents and high magnetic fields. Magnetohydrodynamic waves (MHD waves) can also propagate in this plasma, [ 27 ] even though it is still not clear how they can be transmitted or generated in the corona. Coronal plasma is optically thin and therefore transparent to the electromagnetic radiation that it emits and to that coming from lower layers. The plasma is very rarefied and the photon mean free path far exceeds all the other length scales, including the typical sizes of common coronal features. [ citation needed ]
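The claim above that electrons, sharing the same average kinetic energy as the protons, move much faster can be checked with a short sketch. The temperature used is an assumed value within the "few million kelvin" range quoted above.

```python
from math import sqrt

k_B = 1.380649e-23   # Boltzmann constant, J/K
m_e = 9.109e-31      # electron mass, kg
m_p = 1.673e-27      # proton mass, kg
T = 2e6              # coronal temperature, K (assumed)

# For equal average kinetic energies (equipartition), the typical thermal speed
# scales as 1/sqrt(mass), so electrons move ~sqrt(m_p/m_e) ~ 43 times faster.
v_e = sqrt(3 * k_B * T / m_e)
v_p = sqrt(3 * k_B * T / m_p)

print(f"electron thermal speed ~ {v_e/1e3:.0f} km/s")
print(f"proton thermal speed   ~ {v_p/1e3:.0f} km/s")
print(f"speed ratio            ~ {v_e/v_p:.0f}")
```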
Electromagnetic radiation from the corona has been identified as coming from three main sources, located in the same volume of space. In the corona, thermal conduction occurs from the hotter outer atmosphere towards the cooler inner layers. The electrons, which are much lighter than ions and move faster (as explained above), are responsible for the diffusion of heat. When there is a magnetic field, the thermal conductivity of the plasma becomes higher in the direction parallel to the field lines than in the perpendicular direction. [ 29 ] A charged particle moving in the direction perpendicular to the magnetic field line is subject to the Lorentz force , which is normal to the plane defined by the velocity and the magnetic field. This force bends the path of the particle. In general, since particles also have a velocity component along the magnetic field line, the Lorentz force constrains them to bend and move along spirals around the field lines at the cyclotron frequency. If collisions between the particles are very frequent, they are scattered in every direction. This happens in the photosphere, where the plasma carries the magnetic field in its motion. In the corona, by contrast, the mean free path of the electrons is of the order of kilometres or more, so each electron can follow a helical path for a long distance before being scattered by a collision. Therefore, the heat transfer is enhanced along the magnetic field lines and inhibited in the perpendicular direction. In the direction longitudinal to the magnetic field, the thermal conductivity of the corona is [ 29 ] $$k = 20\left(\frac{2}{\pi}\right)^{3/2}\frac{\left(k_{\mathrm{B}}T\right)^{5/2}k_{\mathrm{B}}}{m_{e}^{1/2}\,e^{4}\,\ln\Lambda} \approx \frac{T^{5/2}}{\ln\Lambda}\times 1.8\times10^{-10}~\mathrm{W\,m^{-1}\,K^{-1}},$$ where $k_{\mathrm{B}}$ is the Boltzmann constant , $T$ is the temperature in kelvin, $m_{e}$ is the electron mass, $e$ is the electric charge of the electron, $\ln\Lambda = \ln\left(12\pi n\lambda_{D}^{3}\right)$ is the Coulomb logarithm, and $\lambda_{D} = \sqrt{k_{\mathrm{B}}T/(4\pi n e^{2})}$ is the Debye length of the plasma with particle density $n$. The Coulomb logarithm $\ln\Lambda$ is roughly 20 in the corona, where the mean temperature is 1 MK and the density 10^15 particles/m^3, and about 10 in the chromosphere, where the temperature is approximately 10 kK and the particle density is of the order of 10^18 particles/m^3; in practice it can be assumed constant. Hence, if $q$ denotes the heat per unit volume, expressed in J m^−3, the Fourier equation of heat transfer, computed only along the direction $x$ of the field line, becomes $$\frac{\partial q}{\partial t} = 0.9\times10^{-11}~\frac{\partial^{2}T^{7/2}}{\partial x^{2}}.$$ Numerical calculations have shown that the thermal conductivity of the corona is comparable to that of copper.
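The longitudinal conductivity formula above is easy to evaluate numerically. The sketch below plugs in the coronal and chromospheric temperatures and Coulomb logarithms quoted in the text; copper's room-temperature conductivity is included only as an assumed reference value for comparison.

```python
def field_aligned_conductivity(T_kelvin: float, ln_lambda: float) -> float:
    """Field-aligned thermal conductivity, k ~ 1.8e-10 * T^(5/2) / ln(Lambda), in W m^-1 K^-1."""
    return 1.8e-10 * T_kelvin**2.5 / ln_lambda

# Corona: T ~ 1 MK, ln(Lambda) ~ 20 (values quoted in the text)
k_corona = field_aligned_conductivity(1e6, 20)
# Chromosphere: T ~ 10 kK, ln(Lambda) ~ 10
k_chromosphere = field_aligned_conductivity(1e4, 10)

print(f"corona       : {k_corona:.0f} W/(m K)")        # ~9,000 W/(m K)
print(f"chromosphere : {k_chromosphere:.2f} W/(m K)")   # ~0.2 W/(m K)
print("copper (room temperature, reference value): ~400 W/(m K)")
```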
Coronal seismology is a method of studying the plasma of the solar corona with the use of magnetohydrodynamic (MHD) waves. MHD studies the dynamics of electrically conducting fluids – in this case, the fluid is the coronal plasma. Philosophically, coronal seismology is similar to the Earth's seismology , the Sun's helioseismology , and MHD spectroscopy of laboratory plasma devices. In all these approaches, waves of various kinds are used to probe a medium. The potential of coronal seismology for estimating the coronal magnetic field, density scale height , fine structure and heating has been demonstrated by different research groups. The coronal heating problem in solar physics relates to the question of why the temperature of the Sun's corona is millions of kelvins, far greater than the thousands of kelvins of the surface. Several theories have been proposed to explain this phenomenon, but it is still challenging to determine which is correct. [ 30 ] The problem first emerged after the identification of unknown spectral lines in the solar spectrum with highly ionized iron and calcium atoms. [ 31 ] [ 30 ] Comparing the coronal temperature with the photospheric temperature of about 6 000 K leads to the question of how the 200-times-hotter coronal temperature can be maintained. [ 31 ] The problem is primarily concerned with how the energy is transported up into the corona and then converted into heat within a few solar radii. [ 32 ] The high temperatures require energy to be carried from the solar interior to the corona by non-thermal processes, because the second law of thermodynamics prevents heat from flowing directly from the solar photosphere (surface), which is at about 5 800 K , to the much hotter corona at about 1 to 3 MK (parts of the corona can even reach 10 MK ). Between the photosphere and the corona, the thin region through which the temperature increases is known as the transition region . It is only tens to hundreds of kilometers thick. Energy cannot be transferred from the cooler photosphere to the corona by conventional heat transfer, as this would violate the second law of thermodynamics. An analogy would be a light bulb raising the temperature of the air surrounding it to a temperature greater than that of its glass surface. Hence, some other manner of energy transfer must be involved in the heating of the corona. The amount of power required to heat the solar corona can easily be calculated as the difference between coronal radiative losses and heating by thermal conduction toward the chromosphere through the transition region. It is about 1 kilowatt for every square meter of surface area on the Sun's chromosphere, or 1/40 000 of the amount of light energy that escapes the Sun. Many coronal heating theories have been proposed, [ 33 ] but two theories have remained as the most likely candidates: wave heating and magnetic reconnection (or nanoflares ). [ 34 ] Through most of the past 50 years, neither theory has been able to fully account for the extreme coronal temperatures. In 2012, high-resolution (<0.2″) soft X-ray imaging with the High Resolution Coronal Imager aboard a sounding rocket revealed tightly wound braids in the corona. It is hypothesized that the reconnection and unravelling of braids can act as primary sources of heating of the active solar corona to temperatures of up to 4 million kelvin. The main heat source in the quiescent corona (about 1.5 million kelvin) is assumed to originate from MHD waves.
[ 35 ] NASA 's Parker Solar Probe is intended to approach the Sun to a distance of approximately 9.5 solar radii to investigate coronal heating and the origin of the solar wind. It was successfully launched on August 12, 2018 [ 36 ] and by late 2022 had completed the first 13 of more than 20 planned close approaches to the Sun. [ 37 ] The wave heating theory, proposed in 1949 by Evry Schatzman , proposes that waves carry energy from the solar interior to the solar chromosphere and corona. The Sun is made of plasma rather than ordinary gas, so it supports several types of waves analogous to sound waves in air. The most important types of wave are magneto-acoustic waves and Alfvén waves . [ 38 ] Magneto-acoustic waves are sound waves that have been modified by the presence of a magnetic field, and Alfvén waves are similar to ultra low frequency radio waves that have been modified by interaction with matter in the plasma. Both types of waves can be launched by the turbulence of granulation and super granulation at the solar photosphere, and both types of waves can carry energy for some distance through the solar atmosphere before turning into shock waves that dissipate their energy as heat. One problem with wave heating is delivery of the heat to the appropriate place. Magneto-acoustic waves cannot carry sufficient energy upward through the chromosphere to the corona, both because of the low pressure present in the chromosphere and because they tend to be reflected back to the photosphere. Alfvén waves can carry enough energy, but do not dissipate that energy rapidly enough once they enter the corona. Waves in plasmas are notoriously difficult to understand and describe analytically, but computer simulations, carried out by Thomas Bogdan and colleagues in 2003, seem to show that Alfvén waves can transmute into other wave modes at the base of the corona, providing a pathway that can carry large amounts of energy from the photosphere through the chromosphere and transition region and finally into the corona where it dissipates it as heat. Another problem with wave heating has been the complete absence, until the late 1990s, of any direct evidence of waves propagating through the solar corona. The first direct observation of waves propagating into and through the solar corona was made in 1997 with the Solar and Heliospheric Observatory space-borne solar observatory, the first platform capable of observing the Sun in the extreme ultraviolet (EUV) for long periods of time with stable photometry . Those were magneto-acoustic waves with a frequency of about 1 millihertz (mHz, corresponding to a 1 000 second wave period), that carry only about 10% of the energy required to heat the corona. Many observations exist of localized wave phenomena, such as Alfvén waves launched by solar flares, but those events are transient and cannot explain the uniform coronal heat. It is not yet known exactly how much wave energy is available to heat the corona. Results published in 2004 using data from the TRACE spacecraft seem to indicate that there are waves in the solar atmosphere at frequencies as high as 100 mHz (10 second period). Measurements of the temperature of different ions in the solar wind with the UVCS instrument aboard SOHO give strong indirect evidence that there are waves at frequencies as high as 200 Hz , well into the range of human hearing. 
These waves are very difficult to detect under normal circumstances, but evidence collected during solar eclipses by teams from Williams College suggest the presences of such waves in the 1– 10 Hz range. Recently, Alfvénic motions have been found in the lower solar atmosphere [ 39 ] [ 40 ] and also in the quiet Sun, in coronal holes and in active regions using observations with AIA on board the Solar Dynamics Observatory . [ 41 ] These Alfvénic oscillations have significant power, and seem to be connected to the chromospheric Alfvénic oscillations previously reported with the Hinode spacecraft. [ 42 ] Solar wind observations with the Wind spacecraft have recently shown evidence to support theories of Alfvén-cyclotron dissipation, leading to local ion heating. [ 43 ] The magnetic reconnection theory relies on the solar magnetic field to induce electric currents in the solar corona. [ 44 ] The currents then collapse suddenly, releasing energy as heat and wave energy in the corona. This process is called "reconnection" because of the peculiar way that magnetic fields behave in plasma (or any electrically conductive fluid such as mercury or seawater ). In a plasma, magnetic field lines are normally tied to individual pieces of matter, so that the topology of the magnetic field remains the same: if a particular north and south magnetic pole are connected by a single field line, then even if the plasma is stirred or if the magnets are moved around, that field line will continue to connect those particular poles. The connection is maintained by electric currents that are induced in the plasma. Under certain conditions, the electric currents can collapse, allowing the magnetic field to "reconnect" to other magnetic poles and release heat and wave energy in the process. Magnetic reconnection is hypothesized to be the mechanism behind solar flares, the largest explosions in the Solar System. Furthermore, the surface of the Sun is covered with millions of small magnetized regions 50– 1 000 km across. These small magnetic poles are buffeted and churned by the constant granulation. The magnetic field in the solar corona must undergo nearly constant reconnection to match the motion of this "magnetic carpet", so the energy released by the reconnection is a natural candidate for the coronal heat, perhaps as a series of "microflares" that individually provide very little energy but together account for the required energy. The idea that nanoflares might heat the corona was proposed by Eugene Parker in the 1980s but is still controversial. In particular, ultraviolet telescopes such as TRACE and SOHO /EIT can observe individual micro-flares as small brightenings in extreme ultraviolet light, [ 45 ] but there seem to be too few of these small events to account for the energy released into the corona. The additional energy not accounted for could be made up by wave energy, or by gradual magnetic reconnection that releases energy more smoothly than micro-flares and therefore does not appear well in the TRACE data. Variations on the micro-flare hypothesis use other mechanisms to stress the magnetic field or to release the energy, and are a subject of active research in 2005. For decades, researchers believed spicules could send heat into the corona. However, following observational research in the 1980s, it was found that spicule plasma did not reach coronal temperatures, and so the theory was discounted. 
As per studies performed in 2010 at the National Center for Atmospheric Research in Colorado , in collaboration with the Lockheed Martin's Solar and Astrophysics Laboratory (LMSAL) and the Institute of Theoretical Astrophysics of the University of Oslo , a new class of spicules (TYPE II) discovered in 2007, which travel faster (up to 100 km/s) and have shorter lifespans, can account for the problem. [ 46 ] These jets insert heated plasma into the Sun's outer atmosphere. The Atmospheric Imaging Assembly on NASA's Solar Dynamics Observatory and NASA's Focal Plane Package for the Solar Optical Telescope on the Japanese Hinode satellite were used to test this hypothesis. The high spatial and temporal resolutions of the newer instruments reveal this coronal mass supply. According to analysis in 2011 by de Pontieu and colleagues, these observations reveal a one-to-one connection between plasma that is heated to millions of degrees and the spicules that insert this plasma into the corona. [ 47 ]
https://en.wikipedia.org/wiki/Stellar_corona
A stellar encounter is an astronomical event in which two or more stars pass within a close distance of each other. [ 2 ] Encounters between stars outside dense regions are rare, but they are more frequent in regions dense with stars such as star clusters or multiple star systems . Impacts between two stars do happen but are extremely rare events. [ 3 ] Such stellar encounters can cause both star systems to exchange material such as cosmic dust and planets . After such encounters, especially for stars with protoplanetary disks , both systems can emerge with material from the other system. [ 4 ] Stars with protoplanetary disks in star-rich regions undergo background heating, disk truncation and photoevaporation . These effects can halt the growth of gas giant planets during the planetary formation phase, or prevent gas giant planets from forming at all. [ 5 ] Stars that pass within 1–2 parsecs of the Sun can have a major effect on the Solar System . Their gravity can perturb objects in the Oort Cloud , sending comets from the outer Solar System into the inner Solar System in a "comet shower", [ 6 ] some of which may collide with planet Earth . [ 7 ] [ 2 ] Stellar encounters with hot stars can expose the Earth to powerful UV radiation . A nearby supernova can be devastating to life on Earth, eroding the ozone layer and potentially causing the total extinction of life on this planet. [ 7 ] Especially close stellar encounters can affect the orbits of the planets. A close star can disturb the orbits of the outer gas giant planets ( Jupiter , Saturn , Uranus and Neptune ). Their altered orbits can then affect the orbits of the other giant planets and the terrestrial planets, including Earth. Such alterations to the orbit of Earth can have major climate effects and cause major extinction events . [ 8 ]
https://en.wikipedia.org/wiki/Stellar_encounter
Stellar engineering is a type of engineering (currently a form of exploratory engineering ) concerned with creating or modifying stars through artificial means. While humanity does not yet possess the technological ability to perform stellar engineering of any kind, stellar manipulation (or husbandry), requiring substantially less technological advancement than would be needed to create a new star, could eventually be performed in order to stabilize or prolong the lifetime of a star, mine it for useful material (known as star lifting ) or use it as a direct energy source. Since a civilization advanced enough to be capable of manufacturing a new star would likely have vast material and energy resources at its disposal, it almost certainly wouldn't need to do so. Many science fiction authors have explored the possible applications of stellar engineering, among them Iain M. Banks , Larry Niven and Arthur C. Clarke . In the novel series Star Carrier by Ian Douglas, the Sh’daar species merges many stars to make blue giants, which then explode to become black holes. These perfectly synchronized black holes form a Tipler cylinder called the Texagu Resh gravitational anomaly. In the novel series The Book of The New Sun by Gene Wolfe , the brightness of Urth's sun seems to have been reduced by artificial means. In the season 3 (1989) episode "Take Me to Your Leader" of the 1987 Teenage Mutant Ninja Turtles cartoon , Krang , Shredder , Bebop and Rocksteady use a Solar Siphon aimed at the Sun to store its energy in compact batteries, freezing the Earth and making it too cold for people to resist them. Once the Turtles have defeated them, Donatello reverses the flow. [ 1 ] In episode 12 of Stargate Universe , Destiny is dropped prematurely out of FTL by an uncharted star that the crew determines to be artificially created and younger than 200 million years old; the only planet in the system is an Earth -sized world with a biosphere exactly like Earth's. In Firefly (TV series) , set 500 years in the future, several gas giants are "Helioformed" to create viable suns for the surrounding planets and moons. In the Space Empires series, the last available technology for research is called Stellar Manipulation. In addition to the ability to create and destroy stars, this branch also gives a race the ability to create and destroy black holes , wormholes , nebulae , planets , ringworlds and sphereworlds . Just as described above, this technology is so advanced that once the player has the ability to use it, they usually don't need it anymore. This is even more the case with the last two; once one of these megastructures is complete, the race controlling the ringworld or sphereworld has almost unlimited resources, usually leading to the defeat of the others. [ 2 ] In The Saga of the Seven Suns , by Kevin J. Anderson, humans are able to convert gas giant planets into stars through the use of a "Klikiss Torch". This device creates a wormhole between two points in space, allowing a neutron star to be dropped into the planet to ignite stellar nuclear fusion.
https://en.wikipedia.org/wiki/Stellar_engineering
Stellar evolution is the process by which a star changes over the course of time. Depending on the mass of the star, its lifetime can range from a few million years for the most massive to trillions of years for the least massive, which is considerably longer than the current age of the universe . The table shows the lifetimes of stars as a function of their masses. [ 1 ] All stars are formed from collapsing clouds of gas and dust, often called nebulae or molecular clouds . Over the course of millions of years, these protostars settle down into a state of equilibrium, becoming what is known as a main-sequence star. Nuclear fusion powers a star for most of its existence. Initially the energy is generated by the fusion of hydrogen atoms at the core of the main-sequence star. Later, as the preponderance of atoms at the core becomes helium , stars like the Sun begin to fuse hydrogen along a spherical shell surrounding the core. This process causes the star to gradually grow in size, passing through the subgiant stage until it reaches the red-giant phase. Stars with at least half the mass of the Sun can also begin to generate energy through the fusion of helium at their core, whereas more-massive stars can fuse heavier elements along a series of concentric shells. Once a star like the Sun has exhausted its nuclear fuel, its core collapses into a dense white dwarf and the outer layers are expelled as a planetary nebula . Stars with around ten or more times the mass of the Sun can explode in a supernova as their inert iron cores collapse into an extremely dense neutron star or black hole . Although the universe is not old enough for any of the smallest red dwarfs to have reached the end of their existence, stellar models suggest they will slowly become brighter and hotter before running out of hydrogen fuel and becoming low-mass white dwarfs. [ 2 ] Stellar evolution is not studied by observing the life of a single star, as most stellar changes occur too slowly to be detected, even over many centuries. Instead, astrophysicists come to understand how stars evolve by observing numerous stars at various points in their lifetime, and by simulating stellar structure using computer models . Stellar evolution starts with the gravitational collapse of a giant molecular cloud . Typical giant molecular clouds are roughly 100 light-years (9.5 × 10 14 km) across and contain up to 6,000,000 solar masses (1.2 × 10 37 kg ). As it collapses, a giant molecular cloud breaks into smaller and smaller pieces. In each of these fragments, the collapsing gas releases gravitational potential energy as heat. As its temperature and pressure increase, a fragment condenses into a rotating ball of superhot gas known as a protostar . [ 3 ] Filamentary structures are truly ubiquitous in the molecular cloud. Dense molecular filaments will fragment into gravitationally bound cores, which are the precursors of stars. Continuous accretion of gas, geometrical bending, and magnetic fields may control the detailed fragmentation manner of the filaments. In supercritical filaments, observations have revealed quasi-periodic chains of dense cores with spacing comparable to the filament inner width, and embedded two protostars with gas outflows. [ 4 ] A protostar continues to grow by accretion of gas and dust from the molecular cloud, becoming a pre-main-sequence star as it reaches its final mass. Further development is determined by its mass. Mass is typically compared to the mass of the Sun : 1.0 M ☉ (2.0 × 10 30 kg) means 1 solar mass. 
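The mass–lifetime table referred to above is not reproduced here, but a commonly used rough scaling gives a feel for it: taking luminosity to scale roughly as M^3.5 and available fuel as M, the main-sequence lifetime scales as roughly M^-2.5 relative to the Sun's ~10 billion years. The exponent and the solar lifetime are textbook approximations assumed for this sketch, not values taken from the cited table, and the relation becomes crude at the lowest and highest masses.

```python
def main_sequence_lifetime_gyr(mass_solar: float,
                               solar_lifetime_gyr: float = 10.0,
                               exponent: float = 2.5) -> float:
    """Rough main-sequence lifetime in Gyr from t ~ 10 Gyr * (M/M_sun)**-2.5 (order of magnitude only)."""
    return solar_lifetime_gyr * mass_solar ** (-exponent)

for mass in (0.5, 1.0, 2.0, 10.0, 40.0):
    print(f"{mass:5.1f} M_sun -> ~{main_sequence_lifetime_gyr(mass):9.3f} Gyr")
# With these assumptions a 0.5 M_sun dwarf lasts ~57 Gyr (longer than the age of
# the universe), 10 M_sun gives ~0.03 Gyr, and 40 M_sun ~0.001 Gyr; the simple
# power law is increasingly approximate at the extremes of the mass range.
```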
Protostars are encompassed in dust, and are thus more readily visible at infrared wavelengths. Observations from the Wide-field Infrared Survey Explorer (WISE) have been especially important for unveiling numerous galactic protostars and their parent star clusters . [ 5 ] [ 6 ] Protostars with masses less than roughly 0.08 M ☉ (1.6 × 10 29 kg) never reach temperatures high enough for nuclear fusion of hydrogen to begin. These are known as brown dwarfs . The International Astronomical Union defines brown dwarfs as stars massive enough to fuse deuterium at some point in their lives (13 Jupiter masses ( M J ), 2.5 × 10 28 kg, or 0.0125 M ☉ ). Objects smaller than 13 M J are classified as sub-brown dwarfs (but if they orbit around another stellar object they are classified as planets). [ 7 ] Both types, deuterium-burning and not, shine dimly and fade away slowly, cooling gradually over hundreds of millions of years. For a more-massive protostar, the core temperature will eventually reach 10 million kelvin , initiating the proton–proton chain reaction and allowing hydrogen to fuse, first to deuterium and then to helium . In stars of slightly over 1 M ☉ (2.0 × 10 30 kg), the carbon–nitrogen–oxygen fusion reaction ( CNO cycle ) contributes a large portion of the energy generation. The onset of nuclear fusion leads relatively quickly to a hydrostatic equilibrium in which energy released by the core maintains a high gas pressure, balancing the weight of the star's matter and preventing further gravitational collapse. The star thus evolves rapidly to a stable state, beginning the main-sequence phase of its evolution. A new star will sit at a specific point on the main sequence of the Hertzsprung–Russell diagram , with the main-sequence spectral type depending upon the mass of the star. Small, relatively cold, low-mass red dwarfs fuse hydrogen slowly and will remain on the main sequence for hundreds of billions of years or longer, whereas massive, hot O-type stars will leave the main sequence after just a few million years. A mid-sized yellow dwarf star, like the Sun, will remain on the main sequence for about 10 billion years. The Sun is thought to be in the middle of its main sequence lifespan. A star may gain a protoplanetary disk , which furthermore can develop into a planetary system . Eventually the star's core exhausts its supply of hydrogen and the star begins to evolve off the main sequence . Without the outward radiation pressure generated by the fusion of hydrogen to counteract the force of gravity , the core contracts until either electron degeneracy pressure becomes sufficient to oppose gravity or the core becomes hot enough (around 100 MK) for helium fusion to begin. Which of these happens first depends upon the star's mass. What happens after a low-mass star ceases to produce energy through fusion has not been directly observed; the universe is around 13.8 billion years old, which is less time (by several orders of magnitude, in some cases) than it takes for fusion to cease in such stars. Recent astrophysical models suggest that red dwarfs of 0.1 M ☉ may stay on the main sequence for some six to twelve trillion years, gradually increasing in both temperature and luminosity , and take several hundred billion years more to collapse, slowly, into a white dwarf . [ 9 ] [ 10 ] Such stars will not become red giants as the whole star is a convection zone and it will not develop a degenerate helium core with a shell burning hydrogen. 
Instead, hydrogen fusion will proceed until almost the whole star is helium. Slightly more massive stars do expand into red giants , but their helium cores are not massive enough to reach the temperatures required for helium fusion so they never reach the tip of the red-giant branch. When hydrogen shell burning finishes, these stars move directly off the red-giant branch like a post- asymptotic-giant-branch (AGB) star, but at lower luminosity, to become a white dwarf. [ 2 ] A star with an initial mass about 0.6 M ☉ will be able to reach temperatures high enough to fuse helium, and these "mid-sized" stars go on to further stages of evolution beyond the red-giant branch. [ 11 ] Stars of roughly 0.6–10 M ☉ become red giants , which are large non- main-sequence stars of stellar classification K or M. Red giants lie along the right edge of the Hertzsprung–Russell diagram due to their red color and large luminosity. Examples include Aldebaran in the constellation Taurus and Arcturus in the constellation of Boötes . Mid-sized stars are red giants during two different phases of their post-main-sequence evolution: red-giant-branch stars, with inert cores made of helium and hydrogen-burning shells, and asymptotic-giant-branch stars, with inert cores made of carbon and helium-burning shells inside the hydrogen-burning shells. [ 12 ] Between these two phases, stars spend a period on the horizontal branch with a helium-fusing core. Many of these helium-fusing stars cluster towards the cool end of the horizontal branch as K-type giants and are referred to as red clump giants. When a star exhausts the hydrogen in its core, it leaves the main sequence and begins to fuse hydrogen in a shell outside the core. The core increases in mass as the shell produces more helium. Depending on the mass of the helium core, this continues for several million to one or two billion years, with the star expanding and cooling at a similar or slightly lower luminosity to its main sequence state. Eventually either the core becomes degenerate, in stars around the mass of the sun, or the outer layers cool sufficiently to become opaque, in more massive stars. Either of these changes cause the hydrogen shell to increase in temperature and the luminosity of the star to increase, at which point the star expands onto the red-giant branch. [ 13 ] The expanding outer layers of the star are convective , with the material being mixed by turbulence from near the fusing regions up to the surface of the star. For all but the lowest-mass stars, the fused material has remained deep in the stellar interior prior to this point, so the convecting envelope makes fusion products visible at the star's surface for the first time. At this stage of evolution, the results are subtle, with the largest effects, alterations to the isotopes of hydrogen and helium, being unobservable. The effects of the CNO cycle appear at the surface during the first dredge-up , with lower 12 C/ 13 C ratios and altered proportions of carbon and nitrogen. These are detectable with spectroscopy and have been measured for many evolved stars. The helium core continues to grow on the red-giant branch. It is no longer in thermal equilibrium, either degenerate or above the Schönberg–Chandrasekhar limit , so it increases in temperature which causes the rate of fusion in the hydrogen shell to increase. The star increases in luminosity towards the tip of the red-giant branch . 
Red-giant-branch stars with a degenerate helium core all reach the tip with very similar core masses and very similar luminosities, although the more massive of the red giants become hot enough to ignite helium fusion before that point. In the helium cores of stars in the 0.6 to 2.0 solar mass range, which are largely supported by electron degeneracy pressure , helium fusion will ignite on a timescale of days in a helium flash . In the nondegenerate cores of more massive stars, the ignition of helium fusion occurs relatively slowly with no flash. [ 14 ] The nuclear power released during the helium flash is very large, on the order of 10 8 times the luminosity of the Sun for a few days [ 13 ] and 10 11 times the luminosity of the Sun (roughly the luminosity of the Milky Way Galaxy ) for a few seconds. [ 15 ] However, the energy is consumed by the thermal expansion of the initially degenerate core and thus cannot be seen from outside the star. [ 13 ] [ 15 ] [ 16 ] Due to the expansion of the core, the hydrogen fusion in the overlying layers slows and total energy generation decreases. The star contracts, although not all the way to the main sequence, and it migrates to the horizontal branch on the Hertzsprung–Russell diagram, gradually shrinking in radius and increasing its surface temperature. Core helium flash stars evolve to the red end of the horizontal branch but do not migrate to higher temperatures before they gain a degenerate carbon-oxygen core and start helium shell burning. These stars are often observed as a red clump of stars in the colour-magnitude diagram of a cluster, hotter and less luminous than the red giants. Higher-mass stars with larger helium cores move along the horizontal branch to higher temperatures, some becoming unstable pulsating stars in the yellow instability strip ( RR Lyrae variables ), whereas some become even hotter and can form a blue tail or blue hook to the horizontal branch. The morphology of the horizontal branch depends on parameters such as metallicity, age, and helium content, but the exact details are still being modelled. [ 17 ] After a star has consumed the helium at the core, hydrogen and helium fusion continues in shells around a hot core of carbon and oxygen . The star follows the asymptotic giant branch on the Hertzsprung–Russell diagram, paralleling the original red-giant evolution, but with even faster energy generation (which lasts for a shorter time). [ 18 ] Although helium is being burnt in a shell, the majority of the energy is produced by hydrogen burning in a shell further from the core of the star. Helium from these hydrogen burning shells drops towards the center of the star and periodically the energy output from the helium shell increases dramatically. This is known as a thermal pulse and they occur towards the end of the asymptotic-giant-branch phase, sometimes even into the post-asymptotic-giant-branch phase. Depending on mass and composition, there may be several to hundreds of thermal pulses. There is a phase on the ascent of the asymptotic-giant-branch where a deep convective zone forms and can bring carbon from the core to the surface. This is known as the second dredge up, and in some stars there may even be a third dredge up. In this way a carbon star is formed, very cool and strongly reddened stars showing strong carbon lines in their spectra. 
A process known as hot bottom burning may convert carbon into oxygen and nitrogen before it can be dredged to the surface, and the interaction between these processes determines the observed luminosities and spectra of carbon stars in particular clusters. [ 19 ] Another well known class of asymptotic-giant-branch stars is the Mira variables , which pulsate with well-defined periods of tens to hundreds of days and large amplitudes up to about 10 magnitudes (in the visual, total luminosity changes by a much smaller amount). In more-massive stars the stars become more luminous and the pulsation period is longer, leading to enhanced mass loss, and the stars become heavily obscured at visual wavelengths. These stars can be observed as OH/IR stars , pulsating in the infrared and showing OH maser activity. These stars are clearly oxygen rich, in contrast to the carbon stars, but both must be produced by dredge ups. These mid-range stars ultimately reach the tip of the asymptotic-giant-branch and run out of fuel for shell burning. They are not sufficiently massive to start full-scale carbon fusion, so they contract again, going through a period of post-asymptotic-giant-branch superwind to produce a planetary nebula with an extremely hot central star. The central star then cools to a white dwarf. The expelled gas is relatively rich in heavy elements created within the star and may be particularly oxygen or carbon enriched, depending on the type of the star. The gas builds up in an expanding shell called a circumstellar envelope and cools as it moves away from the star, allowing dust particles and molecules to form. With the high infrared energy input from the central star, ideal conditions are formed in these circumstellar envelopes for maser excitation. It is possible for thermal pulses to be produced once post-asymptotic-giant-branch evolution has begun, producing a variety of unusual and poorly understood stars known as born-again asymptotic-giant-branch stars. [ 20 ] These may result in extreme horizontal-branch stars ( subdwarf B stars ), hydrogen deficient post-asymptotic-giant-branch stars, variable planetary nebula central stars, and R Coronae Borealis variables . In massive stars, the core is already large enough at the onset of the hydrogen burning shell that helium ignition will occur before electron degeneracy pressure has a chance to become prevalent. Thus, when these stars expand and cool, they do not brighten as dramatically as lower-mass stars; however, they were more luminous on the main sequence and they evolve to highly luminous supergiants. Their cores become massive enough that they cannot support themselves by electron degeneracy and will eventually collapse to produce a neutron star or black hole . [ citation needed ] Extremely massive stars (more than approximately 40 M ☉ ), which are very luminous and thus have very rapid stellar winds, lose mass so rapidly due to radiation pressure that they tend to strip off their own envelopes before they can expand to become red supergiants , and thus retain extremely high surface temperatures (and blue-white color) from their main-sequence time onwards. The largest stars of the current generation are about 100-150 M ☉ because the outer layers would be expelled by the extreme radiation. 
Although lower-mass stars normally do not burn off their outer layers so rapidly, they can likewise avoid becoming red giants or red supergiants if they are in binary systems close enough so that the companion star strips off the envelope as it expands, or if they rotate rapidly enough so that convection extends all the way from the core to the surface, resulting in the absence of a separate core and envelope due to thorough mixing. [ 21 ] The core of a massive star, defined as the region depleted of hydrogen, grows hotter and denser as it accretes material from the fusion of hydrogen outside the core. In sufficiently massive stars, the core reaches temperatures and densities high enough to fuse carbon and heavier elements via the alpha process . At the end of helium fusion, the core of a star consists primarily of carbon and oxygen. In stars heavier than about 8 M ☉ , the carbon ignites and fuses to form neon, sodium, and magnesium. Stars somewhat less massive may partially ignite carbon, but they are unable to fully fuse the carbon before electron degeneracy sets in, and these stars will eventually leave an oxygen-neon-magnesium white dwarf . [ 22 ] [ 23 ] The exact mass limit for full carbon burning depends on several factors such as metallicity and the detailed mass lost on the asymptotic giant branch , but is approximately 8-9 M ☉ . [ 22 ] After carbon burning is complete, the core of these stars reaches about 2.5 M ☉ and becomes hot enough for heavier elements to fuse. Before oxygen starts to fuse , neon begins to capture electrons which triggers neon burning . For a range of stars of approximately 8-12 M ☉ , this process is unstable and creates runaway fusion resulting in an electron capture supernova . [ 24 ] [ 23 ] In more massive stars, the fusion of neon proceeds without a runaway deflagration. This is followed in turn by complete oxygen burning and silicon burning , producing a core consisting largely of iron-peak elements . Surrounding the core are shells of lighter elements still undergoing fusion. The timescale for complete fusion of a carbon core to an iron core is so short, just a few hundred years, that the outer layers of the star are unable to react and the appearance of the star is largely unchanged. The iron core grows until it reaches an effective Chandrasekhar mass , higher than the formal Chandrasekhar mass due to various corrections for the relativistic effects, entropy, charge, and the surrounding envelope. The effective Chandrasekhar mass for an iron core varies from about 1.34 M ☉ in the least massive red supergiants to more than 1.8 M ☉ in more massive stars. Once this mass is reached, electrons begin to be captured into the iron-peak nuclei and the core becomes unable to support itself. The core collapses and the star is destroyed, either in a supernova or direct collapse to a black hole . [ 23 ] When the core of a massive star collapses, it will form a neutron star , or in the case of cores that exceed the Tolman–Oppenheimer–Volkoff limit , a black hole . Through a process that is not completely understood, some of the gravitational potential energy released by this core collapse is converted into a Type Ib, Type Ic, or Type II supernova . It is known that the core collapse produces a massive surge of neutrinos , as observed with supernova SN 1987A . 
The extremely energetic neutrinos fragment some nuclei; some of their energy is consumed in releasing nucleons , including neutrons , and some of their energy is transformed into heat and kinetic energy , thus augmenting the shock wave started by rebound of some of the infalling material from the collapse of the core. Electron capture in very dense parts of the infalling matter may produce additional neutrons. Because some of the rebounding matter is bombarded by the neutrons, some of its nuclei capture them, creating a spectrum of heavier-than-iron material including the radioactive elements up to (and likely beyond) uranium . [ 25 ] Although non-exploding red giants can produce significant quantities of elements heavier than iron using neutrons released in side reactions of earlier nuclear reactions , the abundance of elements heavier than iron (and in particular, of certain isotopes of elements that have multiple stable or long-lived isotopes) produced in such reactions is quite different from that produced in a supernova. Neither abundance alone matches that found in the Solar System , so both supernovae, neutron star mergers [ 26 ] and ejection of elements from red giants are required to explain the observed abundance of heavy elements and isotopes thereof. The energy transferred from collapse of the core to rebounding material not only generates heavy elements, but provides for their acceleration well beyond escape velocity , thus causing a Type Ib, Type Ic, or Type II supernova. Current understanding of this energy transfer is still not satisfactory; although current computer models of Type Ib, Type Ic, and Type II supernovae account for part of the energy transfer, they are not able to account for enough energy transfer to produce the observed ejection of material. [ 27 ] However, neutrino oscillations may play an important role in the energy transfer problem as they not only affect the energy available in a particular flavour of neutrinos but also through other general-relativistic effects on neutrinos. [ 28 ] [ 29 ] Some evidence gained from analysis of the mass and orbital parameters of binary neutron stars (which require two such supernovae) hints that the collapse of an oxygen-neon-magnesium core may produce a supernova that differs observably (in ways other than size) from a supernova produced by the collapse of an iron core. [ 30 ] The most massive stars that exist today may be completely destroyed by a supernova with an energy greatly exceeding its gravitational binding energy . This rare event, caused by pair-instability , leaves behind no black hole remnant. [ 31 ] In the past history of the universe, some stars were even larger than the largest that exists today, and they would immediately collapse into a black hole at the end of their lives, due to photodisintegration . After a star has burned out its fuel supply, its remnants can take one of three forms, depending on the mass during its lifetime. For a star of 1 M ☉ , the resulting white dwarf is of about 0.6 M ☉ , compressed into approximately the volume of the Earth. White dwarfs are stable because the inward pull of gravity is balanced by the degeneracy pressure of the star's electrons, a consequence of the Pauli exclusion principle . Electron degeneracy pressure provides a rather soft limit against further compression; therefore, for a given chemical composition, white dwarfs of higher mass have a smaller volume. With no fuel left to burn, the star radiates its remaining heat into space for billions of years. 
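The inverse relation between a white dwarf's mass and its size can be made plausible with a standard order-of-magnitude argument; this is a textbook scaling sketch, not a calculation taken from the sources cited here. Non-relativistic electron degeneracy pressure scales as P ∝ ρ^(5/3), hydrostatic equilibrium requires a central pressure of order P_c ~ G M^2 / R^4, and the mean density is ρ ~ M / R^3. Combining these,

G M^2 / R^4 ∝ (M / R^3)^(5/3) = M^(5/3) / R^5, hence R ∝ M^(-1/3).

Adding mass therefore shrinks the star, until relativistic effects soften the equation of state and lead to the Chandrasekhar limit discussed below.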
A white dwarf is very hot when it first forms, more than 100,000 K at the surface and even hotter in its interior. It is so hot that a lot of its energy is lost in the form of neutrinos for the first 10 million years of its existence and will have lost most of its energy after a billion years. [ 32 ] The chemical composition of the white dwarf depends upon its mass. A star that has a mass of about 8-12 solar masses will ignite carbon fusion to form magnesium, neon, and smaller amounts of other elements, resulting in a white dwarf composed chiefly of oxygen, neon, and magnesium, provided that it can lose enough mass to get below the Chandrasekhar limit (see below), and provided that the ignition of carbon is not so violent as to blow the star apart in a supernova. [ 33 ] A star of mass on the order of magnitude of the Sun will be unable to ignite carbon fusion, and will produce a white dwarf composed chiefly of carbon and oxygen, and of mass too low to collapse unless matter is added to it later (see below). A star of less than about half the mass of the Sun will be unable to ignite helium fusion (as noted earlier), and will produce a white dwarf composed chiefly of helium. In the end, all that remains is a cold dark mass sometimes called a black dwarf . However, the universe is not old enough for any black dwarfs to exist yet. If the white dwarf's mass increases above the Chandrasekhar limit , which is 1.4 M ☉ for a white dwarf composed chiefly of carbon, oxygen, neon, and/or magnesium, then electron degeneracy pressure fails due to electron capture and the star collapses. Depending upon the chemical composition and pre-collapse temperature in the center, this will lead either to collapse into a neutron star or runaway ignition of carbon and oxygen. Heavier elements favor continued core collapse, because they require a higher temperature to ignite, because electron capture onto these elements and their fusion products is easier; higher core temperatures favor runaway nuclear reaction, which halts core collapse and leads to a Type Ia supernova . [ 34 ] These supernovae may be many times brighter than the Type II supernova marking the death of a massive star, even though the latter has the greater total energy release. This instability to collapse means that no white dwarf more massive than approximately 1.4 M ☉ can exist (with a possible minor exception for very rapidly spinning white dwarfs, whose centrifugal force due to rotation partially counteracts the weight of their matter). Mass transfer in a binary system may cause an initially stable white dwarf to surpass the Chandrasekhar limit. If a white dwarf forms a close binary system with another star, hydrogen from the larger companion may accrete around and onto a white dwarf until it gets hot enough to fuse in a runaway reaction at its surface, although the white dwarf remains below the Chandrasekhar limit. Such an explosion is termed a nova . Ordinarily, atoms are mostly electron clouds by volume, with very compact nuclei at the center (proportionally, if atoms were the size of a football stadium, their nuclei would be the size of dust mites). When a stellar core collapses, the pressure causes electrons and protons to fuse by electron capture . Without electrons, which keep nuclei apart, the neutrons collapse into a dense ball (in some ways like a giant atomic nucleus ), with a thin overlying layer of degenerate matter (chiefly iron unless matter of different composition is added later). 
The neutrons resist further compression by the Pauli exclusion principle , in a way analogous to electron degeneracy pressure, but stronger. These stars, known as neutron stars, are extremely small—on the order of radius 10 km, no bigger than the size of a large city—and are phenomenally dense. Their period of rotation shortens dramatically as the stars shrink (due to conservation of angular momentum ); observed rotational periods of neutron stars range from about 1.5 milliseconds (over 600 revolutions per second) to several seconds. [ 35 ] When these rapidly rotating stars' magnetic poles are aligned with the Earth, we detect a pulse of radiation each revolution. Such neutron stars are called pulsars , and were the first neutron stars to be discovered. Though electromagnetic radiation detected from pulsars is most often in the form of radio waves, pulsars have also been detected at visible, X-ray, and gamma ray wavelengths. [ 36 ] If the mass of the stellar remnant is high enough, the neutron degeneracy pressure will be insufficient to prevent collapse below the Schwarzschild radius . The stellar remnant thus becomes a black hole. The mass at which this occurs is not known with certainty, but is currently estimated at between 2 and 3 M ☉ . Black holes are predicted by the theory of general relativity . According to classical general relativity, no matter or information can flow from the interior of a black hole to an outside observer, although quantum effects may allow deviations from this strict rule. The existence of black holes in the universe is well supported, both theoretically and by astronomical observation. Because the core-collapse mechanism of a supernova is, at present, only partially understood, it is still not known whether it is possible for a star to collapse directly to a black hole without producing a visible supernova, or whether some supernovae initially form unstable neutron stars which then collapse into black holes; the exact relation between the initial mass of the star and the final remnant is also not completely certain. Resolution of these uncertainties requires the analysis of more supernovae and supernova remnants. A stellar evolutionary model is a mathematical model that can be used to compute the evolutionary phases of a star from its formation until it becomes a remnant. The mass and chemical composition of the star are used as the inputs, and the luminosity and surface temperature are the only constraints. The model formulae are based upon the physical understanding of the star, usually under the assumption of hydrostatic equilibrium. Extensive computer calculations are then run to determine the changing state of the star over time, yielding a table of data that can be used to determine the evolutionary track of the star across the Hertzsprung–Russell diagram , along with other evolving properties. [ 37 ] Accurate models can be used to estimate the current age of a star by comparing its physical properties with those of stars along a matching evolutionary track. [ 38 ]
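Production stellar-evolution codes solve coupled equations for mass, pressure, temperature, luminosity, and composition under the assumptions described above. As a toy illustration of the hydrostatic-equilibrium ingredient alone, the Python sketch below integrates the Lane-Emden equation, the dimensionless combination of hydrostatic equilibrium and mass continuity for a polytropic gas sphere; the polytropic index n = 3 and the step size are illustrative choices, not parameters of the models cited here.

def lane_emden(n=3.0, h=1e-4):
    # Start slightly off-centre using the series expansion theta ~ 1 - xi^2/6,
    # which avoids the coordinate singularity of the equation at xi = 0.
    xi = h
    theta = 1.0 - xi ** 2 / 6.0
    dtheta = -xi / 3.0
    while theta > 0.0:
        # Euler step of theta'' = -theta^n - (2/xi) * theta'
        d2 = -theta ** n - 2.0 * dtheta / xi
        theta += h * dtheta
        dtheta += h * d2
        xi += h
    return xi  # first zero of theta, i.e. the dimensionless stellar surface

xi1 = lane_emden()
print(f"xi_1 ~ {xi1:.3f} (the tabulated value for n = 3 is about 6.897)")

Real evolutionary tracks add nuclear energy generation, radiative and convective transport, and changing composition on top of this structural backbone, which is why they require extensive computation.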
https://en.wikipedia.org/wiki/Stellar_evolution
Stellar flyby refers to the close passage of two or more stars, which remain unbound after their passage. The Sun resides in a region of relatively low stellar density in the Milky Way. Thus, close stellar flybys are relatively rare. However, once in a while a star can come relatively close. One example is Scholz's star (WISE designation WISE 0720−0846 or fully WISE J072003.20−084651.2), which is a dim binary stellar system 22 light-years (6.8 parsecs) from the Sun in the constellation Monoceros near the galactic plane. The system passed through the Solar System's Oort cloud roughly 70,000 years ago. [ 1 ] Gliese 710 or HIP 89825, an orange 0.6 M ☉ star in the constellation Serpens Cauda, is projected to pass near the Sun in about 1.29 million years at a predicted minimum distance of 0.051 parsecs (0.1663 light-years, 10,520 astronomical units, or about 1.60 trillion km), about 1/25th of the current distance to Proxima Centauri. [ 2 ] Close flybys are usually relatively rare among field stars, but are more common in star clusters. [ 3 ] In these groups of stars the stellar density is much higher, so close passages between stars are more common. In particular, stellar flybys are thought to be common in young star clusters, open clusters, and globular clusters. In young clusters, such close stellar flybys might influence the frequency and size of protoplanetary discs, [ 2 ] and influence the planet formation process in these environments.
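The quoted figures for the Gliese 710 encounter are straightforward unit conversions and can be checked directly; the conversion constants in the Python sketch below are standard values, not taken from the cited predictions.

PC_IN_LY = 3.26156        # light-years per parsec
PC_IN_AU = 206264.8       # astronomical units per parsec
AU_IN_KM = 1.495978707e8  # kilometres per astronomical unit

d_pc = 0.051                   # predicted minimum distance in parsecs
d_ly = d_pc * PC_IN_LY         # ~0.166 light-years
d_au = d_pc * PC_IN_AU         # ~10,500 astronomical units
d_km = d_au * AU_IN_KM         # ~1.6e12 km, i.e. about 1.6 trillion km
proxima_ly = 4.246             # current distance to Proxima Centauri, light-years

print(f"{d_ly:.4f} ly, {d_au:,.0f} AU, {d_km:.2e} km")
print(f"Proxima Centauri is {proxima_ly / d_ly:.1f} times farther than this closest approach")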
https://en.wikipedia.org/wiki/Stellar_flyby
The origin of life is an ongoing field of research that requires the study of interactions of many physical and biological processes. One of these physical processes has to do with the characteristics of the host star of a planet, and how stellar influences on an origin of life setting can dictate how life evolves, if at all. Life required an energy source at its origin, and scientists have long speculated that this energy source could have been the ultraviolet radiation that rains down on Earth. Though it may potentially be harmful to life, UV has also been shown to trigger important prebiotic reactions that might have taken place on a younger Earth. [ 1 ] Main sequence M dwarf stars are generally the focus of studies that investigate the role of UV in prebiotic chemistry and an origin of life setting, given that the potential habitability of planets in these systems is an active area of research, and the lifecycles and characteristics of these stars are relatively well known. Furthermore, in the context of biosignatures, M dwarfs are considered one of the better places to look to find life for various reasons (see: habitability of M dwarfs). In order to experimentally investigate the effects of UV on the origin of life, studies make use of UV sources such as mercury lamps, and computational simulations of radiative transfer that model how UV interacts with different atmospheric compositions with different levels of shielding. Altogether, these methods allow scientists to test how different aspects of prebiotic chemistry operate under conditions of UV (so-called 'light chemistry') versus how they operate in conditions with reduced UV or none at all ('dark chemistry'). [ 2 ] This work is helpful for understanding the conditions under which life might have started on a prebiotic Earth and can also be used to identify exoplanets that may have the right conditions for life. Since the first discovery in 1992, more than 5000 exoplanets have been found, and a subset of these exist within the liquid water habitable zone of their host star. [ 3 ] By experimentally estimating how much light is needed for UV photochemistry, based on reaction rates for light chemistry versus dark chemistry, scientists have designated another zone around a star, called the 'abiogenesis zone'. [ 2 ] This is based on an understanding of how proximity to a host star changes the flux of UV, and how that influences prebiotic chemistry. A planet existing within the overlap of the habitable zone and the abiogenesis zone of a given star would theoretically provide the right conditions for life to evolve there. UV light is considered a key component of prebiotic chemistry on an early Earth. Because of their short wavelengths (10-400 nanometers), UV photons carry enough energy to affect the electronic structure of molecules: they can break molecular bonds (photolysis), ionize molecules (photoionization), or excite their electrons (photoexcitation). [ 4 ] This can sometimes lead to degradation of biologically important molecules, causing subsequent environmental stress and providing a barrier to abiogenesis. [ 5 ] In 1973, Carl Sagan first suggested that UV might pose a selection threat against abiogenesis because early biological repair mechanisms would have been more primitive than now, and early prebiotic chemistry would have faced intense selection pressure to shield against UV.
[ 5 ] [ 6 ] Conversely, these potentially destructive properties are also what makes UV an ideal candidate for a source of energy, similar to the Miller-Urey experiment that synthesized prebiotic molecules using simulated lightning. Across many different studies of prebiotic chemistry, UV light has been incorporated to explain the origin of chirality, [ 7 ] the synthesis of amino acids, [ 8 ] and the formation of ribonucleotides. [ 9 ] Due to the lack of biogenic UV-shielding O 2 and O 3 in the prebiotic atmosphere, and the fact that a younger Sun at the time had a higher fractional output of UV (see: the young sun paradox), UV is expected to have been common in the prebiotic environment. [ 4 ] One study suggested that in the absence of O 3 , UV light at wavelengths < 300 nm contributed 3 orders of magnitude more energy than electrical discharge to the surface of an early Earth. [ 10 ] Given that UV may have been the most available source of energy for prebiotic chemistry, many experiments aim to deduce and quantify the effects of UV irradiance on synthesis pathways. A fraction of these studies deals with the formation of prebiotic molecules on interstellar ices and cometary surfaces. [ 11 ] For this, UV is sourced from lamps or synchrotrons in ultrahigh vacuum conditions, and the UV output is generally below a wavelength of 160 nm. Other studies are interested in prebiotic chemistry that occurred in aqueous solution, and also make use of UV lamps. UV lamps are a good source because they are safe, stable, and affordable. However, their output is generally narrowband, whereas real solar UV output is broadband. [ 4 ] Many UV-dependent prebiotic reactions are wavelength dependent, so research using narrowband UV lamps may not reach the same conclusions as studies conducted under more realistic conditions. [ 4 ] There are many studies that include UV as a key component of their prebiotic synthesis pathways. In particular, a ribonucleotide synthesis [ 9 ] and a sugar synthesis [ 12 ] pathway both rely on irradiation from a mercury UV lamp. For all studies, and these two in particular, early solar UV irradiation and variability would have heavily influenced the availability of prebiotically important feedstock gases like CH 4 and HCN (hydrogen cyanide). [ 4 ] Although the effects are difficult to constrain, given the lack of a full understanding of what the prebiotic Earth environment looked like, understanding how prebiotic chemistry might have proceeded in the presence of solar UV is one step towards a better understanding of abiogenesis. A key issue facing the RNA world hypothesis, and the eventual pathway to abiogenesis, is how the comparatively complex RNA molecules actually came to be. One proposed pathway to RNA synthesis under plausible prebiotic conditions used irradiation from UV light at 254 nm. [ 9 ] The UV here plays two roles: first, it destroys competing molecules that are generated along the synthesis pathway, and second, it promotes the synthesis via photoactivation. [ 4 ] A proposed companion synthesis reaction of the two- and three-carbon sugars glycolaldehyde and glyceraldehyde from hydrogen cyanide (HCN) and formaldehyde also requires UV irradiation. [ 4 ] Previously, the mechanism used to explain the prebiotic synthesis of sugars was the formose reaction, wherein formaldehyde polymerizes to form longer sugars. This polymerization tends to run away, and leads to an insoluble tar, effectively terminating the reaction.
[ 4 ] The UV-driven synthesis, however, is much more selective and generates a smaller number of products that are useful for further chemical reactions. This synthesis relies on the production of solvated electrons and protons via photoionization of a photocatalytic transition metal complex under UV light at 254 nm. [ 12 ] Further considerations of environments conducive to prebiotic chemistry extend beyond how the Sun influenced Earth, and consider how other stars and their different characteristics might affect abiogenesis on other planets as well. Stars are not static; from when they first start fusing hydrogen on the main sequence to when they reach the end stages of their lifetimes, they output energy in the form of radiation and high-energy particles. [ 13 ] Radiation can be, for example, X-ray coronal emissions or flares. The high-energy particles emitted come in the form of winds and coronal mass ejections (CMEs). [ 13 ] As stars evolve, so do their emissions; younger stars tend to be the most active, meaning they have stronger winds, larger flaring events, and an increased frequency of CMEs. [ 13 ] This means that planets orbiting younger stars would endure more volatile stellar events that impact their habitable and abiogenesis zones, perhaps even making them transient. M dwarf stars range in temperature from 2,000 to 3,500 K, and they exhibit variable activity over both short and long timescales. [ 14 ] For example, in one study of 177 M dwarfs of varying spectral types, 75% of them exhibited long-term variability. [ 14 ] Stellar activity is linked to rotation, [ 15 ] so the fraction of active stars tends to be much higher amongst M dwarfs compared to solar-type stars (G type). This is because they tend to have longer rotational braking times (the timescale for stellar rotation to slow), and show stronger activity based on their period of rotation. [ 16 ] Deducing the right balance of stellar activity required to help prebiotic chemistry along without completely sterilizing the surface of a planet is complicated. It depends on multiple stellar and planetary factors, such as the frequency of stellar events, the intensity of stellar events, planetary atmosphere composition (radiation shielding effects), and the existence and strength of a planetary magnetic field (which can provide further shielding). Although M dwarfs exhibit high variability, because of their low luminosities they emit less prebiotically relevant near-UV (NUV) radiation, which can lead to a shortage thereof. [ 17 ] Additionally, planetary atmospheres of specific compositions can further increase this shortage by functioning as UV shields due to the absorption features of their constituent molecules. This acts to reduce the amount of stellar UV reaching the surface, and can act as a barrier to UV-driven prebiotic chemistry. [ 17 ] For example, planets orbiting M dwarfs with inefficient atmospheric sinks of O 2 and CO, combined with a CO 2 -rich atmosphere, are more likely to build up UV shields such as O 3 that can block the already low NUV flux reaching the surface from the star. [ 17 ] Effects like this can theoretically be overcome in a number of ways, relating to characteristics of both the star and the planet in question. Firstly, this issue could be overcome simply by having a thinner atmosphere that blocks less of the impinging UV. This kind of situation could arise in the case where elevated UV emission from the M dwarf strips the atmosphere of habitable zone planets orbiting close enough in.
[ 18 ] Relative dose rates of UV to the surface would then increase. However, it is likely that the increased UV would only serve to promote destructive reactions such as photolysis of a prebiotically relevant RNA monomer. [ 17 ] This is because the thinner atmosphere would admit destructive far-UV (FUV) wavelengths, making the environment hostile to abiogenesis. Thus, atmospheric stripping is not deemed an effective way to solve the M dwarf UV shortage problem. Alternatively, researchers have considered whether M dwarf flaring might play a role in providing the requisite UV for abiogenesis. Typically, these kinds of flaring events are deemed fatal for life because of the increase in output of NUV. [ 19 ] Given that flaring is cyclical, there exists a possible scenario whereby photosensitive prebiotic chemistry is promoted during the flares, and ceases in the intervening periods. This kind of situation is analogous to terrestrial day/night cycles, and is proposed as the best mechanism to solve the shortage problem around particularly active M dwarfs. [ 17 ] That being said, further work is needed in order to constrain whether the shorter-duration, higher-intensity UV flux from a flare is even sufficient to promote prebiotic chemistry. Additionally, intense flaring events run the risk of completely ablating a planetary atmosphere, which would sterilize the surface and effectively counteract this proposed solution. [ 17 ] Main sequence M dwarfs are not the only stellar hosts considered in studies of UV prebiotic chemistry. Younger M dwarfs, such as pre-main-sequence ones, emit more of their bolometric luminosity in the NUV range. [ 20 ] Depending on how low their mass is, some of these stars remain in this state for up to 1 Gyr (one billion years). From here, it is possible to begin assessing whether planets orbiting a star in these conditions might receive NUV insolation that is actually comparable to that around solar-like stars (G type). That being said, earlier-type M dwarf stars have higher bolometric luminosity, which increases the likelihood that their habitable zone planets undergo a runaway greenhouse state if they orbit close enough in. Temperatures under these conditions would reach > 1000 K and easily vaporize any water on the surface. Consequently, aqueous-phase (in-water) prebiotic chemistry is unlikely to proceed, which is a definite barrier to this route to abiogenesis. [ 17 ] Planets further from the star, within the habitable zone during this pre-main-sequence phase, may achieve habitable conditions and even abiogenesis for up to Gyr timescales. [ 17 ] The caveat to this is that once their host star exits the pre-main-sequence phase, the total bolometric luminosity will decrease, and the planets will no longer exist within the habitable zone. Unless clement temperatures are maintained by some intrinsic planetary mechanism such as greenhouse warming, liquid water will freeze, and if life is not immediately killed off it would be confined to volcanic or subsurface reservoirs. [ 17 ] These drawbacks aside, planets in the habitable zones of these late-type, low-mass pre-main-sequence M dwarfs are still an area of focus for researchers interested in UV-driven prebiotic chemistry. Going beyond M dwarf stars, main sequence K dwarf stars are also considered as possible hosts of habitable and abiogenetic planets.
Based on a chemical pathway for photochemically building up prebiotic reservoirs, it is possible that hotter stars, such as spectral type K are better at powering prebiotic chemistry. [ 2 ] A main sequence K star (spectral class: K5), has a temperature of around 4400 K. [ 21 ] For stars cooler than this, and in the absence of cyclical stellar activity, the incident flux is too low for the planets that exist within the habitable zone to also exist within the abiogenesis zone. [ 2 ] This consequently sets a potential boundary on what kinds of stars are most likely to have sufficient overlap in their habitable and abiogenesis zones. Just as understanding the UV flux of different stars can somewhat constrain favorable conditions for abiogenesis, so does understanding what the UV environment would have looked like on a prebiotic Earth. Given certain atmospheric compositions, UV surface fluence is generally a function of albedo , and solar zenith angle (SZA). [ 22 ] In experimental situations where atmospheric composition is left as a free variable, it also heavily dictates not only the bolometric surface flux of UV, but also what particular wavelengths are transmitted that may or may not be harmful to prebiotic chemistry and abiogenesis. This presents the need for experiments to reduce the uncertainty still surrounding the planetary conditions and the surface UV at abiogenesis. [ citation needed ] For consideration of the surficial UV conditions of a prebiotic Earth, there are 7 atmospheric gases that can influence UV attenuation and hence prebiotic chemistry: CO 2 , SO 2 , H 2 S, CH 4 , H 2 O, O 2 , and O 3 . For each of these gases, different constraints provide different estimates for their effect on UV transmission. Depending on the amount of transmission, different biologically effective doses (BEDs) of UV will be delivered to the surface; too little prohibits prebiotic chemistry, and too much is harmful. Additionally, solar UV input shapes the overall photochemistry of Earth's atmosphere which impacts the availability of chemical reactants. One study showed that high levels of atmospheric CO 2 can suppress UV-relevant prebiotic chemistry, [ 22 ] due to the argument that atmospheric CO 2 would cut off UV fluence for wavelengths shorter than 204 nm. [ 4 ] This reduces the surface flux of photons at energies useful for prebiotic chemistry. At lower levels of atmospheric CO 2 however, the transmitted UV can moderately enhance prebiotic chemistry. [ 22 ] This study found the overall effects of CO 2 on the BED of UV to be minimal, in the absence of other absorbers. For the range of CO 2 values evaluated, the variation of the biologically effective dose of radiation was <2 orders of magnitude. [ 22 ] However, given the significant presence of other absorbers (below) that have absorption cross sections from 100-500 nm, they could have influenced the surface UV environment of an early Earth. SO 2 strongly absorbs over a much wider range than CO 2 , but is conversely more vulnerable to loss via photolysis and oxidation reactions . [ 22 ] Volcanos are a significant source of SO 2 , and assuming levels of volcanic outgassing at abiogenesis (around 3.9 Ga) were comparable to today, [ 23 ] the build-up of SO 2 would not significantly influence the surface UV environment. However, during transiently high levels of volcanic outgassing SO 2 would build-up, and under these conditions could have heavily modified surface UV irradiance. 
[ 22 ] Higher levels of SO 2 reduce BEDs by over 60 orders of magnitude, [ 22 ] which implies that UV-driven prebiotic chemistry might not have been sustained. Like SO 2 , H 2 S is a strong and broad absorber of UV, and is primarily generated through volcanic outgassing. As a result of its vulnerability to atmospheric loss through photolysis and oxidation reactions, it is not expected that H 2 S would have been a significant constituent of a prebiotic environment, [ 22 ] let alone that it might have had an impact on prebiotic chemistry. [ 24 ] As with SO 2 , higher levels of H 2 S, as a result of a transient increase in volcanic activity, may have affected the surface UV irradiation. The lower expected BEDs might have reduced the ability of some prebiotic chemical pathways to proceed. [ 22 ] In the presence of trace amounts of CO 2 , UV absorption by CH 4 is considered negligible. [ 22 ] [ 25 ] This is because CH 4 only absorbs UV below 165 nm, a region already blocked by CO 2 . As a result, CH 4 is not very useful for constraining surface UV in the presence of CO 2 . Primordial atmospheric water content is hard to constrain, but H 2 O is a strong UV absorber for wavelengths below 198 nm. [ 26 ] For plausible levels of H 2 O, UV flux is blocked below around 201 nm even without attenuation from CO 2 . [ 26 ] This means that regardless of the level of CO 2 present at a given time, UV at wavelengths shorter than around 200 nm would probably have been unavailable for prebiotic chemistry. [ 22 ] O 2 is a strong shield for UV, but at levels expected from photochemical models, O 2 does not provide strong constraints on the surface UV environment. [ 22 ] However, given that O 2 is a stronger absorber than CO 2 , it is possible that O 2 might have an effect in situations where CO 2 is a smaller constituent in the atmosphere. [ 22 ] At such high levels, O 2 may start to have a noticeable effect on UV transmission, but at abiogenesis it is unlikely that high levels were achieved, given a strong reducing sink from volcanogenic gases and the lack of a strong source. [ 27 ] Ozone is a well-known greenhouse gas that acts as a shield for UV. It is generated in the atmosphere via photolysis of O 2 , hence its abundance is linked to that of O 2 . [ 28 ] At the nominal O 2 values expected around the time of abiogenesis, O 3 is not expected to have a large influence on the surface UV. [ 22 ] Prebiotic chemistry is heavily linked to UV irradiation from the host star of the planet. In particular, frequent flaring from M dwarf stars might be enough to drive prebiotic photochemistry on terrestrial planets within their habitable zone. This does require that the atmosphere can survive the flaring and not be fully ablated, which would sterilize the surface. Under those conditions, flaring rates for stars, and particularly for M dwarfs early in their evolution, are relevant to biosignature detection. [ 2 ] Detecting a biosignature requires finding a planet where life has existed long enough to noticeably change the planet in detectable ways, either through its surface or its atmosphere (see: biosphere). Given that UV impacts prebiotic chemistry, understanding stellar UV variations is an important step on the way to finding potential candidates for biosignature searches, [ 29 ] as well as understanding more about abiogenesis on Earth.
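Two of the quantitative ideas running through this article can be sketched in a few lines: the photon energy corresponding to a given UV wavelength (E = hc/λ), and the way an absorbing atmospheric column attenuates the flux that reaches the surface (a simple Beer-Lambert estimate). The wavelengths, bond-energy comparison, cross-section, column densities, and zenith angle in the Python sketch below are illustrative placeholder values, not numbers from the cited studies.

import math

H = 6.62607015e-34     # Planck constant, J s
C = 2.99792458e8       # speed of light, m/s
EV = 1.602176634e-19   # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    # Photon energy E = h c / lambda, expressed in electronvolts.
    return H * C / (wavelength_nm * 1e-9) / EV

# UV photon energies compared with a representative single covalent bond energy of a few eV.
for wl in (400, 300, 254, 200):   # nm; 254 nm is the mercury-lamp line used in many experiments
    print(f"{wl:3d} nm -> {photon_energy_ev(wl):4.2f} eV")

def transmitted_fraction(sigma_cm2, column_cm2, sza_deg):
    # Beer-Lambert transmission exp(-sigma N / cos SZA) through a plane-parallel layer.
    return math.exp(-sigma_cm2 * column_cm2 / math.cos(math.radians(sza_deg)))

# Exponential sensitivity to the absorber column (hypothetical cross-section and columns).
for column in (1e20, 1e21, 1e22):   # cm^-2
    print(f"N = {column:.0e} cm^-2 -> transmitted fraction {transmitted_fraction(1e-21, column, 45.0):.2e}")

Because the transmitted fraction is exponential in the absorber column, modest changes in the abundance of a gas such as CO 2 or SO 2 can move a wavelength band from freely transmitted to effectively blocked, which is the behaviour described for the individual absorbers above.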
https://en.wikipedia.org/wiki/Stellar_influences_on_an_origin_of_life_setting
A stellar magnetic field is a magnetic field generated by the motion of conductive plasma inside a star . This motion is created through convection , which is a form of energy transport involving the physical movement of material. A localized magnetic field exerts a force on the plasma, effectively increasing the pressure without a comparable gain in density. As a result, the magnetized region rises relative to the remainder of the plasma, until it reaches the star's photosphere . This creates starspots on the surface, and the related phenomenon of coronal loops . [ 1 ] A star's magnetic field can be measured using the Zeeman effect . Normally the atoms in a star's atmosphere will absorb certain frequencies of energy in the electromagnetic spectrum , producing characteristic dark absorption lines in the spectrum. However, when the atoms are within a magnetic field, these lines become split into multiple, closely spaced lines. The energy also becomes polarized with an orientation that depends on the orientation of the magnetic field. Thus the strength and direction of the star's magnetic field can be determined by examination of the Zeeman effect lines. [ 2 ] [ 3 ] A stellar spectropolarimeter is used to measure the magnetic field of a star. This instrument consists of a spectrograph combined with a polarimeter . The first instrument to be dedicated to the study of stellar magnetic fields was NARVAL, which was mounted on the Bernard Lyot Telescope at the Pic du Midi de Bigorre in the French Pyrenees mountains. [ 4 ] Various measurements—including magnetometer measurements over the last 150 years; [ 5 ] 14 C in tree rings; and 10 Be in ice cores [ 6 ] —have established substantial magnetic variability of the Sun on decadal, centennial and millennial time scales. [ 7 ] Stellar magnetic fields, according to solar dynamo theory, are caused within the convective zone of the star. The convective circulation of the conducting plasma functions like a dynamo . This activity destroys the star's primordial magnetic field, then generates a dipolar magnetic field. As the star undergoes differential rotation—rotating at different rates for various latitudes—the magnetism is wound into a toroidal field of "flux ropes" that become wrapped around the star. The fields can become highly concentrated, producing activity when they emerge on the surface. [ 8 ] The magnetic field of a rotating body of conductive gas or liquid develops self-amplifying electric currents , and thus a self-generated magnetic field, due to a combination of differential rotation (different angular velocity of different parts of body), Coriolis forces and induction. The distribution of currents can be quite complicated, with numerous open and closed loops, and thus the magnetic field of these currents in their immediate vicinity is also quite twisted. At large distances, however, the magnetic fields of currents flowing in opposite directions cancel out and only a net dipole field survives, slowly diminishing with distance. Because the major currents flow in the direction of conductive mass motion (equatorial currents), the major component of the generated magnetic field is the dipole field of the equatorial current loop, thus producing magnetic poles near the geographic poles of a rotating body. The magnetic fields of all celestial bodies are often aligned with the direction of rotation, with notable exceptions such as certain pulsars . Another feature of this dynamo model is that the currents are AC rather than DC. 
Their direction, and thus the direction of the magnetic field they generate, alternates more or less periodically, changing amplitude and reversing direction, although still more or less aligned with the axis of rotation. The Sun 's major component of magnetic field reverses direction every 11 years (so the period is about 22 years), resulting in a diminished magnitude of magnetic field near reversal time. During this dormancy, the sunspots activity is at maximum (because of the lack of magnetic braking on plasma) and, as a result, massive ejection of high energy plasma into the solar corona and interplanetary space takes place. Collisions of neighboring sunspots with oppositely directed magnetic fields result in the generation of strong electric fields near rapidly disappearing magnetic field regions. This electric field accelerates electrons and protons to high energies (kiloelectronvolts) which results in jets of extremely hot plasma leaving the Sun's surface and heating coronal plasma to high temperatures (millions of kelvin ). If the gas or liquid is very viscous (resulting in turbulent differential motion), the reversal of the magnetic field may not be very periodic. This is the case with the Earth's magnetic field, which is generated by turbulent currents in a viscous outer core. Starspots are regions of intense magnetic activity on the surface of a star. (On the Sun they are termed sunspots .) These form a visible component of magnetic flux tubes that are formed within a star's convection zone . Due to the differential rotation of the star, the tube becomes curled up and stretched, inhibiting convection and producing zones of lower than normal temperature. [ 9 ] Coronal loops often form above starspots, forming from magnetic field lines that stretch out into the stellar corona . These in turn serve to heat the corona to temperatures over a million kelvins . [ 10 ] The magnetic fields linked to starspots and coronal loops are linked to flare activity, and the associated coronal mass ejection . The plasma is heated to tens of millions of kelvins, and the particles are accelerated away from the star's surface at extreme velocities. [ 11 ] Surface activity appears to be related to the age and rotation rate of main-sequence stars. Young stars with a rapid rate of rotation exhibit strong activity. By contrast middle-aged, Sun-like stars with a slow rate of rotation show low levels of activity that varies in cycles. Some older stars display almost no activity, which may mean they have entered a lull that is comparable to the Sun's Maunder minimum . Measurements of the time variation in stellar activity can be useful for determining the differential rotation rates of a star. [ 12 ] A star with a magnetic field will generate a magnetosphere that extends outward into the surrounding space. Field lines from this field originate at one magnetic pole on the star then end at the other pole, forming a closed loop. The magnetosphere contains charged particles that are trapped from the stellar wind , which then move along these field lines. As the star rotates, the magnetosphere rotates with it, dragging along the charged particles. [ 13 ] As stars emit matter with a stellar wind from the photosphere, the magnetosphere creates a torque on the ejected matter. This results in a transfer of angular momentum from the star to the surrounding space, causing a slowing of the stellar rotation rate. Rapidly rotating stars have a higher mass loss rate, resulting in a faster loss of momentum. 
As the rotation rate slows, so too does the angular deceleration. By this means, a star will gradually approach, but never quite reach, the state of zero rotation. [ 14 ] A T Tauri star is a type of pre-main-sequence star that is being heated through gravitational contraction and has not yet begun to burn hydrogen at its core. They are variable stars that are magnetically active. The magnetic field of these stars is thought to interact with its strong stellar wind, transferring angular momentum to the surrounding protoplanetary disk . This allows the star to brake its rotation rate as it collapses. [ 15 ] Small, M-class stars (with 0.1–0.6 solar masses ) that exhibit rapid, irregular variability are known as flare stars . These fluctuations are hypothesized to be caused by flares, although the activity is much stronger relative to the size of the star. The flares on this class of stars can extend up to 20% of the circumference, and radiate much of their energy in the blue and ultraviolet portion of the spectrum. [ 16 ] Straddling the boundary between stars that undergo nuclear fusion in their cores and non-hydrogen fusing brown dwarfs are the ultracool dwarfs . These objects can emit radio waves due to their strong magnetic fields. Approximately 5–10% of these objects have had their magnetic fields measured. [ 17 ] The coolest of these, 2MASS J10475385+2124234 with a temperature of 800-900 K, retains a magnetic field stronger than 1.7 kG, making it some 3000 times stronger than the Earth's magnetic field. [ 18 ] Radio observations also suggest that their magnetic fields periodically change their orientation, similar to the Sun during the solar cycle . [ 19 ] Planetary nebulae are created when a red giant star ejects its outer envelope, forming an expanding shell of gas. However it remains a mystery why these shells are not always spherically symmetrical. 80% of planetary nebulae do not have a spherical shape; instead forming bipolar or elliptical nebulae. One hypothesis for the formation of a non-spherical shape is the effect of the star's magnetic field. Instead of expanding evenly in all directions, the ejected plasma tends to leave by way of the magnetic poles. Observations of the central stars in at least four planetary nebulae have confirmed that they do indeed possess powerful magnetic fields. [ 20 ] After some massive stars have ceased thermonuclear fusion , a portion of their mass collapses into a compact body of neutrons called a neutron star . These bodies retain a significant magnetic field from the original star, but the collapse in size causes the strength of this field to increase dramatically. The rapid rotation of these collapsed neutron stars results in a pulsar , which emits a narrow beam of energy that can periodically point toward an observer. Compact and fast-rotating astronomical objects ( white dwarfs , neutron stars and black holes ) have extremely strong magnetic fields. The magnetic field of a newly born fast-spinning neutron star is so strong (up to 10 8 teslas) that it electromagnetically radiates enough energy to quickly (in a matter of few million years) damp down the star rotation by 100 to 1000 times. Matter falling on a neutron star also has to follow the magnetic field lines, resulting in two hot spots on the surface where it can reach and collide with the star's surface. These spots are literally a few feet (about a metre) across but tremendously bright. 
Their periodic eclipsing during star rotation is hypothesized to be the source of pulsating radiation (see pulsars ). An extreme form of a magnetized neutron star is the magnetar . These are formed as the result of a core-collapse supernova . [ 21 ] The existence of such stars was confirmed in 1998 with the measurement of the star SGR 1806-20 . The magnetic field of this star has increased the surface temperature to 18 million K and it releases enormous amounts of energy in gamma ray bursts . [ 22 ] Jets of relativistic plasma are often observed along the direction of the magnetic poles of active black holes in the centers of very young galaxies. In 2008, a team of astronomers first described how as the exoplanet orbiting HD 189733 A reaches a certain place in its orbit, it causes increased stellar flaring . In 2010, a different team found that every time they observe the exoplanet at a certain position in its orbit, they also detected X-ray flares. Theoretical research since 2000 suggested that an exoplanet very near to the star that it orbits may cause increased flaring due to the interaction of their magnetic fields , or because of tidal forces . In 2019, astronomers combined data from Arecibo Observatory , MOST , and the Automated Photoelectric Telescope, in addition to historical observations of the star at radio, optical, ultraviolet, and X-ray wavelengths to examine these claims. Their analysis found that the previous claims were exaggerated and the host star failed to display many of the brightness and spectral characteristics associated with stellar flaring and solar active regions , including sunspots. They also found that the claims did not stand up to statistical analysis, given that many stellar flares are seen regardless of the position of the exoplanet, therefore debunking the earlier claims. The magnetic fields of the host star and exoplanet do not interact, and this system is no longer believed to have a "star-planet interaction." [ 23 ]
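Returning to the Zeeman-effect measurements described at the start of this article: the size of the wavelength splitting can be estimated from the standard expression Δλ = g e B λ^2 / (4π m_e c), which grows linearly with the field strength and with the square of the wavelength. A brief Python sketch; the 500 nm line, the Landé factor of 1, and the two field strengths are illustrative values, not measurements from the works cited here.

import math

E_CHARGE = 1.602176634e-19     # C
M_ELECTRON = 9.1093837015e-31  # kg
C_LIGHT = 2.99792458e8         # m/s

def zeeman_splitting_nm(wavelength_nm, b_tesla, lande_g=1.0):
    # Normal Zeeman splitting: delta_lambda = g e B lambda^2 / (4 pi m_e c).
    lam = wavelength_nm * 1e-9
    dlam = lande_g * E_CHARGE * b_tesla * lam ** 2 / (4.0 * math.pi * M_ELECTRON * C_LIGHT)
    return dlam * 1e9  # back to nanometres

# Illustrative cases: a 500 nm line in a 0.1 T (1 kG) starspot-like field and in a
# field of 1e-4 T, roughly the strength of the Sun's global field.
for b in (0.1, 1e-4):
    print(f"B = {b:g} T -> splitting ~ {zeeman_splitting_nm(500.0, b):.2e} nm")

The λ^2 scaling is one reason magnetically sensitive lines at red and infrared wavelengths are often preferred for stellar field measurements.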
https://en.wikipedia.org/wiki/Stellar_magnetic_field
In astrophysics , stellar nucleosynthesis is the creation of chemical elements by nuclear fusion reactions within stars . Stellar nucleosynthesis has occurred since the original creation of hydrogen , helium and lithium during the Big Bang . As a predictive theory , it yields accurate estimates of the observed abundances of the elements. It explains why the observed abundances of elements change over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, [ 1 ] who later refined it in 1954. [ 2 ] Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron , by Margaret and Geoffrey Burbidge , William Alfred Fowler and Fred Hoyle in their famous 1957 B 2 FH paper , [ 3 ] which became one of the most heavily cited papers in astrophysics history. Stars evolve because of changes in their composition (the abundance of their constituent elements) over their lifespans, first by burning hydrogen ( main sequence star), then helium ( horizontal branch star), and progressively burning higher elements . However, this does not by itself significantly alter the abundances of elements in the universe as the elements are contained within the star. Later in its life, a low-mass star will slowly eject its atmosphere via stellar wind , forming a planetary nebula , while a higher–mass star will eject mass via a sudden catastrophic event called a supernova . The term supernova nucleosynthesis is used to describe the creation of elements during the explosion of a massive star or white dwarf . The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon , oxygen and silicon . However, most of the nucleosynthesis in the mass range A = 28–56 (from silicon to nickel) is actually caused by the upper layers of the star collapsing onto the core , creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis , is the final epoch of stellar nucleosynthesis. A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe . The need for a physical description was already inspired by the relative abundances of the chemical elements in the Solar System . Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions (see history of nucleosynthesis theory ). [ 4 ] This suggested a natural process that is not random. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. [ 5 ] In 1920, Arthur Eddington , on the basis of the precise measurements of atomic masses by F.W. Aston and a preliminary suggestion by Jean Perrin , proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are produced in stars. [ 6 ] [ 7 ] [ 8 ] This was a preliminary step toward the idea of stellar nucleosynthesis. 
In 1928 George Gamow derived what is now called the Gamow factor , a quantum-mechanical formula yielding the probability for two contiguous nuclei to overcome the electrostatic Coulomb barrier between them and approach each other closely enough to undergo nuclear reaction due to the strong nuclear force which is effective only at very short distances. [ 9 ] : 410 In the following decade the Gamow factor was used by Robert d'Escourt Atkinson and Fritz Houtermans and later by Edward Teller and Gamow himself to derive the rate at which nuclear reactions would occur at the high temperatures believed to exist in stellar interiors. In 1939, in a Nobel lecture entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium. [ 10 ] He defined two processes that he believed to be the sources of energy in stars. The first one, the proton–proton chain reaction , is the dominant energy source in stars with masses up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle , which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars. [ 11 ] : 167 These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. [ 12 ] : 365 Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron . [ 1 ] Hoyle followed that in 1954 with a paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass. [ 2 ] [ 13 ] Hoyle's theory was extended to other processes, beginning with the publication of the 1957 review paper "Synthesis of the Elements in Stars" by Margaret Burbidge , Geoffrey Burbidge , William Alfred Fowler and Fred Hoyle , more commonly referred to as the B 2 FH paper . [ 3 ] This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements heavier than iron by neutron capture. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton . In 1957 Cameron presented his own independent approach to nucleosynthesis, [ 14 ] informed by Hoyle's example, and introduced computers into time-dependent calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the s -process in 1961 [ 15 ] and of the r -process in 1965, [ 16 ] as well as of the burning of silicon into the abundant alpha-particle nuclei and iron-group elements in 1968, [ 17 ] [ 18 ] and discovered radiogenic chronologies [ 19 ] for determining the age of the elements. The most important reactions in stellar nucleosynthesis: Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus [ 20 ] ) is the dominant process that generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be confused with the chemical combustion of hydrogen in an oxidizing atmosphere. 
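The energy released by hydrogen burning follows from the mass difference between four hydrogen atoms and one helium-4 atom. A quick check in Python, using standard atomic masses rather than figures from the sources cited here:

M_H1 = 1.007825     # atomic mass of hydrogen-1, in unified atomic mass units
M_HE4 = 4.002602    # atomic mass of helium-4
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

delta_m = 4 * M_H1 - M_HE4
q_mev = delta_m * U_TO_MEV
print(f"mass defect = {delta_m:.6f} u -> Q ~ {q_mev:.1f} MeV")
print(f"fraction of rest mass released ~ {delta_m / (4 * M_H1) * 100:.2f}%")

About 0.7% of the rest mass is converted to energy; the slightly smaller per-cycle figures quoted below for the proton-proton chain and the CNO cycle reflect the share carried away by neutrinos.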
There are two predominant processes by which stellar hydrogen fusion occurs: the proton–proton chain and the carbon–nitrogen–oxygen (CNO) cycle. Ninety percent of all stars, with the exception of white dwarfs, fuse hydrogen by these two processes. [ 21 ] : 245 In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of reactions that begin with the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron and neutrino. [ 22 ] In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV. [ 22 ] The proton–proton chain has a temperature dependence of approximately T⁴, much weaker than that of the CNO cycle; a 10% rise in temperature would increase energy production by this method by about 46%. As a consequence of this relatively mild dependence, this hydrogen fusion process can occur in up to a third of the star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass, [ 23 ] the energy flux toward the surface is sufficiently low that energy transfer from the core region remains by radiative heat transfer, rather than by convective heat transfer. [ 24 ] As a result, there is little mixing of fresh hydrogen into the core or of fusion products outward. In higher-mass stars, the dominant energy production process is the CNO cycle, a catalytic cycle that uses nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus, as with the proton–proton chain. [ 22 ] During a complete CNO cycle, 25.0 MeV of energy is released; the difference in energy yield compared to the proton–proton chain reaction is accounted for by the energy lost through neutrino emission. [ 22 ] The CNO cycle is highly sensitive to temperature, with rates proportional to the 16th to 20th power of the temperature; a 10% increase in temperature would result in roughly a 350% increase in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the star's mass, so it is strongly concentrated at the core. [ 25 ] This results in such an intense outward energy flux that convective energy transfer becomes more important than radiative transfer. As a result, the core region becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding proton-rich region. [ 26 ] This core convection occurs in stars where the CNO cycle contributes more than 20% of the total energy. As the star ages and the core temperature increases, the region occupied by the convection zone slowly shrinks from 20% of the mass down to the inner 8% of the mass. [ 25 ] The Sun produces on the order of 1% of its energy from the CNO cycle. [ 27 ] [ a ] [ 28 ] : 357 [ 29 ] [ b ] The type of hydrogen fusion process that dominates in a star is determined by the difference in temperature dependence between the two reactions. The proton–proton chain reaction starts at temperatures of about 4 × 10⁶ K, [ 30 ] making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher temperature of approximately 1.6 × 10⁷ K, but thereafter its efficiency increases more rapidly with rising temperature than does that of the proton–proton reaction. [ 31 ] Above approximately 1.7 × 10⁷ K, the CNO cycle becomes the dominant source of energy.
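To make the difference in temperature sensitivity concrete, the short Python sketch below compares the two channels under the simplifying assumption of fixed power laws, ε_pp ∝ T⁴ and ε_CNO ∝ T¹⁶, normalized (purely for illustration) so that the two rates are equal at the crossover temperature of about 1.7 × 10⁷ K quoted above. The normalization and the fixed exponents are illustrative assumptions, not values taken from stellar models.

```python
# Illustrative comparison of proton-proton chain vs CNO-cycle energy generation.
# Assumes simple fixed power-law scalings (eps_pp ~ T^4, eps_CNO ~ T^16),
# normalized so that the two rates cross at T = 1.7e7 K; real exponents vary
# with temperature and composition, so this is a sketch, not a stellar model.

T_CROSS = 1.7e7  # K, approximate temperature where CNO overtakes the pp chain

def eps_pp(T, n=4):
    """Relative pp-chain energy generation rate, scaled to 1.0 at T_CROSS."""
    return (T / T_CROSS) ** n

def eps_cno(T, n=16):
    """Relative CNO-cycle energy generation rate, scaled to 1.0 at T_CROSS."""
    return (T / T_CROSS) ** n

for T in (0.5e7, 1.0e7, 1.57e7, 1.7e7, 2.0e7, 3.0e7):
    ratio = eps_cno(T) / eps_pp(T)
    dominant = "CNO" if ratio > 1 else "pp"
    print(f"T = {T:.2e} K   CNO/pp = {ratio:9.3e}   dominant: {dominant}")

# A 10% temperature rise changes the rates by (1.1**4 - 1) ~ 46% for the pp chain
# and by (1.1**16 - 1) ~ 360% for a 16th-power CNO dependence, consistent with the
# figures in the text. A fixed power law is too crude to reproduce the Sun's ~1%
# CNO share, but it does capture the steep crossover a few million kelvin above
# the solar core temperature.
```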
This temperature — about 1.7 × 10⁷ K — is achieved in the cores of main-sequence stars with at least 1.3 times the mass of the Sun. [ 32 ] The Sun itself has a core temperature of about 1.57 × 10⁷ K. [ 33 ] : 5 As a main-sequence star ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle. [ 25 ] Main-sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the red giant branch with a helium flash from a degenerate helium core, and the star moves to the horizontal branch, where it burns helium in its core. More massive stars ignite helium in their core without a flash and execute a blue loop before reaching the asymptotic giant branch. Such a star initially moves away from the AGB toward bluer colours, then loops back again to what is called the Hayashi track. An important consequence of blue loops is that they give rise to classical Cepheid variables, of central importance in determining distances in the Milky Way and to nearby galaxies. [ 34 ] : 250 Despite the name, stars on a blue loop from the red giant branch are typically not blue in colour but are rather yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and oxygen. The most massive stars become supergiants when they leave the main sequence and quickly start helium fusion as they become red supergiants. After the helium is exhausted in the core of a star, helium fusion will continue in a shell around the carbon–oxygen core. [ 20 ] [ 24 ] In all cases, helium is fused to carbon via the triple-alpha process, i.e., three helium nuclei are transformed into carbon via ⁸Be. [ 35 ] : 30 This can then form oxygen, neon, and heavier elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion pathways. [ 36 ] : 398 The reaction rate density between species A and B, having number densities n A and n B, is given by: r = n A n B k r {\displaystyle r=n_{A}\,n_{B}\,k_{r}} where k r is the reaction rate constant of each single elementary binary reaction composing the nuclear fusion process; k r = ⟨ σ ( v ) v ⟩ {\displaystyle k_{r}=\langle \sigma (v)\,v\rangle } where σ ( v ) is the cross-section at relative velocity v , and averaging is performed over all velocities. Semi-classically, the cross section is proportional to π λ 2 {\textstyle \pi \,\lambda ^{2}} , where λ = h / p {\textstyle \lambda =h/p} is the de Broglie wavelength. Since E = p²/(2m) for the non-relativistic relative motion, the cross section is therefore proportional to h 2 / ( 2 m E ) ∝ 1 / E {\textstyle h^{2}/(2mE)\propto 1/E} . However, since the reaction involves quantum tunneling , there is an exponential damping at low energies that depends on the Gamow factor E G , giving an Arrhenius-type equation : σ ( E ) = S ( E ) E e − E G E . {\displaystyle \sigma (E)={\frac {S(E)}{E}}e^{-{\sqrt {\frac {E_{\text{G}}}{E}}}}.} Here the astrophysical S -factor S ( E ) depends on the details of the nuclear interaction, and has the dimension of an energy multiplied by a cross section.
One then integrates over all energies to get the total reaction rate, using the Maxwell–Boltzmann distribution and the relation: r V = n A n B ∫ 0 ∞ S ( E ) E e − E G E ⋅ 2 E π ( k T ) 3 e − E k T ⋅ 2 E m R d E {\displaystyle {\frac {r}{V}}=n_{A}n_{B}\int _{0}^{\infty }{\Bigl (}{\frac {S(E)}{E}}\,e^{-{\sqrt {\frac {E_{\text{G}}}{E}}}}\cdot 2{\sqrt {\frac {E}{\pi (kT)^{3}}}}\,e^{-{\frac {E}{kT}}}\,\cdot {\sqrt {\frac {2E}{m_{\text{R}}}}}{\Bigr )}dE} where k ≈ 86.17 μeV/K is the Boltzmann constant and m R = m A m B m A + m B {\displaystyle m_{\text{R}}={\frac {m_{A}m_{B}}{m_{A}+m_{B}}}} is the reduced mass of the pair. The integrand equals S ( E ) e − E G E ⋅ 2 2 / π ( k T ) − 3 / 2 e − E k T / m R . {\displaystyle S(E)\,e^{-{\sqrt {\frac {E_{\text{G}}}{E}}}}\cdot 2{\sqrt {2/\pi }}(kT)^{-3/2}\,e^{-{\frac {E}{kT}}}\,/{\sqrt {m_{\text{R}}}}.} Since this integrand has an exponential damping at high energies of the form ∼ e − E k T {\textstyle \sim e^{-{\frac {E}{kT}}}} and at low energies from the Gamow factor, the integral almost vanishes everywhere except around the peak at E 0 , called the Gamow peak. [ 37 ] : 185 There: − ∂ ∂ E ( E G E + E k T ) = 0 {\displaystyle -{\frac {\partial }{\partial E}}\left({\sqrt {\frac {E_{\text{G}}}{E}}}+{\frac {E}{kT}}\right)\,=\,0} Thus: E 0 = ( 1 2 k T E G ) 2 3 {\displaystyle E_{0}=\left({\frac {1}{2}}kT{\sqrt {E_{\text{G}}}}\right)^{\frac {2}{3}}} and E G = E 0 3 / 2 1 2 k T {\displaystyle {\sqrt {E_{\text{G}}}}={\frac {E_{0}^{3/2}}{{\tfrac {1}{2}}kT}}} The exponent can then be approximated around E 0 as: e − ( E k T + E G E ) ≈ e − 3 E 0 k T e − 3 ( E − E 0 ) 2 4 E 0 k T = e − 3 E 0 k T ( 1 + ( E − E 0 2 E 0 ) 2 ) = e − 3 E 0 k T ( 1 + ( E / E 0 − 1 ) 2 / 4 ) {\displaystyle e^{-({\frac {E}{kT}}+{\sqrt {\frac {E_{\text{G}}}{E}}})}\approx e^{-{\frac {3E_{0}}{kT}}}e^{{\bigl (}-{\frac {3(E-E_{0})^{2}}{4E_{0}kT}}{\bigr )}}=e^{-{\frac {3E_{0}}{kT}}{\bigl (}1+({\frac {E-E_{0}}{2E_{0}}})^{2}{\bigr )}}=e^{-{\frac {3E_{0}}{kT}}{\bigl (}1+(E/E_{0}-1)^{2}/4{\bigr )}}} And the reaction rate is approximated as: [ 38 ] r V ≈ n A n B 4 2 / 3 m R E 0 S ( E 0 ) k T e − 3 E 0 k T {\displaystyle {\frac {r}{V}}\approx n_{A}\,n_{B}\,{\frac {4{\sqrt {2/3}}}{\sqrt {m_{\text{R}}}}}\,{\sqrt {E_{0}}}\,{\frac {S(E_{0})}{kT}}\,e^{-{\frac {3E_{0}}{kT}}}} Values of S ( E 0 ) are typically 10⁻³ – 10³ keV · b , but are damped by a huge factor when a beta decay is involved, due to the relation between the intermediate bound state (e.g. diproton ) half-life and the beta decay half-life, as in the proton–proton chain reaction . Note that typical core temperatures in main-sequence stars (such as the Sun) give kT of the order of 1 keV: [ 39 ] for the Sun's core temperature, kT ≈ 2.17 × 10⁻¹⁶ J ≈ 1.4 keV. [ 40 ] : ch. 3 Thus, the limiting reaction in the CNO cycle , proton capture by ¹⁴₇N , has S ( E 0 ) ~ S (0) = 3.5 keV·b, while the limiting reaction in the proton–proton chain reaction , the creation of deuterium from two protons, has a much lower S ( E 0 ) ~ S (0) = 4×10⁻²² keV·b. [ 41 ] [ 42 ] Incidentally, since the former reaction has a much higher Gamow factor, and due to the relative abundance of elements in typical stars, the two reaction rates become equal at a temperature value that lies within the core temperature range of main-sequence stars. [ 43 ]
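As a rough numerical illustration of the derivation above, the Python sketch below evaluates the Gamow peak energy E₀ = (½ kT √E_G)^(2/3) and the exponential factor e^(−3E₀/kT) for the p + p and p + ¹⁴N reactions at a solar core temperature. The Gamow energies are computed from the textbook estimate E_G = 2 m_R c² (π α Z₁Z₂)², and the value kT ≈ 1.35 keV is taken as an assumption for the Sun's core; treat the numbers as order-of-magnitude only.

```python
import math

# Order-of-magnitude illustration of the Gamow peak for two key reactions.
# Assumed inputs: kT ~ 1.35 keV (solar core, T ~ 1.57e7 K) and Gamow energies
# E_G = 2 * m_R * c^2 * (pi * alpha * Z1 * Z2)^2; textbook-level estimates,
# not precision nuclear data.

ALPHA = 1.0 / 137.036          # fine-structure constant
M_P = 938.272e3                # proton rest energy in keV

def gamow_energy(Z1, Z2, A1, A2):
    """Gamow energy E_G in keV for nuclei with charges Z and mass numbers A."""
    m_r = M_P * (A1 * A2) / (A1 + A2)      # reduced rest energy in keV (approx.)
    return 2.0 * m_r * (math.pi * ALPHA * Z1 * Z2) ** 2

def gamow_peak(E_G, kT):
    """Gamow peak energy E0 = ((kT/2)**2 * E_G)**(1/3), in keV."""
    return ((0.5 * kT) ** 2 * E_G) ** (1.0 / 3.0)

kT = 1.35  # keV, roughly the Sun's central temperature

for label, Z1, Z2, A1, A2 in (("p + p", 1, 1, 1, 1), ("p + 14N", 1, 7, 1, 14)):
    E_G = gamow_energy(Z1, Z2, A1, A2)
    E0 = gamow_peak(E_G, kT)
    suppression = math.exp(-3.0 * E0 / kT)   # the e^(-3 E0 / kT) factor in the rate
    print(f"{label:8s}  E_G ~ {E_G:9.1f} keV   E0 ~ {E0:5.1f} keV   "
          f"exp(-3E0/kT) ~ {suppression:.2e}")
```

The much stronger exponential suppression for p + ¹⁴N illustrates why, despite its enormously larger S-factor, the CNO cycle only overtakes the proton–proton chain at temperatures somewhat above the Sun's core temperature.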
https://en.wikipedia.org/wiki/Stellar_nucleosynthesis
Stellar parallax is the apparent shift of position ( parallax ) of any nearby star (or other object) against the background of distant stars. By extension, it is a method for determining the distance to the star through trigonometry, the stellar parallax method . Created by the different orbital positions of Earth , the extremely small observed shift is largest at time intervals of about six months, when Earth arrives at opposite sides of the Sun in its orbit, giving a baseline (the shortest side of the triangle made by a star to be observed and two positions of Earth) distance of about two astronomical units between observations. The parallax itself is considered to be half of this maximum, about equivalent to the observational shift that would occur due to the different positions of Earth and the Sun, a baseline of one astronomical unit (AU). Stellar parallax is so difficult to detect that its existence was the subject of much debate in astronomy for hundreds of years. Thomas Henderson , Friedrich Georg Wilhelm von Struve , and Friedrich Bessel made the first successful parallax measurements in 1832–1838, for the stars Alpha Centauri , Vega , and 61 Cygni . Stellar parallax is so small that it was unobservable until the 19th century, and its apparent absence was used as a scientific argument against heliocentrism during the early modern age . It is clear from Euclid 's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons, such gigantic distances involved seemed entirely implausible: it was one of Tycho Brahe 's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn and the eighth sphere (the fixed stars). [ 1 ] James Bradley first tried to measure stellar parallaxes in 1729. The stellar movement proved too insignificant for his telescope , but he instead discovered the aberration of light [ 2 ] and the nutation of Earth's axis, and catalogued 3,222 stars. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. In the second quarter of the 19th century, technological progress reached to the level which provided sufficient accuracy and precision for stellar parallax measurements. Giuseppe Calandrelli noted stellar parallax in 1805-6 and came up with a 4-second value for the star Vega which was a gross overestimate. [ 3 ] The first successful stellar parallax measurements were done by Thomas Henderson in Cape Town , South Africa , in 1832–1833, where he measured parallax of one of the closest stars, Alpha Centauri . [ 4 ] [ 5 ] Between 1835 and 1836, astronomer Friedrich Georg Wilhelm von Struve at the Dorpat university observatory measured the distance of Vega , publishing his results in 1837. [ 6 ] Friedrich Bessel , a friend of Struve, carried out an intense observational campaign in 1837–1838 at Koenigsberg Observatory for the star 61 Cygni using a heliometer , and published his results in 1838. [ 7 ] [ 8 ] Henderson published his results in 1839, after returning from South Africa. Those three results, two of which were measured with the best instruments at the time (Fraunhofer great refractor used by Struve and Fraunhofer heliometer by Bessel) were the first ones in history to establish the reliable distance scale to the stars. 
[ 9 ] A large heliometer was installed at Kuffner Observatory (In Vienna) in 1896, and was used for measuring the distance to other stars by trigonometric parallax. [ 10 ] By 1910 it had computed 16 parallax distances to other stars, out of only 108 total known to science at that time. [ 10 ] Being very difficult to measure, only about 60 stellar parallaxes had been obtained by the end of the 19th century, mostly by use of the filar micrometer . Astrographs using astronomical photographic plates sped the process in the early 20th century. Automated plate-measuring machines [ 11 ] and more sophisticated computer technology of the 1960s allowed more efficient compilation of star catalogues . In the 1980s, charge-coupled devices (CCDs) replaced photographic plates and reduced optical uncertainties to one milliarcsecond. [ citation needed ] Stellar parallax remains the standard for calibrating other measurement methods (see Cosmic distance ladder ). Accurate calculations of distance based on stellar parallax require a measurement of the distance from Earth to the Sun, now known to exquisite accuracy based on radar reflection off the surfaces of planets. [ 12 ] In 1989, the satellite Hipparcos was launched primarily for obtaining parallaxes and proper motions of nearby stars, increasing the number of stellar parallaxes measured to milliarcsecond accuracy a thousandfold. Even so, Hipparcos is only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy . The Hubble telescope WFC3 now has a precision of 20 to 40 microarcseconds, enabling reliable distance measurements up to 3,066 parsecs (10,000 ly) for a small number of stars. [ 14 ] This gives more accuracy to the cosmic distance ladder and improves the knowledge of distances in the Universe, based on the dimensions of the Earth's orbit. As distances between the two points of observation are increased, the visual effect of the parallax is likewise rendered more visible. NASA 's New Horizons spacecraft performed the first interstellar parallax measurement on 22 April 2020, taking images of Proxima Centauri and Wolf 359 in conjunction with earth-based observatories. The relative proximity of the two stars combined with the 6.5 billion kilometer (about 43 AU) distance of the spacecraft from Earth yielded a discernible parallax of arcminutes, allowing the parallax to be seen visually without instrumentation. [ 15 ] The European Space Agency 's Gaia mission , launched 19 December 2013, is expected to measure parallax angles to an accuracy of 10 micro arcseconds for all moderately bright stars, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. [ 17 ] Data Release 2 in 2018 claims mean errors for the parallaxes of 15th magnitude and brighter stars of 20–40 microarcseconds. [ 18 ] Very long baseline interferometry in the radio band can produce images with angular resolutions of about 1 milliarcsecond, and hence, for bright radio sources, the precision of parallax measurements made in the radio can easily exceed [ dubious – discuss ] those of optical telescopes like Gaia. These measurements tend to be sensitivity limited, and need to be made one at a time, so the work is generally done only for sources like pulsars and X-ray binaries, where the radio emission is strong relative to the optical emission. 
[ citation needed ] Throughout the year the position of a star S is noted in relation to other stars in its apparent neighborhood: Stars that did not seem to move in relation to each other are used as reference points to determine the path of S. The observed path is an ellipse: the projection of Earth's orbit around the Sun through S onto the distant background of non-moving stars. The farther S is removed from Earth's orbital axis, the greater the eccentricity of the path of S. The center of the ellipse corresponds to the point where S would be seen from the Sun: The plane of Earth's orbit is at an angle to a line from the Sun through S. The vertices v and v' of the elliptical projection of the path of S are projections of positions of Earth E and E ′ such that a line E-E ′ intersects the line Sun-S at a right angle; the triangle created by points E, E ′ and S is an isosceles triangle with the line Sun-S as its symmetry axis. Any stars that did not move between observations are, for the purpose of the accuracy of the measurement, infinitely far away. This means that the distance of the movement of the Earth compared to the distance to these infinitely far away stars is, within the accuracy of the measurement, 0. Thus a line of sight from Earth's first position E to vertex v will be essentially the same as a line of sight from the Earth's second position E ′ to the same vertex v, and will therefore run parallel to it - impossible to depict convincingly in an image of limited size: Since line E ′ -v ′ is a transversal in the same (approximately Euclidean) plane as parallel lines E-v and E ′ -v, it follows that the corresponding angles of intersection of these parallel lines with this transversal are congruent: the angle θ between lines of sight E-v and E ′ -v ′ is equal to the angle θ between E ′ -v and E ′ -v ′ , which is the angle θ between observed positions of S in relation to its apparently unmoving stellar surroundings. The distance d from the Sun to S now follows from simple trigonometry: so that d = E-Sun / tan( ⁠ 1 / 2 ⁠ θ), where E-Sun is 1 AU. The more distant an object is, the smaller its parallax. Stellar parallax measures are given in the tiny units of arcseconds , or even in thousandths of arcseconds (milliarcseconds). The distance unit parsec is defined as the length of the leg of a right triangle adjacent to the angle of one arcsecond at one vertex , where the other leg is 1 AU long. Because stellar parallaxes and distances all involve such skinny right triangles , a convenient trigonometric approximation can be used to convert parallaxes (in arcseconds) to distance (in parsecs). The approximate distance is simply the reciprocal of the parallax: d (pc) ≈ 1 / p (arcsec) . {\displaystyle d{\text{ (pc)}}\approx 1/p{\text{ (arcsec)}}.} For example, Proxima Centauri (the nearest star to Earth other than the Sun), whose parallax is 0.7685, is 1 / 0.7685 parsecs = 1.301 parsecs (4.24 ly) distant. [ 19 ] Stellar parallax is most often measured using annual parallax , defined as the difference in position of a star as seen from Earth and Sun, i.e. the angle subtended at a star by the mean radius of Earth's orbit around the Sun. The parsec (3.26 light-years ) is defined as the distance for which the annual parallax is 1 arcsecond . Annual parallax is normally measured by observing the position of a star at different times of the year as Earth moves through its orbit. The angles involved in these calculations are very small and thus difficult to measure. 
The nearest star to the Sun (and also the star with the largest parallax), Proxima Centauri , has a parallax of 0.7685 ± 0.0002 arcsec. [ 19 ] This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away. The geometry is that of a long, thin right triangle: tan ⁡ p = 1 au / d {\displaystyle \tan p=1\,{\text{au}}/d} , where p {\displaystyle p} is the parallax, 1 au (149,600,000 km) is approximately the average distance from the Sun to Earth, and d {\displaystyle d} is the distance to the star. Using the small-angle approximation tan ⁡ p ≈ p {\displaystyle \tan p\approx p} (valid when the angle is small compared to 1 radian ), and the fact that one radian equals about 206,265 arcseconds, the parallax, measured in arcseconds, is p ≈ ( 1 au / d ) × 206 265 {\displaystyle p\approx (1\,{\text{au}}/d)\times 206\,265} . If the parallax is 1", then the distance is 206,265 au, or about 3.26 light-years. This defines the parsec , a convenient unit for measuring distance using parallax. Therefore, the distance, measured in parsecs, is simply d = 1 / p {\displaystyle d=1/p} , when the parallax is given in arcseconds. [ 20 ] Precise parallax measurements of distance have an associated error. This error in the measured parallax angle does not translate directly into an error for the distance, except for relatively small errors. The reason for this is that an error toward a smaller angle results in a greater error in distance than an error toward a larger angle. However, an approximation of the distance error can be computed from δ d = δ p / p 2 {\displaystyle \delta d=\delta p/p^{2}} , where d is the distance and p is the parallax. The approximation is far more accurate for parallax errors that are small relative to the parallax than for relatively large errors. For meaningful results in stellar astronomy , Dutch astronomer Floor van Leeuwen recommends that the parallax error be no more than 10% of the total parallax when computing this error estimate. [ 21 ]
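The two relations above, d ≈ 1/p and δd ≈ δp/p², are simple enough to apply directly; the Python sketch below reproduces the Proxima Centauri numbers quoted in this article. It is a minimal illustration of the stated formulas, nothing more.

```python
def parallax_to_distance(p_arcsec):
    """Distance in parsecs from an annual parallax in arcseconds (d ~ 1/p)."""
    return 1.0 / p_arcsec

def distance_error(p_arcsec, dp_arcsec):
    """Approximate distance error in parsecs, delta_d ~ delta_p / p**2.

    Only meaningful when the parallax error is small relative to the parallax
    (van Leeuwen's ~10% guideline); for large relative errors the distance
    uncertainty becomes strongly asymmetric.
    """
    return dp_arcsec / p_arcsec ** 2

# Proxima Centauri: p = 0.7685 +/- 0.0002 arcsec
p, dp = 0.7685, 0.0002
d = parallax_to_distance(p)       # ~1.301 pc
dd = distance_error(p, dp)        # ~0.0003 pc
print(f"d = {d:.3f} pc ({d * 3.2616:.2f} ly), sigma_d ~ {dd:.4f} pc")
```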
https://en.wikipedia.org/wiki/Stellar_parallax
Stellar pulsations are caused by expansions and contractions in the outer layers as a star seeks to maintain equilibrium . These fluctuations in stellar radius cause corresponding changes in the luminosity of the star . Astronomers are able to deduce this mechanism by measuring the spectrum and observing the Doppler effect . [ 1 ] Many intrinsic variable stars that pulsate with large amplitudes , such as the classical Cepheids , RR Lyrae stars and large-amplitude Delta Scuti stars, show regular light curves . This regular behavior is in contrast with the variability of stars that lie parallel to and to the high-luminosity/low-temperature side of the classical variable stars in the Hertzsprung–Russell diagram . These giant stars are observed to undergo pulsations ranging from weak irregularity, when one can still define an average cycling time or period (as in most RV Tauri and semiregular variables ), to the near absence of repetitiveness in the irregular variables. The W Virginis variables are at the interface: the short-period ones are regular, while the longer-period ones first show relatively regular alternations in the pulsation cycles, followed by the onset of mild irregularity, as in the RV Tauri stars into which they gradually morph as their periods get longer. [ 2 ] [ 3 ] Stellar evolution and pulsation theories suggest that these irregular stars have much higher luminosity-to-mass (L/M) ratios. Many stars are non-radial pulsators, which have smaller fluctuations in brightness than those of the regular variables used as standard candles. [ 4 ] [ 5 ] A prerequisite for irregular variability is that the star be able to change its amplitude on the time scale of a period. In other words, the coupling between pulsation and heat flow must be sufficiently large to allow such changes. This coupling is measured by the relative linear growth or decay rate κ ( kappa ) of the amplitude of a given normal mode in one pulsation cycle (period). For the regular variables (Cepheids, RR Lyrae, etc.) numerical stellar modeling and linear stability analysis show that κ is at most of the order of a couple of percent for the relevant, excited pulsation modes. On the other hand, the same type of analysis shows that for the high L/M models κ is considerably larger (30% or higher). For the regular variables the small relative growth rates κ imply that there are two distinct time scales, namely the period of oscillation and the longer time associated with the amplitude variation. Mathematically speaking, the dynamics has a center manifold , or more precisely a near center manifold. In addition, it has been found that the stellar pulsations are only weakly nonlinear in the sense that their description can be limited to low powers of the pulsation amplitudes. These two properties are very general and occur for oscillatory systems in many other fields such as population dynamics , oceanography , plasma physics , etc. The weak nonlinearity and the long time scale of the amplitude variation allow the temporal description of the pulsating system to be simplified to that of only the pulsation amplitudes, thus eliminating motion on the short time scale of the period. The result is a description of the system in terms of amplitude equations that are truncated to low powers of the amplitudes. Such amplitude equations have been derived by a variety of techniques, e.g.
the Poincaré–Lindstedt method of elimination of secular terms, or the multi-time asymptotic perturbation method, [ 6 ] [ 7 ] [ 8 ] and more generally, normal form theory. [ 9 ] [ 10 ] [ 11 ] For example, in the case of two non-resonant modes, a situation generally encountered in RR Lyrae variables, the temporal evolution of the amplitudes A 1 and A 2 of the two normal modes 1 and 2 is governed by the following set of ordinary differential equations d A 1 d t = κ 1 A 1 + ( Q 11 A 1 2 + Q 12 A 2 2 ) A 1 d A 2 d t = κ 2 A 2 + ( Q 21 A 1 2 + Q 22 A 2 2 ) A 2 {\displaystyle {\begin{aligned}{\frac {dA_{1}}{dt}}&=\kappa _{1}A_{1}+\left(Q_{11}A_{1}^{2}+Q_{12}A_{2}^{2}\right)A_{1}\\[1ex]{\frac {dA_{2}}{dt}}&=\kappa _{2}A_{2}+\left(Q_{21}A_{1}^{2}+Q_{22}A_{2}^{2}\right)A_{2}\end{aligned}}} where the Q ij are the nonresonant coupling coefficients. [ 12 ] [ 13 ] These amplitude equations have been limited to the lowest order nontrivial nonlinearities. The solutions of interest in stellar pulsation theory are the asymptotic solutions (as time tends towards infinity) because the time scale for the amplitude variations is generally very short compared to the evolution time scale of the star which is the nuclear burning time scale . The equations above have fixed point solutions with constant amplitudes, corresponding to single-mode (A 1 ≠ {\displaystyle \neq } 0, A 2 = 0) or (A 1 = 0, A 2 ≠ {\displaystyle \neq } 0) and double-mode (A 1 ≠ {\displaystyle \neq } 0, A 2 ≠ {\displaystyle \neq } 0) solutions. These correspond to singly periodic and doubly periodic pulsations of the star. No other asymptotic solution of the above equations exists for physical (i.e., negative) coupling coefficients. For resonant modes the appropriate amplitude equations have additional terms that describe the resonant coupling among the modes. The Hertzsprung progression in the light curve morphology of classical (singly periodic) Cepheids is the result of a well-known 2:1 resonance among the fundamental pulsation mode and the second overtone mode. [ 14 ] The amplitude equation can be further extended to nonradial stellar pulsations. [ 15 ] [ 16 ] In the overall analysis of pulsating stars, the amplitude equations allow the bifurcation diagram between possible pulsational states to be mapped out. In this picture, the boundaries of the instability strip where pulsation sets in during the star's evolution correspond to a Hopf bifurcation . [ 17 ] The existence of a center manifold eliminates the possibility of chaotic (i.e. irregular) pulsations on the time scale of the period. Although resonant amplitude equations are sufficiently complex to also allow for chaotic solutions, this is a very different chaos because it is in the temporal variation of the amplitudes and occurs on a long time scale. While long term irregular behavior in the temporal variations of the pulsation amplitudes is possible when amplitude equations apply, this is not the general situation. Indeed, for the majority of the observations and modeling, the pulsations of these stars occur with constant Fourier amplitudes, leading to regular pulsations that can be periodic or multi-periodic (quasi-periodic in the mathematical literature). The light curves of intrinsic variable stars with large amplitudes have been known for centuries to exhibit behavior that goes from extreme regularity, as for the classical Cepheids and the RR Lyrae stars, to extreme irregularity, as for the so-called Irregular variables . 
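The selection of single-mode versus double-mode pulsation by the fixed points of the two-mode amplitude equations given above can be seen by integrating them directly. The Python sketch below uses made-up but physically signed coefficients (positive growth rates κ, negative coupling coefficients Q_ij) chosen only for illustration; with real stellar-model coefficients the same code would map out which asymptotic state is reached.

```python
# Forward-Euler integration of the two-mode amplitude equations
#   dA1/dt = kappa1*A1 + (Q11*A1^2 + Q12*A2^2)*A1
#   dA2/dt = kappa2*A2 + (Q21*A1^2 + Q22*A2^2)*A2
# Coefficient values are illustrative assumptions, chosen only to show the
# relaxation toward a fixed point (here a single-mode solution).

kappa1, kappa2 = 0.05, 0.02      # linear growth rates (both modes linearly excited)
Q11, Q12 = -1.0, -2.0            # saturation / coupling coefficients (negative)
Q21, Q22 = -2.0, -1.0

A1, A2 = 0.01, 0.01              # small initial amplitudes
dt, steps = 0.1, 20000           # integrate far past the amplitude-variation time scale

for _ in range(steps):
    dA1 = kappa1 * A1 + (Q11 * A1**2 + Q12 * A2**2) * A1
    dA2 = kappa2 * A2 + (Q21 * A1**2 + Q22 * A2**2) * A2
    A1 += dt * dA1
    A2 += dt * dA2

print(f"asymptotic amplitudes: A1 = {A1:.4f}, A2 = {A2:.3e}")
# With these coefficients mode 1 saturates near sqrt(-kappa1/Q11) ~ 0.224 while
# mode 2 is driven to zero: a singly periodic pulsation. Other coefficient
# choices yield the double-mode (A1, A2 both nonzero) fixed point instead.
```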
In the Population II stars this irregularity gradually increases from the low-period W Virginis variables through the RV Tauri variables into the regime of the semiregular variables . Low-dimensional chaos in stellar pulsations is the current interpretation of this established phenomenon. The regular behavior of the Cepheids has been successfully modeled with numerical hydrodynamics since the 1960s, [ 18 ] [ 19 ] and from a theoretical point of view it is easily understood as due to the presence of a center manifold which arises because of the weakly dissipative nature of the dynamical system . [ 20 ] This, and the fact that the pulsations are weakly nonlinear, allows a description of the system in terms of amplitude equations [ 21 ] [ 22 ] and a construction of the bifurcation diagram (see also bifurcation theory ) of the possible types of pulsation (or limit cycles ), such as fundamental mode pulsation, first or second overtone pulsation, or more complicated double-mode pulsations in which several modes are excited with constant amplitudes. The boundaries of the instability strip where pulsation sets in during the star's evolution correspond to a Hopf bifurcation . In contrast, the irregularity of the large amplitude Population II stars is more challenging to explain. The variation of the pulsation amplitude over one period implies large dissipation, and therefore there exists no center manifold. Various mechanisms have been proposed, but all have been found lacking. One suggests the presence of several closely spaced pulsation frequencies that would beat against each other, but no such frequencies exist in the appropriate stellar models. Another, more interesting suggestion is that the variations are of a stochastic nature, [ 23 ] but no mechanism has been proposed or exists that could provide the energy for such large observed amplitude variations. It is now established that the mechanism behind the irregular light curves is an underlying low-dimensional chaotic dynamics (see also Chaos theory ). This conclusion is based on two types of studies. Computational fluid dynamics simulations of the pulsations of sequences of W Virginis stellar models exhibit two approaches to irregular behavior that are a clear signature of low-dimensional chaos . The first indication comes from first return maps in which one plots one maximum radius, or any other suitable variable, versus the next one. The sequence of models shows a period doubling bifurcation , or cascade, leading to chaos. The near-quadratic shape of the map is indicative of chaos and implies an underlying horseshoe map . [ 24 ] [ 25 ] Other sequences of models follow a somewhat different route, but also one leading to chaos, namely the Pomeau–Manneville or tangent bifurcation route. [ 26 ] [ 27 ] A similar visualization of the period doubling cascade to chaos can be made for a sequence of stellar models that differ by their average surface temperature T, by plotting triplets of values of the stellar radius (R i , R i+1 , R i+2 ), where the indices i , i+1 , i+2 indicate successive time intervals. The presence of low-dimensional chaos is also confirmed by another, more sophisticated, analysis of the model pulsations which extracts the lowest unstable periodic orbits and examines their topological organization (twisting). The underlying attractor is found to be banded like the Rössler attractor , with, however, an additional twist in the band.
[ 28 ] The method of global flow reconstruction [ 29 ] uses a single observed signal {s i } to infer properties of the dynamical system that generated it. First, N-dimensional 'vectors' S i = s i , s i − 1 , s i − 2 , . . . s i − N + 1 {\displaystyle S_{i}=s_{i},s_{i-1},s_{i-2},...s_{i-N+1}} are constructed. The next step consists in finding an expression for the nonlinear evolution operator M {\displaystyle M} that takes the system from time i {\displaystyle i} to time i + 1 {\displaystyle i+1} , i.e., S i + 1 = M ( S i ) {\displaystyle S_{i+1}=M(S_{i})} . Takens' theorem guarantees that under very general circumstances the topological properties of this reconstructed evolution operator are the same as those of the physical system, provided the embedding dimension N is large enough. Thus from the knowledge of a single observed variable one can infer properties about the real physical system, which is governed by a number of independent variables. This approach has been applied to the AAVSO data for the star R Scuti . [ 30 ] [ 31 ] It could be inferred that the irregular pulsations of this star arise from an underlying 4-dimensional dynamics. Phrased differently, this says that from any 4 neighboring observations one can predict the next one. From a physical point of view it says that there are 4 independent variables that describe the dynamics of the system. The method of false nearest neighbors corroborates an embedding dimension of 4. The fractal dimension of the dynamics of R Scuti, as inferred from the computed Lyapunov exponents , lies between 3.1 and 3.2. From an analysis of the fixed points of the evolution operator a clear physical picture can be inferred, namely that the pulsations arise from the excitation of an unstable pulsation mode that couples nonlinearly to a second, stable pulsation mode which is in a 2:1 resonance with the first one, a scenario described by the Shilnikov theorem. [ 32 ] This resonance mechanism is not limited to R Scuti, but has been found to hold for several other stars for which the observational data are sufficiently good. [ 33 ]
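The first steps of such an analysis, building N-dimensional delay vectors from a single observed signal and collecting successive maxima for a first-return map, are straightforward to code. The Python sketch below uses a synthetic signal and an embedding dimension of 4 (the value inferred for R Scuti) purely as placeholders; a real application would use AAVSO magnitudes and would go on to fit the evolution operator M, which is the hard part and is not shown here.

```python
import math

def delay_vectors(s, N):
    """Build N-dimensional delay vectors S_i = (s_i, s_{i-1}, ..., s_{i-N+1})."""
    return [tuple(s[i - j] for j in range(N)) for i in range(N - 1, len(s))]

def successive_maxima(s):
    """Local maxima of the signal, the raw material of a first-return map."""
    return [s[i] for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]

# Synthetic stand-in for an observed light curve (two incommensurate frequencies
# plus a weak nonlinearity); real applications would use AAVSO photometry.
signal = [math.sin(0.31 * t) + 0.4 * math.sin(0.73 * t) ** 3 for t in range(2000)]

vectors = delay_vectors(signal, N=4)          # embedding dimension 4, as for R Scuti
maxima = successive_maxima(signal)
pairs = list(zip(maxima[:-1], maxima[1:]))    # points (M_i, M_{i+1}) of the return map

print(len(vectors), "delay vectors;", len(pairs), "first-return pairs")
print("example return-map point:", pairs[0])
```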
https://en.wikipedia.org/wiki/Stellar_pulsation
A stellarium is a three-dimensional map of the stars, typically centered on Earth . They are common fixtures at planetariums , where they illustrate the local deep space. Older examples were normally built using small colored balls or lights on support rods (painted black to make them less obvious), but more recent examples use a variety of projection techniques instead. This astronomy -related article is a stub . You can help Wikipedia by expanding it . This cartography or mapping term article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stellarium
In geometry , a stellation diagram or stellation pattern is a two-dimensional diagram in the plane of some face of a polyhedron , showing lines where other face planes intersect with this one. The lines cause 2D space to be divided up into regions. Regions not intersected by any further lines are called elementary regions. Usually unbounded regions are excluded from the diagram, along with any portions of the lines extending to infinity . Each elementary region represents a top face of one cell , and a bottom face of another. A collection of these diagrams, one for each face type, can be used to represent any stellation of the polyhedron, by shading the regions which should appear in that stellation. A stellation diagram exists for every face of a given polyhedron. In face transitive polyhedra, symmetry can be used to require all faces have the same diagram shading. Semiregular polyhedra like the Archimedean solids will have different stellation diagrams for different kinds of faces. This geometry-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stellation_diagram
Stelpaviricetes is a class of non-enveloped , positive-strand RNA viruses which infect plants and animals . [ 1 ] Characteristic of the group is a VPg protein attached to the 5'-end of the genome and a conserved 3C-like protease from the PA clan of proteases for processing the translated polyprotein . [ 2 ] [ 3 ] The name of the group is a syllabic abbreviation of the member orders " Stel lavirales" and " Pa tatavirales", with the suffix -viricetes denoting a virus class . [ 4 ] The following orders are recognized: Patatavirales and Stellavirales . This virus -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stelpaviricetes
A stem cell line is a group of stem cells that is cultured in vitro and can be propagated indefinitely. Stem cell lines are derived from either animal or human tissues and come from one of three sources: embryonic stem cells , adult stem cells , or induced pluripotent stem cells . They are commonly used in research and regenerative medicine. By definition, stem cells possess two properties: (1) they can self-renew, which means that they can divide indefinitely while remaining in an undifferentiated state; and (2) they are pluripotent or multipotent , which means that they can differentiate to form specialized cell types. Due to the self-renewal capacity of stem cells, a stem cell line can be cultured in vitro indefinitely. A stem-cell line is distinctly different from an immortalized cell line , such as the HeLa line. While stem cells can propagate indefinitely in culture due to their inherent properties, immortalized cells would not normally divide indefinitely but have gained this ability due to mutation. Immortalized cell lines can be generated from cells isolated from tumors, or mutations can be introduced to make the cells immortal. [ 1 ] A stem cell line is also distinct from primary cells. Primary cells are cells that have been isolated and then used immediately. Primary cells cannot divide indefinitely and thus cannot be cultured for long periods of time in vitro. [ citation needed ] An embryonic stem cell line is created from cells derived from the inner cell mass of a blastocyst , an early-stage, pre-implantation embryo . [ 2 ] In humans, the blastocyst stage occurs 4–5 days post fertilization. To create an embryonic stem cell line, the inner cell mass is removed from the blastocyst, separated from the trophectoderm , and cultured on a layer of supportive cells in vitro. In the derivation of human embryonic stem cell lines, embryos left over from in vitro fertilization (IVF) procedures are used. The fact that the blastocyst is destroyed during the process has raised controversy and ethical concerns. Embryonic stem cells are pluripotent , meaning they can differentiate to form all cell types in the body. In vitro, embryonic stem cells can be cultured under defined conditions to keep them in their pluripotent state, or they can be stimulated with biochemical and physical cues to differentiate them into different cell types. Adult stem cells are found in juvenile or adult tissues. Adult stem cells are multipotent : they can generate a limited number of differentiated cell types (unlike pluripotent embryonic stem cells). Types of adult stem cells include hematopoietic stem cells and mesenchymal stem cells . Hematopoietic stem cells are found in the bone marrow and generate all cells of the immune system and all blood cell types. Mesenchymal stem cells are found in umbilical cord blood, amniotic fluid, and adipose tissue and can generate a number of cell types, including osteoblasts , chondrocytes , and adipocytes . In medicine, adult stem cells are most commonly used in bone marrow transplants to treat many bone and blood cancers as well as some autoimmune diseases. [ 3 ] (See Hematopoietic stem cell transplantation ) Of the types of adult stem cells that have successfully been isolated and identified, only mesenchymal stem cells can be grown in culture for long periods of time. Other adult stem cell types, such as hematopoietic stem cells, are difficult to grow and propagate in vitro.
[ 4 ] Identifying methods for maintaining hematopoietic stem cells in vitro is an active area of research. Thus, while mesenchymal stem cell lines exist, other types of adult stem cells that are grown in vitro can better be classified as primary cells. Induced pluripotent stem cell (iPSC) lines are pluripotent stem cells that have been generated from adult/somatic cells. The method of generating iPSCs was developed by Shinya Yamanaka 's lab in 2006; his group demonstrated that the introduction of four specific genes could induce somatic cells to revert to a pluripotent stem cell state. [ 5 ] Like embryonic stem-cell lines, iPSC lines are pluripotent, but they can be derived without the use of human embryos, a practice that has raised ethical concerns. Furthermore, patient-specific iPSC lines can be generated, that is, cell lines that are genetically matched to an individual. Patient-specific iPSC lines have been generated for the purposes of studying diseases [ 6 ] and for developing patient-specific medical therapies. Stem-cell lines are grown and maintained at specific temperature and atmospheric conditions (37 degrees Celsius and 5% CO 2 ) in incubators. Culture conditions such as the cell growth medium and the surface on which cells are grown vary widely depending on the specific stem cell line. Different biochemical factors can be added to the medium to control the cell phenotype, for example to keep stem cells in a pluripotent state or to differentiate them to a specific cell type. Stem-cell lines are used in research and regenerative medicine. They can be used to study stem-cell biology and early human development. In the field of regenerative medicine, it has been proposed that stem cells be used in cell-based therapies to replace injured or diseased cells and tissues. Examples of conditions that researchers are working to develop stem-cell-based treatments for include neurodegenerative diseases, diabetes, and spinal cord injuries. Stem cells can also serve as an ideal in vitro platform to study developmental changes at the molecular level. Neural stem cells (NSCs), for example, have been used as a model to study the mechanisms behind the differentiation and maturation of cells of the central nervous system (CNS). These studies have been gaining attention recently since such models can be optimised for, and are relevant to, modelling neurodegenerative diseases and brain tumors. [ 7 ] There is controversy associated with the derivation and use of human embryonic stem cell lines. This controversy stems from the fact that derivation of human embryonic stem cells requires the destruction of a blastocyst-stage, pre-implantation human embryo. There is a wide range of viewpoints regarding the moral consideration that blastocyst-stage human embryos should be given. [ 8 ] [ 9 ] In the United States, Executive Order 13505 established that federal money can be used for research in which approved human embryonic stem-cell (hESC) lines are used, but it cannot be used to derive new lines. [ 10 ] The National Institutes of Health (NIH) Guidelines on Human Stem Cell Research, effective July 7, 2009, implemented Executive Order 13505 by establishing criteria that hESC lines must meet to be approved for funding. [ 11 ] The NIH Human Embryonic Stem Cell Registry can be accessed online and has updated information on cell lines eligible for NIH funding. [ 12 ] There are 486 approved lines as of January 2022.
[ 13 ] Studies have found that approved hESC lines are not used uniformly in the US: data from cell banks and surveys of researchers indicate that only a handful of the available hESC lines are routinely used in research. Access and utility are cited as the two primary factors influencing which hESC lines scientists choose to work with. [ 14 ] A 2011 survey of stem cell scientists in the US who use hESC lines in their research found that 54% of respondents used two or fewer lines and 75% used three or fewer lines. [ 15 ] Another study tracked cell-line requests fulfilled from the largest US repositories, the National Stem Cell Bank (NSCB) and the Harvard Stem Cell Institute (HSCI; Cambridge, MA, USA), for the periods March 1999 – December 2008 (for NSCB) and April 2004 – December 2008 (for HSCI). [ 16 ] For NSCB, out of twenty-one approved cell lines, 77% of requests were for two of the lines (H1 and H9). For HSCI, out of the 17 lines requested more than once, 24.7% of requests were for the two most commonly requested lines.
https://en.wikipedia.org/wiki/Stem-cell_line
The Stem Cell Network (SCN) is a Canadian non-profit that supports stem cell and regenerative medicine research, trains the next generation of highly qualified personnel, and delivers outreach activities across Canada. [ 1 ] [ 2 ] The Network has been supported by the Government of Canada since its inception in 2001. [ 1 ] [ 3 ] SCN has catalyzed 25 clinical trials and 21 start-up companies, incubated several international and Canadian research networks and organizations, and established the Till & McCulloch Meetings, Canada's foremost stem cell research event. [ citation needed ] The organization is based in Ottawa, Ontario . Since 2001, SCN has hosted an annual scientific conference. This conference is open to SCN investigators and trainees, and provides a forum to share new research. The conference takes place in a different Canadian city each year. In 2012, the annual conference was re-branded as the Till & McCulloch Meetings. The establishment of the Meetings ensured that the country's stem cell and regenerative medicine research community would continue to have a venue for collaboration and the sharing of important research. The Till & McCulloch Meetings are Canada's largest stem cell and regenerative medicine conference. The SCN training program includes studentships, fellowships, research grants and workshops. Since 2001, SCN has offered training opportunities to more than 5,000 trainees. [ citation needed ] SCN and its membership engage in collaborative funding and research activities. Current member institutions include: [ 4 ]
https://en.wikipedia.org/wiki/Stem_Cell_Network
Stem Cell Reports is a monthly peer-reviewed open access journal covering research into stem cells . It was established in 2013 and is published exclusively online by Cell Press . It is the official journal of the International Society for Stem Cell Research . The editor-in-chief is Martin Pera ( Jackson Laboratory ). According to the Journal Citation Reports , the journal has a 2020 impact factor of 7.765. [ 1 ] This article about a biology journal is a stub . You can help Wikipedia by expanding it . See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
https://en.wikipedia.org/wiki/Stem_Cell_Reports
Stem Cell Research Enhancement Act was the name of two similar bills that both passed through the United States House of Representatives and Senate , but were both vetoed by President George W. Bush and were not enacted into law. [ citation needed ] The Stem Cell Research Enhancement Act of 2005 ( H.R. 810 ) was the first bill ever vetoed by United States President George W. Bush , more than five years after his inauguration . The bill, which passed both houses of Congress but by less than the two-thirds majority needed to override a veto, would have allowed federal funding of stem cell research on new lines of stem cells derived from discarded human embryos created for fertility treatments. [ 1 ] The bill passed the House of Representatives by a vote of 238 to 194 on May 24, 2005, [ 2 ] then passed the Senate by a vote of 63 to 37 on July 18, 2006. President Bush vetoed the bill on July 19, 2006. The House of Representatives then failed to override the veto (235 to 193) on July 19, 2006. [ 1 ] The Stem Cell Research Enhancement Act of 2007 ( S. 5 ) was proposed federal legislation that would have amended the Public Health Service Act to provide for human embryonic stem cell research . It was similar in content to the vetoed Stem Cell Research Enhancement Act of 2005. The bill passed the Senate on April 11, 2007, by a vote of 63–34, then passed the House on June 7, 2007, by a vote of 247–176. President Bush vetoed the bill on June 19, 2007, [ 3 ] and an override was not attempted. The bill was re-introduced in the 111th Congress. It was introduced in the House by Representative Diana DeGette (D-CO) on February 4, 2009. [ 4 ] A Senate version was introduced by Tom Harkin (D-IA) on February 26, 2009. [ 5 ] The House bill had 113 co-sponsors [ 6 ] and the Senate bill 10 co-sponsors, as of November 20, 2009. [ 7 ]
https://en.wikipedia.org/wiki/Stem_Cell_Research_Enhancement_Act
The Stem Cell Therapeutic and Research Act of 2005 ( Pub. L. 109–129 (text) (PDF) ) is a United States federal law that assigns the United States Secretary of Health and Human Services to create a national stockpile of cord blood stem cells, and rewrites provisions within the Public Health Service Act to account for cord blood and bone marrow donors. The House bill (HR-2520) was introduced on May 23, 2005, by Rep. Chris Smith of New Jersey . On December 20, 2005, the bill was signed by the President after passing through both chambers with unanimous consent. [ 1 ] This United States federal legislation article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Stem_Cell_Therapeutic_and_Research_Act_of_2005
Stem Cell and Regenerative Medicine Cluster ( Lithuanian : Kamieninių ląstelių ir regeneracinės medicinos inovacijų klasteris ) is a business cluster located in Vilnius , Lithuania . Founded in 2011 [ 1 ] the cluster unifies 11 SMEs and state enterprises, including research centers, hospitals, medical centers, and other related institutions. The cluster is currently managed by the Stem Cell Research Center. The cluster engages in fundamental and applied scientific research in the fields of stem cells and regenerative medicine . In addition, members of the cluster carry out clinical research (including pre-clinical and clinical trials ) in related fields. Members of the cluster have the capacity to offer an integrated value chain approach, as facilities available include cGMP , in vivo and in vitro testing, and pre-clinical and clinical research.
https://en.wikipedia.org/wiki/Stem_Cell_and_Regenerative_Medicine_Cluster
Stem Cells and Development is a biweekly peer-reviewed scientific journal covering cell biology , with a specific focus on biomedical applications of stem cells . It was established in 1992 as the Journal of Hematotherapy , and was renamed the Journal of Hematotherapy & Stem Cell Research in 1999. The journal obtained its current name in 2004. [ 1 ] It is published by Mary Ann Liebert, Inc. and the editor-in-chief is Graham C. Parker ( Wayne State University School of Medicine ). According to the Journal Citation Reports , the journal has a 2018 impact factor of 3.147. [ 2 ] This article about a molecular and cell biology journal is a stub . You can help Wikipedia by expanding it . See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
https://en.wikipedia.org/wiki/Stem_Cells_and_Development