The function composition operation satisfies the axioms of a group. It is associative, meaning (στ)ρ = σ(τρ), and products of more than two permutations are usually written without parentheses. The composition operation also has an identity element (the identity permutation id), and each permutation σ has an inverse (its inverse function σ−1) with σσ−1 = σ−1σ = id. Other uses of the term permutation The concept of a permutation as an ordered arrangement admits several generalizations that have been called permutations, especially in older literature. k-permutations of n In older literature and elementary textbooks, a k-permutation of n (sometimes called a partial permutation, sequence without repetition, variation, or arrangement) means an ordered arrangement (list) of a k-element subset of an n-set. The number of such k-permutations (k-arrangements) of n is denoted variously by such symbols as P(n, k), nPk or A(n, k), computed by the formula: P(n, k) = n(n − 1)(n − 2) ⋯ (n − k + 1), which is 0 when k > n, and otherwise is equal to n!/(n − k)!. The product is well defined without the assumption that n is a non-negative integer, and is of importance outside combinatorics as well; it is known as the Pochhammer symbol or as the k-th falling factorial power of n. This usage of the term permutation is closely associated with the term combination to mean a subset. A k-combination of a set S is a k-element subset of S: the elements of a combination are not ordered. Ordering the k-combinations of S in all possible ways produces the k-permutations of S. The number of k-combinations of an n-set, C(n,k), is therefore related to the number of k-permutations of n by: C(n, k) = P(n, k)/k!. These numbers are also known as binomial coefficients, usually denoted "n choose k": C(n, k) = n!/(k!(n − k)!). Permutations with repetition Ordered arrangements of k elements of a set S, where repetition is allowed, are called k-tuples. They have sometimes been referred to as permutations with repetition, although they are not permutations in the usual sense. They are also called words or strings over the alphabet S. If the set S has n elements, the number of k-tuples over S is n^k. Permutations of multisets
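Returning to the counting formulas for k-permutations and combinations above, a quick Python check may help; it is only a sketch, and the function names are mine rather than standard terminology from the article.

    from math import comb, factorial, perm

    def k_permutations(n: int, k: int) -> int:
        """Number of k-permutations of an n-set: n(n-1)...(n-k+1)."""
        if k > n:
            return 0
        return factorial(n) // factorial(n - k)

    def k_combinations(n: int, k: int) -> int:
        """Number of k-element subsets of an n-set."""
        return k_permutations(n, k) // factorial(k)

    n, k = 10, 3
    assert k_permutations(n, k) == perm(n, k) == 720   # 10 * 9 * 8
    assert k_combinations(n, k) == comb(n, k) == 120   # 720 / 3!
    assert len("abc") ** k == 27                       # k-tuples over a 3-letter alphabet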
If M is a finite multiset, then a multiset permutation is an ordered arrangement of elements of M in which each element appears a number of times equal exactly to its multiplicity in M. An anagram of a word having some repeated letters is an example of a multiset permutation. If the multiplicities of the elements of M (taken in some order) are m1, m2, ..., ml and their sum (that is, the size of M) is n, then the number of multiset permutations of M is given by the multinomial coefficient n!/(m1! m2! ⋯ ml!). For example, the number of distinct anagrams of the word MISSISSIPPI is: 11!/(1! 4! 4! 2!) = 34,650. A k-permutation of a multiset M is a sequence of k elements of M in which each element appears a number of times less than or equal to its multiplicity in M (an element's repetition number). Circular permutations Permutations, when considered as arrangements, are sometimes referred to as linearly ordered arrangements. If, however, the objects are arranged in a circular manner this distinguished ordering is weakened: there is no "first element" in the arrangement, as any element can be considered as the start. An arrangement of distinct objects in a circular manner is called a circular permutation. These can be formally defined as equivalence classes of ordinary permutations of these objects, for the equivalence relation generated by moving the final element of the linear arrangement to its front. Two circular permutations are equivalent if one can be rotated into the other. The following four circular permutations on four letters are considered to be the same. 1 4 2 3 4 3 2 1 3 4 1 2 2 3 1 4 The circular arrangements are to be read counter-clockwise, so the following two are not equivalent since no rotation can bring one to the other. 1 1 4 3 3 4 2 2 There are (n – 1)! circular permutations of a set with n elements. Properties The number of permutations of n distinct objects is n!. The number of n-permutations with k disjoint cycles is the signless Stirling number of the first kind, denoted c(n, k). Cycle type
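Returning to the multiset-permutation count above, a small Python check (the helper name is mine) reproduces the MISSISSIPPI figure directly from the letter multiplicities.

    from collections import Counter
    from math import factorial

    def multiset_permutations_count(word: str) -> int:
        """n! divided by the product of the factorials of the letter multiplicities."""
        counts = Counter(word)
        total = factorial(len(word))
        for m in counts.values():
            total //= factorial(m)
        return total

    # MISSISSIPPI: 11 letters with multiplicities M=1, I=4, S=4, P=2
    assert multiset_permutations_count("MISSISSIPPI") == 34650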
The cycles (including the fixed points) of a permutation σ of a set with n elements partition that set; so the lengths of these cycles form an integer partition of n, which is called the cycle type (or sometimes cycle structure or cycle shape) of σ. There is a "1" in the cycle type for every fixed point of σ, a "2" for every transposition, and so on. For example, a permutation of 8 elements consisting of a 3-cycle, a 2-cycle and three fixed points has cycle type (3, 2, 1, 1, 1). This may also be written in a more compact form as [1^3 2^1 3^1]. More precisely, the general form is [1^α1 2^α2 ⋯ n^αn], where α1, α2, ..., αn are the numbers of cycles of respective length. The number of permutations of a given cycle type is n! / (1^α1 α1! 2^α2 α2! ⋯ n^αn αn!). The number of cycle types of a set with n elements equals the value of the partition function p(n). Polya's cycle index polynomial is a generating function which counts permutations by their cycle type. Conjugating permutations In general, composing permutations written in cycle notation follows no easily described pattern – the cycles of the composition can be different from those being composed. However the cycle type is preserved in the special case of conjugating a permutation σ by another permutation π, which means forming the product πσπ−1. Here, πσπ−1 is the conjugate of σ by π and its cycle notation can be obtained by taking the cycle notation for σ and applying π to all the entries in it. It follows that two permutations are conjugate exactly when they have the same cycle type. Order of a permutation The order of a permutation σ is the smallest positive integer m so that σ^m = id. It is the least common multiple of the lengths of its cycles. For example, the order of a permutation with cycle lengths 2 and 3 is lcm(2, 3) = 6. Parity of a permutation Every permutation of a finite set can be expressed as the product of transpositions. Although many such expressions for a given permutation may exist, either they all contain an even number of transpositions or they all contain an odd number of transpositions. Thus all permutations can be classified as even or odd depending on this number. This result can be extended so as to assign a sign, written sgn(σ), to each permutation. sgn(σ) = +1 if σ is even and sgn(σ) = −1 if σ is odd. Then for two permutations σ and π, sgn(σπ) = sgn(σ) sgn(π). It follows that sgn(σ−1) = sgn(σ). The sign of a permutation is equal to the determinant of its permutation matrix (below). Matrix representation
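Returning to cycle type, order, and parity as defined above, the following Python sketch (helper names are mine) extracts the cycles of a permutation given in one-line notation and derives all three quantities.

    from math import lcm, prod

    def cycles(perm):
        """Cycle decomposition of a permutation given as a tuple p with p[i] = sigma(i+1)."""
        seen, result = set(), []
        for start in range(len(perm)):
            if start in seen:
                continue
            cycle, i = [], start
            while i not in seen:
                seen.add(i)
                cycle.append(i + 1)
                i = perm[i] - 1
            result.append(cycle)
        return result

    sigma = (2, 3, 1, 5, 4)          # one-line notation for (1 2 3)(4 5)
    cs = cycles(sigma)
    cycle_type = sorted((len(c) for c in cs), reverse=True)
    order = lcm(*(len(c) for c in cs))
    sign = prod((-1) ** (len(c) - 1) for c in cs)

    assert cycle_type == [3, 2]
    assert order == 6
    assert sign == -1                # a 3-cycle (even) times a transposition (odd) is odd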
A permutation matrix is an n × n matrix that has exactly one entry 1 in each column and in each row, and all other entries are 0. There are several ways to assign a permutation matrix to a permutation of {1, 2, ..., n}. One natural approach is to define Lσ to be the linear transformation of R^n which permutes the standard basis vectors e1, ..., en by Lσ(ej) = eσ(j), and define Mσ to be its matrix. That is, Mσ has its jth column equal to the n × 1 column vector eσ(j): its (i, j) entry is equal to 1 if i = σ(j), and 0 otherwise. Since composition of linear mappings is described by matrix multiplication, it follows that this construction is compatible with composition of permutations: Mσπ = Mσ Mπ. For example, two one-line permutations and their product, together with the corresponding matrices, illustrate this identity (a concrete example appears in the sketch after this passage). It is also common in the literature to find the inverse convention, where a permutation σ is associated to the matrix whose (i, j) entry is 1 if j = σ(i) and is 0 otherwise. In this convention, permutation matrices multiply in the opposite order from permutations, that is, Mσπ = Mπ Mσ. In this correspondence, permutation matrices act on the right side of the standard 1 × n row vectors ei: ei Mσ = eσ(i). A Cayley table shows these matrices for permutations of 3 elements. Permutations of totally ordered sets In some applications, the elements of the set being permuted will be compared with each other. This requires that the set S has a total order so that any two elements can be compared. The set {1, 2, ..., n} with the usual ≤ relation is the most frequently used set in these applications. A number of properties of a permutation are directly related to the total ordering of S, considering the permutation written in one-line notation as a sequence σ(1) σ(2) ⋯ σ(n). Ascents, descents, runs, exceedances, records An ascent of a permutation σ of n is any position i < n where the following value is bigger than the current one. That is, i is an ascent if σ(i) < σ(i + 1). For example, the permutation 3452167 has ascents (at positions) 1, 2, 5, and 6. Similarly, a descent is a position i < n with σ(i) > σ(i + 1), so every i with 1 ≤ i < n is either an ascent or a descent.
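For the column convention described earlier in this passage, here is a brief numpy sketch (variable names are mine): it builds Mσ with a 1 in row σ(j) of column j and checks that matrix multiplication matches composition of permutations.

    import numpy as np

    def perm_matrix(sigma):
        """Column-convention permutation matrix: entry (i, j) is 1 iff i = sigma(j), 1-based."""
        n = len(sigma)
        m = np.zeros((n, n), dtype=int)
        for j, image in enumerate(sigma):
            m[image - 1, j] = 1
        return m

    sigma = (2, 1, 3)                               # one-line notation
    pi = (2, 3, 1)
    sigma_pi = tuple(sigma[p - 1] for p in pi)      # (sigma ∘ pi)(j) = sigma(pi(j)) -> (1, 3, 2)

    assert np.array_equal(perm_matrix(sigma) @ perm_matrix(pi), perm_matrix(sigma_pi))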
An ascending run of a permutation is a nonempty increasing contiguous subsequence that cannot be extended at either end; it corresponds to a maximal sequence of successive ascents (the latter may be empty: between two successive descents there is still an ascending run of length 1). By contrast an increasing subsequence of a permutation is not necessarily contiguous: it is an increasing sequence obtained by omitting some of the values of the one-line notation. For example, the permutation 2453167 has the ascending runs 245, 3, and 167, while it has an increasing subsequence 2367. If a permutation has k − 1 descents, then it must be the union of k ascending runs. The number of permutations of n with k ascents is (by definition) the Eulerian number A(n, k); this is also the number of permutations of n with k descents. Some authors however define the Eulerian number as the number of permutations with k ascending runs, which corresponds to k − 1 descents. An exceedance of a permutation σ1σ2...σn is an index j such that σj > j. If the inequality is not strict (that is, σj ≥ j), then j is called a weak exceedance. The number of n-permutations with k exceedances coincides with the number of n-permutations with k descents. A record or left-to-right maximum of a permutation σ is an element i such that σ(j) < σ(i) for all j < i. Foata's transition lemma Foata's fundamental bijection transforms a permutation σ with a given canonical cycle form into the permutation f(σ) whose one-line notation has the same sequence of elements with parentheses removed. For example, writing σ in canonical cycle form and erasing the parentheses yields f(σ); here the first element in each canonical cycle of σ becomes a record (left-to-right maximum) of f(σ). Given f(σ), one may find its records and insert parentheses to construct the inverse transformation. Underlining the records in such an example allows the reconstruction of the cycles of σ. The following table shows σ and f(σ) for the six permutations of S = {1, 2, 3}, with the bold text on each side showing the notation used in the bijection: one-line notation for f(σ) and canonical cycle notation for σ.
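Returning to ascents, descents, and ascending runs, a short Python sketch (function names are mine) reproduces the examples used above.

    def ascents(p):
        """1-based positions i with p[i-1] < p[i]."""
        return [i for i in range(1, len(p)) if p[i - 1] < p[i]]

    def descents(p):
        return [i for i in range(1, len(p)) if p[i - 1] > p[i]]

    def ascending_runs(p):
        """Maximal increasing contiguous blocks of the one-line notation."""
        runs, current = [], [p[0]]
        for prev, cur in zip(p, p[1:]):
            if cur > prev:
                current.append(cur)
            else:
                runs.append(current)
                current = [cur]
        runs.append(current)
        return runs

    assert ascents([3, 4, 5, 2, 1, 6, 7]) == [1, 2, 5, 6]
    assert descents([3, 4, 5, 2, 1, 6, 7]) == [3, 4]
    assert ascending_runs([2, 4, 5, 3, 1, 6, 7]) == [[2, 4, 5], [3], [1, 6, 7]]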
As a first corollary, the number of n-permutations with exactly k records is equal to the number of n-permutations with exactly k cycles: this last number is the signless Stirling number of the first kind, c(n, k). Furthermore, Foata's mapping takes an n-permutation with k weak exceedances to an n-permutation with k − 1 ascents. For example, (2)(31) = 321 has k = 2 weak exceedances (at index 1 and 2), whereas f(321) = 231 has k − 1 = 1 ascent (at index 1; that is, from 2 to 3). Inversions An inversion of a permutation σ is a pair (i, j) of positions where the entries of a permutation are in the opposite order: i < j and σ(i) > σ(j). Thus a descent is an inversion at two adjacent positions. For example, σ = 23154 has the inversions (i, j) = (1, 3), (2, 3), and (4, 5), where (σ(i), σ(j)) = (2, 1), (3, 1), and (5, 4). Sometimes an inversion is defined as the pair of values (σ(i), σ(j)); this makes no difference for the number of inversions, and the reverse pair (σ(j), σ(i)) is an inversion in the above sense for the inverse permutation σ−1.
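A small Python sketch (names are mine) that lists inversions and records, reproducing the inversion example above.

    def inversions(p):
        """All position pairs (i, j), 1-based, with i < j and p[i-1] > p[j-1]."""
        n = len(p)
        return [(i + 1, j + 1) for i in range(n) for j in range(i + 1, n) if p[i] > p[j]]

    def records(p):
        """Left-to-right maxima: positions whose value exceeds everything before them."""
        best, out = 0, []
        for i, v in enumerate(p, start=1):
            if v > best:
                out.append(i)
                best = v
        return out

    assert inversions([2, 3, 1, 5, 4]) == [(1, 3), (2, 3), (4, 5)]
    assert records([2, 3, 1, 5, 4]) == [1, 2, 4]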
The number of inversions is an important measure for the degree to which the entries of a permutation are out of order; it is the same for σ and for σ−1. To bring a permutation with k inversions into order (that is, transform it into the identity permutation), by successively applying (right-multiplication by) adjacent transpositions, is always possible and requires a sequence of k such operations. Moreover, any reasonable choice for the adjacent transpositions will work: it suffices to choose at each step a transposition of i and i + 1 where i is a descent of the permutation as modified so far (so that the transposition will remove this particular descent, although it might create other descents). This is so because applying such a transposition reduces the number of inversions by 1; as long as this number is not zero, the permutation is not the identity, so it has at least one descent. Bubble sort and insertion sort can be interpreted as particular instances of this procedure to put a sequence into order. Incidentally this procedure proves that any permutation σ can be written as a product of adjacent transpositions; for this one may simply reverse any sequence of such transpositions that transforms σ into the identity. In fact, by enumerating all sequences of adjacent transpositions that would transform σ into the identity, one obtains (after reversal) a complete list of all expressions of minimal length writing σ as a product of adjacent transpositions. The number of permutations of n with k inversions is expressed by a Mahonian number. This is the coefficient of q^k in the expansion of the product 1 (1 + q) (1 + q + q^2) ⋯ (1 + q + q^2 + ⋯ + q^(n−1)). The notation [n]q! denotes the q-factorial. This expansion commonly appears in the study of necklaces. Kobayashi (2011) proved an enumeration formula for weighted inversions in terms of the Bruhat order in the symmetric groups. This graded partial order often appears in the context of Coxeter groups. Permutations in computing Numbering permutations
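Returning to the Mahonian numbers above, here is a hedged Python sketch (names are mine) that expands the product term by term and checks the result against brute-force inversion counts for n = 4.

    from itertools import permutations

    def mahonian(n):
        """Coefficients of q^k in 1 (1+q) (1+q+q^2) ... (1+...+q^(n-1))."""
        coeffs = [1]
        for m in range(2, n + 1):
            factor = [1] * m                      # 1 + q + ... + q^(m-1)
            new = [0] * (len(coeffs) + m - 1)
            for i, a in enumerate(coeffs):
                for j, b in enumerate(factor):
                    new[i + j] += a * b
            coeffs = new
        return coeffs

    def inversion_count(p):
        return sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))

    counts = [0] * 7                              # max number of inversions for n = 4 is 6
    for p in permutations(range(1, 5)):
        counts[inversion_count(p)] += 1

    assert mahonian(4) == counts == [1, 3, 5, 6, 5, 3, 1]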
One way to represent permutations of n things is by an integer N with 0 ≤ N < n!, provided convenient methods are given to convert between the number and the representation of a permutation as an ordered arrangement (sequence). This gives the most compact representation of arbitrary permutations, and in computing is particularly attractive when n is small enough that N can be held in a machine word; for 32-bit words this means n ≤ 12, and for 64-bit words this means n ≤ 20. The conversion can be done via the intermediate form of a sequence of numbers dn, dn−1, ..., d2, d1, where di is a non-negative integer less than i (one may omit d1, as it is always 0, but its presence makes the subsequent conversion to a permutation easier to describe). The first step then is to simply express N in the factorial number system, which is just a particular mixed radix representation, where, for numbers less than n!, the bases (place values or multiplication factors) for successive digits are (n − 1)!, (n − 2)!, ..., 2!, 1!. The second step interprets this sequence as a Lehmer code or (almost equivalently) as an inversion table. In the Lehmer code for a permutation σ, the number dn represents the choice made for the first term σ1, the number dn−1 represents the choice made for the second term σ2 among the remaining elements of the set, and so forth. More precisely, each dn+1−i gives the number of remaining elements strictly less than the term σi. Since those remaining elements are bound to turn up as some later term σj, the digit dn+1−i counts the inversions (i,j) involving i as smaller index (the number of values j for which i < j and σi > σj). The inversion table for σ is quite similar, but here dn+1−k counts the number of inversions (i,j) where k = σj occurs as the smaller of the two values appearing in inverted order.
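A Python sketch of the factorial-number-system step described above (the function name is mine): it expresses N < n! as digits dn, ..., d2, d1 with 0 ≤ di < i.

    def to_factorial_base(N, n):
        """Digits d_n, ..., d_2, d_1 of N in the factorial number system (0 <= d_i < i)."""
        digits = []
        for radix in range(1, n + 1):     # produce d_1 first, then d_2, ...
            digits.append(N % radix)
            N //= radix
        return digits[::-1]               # most significant digit (d_n) first

    # 463 = 3*5! + 4*4! + 1*3! + 0*2! + 1*1!  ->  digits 3, 4, 1, 0, 1, 0
    assert to_factorial_base(463, 6) == [3, 4, 1, 0, 1, 0]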
Both encodings can be visualized by an n by n Rothe diagram (named after Heinrich August Rothe) in which dots at (i,σi) mark the entries of the permutation, and a cross at (i,σj) marks the inversion (i,j); by the definition of inversions a cross appears in any square that comes both before the dot (j,σj) in its column, and before the dot (i,σi) in its row. The Lehmer code lists the numbers of crosses in successive rows, while the inversion table lists the numbers of crosses in successive columns; it is just the Lehmer code for the inverse permutation, and vice versa. To effectively convert a Lehmer code dn, dn−1, ..., d2, d1 into a permutation of an ordered set S, one can start with a list of the elements of S in increasing order, and for i increasing from 1 to n set σi to the element in the list that is preceded by dn+1−i other ones, and remove that element from the list. To convert an inversion table dn, dn−1, ..., d2, d1 into the corresponding permutation, one can traverse the numbers from d1 to dn while inserting the elements of S from largest to smallest into an initially empty sequence; at the step using the number d from the inversion table, the element from S is inserted into the sequence at the point where it is preceded by d elements already present. Alternatively one could process the numbers from the inversion table and the elements of S both in the opposite order, starting with a row of n empty slots, and at each step place the element from S into the empty slot that is preceded by d other empty slots.
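A Python sketch of the Lehmer-code decoding procedure just described (names are mine): for each position, pick the element of the remaining list that is preceded by the given number of smaller remaining elements.

    def lehmer_to_permutation(code, s):
        """Decode Lehmer code d_n, ..., d_1 (most significant digit first) over sorted list s."""
        remaining = sorted(s)
        result = []
        for d in code:
            # the chosen element is the one preceded by d others in the remaining list
            result.append(remaining.pop(d))
        return result

    assert lehmer_to_permutation([3, 4, 1, 0, 1, 0], [1, 2, 3, 4, 5, 6]) == [4, 6, 2, 1, 5, 3]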
Converting successive natural numbers to the factorial number system produces those sequences in lexicographic order (as is the case with any mixed radix number system), and further converting them to permutations preserves the lexicographic ordering, provided the Lehmer code interpretation is used (using inversion tables, one gets a different ordering, where one starts by comparing permutations by the place of their entries 1 rather than by the value of their first entries). The sum of the numbers in the factorial number system representation gives the number of inversions of the permutation, and the parity of that sum gives the signature of the permutation. Moreover, the positions of the zeroes in the inversion table give the values of the left-to-right maxima of the permutation (in the example, 6, 8, 9) while the positions of the zeroes in the Lehmer code are the positions of the right-to-left minima (in the example, positions 4, 8, 9 of the values 1, 2, 5); this allows computing the distribution of such extrema among all permutations. A permutation with Lehmer code dn, dn−1, ..., d2, d1 has an ascent at position i if and only if dn+1−i ≤ dn−i. Algorithms to generate permutations In computing it may be required to generate permutations of a given sequence of values. The methods best adapted to do this depend on whether one wants some randomly chosen permutations, or all permutations, and in the latter case if a specific ordering is required. Another question is whether possible equality among entries in the given sequence is to be taken into account; if so, one should only generate distinct multiset permutations of the sequence.
An obvious way to generate permutations of n is to generate values for the Lehmer code (possibly using the factorial number system representation of integers up to n!), and convert those into the corresponding permutations. However, the latter step, while straightforward, is hard to implement efficiently, because it requires n operations each of selection from a sequence and deletion from it, at an arbitrary position; of the obvious representations of the sequence as an array or a linked list, both require (for different reasons) about n²/4 operations to perform the conversion. With n likely to be rather small (especially if generation of all permutations is needed) that is not too much of a problem, but it turns out that both for random and for systematic generation there are simple alternatives that do considerably better. For this reason it does not seem useful, although certainly possible, to employ a special data structure that would allow performing the conversion from Lehmer code to permutation in O(n log n) time. Random generation of permutations For generating random permutations of a given sequence of n values, it makes no difference whether one applies a randomly selected permutation of n to the sequence, or chooses a random element from the set of distinct (multiset) permutations of the sequence. This is because, even though in case of repeated values there can be many distinct permutations of n that result in the same permuted sequence, the number of such permutations is the same for each possible result. Unlike for systematic generation, which becomes unfeasible for large n due to the growth of the number n!, there is no reason to assume that n will be small for random generation.
The basic idea to generate a random permutation is to generate at random one of the n! sequences of integers d1,d2,...,dn satisfying 0 ≤ di < i (since d1 is always zero it may be omitted) and to convert it to a permutation through a bijective correspondence. For the latter correspondence one could interpret the (reverse) sequence as a Lehmer code, and this gives a generation method first published in 1938 by Ronald Fisher and Frank Yates. While at the time computer implementation was not an issue, this method suffers from the difficulty sketched above to convert from Lehmer code to permutation efficiently. This can be remedied by using a different bijective correspondence: after using di to select an element among i remaining elements of the sequence (for decreasing values of i), rather than removing the element and compacting the sequence by shifting down further elements one place, one swaps the element with the final remaining element. Thus the elements remaining for selection form a consecutive range at each point in time, even though they may not occur in the same order as they did in the original sequence. The mapping from sequence of integers to permutations is somewhat complicated, but it can be seen to produce each permutation in exactly one way, by an immediate induction. When the selected element happens to be the final remaining element, the swap operation can be omitted. This does not occur sufficiently often to warrant testing for the condition, but the final element must be included among the candidates of the selection, to guarantee that all permutations can be generated. The resulting algorithm for generating a random permutation of a[0], a[1], ..., a[n − 1] can be described as follows in pseudocode:

    for i from n downto 2 do
        di ← random element of { 0, ..., i − 1 }
        swap a[di] and a[i − 1]

This can be combined with the initialization of the array a[i] = i as follows:

    for i from 0 to n−1 do
        di+1 ← random element of { 0, ..., i }
        a[i] ← a[di+1]
        a[di+1] ← i

If di+1 = i, the first assignment will copy an uninitialized value, but the second will overwrite it with the correct value i.
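A runnable Python version of the swap-based method above (a sketch; the helper name is mine), using the standard library's random module.

    import random

    def fisher_yates_shuffle(a):
        """In-place uniform shuffle: swap each position with a random not-yet-fixed position."""
        for i in range(len(a), 1, -1):       # i = n, n-1, ..., 2
            d = random.randrange(i)          # random element of {0, ..., i-1}
            a[d], a[i - 1] = a[i - 1], a[d]
        return a

    print(fisher_yates_shuffle(list(range(10))))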
However, Fisher-Yates is not the fastest algorithm for generating a permutation, because Fisher-Yates is essentially a sequential algorithm and "divide and conquer" procedures can achieve the same result in parallel. Generation in lexicographic order There are many ways to systematically generate all permutations of a given sequence. One classic, simple, and flexible algorithm is based upon finding the next permutation in lexicographic ordering, if it exists. It can handle repeated values, for which case it generates each distinct multiset permutation once. Even for ordinary permutations it is significantly more efficient than generating values for the Lehmer code in lexicographic order (possibly using the factorial number system) and converting those to permutations. It begins by sorting the sequence in (weakly) increasing order (which gives its lexicographically minimal permutation), and then repeats advancing to the next permutation as long as one is found. The method goes back to Narayana Pandita in 14th century India, and has been rediscovered frequently. The following algorithm generates the next permutation lexicographically after a given permutation. It changes the given permutation in-place.
Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
Find the largest index l greater than k such that a[k] < a[l].
Swap the value of a[k] with that of a[l].
Reverse the sequence from a[k + 1] up to and including the final element a[n].

For example, given the sequence [1, 2, 3, 4] (which is in increasing order), and given that the index is zero-based, the steps are as follows: Index k = 2, because a[2] = 3 is the entry at the largest index k satisfying a[k] < a[k + 1] (here a[3] = 4). Index l = 3, because 4 is the only value in the sequence that is greater than 3, so it satisfies the condition a[k] < a[l]. The values of a[2] and a[3] are swapped to form the new sequence [1, 2, 4, 3]. The sequence after k-index a[2] to the final element is reversed. Because only one value lies after this index (the 3), the sequence remains unchanged in this instance. Thus the lexicographic successor of the initial state is permuted: [1, 2, 4, 3]. Following this algorithm, the next lexicographic permutation will be [1, 3, 2, 4], and the 24th permutation will be [4, 3, 2, 1] at which point a[k] < a[k + 1] does not exist, indicating that this is the last permutation. This method uses about 3 comparisons and 1.5 swaps per permutation, amortized over the whole sequence, not counting the initial sort. Generation with minimal changes
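Returning to the four-step next-permutation procedure above, here is a minimal Python sketch (the function name is mine) that performs the same in-place update and returns False when the last permutation has been reached.

    def next_permutation(a):
        """Advance list a to its lexicographic successor in place; return False if it was the last."""
        k = len(a) - 2
        while k >= 0 and a[k] >= a[k + 1]:
            k -= 1                          # largest k with a[k] < a[k+1]
        if k < 0:
            return False                    # a is weakly decreasing: last permutation
        l = len(a) - 1
        while a[l] <= a[k]:
            l -= 1                          # largest l > k with a[k] < a[l]
        a[k], a[l] = a[l], a[k]
        a[k + 1:] = reversed(a[k + 1:])
        return True

    seq = [1, 2, 3, 4]
    next_permutation(seq)
    assert seq == [1, 2, 4, 3]
    next_permutation(seq)
    assert seq == [1, 3, 2, 4]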
An alternative to the above algorithm, the Steinhaus–Johnson–Trotter algorithm, generates an ordering on all the permutations of a given sequence with the property that any two consecutive permutations in its output differ by swapping two adjacent values. This ordering on the permutations was known to 17th-century English bell ringers, among whom it was known as "plain changes". One advantage of this method is that the small amount of change from one permutation to the next allows the method to be implemented in constant time per permutation. The same method can also easily generate the subset of even permutations, again in constant time per permutation, by skipping every other output permutation. An alternative to Steinhaus–Johnson–Trotter is Heap's algorithm, said by Robert Sedgewick in 1977 to be the fastest algorithm for generating permutations in applications. The following figure shows the output of all three aforementioned algorithms for generating all permutations of a given length, and of six additional algorithms described in the literature. Lexicographic ordering; Steinhaus–Johnson–Trotter algorithm; Heap's algorithm; Ehrlich's star-transposition algorithm: in each step, the first entry of the permutation is exchanged with a later entry; Zaks' prefix reversal algorithm: in each step, a prefix of the current permutation is reversed to obtain the next permutation; Sawada–Williams' algorithm: each permutation differs from the previous one either by a cyclic left-shift by one position, or an exchange of the first two entries; Corbett's algorithm: each permutation differs from the previous one by a cyclic left-shift of some prefix by one position; Single-track ordering: each column is a cyclic shift of the other columns; Single-track Gray code: each column is a cyclic shift of the other columns, plus any two consecutive permutations differ only in one or two transpositions. Nested swaps generating algorithm, given in steps connected to the nested subgroups S1 ⊂ S2 ⊂ ⋯ ⊂ Sn: each permutation is obtained from the previous one by a transposition multiplication to the left. The algorithm is connected to the factorial number system of the index.
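As an illustration of the minimal-change idea, here is a short recursive Python sketch of Heap's algorithm mentioned above (the function name is mine); consecutive outputs differ by a single transposition.

    def heaps(a, k=None):
        """Yield all permutations of list a; each differs from the previous one by one swap."""
        if k is None:
            k = len(a)
        if k == 1:
            yield tuple(a)
            return
        for i in range(k):
            yield from heaps(a, k - 1)
            if i < k - 1:
                # swap choice depends on the parity of k, as in Heap's original formulation
                j = 0 if k % 2 else i
                a[j], a[k - 1] = a[k - 1], a[j]

    perms = list(heaps([1, 2, 3]))
    assert len(perms) == 6 and len(set(perms)) == 6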
Generation of permutations in nested swap steps An explicit sequence of swaps (transpositions, 2-cycles) is described here, each swap applied (on the left) to the previous chain providing a new permutation, such that all the permutations can be retrieved, each only once. This counting/generating procedure has an additional structure (call it nested), as it is given in steps: after completely retrieving the subgroup Sk, continue retrieving Sk+1 by cosets of Sk in Sk+1, by appropriately choosing the coset representatives to be described below. Since each Sk is sequentially generated, there is a last element . So, after generating Sk by swaps, the next permutation in Sk+1 has to be for some . Then all swaps that generated Sk are repeated, generating the whole coset, reaching the last permutation in that coset; the next swap has to move the permutation to a representative of another coset. Continuing the same way, one gets coset representatives for the cosets of Sk in Sk+1; the ordered set of these representatives is called the set of coset beginnings. Two of these representatives are in the same coset if and only if , that is, . Concluding, permutations are all representatives of distinct cosets if and only if for any , (no repeat condition). In particular, for all generated permutations to be distinct it is not necessary for the values to be distinct. In the process, one gets a relation that provides the recursion procedure. EXAMPLES: obviously, for one has ; to build there are only two possibilities for the coset beginnings satisfying the no repeat condition; the choice leads to . To continue generating one needs appropriate coset beginnings (satisfying the no repeat condition): there is a convenient choice: , leading to . Then, to build a convenient choice for the coset beginnings (satisfying the no repeat condition) is , leading to . From examples above one can inductively go to higher k in a similar way, choosing coset beginnings of Sk in Sk+1, as follows: for even k choosing all coset beginnings equal to 1 and for odd k choosing coset beginnings equal to . With such choices the "last" permutation is for odd k and for even k. Using these explicit formulae one can easily compute the permutation of a certain index in the counting/generation steps with minimum computation. For this, writing the index in factorial base is useful. For example, the permutation for a given index is obtained by writing the index in factorial base and applying the corresponding swaps, yielding the permutation directly.
Because multiplying by a swap permutation takes little computing time and every newly generated permutation requires only one such swap multiplication, this generation procedure is quite efficient. Moreover, since there is a simple formula for the last permutation in each subgroup, one can go directly to a permutation with a certain index in fewer steps than expected, as it can be done in blocks of subgroups rather than swap by swap. Applications Permutations are used in the interleaver component of error detection and correction algorithms, such as turbo codes; for example, the 3GPP Long Term Evolution mobile telecommunication standard uses these ideas (see 3GPP technical specification 36.212). Such applications raise the question of fast generation of permutations satisfying certain desirable properties. One of the methods is based on permutation polynomials. Permutations also serve as a basis for optimal hashing in Unique Permutation Hashing.
Solvation
Solvation describes the interaction of a solvent with dissolved molecules. Both ionized and uncharged molecules interact strongly with a solvent, and the strength and nature of this interaction influence many properties of the solute, including solubility, reactivity, and color, as well as influencing the properties of the solvent such as its viscosity and density. If the attractive forces between the solvent and solute particles are greater than the attractive forces holding the solute particles together, the solvent particles pull the solute particles apart and surround them. The surrounded solute particles then move away from the solid solute and out into the solution. Ions are surrounded by a concentric shell of solvent. Solvation is the process of reorganizing solvent and solute molecules into solvation complexes and involves bond formation, hydrogen bonding, and van der Waals forces. Solvation of a solute by water is called hydration. Solubility of solid compounds depends on a competition between lattice energy and solvation, including entropy effects related to changes in the solvent structure. Distinction from solubility By an IUPAC definition, solvation is an interaction of a solute with the solvent, which leads to stabilization of the solute species in the solution. In the solvated state, an ion or molecule in a solution is surrounded or complexed by solvent molecules. Solvated species can often be described by coordination number, and the complex stability constants. The concept of the solvation interaction can also be applied to an insoluble material, for example, solvation of functional groups on a surface of ion-exchange resin. Solvation is, in concept, distinct from solubility. Solvation or dissolution is a kinetic process and is quantified by its rate. Solubility quantifies the dynamic equilibrium state achieved when the rate of dissolution equals the rate of precipitation. The consideration of the units makes the distinction clearer. The typical unit for dissolution rate is mol/s. The units for solubility express a concentration: mass per volume (mg/mL), molarity (mol/L), etc. Solvents and intermolecular interactions Solvation involves different types of intermolecular interactions: Hydrogen bonding Ion–dipole interactions The van der Waals forces, which consist of dipole–dipole, dipole–induced dipole, and induced dipole–induced dipole interactions.
Which of these forces are at play depends on the molecular structure and properties of the solvent and solute. The similarity or complementary character of these properties between solvent and solute determines how well a solute can be solvated by a particular solvent. Solvent polarity is the most important factor in determining how well it solvates a particular solute. Polar solvents have molecular dipoles, meaning that part of the solvent molecule has more electron density than another part of the molecule. The part with more electron density will experience a partial negative charge while the part with less electron density will experience a partial positive charge. Polar solvent molecules can solvate polar solutes and ions because they can orient the appropriate partially charged portion of the molecule towards the solute through electrostatic attraction. This stabilizes the system and creates a solvation shell (or hydration shell in the case of water) around each particle of solute. The solvent molecules in the immediate vicinity of a solute particle often have a much different ordering than the rest of the solvent, and this area of differently ordered solvent molecules is called the cybotactic region. Water is the most common and well-studied polar solvent, but others exist, such as ethanol, methanol, acetone, acetonitrile, and dimethyl sulfoxide. Polar solvents are often found to have a high dielectric constant, although other solvent scales are also used to classify solvent polarity. Polar solvents can be used to dissolve inorganic or ionic compounds such as salts. The conductivity of a solution depends on the solvation of its ions. Nonpolar solvents cannot solvate ions, and ions will be found as ion pairs.
Hydrogen bonding among solvent and solute molecules depends on the ability of each to accept H-bonds, donate H-bonds, or both. Solvents that can donate H-bonds are referred to as protic, while solvents that do not contain a polarized bond to a hydrogen atom and cannot donate a hydrogen bond are called aprotic. H-bond donor ability is classified on a scale (α). Protic solvents can solvate solutes that can accept hydrogen bonds. Similarly, solvents that can accept a hydrogen bond can solvate H-bond-donating solutes. The hydrogen bond acceptor ability of a solvent is classified on a scale (β). Solvents such as water can both donate and accept hydrogen bonds, making them excellent at solvating solutes that can donate or accept (or both) H-bonds. Some chemical compounds experience solvatochromism, which is a change in color due to solvent polarity. This phenomenon illustrates how different solvents interact differently with the same solute. Other solvent effects include conformational or isomeric preferences and changes in the acidity of a solute. Solvation energy and thermodynamic considerations The solvation process will be thermodynamically favored only if the overall Gibbs energy of the solution is decreased, compared to the Gibbs energy of the separated solvent and solid (or gas or liquid). This means that the change in enthalpy minus the change in entropy (multiplied by the absolute temperature) is a negative value, or that the Gibbs energy of the system decreases. A negative Gibbs energy indicates a spontaneous process but does not provide information about the rate of dissolution. Solvation involves multiple steps with different energy consequences. First, a cavity must form in the solvent to make space for a solute. This is both entropically and enthalpically unfavorable, as solvent ordering increases and solvent-solvent interactions decrease. Stronger interactions among solvent molecules lead to a greater enthalpic penalty for cavity formation. Next, a particle of solute must separate from the bulk. This is enthalpically unfavorable since solute-solute interactions decrease, but when the solute particle enters the cavity, the resulting solvent-solute interactions are enthalpically favorable. Finally, as solute mixes into solvent, there is an entropy gain.
The enthalpy of solution is the solution enthalpy minus the enthalpy of the separate systems, whereas the entropy of solution is the corresponding difference in entropy. The solvation energy (change in Gibbs free energy) is the change in enthalpy minus the product of temperature (in Kelvin) times the change in entropy. Gases have a negative entropy of solution, due to the decrease in gaseous volume as gas dissolves. Since their enthalpy of solution does not decrease too much with temperature, and their entropy of solution is negative and does not vary appreciably with temperature, most gases are less soluble at higher temperatures. Enthalpy of solvation can help explain why solvation occurs with some ionic lattices but not with others. The difference in energy between that which is necessary to release an ion from its lattice and the energy given off when it combines with a solvent molecule is called the enthalpy change of solution. A negative value for the enthalpy change of solution corresponds to an ion that is likely to dissolve, whereas a high positive value means that solvation will not occur. It is possible that an ion will dissolve even if it has a positive enthalpy value. The extra energy required comes from the increase in entropy that results when the ion dissolves. The introduction of entropy makes it harder to determine by calculation alone whether a substance will dissolve or not. A quantitative measure for solvation power of solvents is given by donor numbers. Although early thinking was that a higher ratio of a cation's ion charge to ionic radius, or the charge density, resulted in more solvation, this does not stand up to scrutiny for ions like iron(III) or lanthanides and actinides, which are readily hydrolyzed to form insoluble (hydrous) oxides. As these are solids, it is apparent that they are not solvated. Strong solvent–solute interactions make the process of solvation more favorable. One way to compare how favorable the dissolution of a solute is in different solvents is to consider the free energy of transfer. The free energy of transfer quantifies the free energy difference between dilute solutions of a solute in two different solvents. This value essentially allows for comparison of solvation energies without including solute-solute interactions.
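In symbols, the criterion described above reads (the numerical values below are illustrative assumptions, not taken from the article):

    \Delta G_{\mathrm{solv}} = \Delta H_{\mathrm{solv}} - T\,\Delta S_{\mathrm{solv}} < 0

For instance, with an assumed ΔH_solv = −10 kJ/mol, T = 298 K and ΔS_solv = −20 J/(mol·K), one gets ΔG_solv = −10 − 298 × (−0.020) = −4.0 kJ/mol, so dissolution would be spontaneous despite the unfavourable entropy term.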
In general, thermodynamic analysis of solutions is done by modeling them as reactions. For example, if sodium chloride is added to water, the salt dissociates into the ions Na+(aq) and Cl−(aq). The equilibrium constant for this dissociation can be predicted by the change in Gibbs energy of this reaction. The Born equation is used to estimate the Gibbs free energy of solvation of a gaseous ion. Recent simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series. Macromolecules and assemblies Solvation (specifically, hydration) is important for many biological structures and processes. For instance, solvation of ions and/or of charged macromolecules, like DNA and proteins, in aqueous solutions influences the formation of heterogeneous assemblies, which may be responsible for biological function. As another example, protein folding occurs spontaneously, in part because of a favorable change in the interactions between the protein and the surrounding water molecules. Folded proteins are stabilized by 5–10 kcal/mol relative to the unfolded state due to a combination of solvation and the stronger intramolecular interactions in the folded protein structure, including hydrogen bonding. Minimizing the number of hydrophobic side chains exposed to water by burying them in the center of a folded protein is a driving force related to solvation. Solvation also affects host–guest complexation. Many host molecules have a hydrophobic pore that readily encapsulates a hydrophobic guest. These interactions can be used in applications such as drug delivery, such that a hydrophobic drug molecule can be delivered in a biological system without needing to covalently modify the drug in order to solubilize it. Binding constants for host–guest complexes depend on the polarity of the solvent. Hydration affects electronic and vibrational properties of biomolecules. Importance of solvation in computer simulations Due to the importance of the effects of solvation on the structure of macromolecules, early computer simulations which attempted to model their behaviors without including the effects of solvent (in vacuo) could yield poor results when compared with experimental data obtained in solution. Small molecules may also adopt more compact conformations when simulated in vacuo; this is due to favorable van der Waals interactions and intramolecular electrostatic interactions which would be dampened in the presence of a solvent.
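The Born model mentioned above is commonly written in the following form (shown here for reference; r₀ is the effective ionic radius and ε_r the relative permittivity of the solvent):

    \Delta G_{\mathrm{solv}} = -\frac{N_A\, z^2 e^2}{8\pi\varepsilon_0 r_0}\left(1 - \frac{1}{\varepsilon_r}\right)

so a smaller ion or a more polar (higher ε_r) solvent gives a more negative, i.e. more favourable, electrostatic solvation energy.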
As computer power increased, it became possible to try and incorporate the effects of solvation within a simulation and the simplest way to do this is to surround the molecule being simulated with a "skin" of solvent molecules, akin to simulating the molecule within a drop of solvent if the skin is sufficiently deep.
Oceanography
Oceanography, also known as oceanology, sea science, ocean science, and marine science, is the scientific study of the ocean, including its physics, chemistry, biology, and geology. It is an Earth science, which covers a wide range of topics, including ocean currents, waves, and geophysical fluid dynamics; fluxes of various chemical substances and physical properties within the ocean and across its boundaries; ecosystem dynamics; and plate tectonics and seabed geology. Oceanographers draw upon a wide range of disciplines to deepen their understanding of the world’s oceans, incorporating insights from astronomy, biology, chemistry, geography, geology, hydrology, meteorology and physics. History Early history Humans first acquired knowledge of the waves and currents of the seas and oceans in pre-historic times. Observations on tides were recorded by Aristotle (384–322 BC) and later by Strabo. Early exploration of the oceans was primarily for cartography and mainly limited to its surfaces and of the animals that fishermen brought up in nets, though depth soundings by lead line were taken. The Portuguese campaign of Atlantic navigation is the earliest example of a systematic scientific large project, sustained over many decades, studying the currents and winds of the Atlantic. The work of Pedro Nunes (1502–1578) is remembered in the navigation context for the determination of the loxodromic curve: the course of constant bearing between two points on the surface of a sphere, which can be represented as a straight line on a suitable two-dimensional map. When he published his "Treatise of the Sphere" (1537), mostly a commentated translation of earlier work by others, he included a treatise on geometrical and astronomic methods of navigation. There he states clearly that Portuguese navigations were not an adventurous endeavour: "nam se fezeram indo a acertar: mas partiam os nossos mareantes muy ensinados e prouidos de estromentos e regras de astrologia e geometria que sam as cousas que os cosmographos ham dadar apercebidas (...) e leuaua cartas muy particularmente rumadas e na ja as de que os antigos vsauam" (were not done by chance: but our seafarers departed well taught and provided with instruments and rules of astrology (astronomy) and geometry which were matters the cosmographers would provide (...) and they took charts with exact routes and no longer those used by the ancient).
His credibility rests on being personally involved in the instruction of pilots and senior seafarers from 1527 onwards by Royal appointment, along with his recognized competence as mathematician and astronomer. The main problem in navigating back from the south of the Canary Islands (or south of Boujdour) by sail alone, is due to the change in the regime of winds and currents: the North Atlantic gyre and the Equatorial counter current will push south along the northwest bulge of Africa, while the uncertain winds where the Northeast trades meet the Southeast trades (the doldrums) leave a sailing ship to the mercy of the currents. Together, prevalent current and wind make northwards progress very difficult or impossible. It was to overcome this problem and clear the passage to India around Africa as a viable maritime trade route, that a systematic plan of exploration was devised by the Portuguese. The return route from regions south of the Canaries became the 'volta do largo' or 'volta do mar'. The 'rediscovery' of the Azores islands in 1427 is merely a reflection of the heightened strategic importance of the islands, now sitting on the return route from the western coast of Africa (sequentially called 'volta de Guiné' and 'volta da Mina'); and the references to the Sargasso Sea (also called at the time 'Mar da Baga'), to the west of the Azores, in 1436, reveals the western extent of the return route. This is necessary, under sail, to make use of the southeasterly and northeasterly winds away from the western coast of Africa, up to the northern latitudes where the westerly winds will bring the seafarers towards the western coasts of Europe.
The secrecy involving the Portuguese navigations, with the death penalty for the leaking of maps and routes, concentrated all sensitive records in the Royal Archives, completely destroyed by the Lisbon earthquake of 1755. However, the systematic nature of the Portuguese campaign, mapping the currents and winds of the Atlantic, is demonstrated by the understanding of the seasonal variations, with expeditions setting sail at different times of the year taking different routes to take account of seasonally predominant winds. This happened from as early as the late 15th century and early 16th: Bartolomeu Dias followed the African coast on his way south in August 1487, while Vasco da Gama would take an open sea route from the latitude of Sierra Leone, spending three months in the open sea of the South Atlantic to profit from the southwards deflection of the southwesterly on the Brazilian side (and the Brazilian current going southward - Gama departed in July 1497); and Pedro Álvares Cabral (departing March 1500) took an even larger arc to the west, from the latitude of Cape Verde, thus avoiding the summer monsoon (which would have blocked the route taken by Gama at the time he set sail). Furthermore, there were systematic expeditions pushing into the western Northern Atlantic (Teive, 1454; Vogado, 1462; Teles, 1474; Ulmo, 1486).
The documents relating to the supplying of ships, and the ordering of sun declination tables for the southern Atlantic for as early as 1493–1496, all suggest a well-planned and systematic activity happening during the decade long period between Bartolomeu Dias finding the southern tip of Africa, and Gama's departure; additionally, there are indications of further travels by Bartolomeu Dias in the area. The most significant consequence of this systematic knowledge was the negotiation of the Treaty of Tordesillas in 1494, moving the line of demarcation 270 leagues to the west (from 100 to 370 leagues west of the Azores), bringing what is now Brazil into the Portuguese area of domination. The knowledge gathered from open sea exploration allowed for the well-documented extended periods of sail without sight of land, not by accident but as a pre-determined, planned route; for example, 30 days for Bartolomeu Dias culminating at Mossel Bay, the three months Gama spent in the South Atlantic to use the Brazil current (southward), or the 29 days Cabral took from Cape Verde up to landing in Monte Pascoal, Brazil. The Danish expedition to Arabia 1761–67 can be said to be the world's first oceanographic expedition, as the ship Grønland had on board a group of scientists, including naturalist Peter Forsskål, who was assigned an explicit task by the king, Frederik V, to study and describe the marine life in the open sea, including finding the cause of mareel, or milky seas. For this purpose, the expedition was equipped with nets and scrapers, specifically designed to collect samples from the open waters and the bottom at great depth. Although Juan Ponce de León in 1513 first identified the Gulf Stream, and the current was well known to mariners, Benjamin Franklin made the first scientific study of it and gave it its name. Franklin measured water temperatures during several Atlantic crossings and correctly explained the Gulf Stream's cause. Franklin and Timothy Folger printed the first map of the Gulf Stream in 1769–1770.
Information on the currents of the Pacific Ocean was gathered by explorers of the late 18th century, including James Cook and Louis Antoine de Bougainville. James Rennell wrote the first scientific textbooks on oceanography, detailing the current flows of the Atlantic and Indian oceans. During a voyage around the Cape of Good Hope in 1777, he mapped "the banks and currents at the Lagullas". He was also the first to understand the nature of the intermittent current near the Isles of Scilly (now known as Rennell's Current). The tides and currents of the ocean are distinct. Tides are the rise and fall of sea levels created by the combined gravitational forces of the Moon and, to a much lesser extent, the Sun, and by the Earth and Moon orbiting each other. An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences. Sir James Clark Ross took the first modern sounding in the deep sea in 1840, and Charles Darwin published a paper on reefs and the formation of atolls as a result of the second voyage of HMS Beagle in 1831–1836. Robert FitzRoy published a four-volume report of the Beagle's three voyages. In 1841–1842 Edward Forbes undertook dredging in the Aegean Sea that founded marine ecology. The first superintendent of the United States Naval Observatory (1842–1861), Matthew Fontaine Maury devoted his time to the study of marine meteorology, navigation, and charting prevailing winds and currents. His 1855 textbook Physical Geography of the Sea was one of the first comprehensive oceanography studies. Many nations sent oceanographic observations to Maury at the Naval Observatory, where he and his colleagues evaluated the information and distributed the results worldwide. Modern oceanography Knowledge of the oceans remained confined to the topmost few fathoms of the water and a small amount of the bottom, mainly in shallow areas. Almost nothing was known of the ocean depths. The British Royal Navy's efforts to chart all of the world's coastlines in the mid-19th century reinforced the vague idea that most of the ocean was very deep, although little more was known. As exploration ignited both popular and scientific interest in the polar regions and Africa, so too did the mysteries of the unexplored oceans.
The seminal event in the founding of the modern science of oceanography was the 1872–1876 Challenger expedition. As the first true oceanographic cruise, this expedition laid the groundwork for an entire academic and research discipline. In response to a recommendation from the Royal Society, the British Government announced in 1871 an expedition to explore the world's oceans and conduct appropriate scientific investigation. Charles Wyville Thomson and Sir John Murray launched the Challenger expedition. HMS Challenger, leased from the Royal Navy, was modified for scientific work and equipped with separate laboratories for natural history and chemistry. Under the scientific supervision of Thomson, Challenger travelled nearly 70,000 nautical miles surveying and exploring. On her journey circumnavigating the globe, 492 deep sea soundings, 133 bottom dredges, 151 open water trawls and 263 serial water temperature observations were taken. Around 4,700 new species of marine life were discovered. The result was the Report Of The Scientific Results of the Exploring Voyage of H.M.S. Challenger during the years 1873–76. Murray, who supervised the publication, described the report as "the greatest advance in the knowledge of our planet since the celebrated discoveries of the fifteenth and sixteenth centuries". He went on to found the academic discipline of oceanography at the University of Edinburgh, which remained the centre for oceanographic research well into the 20th century. Murray was the first to study marine trenches and in particular the Mid-Atlantic Ridge, and map the sedimentary deposits in the oceans. He tried to map out the world's ocean currents based on salinity and temperature observations, and was the first to correctly understand the nature of coral reef development. In the late 19th century, other Western nations also sent out scientific expeditions (as did private individuals and institutions). The first purpose-built oceanographic ship, Albatross, was built in 1882. In 1893, Fridtjof Nansen allowed his ship, Fram, to be frozen in the Arctic ice. This enabled him to obtain oceanographic, meteorological and astronomical data at a stationary spot over an extended period. In 1881 the geographer John Francon Williams published a seminal book, Geography of the Oceans. Between 1907 and 1911 Otto Krümmel published the Handbuch der Ozeanographie, which became influential in awakening public interest in oceanography. The four-month 1910 North Atlantic expedition headed by John Murray and Johan Hjort was the most ambitious research oceanographic and marine zoological project ever mounted until then, and led to the classic 1912 book The Depths of the Ocean.
Oceanography
Wikipedia
510
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
The first acoustic measurement of sea depth was made in 1914. Between 1925 and 1927 the "Meteor" expedition gathered 70,000 ocean depth measurements using an echo sounder, surveying the Mid-Atlantic Ridge. In 1934, Easter Ellen Cupp, the first woman in the United States to have earned a PhD in oceanography (at Scripps), completed a major work on diatoms that remained the standard taxonomy in the field until well after her death in 1999. In 1940, Cupp was let go from her position at Scripps. Sverdrup specifically commended Cupp as a conscientious and industrious worker and commented that his decision was no reflection on her ability as a scientist. Sverdrup used the instructor billet vacated by Cupp to employ Marston Sargent, a biologist studying marine algae, which was not a new research program at Scripps. Financial pressures did not prevent Sverdrup from retaining the services of two other young post-doctoral students, Walter Munk and Roger Revelle. Cupp's partner, Dorothy Rosenbury, found her a position teaching high school, where she remained for the rest of her career. (Russell, 2000) Sverdrup, Johnson and Fleming published The Oceans in 1942, which was a major landmark. The Sea (in three volumes, covering physical oceanography, seawater and geology) edited by M.N. Hill was published in 1962, while Rhodes Fairbridge's Encyclopedia of Oceanography was published in 1966. The Great Global Rift, running along the Mid-Atlantic Ridge, was discovered by Maurice Ewing and Bruce Heezen in 1953 and mapped by Heezen and Marie Tharp using bathymetric data; in 1954 a mountain range under the Arctic Ocean was found by the Arctic Institute of the USSR. The theory of seafloor spreading was developed in 1960 by Harry Hammond Hess. The Ocean Drilling Program started in 1966. Deep-sea vents were discovered in 1977 by Jack Corliss and Robert Ballard in the submersible DSV Alvin. In the 1950s, Auguste Piccard invented the bathyscaphe and used it to investigate the ocean's depths. The United States nuclear submarine USS Nautilus made the first journey under the ice to the North Pole in 1958. In 1962 the FLIP (Floating Instrument Platform), a spar buoy, was first deployed. In 1968, Tanya Atwater led the first all-woman oceanographic expedition. Until that time, gender policies restricted women oceanographers from participating in voyages to a significant extent.
Oceanography
Wikipedia
512
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
From the 1970s, there has been much emphasis on the application of large-scale computers to oceanography to allow numerical predictions of ocean conditions and as a part of overall environmental change prediction. Early techniques included analog computers (such as the Ishiguro Storm Surge Computer), now generally replaced by numerical methods (e.g. SLOSH). An oceanographic buoy array was established in the Pacific to allow prediction of El Niño events. 1990 saw the start of the World Ocean Circulation Experiment (WOCE), which continued until 2002. Geosat seafloor mapping data became available in 1995. Study of the oceans is critical to understanding shifts in Earth's energy balance along with related global and regional changes in climate, the biosphere and biogeochemistry. The atmosphere and ocean are linked because of evaporation and precipitation as well as thermal flux (and solar insolation). Recent studies have advanced knowledge on ocean acidification, ocean heat content, ocean currents, sea level rise, the oceanic carbon cycle, the water cycle, Arctic sea ice decline, coral bleaching, marine heatwaves, extreme weather, coastal erosion and many other phenomena in regard to ongoing climate change and climate feedbacks. In general, understanding the world ocean through further scientific study enables better stewardship and sustainable utilization of Earth's resources. The Intergovernmental Oceanographic Commission reports that 1.7% of the total national research expenditure of its members is focused on ocean science. Branches The study of oceanography is divided into these five branches: Biological oceanography Biological oceanography investigates the ecology and biology of marine organisms in the context of the physical, chemical and geological characteristics of their ocean environment. Chemical oceanography Chemical oceanography is the study of the chemistry of the ocean. Whereas chemical oceanography is primarily occupied with the study and understanding of seawater properties and its changes, ocean chemistry focuses primarily on the geochemical cycles. The following is a central topic investigated by chemical oceanography. Ocean acidification
Oceanography
Wikipedia
401
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
Ocean acidification describes the decrease in ocean pH that is caused by anthropogenic carbon dioxide (CO2) emissions into the atmosphere. Seawater is slightly alkaline and had a preindustrial pH of about 8.2. More recently, anthropogenic activities have steadily increased the carbon dioxide content of the atmosphere; about 30–40% of the added CO2 is absorbed by the oceans, forming carbonic acid and lowering the pH (now below 8.1) through ocean acidification. The pH is expected to reach 7.7 by the year 2100. An important element for the skeletons of marine animals is calcium, but calcium carbonate becomes more soluble with pressure, so carbonate shells and skeletons dissolve below the carbonate compensation depth. Calcium carbonate becomes more soluble at lower pH, so ocean acidification is likely to affect marine organisms with calcareous shells, such as oysters, clams, sea urchins and corals, and the carbonate compensation depth will rise closer to the sea surface. Affected planktonic organisms will include pteropods, coccolithophorids and foraminifera, all important in the food chain. In tropical regions, corals are likely to be severely affected as they become less able to build their calcium carbonate skeletons, in turn adversely impacting other reef dwellers. The current rate of ocean chemistry change seems to be unprecedented in Earth's geological history, making it unclear how well marine ecosystems will adapt to the shifting conditions of the near future. Of particular concern is the manner in which the combination of acidification with the expected additional stressors of higher ocean temperatures and lower oxygen levels will impact the seas. Geological oceanography Geological oceanography is the study of the geology of the ocean floor including plate tectonics and paleoceanography. Physical oceanography Physical oceanography studies the ocean's physical attributes including temperature-salinity structure, mixing, surface waves, internal waves, surface tides, internal tides, and currents. The following are central topics investigated by physical oceanography. Seismic Oceanography Ocean currents
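Because pH is a logarithmic scale, the drop described in the ocean acidification passage above corresponds to a sizeable relative increase in hydrogen-ion concentration. A minimal Python sketch of that arithmetic, using the pH values 8.2, 8.1 and 7.7 quoted above (the script itself is purely illustrative):

def h_ion(ph):
    # Hydrogen-ion concentration (mol/L) for a given pH.
    return 10.0 ** (-ph)

preindustrial = 8.2   # approximate preindustrial surface-ocean pH (from the text)
for label, ph in [("present day", 8.1), ("2100 projection", 7.7)]:
    increase = h_ion(ph) / h_ion(preindustrial) - 1.0
    print(f"{label}: pH {ph} -> [H+] up {increase:.0%} vs pH {preindustrial}")
# present day: about +26%; 2100 projection: about +216%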
Oceanography
Wikipedia
416
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
Since the early ocean expeditions, a major interest in oceanography has been the study of ocean currents and temperature measurements. The tides, the Coriolis effect, changes in direction and strength of wind, salinity, and temperature are the main factors determining ocean currents. The thermohaline circulation (THC) (thermo- referring to temperature and -haline referring to salt content) connects the ocean basins and is primarily dependent on the density of sea water. It is becoming more common to refer to this system as the 'meridional overturning circulation' because it more accurately accounts for other driving factors beyond temperature and salinity. Examples of sustained currents are the Gulf Stream and the Kuroshio Current, which are wind-driven western boundary currents. Ocean heat content Oceanic heat content (OHC) refers to the extra heat stored in the ocean from changes in Earth's energy balance. The increase in ocean heat plays an important role in sea level rise, because of thermal expansion. Ocean warming accounts for 90% of the energy accumulation associated with global warming since 1971. Paleoceanography Paleoceanography is the study of the history of the oceans in the geologic past with regard to circulation, chemistry, biology, geology and patterns of sedimentation and biological productivity. Paleoceanographic studies using environmental models and different proxies enable the scientific community to assess the role of the oceanic processes in the global climate by the reconstruction of past climate at various intervals. Paleoceanographic research is also intimately tied to palaeoclimatology. Oceanographic institutions
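Ocean heat content, mentioned above, is essentially a density-weighted integral of the temperature anomaly over depth. The following Python sketch illustrates the bookkeeping for a single water column; the density, specific heat and warming profile are typical illustrative values assumed here, not figures from the text:

# Illustrative ocean-heat-content anomaly for one water column, in J/m^2:
# OHC ~ sum over layers of rho * c_p * dT * dz
rho = 1025.0   # seawater density, kg/m^3 (typical assumed value)
c_p = 3990.0   # specific heat of seawater, J/(kg*K) (typical assumed value)

# Hypothetical warming profile: (layer thickness dz in m, temperature anomaly dT in K)
layers = [(100.0, 0.5), (200.0, 0.2), (700.0, 0.05)]

ohc = sum(rho * c_p * dT * dz for dz, dT in layers)
print(f"Column heat-content anomaly: {ohc:.2e} J/m^2")   # ~5e8 J/m^2 for this profile

The same integral, accumulated over the global ocean, underlies the statement that ocean warming accounts for about 90% of the energy accumulation associated with global warming.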
Oceanography
Wikipedia
321
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
The earliest international organizations of oceanography were founded at the turn of the 20th century, starting with the International Council for the Exploration of the Sea created in 1902, followed in 1919 by the Mediterranean Science Commission. Marine research institutes were already in existence, starting with the Stazione Zoologica Anton Dohrn in Naples, Italy (1872), the Biological Station of Roscoff, France (1876), the Arago Laboratory in Banyuls-sur-mer, France (1882), the Laboratory of the Marine Biological Association in Plymouth, UK (1884), the Norwegian Institute for Marine Research in Bergen, Norway (1900), and the Laboratorium für internationale Meeresforschung in Kiel, Germany (1902). On the other side of the Atlantic, the Scripps Institution of Oceanography was founded in 1903, followed by the Woods Hole Oceanographic Institution in 1930, the Virginia Institute of Marine Science in 1938, the Lamont–Doherty Earth Observatory at Columbia University in 1949, and later the School of Oceanography at the University of Washington. In Australia, the Australian Institute of Marine Science (AIMS), established in 1972, soon became a key player in marine tropical research. In 1921 the International Hydrographic Bureau, known since 1970 as the International Hydrographic Organization, was established to develop hydrographic and nautical charting standards. Related disciplines
Oceanography
Wikipedia
269
44044
https://en.wikipedia.org/wiki/Oceanography
Physical sciences
Oceanography
null
Galactic astronomy is the study of the Milky Way galaxy and all its contents. This is in contrast to extragalactic astronomy, which is the study of everything outside our galaxy, including all other galaxies. Galactic astronomy should not be confused with galaxy formation and evolution, which is the general study of galaxies, their formation, structure, components, dynamics, interactions, and the range of forms they take. The Milky Way galaxy, where the Solar System is located, is in many ways the best-studied galaxy, although important parts of it are obscured from view in visible wavelengths by regions of cosmic dust. The development of radio astronomy, infrared astronomy and submillimetre astronomy in the 20th century allowed the gas and dust of the Milky Way to be mapped for the first time. Subcategories A standard set of subcategories is used by astronomical journals to split up the subject of Galactic Astronomy: abundances – the study of the location of elements heavier than helium bulge – the study of the bulge around the center of the Milky Way center – the study of the central region of the Milky Way disk – the study of the Milky Way disk (the plane upon which most galactic objects are aligned) evolution – the evolution of the Milky Way formation – the formation of the Milky Way fundamental parameters – the fundamental parameters of the Milky Way (mass, size etc.) globular cluster – globular clusters within the Milky Way halo – the large halo around the Milky Way kinematics, and dynamics – the motions of stars and clusters nucleus – the region around the black hole at the center of the Milky Way (Sagittarius A*) open clusters and associations – open clusters and associations of stars Solar neighborhood – nearby stars stellar content – numbers and types of stars in the Milky Way structure – the structure (spiral arms etc.) Stellar populations Star clusters Globular clusters Open clusters Interstellar medium Interplanetary space - Interplanetary medium - interplanetary dust Interstellar space - Interstellar medium - interstellar dust Intergalactic space - Intergalactic medium - Intergalactic dust
Galactic astronomy
Wikipedia
420
44057
https://en.wikipedia.org/wiki/Galactic%20astronomy
Physical sciences
Basics_2
Astronomy
In physical cosmology, Big Bang nucleosynthesis (also known as primordial nucleosynthesis, and abbreviated as BBN) is the production of nuclei other than those of the lightest isotope of hydrogen (hydrogen-1, 1H, having a single proton as a nucleus) during the early phases of the universe. This type of nucleosynthesis is thought by most cosmologists to have occurred from 10 seconds to 20 minutes after the Big Bang. It is thought to be responsible for the formation of most of the universe's helium (as the isotope helium-4 (4He)), along with small fractions of the hydrogen isotope deuterium (2H or D), the helium isotope helium-3 (3He), and a very small fraction of the lithium isotope lithium-7 (7Li). In addition to these stable nuclei, two unstable or radioactive isotopes were produced: the heavy hydrogen isotope tritium (3H or T) and the beryllium isotope beryllium-7 (7Be). These unstable isotopes later decayed into 3He and 7Li, respectively, as above. Elements heavier than lithium are thought to have been created later in the life of the Universe by stellar nucleosynthesis, through the formation, evolution and death of stars. Characteristics There are several important characteristics of Big Bang nucleosynthesis (BBN): The initial conditions (neutron–proton ratio) were set in the first second after the Big Bang. The universe was very close to homogeneous at this time, and strongly radiation-dominated. The fusion of nuclei occurred between roughly 10 seconds and 20 minutes after the Big Bang; this corresponds to the temperature range when the universe was cool enough for deuterium to survive, but hot and dense enough for fusion reactions to occur at a significant rate. It was widespread, encompassing the entire observable universe.
Big Bang nucleosynthesis
Wikipedia
393
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
The key parameter which allows one to calculate the effects of Big Bang nucleosynthesis is the baryon/photon number ratio, which is a small number of order 6 × 10−10. This parameter corresponds to the baryon density and controls the rate at which nucleons collide and react; from this it is possible to calculate element abundances after nucleosynthesis ends. Although the baryon per photon ratio is important in determining element abundances, the precise value makes little difference to the overall picture. Without major changes to the Big Bang theory itself, BBN will result in mass abundances of about 75% of hydrogen-1, about 25% helium-4, about 0.01% of deuterium and helium-3, trace amounts (on the order of 10−10) of lithium, and negligible heavier elements. That the observed abundances in the universe are generally consistent with these abundance numbers is considered strong evidence for the Big Bang theory. In this field, for historical reasons it is customary to quote the helium-4 fraction by mass, symbol Y, so that 25% helium-4 means that helium-4 atoms account for 25% of the mass, but less than 8% of the nuclei would be helium-4 nuclei. Other (trace) nuclei are usually expressed as number ratios to hydrogen. The first detailed calculations of the primordial isotopic abundances came in 1966 and have been refined over the years using updated estimates of the input nuclear reaction rates. The first systematic Monte Carlo study of how nuclear reaction rate uncertainties impact isotope predictions, over the relevant temperature range, was carried out in 1993. Important parameters The creation of light elements during BBN was dependent on a number of parameters; among them were the neutron–proton ratio (calculable from Standard Model physics) and the baryon–photon ratio. Neutron–proton ratio The neutron–proton ratio was set by Standard Model physics before the nucleosynthesis era, essentially within the first second after the Big Bang. Neutrons can react with positrons or electron neutrinos to create protons and other products in one of the following reactions: n + e⁺ ↔ ν̄e + p and n + νe ↔ p + e⁻.
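The remark above that a 25% helium-4 mass fraction corresponds to less than 8% of nuclei is a short arithmetic exercise. A minimal Python sketch, assuming the baryons are only hydrogen (mass ~1) and helium-4 (mass ~4); the variable names are purely illustrative:

Y = 0.25                 # helium-4 mass fraction (from the text)
n_he = Y / 4.0           # helium-4 nuclei per unit mass
n_h = (1.0 - Y) / 1.0    # hydrogen nuclei per unit mass
number_fraction = n_he / (n_he + n_h)
print(f"Helium-4 number fraction: {number_fraction:.1%}")   # about 7.7%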
Big Bang nucleosynthesis
Wikipedia
478
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
At times much earlier than 1 sec, these reactions were fast and maintained the n/p ratio close to 1:1. As the temperature dropped, the equilibrium shifted in favour of protons due to their slightly lower mass, and the n/p ratio smoothly decreased. These reactions continued until the decreasing temperature and density caused the reactions to become too slow, which occurred at about T = 0.7 MeV (time around 1 second) and is called the freeze out temperature. At freeze out, the neutron–proton ratio was about 1/6. However, free neutrons are unstable with a mean life of 880 sec; some neutrons decayed in the next few minutes before fusing into any nucleus, so the ratio of total neutrons to protons after nucleosynthesis ends is about 1/7. Almost all neutrons that fused instead of decaying ended up combined into helium-4, due to the fact that helium-4 has the highest binding energy per nucleon among light elements. This predicts that about 8% of all atoms should be helium-4, leading to a mass fraction of helium-4 of about 25%, which is in line with observations. Small traces of deuterium and helium-3 remained as there was insufficient time and density for them to react and form helium-4. Baryon–photon ratio The baryon–photon ratio, η, is the key parameter determining the abundances of light elements after nucleosynthesis ends. Baryons and light elements can fuse through a chain of main reactions that build up deuterium, helium-3, tritium and helium-4, along with some other low-probability reactions leading to 7Li or 7Be. (An important feature is that there are no stable nuclei with mass 5 or 8, which implies that reactions adding one baryon to 4He, or fusing two 4He, do not occur). Most fusion chains during BBN ultimately terminate in 4He (helium-4), while "incomplete" reaction chains lead to small amounts of left-over 2H or 3He; the amount of these decreases with increasing baryon-photon ratio. That is, the larger the baryon-photon ratio the more reactions there will be and the more efficiently deuterium will be eventually transformed into helium-4. This result makes deuterium a very useful tool in measuring the baryon-to-photon ratio. Sequence
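The figures quoted above (an n/p ratio of roughly 1/6 at freeze-out, falling to roughly 1/7 by the time nucleosynthesis ends) follow from the equilibrium Boltzmann factor plus free-neutron decay. A back-of-the-envelope Python sketch; the neutron–proton mass difference and the assumed delay before neutrons are locked into nuclei are round illustrative values, and because real freeze-out is gradual the result only approximately reproduces the quoted ratios:

import math

delta_m = 1.293    # neutron-proton mass difference, MeV (standard value)
T_freeze = 0.7     # freeze-out temperature, MeV (from the text)
tau_n = 880.0      # neutron mean life, s (from the text)
t_bbn = 150.0      # assumed elapsed time before neutrons are bound into nuclei, s

np0 = math.exp(-delta_m / T_freeze)      # equilibrium n/p at freeze-out (~1/6)
survive = math.exp(-t_bbn / tau_n)       # fraction of free neutrons not yet decayed
n = np0 * survive                        # neutrons per initial proton
p = 1.0 + np0 * (1.0 - survive)          # protons gain the decayed neutrons
print(f"n/p at freeze-out ~ 1/{1/np0:.1f}, at nucleosynthesis ~ 1/{p/n:.1f}")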
Big Bang nucleosynthesis
Wikipedia
479
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
Big Bang nucleosynthesis began roughly 20 seconds after the Big Bang, when the universe had cooled sufficiently to allow deuterium nuclei to survive disruption by high-energy photons. (Note that the neutron–proton freeze-out time was earlier). This time is essentially independent of dark matter content, since the universe was highly radiation dominated until much later, and this dominant component controls the temperature/time relation. At this time there were about six protons for every neutron, but a small fraction of the neutrons decayed in the next few hundred seconds before fusing into any nucleus, so by the end of nucleosynthesis there were about seven protons for every neutron, and almost all the neutrons ended up in helium-4 nuclei. One feature of BBN is that the physical laws and constants that govern the behavior of matter at these energies are very well understood, and hence BBN lacks some of the speculative uncertainties that characterize earlier periods in the life of the universe. Another feature is that the process of nucleosynthesis is determined by conditions at the start of this phase of the life of the universe, and proceeds independently of what happened before. As the universe expands, it cools. Free neutrons are less stable than helium nuclei, and the protons and neutrons have a strong tendency to form helium-4. However, forming helium-4 requires the intermediate step of forming deuterium. Before nucleosynthesis began, the temperature was high enough for many photons to have energy greater than the binding energy of deuterium; therefore any deuterium that was formed was immediately destroyed (a situation known as the "deuterium bottleneck"). Hence, the formation of helium-4 was delayed until the universe became cool enough for deuterium to survive (at about T = 0.1 MeV); after which there was a sudden burst of element formation. However, very shortly thereafter, around twenty minutes after the Big Bang, the temperature and density became too low for any significant fusion to occur. At this point, the elemental abundances were nearly fixed, and the only changes were the result of the radioactive decay of the two major unstable products of BBN, tritium and beryllium-7.
Big Bang nucleosynthesis
Wikipedia
461
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
History of theory The history of Big Bang nucleosynthesis began with the calculations of Ralph Alpher in the 1940s. Alpher published the Alpher–Bethe–Gamow paper that outlined the theory of light-element production in the early universe. Heavy elements Big Bang nucleosynthesis produced very few nuclei of elements heavier than lithium due to a bottleneck: the absence of a stable nucleus with 5 or 8 nucleons. This deficit of larger atoms also limited the amounts of lithium-7 produced during BBN. In stars, the bottleneck is passed by triple collisions of helium-4 nuclei, producing carbon (the triple-alpha process). However, this process is very slow and requires much higher densities, taking tens of thousands of years to convert a significant amount of helium to carbon in stars, and therefore it made a negligible contribution in the minutes following the Big Bang. The predicted abundance of CNO isotopes produced in Big Bang nucleosynthesis is expected to be on the order of 10−15 that of H, making them essentially undetectable and negligible. Indeed, none of these primordial isotopes of the elements from beryllium to oxygen have yet been detected, although those of beryllium and boron may be detectable in the future. So far, the only stable nuclides known experimentally to have been made during Big Bang nucleosynthesis are protium, deuterium, helium-3, helium-4, and lithium-7. Helium-4
Big Bang nucleosynthesis
Wikipedia
319
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
Big Bang nucleosynthesis predicts a primordial abundance of about 25% helium-4 by mass, irrespective of the initial conditions of the universe. As long as the universe was hot enough for protons and neutrons to transform into each other easily, their ratio, determined solely by their relative masses, was about 1 neutron to 7 protons (allowing for some decay of neutrons into protons). Once it was cool enough, the neutrons quickly bound with an equal number of protons to form first deuterium, then helium-4. Helium-4 is very stable and is nearly the end of this chain if it runs for only a short time, since helium neither decays nor combines easily to form heavier nuclei (since there are no stable nuclei with mass numbers of 5 or 8, helium does not combine easily with either protons, or with itself). Once the temperature dropped, out of every 16 nucleons (2 neutrons and 14 protons), 4 of these (25% of the total particles and total mass) combined quickly into one helium-4 nucleus. This produces one helium for every 12 hydrogens, resulting in a universe that is a little under 8% helium by number of atoms, and 25% helium by mass. One analogy is to think of helium-4 as ash, and the amount of ash that one forms when one completely burns a piece of wood is insensitive to how one burns it. Appealing to the BBN theory of the helium-4 abundance is necessary, as there is far more helium-4 in the universe than can be explained by stellar nucleosynthesis. In addition, it provides an important test for the Big Bang theory. If the observed helium abundance were significantly different from 25%, it would pose a serious challenge to the theory. This would particularly be the case if the early helium-4 abundance was much smaller than 25% because it is hard to destroy helium-4. For a few years during the mid-1990s, observations suggested that this might be the case, causing astrophysicists to talk about a Big Bang nucleosynthetic crisis, but further observations were consistent with the Big Bang theory. Deuterium
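The nucleon bookkeeping in the passage above (2 neutrons and 14 protons per 16 nucleons, giving 25% helium by mass) can be checked directly. A minimal Python sketch that assumes every available neutron ends up paired into helium-4:

n_over_p = 1.0 / 7.0    # neutron-to-proton ratio at nucleosynthesis (from the text)
nucleons = 16
neutrons = nucleons * n_over_p / (1.0 + n_over_p)   # = 2
protons = nucleons - neutrons                        # = 14
nucleons_in_helium = 2 * neutrons                    # each He-4 nucleus takes 2n + 2p
Y = nucleons_in_helium / nucleons                    # helium-4 mass fraction
print(f"{neutrons:.0f} n + {protons:.0f} p per {nucleons} nucleons -> Y = {Y:.2f}")   # Y = 0.25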
Big Bang nucleosynthesis
Wikipedia
456
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
Deuterium is in some ways the opposite of helium-4, in that while helium-4 is very stable and difficult to destroy, deuterium is only marginally stable and easy to destroy. The temperatures, time, and densities were sufficient to combine a substantial fraction of the deuterium nuclei to form helium-4 but insufficient to carry the process further using helium-4 in the next fusion step. BBN did not convert all of the deuterium in the universe to helium-4 due to the expansion that cooled the universe and reduced the density, and so cut that conversion short before it could proceed any further. One consequence of this is that, unlike helium-4, the amount of deuterium is very sensitive to initial conditions. The denser the initial universe was, the more deuterium would be converted to helium-4 before time ran out, and the less deuterium would remain. There are no known post-Big Bang processes which can produce significant amounts of deuterium. Hence observations about deuterium abundance suggest that the universe is not infinitely old, which is in accordance with the Big Bang theory. During the 1970s, there were major efforts to find processes that could produce deuterium, but those revealed ways of producing isotopes other than deuterium. The problem was that while the concentration of deuterium in the universe is consistent with the Big Bang model as a whole, it is too high to be consistent with a model that presumes that most of the universe is composed of protons and neutrons. If one assumes that all of the universe consists of protons and neutrons, the density of the universe is such that much of the currently observed deuterium would have been burned into helium-4. The standard explanation now used for the abundance of deuterium is that the universe does not consist mostly of baryons, but that non-baryonic matter (also known as dark matter) makes up most of the mass of the universe. This explanation is also consistent with calculations that show that a universe made mostly of protons and neutrons would be far more clumpy than is observed.
Big Bang nucleosynthesis
Wikipedia
439
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
It is very hard to come up with another process that would produce deuterium other than by nuclear fusion. Such a process would require that the temperature be hot enough to produce deuterium, but not hot enough to produce helium-4, and that this process should immediately cool to non-nuclear temperatures after no more than a few minutes. It would also be necessary for the deuterium to be swept away before it reoccurs. Producing deuterium by fission is also difficult. The problem here again is that deuterium is an unlikely product of nuclear processes: collisions between atomic nuclei are likely to result either in the fusion of the nuclei, or in the release of free neutrons or alpha particles. During the 1970s, cosmic ray spallation was proposed as a source of deuterium. That theory failed to account for the abundance of deuterium, but led to explanations of the source of other light elements. Lithium The amounts of lithium-7 and lithium-6 produced in the Big Bang are small: lithium-7 is on the order of 10−9 of all primordial nuclides, and lithium-6 around 10−13. Measurements and status of theory The theory of BBN gives a detailed mathematical description of the production of the light "elements" deuterium, helium-3, helium-4, and lithium-7. Specifically, the theory yields precise quantitative predictions for the mixture of these elements, that is, the primordial abundances at the end of the Big Bang. In order to test these predictions, it is necessary to reconstruct the primordial abundances as faithfully as possible, for instance by observing astronomical objects in which very little stellar nucleosynthesis has taken place (such as certain dwarf galaxies) or by observing objects that are very far away, and thus can be seen in a very early stage of their evolution (such as distant quasars).
Big Bang nucleosynthesis
Wikipedia
393
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
As noted above, in the standard picture of BBN, all of the light element abundances depend on the amount of ordinary matter (baryons) relative to radiation (photons). Since the universe is presumed to be homogeneous, it has one unique value of the baryon-to-photon ratio. For a long time, this meant that to test BBN theory against observations one had to ask: can all of the light element observations be explained with a single value of the baryon-to-photon ratio? Or more precisely, allowing for the finite precision of both the predictions and the observations, one asks: is there some range of baryon-to-photon values which can account for all of the observations? More recently, the question has changed: Precision observations of the cosmic microwave background radiation with the Wilkinson Microwave Anisotropy Probe (WMAP) and Planck give an independent value for the baryon-to-photon ratio. Using this value, are the BBN predictions for the abundances of light elements in agreement with the observations? The present measurements of helium-4 indicate good agreement, and even better agreement for helium-3. But for lithium-7, there is a significant discrepancy between the abundance predicted by BBN with the WMAP/Planck baryon density and the abundance derived from Population II stars. The discrepancy is a factor of 2.4–4.3 below the theoretically predicted value. This discrepancy, called the "cosmological lithium problem", is considered a problem for the original models and has resulted in revised calculations of the standard BBN based on new nuclear data, and in various reevaluation proposals for primordial proton–proton nuclear reactions, especially the relative abundances of the nuclides involved. Non-standard scenarios In addition to the standard BBN scenario there are numerous non-standard BBN scenarios. These should not be confused with non-standard cosmology: a non-standard BBN scenario assumes that the Big Bang occurred, but inserts additional physics in order to see how this affects elemental abundances. These pieces of additional physics include relaxing or removing the assumption of homogeneity, or inserting new particles such as massive neutrinos.
Big Bang nucleosynthesis
Wikipedia
453
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
There have been, and continue to be, various reasons for researching non-standard BBN. The first, which is largely of historical interest, is to resolve inconsistencies between BBN predictions and observations. This has proved to be of limited usefulness in that the inconsistencies were resolved by better observations, and in most cases trying to change BBN resulted in abundances that were more inconsistent with observations rather than less. The second reason for researching non-standard BBN, and largely the focus of non-standard BBN in the early 21st century, is to use BBN to place limits on unknown or speculative physics. For example, standard BBN assumes that no exotic hypothetical particles were involved in BBN. One can insert a hypothetical particle (such as a massive neutrino) and see what has to happen before BBN predicts abundances that are very different from observations. This has been done to put limits on the mass of a stable tau neutrino.
Big Bang nucleosynthesis
Wikipedia
200
44058
https://en.wikipedia.org/wiki/Big%20Bang%20nucleosynthesis
Physical sciences
Physical cosmology
Astronomy
X-ray astronomy is an observational branch of astronomy which deals with the detection and study of X-rays from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy uses a type of space telescope that can detect X-ray radiation, which standard optical telescopes, such as those at the Mauna Kea Observatories, cannot. X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied. The existence of solar X-rays was confirmed in the mid-twentieth century by V-2 rockets converted into sounding rockets, and the detection of extra-terrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1) (the first X-ray source found in the constellation Scorpius), the X-ray emission of Scorpius X-1 is 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, the energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths. Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies. History of X-ray astronomy
X-ray astronomy
Wikipedia
470
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes". In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species; the Sun was thus known to be surrounded by a hot, tenuous corona. In the mid-1940s radio observations revealed a radio corona around the Sun. The search for X-ray sources from above the Earth's atmosphere began on August 5, 1948, at 12:07 GMT. A US Army (formerly German) V-2 rocket was launched from the White Sands Proving Ground as part of Project Hermes. The first solar X-rays were recorded by T. Burnight. The sensitivity of detectors increased greatly during the first 60 years of X-ray astronomy, through the 1960s, 1970s, 1980s, and 1990s. In addition, the ability to focus X-rays has developed enormously, allowing the production of high-quality images of many fascinating celestial objects. Sounding rocket flights The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board.
X-ray astronomy
Wikipedia
366
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside our solar system (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky. X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field.
X-ray astronomy
Wikipedia
355
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Of interest is the hot ionized medium (HIM), consisting of coronal cloud ejections from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble. To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission is Dr. Dan McCammon of the University of Wisconsin–Madison. Balloons Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source. High-energy focusing telescope
X-ray astronomy
Wikipedia
410
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1, the Crab Nebula. High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-ray emissions from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica in December 1991 and 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time. Rockoons The rockoon, a blend of rocket and balloon, was a solid fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower thicker air layers that would have required much more chemical fuel. The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during an Aerobee rocket firing cruise on March 1, 1949. From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) launched eight Deacon rockoons from shipboard for solar ultraviolet and X-ray observations at ~30° N ~121.6° W, southwest of San Clemente Island, apogee: 120 km. X-ray telescopes and mirrors
X-ray astronomy
Wikipedia
466
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Satellites are needed because X-rays are absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing angle reflection rather than refraction or large deviation reflection. This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. The first X-ray telescope in astronomy was used to observe the Sun. The first X-ray picture of the Sun, taken with a grazing incidence telescope, was obtained in 1963 by a rocket-borne telescope. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket. The use of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires the ability to determine the location of an arriving X-ray photon in two dimensions and a reasonable detection efficiency. X-ray astronomy detectors X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time. X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them. Astrophysical sources of X-rays
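The energy range and wavelength range quoted above for X-ray detectors are two descriptions of the same photons, related by E = hc/λ. A small Python conversion sketch (the constant is rounded and the script is illustrative only):

HC_KEV_NM = 1.2398    # h*c expressed in keV*nm (rounded)

def wavelength_nm(energy_kev):
    # Photon wavelength in nm for a given photon energy in keV.
    return HC_KEV_NM / energy_kev

for e in (0.12, 120.0):    # the detector energy range quoted above, in keV
    print(f"{e:7.2f} keV -> {wavelength_nm(e):.4f} nm")
# 0.12 keV corresponds to roughly 10 nm and 120 keV to roughly 0.01 nm, which
# only roughly matches the c. 0.008-8 nm range quoted above.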
X-ray astronomy
Wikipedia
340
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN) to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole. The other component is an intermediate mass star. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis), probably due to Roche lobe overflow. Hercules X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline between high- and low-mass X-ray binaries. In July 2020, astronomers reported the observation of a "hard tidal disruption event candidate" associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the "very few tidal disruption events with hard powerlaw X-ray spectra". Celestial X-ray sources
X-ray astronomy
Wikipedia
389
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation. Astronomy has been around for a long time. Physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations. Within the constellations Orion and Eridanus and stretching across them is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα-emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1200 light-years across which is observed in the visual (Hα) and X-ray portions of the spectrum. Explorational X-ray astronomy
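The claim above that gas at T ~ 2–3 MK emits soft X-rays can be sanity-checked by converting temperature to a characteristic thermal energy kT, which lands in the 100–300 eV band mentioned for the filament shadow. A quick Python sketch (Boltzmann constant rounded; illustrative only):

K_B_EV = 8.617e-5    # Boltzmann constant in eV/K (rounded)

for T in (2e6, 3e6):     # superbubble gas temperatures from the text, in K
    print(f"T = {T:.0e} K -> kT = {K_B_EV * T:.0f} eV")
# roughly 170-260 eV, i.e. within the soft X-ray band absorbed by the filament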
X-ray astronomy
Wikipedia
477
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth. For a satellite or space probe to qualify as a deep space X-ray astronomer/explorer or "astronobot", all it needs is to carry an XRT or X-ray detector aboard and to leave Earth orbit. Ulysses was launched October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had three main objectives: to study and monitor solar flares, to detect and localize cosmic gamma-ray bursts, and to detect Jovian aurorae in situ. Ulysses was the first satellite carrying a gamma-ray burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background and the sensitivity was 10−6 erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data, writing it to a 32-kbit memory for a slow telemetry read out. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16-channel energy spectra from the sum of the 2 detectors (taken either in 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed.
X-ray astronomy
Wikipedia
511
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm² area Si surface barrier detectors. A 100 mg/cm² beryllium foil front window rejected the low energy X-rays and defined a conical FOV of 75° (half-angle). These detectors were passively cooled and operated in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV. Theoretical X-ray astronomy Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects. Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied. Dynamos Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate. Astronomical models
X-ray astronomy
Wikipedia
425
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is measured in light-years (ly), not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed: low transition region densities, leading to low emission in coronae; high-density wind extinction of coronal emission; only cool coronal loops becoming stable; changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma; or changes in the magnetic dynamo character, leading to the disappearance of stellar fields and leaving only small-scale, turbulence-generated fields among red giants.
X-ray astronomy
Wikipedia
408
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Analytical X-ray astronomy High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d), and in circular (or slightly eccentric) orbits. SGXBs show the typical hard X-ray spectra of accreting pulsars and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 10³⁶ erg·s⁻¹ (10²⁹ watts). The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXTs) is still debated. Stellar X-ray astronomy The first detection of stellar X-rays occurred on April 5, 1974, with the detection of X-rays from Capella. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity. Stellar coronae Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona as obtained from the first coronal X-ray spectrum of Capella using HEAO 1 required magnetic confinement unless it was a free-flowing coronal wind.
X-ray astronomy
Wikipedia
481
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein Observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The Einstein initial survey led to significant insights: X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution; the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were now interpreted as the effect of magnetic coronal heating; and stars that are otherwise similar reveal large differences in their X-ray output if their rotation periods are different. To fit the medium-resolution spectrum of UX Arietis, subsolar abundances were required. Stellar X-ray astronomy is contributing toward a deeper understanding of magnetic fields in magnetohydrodynamic dynamos, the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and the interactions of high-energy radiation with the stellar environment. Current wisdom has it that the massive coronal main-sequence stars are late-A or early-F stars, a conjecture that is supported both by observation and by theory. Young, low-mass stars Newly formed stars are known as pre–main-sequence stars during the stage of stellar evolution before they reach the main sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than for main-sequence stars of similar masses. X-ray emission for pre–main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. Pre–main-sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows.
X-ray astronomy
Wikipedia
509
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-Plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members. Unstable winds Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays. Coolest M dwarfs Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: the long-standing lowest-mass X-ray detection, VB 8 (M7e V), has shown steady emission at levels of X-ray luminosity (LX) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend. Strong X-ray emission from Herbig Ae/Be stars Herbig Ae/Be stars are pre-main sequence stars. As to their X-ray emission properties, some are reminiscent of hot stars, others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures. The nature of these strong emissions has remained controversial, with models including unstable stellar winds, colliding winds, magnetic coronae, disk coronae, wind-fed magnetospheres, accretion shocks, the operation of a shear dynamo, and the presence of unknown late-type companions.
X-ray astronomy
Wikipedia
443
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
K giants The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 10³² erg·s⁻¹ or 10²⁵ W) and the hottest known, with dominant temperatures up to 40 MK. However, the current popular hypothesis involves a merger of a close binary system in which the orbital angular momentum of the companion is transferred to the primary. Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter. Eta Carinae New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light-years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carinae observations by the Hubble Space Telescope. "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet." Amateur X-ray astronomy
X-ray astronomy
Wikipedia
440
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has developed, and continues to develop, the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough and the cost of appropriate parts to build a suitable X-ray detector. Major questions in X-ray astronomy As X-ray astronomy uses a major spectral probe to peer into the source, it is a valuable tool in efforts to understand many puzzles. Stellar magnetic fields Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma-physical mechanisms that act in stellar environments. Some stars, for example, seem to have fossil magnetic fields left over from their period of formation, while others seem to generate the field anew frequently. Extrasolar X-ray source astrometry With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths, such as visible or radio, for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance. There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidence, especially with handicaps in making identifications, such as the large uncertainties in positional determinations made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature.
X-ray astronomy
Wikipedia
440
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
X-ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. "An adopted matching criterion of 40" finds nearly all possible X-ray source matches while keeping the probability of any spurious matches in the sample to 3%." Solar X-ray astronomy All of the detected X-ray sources at, around, or near the Sun appear to be associated with processes in the corona, which is its outer atmosphere. Coronal heating problem In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K, yet its corona has an average temperature of 1–2 × 10⁶ K. However, the hottest regions are 8–20 × 10⁶ K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms.
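The stellar counterpart matching described at the start of this section (an X-ray centroid is matched to a star if their angular separation is below an adopted radius such as 40″) can be sketched numerically. The following Python sketch uses the standard Vincenty formula for angular separation on the sky; the coordinates and the 40″ radius are illustrative assumptions, not values from any specific catalogue.

```python
import math

def angular_separation_arcsec(ra1, dec1, ra2, dec2):
    """Angular separation between two sky positions (degrees in, arcseconds out),
    using the Vincenty formula, which stays numerically stable at small separations."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    dra = ra2 - ra1
    num = math.hypot(
        math.cos(dec2) * math.sin(dra),
        math.cos(dec1) * math.sin(dec2) - math.sin(dec1) * math.cos(dec2) * math.cos(dra),
    )
    den = math.sin(dec1) * math.sin(dec2) + math.cos(dec1) * math.cos(dec2) * math.cos(dra)
    return math.degrees(math.atan2(num, den)) * 3600.0

# Hypothetical X-ray source centroid and optical star position (RA, Dec in degrees).
xray_src = (83.6331, 22.0145)
star = (83.6287, 22.0214)

MATCH_RADIUS_ARCSEC = 40.0  # the matching criterion quoted in the text
sep = angular_separation_arcsec(*xray_src, *star)
print(f"separation = {sep:.1f} arcsec -> match: {sep <= MATCH_RADIUS_ARCSEC}")
```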
X-ray astronomy
Wikipedia
421
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Coronal mass ejection A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped pattern of soft X-ray intensity) is an observational manifestation of the connection between coronal structure and CME production. "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs." The first detection of a CME as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses, are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington, and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Roentgen) and the recognition of the ionosphere (by Kennelly and Heaviside). Exotic X-ray sources
X-ray astronomy
Wikipedia
404
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
A microquasar is a smaller cousin of a quasar: a radio-emitting X-ray binary, often with a resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. X-ray dark stars
X-ray astronomy
Wikipedia
216
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
During the solar cycle, the Sun is at times almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late A and early F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources, yet most Bp/Ap stars remain undetected in X-rays, and of those reported early on as producing X-rays, only a few can be identified as probably single stars. X-ray dark planets and comets X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area." As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays. Comet Lulin
X-ray astronomy
Wikipedia
421
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet.
X-ray astronomy
Wikipedia
173
44062
https://en.wikipedia.org/wiki/X-ray%20astronomy
Physical sciences
High-energy astronomy
Astronomy
Extragalactic astronomy is the branch of astronomy concerned with objects outside the Milky Way galaxy. In other words, it is the study of all astronomical objects which are not covered by galactic astronomy. The closest objects in extragalactic astronomy include the galaxies of the Local Group, which are close enough to allow very detailed analyses of their contents (e.g. supernova remnants, stellar associations). As instrumentation has improved, distant objects can now be examined in more detail and so extragalactic astronomy includes objects at nearly the edge of the observable universe. Research into distant galaxies (outside of our Local Group) is valuable for studying aspects of the universe such as galaxy evolution and Active Galactic Nuclei (AGN), which give insight into physical phenomena (e.g. supermassive black hole accretion and the presence of dark matter). It is through extragalactic astronomy that astronomers and physicists are able to study the effects of general relativity, such as gravitational lensing and gravitational waves, that are otherwise impossible (or nearly impossible) to study on a galactic scale. A key interest in extragalactic astronomy is the study of how galaxies behave and interact throughout the universe. Astronomers' methodologies range from theoretical to observation-based methods. Galaxies form in various ways. In most cosmological N-body simulations, the earliest galaxies in the cosmos formed in the first hundreds of millions of years. These primordial galaxies formed as the enormous reservoirs of gas and dust in the early universe collapsed in on themselves, giving birth to the first stars, now known as Population III stars. These stars were of enormous masses, in the range of 300 to perhaps 3 million solar masses. Due to their large mass, these stars had extremely short lifespans. Famous examples include the Hubble Deep Field, LIGO's detection of gravitational waves, and the Chandra Deep Field South. Topics include Active Galactic Nuclei (AGN) and quasars, dark matter, galaxy clusters and superclusters, intergalactic stars, intergalactic dust, the observable universe, radio galaxies, supernovae, and extragalactic planets.
Extragalactic astronomy
Wikipedia
412
44063
https://en.wikipedia.org/wiki/Extragalactic%20astronomy
Physical sciences
Basics_2
Astronomy
A gyroscope (from Ancient Greek γῦρος gŷros, "round" and σκοπέω skopéō, "to look") is a device used for measuring or maintaining orientation and angular velocity. It is a spinning wheel or disc in which the axis of rotation (spin axis) is free to assume any orientation by itself. When rotating, the orientation of this axis is unaffected by tilting or rotation of the mounting, according to the conservation of angular momentum. Gyroscopes based on other operating principles also exist, such as the microchip-packaged MEMS gyroscopes found in electronic devices (sometimes called gyrometers), solid-state ring lasers, fibre optic gyroscopes, and the extremely sensitive quantum gyroscope. Applications of gyroscopes include inertial navigation systems, such as in the Hubble Space Telescope, or inside the steel hull of a submerged submarine. Due to their precision, gyroscopes are also used in gyrotheodolites to maintain direction in tunnel mining. Gyroscopes can be used to construct gyrocompasses, which complement or replace magnetic compasses (in ships, aircraft and spacecraft, vehicles in general), to assist in stability (bicycles, motorcycles, and ships) or be used as part of an inertial guidance system. MEMS gyroscopes are popular in some consumer electronics, such as smartphones. Description and diagram A gyroscope is an instrument, consisting of a wheel mounted into two or three gimbals providing pivoted supports, for allowing the wheel to rotate about a single axis. A set of three gimbals, one mounted on the other with orthogonal pivot axes, may be used to allow a wheel mounted on the innermost gimbal to have an orientation remaining independent of the orientation, in space, of its support.
Gyroscope
Wikipedia
400
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
In the case of a gyroscope with two gimbals, the outer gimbal, which is the gyroscope frame, is mounted so as to pivot about an axis in its own plane determined by the support. This outer gimbal possesses one degree of rotational freedom and its axis possesses none. The second, inner gimbal is mounted in the gyroscope frame (outer gimbal) so as to pivot about an axis in its own plane that is always perpendicular to the pivotal axis of the gyroscope frame (outer gimbal). This inner gimbal has two degrees of rotational freedom. The axle of the spinning wheel (the rotor) defines the spin axis. The rotor is constrained to spin about an axis, which is always perpendicular to the axis of the inner gimbal. So the rotor possesses three degrees of rotational freedom and its axis possesses two. The rotor responds to a force applied to the input axis by a reaction force to the output axis. A gyroscope flywheel will roll or resist about the output axis depending upon whether the output gimbals are of a free or fixed configuration. Examples of free-output-gimbal devices are the attitude control gyroscopes used to sense or measure the pitch, roll and yaw attitude angles in a spacecraft or aircraft. The centre of gravity of the rotor can be in a fixed position. The rotor simultaneously spins about one axis and is capable of oscillating about the two other axes, and it is free to turn in any direction about the fixed point (except for its inherent resistance caused by rotor spin). Some gyroscopes have mechanical equivalents substituted for one or more of the elements. For example, the spinning rotor may be suspended in a fluid, instead of being mounted in gimbals. A control moment gyroscope (CMG) is an example of a fixed-output-gimbal device that is used on spacecraft to hold or maintain a desired attitude angle or pointing direction using the gyroscopic resistance force. In some special cases, the outer gimbal (or its equivalent) may be omitted so that the rotor has only two degrees of freedom. In other cases, the centre of gravity of the rotor may be offset from the axis of oscillation, and thus the centre of gravity of the rotor and the centre of suspension of the rotor may not coincide. History
Gyroscope
Wikipedia
507
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
Early similar devices Essentially, a gyroscope is a top combined with a pair of gimbals. Tops were invented in many different civilizations, including classical Greece, Rome, and China. Most of these were not utilized as instruments. The first known apparatus similar to a gyroscope (the "Whirling Speculum" or "Serson's Speculum") was invented by John Serson in 1743. It was used as a level, to locate the horizon in foggy or misty conditions. The first instrument used more like an actual gyroscope was made by Johann Bohnenberger of Germany, who first wrote about it in 1817. At first he called it the "Machine". Bohnenberger's machine was based on a rotating massive sphere. In 1832, American Walter R. Johnson developed a similar device that was based on a rotating disc. The French mathematician Pierre-Simon Laplace, working at the École Polytechnique in Paris, recommended the machine for use as a teaching aid, and thus it came to the attention of Léon Foucault. Foucault's gyroscope In 1852, Foucault used it in an experiment demonstrating the rotation of the Earth. It was Foucault who gave the device its modern name, in an experiment to see (Greek skopeein, to see) the Earth's rotation (Greek gyros, circle or rotation), which was visible in the 8 to 10 minutes before friction slowed the spinning rotor. Commercialization In the 1860s, the advent of electric motors made it possible for a gyroscope to spin indefinitely; this led to the first prototype heading indicators, and a rather more complicated device, the gyrocompass. The first functional gyrocompass was patented in 1904 by German inventor Hermann Anschütz-Kaempfe. American Elmer Sperry followed with his own design later that year, and other nations soon realized the military importance of the invention—in an age in which naval prowess was the most significant measure of military power—and created their own gyroscope industries. The Sperry Gyroscope Company quickly expanded to provide aircraft and naval stabilizers as well, and other gyroscope developers followed suit.
Gyroscope
Wikipedia
467
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
Circa 1911 the L. T. Hurst Mfg Co of Indianapolis started producing the "Hurst gyroscope", a toy gyroscope with a pull string and pedestal. Manufacture was at some point switched to Chandler Mfg Co (still branded Hurst). The product was later renamed the "Chandler gyroscope", presumably because Chandler Mfg Co. took over rights to the gyroscope. Chandler continued to produce the toy until the company was purchased by TEDCO Inc. in 1982. The gyroscope is still produced by TEDCO today. In the first several decades of the 20th century, other inventors attempted (unsuccessfully) to use gyroscopes as the basis for early black box navigational systems by creating a stable platform from which accurate acceleration measurements could be performed (in order to bypass the need for star sightings to calculate position). Similar principles were later employed in the development of inertial navigation systems for ballistic missiles. During World War II, the gyroscope became the prime component for aircraft and anti-aircraft gun sights. After the war, the race to miniaturize gyroscopes for guided missiles and weapons navigation systems resulted in the development and manufacturing of so-called midget gyroscopes that weighed less than and had a diameter of approximately . Some of these miniaturized gyroscopes could reach a speed of 24,000 revolutions per minute in less than 10 seconds. Gyroscopes continue to be an engineering challenge. For example, the axle bearings have to be extremely accurate. A small amount of friction is deliberately introduced to the bearings, since otherwise an accuracy of better than one ten-millionth of an inch (2.5 nm) would be required. Three-axis MEMS-based gyroscopes are also used in portable electronic devices such as tablets, smartphones, and smartwatches. This adds to the 3-axis acceleration sensing ability available on previous generations of devices. Together these sensors provide 6-component motion sensing: accelerometers for X, Y, and Z movement, and gyroscopes for measuring the extent and rate of rotation in space (roll, pitch and yaw). Some devices additionally incorporate a magnetometer to provide absolute angular measurements relative to the Earth's magnetic field. Newer MEMS-based inertial measurement units incorporate up to all nine axes of sensing in a single integrated circuit package, providing inexpensive and widely available motion sensing.
Gyroscope
Wikipedia
509
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
Gyroscopic principles All spinning objects have gyroscopic properties. The main properties that an object can experience in any gyroscopic motion are rigidity in space and precession. Rigidity in space Rigidity in space describes the principle that a gyroscope remains in the fixed position on the plane in which it is spinning, unaffected by the Earth's rotation; a bicycle wheel is a familiar example. Early forms of gyroscope (not then known by the name) were used to demonstrate the principle. Precession A simple case of precession, also known as steady precession, can be described by the following moment relation: $\sum M_X = -I\,\dot{\psi}^2\sin\theta\cos\theta + I_Z\,\dot{\psi}\sin\theta\,(\dot{\psi}\cos\theta + \dot{\phi})$, where $\dot{\psi}$ represents the precession rate, $\dot{\phi}$ the spin rate, $\theta$ the nutation angle, and $I$ and $I_Z$ the moments of inertia about the respective axes. This relation is only valid when the moments along the Y and Z axes are zero. The equation can be further reduced by noting that the angular velocity along the z-axis is equal to the sum of the precession and the spin, $\omega_z = \dot{\psi}\cos\theta + \dot{\phi}$, where $\omega_z$ represents the angular velocity along the z-axis, giving $\sum M_X = -I\,\dot{\psi}^2\sin\theta\cos\theta + I_Z\,\dot{\psi}\,\omega_z\sin\theta$, or $\sum M_X = \dot{\psi}\sin\theta\,(I_Z\,\omega_z - I\,\dot{\psi}\cos\theta)$. Gyroscopic precession is torque-induced: it is the rate of change of the angular momentum that is produced by the applied torque. Precession produces counterintuitive dynamic results such as a spinning top not falling over. Precession is used in aerospace applications for sensing changes of attitude and direction. Contemporary uses Steadicam A Steadicam rig was employed during the filming of the 1983 film Return of the Jedi, in conjunction with two gyroscopes for extra stabilization, to film the background plates for the speeder bike chase. Steadicam inventor Garrett Brown operated the shot, walking through a redwood forest, running the camera at one frame per second. When projected at 24 frames per second, it gave the impression of flying through the air at perilous speeds. Heading indicator The heading indicator or directional gyro has an axis of rotation that is set horizontally, pointing north. Unlike a magnetic compass, it does not seek north. When being used in an airplane, for example, it will slowly drift away from north and will need to be reoriented periodically, using a magnetic compass as a reference. Gyrocompass
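To make the torque-induced precession described above concrete, here is a minimal Python sketch of the simple case in which the spin rate is much larger than the precession rate, so the precession rate is approximately the applied torque divided by the spin angular momentum. The rotor numbers are assumed, illustrative values, not data from the text.

```python
import math

# Torque-induced precession for a simple gyroscope: a disc spinning on a
# horizontal axle pivoted at one end, precessing under gravity.

m = 0.5            # rotor mass, kg (assumed)
r_disc = 0.05      # rotor radius, m (assumed)
d = 0.10           # distance from pivot to rotor centre of mass, m (assumed)
spin_rpm = 6000.0  # spin speed (assumed)

I = 0.5 * m * r_disc**2                 # moment of inertia of a uniform disc about its spin axis
omega_spin = spin_rpm * 2 * math.pi / 60
torque = m * 9.81 * d                   # gravitational torque about the pivot

# Valid approximation when omega_spin >> omega_precession.
omega_precession = torque / (I * omega_spin)
print(f"precession rate = {omega_precession:.3f} rad/s "
      f"({omega_precession * 60 / (2 * math.pi):.1f} rev/min)")
```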
Gyroscope
Wikipedia
460
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
Unlike a directional gyro or heading indicator, a gyrocompass seeks north. It detects the rotation of the Earth about its axis and seeks the true north, rather than the magnetic north. Gyrocompasses usually have built-in damping to prevent overshoot when re-calibrating from sudden movement. Accelerometer By determining an object's acceleration and integrating over time, the velocity of the object can be calculated. Integrating again, position can be determined. The simplest accelerometer is a weight that is free to move horizontally, which is attached to a spring and a device to measure the tension in the spring. This can be improved by introducing a counteracting force to push the weight back and to measure the force needed to prevent the weight from moving. A more complicated design consists of a gyroscope with a weight on one of the axes. The device will react to the force generated by the weight when it is accelerated, by integrating that force to produce a velocity. Variations Gyrostat A gyrostat consists of a massive flywheel concealed in a solid casing. Its behaviour on a table, or with various modes of suspension or support, serves to illustrate the curious reversal of the ordinary laws of static equilibrium due to the gyrostatic behaviour of the interior invisible flywheel when rotated rapidly. The first gyrostat was designed by Lord Kelvin to illustrate the more complicated state of motion of a spinning body when free to wander about on a horizontal plane, like a top spun on the pavement, or a bicycle on the road. Kelvin also made use of gyrostats to develop mechanical theories of the elasticity of matter and of the ether. In modern continuum mechanics there is a variety of these models, based on ideas of Lord Kelvin. They represent a specific type of Cosserat theories (suggested for the first time by Eugène Cosserat and François Cosserat), which can be used for description of artificially made smart materials as well as of other complex media. One of them, so-called Kelvin's medium, has the same equations as magnetic insulators near the state of magnetic saturation in the approximation of quasimagnetostatics.
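As a rough illustration of the double integration described above, the following Python sketch (using made-up, noiseless 1-D samples) integrates acceleration once to get velocity and again to get position. A real inertial system would also have to remove gravity and correct for sensor bias and drift.

```python
# Turning accelerometer samples into velocity and position by numerical integration.

dt = 0.01                                           # sample interval, s (assumed)
accel = [0.0] * 100 + [2.0] * 200 + [0.0] * 100     # m/s^2: rest, constant thrust, coast

velocity, position = 0.0, 0.0
for a in accel:
    velocity += a * dt         # first integration: acceleration -> velocity
    position += velocity * dt  # second integration: velocity -> position

print(f"final velocity = {velocity:.2f} m/s, distance travelled = {position:.2f} m")
```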
Gyroscope
Wikipedia
456
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
In modern times, the gyrostat concept is used in the design of attitude control systems for orbiting spacecraft and satellites. For instance, the Mir space station had three pairs of internally mounted flywheels known as gyrodynes or control moment gyroscopes. In physics, there are several systems whose dynamical equations resemble the equations of motion of a gyrostat. Examples include a solid body with a cavity filled with an inviscid, incompressible, homogeneous liquid, the static equilibrium configuration of a stressed elastic rod in elastica theory, the polarization dynamics of a light pulse propagating through a nonlinear medium, the Lorenz system in chaos theory, and the motion of an ion in a Penning trap mass spectrometer. MEMS gyroscope A microelectromechanical systems (MEMS) gyroscope is a miniaturized gyroscope found in electronic devices. It takes the idea of the Foucault pendulum and uses a vibrating element. This kind of gyroscope was first used in military applications but has since been adopted for increasing commercial use. HRG The hemispherical resonator gyroscope (HRG), also called a wine-glass gyroscope or mushroom gyro, makes use of a thin solid-state hemispherical shell, anchored by a thick stem. This shell is driven to a flexural resonance by electrostatic forces generated by electrodes which are deposited directly onto separate fused-quartz structures that surround the shell. Gyroscopic effect is obtained from the inertial property of the flexural standing waves. VSG or CVG A vibrating structure gyroscope (VSG), also called a Coriolis vibratory gyroscope (CVG), uses a resonator made of different metallic alloys. It takes a position between the low-accuracy, low-cost MEMS gyroscope and the higher-accuracy and higher-cost fiber optic gyroscope. Accuracy parameters are increased by using low-intrinsic damping materials, resonator vacuumization, and digital electronics to reduce temperature dependent drift and instability of control signals. High quality wine-glass resonators are used for precise sensors like HRG.
Gyroscope
Wikipedia
473
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
DTG A dynamically tuned gyroscope (DTG) is a rotor suspended by a universal joint with flexure pivots. The flexure spring stiffness is independent of spin rate. However, the dynamic inertia (from the gyroscopic reaction effect) from the gimbal provides negative spring stiffness proportional to the square of the spin speed (Howe and Savet, 1964; Lawrence, 1998). Therefore, at a particular speed, called the tuning speed, the two moments cancel each other, freeing the rotor from torque, a necessary condition for an ideal gyroscope. Ring laser gyroscope A ring laser gyroscope relies on the Sagnac effect to measure rotation by measuring the shifting interference pattern of a beam split into two separate beams which travel around the ring in opposite directions. When the Boeing 757-200 entered service in 1983, it was equipped with the first suitable ring laser gyroscope. This gyroscope took many years to develop, and the experimental models went through many changes before it was deemed ready for production by the engineers and managers of Honeywell and Boeing. It was an outcome of the competition with mechanical gyroscopes, which kept improving. The reason Honeywell, of all companies, chose to develop the laser gyro was that they were the only one that didn't have a successful line of mechanical gyroscopes, so they wouldn't be competing against themselves. The first problem they had to solve was that with laser gyros rotations below a certain minimum could not be detected at all, due to a problem called "lock-in", whereby the two beams act like coupled oscillators and pull each other's frequencies toward convergence and therefore zero output. The solution was to shake the gyro rapidly so that it never settled into lock-in. Paradoxically, too regular of a dithering motion produced an accumulation of short periods of lock-in when the device was at rest at the extremities of its shaking motion. This was cured by applying a random white noise to the vibration. The material of the block was also changed from quartz to a new glass ceramic Cer-Vit, made by Owens Corning, because of helium leaks. Fiber optic gyroscope
Gyroscope
Wikipedia
475
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
A fiber optic gyroscope also uses the interference of light to detect mechanical rotation. The two halves of the split beam travel in opposite directions in a coil of fiber optic cable as long as 5 km. Like the ring laser gyroscope, it makes use of the Sagnac effect. London moment A London moment gyroscope relies on the quantum-mechanical phenomenon whereby a spinning superconductor generates a magnetic field whose axis lines up exactly with the spin axis of the gyroscopic rotor. A magnetometer determines the orientation of the generated field, which is interpolated to determine the axis of rotation. Gyroscopes of this type can be extremely accurate and stable. For example, those used in the Gravity Probe B experiment measured changes in gyroscope spin axis orientation to better than 0.5 milliarcseconds (1.4 × 10⁻⁷ degrees, or about 2.4 × 10⁻⁹ radians) over a one-year period. This is equivalent to an angular separation the width of a human hair viewed from away. The GP-B gyro consists of a nearly-perfect spherical rotating mass made of fused quartz, which provides a dielectric support for a thin layer of niobium superconducting material. To eliminate friction found in conventional bearings, the rotor assembly is centered by the electric field from six electrodes. After the initial spin-up by a jet of helium which brings the rotor to 4,000 RPM, the polished gyroscope housing is evacuated to an ultra-high vacuum to further reduce drag on the rotor. Provided the suspension electronics remain powered, the extreme rotational symmetry, lack of friction, and low drag will allow the angular momentum of the rotor to keep it spinning for about 15,000 years. A sensitive DC SQUID that can discriminate changes as small as one flux quantum, or about 2 × 10⁻¹⁵ Wb, is used to monitor the gyroscope. A precession, or tilt, in the orientation of the rotor causes the London moment magnetic field to shift relative to the housing. The moving field passes through a superconducting pickup loop fixed to the housing, inducing a small electric current. The current produces a voltage across a shunt resistance, which is resolved to spherical coordinates by a microprocessor. The system is designed to minimize Lorentz torque on the rotor. Other examples Helicopters
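The Sagnac effect underlying both ring laser and fiber optic gyroscopes can be sketched with the standard relation Δφ = 8πNAΩ/(λc) for a coil of N turns, each enclosing area A, rotating at angular rate Ω. The numbers below (coil size, fiber length, source wavelength) are illustrative assumptions, not parameters of any particular instrument.

```python
import math

c = 2.998e8            # speed of light, m/s
wavelength = 1.55e-6   # typical telecom-band source, m (assumed)
coil_diameter = 0.10   # m (assumed)
fiber_length = 1000.0  # m of fiber wound into the coil (assumed)

turns = fiber_length / (math.pi * coil_diameter)  # number of loops
area = math.pi * coil_diameter**2 / 4             # area enclosed per loop, m^2
omega = math.radians(15.0) / 3600.0               # Earth's rotation rate, ~15 deg/h in rad/s

delta_phi = 8 * math.pi * turns * area * omega / (wavelength * c)
print(f"Sagnac phase shift = {delta_phi:.2e} rad for Earth-rate rotation")
```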
Gyroscope
Wikipedia
478
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
The main rotor of a helicopter acts like a gyroscope. Its motion is influenced by the principle of gyroscopic precession, which is the concept that a force applied to a spinning object will have a maximum reaction approximately 90 degrees later. The reaction may differ from 90 degrees when other stronger forces are in play. To change direction, helicopters must adjust the pitch angle and the angle of attack. Gyro X The Gyro X is a prototype vehicle created by Alex Tremulis and Thomas Summers in 1967. The car utilized gyroscopic precession to drive on two wheels. An assembly consisting of a flywheel mounted in a gimbal housing under the hood of the vehicle acted as a large gyroscope. The flywheel was rotated by hydraulic pumps, creating a gyroscopic effect on the vehicle. A precessional ram was responsible for rotating the gyroscope to change the direction of the precessional force to counteract any forces causing vehicle imbalance. The one-of-a-kind prototype is now at the Lane Motor Museum in Nashville, Tennessee. Consumer electronics In addition to being used in compasses, aircraft, computer pointing devices, etc., gyroscopes have been introduced into consumer electronics. Since gyroscopes allow the calculation of orientation and rotation, designers have incorporated them into modern technology. The integration of the gyroscope has allowed for more accurate recognition of movement within 3D space than the lone accelerometer used in a number of earlier smartphones. Gyroscopes in consumer electronics are frequently combined with accelerometers for more robust direction- and motion-sensing. Examples of such applications include smartphones such as the Samsung Galaxy Note 4, HTC Titan, Nexus 5, iPhone 5s, Nokia 808 PureView and Sony Xperia, game console peripherals such as the PlayStation 3 controller and the Wii Remote, and virtual reality headsets such as the Oculus Rift. Some features of Android phones, such as PhotoSphere, 360 Camera, and VR applications, do not work without a gyroscope sensor in the phone. Nintendo has integrated a gyroscope into the Wii console's Wii Remote controller by means of an additional piece of hardware called "Wii MotionPlus". It is also included in the 3DS, Wii U GamePad, and Nintendo Switch Joy-Con and Pro controllers, which detect movement when turning and shaking.
Gyroscope
Wikipedia
500
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
Cruise ships use gyroscopes to level motion-sensitive devices such as self-leveling pool tables. An electrically powered flywheel gyroscope inserted in a bicycle wheel is sold as an alternative to training wheels.
Gyroscope
Wikipedia
46
44125
https://en.wikipedia.org/wiki/Gyroscope
Technology
Navigation
null
The metric system is a system of measurement that standardizes a set of base units and a nomenclature for describing relatively large and small quantities via decimal-based multiplicative unit prefixes. Though the rules governing the metric system have changed over time, the modern definition, the International System of Units (SI), defines the metric prefixes and seven base units: metre (m), kilogram (kg), second (s), ampere (A), kelvin (K), mole (mol), and candela (cd). An SI derived unit is a named combination of base units such as hertz (cycles per second), newton (kg⋅m/s²), and tesla (1 kg⋅s⁻²⋅A⁻¹) and, in the case of Celsius, a scale shifted from the kelvin. Certain units have been officially accepted for use with the SI. Some of these are decimalised, like the litre and electronvolt, and are considered "metric". Others, like the astronomical unit, are not. Ancient non-metric but SI-accepted multiples of time, the minute and hour, are base 60 (sexagesimal). Similarly, the angular measure degree and its submultiples, the arcminute and arcsecond, are also sexagesimal and SI-accepted. The SI system derives from the older metre, kilogram, second (MKS) system of units, though the definition of the base units has evolved over time. Today, all base units are defined in terms of physical constants rather than by exemplar physical objects, as they were in the past. Other metric system variants include the centimetre–gram–second system of units, the metre–tonne–second system of units, and the gravitational metric system. Each includes metric units that are not part of the SI. Some of these systems are still used in limited contexts. Adoption The SI system has been adopted as the official system of weights and measures by most countries in the world. A notable outlier is the United States (US). Although the metric system is used there in some contexts, the US has resisted full adoption, continuing to use "a conglomeration of basically incoherent measurement systems". Adopting the metric system is known as metrication. Multiplicative prefixes
Metric system
Wikipedia
466
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
In the SI system and generally in older metric systems, multiples and fractions of a unit can be described via a prefix on a unit name that implies a decimal (base-10) multiplicative factor. The only exceptions are for the SI-accepted units of time (minute and hour) and angle (degree, arcminute, arcsecond), which, based on ancient convention, use base-60 multipliers. The prefix kilo, for example, implies a factor of 1000 (10³), and the prefix milli implies a factor of 1/1000 (10⁻³). Thus, a kilometre is a thousand metres, and a milligram is one thousandth of a gram. These relations can be written symbolically as 1 km = 1000 m and 1 mg = 0.001 g. Base units The decimalised system is based on the metre, which had been introduced in France in the 1790s. The historical development of these systems culminated in the definition of the International System of Units (SI) in the mid-20th century, under the oversight of an international standards body. The historical evolution of metric systems has resulted in the recognition of several principles. A set of independent dimensions of nature is selected, in terms of which all natural quantities can be expressed, called base quantities. For each of these dimensions, a representative quantity is defined as a base unit of measure. The definition of base units has increasingly been realised in terms of fundamental natural phenomena, in preference to copies of physical artefacts. A unit derived from the base units is used for expressing quantities of dimensions that can be derived from the base dimensions of the system—e.g., the square metre is the derived unit for area, which is derived from length. These derived units are coherent, which means that they involve only products of powers of the base units, without any further factors. For any given quantity whose unit has a name and symbol, an extended set of smaller and larger units is defined that are related by factors of powers of ten. The unit of time should be the second; the unit of length should be either the metre or a decimal multiple of it; and the unit of mass should be the gram or a decimal multiple of it.
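A minimal sketch of how the decimal prefixes described above work in practice: because each prefix is just a power of ten, converting between prefixed forms of the same unit is a single multiplication. The prefix table below is abbreviated and the function name is illustrative.

```python
# Each SI prefix corresponds to a power of ten, so conversion between prefixed
# forms of the same base unit is multiplication by 10**(difference of exponents).

PREFIX_EXPONENT = {
    "kilo": 3, "hecto": 2, "deca": 1, "": 0,
    "deci": -1, "centi": -2, "milli": -3, "micro": -6,
}

def convert(value, from_prefix, to_prefix):
    """Convert a quantity between two prefixed forms of the same base unit."""
    return value * 10 ** (PREFIX_EXPONENT[from_prefix] - PREFIX_EXPONENT[to_prefix])

print(convert(2.5, "kilo", ""))    # 2.5 km -> 2500.0 m
print(convert(750, "milli", ""))   # 750 mg -> 0.75 g
print(convert(1, "", "centi"))     # 1 m    -> 100.0 cm
```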
Metric system
Wikipedia
440
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
Metric systems have evolved since the 1790s, as science and technology have evolved, toward providing a single universal measuring system. Before and in addition to the SI, other metric systems include: the MKS system of units and the MKSA systems, which are the direct forerunners of the SI; the centimetre–gram–second (CGS) system and its subtypes, the CGS electrostatic (cgs-esu) system, the CGS electromagnetic (cgs-emu) system, and their still-popular blend, the Gaussian system; the metre–tonne–second (MTS) system; and the gravitational metric systems, which can be based on either the metre or the centimetre, and either the gram, gram-force, kilogram or kilogram-force. Attributes Ease of learning and use The metric system is intended to be easy to use and widely applicable, including units based on the natural world, decimal ratios, prefixes for multiples and sub-multiples, and a structure of base and derived units. It is a coherent system with derived units built from base units using logical rather than empirical relationships and with multiples and submultiples of both units based on decimal factors and identified by a common set of prefixes. Extensibility The metric system is extensible, since the governing body reviews, modifies and extends it as needs arise. For example, the katal, a derived unit for catalytic activity equivalent to one mole per second (1 mol/s), was added in 1999. Realisation The base units used in a measurement system must be realisable. To that end, the definition of each SI base unit is accompanied by a mise en pratique (practical realisation) that describes at least one way that the unit can be measured. Where possible, definitions of the base units were developed so that any laboratory equipped with proper instruments would be able to realise a standard without reliance on an artefact held by another country. In practice, such realisation is done under the auspices of a mutual acceptance arrangement.
Metric system
Wikipedia
429
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
In 1791, the commission originally defined the metre based on the size of the Earth, equal to one ten-millionth of the distance from the equator to the North Pole. In the SI, the metre is now defined as exactly 1/299,792,458 of the distance that light travels in a second. The metre can be realised by measuring the length that a light wave travels in a given time, or equivalently by measuring the wavelength of light of a known frequency. The kilogram was originally defined as the mass of one cubic decimetre of water at 4 °C, later standardised as the mass of a man-made artefact of platinum–iridium held in a laboratory in France, which was used until a new definition was introduced in May 2019. Replicas made in 1879 at the time of the artefact's fabrication and distributed to signatories of the Metre Convention serve as de facto standards of mass in those countries. Additional replicas have been fabricated since as additional countries have joined the convention. The replicas were subject to periodic validation by comparison to the original, called the IPK. It became apparent that either the IPK or the replicas or both were deteriorating and were no longer comparable: they had diverged by 50 μg since fabrication, so figuratively, the accuracy of the kilogram was no better than 5 parts in a hundred million, or a relative accuracy of 5 × 10⁻⁸. The revision of the SI replaced the IPK with an exact definition of the Planck constant as expressed in SI units, which defines the kilogram in terms of fundamental constants. Base and derived unit structure A base quantity is one of a conventionally chosen subset of physical quantities, where no quantity in the subset can be expressed in terms of the others. A base unit is a unit adopted for expressing a base quantity. A derived unit is used for expressing any other quantity, and is a product of powers of base units. For example, in the modern metric system, length has the unit metre and time has the unit second, and speed has the derived unit metre per second. Density, or mass per unit volume, has the unit kilogram per cubic metre.
Metric system
Wikipedia
431
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
Decimal ratios A significant characteristic of the metric system is its use of decimal multiples, that is, powers of 10. For example, a length that is significantly longer or shorter than 1 metre can be represented in units that are a power of ten times the metre, such as the kilometre (1000 metres). This differs from many older systems, in which the ratios of different units varied. For example, 12 inches is one foot, but the larger unit in the same system, the mile, is not a power-of-12 multiple of the foot: it is 5,280 feet, which is hard for many to remember. In the early days, multipliers that were positive powers of ten were given Greek-derived prefixes such as kilo- and mega-, and those that were negative powers of ten were given Latin-derived prefixes such as centi- and milli-. However, the 1935 extensions to the prefix system did not follow this convention: the prefixes nano- and micro-, for example, have Greek roots. During the 19th century the prefix myria-, derived from the Greek word μύριοι (mýrioi), was used as a multiplier for 10,000. When applying prefixes to derived units of area and volume that are expressed in terms of units of length squared or cubed, the square and cube operators are applied to the unit of length including the prefix: a square kilometre, for example, is (1000 m)² = 10⁶ m², not 1000 m². For the most part, the metric prefixes are used uniformly for SI base, derived and accepted units. A notable exception is that for a large measure of seconds, the non-SI units of minute, hour and day are customary instead. Units of duration longer than a day are problematic since both the month and the year have varying numbers of days. Sub-second measures are often indicated via submultiple prefixes, for example, the millisecond. Coherence Each variant of the metric system has a degree of coherence—the derived units are directly related to the base units without the need for intermediate conversion factors. For example, in a coherent system the units of force, energy, and power are chosen so that the equations hold without the introduction of unit conversion factors. Once a set of coherent units has been defined, other relationships in physics that use this set of units will automatically be true. Therefore, Einstein's mass–energy equation, E = mc², does not require extraneous constants when expressed in coherent units.
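As a small illustration of coherence, evaluating E = mc² with mass in kilograms and the speed of light in metres per second yields energy directly in joules, with no conversion factor. The mass value below is an arbitrary example.

```python
# Coherent SI units: kg and m/s combine to give J without any extra constant.

c = 299_792_458   # speed of light, m/s (exact in the SI)
mass = 0.001      # 1 gram, expressed in the base unit (kg)

energy_joules = mass * c**2
print(f"E = {energy_joules:.3e} J")   # roughly 9.0e13 J
```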
Metric system
Wikipedia
479
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
The CGS system had two units of energy, the erg that was related to mechanics and the calorie that was related to thermal energy; so only one of them (the erg) could bear a coherent relationship to the base units. Coherence was a design aim of SI, which resulted in only one unit of energy being defined – the joule. Rationalisation Maxwell's equations of electromagnetism contained a factor of 4π relating to steradians, representative of the fact that electric charges and magnetic fields may be considered to emanate from a point and propagate equally in all directions, i.e. spherically. This factor made equations more awkward than necessary, and so Oliver Heaviside suggested adjusting the system of units to remove it. Everyday notions The basic units of the metric system have always represented commonplace quantities or relationships in nature, even with modern refinements of definition and methodology. In cases where laboratory precision may not be required or available, or where approximations are good enough, the commonplace notions may suffice. Time The second is readily determined from the Earth's rotation period. Unlike other units, time multiples are not decimal. A second is 1/60 of a minute, which is 1/60 of an hour, which is 1/24 of a day, so a second is 1/86,400 of a day. Length The length of the equator is close to 40,000 km (more precisely, 40,075 km). In fact, the dimensions of our planet were used by the French Academy in the original definition of the metre. A dining tabletop is typically about 0.75 metres high. A very tall human is about 2 metres tall. Mass A 1-euro coin weighs 7.5 g; a Sacagawea US 1-dollar coin weighs 8.1 g; a UK 50-pence coin weighs 8.0 g. Temperature In everyday use, Celsius is more commonly used than kelvin; however, a temperature difference of one kelvin is the same as one degree Celsius, which was defined as 1/100 of the temperature differential between the freezing and boiling points of water at sea level. A temperature in kelvins is the temperature in Celsius plus about 273. Human body temperature is about 37 °C or 310 K. Length, mass, volume relationship The mass of a litre of cold water is 1 kilogram. 1 millilitre of water occupies 1 cubic centimetre and weighs 1 gram.
Metric system
Wikipedia
482
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
Candela and Watt relationship The candela is about the luminous intensity of a moderately bright candle, or 1 candle power. A 60 watt tungsten-filament incandescent light bulb emits a luminous flux of about 800 lumens; if this is radiated equally in all directions (i.e. over 4π steradians), it corresponds to about 800/(4π) ≈ 64 candela. Watt, Volt and Ampere relationship A 60 W incandescent light bulb consumes 0.5 A at 120 V (US mains voltage). A 60 W bulb rated at 230 V (European mains voltage) consumes 0.26 A at this voltage. This is evident from the formula P = I × V. Mole and mass relationship A mole of a substance has a mass that is its molecular mass expressed in units of grams. The mass of a mole of carbon is 12.0 g, and the mass of a mole of table salt is 58.4 g. Since all gases have the same volume per mole at a given temperature and pressure far from their points of liquefaction and solidification (see Perfect gas), and air is about one-fifth oxygen (molecular mass 32) and four-fifths nitrogen (molecular mass 28), the density of any near-perfect gas relative to air can be obtained to a good approximation by dividing its molecular mass by 29 (because 0.2 × 32 + 0.8 × 28 ≈ 29). For example, carbon monoxide (molecular mass 28) has almost the same density as air. History The French Revolution (1789–99) enabled France to reform its many outdated systems of various local weights and measures. In 1790, Charles Maurice de Talleyrand-Périgord proposed a new system based on natural units to the French National Assembly, aiming for global adoption. With the United Kingdom not responding to a request to collaborate in the development of the system, the French Academy of Sciences established a commission to implement this new standard alone, and in 1799, the new system was launched in France. A number of different metric systems have been developed, all using the Mètre des Archives and Kilogramme des Archives (or their descendants) as their base units, but differing in the definitions of the various derived units.
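The rule of thumb for gas density relative to air can be sketched in a few lines of Python; the gases listed are examples, and the factor of about 29 is the approximate mean molecular mass of air derived in the text.

```python
# Relative density of a near-ideal gas versus air at the same temperature and
# pressure: roughly its molecular mass divided by the mean molecular mass of air.

MEAN_AIR_MOLECULAR_MASS = 0.2 * 32 + 0.8 * 28   # ~28.8, i.e. about 29 (g/mol)

def relative_density(molecular_mass):
    return molecular_mass / MEAN_AIR_MOLECULAR_MASS

for gas, m in [("carbon monoxide", 28), ("carbon dioxide", 44), ("helium", 4)]:
    print(f"{gas}: {relative_density(m):.2f} x density of air")
```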
Metric system
Wikipedia
429
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
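The three relationships above can be checked with a short Python sketch. This is hedged and illustrative: it assumes only the figures quoted in the text (an 800 lm bulb, 60 W power, the two mains voltages, and air taken as roughly one-fifth oxygen and four-fifths nitrogen).

```python
import math

# Candela from lumens, assuming emission over the full sphere (4*pi steradians)
luminous_flux_lm = 800
intensity_cd = luminous_flux_lm / (4 * math.pi)
print(round(intensity_cd))                   # ~64 cd

# Current from power and voltage: P = V * I, so I = P / V
def current_amps(power_w, voltage_v):
    return power_w / voltage_v

print(round(current_amps(60, 120), 2))       # 0.5 A  (US mains)
print(round(current_amps(60, 230), 2))       # 0.26 A (European mains)

# Relative density of a near-perfect gas vs air (mean molar mass ~29 g/mol)
mean_molar_mass_air = 0.2 * 32 + 0.8 * 28    # ~28.8, i.e. about 29
print(round(28 / mean_molar_mass_air, 2))    # carbon monoxide ~ air density
```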
19th century In 1832, Gauss used the astronomical second as a base unit in defining the gravitation of the Earth, and together with the milligram and millimetre, this became the first system of mechanical units. He showed that the strength of a magnet could also be quantified in terms of these units, by measuring the oscillations of a magnetised needle and finding the quantity of "magnetic fluid" that produces an acceleration of one unit when applied to a unit mass. The centimetre–gram–second system of units (CGS) was the first coherent metric system, having been developed in the 1860s and promoted by Maxwell and Thomson. In 1874, this system was formally promoted by the British Association for the Advancement of Science (BAAS). The system's characteristics are that density is expressed in grams per cubic centimetre, force in dynes and mechanical energy in ergs. Thermal energy was defined in calories, one calorie being the energy required to raise the temperature of one gram of water from 15.5 °C to 16.5 °C. The meeting also recognised two sets of units for electrical and magnetic properties – the electrostatic set of units and the electromagnetic set of units. The CGS units of electricity were cumbersome to work with. This was remedied at the 1893 International Electrical Congress held in Chicago by defining the "international" ampere and ohm using definitions based on the metre, kilogram and second, in the International System of Electrical and Magnetic Units. During the same period in which the CGS system was being extended to include electromagnetism, other systems were developed, distinguished by their choice of coherent base units; among them was the Practical System of Electric Units, or QES (quad–eleventhgram–second) system. Here, the base units are the quad, equal to 10⁷ m (approximately a quadrant of the Earth's circumference), the eleventhgram, equal to 10⁻¹¹ g, and the second. These were chosen so that the corresponding electrical units of potential difference, current and resistance had a convenient magnitude. 20th century In 1901, Giovanni Giorgi showed that by adding an electrical unit as a fourth base unit, the various anomalies in electromagnetic systems could be resolved. The metre–kilogram–second–coulomb (MKSC) and metre–kilogram–second–ampere (MKSA) systems are examples of such systems.
Metric system
Wikipedia
494
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
The metre–tonne–second system of units (MTS) was based on the metre, tonne and second – the unit of force was the sthène and the unit of pressure was the pièze. It was invented in France for industrial use and from 1933 to 1955 was used both in France and in the Soviet Union. Gravitational metric systems use the kilogram-force (kilopond) as a base unit of force, with mass measured in a unit known as the hyl, Technische Masseneinheit (TME), mug or metric slug. Although the CGPM passed a resolution in 1901 defining the standard value of acceleration due to gravity to be 980.665 cm/s², gravitational units are not part of the International System of Units (SI). Current The International System of Units is the modern metric system. It is based on the metre–kilogram–second–ampere (MKSA) system of units from early in the 20th century. It also includes numerous coherent derived units for common quantities like power (watt) and luminous flux (lumen). Electrical units were taken from the International system then in use. Other units, like those for energy (joule), were modelled on those from the older CGS system, but scaled to be coherent with MKSA units. Two additional base units – the kelvin, which is equivalent to the degree Celsius for changes in thermodynamic temperature but set so that 0 K is absolute zero, and the candela, which is roughly equivalent to the international candle unit of illumination – were introduced. Later, another base unit, the mole, a unit of amount of substance equivalent to the Avogadro number of specified molecules, was added along with several other derived units. The system was promulgated by the General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM) in 1960. At that time, the metre was redefined in terms of the wavelength of a spectral line of the krypton-86 atom (krypton-86 being a stable isotope of an inert gas that occurs naturally in trace amounts), and the standard metre artefact from 1889 was retired.
Metric system
Wikipedia
464
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
Today, the International System of Units consists of 7 base units and innumerable coherent derived units, including 22 with special names. The last new derived unit, the katal for catalytic activity, was added in 1999. All the base units except the second are now defined in terms of exact and invariant constants of physics or mathematics, barring those parts of their definitions which are dependent on the second itself. As a consequence, the speed of light has now become an exactly defined constant, and defines the metre as 1/299,792,458 of the distance light travels in a second (see the sketch after this entry). The kilogram was defined by a cylinder of platinum-iridium alloy until a new definition in terms of natural physical constants was adopted in 2019. As of 2022, the range of decimal prefixes has been extended to those for 10³⁰ (quetta–) and 10⁻³⁰ (quecto–).
Metric system
Wikipedia
176
44142
https://en.wikipedia.org/wiki/Metric%20system
Physical sciences
Measurement systems
null
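A minimal, purely illustrative sketch of the light-based metre definition mentioned above (the value of c is exact by definition):

```python
# The metre as 1/299,792,458 of the distance light travels in one second.
c = 299_792_458                      # speed of light in metres per second
distance_in_one_second = c * 1       # light covers 299,792,458 m in 1 s
metre = distance_in_one_second / 299_792_458
print(metre)                         # 1.0
```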
In physics, a conservative force is a force with the property that the total work done by the force in moving a particle between two points is independent of the path taken. Equivalently, if a particle travels in a closed loop, the total work done (the sum of the force acting along the path multiplied by the displacement) by a conservative force is zero. A conservative force depends only on the position of the object. If a force is conservative, it is possible to assign a numerical value for the potential at any point and conversely, when an object moves from one location to another, the force changes the potential energy of the object by an amount that does not depend on the path taken, contributing to the mechanical energy and the overall conservation of energy. If the force is not conservative, then defining a scalar potential is not possible, because taking different paths would lead to conflicting potential differences between the start and end points. Gravitational force is an example of a conservative force, while frictional force is an example of a non-conservative force. Other examples of conservative forces are: the force of an elastic spring, the electrostatic force between two electric charges, and the magnetic force between two magnetic poles. The last two forces are called central forces as they act along the line joining the centres of two charged/magnetized bodies. A central force is conservative if and only if it is spherically symmetric. For conservative forces, F = −∇U(r), where F is the conservative force, U is the potential energy, and r is the position. Informal definition Informally, a conservative force can be thought of as a force that conserves mechanical energy. Suppose a particle starts at point A, and there is a force F acting on it. Then the particle is moved around by other forces, and eventually ends up at A again. Though the particle may still be moving, at that instant when it passes point A again, it has traveled a closed path. If the net work done by F at this point is 0, then F passes the closed path test. Any force that passes the closed path test for all possible closed paths is classified as a conservative force. The gravitational force, spring force, magnetic force (according to some definitions, see below) and electric force (at least in a time-independent magnetic field, see Faraday's law of induction for details) are examples of conservative forces, while friction and air drag are classical examples of non-conservative forces. (A numerical sketch of the path-independence property follows this entry.)
Conservative force
Wikipedia
481
44158
https://en.wikipedia.org/wiki/Conservative%20force
Physical sciences
Classical mechanics
Physics
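To make the path-independence property concrete, here is a small numerical sketch (not from the article; it assumes NumPy, and the endpoints, mass and sampling density are arbitrary choices). It integrates the work done by uniform gravity along a straight path and along a wiggly detour between the same endpoints and finds the same value, equal to minus the change in potential energy m·g·Δy.

```python
import numpy as np

m, g = 1.0, 9.81                         # illustrative mass and gravity

def force(p):
    """Uniform gravity near the ground: F = (0, -m*g)."""
    return np.array([0.0, -m * g])

def work(path):
    """Approximate the line integral of F . dr along a sampled path."""
    w = 0.0
    for p0, p1 in zip(path[:-1], path[1:]):
        w += force(0.5 * (p0 + p1)) @ (p1 - p0)
    return w

A, B = np.array([0.0, 0.0]), np.array([3.0, 2.0])
t = np.linspace(0.0, 1.0, 2001)

straight = np.outer(1 - t, A) + np.outer(t, B)          # straight line A -> B
detour = straight + np.column_stack(
    [np.zeros_like(t), 0.5 * np.sin(4 * np.pi * t)])    # same endpoints

print(round(work(straight), 3), round(work(detour), 3)) # both ~ -19.62 J
```

Both integrals agree with −m·g·(y_B − y_A), as the path-independence argument predicts.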
For non-conservative forces, the mechanical energy that is lost (not conserved) has to go somewhere else, by conservation of energy. Usually the energy is turned into heat, for example the heat generated by friction. In addition to heat, friction also often produces some sound energy. The water drag on a moving boat converts the boat's mechanical energy into not only heat and sound energy, but also wave energy at the edges of its wake. These and other energy losses are irreversible because of the second law of thermodynamics. Path independence A direct consequence of the closed path test is that the work done by a conservative force on a particle moving between any two points does not depend on the path taken by the particle. This is illustrated in the figure to the right: The work done by the gravitational force on an object depends only on its change in height because the gravitational force is conservative. The work done by a conservative force is equal to the negative of the change in potential energy during that process. For a proof, imagine two paths 1 and 2, both going from point A to point B. The variation of energy for the particle, taking path 1 from A to B and then path 2 backwards from B to A, is 0; thus, the work is the same in path 1 and 2, i.e., the work is independent of the path followed, as long as it goes from A to B. For example, if a child slides down a frictionless slide, the work done by the gravitational force on the child from the start of the slide to the end is independent of the shape of the slide; it only depends on the vertical displacement of the child. Mathematical description A force field F, defined everywhere in space (or within a simply-connected volume of space), is called a conservative force or conservative vector field if it meets any of these three equivalent conditions: (1) the curl of F is the zero vector, ∇ × F = 0, which in two dimensions reduces to ∂F_y/∂x − ∂F_x/∂y = 0; (2) there is zero net work (W) done by the force when moving a particle through a trajectory that starts and ends in the same place, W = ∮ F · dr = 0; (3) the force can be written as the negative gradient of a potential Φ, i.e. F = −∇Φ. The term conservative force comes from the fact that when a conservative force exists, it conserves mechanical energy. The most familiar conservative forces are gravity, the electric force (in a time-independent magnetic field, see Faraday's law), and spring force. (A symbolic check of the curl condition follows this entry.)
Conservative force
Wikipedia
495
44158
https://en.wikipedia.org/wiki/Conservative%20force
Physical sciences
Classical mechanics
Physics
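Condition 1 can be checked symbolically. The sketch below (assuming SymPy is available; the example fields are illustrative, not the article's) computes the two-dimensional scalar curl ∂F_y/∂x − ∂F_x/∂y for a spring-like force, which is conservative, and for a rotational field, which is not.

```python
import sympy as sp

x, y = sp.symbols('x y')
k = sp.symbols('k', positive=True)

# Spring-like force F = -k*(x, y), the negative gradient of k*(x**2 + y**2)/2
Fx, Fy = -k * x, -k * y
print(sp.simplify(sp.diff(Fy, x) - sp.diff(Fx, y)))   # 0  -> conservative

# Rotational field G = (-y, x): nonzero curl, so not conservative
Gx, Gy = -y, x
print(sp.diff(Gy, x) - sp.diff(Gx, y))                # 2  -> not conservative
```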
Many forces (particularly those that depend on velocity) are not force fields. In these cases, the above three conditions are not mathematically equivalent. For example, the magnetic force satisfies condition 2 (since the work done by a magnetic field on a charged particle is always zero), but does not satisfy condition 3, and condition 1 is not even defined (the force is not a vector field, so one cannot evaluate its curl). Accordingly, some authors classify the magnetic force as conservative, while others do not. The magnetic force is an unusual case; most velocity-dependent forces, such as friction, do not satisfy any of the three conditions, and therefore are unambiguously nonconservative. (A numerical illustration of the magnetic-force case follows this entry.) Non-conservative force Despite conservation of total energy, non-conservative forces can arise in classical physics due to neglected degrees of freedom or from time-dependent potentials. Many non-conservative forces may be perceived as macroscopic effects of small-scale conservative forces. For instance, friction may be treated without violating conservation of energy by considering the motion of individual molecules; however, that means every molecule's motion must be considered rather than handling it through statistical methods. For macroscopic systems the non-conservative approximation is far easier to deal with than millions of degrees of freedom. Examples of non-conservative forces are friction and non-elastic material stress. Friction has the effect of transferring some of the energy from the large-scale motion of the bodies to small-scale movements in their interior, and therefore appears non-conservative on a large scale. General relativity is non-conservative, as seen in the anomalous precession of Mercury's orbit. However, general relativity does conserve a stress–energy–momentum pseudotensor.
Conservative force
Wikipedia
352
44158
https://en.wikipedia.org/wiki/Conservative%20force
Physical sciences
Classical mechanics
Physics
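A short numerical illustration (arbitrary values, assuming NumPy) of why the magnetic force satisfies condition 2: F = q·v × B is always perpendicular to the velocity, so its instantaneous power F · v is zero and it does no work on the charge.

```python
import numpy as np

q = 1.6e-19                              # charge in coulombs (illustrative)
v = np.array([1.0e5, 2.0e5, -3.0e4])     # velocity in m/s (illustrative)
B = np.array([0.0, 0.0, 1.5])            # magnetic field in tesla (illustrative)

F = q * np.cross(v, B)                   # magnetic part of the Lorentz force
print(F @ v)                             # 0.0 (up to floating-point rounding)
```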
Ozone depletion consists of two related events observed since the late 1970s: a steady lowering of about four percent in the total amount of ozone in Earth's atmosphere, and a much larger springtime decrease in stratospheric ozone (the ozone layer) around Earth's polar regions. The latter phenomenon is referred to as the ozone hole. There are also springtime polar tropospheric ozone depletion events in addition to these stratospheric events. The main causes of ozone depletion and the ozone hole are manufactured chemicals, especially manufactured halocarbon refrigerants, solvents, propellants, and foam-blowing agents (chlorofluorocarbons (CFCs), HCFCs, halons), referred to as ozone-depleting substances (ODS). These compounds are transported into the stratosphere by turbulent mixing after being emitted from the surface, mixing much faster than the molecules can settle. Once in the stratosphere, they release atoms from the halogen group through photodissociation, which catalyze the breakdown of ozone (O3) into oxygen (O2). Both types of ozone depletion were observed to increase as emissions of halocarbons increased. Ozone depletion and the ozone hole have generated worldwide concern over increased cancer risks and other negative effects. The ozone layer prevents harmful wavelengths of ultraviolet (UVB) light from passing through the Earth's atmosphere. These wavelengths cause skin cancer, sunburn, permanent blindness, and cataracts, which were projected to increase dramatically as a result of thinning ozone, as well as harming plants and animals. These concerns led to the adoption of the Montreal Protocol in 1987, which bans the production of CFCs, halons, and other ozone-depleting chemicals. Over time, scientists have developed new refrigerants with lower global warming potential (GWP) to replace older ones. For example, in new automobiles, R-1234yf systems are now common, being chosen over refrigerants with much higher GWP such as R-134a and R-12.
Ozone depletion
Wikipedia
442
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
The ban came into effect in 1989. Ozone levels stabilized by the mid-1990s and began to recover in the 2000s, as the shifting of the jet stream in the southern hemisphere towards the south pole has stopped and might even be reversing. Recovery was projected to continue over the next century, with the ozone hole expected to reach pre-1980 levels by around 2075. In 2019, NASA reported that the ozone hole was the smallest it had been since it was first discovered in 1982. The UN now projects that under the current regulations the ozone layer will completely regenerate by 2045. The Montreal Protocol is considered the most successful international environmental agreement to date. Ozone cycle overview Three forms (or allotropes) of oxygen are involved in the ozone-oxygen cycle: oxygen atoms (O or atomic oxygen), oxygen gas (O2 or diatomic oxygen), and ozone gas (O3 or triatomic oxygen). Ozone is formed in the stratosphere when oxygen gas molecules photodissociate after absorbing UVC photons. This converts a single O2 into two atomic oxygen radicals. The atomic oxygen radicals then combine with separate O2 molecules to create two O3 molecules. These ozone molecules absorb UVB light, following which ozone splits into a molecule of O2 and an oxygen atom. The oxygen atom then joins up with an oxygen molecule to regenerate ozone. This is a continuing process that terminates when an oxygen atom recombines with an ozone molecule to make two O2 molecules: O + O3 → 2 O2. It is worth noting that ozone is the only atmospheric gas that absorbs UVB light. The total amount of ozone in the stratosphere is determined by a balance between photochemical production and recombination. Ozone can be destroyed by a number of free radical catalysts; the most important are the hydroxyl radical (OH·), nitric oxide radical (NO·), chlorine radical (Cl·) and bromine radical (Br·). The dot is a notation to indicate that each species has an unpaired electron and is thus extremely reactive. The effectiveness of different halogens and pseudohalogens as catalysts for ozone destruction varies, in part due to differing routes to regenerate the original radical after reacting with ozone or dioxygen.
Ozone depletion
Wikipedia
457
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
While all of the relevant radicals have both natural and man-made sources, human activity has impacted some more than others. As of 2020, most of the OH· and NO· in the stratosphere is naturally occurring, but human activity has drastically increased the levels of chlorine and bromine. These elements are found in stable organic compounds, especially chlorofluorocarbons, which can travel to the stratosphere without being destroyed in the troposphere due to their low reactivity. Once in the stratosphere, the Cl and Br atoms are released from the parent compounds by the action of ultraviolet light, e.g. CFCl3 + electromagnetic radiation → Cl· + ·CFCl2. Ozone is a highly reactive molecule that easily reduces to the more stable oxygen form with the assistance of a catalyst. Cl and Br atoms destroy ozone molecules through a variety of catalytic cycles. In the simplest example of such a cycle, a chlorine atom reacts with an ozone molecule (O3), taking an oxygen atom to form chlorine monoxide (ClO) and leaving an oxygen molecule (O2). The ClO can react with a second molecule of ozone, releasing the chlorine atom and yielding two molecules of oxygen. The chemical shorthand for these gas-phase reactions is: Cl· + O3 → ClO + O2 (a chlorine atom removes an oxygen atom from an ozone molecule to make a ClO molecule); ClO + O3 → Cl· + 2 O2 (this ClO can also remove an oxygen atom from another ozone molecule, and the chlorine is then free to repeat this two-step cycle). The overall effect is a decrease in the amount of ozone, though the rate of these processes can be decreased by the effects of null cycles. More complicated mechanisms have also been discovered that lead to ozone destruction in the lower stratosphere. (A stoichiometric check of this two-step cycle follows this entry.)
Ozone depletion
Wikipedia
365
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
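The catalytic nature of the two-step cycle can be verified by bookkeeping the species on each side. This toy Python sketch (not from the article) adds the two steps and shows that Cl and ClO cancel, leaving the net reaction 2 O3 → 3 O2.

```python
from collections import Counter

# Step 1: Cl + O3 -> ClO + O2;  Step 2: ClO + O3 -> Cl + 2 O2
step1 = {"reactants": Counter({"Cl": 1, "O3": 1}),
         "products":  Counter({"ClO": 1, "O2": 1})}
step2 = {"reactants": Counter({"ClO": 1, "O3": 1}),
         "products":  Counter({"Cl": 1, "O2": 2})}

reactants = step1["reactants"] + step2["reactants"]
products = step1["products"] + step2["products"]

net_consumed = reactants - products      # species consumed overall
net_produced = products - reactants      # species produced overall
print(dict(net_consumed), dict(net_produced))   # {'O3': 2} {'O2': 3}
```

The chlorine species drop out of the net reaction, which is what allows a single Cl atom to destroy many ozone molecules.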
A single chlorine atom would continuously destroy ozone (thus acting as a catalyst) for up to two years (the time scale for transport back down to the troposphere) except for reactions that remove it from this cycle by forming reservoir species such as hydrogen chloride (HCl) and chlorine nitrate (ClONO2). Bromine is even more efficient than chlorine at destroying ozone on a per-atom basis, but there is much less bromine in the atmosphere at present. Both chlorine and bromine contribute significantly to overall ozone depletion. Laboratory studies have also shown that fluorine and iodine atoms participate in analogous catalytic cycles. However, fluorine atoms react rapidly with water vapour, methane and hydrogen to form strongly bound hydrogen fluoride (HF) in the Earth's stratosphere, while organic molecules containing iodine react so rapidly in the lower atmosphere that they do not reach the stratosphere in significant quantities. A single chlorine atom is able to react with an average of 100,000 ozone molecules before it is removed from the catalytic cycle. This fact, plus the amount of chlorine released into the atmosphere yearly by chlorofluorocarbons (CFCs) and hydrochlorofluorocarbons (HCFCs), demonstrates the danger of CFCs and HCFCs to the environment. Observations on ozone layer depletion The ozone hole is usually measured by the reduction in the total column ozone above a point on the Earth's surface. This is normally expressed in Dobson units, abbreviated "DU". The most prominent decrease in ozone has been in the lower stratosphere. Marked decreases in column ozone in the Antarctic spring and early summer compared to the early 1970s and before have been observed using instruments such as the Total Ozone Mapping Spectrometer (TOMS). Reductions of up to 70 percent in the ozone column observed in the austral (southern hemispheric) spring over Antarctica and first reported in 1985 (Farman et al.) are continuing. Antarctic total column ozone in September and October has continued to be 40–50 percent lower than pre-ozone-hole values since the 1990s. A gradual trend toward "healing" was reported in 2016. In 2017, NASA announced that the ozone hole was the weakest since 1988 because of warm stratospheric conditions. It is expected to recover around 2070.
Ozone depletion
Wikipedia
487
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
The amount lost is more variable year-to-year in the Arctic than in the Antarctic. The greatest Arctic declines are in the winter and spring, reaching up to 30 percent when the stratosphere is coldest. Reactions that take place on polar stratospheric clouds (PSCs) play an important role in enhancing ozone depletion. PSCs form more readily in the extreme cold of the Arctic and Antarctic stratosphere. This is why ozone holes first formed, and are deeper, over Antarctica. Early models failed to take PSCs into account and predicted a gradual global depletion, which is why the sudden Antarctic ozone hole was such a surprise to many scientists. It is more accurate to speak of ozone depletion in middle latitudes rather than holes. Total column ozone declined below pre-1980 values between 1980 and 1996 for mid-latitudes. In the northern mid-latitudes, it then increased from the minimum value by about two percent from 1996 to 2009 as regulations took effect and the amount of chlorine in the stratosphere decreased. In the Southern Hemisphere's mid-latitudes, total ozone remained constant over that time period. There are no significant trends in the tropics, largely because halogen-containing compounds have not had time to break down and release chlorine and bromine atoms at tropical latitudes. Large volcanic eruptions have been shown to have substantial albeit uneven ozone-depleting effects, as observed with the 1991 eruption of Mt. Pinatubo in the Philippines. Ozone depletion also explains much of the observed reduction in stratospheric and upper tropospheric temperatures. The source of the warmth of the stratosphere is the absorption of UV radiation by ozone, hence reduced ozone leads to cooling. Some stratospheric cooling is also predicted from increases in greenhouse gases such as CO2 and CFCs themselves; however, the ozone-induced cooling appears to be dominant. Predictions of ozone levels remain difficult, but the precision of models' predictions of observed values and the agreement among different modeling techniques have increased steadily. The World Meteorological Organization Global Ozone Research and Monitoring Project—Report No. 44 is strongly in favor of the Montreal Protocol, but notes that a UNEP 1994 Assessment overestimated ozone loss for the 1994–1997 period. Compounds in the atmosphere
Ozone depletion
Wikipedia
473
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
CFCs and related compounds Chlorofluorocarbons (CFCs) and other halogenated ozone-depleting substances (ODS) are mainly responsible for man-made chemical ozone depletion. The total amount of effective halogens (chlorine and bromine) in the stratosphere can be calculated and is known as the equivalent effective stratospheric chlorine (EESC). CFCs as refrigerants were invented by Thomas Midgley Jr. in the 1930s. They were used in air conditioning and cooling units, as aerosol spray propellants prior to the 1970s, and in the cleaning processes of delicate electronic equipment. They also occur as by-products of some chemical processes. No significant natural sources have ever been identified for these compounds—their presence in the atmosphere is due almost entirely to human manufacture. As mentioned above, when such ozone-depleting chemicals reach the stratosphere, they are dissociated by ultraviolet light to release chlorine atoms. The chlorine atoms act as a catalyst, and each can break down tens of thousands of ozone molecules before being removed from the stratosphere. Given the longevity of CFC molecules, recovery times are measured in decades. It is calculated that a CFC molecule takes an average of about five to seven years to go from the ground level up to the upper atmosphere, and it can stay there for about a century, destroying up to one hundred thousand ozone molecules during that time. 1,1,1-Trichloro-2,2,2-trifluoroethane, also known as CFC-113a, is one of four man-made chemicals newly discovered in the atmosphere by a team at the University of East Anglia. CFC-113a is the only known CFC whose abundance in the atmosphere is still growing. Its source remains a mystery, but illegal manufacturing is suspected by some. CFC-113a seems to have been accumulating unabated since 1960. Between 2012 and 2017, concentrations of the gas jumped by 40 percent. A study by an international team of researchers published in Nature found that since 2013 emissions that are predominantly from north-eastern China have released large quantities of the banned chemical chlorofluorocarbon-11 (CFC-11) into the atmosphere. Scientists estimate that without action, these CFC-11 emissions will delay the recovery of the planet's ozone hole by a decade.
Ozone depletion
Wikipedia
507
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
Aluminum oxide Satellites burning up upon re-entry into Earth's atmosphere produce aluminum oxide (Al2O3) nanoparticles that endure in the atmosphere for decades. Estimates for 2022 alone were ~17 metric tons (~30 kg of nanoparticles per ~250 kg satellite). Increasing populations of satellite constellations can eventually lead to significant ozone depletion. Computer modeling Scientists have attributed ozone depletion to the increase of man-made (anthropogenic) halogen compounds from CFCs by combining observational data with computer models. These complex chemistry transport models (e.g. SLIMCAT, CLaMS—Chemical Lagrangian Model of the Stratosphere) work by combining measurements of chemicals and meteorological fields with chemical reaction rate constants. They identify key chemical reactions and transport processes that bring CFC photolysis products into contact with ozone. Ozone hole and its causes The Antarctic ozone hole is an area of the Antarctic stratosphere in which recent ozone levels have dropped to as low as 33 percent of their pre-1975 values. The ozone hole occurs during the Antarctic spring, from September to early December, as strong westerly winds start to circulate around the continent and create an atmospheric container. Within this polar vortex, over 50 percent of the lower stratospheric ozone is destroyed during the Antarctic spring. As explained above, the primary cause of ozone depletion is the presence of chlorine-containing source gases (primarily CFCs and related halocarbons). In the presence of UV light, these gases dissociate, releasing chlorine atoms, which then go on to catalyze ozone destruction. The Cl-catalyzed ozone depletion can take place in the gas phase, but it is substantially enhanced in the presence of polar stratospheric clouds (PSCs). These polar stratospheric clouds form during winter, in the extreme cold. Polar winters are dark, consisting of three months without solar radiation (sunlight). The lack of sunlight contributes to a decrease in temperature and the polar vortex traps and chills the air. Temperatures are around or below −80 °C. These low temperatures form cloud particles. Three types of PSCs—nitric acid trihydrate clouds, slowly cooling water-ice clouds, and rapidly cooling water-ice (nacreous) clouds—provide surfaces for chemical reactions whose products will, in the spring, lead to ozone destruction.
Ozone depletion
Wikipedia
506
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
The photochemical processes involved are complex but well understood. The key observation is that, ordinarily, most of the chlorine in the stratosphere resides in "reservoir" compounds, primarily chlorine nitrate (ClONO2), as well as stable end products such as HCl. The formation of end products essentially removes Cl from the ozone depletion process. Reservoir compounds sequester Cl, which can later be made available via absorption of light at wavelengths shorter than 400 nm. During the Antarctic winter and spring, reactions on the surface of the polar stratospheric cloud particles convert these "reservoir" compounds into reactive free radicals (Cl and ClO). Denitrification is the process by which the clouds remove NO2 from the stratosphere by converting it to nitric acid in PSC particles, which then are lost by sedimentation. This prevents newly formed ClO from being converted back into ClONO2. The role of sunlight in ozone depletion is the reason why the Antarctic ozone depletion is greatest during spring. During winter, even though PSCs are at their most abundant, there is no light over the pole to drive chemical reactions. During the spring, however, sunlight returns and provides energy to drive photochemical reactions and melt the polar stratospheric clouds, releasing considerable ClO, which drives the hole mechanism. Further warming temperatures near the end of spring break up the vortex around mid-December. As warm, ozone- and NO2-rich air flows in from lower latitudes, the PSCs are destroyed, the enhanced ozone depletion process shuts down, and the ozone hole closes. Most of the ozone that is destroyed is in the lower stratosphere, in contrast to the much smaller ozone depletion through homogeneous gas-phase reactions, which occurs primarily in the upper stratosphere.
Ozone depletion
Wikipedia
368
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
Effects Since the ozone layer absorbs UVB ultraviolet light from the sun, ozone layer depletion increases surface UVB levels (all else equal), which could lead to damage, including an increase in skin cancer. This was the reason for the Montreal Protocol. Although decreases in stratospheric ozone are well-tied to CFCs and increases in surface UVB, there is no direct observational evidence linking ozone depletion to higher incidence of skin cancer and eye damage in human beings. This is partly because UVA, which has also been implicated in some forms of skin cancer, is not absorbed by ozone, and because it is nearly impossible to control statistics for lifestyle changes over time. Ozone depletion may also influence wind patterns. Increased UV Ozone, while a minority constituent in Earth's atmosphere, is responsible for most of the absorption of UVB radiation. The amount of UVB radiation that penetrates through the ozone layer decreases exponentially with the slant-path thickness and density of the layer (see the sketch after this entry). When stratospheric ozone levels decrease, higher levels of UVB reach the Earth's surface. UV-driven phenolic formation in tree rings has dated the start of ozone depletion in northern latitudes to the late 1700s. In October 2008, the Ecuadorian Space Agency published a report called HIPERION. The study used ground instruments in Ecuador and the last 28 years' data from 12 satellites of several countries, and found that the UV radiation reaching equatorial latitudes was far greater than expected, with the UV Index climbing as high as 24 in Quito; the WHO considers 11 as an extreme index and a great risk to health. The report concluded that depleted ozone levels around the mid-latitudes of the planet are already endangering large populations in these areas. Later, CONIDA, the Peruvian Space Agency, published its own study, which yielded almost the same findings as the Ecuadorian study.
Ozone depletion
Wikipedia
393
44183
https://en.wikipedia.org/wiki/Ozone%20depletion
Physical sciences
Atmosphere
null
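A hedged sketch of the exponential attenuation described above, in the spirit of the Beer–Lambert law; the absorption coefficient and ozone-column values below are illustrative placeholders, not measured data.

```python
import math

def transmitted_fraction(column_du, k_per_du=0.01):
    """Fraction of incident UVB transmitted through an ozone column (toy model)."""
    return math.exp(-k_per_du * column_du)

for column in (300, 250, 200):   # ozone column in Dobson units (illustrative)
    print(column, round(transmitted_fraction(column), 3))
# A thinner column lets exponentially more UVB through.
```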