95465
https://en.wikipedia.org/wiki/Stirling%20number
Stirling number
In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book Methodus differentialis (1730). They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782. Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them. A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of n elements into k non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear).

Notation
Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted $s(n,k)$. Unsigned Stirling numbers of the first kind, which count the number of permutations of $n$ elements with $k$ disjoint cycles, are denoted $\left[{n \atop k}\right] = |s(n,k)| = (-1)^{n-k}\, s(n,k)$. Stirling numbers of the second kind, which count the number of ways to partition a set of $n$ elements into $k$ nonempty subsets, are denoted $\left\{{n \atop k}\right\} = S(n,k)$. Abramowitz and Stegun use an uppercase $S$ and a blackletter $\mathfrak{S}$, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth, though the bracket notation conflicts with a common notation for Gaussian coefficients. The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions. Another infrequent notation is $s_1(n,k)$ and $s_2(n,k)$.

Expansions of falling and rising factorials
Stirling numbers express coefficients in expansions of falling and rising factorials (also known as the Pochhammer symbol) as polynomials. That is, the falling factorial, defined as
$(x)_n = x(x-1)(x-2)\cdots(x-n+1),$
is a polynomial in $x$ of degree $n$ whose expansion is
$(x)_n = \sum_{k=0}^{n} s(n,k)\, x^k,$
with (signed) Stirling numbers of the first kind as coefficients. Note that $(x)_0 = 1$ by convention, because it is an empty product. The notations $x^{\underline{n}}$ for the falling factorial and $x^{\overline{n}}$ for the rising factorial are also often used. (Confusingly, the Pochhammer symbol that many use for falling factorials is used in special functions for rising factorials.)

Similarly, the rising factorial, defined as
$x^{(n)} = x(x+1)(x+2)\cdots(x+n-1),$
is a polynomial in $x$ of degree $n$ whose expansion is
$x^{(n)} = \sum_{k=0}^{n} \left[{n \atop k}\right] x^k,$
with unsigned Stirling numbers of the first kind as coefficients. One of these expansions can be derived from the other by observing that $x^{(n)} = (-1)^n (-x)_n$.

Stirling numbers of the second kind express the reverse relations:
$x^n = \sum_{k=0}^{n} \left\{{n \atop k}\right\} (x)_k$
and
$x^n = \sum_{k=0}^{n} (-1)^{n-k} \left\{{n \atop k}\right\} x^{(k)}.$

As change of basis coefficients
Considering the set of polynomials in the (indeterminate) variable x as a vector space, each of the three sequences
$x^0, x^1, x^2, \dots$, $(x)_0, (x)_1, (x)_2, \dots$, and $x^{(0)}, x^{(1)}, x^{(2)}, \dots$
is a basis. That is, every polynomial in x can be written as a sum $a_0 x^{(0)} + a_1 x^{(1)} + \cdots + a_n x^{(n)}$ for some unique coefficients $a_k$ (similarly for the other two bases). The above relations then express the change of basis between them, as summarized in the following commutative diagram: The coefficients for the two bottom changes are described by the Lah numbers below.
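These expansions are easy to verify by machine. The following is a minimal Python sketch (assuming SymPy is installed; the helper name stirling_first_signed is ours, not a library function) that rebuilds $(x)_4$ from signed Stirling numbers of the first kind computed by their standard recurrence.

```python
# Sketch: check the falling-factorial expansion (x)_n = sum_k s(n,k) x^k
# using the recurrence s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k).
import sympy as sp

def stirling_first_signed(n, k):
    """Signed Stirling number of the first kind s(n, k) (illustrative helper)."""
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0 or k > n:
        return 0
    return stirling_first_signed(n - 1, k - 1) - (n - 1) * stirling_first_signed(n - 1, k)

x = sp.symbols('x')
n = 4

# Falling factorial (x)_4 = x(x-1)(x-2)(x-3), expanded into powers of x.
falling = sp.expand(sp.prod([x - i for i in range(n)]))

# Reconstruction from the Stirling-number expansion.
reconstructed = sp.expand(sum(stirling_first_signed(n, k) * x**k for k in range(n + 1)))

assert falling == reconstructed   # both equal x**4 - 6*x**3 + 11*x**2 - 6*x
print(falling)
```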
Since coefficients in any basis are unique, one can define Stirling numbers this way, as the coefficients expressing polynomials of one basis in terms of another, that is, the unique numbers relating $x^n$ with falling and rising factorials as above.

Falling factorials define, up to scaling, the same polynomials as binomial coefficients: $\binom{x}{k} = (x)_k / k!$. The changes between the standard basis $1, x, x^2, \dots$ and the basis $\binom{x}{0}, \binom{x}{1}, \binom{x}{2}, \dots$ are thus described by similar formulas:
$x^n = \sum_{k=0}^{n} \left\{{n \atop k}\right\} k! \binom{x}{k} \quad\text{and}\quad \binom{x}{n} = \sum_{k=0}^{n} \frac{s(n,k)}{n!}\, x^k.$

Example
Expressing a polynomial in the basis of falling factorials is useful for calculating sums of the polynomial evaluated at consecutive integers. Indeed, the sum of falling factorials with fixed k can be expressed as another falling factorial (for $k \neq -1$):
$\sum_{0 \le i < n} (i)_k = \frac{(n)_{k+1}}{k+1}.$
This can be proved by induction. For example, the sum of fourth powers of integers up to n (this time with n included), is:
$\sum_{i=0}^{n} i^4 = \sum_{i=0}^{n} \sum_{k=0}^{4} \left\{{4 \atop k}\right\} (i)_k = \sum_{k=0}^{4} \left\{{4 \atop k}\right\} \frac{(n+1)_{k+1}}{k+1} = \left\{{4 \atop 1}\right\} \frac{(n+1)_2}{2} + \left\{{4 \atop 2}\right\} \frac{(n+1)_3}{3} + \left\{{4 \atop 3}\right\} \frac{(n+1)_4}{4} + \left\{{4 \atop 4}\right\} \frac{(n+1)_5}{5}.$
Here the Stirling numbers $\left\{{4 \atop 1}\right\} = 1$, $\left\{{4 \atop 2}\right\} = 7$, $\left\{{4 \atop 3}\right\} = 6$, $\left\{{4 \atop 4}\right\} = 1$ can be computed from their definition as the number of partitions of 4 elements into k non-empty unlabeled subsets. In contrast, the sum $\sum_{i=0}^{n} i^k$ in the standard basis is given by Faulhaber's formula, which in general is more complicated.

As inverse matrices
The Stirling numbers of the first and second kinds can be considered inverses of one another:
$\sum_{j=k}^{n} s(n,j)\, S(j,k) = \delta_{nk} \quad\text{and}\quad \sum_{j=k}^{n} S(n,j)\, s(j,k) = \delta_{nk},$
where $\delta_{nk}$ is the Kronecker delta. These two relationships may be understood to be matrix inverse relationships. That is, let s be the lower triangular matrix of Stirling numbers of the first kind, whose matrix elements are $s_{nk} = s(n,k)$. The inverse of this matrix is S, the lower triangular matrix of Stirling numbers of the second kind, whose entries are $S_{nk} = S(n,k)$. Symbolically, this is written $s^{-1} = S$. Although s and S are infinite, so calculating a product entry involves an infinite sum, the matrix multiplications work because these matrices are lower triangular, so only a finite number of terms in the sum are nonzero.

Lah numbers
The Lah numbers $L(n,k) = \binom{n-1}{k-1} \frac{n!}{k!}$ are sometimes called Stirling numbers of the third kind. By convention, $L(0,0) = 1$, and $L(n,k) = 0$ if $k > n$ or $k = 0 < n$. These numbers are coefficients expressing falling factorials in terms of rising factorials and vice versa:
$x^{(n)} = \sum_{k=0}^{n} L(n,k)\, (x)_k \quad\text{and}\quad (x)_n = \sum_{k=0}^{n} (-1)^{n-k} L(n,k)\, x^{(k)}.$
As above, this means they express the change of basis between the bases $(x)_0, (x)_1, (x)_2, \dots$ and $x^{(0)}, x^{(1)}, x^{(2)}, \dots$, completing the diagram. In particular, one formula is the inverse of the other, thus:
$\sum_{j=k}^{n} (-1)^{j-k} L(n,j)\, L(j,k) = \delta_{nk}.$
Similarly, composing the change of basis from $x^{(n)}$ to $x^n$ with the change of basis from $x^n$ to $(x)_n$ gives the change of basis directly from $x^{(n)}$ to $(x)_n$:
$L(n,k) = \sum_{j=k}^{n} \left[{n \atop j}\right] \left\{{j \atop k}\right\},$
and similarly for other compositions. In terms of matrices, if $L$ denotes the matrix with entries $L_{nk} = L(n,k)$ and $L^{-}$ denotes the matrix with entries $(-1)^{n-k} L(n,k)$, then one is the inverse of the other: $L^{-} = L^{-1}$. Composing the matrix of unsigned Stirling numbers of the first kind with the matrix of Stirling numbers of the second kind gives the Lah numbers: $L = |s| \cdot S$.

Enumeratively, $\left\{{n \atop k}\right\}$, $\left[{n \atop k}\right]$, and $L(n,k)$ can be defined as the number of partitions of n elements into k non-empty unlabeled subsets, where each subset is endowed with no order, a cyclic order, or a linear order, respectively. In particular, this implies the inequalities:
$\left\{{n \atop k}\right\} \le \left[{n \atop k}\right] \le L(n,k).$

Inversion relations and the Stirling transform
For any pair of sequences, $\{f_n\}$ and $\{g_n\}$, related by a finite sum Stirling number formula given by
$g(n) = \sum_{k=0}^{n} \left\{{n \atop k}\right\} f(k)$
for all integers $n \ge 0$, we have a corresponding inversion formula for $f$ given by
$f(n) = \sum_{k=0}^{n} \left[{n \atop k}\right] (-1)^{n-k} g(k).$
The lower indices could be any integer between $0$ and $n$.
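A minimal numerical check of the inverse-matrix relationship, in plain Python with no dependencies; the triangular matrices are filled using the standard recurrences, which are quoted in the comments (they are not stated explicitly in this article).

```python
# Sketch: the lower triangular matrices of signed Stirling numbers of the
# first kind and Stirling numbers of the second kind are mutual inverses.
N = 6

s = [[0] * N for _ in range(N)]   # signed first kind, s[n][k]
S = [[0] * N for _ in range(N)]   # second kind, S[n][k]
s[0][0] = S[0][0] = 1
for n in range(1, N):
    for k in range(n + 1):
        # s(n, k) = s(n-1, k-1) - (n-1) * s(n-1, k)
        s[n][k] = (s[n - 1][k - 1] if k else 0) - (n - 1) * s[n - 1][k]
        # S(n, k) = S(n-1, k-1) + k * S(n-1, k)
        S[n][k] = (S[n - 1][k - 1] if k else 0) + k * S[n - 1][k]

# The product sum_j s(n, j) S(j, k) should be the Kronecker delta.
for n in range(N):
    for k in range(N):
        prod = sum(s[n][j] * S[j][k] for j in range(N))
        assert prod == (1 if n == k else 0)
print("s and S are mutually inverse up to order", N - 1)
```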
These inversion relations between the two sequences translate into functional equations between the sequence exponential generating functions, given by the Stirling (generating function) transform as
$\hat{G}(z) = \hat{F}\!\left(e^z - 1\right)$
and
$\hat{F}(z) = \hat{G}\!\left(\log(1+z)\right).$

For $D = d/dx$, the differential operators $x^n D^n$ and $(xD)^n$ are related by the following formulas for all integers $n \ge 0$:
$(xD)^n = \sum_{k=0}^{n} S(n,k)\, x^k D^k \quad\text{and}\quad x^n D^n = \sum_{k=0}^{n} s(n,k)\, (xD)^k.$

Another pair of "inversion" relations involving the Stirling numbers relate the forward differences and the ordinary derivatives of a function, $f(x)$, which is analytic for all $x$, by the formulas
$\frac{1}{k!} \frac{d^k}{dx^k} f(x) = \sum_{n=k}^{\infty} \frac{s(n,k)}{n!} \Delta^n f(x)$
and
$\frac{1}{k!} \Delta^k f(x) = \sum_{n=k}^{\infty} \frac{S(n,k)}{n!} \frac{d^n}{dx^n} f(x).$

Similar properties
See the specific articles for details.

Symmetric formulae
Abramowitz and Stegun give symmetric formulae that relate the Stirling numbers of the first and second kind to each other through alternating sums of binomial coefficients.

Stirling numbers with negative integral values
The Stirling numbers can be extended to negative integral values, but not all authors do so in the same way. Regardless of the approach taken, it is worth noting that Stirling numbers of first and second kind are connected by the relations:
$\left[{n \atop k}\right] = \left\{{-k \atop -n}\right\} \quad\text{and}\quad \left\{{n \atop k}\right\} = \left[{-k \atop -n}\right]$
when n and k are nonnegative integers, which determines a table of values for negative integral arguments. Donald Knuth defined the more general Stirling numbers by extending a recurrence relation to all integers. In this approach, $\left[{n \atop k}\right]$ and $\left\{{n \atop k}\right\}$ are zero if n is negative and k is nonnegative, or if n is nonnegative and k is negative, and so we have, for any integers n and k,
$\left[{n \atop k}\right] = \left\{{-k \atop -n}\right\}.$
On the other hand, for positive integers n and k, David Branson defined $\left[{-n \atop k}\right]$ and $\left\{{-n \atop k}\right\}$ (but not $\left[{n \atop -k}\right]$ or $\left\{{n \atop -k}\right\}$). In this approach, one has an extension of the recurrence relation of the Stirling numbers of the first kind to negative n, which leads to a table of values of $\left[{-n \atop k}\right]$ for negative integral n. In this case the row sums $\sum_k \left\{{-n \atop k}\right\}$ play the role that the Bell numbers $B_n = \sum_k \left\{{n \atop k}\right\}$ play for positive n, and so one may define the negative Bell numbers by $B_{-n} = \sum_k \left\{{-n \atop k}\right\}$.
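To illustrate the inversion pair concretely, here is a small self-contained Python sketch that applies the Stirling transform to an arbitrary test sequence and recovers it exactly; the recurrences used are the standard ones, quoted in the comments.

```python
# Sketch: Stirling transform g(n) = sum_k S(n,k) f(k) and its inverse
# f(n) = sum_k [n over k] * (-1)^(n-k) * g(k), with [n over k] unsigned
# Stirling numbers of the first kind.
N = 8

S = [[0] * N for _ in range(N)]   # second kind:   S(n,k) = S(n-1,k-1) + k*S(n-1,k)
c = [[0] * N for _ in range(N)]   # unsigned first: c(n,k) = c(n-1,k-1) + (n-1)*c(n-1,k)
S[0][0] = c[0][0] = 1
for n in range(1, N):
    for k in range(1, n + 1):
        S[n][k] = S[n - 1][k - 1] + k * S[n - 1][k]
        c[n][k] = c[n - 1][k - 1] + (n - 1) * c[n - 1][k]

f = [1, 2, 0, 5, 3, 1, 4, 2]      # arbitrary test sequence
g = [sum(S[n][k] * f[k] for k in range(N)) for n in range(N)]
f_back = [sum(c[n][k] * (-1) ** (n - k) * g[k] for k in range(N))
          for n in range(N)]
assert f_back == f                 # the inversion formula recovers f
```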
Mathematics
Combinatorics
null
95807
https://en.wikipedia.org/wiki/Radiography
Radiography
Radiography is an imaging technique using X-rays, gamma rays, or similar ionizing radiation and non-ionizing radiation to view the internal form of an object. Applications of radiography include medical ("diagnostic" radiography and "therapeutic radiography") and industrial radiography. Similar techniques are used in airport security (where "body scanners" generally use backscatter X-ray). To create an image in conventional radiography, a beam of X-rays is produced by an X-ray generator and projected towards the object. A certain amount of the X-rays or other radiation is absorbed by the object, dependent on the object's density and structural composition. The X-rays that pass through the object are captured behind the object by a detector (either photographic film or a digital detector). The generation of flat two-dimensional images by this technique is called projectional radiography. In computed tomography (CT scanning), an X-ray source and its associated detectors rotate around the subject, which itself moves through the conical X-ray beam produced. Any given point within the subject is crossed from many directions by many different beams at different times. Information regarding the attenuation of these beams is collated and subjected to computation to generate two-dimensional images on three planes (axial, coronal, and sagittal), which can be further processed to produce a three-dimensional image.

History
Radiography's origins and fluoroscopy's origins can both be traced to 8 November 1895, when German physics professor Wilhelm Conrad Röntgen discovered the X-ray and noted that, while it could pass through human tissue, it could not pass through bone or metal. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. He received the first Nobel Prize in Physics for his discovery. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays using a fluorescent screen painted with barium platinocyanide and a Crookes tube which he had wrapped in black cardboard to shield its fluorescent glow. He noticed a faint green glow from the screen, about 1 metre away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow: they were passing through an opaque object to affect the film behind it. Röntgen discovered X-rays' medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first ever photograph of a human body part using X-rays. When she saw the picture, she said, "I have seen my death."

The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England, on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards also became the first to use X-rays in a surgical operation. The United States saw its first medical X-ray obtained using a discharge tube of Ivan Pulyui's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Pulyui tube produced X-rays. This was a result of Pulyui's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube.
On 3 February 1896 Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. X-rays were put to diagnostic use very early; for example, Alan Archibald Campbell-Swinton opened a radiographic laboratory in the United Kingdom in 1896, before the dangers of ionizing radiation were discovered. Indeed, Marie Curie pushed for radiography to be used to treat wounded soldiers in World War I. Initially, many kinds of staff conducted radiography in hospitals, including physicists, photographers, physicians, nurses, and engineers. The medical speciality of radiology grew up over many years around the new technology. When new diagnostic tests were developed, it was natural for the radiographers to be trained in and to adopt this new technology. Radiographers now perform fluoroscopy, computed tomography, mammography, ultrasound, nuclear medicine and magnetic resonance imaging as well. Although a nonspecialist dictionary might define radiography quite narrowly as "taking X-ray images", this has long been only part of the work of "X-ray departments", radiographers, and radiologists. Initially, radiographs were known as roentgenograms, while skiagrapher (from the Ancient Greek words for "shadow" and "writer") was used until about 1918 to mean radiographer. The Japanese term for the radiograph, rentogen (レントゲン), shares its etymology with the original English term.

Medical uses
Since the body is made up of various substances with differing densities, ionising and non-ionising radiation can be used to reveal the internal structure of the body on an image receptor by highlighting these differences using attenuation, or in the case of ionising radiation, the absorption of X-ray photons by the denser substances (like calcium-rich bones). The discipline involving the study of anatomy through the use of radiographic images is known as radiographic anatomy. Medical radiography acquisition is generally carried out by radiographers, while image analysis is generally done by radiologists. Some radiographers also specialise in image interpretation. Medical radiography includes a range of modalities producing many different types of image, each of which has a different clinical application.

Projectional radiography
The creation of images by exposing an object to X-rays or other high-energy forms of electromagnetic radiation and capturing the resulting remnant beam (or "shadow") as a latent image is known as "projection radiography". The "shadow" may be converted to light using a fluorescent screen, which is then captured on photographic film; it may be captured by a phosphor screen to be "read" later by a laser (CR); or it may directly activate a matrix of solid-state detectors (DR—similar to a very large version of a CCD in a digital camera). Bone and some organs (such as lungs) especially lend themselves to projection radiography. It is a relatively low-cost investigation with a high diagnostic yield. The difference between soft and hard body parts stems mostly from the fact that carbon has a very low X-ray cross section compared to calcium.
Computed tomography
Computed tomography or CT scan (previously known as CAT scan, the "A" standing for "axial") uses ionizing radiation (X-ray radiation) in conjunction with a computer to create images of both soft and hard tissues. These images look as though the patient was sliced like bread (thus, "tomography" – "tomo" means "slice"). Though CT uses a higher amount of ionizing X-radiation than diagnostic X-rays (both utilising X-ray radiation), advances in technology have reduced CT radiation dose levels and scan times. CT exams are generally short, most lasting only as long as a breath-hold. Contrast agents are also often used, depending on the tissues needing to be seen. Radiographers perform these examinations, sometimes in conjunction with a radiologist (for instance, when a radiologist performs a CT-guided biopsy).

Dual energy X-ray absorptiometry
DEXA, or bone densitometry, is used primarily for osteoporosis tests. It is not projection radiography, as the X-rays are emitted in two narrow beams that are scanned across the patient, 90 degrees from each other. Usually the hip (head of the femur), lower back (lumbar spine), or heel (calcaneum) are imaged, and the bone density (amount of calcium) is determined and given a number (a T-score). It is not used for bone imaging, as the image quality is not good enough to make an accurate diagnostic image for fractures, inflammation, etc. It can also be used to measure total body fat, though this is not common. The radiation dose received from DEXA scans is very low, much lower than projection radiography examinations.

Fluoroscopy
Fluoroscopy is a term invented by Thomas Edison during his early X-ray studies. The name refers to the fluorescence he saw while looking at a glowing plate bombarded with X-rays. The technique provides moving projection radiographs. Fluoroscopy is mainly performed to view movement (of tissue or a contrast agent), or to guide a medical intervention, such as angioplasty, pacemaker insertion, or joint repair/replacement. The last can often be carried out in the operating theatre, using a portable fluoroscopy machine called a C-arm. It can move around the surgery table and make digital images for the surgeon. Biplanar fluoroscopy works the same as single-plane fluoroscopy except that it displays two planes at the same time. The ability to work in two planes is important for orthopedic and spinal surgery and can reduce operating times by eliminating re-positioning.

Angiography is the use of fluoroscopy to view the cardiovascular system. An iodine-based contrast is injected into the bloodstream and watched as it travels around. Since liquid blood and the vessels are not very dense, a contrast with high density (like the large iodine atoms) is used to view the vessels under X-ray. Angiography is used to find aneurysms, leaks, blockages (thromboses), new vessel growth, and placement of catheters and stents. Balloon angioplasty is often done with angiography.

Contrast radiography
Contrast radiography uses a radiocontrast agent, a type of contrast medium, to make the structures of interest stand out visually from their background. Contrast agents are required in conventional angiography, and can be used in both projectional radiography and computed tomography (called contrast CT).
Other medical imaging
Although not technically radiographic techniques due to not using X-rays, imaging modalities such as PET and MRI are sometimes grouped in radiography because the radiology departments of hospitals handle all forms of imaging. Treatment using radiation is known as radiotherapy.

Industrial radiography
Industrial radiography is a method of non-destructive testing where many types of manufactured components can be examined to verify the internal structure and integrity of the specimen. Industrial radiography can be performed utilizing either X-rays or gamma rays. Both are forms of electromagnetic radiation. The difference between various forms of electromagnetic energy is related to the wavelength. X and gamma rays have the shortest wavelengths, and this property leads to the ability to penetrate, travel through, and exit various materials such as carbon steel and other metals. Specific methods include industrial computed tomography.

Image quality
Image quality will depend on resolution and density. Resolution is the ability of an image to show closely spaced structures in the object as separate entities, while density is the blackening power of the image. Sharpness of a radiographic image is strongly determined by the size of the X-ray source. This is determined by the area of the electron beam hitting the anode. A large photon source results in more blurring in the final image and is worsened by an increase in image formation distance. This blurring can be measured as a contribution to the modulation transfer function of the imaging system.

Radiation dose
The dosage of radiation applied in radiography varies by procedure. For example, the effective dosage of a chest x-ray is 0.1 mSv, while an abdominal CT is 10 mSv. The American Association of Physicists in Medicine (AAPM) has stated that the "risks of medical imaging at patient doses below 50 mSv for single procedures or 100 mSv for multiple procedures over short time periods are too low to be detectable and may be nonexistent." Other scientific bodies sharing this conclusion include the International Organization of Medical Physicists, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Commission on Radiological Protection. Nonetheless, radiological organizations, including the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), as well as multiple government agencies, indicate safety standards to ensure that radiation dosage is as low as possible.

Shielding
Lead is the most common shield against X-rays because of its high density (11,340 kg/m3), stopping power, ease of installation and low cost. The maximum range of a high-energy photon such as an X-ray in matter is infinite; at every point in the matter traversed by the photon, there is a probability of interaction. Thus there is a very small probability of no interaction over very large distances. The shielding of a photon beam is therefore exponential (with an attenuation length close to the radiation length of the material); doubling the thickness of shielding will square the shielding effect. The table in this section shows the recommended thickness of lead shielding as a function of X-ray energy, from the Recommendations by the Second International Congress of Radiology.
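The exponential shielding law can be illustrated numerically. The following Python sketch uses an assumed, purely illustrative attenuation coefficient (real coefficients depend strongly on the material and photon energy) to show that doubling the shield thickness squares the transmitted fraction.

```python
# Sketch: exponential attenuation I = I0 * exp(-mu * x).
# mu is an illustrative linear attenuation coefficient, not a reference value.
import math

mu = 6.0     # 1/cm, assumed/illustrative
I0 = 1.0     # incident beam intensity (arbitrary units)

def transmitted_fraction(thickness_cm: float) -> float:
    """Fraction of the beam passing through a shield of given thickness."""
    return math.exp(-mu * thickness_cm)

t1 = transmitted_fraction(0.2)   # one layer
t2 = transmitted_fraction(0.4)   # doubled thickness
# Doubling the thickness squares the transmitted fraction:
assert abs(t2 - t1 ** 2) < 1e-12
print(f"one layer: {I0 * t1:.4f}, doubled: {I0 * t2:.6f} (= {t1:.4f}^2)")
```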
Campaigns In response to increased concern by the public over radiation doses and the ongoing progress of best practices, The Alliance for Radiation Safety in Pediatric Imaging was formed within the Society for Pediatric Radiology. In concert with the American Society of Radiologic Technologists, the American College of Radiology, and the American Association of Physicists in Medicine, the Society for Pediatric Radiology developed and launched the Image Gently campaign which is designed to maintain high quality imaging studies while using the lowest doses and best radiation safety practices available on pediatric patients. This initiative has been endorsed and applied by a growing list of various professional medical organizations around the world and has received support and assistance from companies that manufacture equipment used in radiology. Following upon the success of the Image Gently campaign, the American College of Radiology, the Radiological Society of North America, the American Association of Physicists in Medicine, and the American Society of Radiologic Technologists have launched a similar campaign to address this issue in the adult population called Image Wisely. The World Health Organization and International Atomic Energy Agency (IAEA) of the United Nations have also been working in this area and have ongoing projects designed to broaden best practices and lower patient radiation dose. Provider payment Contrary to advice that emphasises only conducting radiographs when in the patient's interest, recent evidence suggests that they are used more frequently when dentists are paid under fee-for-service. Equipment Sources In medicine and dentistry, projectional radiography and computed tomography images generally use X-rays created by X-ray generators, which generate X-rays from X-ray tubes. The resultant images from the radiograph (X-ray generator/machine) or CT scanner are correctly referred to as "radiograms"/"roentgenograms" and "tomograms" respectively. A number of other sources of X-ray photons are possible, and may be used in industrial radiography or research; these include betatrons, linear accelerators (linacs), and synchrotrons. For gamma rays, radioactive sources such as 192Ir, 60Co, or 137Cs are used. Grid An anti-scatter grid may be placed between the patient and the detector to reduce the quantity of scattered x-rays that reach the detector. This improves the contrast resolution of the image, but also increases radiation exposure for the patient. Detectors Detectors can be divided into two major categories: imaging detectors (such as photographic plates and X-ray film (photographic film), now mostly replaced by various digitizing devices like image plates or flat panel detectors) and dose measurement devices (such as ionization chambers, Geiger counters, and dosimeters used to measure the local radiation exposure, dose, and/or dose rate, for example, for verifying that radiation protection equipment and procedures are effective on an ongoing basis). Side markers A radiopaque anatomical side marker is added to each image. For example, if the patient has their right hand x-rayed, the radiographer includes a radiopaque "R" marker within the field of the x-ray beam as an indicator of which hand has been imaged. If a physical marker is not included, the radiographer may add the correct side marker later as part of digital post-processing. 
Image intensifiers and array detectors
As an alternative to X-ray detectors, image intensifiers are analog devices that readily convert the acquired X-ray image into one visible on a video screen. This device is made of a vacuum tube with a wide input surface coated on the inside with caesium iodide (CsI). When hit by X-rays, the phosphor material causes the adjacent photocathode to emit electrons. These electrons are then focused using electron lenses inside the intensifier to an output screen coated with phosphorescent materials. The image from the output can then be recorded via a camera and displayed. Digital devices known as array detectors are becoming more common in fluoroscopy. These devices are made of discrete pixelated detectors known as thin-film transistors (TFT), which can either work indirectly by using photo detectors that detect light emitted from a scintillator material such as CsI, or directly by capturing the electrons produced when the X-rays hit the detector. Direct detectors do not tend to experience the blurring or spreading effect caused by phosphorescent scintillators or by film screens, since the detectors are activated directly by X-ray photons.

Dual-energy
In dual-energy radiography, images are acquired using two separate tube voltages. This is the standard method for bone densitometry. It is also used in CT pulmonary angiography to decrease the required dose of iodinated contrast.
Technology
Imaging
null
19194676
https://en.wikipedia.org/wiki/S-type%20star
S-type star
An S-type star (or just S star) is a cool giant with approximately equal quantities of carbon and oxygen in its atmosphere. The class was originally defined in 1922 by Paul Merrill for stars with unusual absorption lines and molecular bands now known to be due to s-process elements. The bands of zirconium monoxide (ZrO) are a defining feature of the S stars. The carbon stars have more carbon than oxygen in their atmospheres. In most stars, such as class M giants, the atmosphere is richer in oxygen than carbon and they are referred to as oxygen-rich stars. S-type stars are intermediate between carbon stars and normal giants. They can be grouped into two classes: intrinsic S stars, which owe their spectra to convection of fusion products and s-process elements to the surface; and extrinsic S stars, which are formed through mass transfer in a binary system. The intrinsic S-type stars are on the most luminous portion of the asymptotic giant branch, a stage of their lives lasting less than a million years. Many are long period variable stars. The extrinsic S stars are less luminous and longer-lived, often smaller-amplitude semiregular or irregular variables. S stars are relatively rare, with intrinsic S stars forming less than 10% of asymptotic giant branch stars of comparable luminosity, while extrinsic S stars form an even smaller proportion of all red giants. Spectral features Cool stars, particularly class M, show molecular bands, with titanium(II) oxide (TiO) especially strong. A small proportion of these cool stars also show correspondingly strong bands of zirconium oxide (ZrO). The existence of clearly detectable ZrO bands in visual spectra is the definition of an S-type star. The main ZrO series are: α series, in the blue at 464.06 nm, 462.61 nm, and 461.98 nm β series, in the yellow at 555.17 nm and 571.81 nm γ series, in the red at 647.4 nm, 634.5 nm, and 622.9 nm The original definition of an S star was that the ZrO bands should be easily detectable on low dispersion photographic spectral plates, but more modern spectra allow identification of many stars with much weaker ZrO. MS stars, intermediate with normal class M stars, have barely detectable ZrO but otherwise normal class M spectra. SC stars, intermediate with carbon stars, have weak or undetectable ZrO, but strong sodium D lines and detectable but weak C2 bands. S star spectra also show other differences to those of normal M class giants. The characteristic TiO bands of cool giants are weakened in most S stars, compared to M stars of similar temperature, and completely absent in some. Features related to s-process isotopes such as YO bands, Sr lines, Ba lines, and LaO bands, and also sodium D lines are all much stronger. However, VO bands are absent or very weak. The existence of spectral lines from the period 5 element Technetium (Tc) is also expected as a result of the s-process neutron capture, but a substantial fraction of S stars show no sign of Tc. Stars with strong Tc lines are sometimes referred to as Technetium stars, and they can be of class M, S, C, or the intermediate MS and SC. Some S stars, especially Mira variables, show strong hydrogen emission lines. The Hβ emission is often unusually strong compared to other lines of the Balmer series in a normal M star, but this is due to the weakness of the TiO band that would otherwise dilute the Hβ emission. 
Classification schemes The spectral class S was first defined in 1922 to represent a number of long-period variables (meaning Mira variables) and stars with similar peculiar spectra. Many of the absorption lines in the spectra were recognised as unusual, but their associated elements were not known. The absorption bands now recognised as due to ZrO are clearly listed as major features of the S-type spectra. At that time, class M was not divided into numeric sub-classes, but into Ma, Mb, Mc, and Md. The new class S was simply left as either S or Se depending on the existence of emission lines. It was considered that the Se stars were all LPVs and the S stars were non-variable, but exceptions have since been found. For example, π1 Gruis is now known to be a semiregular variable. The classification of S stars has been revised several times since its first introduction, to reflect advances in the resolution of available spectra, the discovery of greater numbers of S-type stars, and better understanding of the relationships between the various cool luminous giant spectral types. Comma notation The formalisation of S star classification in 1954 introduced a two-dimensional scheme of the form SX,Y. For example, R Andromedae is listed as S6,6e. X is the temperature class. It is a digit between 1 (although the smallest type actually listed is S1.5) and 9, intended to represent a temperature scale corresponding approximately to the sequence of M1 to M9. The temperature class is actually calculated by estimating intensities for the ZrO and TiO bands, then summing the larger intensity with half the smaller intensity. Y is the abundance class. It is also a digit between 1 and 9, assigned by multiplying the ratio of ZrO and TiO bands by the temperature class. This calculation generally yields a number which can be rounded down to give the abundance class digit, but this is modified for higher values: 6.0 – 7.5 maps to 6 7.6 – 9.9 maps to 7 10.0 – 50 maps to 8 > 50 maps to 9 In practice, spectral types for new stars would be assigned by referencing to the standard stars, since the intensity values are subjective and would be impossible to reproduce from spectra taken under different conditions. A number of drawbacks came to light as S stars were studied more closely and the mechanisms behind the spectra came to be understood. The strengths of the ZrO and TiO are influenced both by temperature and by actual abundances. The S stars represent a continuum from having oxygen slightly more abundant than carbon to carbon being slightly more abundant than oxygen. When carbon becomes more abundant than oxygen, the free oxygen is rapidly bound into CO and abundances of ZrO and TiO drop dramatically, making them a poor indicator in some stars. The abundance class also becomes unusable for stars with more carbon than oxygen in their atmospheres. This form of spectral type is a common type seen for S stars, possibly still the most common form. Elemental intensities The first major revision of the classification for S stars completely abandons the single-digit abundance class in favour of explicit abundance intensities for Zr and Ti. So R And is listed, at a normal maximum, with a spectral type of S5e Zr5 Ti2. In 1979 Ake defined an abundance index based on the ZrO, TiO, and YO band intensities. This single digit between 1 and 7 was intended to represent the transition from MS stars through increasing C/O ratios to SC stars. 
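Read literally, the comma-notation rules above amount to a small calculator. The Python sketch below encodes them as stated; it is illustrative only, since in practice band intensities were subjective estimates and types were assigned by comparison with standard stars.

```python
# Sketch: temperature and abundance classes in the 1954 SX,Y comma scheme,
# computed from estimated ZrO and TiO band intensities (illustrative only).
def comma_notation(zro: float, tio: float) -> str:
    # Temperature class: the larger intensity plus half the smaller one.
    temp = max(zro, tio) + 0.5 * min(zro, tio)
    # Abundance class: (ZrO/TiO ratio) * temperature class, then mapped.
    raw = (zro / tio) * temp
    if raw > 50:
        abundance = 9
    elif raw >= 10.0:
        abundance = 8
    elif raw >= 7.6:
        abundance = 7
    elif raw >= 6.0:
        abundance = 6
    else:
        abundance = int(raw)   # smaller values are rounded down
    return f"S{temp:g},{abundance}"

print(comma_notation(zro=5.0, tio=2.0))   # -> "S6,8"
```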
Spectral types were still listed with explicit Zr and Ti intensity values, and the abundance index was included separately in the list of standard stars.

Slash notation
The abundance index was immediately adopted and extended to run from 1 to 10, differentiating abundances in SC stars. It was now quoted as part of the spectral type in preference to separate Zr and Ti abundances. To distinguish it from the earlier abandoned abundance class it was used with a slash character after the temperature class, so that the spectral class for R And became S5/4.5e. The new abundance index is not calculated directly, but is assigned from the relative strengths of a number of spectral features. It is designed to closely indicate the sequence of C/O ratios from below 0.95 to about 1.1. Primarily the relative strength of ZrO and TiO bands forms a sequence from MS stars to abundance index 1 through 6. Abundance indices 7 to 10 are the SC stars, and ZrO is weak or absent, so the relative strength of the sodium D lines and C2 bands is used. Abundance index 0 is not used, and abundance index 10 is equivalent to a carbon star Cx,2 so it is also never seen. The derivation of the temperature class is also refined, to use line ratios in addition to the total ZrO and TiO strength. For MS stars and those with abundance index 1 or 2, the same TiO band strength criteria as for M stars can be applied. Ratios of different ZrO bands at 530.5 nm and 555.1 nm are useful with abundance indices 3 and 4, as is the sudden appearance of LaO bands at cooler temperatures. The ratio of Ba and Sr lines is also useful at the same indices and for carbon-rich stars with abundance index 7 to 9. Where ZrO and TiO are weak or absent, the ratio of the blended features at 645.6 nm and 645.0 nm can be used to assign the temperature class.

Asterisk notation
With the different classification schemes and the difficulties of assigning a consistent class across the whole range of MS, S, and SC stars, other schemes are sometimes used. For example, one survey of new S/MS, carbon, and SC stars uses a two-dimensional scheme indicated by an asterisk, for example S5*3. The first digit is based on TiO strength to approximate the class M sequence, and the second is based solely on ZrO strength.

Standard stars
This table shows the spectral types of a number of well-known S stars as they were classified at various times. Most of the stars are variable, usually of the Mira type. Where possible the table shows the type at maximum brightness, but several of the Ake types in particular are not at maximum brightness and so have a later type. ZrO and TiO band intensities are also shown if they are published (an x indicates that no bands were found). If the abundances are part of the formal spectral type then the abundance index is shown.

Formation
There are two distinct classes of S-type stars: intrinsic S stars and extrinsic S stars. The presence of technetium is used to distinguish the two classes, as it is found only in the intrinsic S-type stars.

Intrinsic S stars
Intrinsic S-type stars are thermal pulsing asymptotic giant branch (TP-AGB) stars. AGB stars have inert carbon-oxygen cores and undergo fusion both in an inner helium shell and an outer hydrogen shell. They are large cool M class giants. The thermal pulses, created by flashes from the helium shell, cause strong convection within the upper layers of the star.
These pulses become stronger as the star evolves and in sufficiently massive stars the convection becomes deep enough to dredge up fusion products from the region between the two shells to the surface. These fusion products include carbon and s-process elements. The s-process elements include zirconium (Zr), yttrium (Y), lanthanum (La), technetium (Tc), barium (Ba), and strontium (Sr), which form the characteristic S class spectrum with ZrO, YO, and LaO bands, as well as Tc, Sr, and Ba lines. The atmosphere of S stars has a carbon to oxygen ratio in the range 0.5 to < 1. Carbon enrichment continues with subsequent thermal pulses until the carbon abundance exceeds the oxygen abundance, at which point the oxygen in the atmosphere is rapidly locked into CO and formation of the oxides diminishes. These stars show intermediate SC spectra and further carbon enrichment leads to a carbon star. Extrinsic S stars The Technetium isotope produced by neutron capture in the s-process is 99Tc and it has a half life of around 200,000 years in a stellar atmosphere. Any of the isotope present when a star formed would have completely decayed by the time it became a giant, and any newly formed 99Tc dredged up in an AGB star would survive until the end of the AGB phase, making it difficult for a red giant to have other s-process elements in its atmosphere without technetium. S-type stars without technetium form by the transfer of technetium-rich matter, as well as other dredged-up elements, from an intrinsic S star in a binary system onto a smaller less-evolved companion. After a few hundred thousand years, the 99Tc will have decayed and a technetium-free star enriched with carbon and other s-process elements will remain. When this star is, or becomes, a G or K type red giant, it will be classified as a Barium star. When it evolves to temperatures cool enough for ZrO absorption bands to show in the spectrum, approximately M class, it will be classified as an S-type star. These stars are called extrinsic S stars. Distribution and numbers Stars with a spectral class of S only form under a narrow range of conditions and they are uncommon. The distributions and properties of intrinsic and extrinsic S stars are different, reflecting their different modes of formation. TP-AGB stars are difficult to identify reliably in large surveys, but counts of normal M-class luminous AGB stars and similar S-type and carbon stars have shown different distributions in the galaxy. S stars are distributed in a similar way to carbon stars, but there are only around a third as many as the carbon stars. Both types of carbon-rich star are very rare near to the Galactic Center, but make up 10% – 20% of all the luminous AGB stars in the solar neighbourhood, so that S stars are around 5% of the AGB stars. The carbon-rich stars are also concentrated more closely in the galactic plane. S-type stars make up a disproportionate number of Mira variables, 7% in one survey compared to 3% of all AGB stars. Extrinsic S stars are not on the TP-AGB, but are red giant branch stars or early AGB stars. Their numbers and distribution are uncertain. They have been estimated to make up between 30% and 70% of all S-type stars, although only a tiny fraction of all red giant branch stars. They are less strongly concentrated in the galactic disc, indicating that they are from an older population of stars than the intrinsic group. 
Properties Very few intrinsic S stars have had their mass directly measured using a binary orbit, although their masses have been estimated using Mira period-mass relations or pulsations properties. The observed masses were found to be around until very recently when Gaia parallaxes helped discover intrinsic S stars with solar-like masses and metallicities. Models of TP-AGB evolution show that the third dredge-up becomes larger as the shells move towards the surface, and that less massive stars experience fewer dredge-ups before leaving the AGB. Stars with masses of will experience enough dredge-ups to become carbon stars, but they will be large events and the star will usually skip straight past the crucial C/O ratio near 1 without becoming an S-type star. More massive stars reach equal levels of carbon and oxygen gradually during several small dredge-ups. Stars more than about experience hot bottom burning (the burning of carbon at the base of the convective envelope) which prevents them becoming carbon stars, but they may still become S-type stars before reverting to an oxygen-rich state. Extrinsic S stars are always in binary systems and their calculated masses are around . This is consistent with RGB stars or early AGB stars. Intrinsic S stars have luminosities around , although they are usually variable. Their temperatures average about 2,300 K for the Mira S stars and 3,100 K for the non-Mira S stars, a few hundred K warmer than oxygen-rich AGB stars and a few hundred K cooler than carbon stars. Their radii average about for the Miras and for the non-miras, larger than oxygen-rich stars and smaller than carbon stars. Extrinsic S stars have luminosities typically around , temperatures between 3,150 and 4,000 K, and radii less than . This means they lie below the red giant tip and will typically be RGB stars rather than AGB stars. Mass loss and dust Extrinsic S stars lose considerable mass through their stellar winds, similar to oxygen-rich TP-AGB stars and carbon stars. Typically the rates are around 1/10,000,000th the mass of the sun per year, although in extreme cases such as W Aquilae they can be more than ten times higher. It is expected that the existence of dust drives the mass loss in cool stars, but it is unclear what type of dust can form in the atmosphere of an S star with most carbon and oxygen locked into CO gas. The stellar winds of S stars are comparable to oxygen-rich and carbon-rich stars with similar physical properties. There is about 300 times more gas than dust observed in the circumstellar material around S stars. It is believed to be made up of metallic iron, FeSi, silicon carbide, and forsterite. Without silicates and carbon, it is believed that nucleation is triggered by TiC, ZrC, and TiO2. Detached dust shells are seen around a number of carbon stars, but not S-type stars. Infrared excesses indicate that there is dust around most intrinsic S stars, but the outflow has not been sufficient and longlasting enough to form a visible detached shell. The shells are thought to form during a superwind phase very late in the AGB evolution. Examples BD Camelopardalis is a naked-eye example of an extrinsic S star. It is a slow irregular variable in a symbiotic binary system with a hotter companion which may also be variable. The Mira variable Chi Cygni is an intrinsic S star. When near maximum light, it is the sky's brightest S-type star. 
It has a variable late type spectrum about S6 to S10, with features of zirconium, titanium and vanadium oxides, sometimes bordering on the intermediate MS type. A number of other prominent Mira variables such as R Andromedae and R Cygni are also S-type stars, as well as the peculiar semiregular variable π1 Gruis. The naked-eye star ο1 Ori is an intermediate MS star and small amplitude semiregular variable with a DA3 white dwarf companion. The spectral type has been given as S3.5/1-, M3III(BaII), or M3.2IIIaS.
Physical sciences
Stellar astronomy
Astronomy
19194778
https://en.wikipedia.org/wiki/Deformation%20%28physics%29
Deformation (physics)
In physics and continuum mechanics, deformation is the change in the shape or size of an object. It has dimension of length with SI unit of metre (m). It is quantified as the residual displacement of particles in a non-rigid body, from an initial configuration to a final configuration, excluding the body's average translation and rotation (its rigid transformation). A configuration is a set containing the positions of all particles of the body. A deformation can occur because of external loads, intrinsic activity (e.g. muscle contraction), body forces (such as gravity or electromagnetic forces), or changes in temperature, moisture content, or chemical reactions, etc. In a continuous body, a deformation field results from a stress field due to applied forces or because of some changes in the conditions of the body. The relation between stress and strain (relative deformation) is expressed by constitutive equations, e.g., Hooke's law for linear elastic materials. Deformations which cease to exist after the stress field is removed are termed elastic deformations. In this case, the continuum completely recovers its original configuration. On the other hand, irreversible deformations may remain, and these exist even after stresses have been removed. One type of irreversible deformation is plastic deformation, which occurs in material bodies after stresses have attained a certain threshold value known as the elastic limit or yield stress, and is the result of slip or dislocation mechanisms at the atomic level. Another type of irreversible deformation is viscous deformation, which is the irreversible part of viscoelastic deformation. In the case of elastic deformations, the response function linking strain to the deforming stress is the compliance tensor of the material.

Definition and formulation
Deformation is the change in the metric properties of a continuous body, meaning that a curve drawn in the initial body placement changes its length when displaced to a curve in the final placement. If none of the curves changes length, it is said that a rigid body displacement occurred. It is convenient to identify a reference configuration or initial geometric state of the continuum body which all subsequent configurations are referenced from. The reference configuration need not be one the body actually will ever occupy. Often, the configuration at $t = 0$ is considered the reference configuration, $\kappa_0(\mathcal{B})$. The configuration at the current time $t$ is the current configuration. For deformation analysis, the reference configuration is identified as undeformed configuration, and the current configuration as deformed configuration. Additionally, time is not considered when analyzing deformation, thus the sequence of configurations between the undeformed and deformed configurations are of no interest. The components $X_J$ of the position vector $\mathbf{X}$ of a particle in the reference configuration, taken with respect to the reference coordinate system, are called the material or reference coordinates. On the other hand, the components $x_i$ of the position vector $\mathbf{x}$ of a particle in the deformed configuration, taken with respect to the spatial coordinate system of reference, are called the spatial coordinates. There are two methods for analysing the deformation of a continuum. One description is made in terms of the material or referential coordinates, called material description or Lagrangian description. A second description of deformation is made in terms of the spatial coordinates; it is called the spatial description or Eulerian description.
There is continuity during deformation of a continuum body in the sense that: The material points forming a closed curve at any instant will always form a closed curve at any subsequent time. The material points forming a closed surface at any instant will always form a closed surface at any subsequent time and the matter within the closed surface will always remain within.

Affine deformation
An affine deformation is a deformation that can be completely described by an affine transformation. Such a transformation is composed of a linear transformation (such as rotation, shear, extension and compression) and a rigid body translation. Affine deformations are also called homogeneous deformations. Therefore, an affine deformation has the form
$\mathbf{x}(\mathbf{X}, t) = \boldsymbol{F}(t)\,\mathbf{X} + \mathbf{c}(t),$
where $\mathbf{x}$ is the position of a point in the deformed configuration, $\mathbf{X}$ is the position in a reference configuration, $t$ is a time-like parameter, $\boldsymbol{F}$ is the linear transformer and $\mathbf{c}$ is the translation. In matrix form, where the components are with respect to an orthonormal basis,
$x_i(X_1, X_2, X_3, t) = F_{iJ}(t)\, X_J + c_i(t).$
The above deformation becomes non-affine or inhomogeneous if $\boldsymbol{F} = \boldsymbol{F}(\mathbf{X}, t)$ or $\mathbf{c} = \mathbf{c}(\mathbf{X}, t)$.

Rigid body motion
A rigid body motion is a special affine deformation that does not involve any shear, extension or compression. The transformation matrix is proper orthogonal in order to allow rotations but no reflections. A rigid body motion can be described by
$\mathbf{x}(\mathbf{X}, t) = \boldsymbol{Q}(t)\,\mathbf{X} + \mathbf{c}(t),$
where
$\boldsymbol{Q} \cdot \boldsymbol{Q}^{T} = \boldsymbol{Q}^{T} \cdot \boldsymbol{Q} = \boldsymbol{1}.$
In matrix form,
$x_i(X_1, X_2, X_3, t) = Q_{iJ}(t)\, X_J + c_i(t).$

Background: displacement
A change in the configuration of a continuum body results in a displacement. The displacement of a body has two components: a rigid-body displacement and a deformation. A rigid-body displacement consists of a simultaneous translation and rotation of the body without changing its shape or size. Deformation implies the change in shape and/or size of the body from an initial or undeformed configuration to a current or deformed configuration (Figure 1). If after a displacement of the continuum there is a relative displacement between particles, a deformation has occurred. On the other hand, if after displacement of the continuum the relative displacement between particles in the current configuration is zero, then there is no deformation and a rigid-body displacement is said to have occurred. The vector joining the positions of a particle P in the undeformed configuration and deformed configuration is called the displacement vector, $\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i$ in the Lagrangian description, or $\mathbf{U}(\mathbf{x}, t) = U_J \mathbf{E}_J$ in the Eulerian description. A displacement field is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration with the undeformed configuration. It is convenient to do the analysis of deformation or motion of a continuum body in terms of the displacement field. In general, the displacement field is expressed in terms of the material coordinates as
$\mathbf{u}(\mathbf{X}, t) = \mathbf{b}(t) + \mathbf{x}(\mathbf{X}, t) - \mathbf{X},$
or in terms of the spatial coordinates as
$\mathbf{U}(\mathbf{x}, t) = \mathbf{b}(t) + \mathbf{x} - \mathbf{X}(\mathbf{x}, t),$
where $\mathbf{b}$ is the displacement of the origin of the spatial frame relative to the material frame and $\alpha_{Ji}$ are the direction cosines between the material and spatial coordinate systems with unit vectors $\mathbf{E}_J$ and $\mathbf{e}_i$, respectively. Thus $\mathbf{E}_J \cdot \mathbf{e}_i = \alpha_{Ji} = \alpha_{iJ}$, and the relationship between $u_i$ and $U_J$ is then given by $u_i = \alpha_{iJ} U_J$. Knowing that $\mathbf{e}_i = \alpha_{iJ} \mathbf{E}_J$, then $\mathbf{u}(\mathbf{X}, t) = u_i \mathbf{e}_i = u_i (\alpha_{iJ} \mathbf{E}_J) = U_J \mathbf{E}_J = \mathbf{U}(\mathbf{x}, t)$. It is common to superimpose the coordinate systems for the undeformed and deformed configurations, which results in $\mathbf{b} = 0$, and the direction cosines become Kronecker deltas: $\mathbf{E}_J \cdot \mathbf{e}_i = \delta_{Ji} = \delta_{iJ}$. Thus, we have
$\mathbf{u}(\mathbf{X}, t) = \mathbf{x}(\mathbf{X}, t) - \mathbf{X}, \quad\text{or}\quad u_i = x_i - X_i,$
or in terms of the spatial coordinates as
$\mathbf{U}(\mathbf{x}, t) = \mathbf{x} - \mathbf{X}(\mathbf{x}, t), \quad\text{or}\quad U_J = x_J - X_J.$

Displacement gradient tensor
The partial differentiation of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor $\nabla_{\mathbf{X}} \mathbf{u}$. Thus we have:
$\nabla_{\mathbf{X}} \mathbf{u} = \nabla_{\mathbf{X}} \mathbf{x} - \boldsymbol{I} = \boldsymbol{F} - \boldsymbol{I},$
or
$\frac{\partial u_i}{\partial X_J} = \frac{\partial x_i}{\partial X_J} - \delta_{iJ} = F_{iJ} - \delta_{iJ},$
where $\boldsymbol{F} = \nabla_{\mathbf{X}} \mathbf{x}$ is the deformation gradient tensor.
Similarly, the partial differentiation of the displacement vector with respect to the spatial coordinates yields the spatial displacement gradient tensor $\nabla_{\mathbf{x}} \mathbf{U}$. Thus we have,
$\nabla_{\mathbf{x}} \mathbf{U} = \boldsymbol{I} - \nabla_{\mathbf{x}} \mathbf{X} = \boldsymbol{I} - \boldsymbol{F}^{-1},$
or
$\frac{\partial U_J}{\partial x_i} = \delta_{Ji} - \frac{\partial X_J}{\partial x_i} = \delta_{Ji} - F^{-1}_{Ji}.$

Examples
Homogeneous (or affine) deformations are useful in elucidating the behavior of materials. Some homogeneous deformations of interest are: uniform extension, pure dilation, equibiaxial tension, simple shear, and pure shear. Linear or longitudinal deformations of long objects, such as beams and fibers, are called elongation or shortening; derived quantities are the relative elongation and the stretch ratio. Plane deformations are also of interest, particularly in the experimental context. Volume deformation is a uniform scaling due to isotropic compression; the relative volume deformation is called volumetric strain.

Plane deformation
A plane deformation, also called plane strain, is one where the deformation is restricted to one of the planes in the reference configuration. If the deformation is restricted to the plane described by the basis vectors $\mathbf{e}_1$, $\mathbf{e}_2$, the deformation gradient has the form
$\boldsymbol{F} = F_{11}\, \mathbf{e}_1 \otimes \mathbf{e}_1 + F_{12}\, \mathbf{e}_1 \otimes \mathbf{e}_2 + F_{21}\, \mathbf{e}_2 \otimes \mathbf{e}_1 + F_{22}\, \mathbf{e}_2 \otimes \mathbf{e}_2 + \mathbf{e}_3 \otimes \mathbf{e}_3.$
In matrix form,
$\boldsymbol{F} = \begin{bmatrix} F_{11} & F_{12} & 0 \\ F_{21} & F_{22} & 0 \\ 0 & 0 & 1 \end{bmatrix}.$
From the polar decomposition theorem, the deformation gradient, up to a change of coordinates, can be decomposed into a stretch and a rotation. Since all the deformation is in a plane, we can write
$\boldsymbol{F} = \boldsymbol{R} \cdot \boldsymbol{U} = \begin{bmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & 1 \end{bmatrix},$
where $\theta$ is the angle of rotation and $\lambda_1$, $\lambda_2$ are the principal stretches.

Isochoric plane deformation
If the deformation is isochoric (volume preserving) then $\det(\boldsymbol{F}) = 1$ and we have
$F_{11} F_{22} - F_{12} F_{21} = 1.$
Alternatively, $\lambda_1 \lambda_2 = 1$.

Simple shear
A simple shear deformation is defined as an isochoric plane deformation in which there is a set of line elements with a given reference orientation that do not change length and orientation during the deformation. If $\mathbf{e}_1$ is the fixed reference orientation in which line elements do not deform during the deformation, then $\boldsymbol{F} \cdot \mathbf{e}_1 = \mathbf{e}_1$ and $F_{11} = 1$, $F_{21} = 0$. Therefore,
$F_{11} F_{22} - F_{12} F_{21} = 1 \quad \Rightarrow \quad F_{22} = 1.$
Since the deformation is isochoric, this fixes the diagonal. Define
$\gamma := F_{12}.$
Then, the deformation gradient in simple shear can be expressed as
$\boldsymbol{F} = \begin{bmatrix} 1 & \gamma & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}.$
Now,
$\boldsymbol{F} \cdot \mathbf{e}_2 = F_{12}\, \mathbf{e}_1 + F_{22}\, \mathbf{e}_2 = \gamma\, \mathbf{e}_1 + \mathbf{e}_2.$
Since $\mathbf{e}_1 \otimes \mathbf{e}_1 + \mathbf{e}_2 \otimes \mathbf{e}_2 + \mathbf{e}_3 \otimes \mathbf{e}_3 = \boldsymbol{1}$, we can also write the deformation gradient as
$\boldsymbol{F} = \boldsymbol{1} + \gamma\, \mathbf{e}_1 \otimes \mathbf{e}_2.$
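The simple-shear results can be checked numerically. The sketch below assumes NumPy and SciPy are available; scipy.linalg.polar computes the decomposition $\boldsymbol{F} = \boldsymbol{R}\boldsymbol{U}$ used above.

```python
# Sketch: simple shear F = 1 + gamma * e1 (x) e2 is isochoric, and line
# elements along e1 are unchanged. Uses NumPy and SciPy's polar().
import numpy as np
from scipy.linalg import polar

gamma = 0.5
F = np.eye(3) + gamma * np.outer([1, 0, 0], [0, 1, 0])

assert np.isclose(np.linalg.det(F), 1.0)       # volume preserving
assert np.allclose(F @ [1, 0, 0], [1, 0, 0])   # e1 does not deform

R, U = polar(F)   # F = R U, with R proper orthogonal and U the stretch
assert np.allclose(R @ U, F)
assert np.isclose(np.linalg.det(R), 1.0)
print("principal stretches:", np.linalg.eigvalsh(U))
```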
Physical sciences
Fluid mechanics
Physics
19196523
https://en.wikipedia.org/wiki/Randomness
Randomness
In common usage, randomness is the apparent or actual lack of definite pattern or predictability in information. A random sequence of events, symbols or steps often has no order and does not follow an intelligible pattern or combination. Individual random events are, by definition, unpredictable, but if there is a known probability distribution, the frequency of different outcomes over repeated events (or "trials") is predictable. For example, when throwing two dice, the outcome of any particular roll is unpredictable, but a sum of 7 will tend to occur twice as often as 4. In this view, randomness is not haphazardness; it is a measure of uncertainty of an outcome. Randomness applies to concepts of chance, probability, and information entropy. The fields of mathematics, probability, and statistics use formal definitions of randomness, typically assuming that there is some 'objective' probability distribution. In statistics, a random variable is an assignment of a numerical value to each possible outcome of an event space. This association facilitates the identification and the calculation of probabilities of the events. Random variables can appear in random sequences. A random process is a sequence of random variables whose outcomes do not follow a deterministic pattern, but follow an evolution described by probability distributions. These and other constructs are extremely useful in probability theory and the various applications of randomness. Randomness is most often used in statistics to signify well-defined statistical properties. Monte Carlo methods, which rely on random input (such as from random number generators or pseudorandom number generators), are important techniques in science, particularly in the field of computational science. By analogy, quasi-Monte Carlo methods use quasi-random number generators. Random selection, when narrowly associated with a simple random sample, is a method of selecting items (often called units) from a population where the probability of choosing a specific item is the proportion of those items in the population. For example, with a bowl containing just 10 red marbles and 90 blue marbles, a random selection mechanism would choose a red marble with probability 1/10. A random selection mechanism that selected 10 marbles from this bowl would not necessarily result in 1 red and 9 blue (a short simulation of this appears below). In situations where a population consists of items that are distinguishable, a random selection mechanism requires equal probabilities for any item to be chosen. That is, if the selection process is such that each member of a population, say research subjects, has the same probability of being chosen, then we can say the selection process is random. According to Ramsey theory, pure randomness (in the sense of there being no discernible pattern) is impossible, especially for large structures. Mathematician Theodore Motzkin suggested that "while disorder is more probable in general, complete disorder is impossible". Misunderstanding this can lead to numerous conspiracy theories. Cristian S. Calude stated that "given the impossibility of true randomness, the effort is directed towards studying degrees of randomness". It can be proven that there is an infinite hierarchy (in terms of quality or strength) of forms of randomness. History In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance.
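A short simulation of the marble example, using only Python's standard library: each individual draw is unpredictable, while the long-run frequency of red approaches the 1/10 probability.

```python
# Sketch: random selection from a bowl of 10 red and 90 blue marbles.
# Each draw is unpredictable, but the long-run red frequency tends to 1/10.
import random

bowl = ["red"] * 10 + ["blue"] * 90
trials = 100_000
reds = sum(1 for _ in range(trials) if random.choice(bowl) == "red")
print(f"empirical P(red) = {reds / trials:.3f}  (expected 0.100)")

# Selecting 10 marbles at once need not give exactly 1 red and 9 blue:
sample = random.sample(bowl, 10)
print(sample.count("red"), "red in one sample of 10")
```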
History
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used various methods of divination to attempt to circumvent randomness and fate. Beyond religion and games of chance, randomness has been attested for sortition since at least ancient Athenian democracy, in the form of the kleroterion.

The formalization of odds and chance was perhaps earliest done by the Chinese 3,000 years ago. The Greek philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th century that Italian mathematicians began to formalize the odds associated with various games of chance. The invention of calculus had a positive impact on the formal study of randomness. In the 1888 edition of his book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view of the randomness of the digits of pi (π), by using them to construct a random walk in two dimensions.

The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various approaches to the mathematical foundations of probability were introduced. In the mid-to-late 20th century, ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic randomness. Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th century computer scientists began to realize that the deliberate introduction of randomness into computations can be an effective tool for designing better algorithms. In some cases, such randomized algorithms even outperform the best deterministic methods.

In science
Many scientific fields are concerned with randomness:
Algorithmic probability
Chaos theory
Cryptography
Game theory
Information theory
Pattern recognition
Percolation theory
Probability theory
Quantum mechanics
Random walk
Statistical mechanics
Statistics

In the physical sciences
In the 19th century, scientists used the idea of random motions of molecules in the development of statistical mechanics to explain phenomena in thermodynamics and the properties of gases.

According to several standard interpretations of quantum mechanics, microscopic phenomena are objectively random. That is, in an experiment that controls all causally relevant parameters, some aspects of the outcome still vary randomly. For example, if a single unstable atom is placed in a controlled environment, it cannot be predicted how long it will take for the atom to decay; only the probability of decay in a given time can be given. Thus, quantum mechanics does not specify the outcome of individual experiments, but only the probabilities. Hidden variable theories reject the view that nature contains irreducible randomness: such theories posit that in the processes that appear random, properties with a certain statistical distribution are at work behind the scenes, determining the outcome in each case.

In biology
The modern evolutionary synthesis ascribes the observed diversity of life to random genetic mutations followed by natural selection. The latter retains some random mutations in the gene pool due to the systematically improved chance for survival and reproduction that those mutated genes confer on individuals who possess them. The location of a mutation is not entirely random, however, as, for example, biologically important regions may be more protected from mutations.

Several authors also claim that evolution (and sometimes development) requires a specific form of randomness, namely the introduction of qualitatively new behaviors: instead of the choice of one possibility among several pre-given ones, this randomness corresponds to the formation of new possibilities.

The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes and the environment), and to some extent randomly. For example, the density of freckles that appear on a person's skin is controlled by genes and exposure to light, whereas the exact location of individual freckles seems random. As far as behavior is concerned, randomness is important if an animal is to behave in a way that is unpredictable to others. For instance, insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators to predict their trajectories.

In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer the underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is necessary to have a large supply of random numbers, or means to generate them on demand.

Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that string (Kolmogorov randomness), which means that random strings are those that cannot be compressed. Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory Chaitin. For the notion of infinite sequences, mathematicians generally accept Per Martin-Löf's semi-eponymous definition: an infinite sequence is random if and only if it withstands all recursively enumerable null sets. Other notions of random sequences include, among others, recursive randomness and Schnorr randomness, which are based on recursively computable martingales. It was shown by Yongge Wang that these randomness notions are generally different.

Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal, meaning that every block of digits of a given length eventually appears in their expansion with the same limiting frequency.
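Kolmogorov complexity itself is uncomputable, but a general-purpose compressor gives a rough practical upper bound, which is enough to illustrate the point that patterned strings compress while random ones do not. A minimal Python sketch (illustrative only; zlib merely stands in for "any computer program"):

import os
import zlib

structured = b"ab" * 5000          # a highly patterned 10,000-byte string
random_bytes = os.urandom(10000)   # operating-system-supplied randomness

# The patterned string shrinks dramatically; the random bytes barely at all.
print(len(zlib.compress(structured)))    # a few dozen bytes
print(len(zlib.compress(random_bytes)))  # close to (or slightly above) 10,000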
In statistics
In statistics, randomness is commonly used to create simple random samples. This allows surveys of completely random groups of people to provide realistic data that is reflective of the population. Common methods of doing this include drawing names out of a hat or using a random digit chart (a large table of random digits).

In information science
In information science, irrelevant or meaningless data is considered noise. Noise consists of numerous transient disturbances with a statistically randomized time distribution. In communication theory, randomness in a signal is called "noise", and is opposed to that component of its variation that is causally attributable to the source, the signal.

In the development of random networks, communication randomness rests on the two simple assumptions of Paul Erdős and Alfréd Rényi: that there is a fixed number of nodes, which remains fixed for the life of the network, and that all nodes are equal and linked randomly to each other.
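The Erdős–Rényi construction just described is straightforward to sketch in code. The following minimal illustration assumes nothing beyond the two stated conditions; the function name, parameters, and seed are hypothetical conveniences, not from the article:

import random

def erdos_renyi(n, p, seed=None):
    # G(n, p): each possible edge between the n fixed nodes is
    # included independently with probability p.
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

edges = erdos_renyi(10, 0.3, seed=42)
print(len(edges), "edges out of", 10 * 9 // 2, "possible")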
In finance
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense that the expected value of their change is zero but the actual value may turn out to be positive or negative. More generally, asset prices are influenced by a variety of unpredictable events in the general economic environment.

In politics
Random selection can be an official method to resolve tied elections in some jurisdictions. Its use in politics originates long ago: many offices in ancient Athens were chosen by lot rather than by election.

Randomness and religion
Randomness can be seen as conflicting with the deterministic ideas of some religions, such as those where the universe is created by an omniscient deity who is aware of all past and future events. If the universe is regarded to have a purpose, then randomness can be seen as impossible. This is one of the rationales for religious opposition to evolution, which states that non-random selection is applied to the results of random genetic variation.

Hindu and Buddhist philosophies state that any event is the result of previous events, as is reflected in the concept of karma. As such, this conception is at odds with the idea of randomness, and any reconciliation between the two would require an explanation.

In some religious contexts, procedures that are commonly perceived as randomizers are used for divination. Cleromancy uses the casting of bones or dice to reveal what is seen as the will of the gods.

Applications
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and lack of bias.

Politics: Athenian democracy was based on the concept of isonomia (equality of political rights), and used complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly allocated. Allotment is now restricted to situations where "fairness" is approximated by randomization, such as selecting jurors in Anglo-Saxon legal systems and military draft lotteries.

Games: Random numbers were first investigated in the context of gambling, and many randomizing devices, such as dice, shuffled playing cards, and roulette wheels, were first developed for use in gambling. The ability to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create them are usually regulated by government Gaming Control Boards. Random drawings are also used to determine lottery winners. In fact, randomness has been used for games of chance throughout history, and to select individuals for an unwanted task in a fair way (see drawing straws).

Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for games or to seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to order teams in its draft.

Mathematics: Random numbers are also employed where their use is mathematically important, such as sampling for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.

Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials (e.g., randomized controlled trials).

Religion: Although not intended to be random, various forms of divination such as cleromancy see what appears to be a random event as a means for a divine being to communicate their will (see also Free will and Determinism for more).

Generation
It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in systems:
Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators).
Randomness coming from the initial conditions. This aspect is studied by chaos theory, and is observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
Randomness intrinsically generated by the system. This is also called pseudorandomness, and is the kind used in pseudorandom number generators. There are many algorithms (based on arithmetic or cellular automata) for generating pseudorandom numbers; a minimal sketch of one such generator follows below. The behavior of the system can be determined by knowing the seed state and the algorithm used. These methods are often quicker than getting "true" randomness from the environment.

The many applications of randomness have led to many different methods for generating random data. These methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate random numbers. Before the advent of computational random number generators, generating large amounts of sufficiently random numbers (which is important in statistics) required a lot of work. Results would sometimes be collected and distributed as random number tables.
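As a concrete illustration of the third mechanism, here is a minimal linear congruential generator, one of the classic arithmetic methods; the multiplier and increment are a common textbook parameter choice, used purely for illustration. Identical seeds yield identical sequences:

def lcg(seed, a=1664525, c=1013904223, m=2**32):
    # Classic arithmetic pseudorandom recurrence: x_{n+1} = (a*x_n + c) mod m.
    # The entire output stream is determined by the seed.
    state = seed
    while True:
        state = (a * state + c) % m
        yield state / m  # scale to [0, 1)

g1, g2 = lcg(123), lcg(123)
print([round(next(g1), 6) for _ in range(3)])
print([round(next(g2), 6) for _ in range(3)])  # identical: same seed, same sequence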
Measures and tests
There are many practical measures of randomness for a binary sequence. These include measures based on frequency, discrete transforms, complexity, or a mixture of these, such as the tests by Kak, Phillips, Yuen, Hopkins, Beth and Dai, Mund, and Marsaglia and Zaman. Quantum nonlocality has been used to certify the presence of a genuine or strong form of randomness in a given string of numbers.

Misconceptions and logical fallacies
Popular perceptions of randomness are frequently mistaken, and are often based on fallacious reasoning or intuitions.

Fallacy: a number is "due"
This argument says: "In a random selection of numbers, since all numbers eventually appear, those that have not come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system where numbers that come up are removed from the system, such as when playing cards are drawn and not returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where objects are selected independently and none are removed after each event, such as the roll of a die, a coin toss, or most lottery number selection schemes. Truly random processes such as these do not have memory, which makes it impossible for past outcomes to affect future outcomes. In fact, there is no finite number of trials that can guarantee a success.
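The memorylessness of such processes can be checked empirically. This illustrative simulation, which assumes a fair coin modeled with Python's random module and a fixed seed for reproducibility, estimates the chance of heads immediately after a run of five heads:

import random

random.seed(0)
flips = [random.random() < 0.5 for _ in range(1_000_000)]

# Collect the outcome that follows every run of five consecutive heads.
after_streak = [flips[i] for i in range(5, len(flips))
                if all(flips[i - 5:i])]
print(sum(after_streak) / len(after_streak))  # ~0.5: tails is never "due"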
Fallacy: a number is "cursed" or "blessed"
In a random sequence of numbers, a number may be said to be cursed because it has come up less often in the past, and so it is thought that it will occur less often in the future. A number may be assumed to be blessed because it has occurred more often than others in the past, and so it is thought likely to come up more often in the future. This logic is valid only if the randomisation might be biased; for example, if a die is suspected to be loaded, then its failure to roll enough sixes would be evidence of that loading. If the die is known to be fair, then previous rolls can give no indication of future events.

In nature, events rarely occur with a frequency that is known a priori, so observing outcomes to determine which events are more probable makes sense. However, it is fallacious to apply this logic to systems designed and known to make all outcomes equally likely, such as shuffled cards, dice, and roulette wheels.

Fallacy: odds are never dynamic
At the beginning of a scenario, one might calculate the probability of a certain event. However, as soon as one gains more information about the scenario, one may need to re-calculate the probability accordingly.

For example, when being told that a woman has two children, one might be interested in knowing if either of them is a girl, and if yes, the probability that the other child is also a girl. Considering the two events independently, one might expect that the probability that the other child is female is ½ (50%), but by building a probability space illustrating all possible outcomes, one would notice that the probability is actually only ⅓ (33%). To be sure, the probability space does illustrate four ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But once it is known that at least one of the children is female, this rules out the boy-boy scenario, leaving only three ways of having the two children: boy-girl, girl-boy, girl-girl. From this, it can be seen that only ⅓ of these scenarios would have the other child also be a girl (see Boy or girl paradox for more).

In general, by using a probability space, one is less likely to miss out on possible scenarios, or to neglect the importance of new information. This technique can be used to provide insights in other situations, such as the Monty Hall problem, a game show scenario in which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others. Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating that door as an option. With only two doors left (one with the car, the other with another goat), the player must decide to either keep their decision, or to switch and select the other door. Intuitively, one might think the player is choosing between two doors with equal probability, and that the opportunity to choose another door makes no difference. However, an analysis of the probability spaces would reveal that the contestant has received new information, and that changing to the other door would increase their chances of winning.
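The Monty Hall analysis can likewise be confirmed by simulation. In the sketch below, the function name, trial count, and seed are illustrative assumptions; it plays the game many times with and without switching:

import random

def monty_hall(trials=100_000, switch=True, seed=1):
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~2/3
print(monty_hall(switch=False))  # ~1/3

Switching wins about two-thirds of the time, matching the probability-space argument above.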
Mathematics
Probability
null
14161438
https://en.wikipedia.org/wiki/Polysporangiophyte
Polysporangiophyte
Polysporangiophytes, also called polysporangiates or formally Polysporangiophyta, are plants in which the spore-bearing generation (sporophyte) has branching stems (axes) that bear sporangia. The name literally means 'many sporangia plant'. The clade includes all land plants (embryophytes) except for the bryophytes (liverworts, mosses and hornworts), whose sporophytes are normally unbranched, even if a few exceptional cases occur. While the definition is independent of the presence of vascular tissue, all living polysporangiophytes also have vascular tissue, i.e., are vascular plants or tracheophytes. Extinct polysporangiophytes are known that have no vascular tissue and so are not tracheophytes.

Early polysporangiophytes

History of discovery
Paleobotanists distinguish between micro- and megafossils. Microfossils are primarily spores, either single or in groups. Megafossils are preserved parts of plants large enough to show structure, such as stem cross-sections or branching patterns.

John William Dawson, a Canadian geologist and paleobotanist, was the first to discover and describe a megafossil of a polysporangiophyte. In 1859 he published a reconstruction of a Devonian plant, collected as a fossil from the Gaspé region of Canada, which he named Psilophyton princeps. The reconstruction shows horizontal and upright stem-like structures; no leaves or roots are present. The upright stems or axes branch dichotomously and have pairs of spore-forming organs (sporangia) attached to them. Cross-sections of the upright axes showed that vascular tissue was present. He later described other specimens. Dawson's discoveries initially had little scientific impact; Taylor et al. speculate that this was because his reconstruction looked very unusual and the fossil was older than was expected.

From 1917 onwards, Robert Kidston and William H. Lang published a series of papers describing fossil plants from the Rhynie chert – a fine-grained sedimentary rock found near the village of Rhynie, Aberdeenshire, now dated to the Pragian of the Lower Devonian. The fossils were better preserved than Dawson's, and showed clearly that these early land plants did indeed consist of generally naked vertical stems arising from similar horizontal structures. The vertical stems were dichotomously branched, with some branches ending in sporangia. Since these discoveries, similar megafossils have been discovered in rocks of Silurian to mid-Devonian age throughout the world, including Arctic Canada, the eastern US, Wales, the Rhineland of Germany, Kazakhstan, Xinjiang and Yunnan in China, and Australia.

Eohostimella, dated to the Llandovery epoch, is one of the earliest fossils that has been identified as a polysporangiophyte. Fossils assigned to the genus Cooksonia, which is more certainly a polysporangiophyte, have been dated to the succeeding Wenlock epoch.

Taxonomy
The concept of the polysporangiophytes, more formally called Polysporangiophyta, was first published in 1997 by Kenrick and Crane. (The taxobox at the right represents their view of the classification of the polysporangiophytes.) The defining feature of the clade is that the sporophyte branches and bears multiple sporangia. This distinguishes polysporangiophytes from liverworts, mosses and hornworts, which have unbranched sporophytes, each with a single sporangium. Polysporangiophytes may or may not have vascular tissue – those that do are vascular plants or tracheophytes.
Prior to that, most of the early polysporangiophytes had been placed in a single order, Psilophytales, in the class Psilophyta, established in 1917 by Kidston and Lang. The living Psilotaceae, the whisk-ferns, were sometimes added to the class, which was then usually called Psilopsida. As additional fossils were discovered and described, it became apparent that the Psilophyta were not a homogeneous group of plants. In 1975, Banks expanded on his earlier 1968 proposal that split it into three groups at the rank of subdivision. These groups have since been treated at the ranks of division, class and order. A variety of names have been used, which the table below summarizes.

For Banks, rhyniophytes comprised simple leafless plants with terminal sporangia and centrarch xylem (e.g., Cooksonia, Rhynia); zosterophylls comprised plants with lateral sporangia that split distally (away from their attachment) to release their spores, and had exarch strands of xylem (e.g., Gosslingia); and trimerophytes comprised plants with large clusters of downward-curving terminal sporangia that split along their length to release their spores, and had centrarch xylem strands (e.g., Psilophyton).

The research by Kenrick and Crane that established the polysporangiophytes concluded that none of Banks' three groups were monophyletic. The rhyniophytes included "protracheophytes", which were precursors to vascular plants (e.g., Horneophyton, Aglaophyton); basal tracheophytes (e.g., Stockmansella, Rhynia gwynne-vaughanii); and plants allied to the lineages that led to the living club-mosses and allies, as well as to the ferns and seed plants (e.g., Cooksonia species). The zosterophylls did contain a monophyletic clade, but some genera previously included in the group fell outside this clade (e.g., Hicklingia, Nothia). The trimerophytes were paraphyletic stem groups to both the crown group ferns and the crown group seed plants.

Many researchers have urged caution in the classification of early polysporangiophytes. Taylor et al. note that basal groups of early land plants are inherently difficult to characterize, since they share many characters with all later-evolving groups (i.e., have multiple plesiomorphies). In discussing the classification of the trimerophytes, Berry and Fairon-Demaret say that reaching a meaningful classification requires "a breakthrough in knowledge and understanding rather than simply a reinterpretation of the existing data and the surrounding mythology". Kenrick and Crane's cladograms have been questioned – see the Evolution section below. There appears to be no complete Linnean (i.e., rank-based) classification for early polysporangiophytes that is consistent with Kenrick and Crane's cladistic analysis and subsequent research, though Cantino et al. have published a Phylocode classification. Banks' three groups continue to be used for convenience.

Phylogeny
A major cladistic study of land plants was published in 1997 by Kenrick and Crane; this both established the concept of the polysporangiophytes and presented a view of their phylogeny. Since 1997 there have been continual advances in understanding plant evolution, using RNA and DNA genome sequences and chemical analyses of fossils (e.g., Taylor et al. 2006), resulting in revisions to this phylogeny. In 2004, Crane et al. published a simplified cladogram for the polysporangiophytes (which they call polysporangiates), based on a number of figures in Kenrick and Crane (1997).
Their cladogram is reproduced below (with some branches collapsed into 'basal groups' to reduce the size of the diagram). Their analysis is not accepted by other researchers; for example, Rothwell and Nixon say that the broadly defined fern group (moniliforms or monilophytes) is not monophyletic.

More recently, Gerrienne and Gonez have suggested a slightly different characterization of the early-diverging polysporangiophytes. The paraphyletic protracheophytes, such as Aglaophyton, have water-conducting vessels like those of mosses, i.e., without cells containing thickened cell walls. The paratracheophytes, a name intended to replace Rhyniaceae or Rhyniopsida, have 'S-type' water-conducting cells, i.e., cells whose walls are thickened, but in a much simpler fashion than those of the true vascular plants, the eutracheophytes.

Evolution
If the cladogram above is correct, it has implications for the evolution of land plants. The earliest-diverging polysporangiophytes in the cladogram are the Horneophytopsida, a clade at the 'protracheophyte' grade that is sister to all other polysporangiophytes. They had an essentially isomorphic alternation of generations (meaning that the sporophytes and gametophytes were equally free-living), which might suggest that both the gametophyte-dominant life style of bryophytes and the sporophyte-dominant life style of vascular plants evolved from this isomorphic condition. They were leafless and did not have true vascular tissue. In particular, they did not have tracheids: elongated cells that help transport water and mineral salts, and that develop a thick lignified wall at maturity that provides mechanical strength. Unlike plants at the bryophyte grade, their sporophytes were branched.

According to the cladogram, the genus Rhynia illustrates two steps in the evolution of modern vascular plants. Rhynia plants have vascular tissue, albeit significantly simpler than that of modern vascular plants, and their gametophytes are distinctly smaller than their sporophytes (but have vascular tissue, unlike those of almost all modern vascular plants).

The remainder of the polysporangiophytes divide into two lineages, a deep phylogenetic split that occurred in the early to mid Devonian, around 400 million years ago. Both lineages developed leaves, but of different kinds. The lycophytes, which make up less than 1% of the species of living vascular plants, have small leaves (microphylls or, more specifically, lycophylls), which develop from an intercalary meristem (i.e., the leaves effectively grow from the base). The euphyllophytes are by far the largest group of vascular plants, in terms of both individuals and species. Euphyllophytes have large 'true' leaves (megaphylls), which develop through marginal or apical meristems (i.e., the leaves effectively grow from the sides or the apex). (Horsetails have secondarily reduced megaphylls resembling microphylls.)

Both the cladogram derived from Kenrick and Crane's studies and its implications for the evolution of land plants have been questioned by others. A 2008 review by Gensel notes that recently discovered fossil spores suggest that tracheophytes were present earlier than previously thought, perhaps earlier than their supposed stem group members. Spore diversity suggests that there were many plant groups of which no other remains are known. Some early plants may have had a heteromorphic alternation of generations, with later acquisition of isomorphic gametophytes in certain lineages.
The cladogram above shows the 'protracheophytes' diverging earlier than the lycophytes; however, lycophytes were present in the Ludfordian stage of the Silurian, long before the 'protracheophytes' found in the Rhynie chert, which is dated to the Pragian stage of the Devonian. On the other hand, it has been suggested that the poorly preserved Eohostimella, found in deposits of Early Silurian (Llandovery) age, may be a rhyniophyte. Boyce has shown that the sporophytes of some Cooksonia species and allies ('cooksonioids') had stems that were too narrow to have supported sufficient photosynthetic activity for them to be independent of their gametophytes – inconsistent with their position in the cladogram.

Because the stomata in mosses, hornworts and polysporangiophytes are viewed as homologous, it has been suggested that they belong in a natural group named the stomatophytes. The evolutionary history of plants is far from settled.
Biology and health sciences
Vascular plants (except seed plants)
Plants
3345164
https://en.wikipedia.org/wiki/Pteropus
Pteropus
Pteropus (suborder Yinpterochiroptera) is a genus of megabats which are among the largest bats in the world. They are commonly known as fruit bats or flying foxes, among other colloquial names. They live in South Asia, Southeast Asia, Australia, East Africa, and some oceanic islands in the Indian and Pacific Oceans. There are at least 60 extant species in the genus.

Flying foxes eat fruit and other plant matter, and occasionally consume insects as well. They locate resources with their keen sense of smell. Most, but not all, are nocturnal. They navigate with keen eyesight, as they cannot echolocate. They have long life spans and low reproductive outputs, with females of most species producing only one offspring per year. Their slow life history makes their populations vulnerable to threats such as overhunting, culling, and natural disasters. Six flying fox species have been driven extinct in modern times by overhunting. Flying foxes are often persecuted for their real or perceived role in damaging crops. They are ecologically beneficial, assisting in the regeneration of forests via seed dispersal, and they benefit ecosystems and human interests by pollinating plants.

Like other bats, flying foxes are relevant to humans as a source of disease, as they are the reservoirs of rare but fatal disease agents including Australian bat lyssavirus, which causes rabies, and Hendra virus; seven known human deaths have resulted from these two diseases. Nipah virus is also transmitted by flying foxes; it affects more people, with over 100 attributed fatalities. They have cultural significance to indigenous people, with appearances in traditional art, folklore, and weaponry. Their fur and teeth were used as currency in the past, and some cultures still use their teeth as currency today.

Taxonomy and etymology
The genus name Pteropus was coined by French zoologist Mathurin Jacques Brisson in 1762. Prior to 1998, genus authority was sometimes given instead to German naturalist Johann Christian Polycarp Erxleben. Although the Brisson publication (1762) predated the Erxleben publication (1777), thus giving Brisson precedence under the Principle of Priority, some authors preferred Erxleben as genus authority because Brisson's publication did not consistently use binomial nomenclature. In 1998, the International Commission on Zoological Nomenclature (ICZN) decided that Brisson's 1762 publication was a "rejected work" for nomenclatural purposes. Despite rejecting the majority of the publication, the ICZN decided to conserve a dozen generic names from the work and retain Brisson as authority, including Pteropus.

The type species of the genus is the Mauritian flying fox, Pteropus niger (described as Vespertilio vampyrus niger by Robert Kerr in 1792). The decision to designate P. niger as the type species was made by the ICZN through their plenary powers over biological nomenclature. "Pteropus" comes from the Ancient Greek pteron, meaning "wing", and pous, meaning "foot". The phrase "flying fox" has been used to refer to Pteropus bats since at least 1759.

Species

Description

External characteristics
Flying fox species vary considerably in body weight. Across all species, males are usually larger than females. The large flying fox has the longest forearm length and reported wingspan of any bat species, but some bat species exceed it in weight; the Indian and great flying foxes are heavier.
Outside this genus, the giant golden-crowned flying fox (genus Acerodon) is the only bat with similar dimensions. Most flying fox species are considerably smaller; the masked, Temminck's, Guam, and dwarf flying foxes are among the smallest.

The pelage is long and silky, with a dense underfur. In many species, individuals have a "mantle" of contrasting fur color on the back of the head, the shoulders, and the upper back. Flying foxes lack tails. As the common name "flying fox" suggests, their heads resemble those of small foxes because of their small ears and large eyes. Females have one pair of mammae, located in the chest region. The ears are long, pointed at the tip, and lack tragi, the outer margin of each ear forming an unbroken ring. The toes have sharp, curved claws. While microbats have a claw only on each thumb of their forelimbs, flying foxes additionally have a claw on each index finger.

Skull and dentition
The skulls of Pteropus species are composed of 24 bones: the snout is made of 7, the cranium of 16, and the mandible is a single bone. The braincase is large and bulbous. Like all mammals, flying foxes have three middle ear ossicles, which assist in transmitting sound to the brain. Flying fox skulls continue to develop after birth. Compared to adults, young flying foxes have very short snouts; as they reach maturity, the maxilla elongates, gaining bone between the zygomatic processes and the canine teeth.

Based on the grey-headed flying fox's development, pups are born with some milk teeth already erupted: the canines and incisors. By 9 days old, all the milk teeth have emerged, for a total of 20 teeth. By 140 days old (4.6 months), all the milk teeth have fallen out and been replaced by permanent teeth. The canines are usually replaced first, followed by the premolars, incisors, and then molars. The adult dentition totals 34 teeth. The occlusal surface of the molars is generally smooth, but with longitudinal furrows.

Internal systems
Flying foxes have large hearts and a relatively fast heart rate: resting individuals have a heart rate of 100–400 beats per minute. Flying foxes have simple digestive tracts; the time between ingestion and excretion can be as short as 12 minutes. They lack both a cecum and an appendix. The stomach has marked cardiac and fundic regions.

Intelligence
The megabats, including flying foxes, have the greatest encephalization quotient (brain size relative to body size) of any bat family, at 1.20. This value is equivalent to that of domestic dogs. Flying foxes display behaviors that indicate a reliance on long-term information storage. Though they have wide-ranging movements and cover thousands of square kilometers annually, they are consistently able to locate the same resource patches and roosts, visiting them repeatedly in a strategy known as trap-lining. They can also be conditioned to perform behaviors, as in one study where spectacled flying foxes were trained to pull a lever using juice as a reinforcement. In a follow-up to the initial study, individuals who had learned to pull the lever to receive juice still did so 3.5 years later.

Senses

Smell
Flying foxes rely heavily on their sense of smell, and have large olfactory bulbs to process scents. They use scent to locate food, for mothers to locate their pups, and for mates to locate each other.
Males have enlarged, androgen-sensitive sebaceous glands on their shoulders that they use for scent-marking their territories, particularly during the mating season. The secretions of these glands vary by species: of the 65 chemical compounds identified from the glands of four species, no compound was found in all of them. Males also engage in "urine washing", meaning that they coat themselves in their own urine.

Sight
Flying foxes do not echolocate, and therefore rely on sight to navigate. Their eyes are relatively large and positioned on the front of their heads, giving them binocular vision. Like most mammals, though not primates, they are dichromatic. They have both rods and cones: "blue" cones that detect short-wavelength light and "green" cones that detect medium-to-long wavelengths. The rods greatly outnumber the cones, however, as cones comprise only 0.5% of photoreceptors. Flying foxes are adapted to seeing in low-light conditions.

Evolutionary history
Flying foxes are poorly represented in the fossil record. Relative to the current number of extant species, the Pteropodidae has one of the most incomplete fossil records of any bat group. As of 2014, no flying fox fossils were known from before the Holocene. Many flying foxes live in the tropics, where conditions for fossilization are poor.

Based on molecular evolution, flying foxes diverged from a common ancestor with Rousettus 28–18 million years ago, and from their sister taxa Neopteryx and Acerodon 10.6–6.6 million years ago. Neopteryx, Acerodon, Desmalopex, Melonycteris, Mirimiri, Pteralopex, and Styloctenium are all relatively closely related to the flying foxes, as the other members of the subfamily Pteropodinae. Phylogenetic analysis indicates that flying foxes diversified rapidly in an explosive evolutionary radiation, creating many taxa in a relatively short time frame. Most flying fox lineages emerged after the Zanclean, forming two major clades: one consisting of the Indian Ocean species and the other of the Melanesian, Micronesian, Australian, and insular Southeast Asian species.

Flying foxes likely originated on mainland Asia; molecular data suggest that there were at least three colonization events into the Indian Ocean. One event gave rise to Livingstone's fruit bat and the Pemba flying fox, which are the westernmost flying foxes. A second colonization event brought the Rodrigues flying fox to Rodrigues Island, while a third resulted in several species diverging across Mauritius, the Seychelles, Madagascar, and Aldabra. With one possible exception (the masked flying fox, P. personatus), the flying foxes are likely monophyletic.

There are over 60 extant species of flying fox. Flying foxes are now present from the western Indian Ocean midway through the Pacific Ocean, as far east as the Cook Islands. They are found in tropical and subtropical climates.

Biology and ecology

Reproduction and life cycle
Many species of flying fox are polygynandrous, meaning that each individual will mate with several others. The Samoa flying fox is a notable exception, being monogamous. Flying fox sexual behaviors include oral sex in addition to intercourse, with fellatio and cunnilingus observed between opposite sexes, as well as homosexual fellatio in at least one species, the Bonin flying fox. Opposite-sex oral sex is associated with increased duration of intercourse, while same-sex fellatio is hypothesized to encourage colony formation of otherwise antagonistic males in colder climates.
Flying fox gestation length varies among species, from 140 to 190 days (4.6–6.3 months). Females have a litter size of one young at a time, called a pup, though twins have occasionally been documented in some species. Twins can be fraternal, identical, or the result of superfetation. Pups are altricial and sparsely furred at birth, and are therefore dependent on their mothers for care. Pups are relatively small at birth, weighing approximately 12% of the mother's weight; bats in other genera can have pups that weigh as much as 30% of the mother's weight at birth. Pups cling to their mothers' abdomens, gripping the fur with their thumb claws and teeth, and females carry the pups for the first several weeks of life. After this, the females may leave the pups behind at the roost at night while they forage. As with nearly all bat species, males do not assist females in parental care. While male flying foxes of at least one species, the Bismarck masked flying fox, can lactate, it is unclear whether the lactation is functional and males actually nurse pups, or whether it is a result of stress or malnutrition.

Pups fledge beginning at 3 months old, but may not be weaned until 4–6 months old, and may stay with their mothers until age one. Flying foxes do not reach sexual maturity until 1.5–2 years old. Females can have up to two litters annually, though one is the norm due to the long weaning period. Most flying foxes are seasonal breeders and give birth in the spring, though the Mariana fruit bat seems to breed aseasonally, with new pups documented throughout the year. Females remain fertile, with no decrease in reproductive capability, for at least the first 12 or 13 years of life.

Flying foxes, like all bats, are long-lived relative to their size. In the wild, average lifespans are likely 15 years, though individuals in populations that face excessive disturbance may have lifespans as short as 7.1 years. In captivity, individuals can live approximately 20–28 years. The longest-lived flying fox was an Indian flying fox named Statler, a resident at Bat World Sanctuary for his last few years, who was born at a zoo in 1987 and was 34 years old at the time of his death.

Social systems
Most flying fox species are gregarious and form large aggregations of individuals called colonies or "camps." The large flying fox forms colonies of up to 15,000 individuals, while the little red flying fox forms colonies of up to 100,000. A few species and subspecies, such as Orii's flying fox (P. dasymallus inopinatus) and the Ceram fruit bat, are solitary.

Colony size varies throughout the year in response to biological needs. The grey-headed flying fox forms harems during the breeding season, each consisting of one male and up to six females; these colonies break up after the breeding season is over. In the Bonin flying fox, colony formation is based on both the sex and age of individuals, as well as the season. In the winter breeding season, adult females form colonies that include a few adult males (likely harems). Adult males who do not roost with females form colonies with other adult and subadult males, while subadults form mixed-sex "subadult groups" with each other. In the summer, however, individuals are solitary, with the exception of nursing females, who roost with their pups.

Diet and foraging
Flying foxes consume 25–35% of their body weight daily. They are generalists that will consume a variety of items to meet their nutritional needs.
Food items include fruit, flowers, nectar, and leaves, and they will sometimes deliberately consume insects such as cicadas as well. In Australia, eucalypt blossoms and pollen are preferred food sources, followed by Melaleuca and Banksia flowers. They feed on a wide variety of crops as well, causing conflicts with farmers. Crops eaten by flying foxes include sisal, cashew, pineapple, areca, breadfruit, jackfruit, neem, papaya, citrus, fig, mango, banana, avocado, guava, sugar cane, tamarind, grapes, and more.

In captivity, the recommended diet for flying foxes consists of two-thirds hard fruits like pears and apples and one-third soft fruits. Bananas and other high-fiber fruits should only be offered occasionally, as flying foxes are not adapted to high-fiber diets. Protein supplements are recommended for captive flying foxes; other supplements, such as vitamin C, calcium, chondroitin sulfate, and glucosamine, can be recommended periodically.

The majority of flying fox species are nocturnal and forage at night. A few island species and subspecies are diurnal, however, hypothesized as a response to a lack of predators. Diurnal taxa include P. melanotus natalis, the Mauritian flying fox, the Caroline flying fox, P. p. insularis, and the Seychelles fruit bat.

Foraging resources are often far from roosts, requiring long commuting flights; flying foxes can fly for three hours or more at a time. Some colonial species will forage in groups, especially when resources are abundant, while less social species forage alone. When they land on a tree with food, they hang onto a branch with their clawed hind feet and use their clawed thumbs to pull branches bearing flowers or fruits towards them. As they forage on fruit, flying foxes compress the fruit against the palate with the tongue to squeeze out and consume the juices; the rest of the fruit is then discarded in "ejecta pellets."

Role in ecosystems
Flying foxes have important roles as seed dispersers and pollinators. They help spread the seeds of the fruit they eat by discarding them in ejecta pellets or through their guano. In Madagascar, fig seeds have better germination success if they have passed through the gut of a flying fox, which is important because fig trees are a vital pioneer species in regenerating lost forest. Even though flying foxes can have a gut transit time as fast as 12 minutes, seeds can be retained in the gut for as long as 20 hours, and because the bats travel large distances, seeds can be deposited far from the parent tree. They are particularly important in fragmented forests, as many other frugivores are terrestrial and often confined to forest fragments; flying foxes can spread seeds beyond the fragments through flight.

Flying foxes pollinate a variety of plants, including the economically valuable durian. They forage on its nectar in such a way that the flowers (and eventual fruit production) are not usually harmed. Flying fox pollination has a positive effect on durian reproductive success, suggesting that both the flying foxes and the durian trees benefit from this relationship.

Conservation

Conservation status
Of the 62 flying fox species evaluated by the IUCN as of 2018, 3 are considered critically endangered: the Aru flying fox, Livingstone's fruit bat, and the Vanikoro flying fox. Another 7 species are listed as endangered; 20 are listed as vulnerable, 6 as near threatened, 14 as least concern, and 8 as data deficient.
A further 4 are listed as extinct: the dusky flying fox, the large Palau flying fox, the small Mauritian flying fox, and the Guam flying fox. Over half of the species are threatened with extinction today; in the Pacific in particular, a number of species have died out as a result of hunting, deforestation, and predation by invasive species. Six flying fox species are believed to have gone extinct between 1864 and 2014: the Guam, large Palau, small Mauritian, dusky, large Samoan, and small Samoan flying foxes.

Legal status
All species of Pteropus are placed on Appendix II of CITES, and 10 on Appendix I, which restricts international trade. Individual species have different legal protections from hunting and domestic trade that reflect the environmental laws of the countries where they are found.

In some countries, such as Bangladesh, Sri Lanka, and Thailand, flying foxes are absolutely protected from harm, under the Wildlife Preservation and Security Act of 2012, the Fauna and Flora Protection Ordinance of 1937, and the Wildlife Protection and Reservation Act of 1992, respectively. However, in Thailand, flying fox poaching and the illegal bushmeat trade still occur outside of protected areas; the large flying fox and the small flying fox are particularly prone to poaching and roost disturbance.

In other countries, such as Australia, Japan, and the United States, some species of conservation concern are protected under national environmental legislation, while others are not. In Australia, two flying foxes are listed under the Environment Protection and Biodiversity Conservation Act of 1999: the grey-headed and spectacled flying foxes are listed as "vulnerable." Farmers can apply for permits to kill flying foxes when they are causing crop damage.

Several flying fox species occur in Japan. The Bonin flying fox has been a Natural Monument of Japan since 1969, which means that it is illegal to capture or disturb the bats without appropriate permits. Two subspecies of the Ryukyu flying fox (P. d. dasymallus and P. d. daitoensis) are also listed as Natural Monuments. Flying foxes are not designated game species in Japan, and therefore cannot be legally hunted under the Wildlife Protection and Hunting Law. The Bonin flying fox and P. d. daitoensis are also listed as National Endangered Species, meaning that they cannot be killed or harmed; furthermore, the sale or transfer of live or dead individuals, in whole or in part, is prohibited without permits.

Despite not occurring in the continental United States, several species and subspecies are listed under its Endangered Species Act of 1973. Pteropus mariannus mariannus, a subspecies of the Mariana fruit bat, is listed as threatened, while the Rodrigues flying fox and Guam flying fox are listed as endangered. Additionally, the U.S. government has been petitioned to list the Aru flying fox and Bonin flying fox as threatened or endangered.

In countries such as India and Pakistan, flying foxes explicitly have no legal protection. In India, they are listed as "vermin" under the Wildlife Protection Act of 1972. Pakistan's only flying fox, the Indian flying fox, is listed under Schedule 4 of the Punjab Wildlife (Protection, Preservation, Conservation and Management) Act of 1974, meaning that it has no legal protections and can be hunted. In Mauritius, flying foxes were formerly protected but are now legally culled at a large scale.
In 2015, the Mauritian government passed the Native Terrestrial Biodiversity and National Parks Act, which legalized culling of the Mauritian flying fox; over 40,000 Mauritian flying foxes were culled in a two-year period, reducing the population by an estimated 45%. The decision was controversial, with researchers stating, "Because they spread seeds and pollinate flowers, flying foxes are vital for regenerating lost forests."

Legal protection can vary within a country as well, as in Malaysia. Under the 1990 Protection of Wild Life Amendment Order, flying foxes can be hunted with a permit; each permit is good for killing up to 50 flying foxes, and permits cost US$8 each. However, under the Protection of Wild Life Act of 1972, flying foxes can be killed without permits if they are causing damage or if there is "reason to believe that it is about to cause serious damage" to crops. In 2012, the Malaysian state of Terengganu issued a moratorium on hunting flying foxes. In Sarawak, all bat species are listed as "Protected" and hunting them is not legal.

Factors causing decline

Anthropogenic sources
Flying fox species are declining or going extinct as a result of several human impacts on their environments, in addition to natural phenomena. Their populations are especially vulnerable to threats because the litter size is usually only one individual and females generally have only one litter per year. Even when nearly every female (90%) successfully produces and raises young, a population whose annual mortality rate exceeds 22% will steadily decline. Invasive species, such as the brown tree snake, can seriously affect populations; the brown tree snake consumes so many pups that it has reduced the recruitment of the Guam population of the Mariana fruit bat to essentially zero.

Many flying fox species are threatened by overhunting. While they have long been a dietary component of indigenous people, expanding human populations and more efficient weapons have resulted in population declines, local extinctions, and extinctions. Overhunting is believed to be the primary cause of extinction for the small Mauritian flying fox and the Guam flying fox.

Flying foxes are also threatened by excessive culling arising from conflict with farmers. They are shot, beaten to death, or poisoned to reduce their populations. Mortality also occurs via accidental entanglement in netting used to prevent the bats from eating fruit. Culling can dramatically reduce flying fox populations, as in the Mauritian cull described above.

Flying foxes are also killed by electrocution. In one Australian orchard, it is estimated that over 21,000 bats were electrocuted to death in an 8-week period. Farmers construct electrified grids over their fruit trees to kill flying foxes before they can consume the crop. The grids are of questionable effectiveness at preventing crop loss; one farmer who operated such a grid estimated that they still lost a large quantity of fruit to flying foxes in a year. Some electrocution deaths are also accidental, such as when bats fly into overhead power lines.

Climate change causes flying fox mortality and is a source of concern for species persistence. Extreme heat waves in Australia were responsible for the deaths of more than 30,000 Australian flying foxes from 1994 to 2008. Females and young bats are most susceptible to extreme heat, which affects a population's ability to recover.
Flying foxes are also threatened by the sea level rise associated with climate change, as several taxa are endemic to low-lying atolls.

Natural sources
Because many species are endemic to a single island, they are vulnerable to random events such as typhoons. A 1979 typhoon halved the remaining population of the Rodrigues flying fox. Typhoons result in indirect mortality as well: because typhoons defoliate the trees, flying foxes become more visible and thus more easily hunted by humans. Food resources for the bats become scarce after major storms, and flying foxes resort to riskier foraging strategies, such as consuming fallen fruit off the ground, where they are more vulnerable to depredation by domestic cats, dogs, and pigs. Flying foxes are also threatened by disease, such as tick paralysis, which affects the spectacled flying fox and is responsible for an estimated 1% of its annual mortality.

Captive breeding
Several species of endangered flying fox are bred in captivity to augment their population sizes. Critically endangered Livingstone's fruit bats were taken from the wild starting in 1995 to create a captive breeding program; all captive individuals remain the property of the Comorian government. Seventeen individuals were collected from the wild; with breeding, there were 71 in captivity as of 2017. Individuals are held at the Jersey Zoo and the Bristol Zoo. Though the program has been successful in increasing the population, caretakers of the captive population have had to deal with husbandry issues such as obesity and cardiomyopathy. Relative to their wild counterparts, captive bats have a higher percentage of body fat and a lower percentage of muscle mass. The problem is most pronounced in dominant males, which are the most sedentary. Addressing these concerns involves increasing flight space so that the animals can exercise adequately; keepers are also exploring ways of distributing food within enclosures to encourage exercise.

The endangered Rodrigues flying fox has been bred in captivity with great success. By 1979, only 70–100 individuals were left in the world. In 1976, 25 individuals had been removed from the wild by the Durrell Wildlife Conservation Trust to begin a breeding program, which in 1988 was called "undoubtedly the most important chiropteran breeding project now in operation." By 2016, there were 180 individuals in 16 zoos across the United States alone. Worldwide, 46 zoos participated in the Rodrigues flying fox breeding program as of 2017.

Relationship to people

Food
Many flying fox species are killed for bushmeat. The bushmeat harvest is frequently unsustainable, often resulting in severe population decline or local extinction. Flying foxes are killed and sold for bushmeat in several countries in Southeast Asia, South Asia, and Oceania, including Indonesia, Malaysia, Papua New Guinea, the Philippines, Bangladesh, China, Fiji, and Guam. Flying fox consumption is particularly common in countries with low food security and a lack of environmental regulation.

In some cultures in the region, however, eating flying fox meat is taboo. In Namoluk, locals are repulsed by the idea of eating flying foxes because the bats urinate on themselves. In predominately Muslim regions such as much of Indonesia, flying foxes are rarely consumed because of halal dietary restrictions. North Sulawesi has the greatest demand for flying fox bushmeat.
Despite being in Muslim-majority Indonesia, North Sulawesi is predominately Christian, and many locals therefore do not follow halal guidelines prohibiting flying fox consumption. In Manado, most local people consume flying fox meat at least once a month, and the frequency of consumption increases tenfold around holidays. Locals believe that "unique meat" from undomesticated animals should be served on special occasions to "enliven the atmosphere." Suggestions to make the flying fox bushmeat trade more sustainable include enforcing a quota system for harvesting, encouraging hunters to release female and juvenile individuals, and providing economic alternatives to those who make a living selling flying fox bushmeat.

In Guam and the Commonwealth of the Northern Mariana Islands, consumption of the Mariana fruit bat exposes locals to the neurotoxin beta-Methylamino-L-alanine (BMAA), which may later lead to neurodegenerative diseases. BMAA may become biomagnified in humans who consume flying foxes; the flying foxes themselves are exposed to BMAA by eating cycad fruits.

Medicine
Flying foxes are killed for use in traditional medicine. The Indian flying fox, for example, has many perceived medical uses: some believe that its fat is a treatment for rheumatism; tribes in the Attappadi region of India eat the cooked flesh of the Indian flying fox to treat asthma and chest pain; and healers of the Kanda tribe of Bangladesh use hair from Indian flying foxes to create treatments for "fever with shivering."

Transmitting disease
Flying foxes are the natural reservoirs of several viruses, some of which can be transmitted to humans. Notably, flying foxes can transmit lyssaviruses, which cause rabies. The rabies virus itself is not naturally present in Australia; Australian bat lyssavirus is the only lyssavirus present there. Australian bat lyssavirus was first identified in 1996, and it is very rarely transmitted to humans. Transmission occurs from the bite or scratch of an infected animal, but can also occur from getting the infected animal's saliva into a mucous membrane or an open wound. Exposure to flying fox blood, urine, or feces does not pose a risk of exposure to Australian bat lyssavirus. Since 1994, there have been three records of people becoming infected with it; all three were in Queensland, and each case was fatal.

Flying foxes are also reservoirs of henipaviruses such as Hendra virus and Nipah virus. Hendra virus was first identified in 1994; it too rarely occurs in humans. From 1994 to 2013, there were seven reported cases of Hendra virus affecting people, four of which were fatal. The hypothesized primary route of human infection is via contact with horses that have come into contact with flying fox urine; there are no documented instances of direct transmission between flying foxes and humans. As of 2012, a vaccine is available for horses to decrease the likelihood of infection and transmission.

Nipah virus was first identified in 1998 in Malaysia. Since then, there have been several Nipah outbreaks in Malaysia, Singapore, India, and Bangladesh, resulting in over 100 fatalities. A 2018 outbreak in Kerala, India, resulted in 19 people being infected, of whom 17 died. The overall fatality rate is 40–75%. Humans can contract Nipah virus from direct contact with flying foxes or their fluids, through exposure to an intermediate host such as domestic pigs, or from contact with an infected person.
A 2014 study of the Indian flying fox and Nipah virus found that while Nipah virus outbreaks are more likely in areas preferred by flying foxes, "the presence of bats in and of itself is not considered a risk factor for Nipah virus infection." Rather, the consumption of date palm sap is a significant route of transmission. The practice of date palm sap collection involves placing collecting pots at date palm trees. Indian flying foxes have been observed licking the sap as it flows into the pots, as well as defecating and urinating in proximity to the pots. In this way, humans who drink the palm sap can be exposed to the bats' viruses. The use of bamboo skirts on collecting pots lowers the risk of contamination from bat fluids. Flying foxes can transmit several non-lethal diseases as well, such as Menangle virus and Nelson Bay virus. These viruses rarely affect humans and few cases have been reported. While other bat species have been suspected or implicated as the reservoir of diseases such as SARS and Ebola, flying foxes are not suspected as hosts for either causative virus. Pests Flying foxes are often considered pests due to the damage they cause to orchard crops. Flying foxes have been cited as particularly destructive to almonds, guavas, and mangoes in the Maldives; lychee in Mauritius; areca in India; and stone fruits in Australia. Orchard damage from other animals is often misattributed to flying foxes, however, and economic losses can be difficult to quantify and may be exaggerated. To prevent fruit damage, farmers may legally or illegally cull flying foxes. In the 1800s, the Australian government paid farmers bounties to kill flying foxes, though the practice has since been discontinued. Alternatives to culling include placing barriers between the bats and fruit trees, such as netting, or harvesting fruit in a timely manner to avoid attracting as many flying foxes. Netting is the most effective way to prevent crop loss, though some farmers find it cost-prohibitive: it can cost US$4,400–44,000 to net crops. Other methods of preventing fruit loss may also involve the use of scare guns, chemical deterrents, or night-time lights. Alternatively, planting Singapore cherry trees and other decoy crops next to an orchard can be effective, as flying foxes are much more attracted to their fruits than to many other orchard crops. The location of flying fox camps can be a disturbance to humans. In Batemans Bay, Australia, locals report being so disturbed by flying fox vocalizations in the morning that they lose sleep. Flying foxes can fly into power lines and cause electricity outages. Their guano and body odor are also unpleasant to smell. The presence of flying fox colonies can cause nearby property values to decline. In culture Flying foxes are featured in many indigenous cultures and traditions. A Dreamtime folklore story from the North Coast of New South Wales in Australia features an impatient flying fox wanting the Great Spirit to teach him how to be a bird, only to be hung upside down on a branch. They were also featured in Aboriginal cave art, as evinced by several surviving examples. In Tonga, flying foxes are considered sacred. All flying foxes are the property of the king, meaning non-royal persons cannot harm them in any way. Tongan legend states that a colony of flying foxes at Kolovai is descended from a pair of flying foxes gifted to the King of Tonga by the Princess of Samoa. In the Indian village of Puliangulam, a colony of Indian flying foxes roosts in a banyan tree. 
Villagers believe that the flying foxes are under the protection of Muni, and do not harm the bats. A shrine to Muni is beneath the tree. If locals believe that they have offended Muni by failing to protect the bats, they will pray and perform puja after offering sweet rice, coconut, and bananas to those attending the ceremony. Flying foxes are also featured in folk stories from Papua New Guinea. Stories with flying foxes include a legend about a cockatoo stealing feathers from the flying fox, resulting in it becoming nocturnal. Another story features a flying fox that could transform into a young man; the flying fox stole a woman away from her husband to take as his wife. Another legend states that a flying fox-man was responsible for introducing yams to their people. Indigenous societies in Oceania used parts of flying foxes for functional and ceremonial weapons. In the Solomon Islands, people created barbs out of their bones for use in spears. In New Caledonia, ceremonial axes made of jade were decorated with braids of flying fox fur. Flying fox wings were depicted on the war shields of the Asmat people of Indonesia; they believed that the wings offered protection to their warriors. There are modern and historical references to flying fox byproducts used as currency. In New Caledonia, braided flying fox fur was once used as currency. On the island of Makira, which is part of the Solomon Islands, indigenous peoples still hunt flying foxes for their teeth as well as for bushmeat. The canine teeth are strung together on necklaces that are used as currency. Teeth of the insular flying fox are particularly prized, as they are usually large enough to drill holes in. The Makira flying fox is also hunted, though, despite its smaller teeth. Deterring local peoples from using flying fox teeth as currency may be detrimental to the species, with Lavery and Fasi noting, "Species that provide an important cultural resource can be highly treasured." Emphasizing sustainable hunting of flying foxes to preserve cultural currency may be more effective than encouraging the abandonment of cultural currency. Even if flying foxes were no longer hunted for their teeth, they would still be killed for bushmeat; therefore, retaining their cultural value may encourage sustainable hunting practices. Lavery stated, "It’s a positive, not a negative, that their teeth are so culturally valuable. The practice of hunting bats shouldn’t necessarily be stopped, it needs to be managed sustainably." Other uses Flying foxes and other bat species in Southeast Asia are often killed and sold as "mummies". The mummified bodies or skeletons of these bats are often shipped to the United States where they are sold in souvenir or curiosity shops or online through vendors such as Etsy or eBay. From 2000 to 2013, over 100,000 dead bats were imported to the United States. Bat conservationist Merlin Tuttle wrote, "I've seen huge losses, mostly due to various kinds of over-harvesting, especially at cave entrances, either for food or for sale as mummies." Despite sometimes being advertised as "sustainable," the practice could lead to overharvesting and depletion of flying fox species, with Tuttle saying, "It is a virtual certainty that the bats you've seen advertised are not sustainably harvested."
Biology and health sciences
Bats
null
3345336
https://en.wikipedia.org/wiki/Riparian%20zone
Riparian zone
A riparian zone or riparian area is the interface between land and a river or stream. In some regions, the terms riparian woodland, riparian forest, riparian buffer zone, riparian corridor, and riparian strip are used to characterize a riparian zone. The word riparian is derived from Latin ripa, meaning "river bank". Riparian is also the proper nomenclature for one of the terrestrial biomes of the Earth. Plant habitats and communities along the river margins and banks are called riparian vegetation, characterized by hydrophilic plants. Riparian zones are important in ecology, environmental resource management, and civil engineering because of their role in soil conservation, their habitat biodiversity, and the influence they have on terrestrial and semiaquatic fauna as well as aquatic ecosystems. Riparian zones may take several forms, including grassland, woodland, wetland, and even non-vegetative areas, and may be natural or engineered for soil stabilization or restoration. These zones are important natural biofilters, protecting aquatic environments from excessive sedimentation, polluted surface runoff, and erosion. They supply shelter and food for many aquatic animals and shade that limits stream temperature change. When riparian zones are damaged by construction, agriculture or silviculture, biological restoration can take place, usually by human intervention in erosion control and revegetation. If the area adjacent to a watercourse has standing water or saturated soil for as long as a season, it is normally termed a wetland because of its hydric soil characteristics. Because of their prominent role in supporting a diversity of species, riparian zones are often the subject of national protection in a biodiversity action plan. They are also known as "plant or vegetation waste buffers". Research shows that riparian zones are instrumental in water quality improvement for both surface runoff and water flowing into streams through subsurface or groundwater flow. Riparian zones can play a role in lowering nitrate contamination in surface runoff, such as manure and other fertilizers from agricultural fields, that would otherwise damage ecosystems and human health. In particular, the attenuation of nitrate, or denitrification, of the nitrates from fertilizer in this buffer zone is important. The use of wetland riparian zones shows a particularly high rate of removal of nitrate entering a stream and thus has a place in agricultural management. Riparian groundwater can also play an important role in the transport of carbon from terrestrial ecosystems to aquatic ecosystems. As such, a distinction can be made between parts of the riparian zone that connect large parts of the landscape to streams, and riparian areas with more local groundwater contributions. Characteristics Key features of a typical riparian forest include:
1. Location and Hydrological Context
- Riparian forests are primarily situated alongside rivers or streams, with varying degrees of proximity to the water's edge.
- These ecosystems are intimately connected with dynamic water flow and soil processes, influencing their characteristics.
2. Diverse Ecosystem Components
- Riparian forests feature a diverse combination of elements, including:
- Mesic terrestrial vegetation (vegetation adapted to moist conditions).
- Dependent animal life, relying on the riparian environment for habitat and resources.
- Local microclimate influenced by the presence of water bodies.
3. Distinct Vegetation Structure
- The vegetation in riparian forests exhibits a multi-layered structure.
- Moisture-dependent trees are the dominant feature, giving these forests a unique appearance, especially in savanna regions.
- These moisture-dependent trees define the landscape, accompanied by a variety of mesic understorey, shrub, and ground cover species.
4. Floristic Composition
- Riparian forests often host plant species that have high moisture requirements.
- The flora typically includes species native to the region, adapted to the moist conditions provided by proximity to water bodies.
In summary, riparian forests are characterized by their location along waterways, their intricate interplay with water and soil dynamics, a diverse array of vegetation layers, and a plant composition favoring moisture-dependent species. Roles and functions Riparian zones dissipate stream energy. The meandering curves of a river, combined with vegetation and root systems, slow the flow of water, which reduces soil erosion and flood damage. Sediment is trapped, reducing suspended solids to create less turbid water, replenish soils, and build stream banks. Pollutants are filtered from surface runoff, enhancing water quality via biofiltration. Riparian zones also provide wildlife habitat, increased biodiversity, and wildlife corridors, enabling aquatic and riparian organisms to move along river systems while avoiding isolated communities. Riparian vegetation can also provide forage for wildlife and livestock. Riparian zones are also important for the fish that live within rivers, such as brook trout and other charr. Impacts on riparian zones can affect fish, and restoration is not always sufficient to recover fish populations. They provide native landscape irrigation by extending seasonal or perennial flows of water. Nutrients from terrestrial vegetation (e.g. plant litter and insect drop) are transferred to aquatic food webs and are a vital source of energy there. The vegetation surrounding the stream helps to shade the water, mitigating water temperature changes. Thinning of riparian zones has been observed to cause increased maximum temperatures, higher fluctuations in temperature, and elevated temperatures occurring more frequently and for longer periods of time. Extreme changes in water temperature can have lethal effects on fish and other organisms in the area. The vegetation also contributes wood debris to streams, which is important to maintaining geomorphology. Riparian zones also act as important buffers against nutrient loss in the wake of natural disasters, such as hurricanes. Many of the characteristics of riparian zones that reduce the inputs of nitrogen from agricultural runoff also retain the necessary nitrogen in the ecosystem after hurricanes threaten to dilute and wash away critical nutrients. From a social aspect, riparian zones contribute to nearby property values through amenity and views, and they improve the enjoyment of footpaths and bikeways by supporting foreshoreway networks. Space is created for riparian sports such as fishing, swimming, and the launching of vessels and paddle craft. The riparian zone acts as a sacrificial erosion buffer to absorb impacts of factors including climate change, increased runoff from urbanization, and increased boat wake without damaging structures located behind a setback zone. 
"Riparian zones play a crucial role in preserving the vitality of streams and rivers, especially when faced with challenges stemming from catchment land use, including agricultural and urban development. These changes in land utilization can exert adverse impacts on the health of streams and rivers and, consequently, contribute to a decline in their reproductive rates." Role in logging The protection of riparian zones is often a consideration in logging operations. The undisturbed soil, soil cover, and vegetation provide shade, plant litter, and woody material and reduce the delivery of soil eroded from the harvested area. Factors such as soil types and root structures, climatic conditions, and vegetative cover determine the effectiveness of riparian buffering. Activities associated with logging, such as sediment input, introduction or removal of species, and the input of polluted water all degrade riparian zones. Vegetation The assortment of riparian zone trees varies from those of wetlands and typically consists of plants that are either emergent aquatic plants, or herbs, trees and shrubs that thrive in proximity to water. In South Africa's fynbos biome, Riparian ecosystem are heavily invaded by alien woody plants. Riparian plant communities along lowland streams exhibit remarkable species diversity, driven by the unique environmental gradients inherent to these ecosystems. Riparian zones in Africa Riparian forest can be found in Benin, West Africa. In Benin, where the savanna ecosystem prevails, "riparian forests" include various types of woodlands, such as semi-deciduous forests, dry forests, open forests, and woodland savannas. These woodlands can be found alongside rivers and streams. In Nigeria, you can also discover riparian zones within the Ibadan region of Oyo state. Ibadan, one of the oldest towns in Africa, covers a total area of 3,080 square kilometers and is characterized by a network of perennial water streams that create these valuable riparian zones. In the research conducted by Adeoye et al. (2012) on land use changes in Southwestern Nigeria, it was observed that 46.18 square kilometers of the area are occupied by water bodies. Additionally, most streams and rivers in this region are accompanied by riparian forests. Nevertheless, the study also identified a consistent reduction in the extent of these riparian forests over time, primarily attributed to a significant deforestation rate. In Nigeria, according to Momodu et al. (2011), there has been a notable decline of about 50% in the riparian forest coverage within the period of 1978 to 2000. This reduction is primarily attributed to alterations in land use and land cover. Additionally, their research indicates that if current trends continue, the riparian forests may face further depletion, potentially leading to their complete disappearance by the year 2040. Riparian zones can also be found in Cape Agulhas region of South Africa. Riparian areas along South African rivers have experienced significant deterioration as a result of human activities. Similar to many other developed and developing areas worldwide, the extensive building of dams in upstream river areas and the extraction of water for irrigation purposes have led to diminished water flows and changes in the riparian environment. 
North America Water's edge herbaceous perennials:
Peltandra virginica – Arrow Arum
Sagittaria lancifolia – Arrowhead
Carex stricta – Tussock Sedge
Iris virginica – Southern Blue Flag Iris
Inundated riparian zone herbaceous perennials:
Sagittaria latifolia – Duck Potato
Schoenoplectus tabernaemontani – Softstem Bulrush
Scirpus americanus – Three-square Bulrush
Eleocharis quadrangulata – Square-stem Spikerush
Eleocharis obtusa – Spikerush
Western In western North America and the Pacific coast, the riparian vegetation includes:
Riparian trees
Sequoia sempervirens – Coast Redwood
Thuja plicata – Western Redcedar
Abies grandis – Grand Fir
Picea sitchensis – Sitka Spruce
Chamaecyparis lawsoniana – Port Orford-cedar
Taxus brevifolia – Pacific Yew
Populus fremontii – Fremont Cottonwood
Populus trichocarpa – Black Cottonwood
Platanus racemosa – California Sycamore
Alnus rhombifolia – White Alder
Alnus rubra – Red Alder
Acer macrophyllum – Big-leaf Maple
Fraxinus latifolia – Oregon Ash
Prunus emarginata – Bitter Cherry
Salix lasiolepis – Arroyo Willow
Salix lucida – Pacific Willow
Quercus agrifolia – Coast Live Oak
Quercus garryana – Garry Oak
Populus tremuloides – Quaking Aspen
Umbellularia californica – California Bay Laurel
Cornus nuttallii – Pacific Dogwood
Riparian shrubs
Acer circinatum – Vine Maple
Ribes spp. – Gooseberries and Currants
Rosa pisocarpa – Swamp Rose or Cluster Rose
Symphoricarpos albus – Snowberry
Spiraea douglasii – Douglas Spirea
Rubus spp. – Blackberries, Raspberries, Thimbleberry, Salmonberry
Rhododendron occidentale – Western Azalea
Oplopanax horridus – Devil's Club
Oemleria cerasiformis – Indian Plum, Osoberry
Lonicera involucrata – Twinberry
Cornus stolonifera – Red-osier Dogwood
Salix spp. – Willows
Other plants
Polypodium – Polypody Ferns
Polystichum – Sword Ferns
Woodwardia – Giant Chain Ferns
Pteridium – Bracken Ferns
Dryopteris – Wood Ferns
Adiantum – Maidenhair Ferns
Carex spp. – Sedges
Juncus spp. – Rushes
Festuca californica – California Fescue bunchgrass
Leymus condensatus – Giant Wildrye bunchgrass
Melica californica – California Melic bunchgrass
Mimulus spp. – Monkeyflower and varieties
Aquilegia spp. – Columbine
Asia In Asia there are different types of riparian vegetation, but the interactions between hydrology and ecology are similar to those in other geographic areas.
Carex spp. – Sedges
Juncus spp. – Rushes
Australia Typical riparian vegetation in temperate New South Wales, Australia includes:
Acacia melanoxylon – Blackwood
Acacia pravissima – Ovens Wattle
Acacia rubida – Red Stem Wattle
Bursaria lasiophylla – Blackthorn
Callistemon citrinus – Crimson Bottlebrush
Callistemon sieberi – River Bottlebrush
Casuarina cunninghamiana – River She-Oak
Eucalyptus bridgesiana – Apple Box
Eucalyptus camaldulensis – River Red Gum
Eucalyptus melliodora – Yellow Box
Eucalyptus viminalis – Manna Gum
Kunzea ericoides – Burgan
Leptospermum obovatum – River Tea-Tree
Melaleuca ericifolia – Swamp Paperbark
Central Europe Typical riparian zone trees in Central Europe include:
Acer campestre – Field Maple
Acer pseudoplatanus – Sycamore Maple
Alnus glutinosa – Black Alder
Carpinus betulus – European Hornbeam
Fraxinus excelsior – European Ash
Juglans regia – Persian Walnut
Malus sylvestris – European Wild Apple
Populus alba – White Poplar
Populus nigra – Black Poplar
Quercus robur – Pedunculate Oak
Salix alba – White Willow
Salix fragilis – Crack Willow
Tilia cordata – Small-leaved Lime
Ulmus laevis – European White Elm
Ulmus minor – Field Elm
Repair and restoration Land clearing followed by floods can quickly erode a riverbank, taking valuable grasses and soils downstream, and later allowing the sun to bake the land dry. Riparian zones can be restored through relocation (of human-made products), rehabilitation, and time. Natural Sequence Farming techniques have been used in the Upper Hunter Valley of New South Wales, Australia, in an attempt to rapidly restore eroded farms to optimum productivity. The Natural Sequence Farming technique involves placing obstacles in the water's pathway to lessen the energy of a flood and help the water deposit soil and seep into the flood zone. Another technique is to quickly establish ecological succession by encouraging fast-growing plants such as "weeds" (pioneer species) to grow. These may spread along the watercourse and cause environmental degradation, but may stabilize the soil, place carbon into the ground, and protect the land from drying. The weeds will improve the streambeds so trees and grasses can return and, ideally, replace the weeds. There are several other techniques used by government and non-government agencies to address riparian and streambed degradation, ranging from the installation of bed control structures such as log sills to the use of pin groynes or rock emplacement. Other possible approaches include control of invasive species, monitoring of herbivore activity, and cessation of human activity in a particular zone followed by natural re-vegetation. Conservation efforts have also encouraged incorporating the value of ecosystem services provided by riparian zones into management plans, as these benefits have traditionally been absent from the consideration and design of such plans.
Physical sciences
Wetlands
Earth science
15261560
https://en.wikipedia.org/wiki/Dictyostelium%20discoideum
Dictyostelium discoideum
Dictyostelium discoideum is a species of soil-dwelling amoeba belonging to the phylum Amoebozoa, infraphylum Mycetozoa. Commonly referred to as a slime mold, D. discoideum is a eukaryote that transitions from a collection of unicellular amoebae into a multicellular slug and then into a fruiting body within its lifetime. Its unique asexual life cycle consists of four stages: vegetative, aggregation, migration, and culmination. The life cycle of D. discoideum is relatively short, which allows for timely viewing of all stages. The cells involved in the life cycle undergo movement, chemical signaling, and development, all of which are applicable to human cancer research. The simplicity of its life cycle makes D. discoideum a valuable model organism for studying genetic, cellular, and biochemical processes in other organisms. Natural habitat and diet In the wild, D. discoideum can be found in soil and moist leaf litter. Its primary diet consists of bacteria, such as Escherichia coli, found in the soil and decaying organic matter. Uninucleate amoebae of D. discoideum consume bacteria found in their natural habitat, which includes deciduous forest soil and decaying leaves. Life cycle and reproduction The life cycle of D. discoideum begins when spores are released from a mature sorocarp (fruiting body). Myxamoebae hatch from the spores under warm and moist conditions. During their vegetative stage, the myxamoebae divide by mitosis as they feed on bacteria. The bacteria secrete folic acid, which attracts the myxamoebae. When the supply of bacteria is depleted, the myxamoebae enter the aggregation stage. During aggregation, starvation initiates the production of protein compounds such as glycoproteins and adenylyl cyclase. The glycoproteins allow for cell-cell adhesion, and adenylyl cyclase creates cyclic AMP. Cyclic AMP is secreted by the amoebae to attract neighboring cells to a central location. As they move toward the signal, they bump into each other and stick together by the use of glycoprotein adhesion molecules. The migration stage begins once the amoebae have formed a tight aggregate and the elongated mound of cells tips over to lie flat on the ground. The amoebae work together as a motile pseudoplasmodium, also known as a slug. The slug is about 2–4 mm long, composed of up to 100,000 cells, and is capable of movement by producing a cellulose sheath in its anterior cells through which the slug moves. Part of this sheath is left behind as a slimy trail as it moves toward attractants such as light, heat, and humidity in a forward-only direction. Cyclic AMP and a substance called differentiation-inducing factor help to form different cell types. The slug becomes differentiated into prestalk and prespore cells that move to the anterior and posterior ends, respectively. Once the slug has found a suitable environment, the anterior end of the slug forms the stalk of the fruiting body and the posterior end forms the spores of the fruiting body. Anterior-like cells, which have only recently been discovered, are also dispersed throughout the posterior region of the slug. These anterior-like cells form the very bottom of the fruiting body and the caps of the spores. After the slug settles into one spot, the posterior end spreads out with the anterior end raised in the air, forming what is called the "Mexican hat", and the culmination stage begins. The prestalk cells and prespore cells switch positions in the culmination stage to form the mature fruiting body. 
The anterior end of the Mexican hat forms a cellulose tube, which allows the more posterior cells to move up the outside of the tube to the top while the prestalk cells move down. This rearrangement forms the stalk of the fruiting body, made up of the cells from the anterior end of the slug, while the cells from the posterior end of the slug end up on top and form the spores of the fruiting body. At the end of this 8- to 10-hour process, the mature fruiting body is fully formed. This fruiting body is 1–2 mm tall and is now able to start the entire cycle over again by releasing the mature spores that become myxamoebae. Sexual reproduction Although D. discoideum generally reproduces asexually, it is still capable of sexual reproduction if certain conditions are met. D. discoideum has three different mating types, and studies have identified the sex locus that specifies these mating types. Type I strains are specified by the gene called MatA, type III strains have two different genes, MatS and MatT, and type II strains have three different genes: MatB (homologous to MatA), MatC, and MatD (homologous to MatS and MatT). Each mating type can mate with the two other types but not with its own. By switching out these genes, it was shown that not all genes found in the sex locus are required for successful mating. Successful mating occurs when a strain with MatA is paired with a strain that has MatC or MatS, or when a strain with MatB is paired with one that has MatS. MatD/MatT have no effect on mating compatibility, but their presence in the fused zygote makes macrocyst formation more likely to succeed. When incubated with their bacterial food supply, heterothallic or homothallic sexual development can occur, resulting in the formation of a diploid zygote. Heterothallic mating occurs when two amoebae of different mating types are present in a dark and wet environment, where they can fuse during aggregation to form a giant zygote cell. The giant cell then releases cAMP to attract other cells, then engulfs the other cells in the aggregate cannibalistically. The consumed cells serve to encase the whole aggregate in a thick cellulose wall to protect it. This is known as a macrocyst. Inside the macrocyst, the giant cell divides first through meiosis, then through mitosis to produce many haploid amoebae that will be released to feed as normal amoebae would. While sexual reproduction is possible, it is very rare to see successful germination of a D. discoideum macrocyst under laboratory conditions. Nevertheless, recombination is widespread within D. discoideum natural populations, indicating that sex is likely an important aspect of their life cycle. Homothallic D. discoideum strains AC4 and ZA3A are also able to produce macrocysts. Each of these strains expresses a mating type similar to type III, but with significant divergence (77% for MatS, 94% for MatT). AC4's status as D. discoideum has been questioned, but its SSU rRNA sequence still falls within the range of this species. It is still unclear how these two strains manage to be homothallic. In addition, type II cells produce homothallic macrocysts after being primed either by a small number of type I cells or by nearby type I cells physically prevented from contacting them. This may be due to the effect of ethylene. Some mutants also generate cyst-like structures on their own. Because these require unnatural conditions, the type II strains are not considered truly homothallic. 
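The mating-type compatibility rules described above can be restated compactly in code. The following Python sketch simply encodes the pairings given in the text; the function and variable names are hypothetical, and the gene-set logic is an illustrative simplification rather than a model of the underlying biology.

```python
# Minimal sketch of D. discoideum mating-type compatibility,
# encoding only the pairings described in the text above.
# Names are hypothetical; this is not a biological model.

# Sex-locus genes carried by each mating type (per the text).
MATING_TYPE_GENES = {
    "I": {"MatA"},
    "II": {"MatB", "MatC", "MatD"},
    "III": {"MatS", "MatT"},
}

def can_mate(type_a: str, type_b: str) -> bool:
    """A pairing succeeds when MatA meets MatC or MatS,
    or when MatB meets MatS; no type mates with itself."""
    if type_a == type_b:
        return False
    genes = MATING_TYPE_GENES[type_a] | MATING_TYPE_GENES[type_b]
    return (
        ("MatA" in genes and ("MatC" in genes or "MatS" in genes))
        or ("MatB" in genes and "MatS" in genes)
    )

if __name__ == "__main__":
    for a, b in [("I", "II"), ("I", "III"), ("II", "III"), ("I", "I")]:
        print(f"{a} x {b} -> {can_mate(a, b)}")
```

Running the sketch confirms that each type mates with the other two but not with itself, matching the rules stated in the text.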
Use as a model organism Because many of its genes are homologous to human genes, yet its life cycle is simple, D. discoideum is commonly used as a model organism. It can be observed at organismic, cellular, and molecular levels, primarily because of its restricted number of cell types and behaviors and its rapid growth. It is used to study cell differentiation, chemotaxis, and apoptosis, which are all normal cellular processes. It is also used to study other aspects of development, including cell sorting, pattern formation, phagocytosis, motility, and signal transduction. These processes and aspects of development are either absent or too difficult to view in other model organisms. D. discoideum is closely related to higher metazoans. It carries similar genes and pathways, making it a good candidate for gene knockout. Cell differentiation occurs when a cell becomes more specialized as part of the development of a multicellular organism. Changes in size, shape, metabolic activity, and responsiveness can occur as a result of adjustments in gene expression. Cell diversity and differentiation in this species involve decisions made through cell-cell interactions in pathways leading to either stalk cells or spore cells. These cell fates depend on the environment and pattern formation. Therefore, the organism is an excellent model for studying cell differentiation. Chemotaxis is the movement of an organism toward or away from a chemical stimulus along a chemical concentration gradient. Certain organisms demonstrate chemotaxis when they move toward a supply of nutrients. In D. discoideum, the amoeba secretes the signal, cAMP, out of the cell, attracting other amoebae to migrate toward the source. Every amoeba moves toward a central amoeba, the one dispensing the greatest amount of cAMP. The secretion of cAMP is then exhibited by all the amoebae and is a call for them to begin aggregation. These chemical emissions and amoeba movements occur every six minutes: the amoebae move up the concentration gradient for 60 seconds and then stop until the next secretion is sent out. This behavior of individual cells tends to cause oscillations in a group of cells, and chemical waves of varying cAMP concentration propagate through the group in spirals. An elegant set of mathematical equations that reproduces the spirals and the streaming patterns of D. discoideum was discovered by mathematical biologists Thomas Höfer and Martin Boerlijst. Mathematical biologist Cornelis J. Weijer has shown that similar equations can model its movement. The equations of these patterns are mainly influenced by the density of the amoeba population, the rate of production of cyclic AMP, and the sensitivity of individual amoebae to cyclic AMP. The spiraling pattern is formed by amoebae at the centre of a colony that rotate as they send out waves of cyclic AMP. The use of cAMP as a chemotactic agent is not established in any other organism. In developmental biology, this is one of the most readily understood examples of chemotaxis, which is important for an understanding of human inflammation, arthritis, asthma, lymphocyte trafficking, and axon guidance. Phagocytosis is used in immune surveillance and antigen presentation, while cell-type determination, cell sorting, and pattern formation are basic features of embryogenesis that may be studied with these organisms. Note, however, that cAMP oscillations may not be necessary for collective cell migration at multicellular stages. 
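The spiral cAMP waves described above are typically modeled as an excitable medium. As an illustrative sketch only, and not the specific equations of Höfer, Boerlijst, or Weijer, a generic two-variable reaction–diffusion model of this kind (the Barkley form) can be written as

\[
\frac{\partial u}{\partial t} = D\,\nabla^2 u + \frac{1}{\epsilon}\,u(1-u)\!\left(u - \frac{v+b}{a}\right),
\qquad
\frac{\partial v}{\partial t} = u - v,
\]

where the interpretation of \(u\) as local extracellular cAMP concentration and \(v\) as a slow recovery variable (for example, receptor desensitization) is an assumption made here for illustration; \(D\) is the diffusion coefficient and \(a\), \(b\), and \(\epsilon\) set the excitability. Cell density, cAMP production rate, and cellular sensitivity, the quantities the text identifies as controlling the patterns, enter through these parameters, and depending on their values such systems support target patterns, rotating spirals, or no waves at all.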
A study has found that cAMP-mediated signaling changes from propagating waves to a steady state at a multicellular stage of D. discoideum. Thermotaxis is movement along a gradient of temperature. Slugs have been shown to migrate along extremely shallow gradients of only 0.05 °C/cm, but the direction chosen is complicated; it seems to be away from a temperature about 2 °C below the temperature to which they had been acclimated. This complicated behavior has been analyzed by computer modeling of the behavior and of the periodic pattern of temperature changes in soil caused by daily changes in air temperature. The conclusion is that this behavior moves slugs from a few centimeters below the soil surface up to the surface. This is an amazingly sophisticated behavior by a primitive organism with no apparent sense of gravity. Apoptosis (programmed cell death) is a normal part of species development. Apoptosis is necessary for the proper spacing and sculpting of complex organs. Around 20% of cells in D. discoideum altruistically sacrifice themselves in the formation of the mature fruiting body. During the pseudoplasmodium (slug or grex) stage of its life cycle, the organism has formed three main types of cells: prestalk, prespore, and anterior-like cells. During culmination, the prestalk cells secrete a cellulose coat and extend as a tube through the grex. As they differentiate, they form vacuoles and enlarge, lifting up the prespore cells. The stalk cells undergo apoptosis and die as the prespore cells are lifted high above the substrate. The prespore cells then become spore cells, each one becoming a new myxamoeba upon dispersal. This is an example of how apoptosis is used in the formation of a reproductive organ, the mature fruiting body. A recent major contribution from Dictyostelium research has come from new techniques allowing the activity of individual genes to be visualised in living cells. This has shown that transcription occurs in "bursts" or "pulses" (transcriptional bursting) rather than following simple probabilistic or continuous behaviour. Bursting transcription now appears to be conserved between bacteria and humans. Another remarkable feature of the organism is that it has sets of DNA repair enzymes found in human cells that are lacking from many other popular metazoan model systems. Defects in DNA repair lead to devastating human cancers, so the ability to study human repair proteins in a simple tractable model will prove invaluable. Lab cultivation This organism's ability to be easily isolated and cultivated in the laboratory adds to its appeal as a model organism. While D. discoideum can be grown in liquid culture, it is usually grown in Petri dishes containing nutrient agar, with the surface kept moist. The cultures grow best at 22–24 °C (room temperature). D. discoideum feeds primarily on E. coli, which is adequate for all stages of the life cycle. When the food supply diminishes, the myxamoebae aggregate to form pseudoplasmodia. Soon, the dish is covered with various stages of the life cycle, and checking the dish often allows for detailed observations of development. The cells can be harvested at any stage of development and grown quickly. While cultivating D. discoideum in a laboratory, it is important to take into account its behavioral responses. For example, it has an affinity toward light, higher temperatures, high humidity, low ionic concentrations, and the acidic side of the pH gradient. 
Experiments are often done to see how manipulations of these parameters hinder, stop, or accelerate development. Variations of these parameters can alter the rate and viability of culture growth. The fruiting bodies, being the tallest stage of development, are also very responsive to air currents and physical stimuli. It is unknown whether there is a stimulus involved in spore release. Protein expression studies Detailed analysis of protein expression in Dictyostelium has been hampered by large shifts in the protein expression profile between different developmental stages and a general lack of commercially available antibodies for Dictyostelium antigens. In 2013, a group at the Beatson West of Scotland Cancer Centre reported an antibody-free protein visualization standard for immunoblotting based on detection of MCCC1 using streptavidin conjugates. Legionnaires' disease The bacterial genus Legionella includes the species that causes Legionnaires' disease in humans. D. discoideum is also a host for Legionella and is a suitable model for studying the infection process. Specifically, D. discoideum shares with mammalian host cells a similar cytoskeleton and cellular processes relevant to Legionella infection, including phagocytosis, membrane trafficking, endocytosis, vesicle sorting, and chemotaxis. Bacteria farming A 2011 report in Nature published findings that demonstrated a "primitive farming behaviour" in D. discoideum colonies. Described as a "symbiosis" between D. discoideum and bacterial prey, about one-third of wild-collected D. discoideum colonies engaged in the "husbandry" of the bacteria when the bacteria were included within the slime mold fruiting bodies. The incorporation of the bacteria into the fruiting bodies allows the "seeding" of the food source at the location of the spore dispersal, which is particularly valuable if the new location is low in food resources. Colonies produced from the "farming" spores typically also show the same behavior when sporulating. This incorporation has a cost associated with it: colonies that do not consume all of the prey bacteria produce smaller spores that cannot disperse as widely. In addition, much less benefit exists for bacteria-containing spores that land in a food-rich region. This balance of the costs and benefits of the behavior may contribute to the fact that a minority of D. discoideum colonies engage in this practice. D. discoideum is known for eating Gram-positive as well as Gram-negative bacteria, but some of the phagocytized bacteria, including some human pathogens, are able to live in the amoebae and exit without killing the cell. When they enter the cell, where they reside, and when they leave the cell are not known. The research is not yet conclusive, but a general life cycle of D. discoideum adapted for farmer clones can be outlined to better understand this symbiotic process. First, in the starvation stage, bacteria are enclosed within D. discoideum: after entry into the amoebae, fusion of the phagosome with lysosomes is blocked, and these immature phagosomes are surrounded by host cell organelles such as mitochondria, vesicles, and a multilayer membrane derived from the rough endoplasmic reticulum (RER) of the amoebae. The role of the RER in the intracellular infection is not known, but the RER is not required as a source of proteins for the bacteria. 
The bacteria reside within these phagosomes during the aggregation and multicellular development stages. The amoebae preserve their individuality, and each amoeba has its own bacterium. During the culmination stage, when the spores are produced, the bacteria pass from the cell to the sorus with the help of a cytoskeletal structure that prevents host cell destruction. Some results suggest the bacteria exploit exocytosis without killing the cell. Free-living amoebae seem to play a crucial role in the persistence and dispersal of some pathogens in the environment. Transient association with amoebae has been reported for a number of different bacteria, including Legionella pneumophila, many Mycobacterium species, Francisella tularensis, and Escherichia coli, among others. Farming seems to play a crucial role in pathogen survival, as the bacteria can live and replicate inside D. discoideum. The Nature report advanced the knowledge of amoebic behavior; the well-known Spanish phrase translated as "you are more stupid than an amoeba" is losing its sense, because amoebae are an excellent example of social behavior, with remarkable coordination and a sense of sacrifice for the benefit of the species. There are two strains of Pseudomonas fluorescens farmed by D. discoideum. One strain acts as a food source, while the other produces beneficial secondary metabolites that deter the growth of fungi and enhance the fertility of "farmer" individuals. The main genetic difference between these two strains is a mutation of the global activator gene called gacA. This gene plays a key role in gene regulation; when it is knocked out in the nonfood strain, the strain loses its special secondary metabolites and is transformed into a food strain. Sentinel cells Sentinel cells in Dictyostelium discoideum are phagocytic cells responsible for removing toxic material from the slug stage of the social cycle. Generally round in shape, these cells circulate freely within the slug sheath. The detoxification process occurs when these cells engulf toxins and pathogens within the slug through phagocytosis. The cells then clump into groups of five to ten, which attach to the inner sheath of the slug. The sheath is sloughed off as the slug migrates to a new site in search of food bacteria. Sentinel cells make up approximately 1% of the total number of slug cells, and the number of sentinel cells remains constant even as they are being released. This indicates a constant regeneration of sentinel cells within the slug as they are removed along with toxins and pathogens. Sentinel cells are present in the slug even when there are no toxins or pathogens to be removed. Sentinel cells have been located in five other species of Dictyostelia, which suggests that sentinel cells can be described as a general characteristic of the innate immune system in social amoebae. Effects of farming status on sentinel cells The number of sentinel cells varies depending on the farming status of wild D. discoideum. When exposed to a toxic environment created by the use of ethidium bromide, the number of sentinel cells per millimeter was lower for farmers than for non-farmers. This was concluded by observing the trails left behind as the slugs migrated and counting the number of sentinel cells present per millimeter. However, the number of sentinel cells does not affect spore production and viability in farmers. 
Farmers exposed to a toxic environment produce the same number of spores as farmers in a non-toxic environment, and spore viability was the same between farmers and non-farmers. When Clade 2 Burkholderia, the farmer-associated bacteria, are removed from the farmers, spore production and viability are similar to those of the non-farmers. Thus, it is suggested that bacteria carried by the farmers play an additional protective role against potential harm from toxins or pathogens. Classification and phylogeny In older classifications, Dictyostelium was placed in the defunct polyphyletic class Acrasiomycetes. This was a class of cellular slime molds characterized by the aggregation of individual amoebae into a multicellular fruiting body, a feature that related the acrasids to the dictyostelids. More recent genomic studies have shown that Dictyostelium has maintained more of its ancestral genome diversity than plants and animals, although proteome-based phylogeny confirms that amoebozoa diverged from the animal–fungal lineage after the plant–animal split. Subclass Dictyosteliidae, order Dictyosteliales, is a monophyletic assemblage within the Mycetozoa, a group that includes the protostelid, dictyostelid, and myxogastrid slime molds. Elongation factor-1α (EF-1α) data analyses support Mycetozoa as a monophyletic group, though rRNA trees place it as a polyphyletic group. Further, these data support the idea that the dictyostelids and myxogastrids are more closely related to each other than they are to the protostelids. EF-1α analysis also placed the Mycetozoa as the immediate outgroup of the animal–fungal clade. The latest phylogenetic data place dictyostelids firmly within the supergroup Amoebozoa, along with the myxomycetes. Meanwhile, protostelids have turned out to be polyphyletic, their stalked fruiting bodies a convergent feature of multiple unrelated lineages. Genome The D. discoideum genome sequencing project was completed and published in 2005 by an international collaboration of institutes. This was the first free-living protozoan genome to be fully sequenced. D. discoideum has a 34-Mb haploid genome with a base composition of 77% [A+T]; it contains six chromosomes that encode around 12,500 proteins. Sequencing of the D. discoideum genome allows a more intricate study of its cellular and developmental biology. Tandem repeats of trinucleotides are very abundant in this genome; one class of these repeats is clustered, leading researchers to believe the clusters serve as centromeres. The repeats correspond to repeated sequences of amino acids and are thought to be expanded by nucleotide expansion. Expansion of trinucleotide repeats also occurs in humans, generally leading to many diseases. Learning how D. discoideum cells endure these amino acid repeats may provide insight that allows humans to tolerate them. Every genome sequenced plays an important role in identifying genes that have been gained or lost over time, and comparative genomic studies allow for comparison of eukaryotic genomes. A phylogeny based on the proteome showed that the amoebozoa deviated from the animal–fungal lineage after the plant–animal split. The D. discoideum genome is noteworthy because many of its encoded proteins are commonly found in fungi, plants, and animals. Databases
DictyBase – general genomic database about Dictyostelium discoideum
Membranome – database providing information about single-pass transmembrane proteins from Dictyostelium and several other organisms
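The genome statistics quoted above ([A+T] fraction and abundant trinucleotide tandem repeats) are straightforward to compute from sequence data. The following Python sketch illustrates both calculations on a made-up fragment; a real analysis would use the published D. discoideum assembly (for example via dictyBase), not this toy string.

```python
from collections import Counter

# Toy illustration of the genome statistics quoted above: base composition
# ([A+T] fraction) and a naive scan for trinucleotide tandem repeats.
# The sequence below is invented for demonstration purposes only.
seq = "AATTATTTAAATCAACAACAACAATTTGCATAA"

at_fraction = (seq.count("A") + seq.count("T")) / len(seq)
print(f"[A+T] fraction: {at_fraction:.0%}")

# Count how often each trinucleotide is immediately repeated in tandem.
tandem = Counter()
for i in range(len(seq) - 5):
    tri = seq[i:i + 3]
    if seq[i + 3:i + 6] == tri:
        tandem[tri] += 1
print("Tandem trinucleotides:", dict(tandem))
```

On a genome-scale input the same scan, windowed over each chromosome, is how clustered repeat classes of the kind mentioned above would be located.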
Biology and health sciences
Eukaryotes
Plants
712345
https://en.wikipedia.org/wiki/Debridement
Debridement
Debridement is the medical removal of dead, damaged, or infected tissue to improve the healing potential of the remaining healthy tissue. Removal may be surgical, mechanical, chemical, autolytic (self-digestion), or by maggot therapy. In podiatry, practitioners such as chiropodists, podiatrists, and foot health practitioners treat conditions such as calluses and verrucas by removing the affected tissue. Debridement is an important part of the healing process for burns and other serious wounds; it is also used for treating some kinds of snake and spider bites. Sometimes the boundaries of the problem tissue may not be clearly defined. For example, when excising a tumor, there may be micrometastases along the edges of the tumor that are too small to be detected, but if not removed, could cause a relapse. In such circumstances, a surgeon may opt to debride a portion of the surrounding healthy tissue to ensure that the tumor is completely removed. Types There is a lack of high-quality evidence comparing the effectiveness of various debridement methods in terms of the time taken for debridement or for complete healing of wounds. Surgical debridement Surgical or "sharp" debridement and laser debridement under anesthesia are the fastest methods of debridement. They are very selective, meaning that the person performing the debridement has nearly complete control over which tissue is removed and which is left behind. Surgical debridement can be performed in the operating room or at the bedside, depending on the extent of the necrotic material and the patient's ability to tolerate the procedure. The surgeon will typically debride tissue back to viability, as determined by tissue appearance and the presence of blood flow in healthy tissue. Autolytic debridement Autolysis uses the body's own enzymes and moisture to re-hydrate, soften, and finally liquefy hard eschar and slough. Autolytic debridement is selective; only necrotic tissue is liquefied. It is also virtually painless for the patient. Autolytic debridement can be achieved with the use of occlusive or semi-occlusive dressings, such as hydrocolloids, hydrogels, and transparent films, which maintain wound fluid in contact with the necrotic tissue. It is suitable for wounds where the amount of dead tissue is not extensive and where there is no infection. Enzymatic debridement Chemical enzymes are fast-acting products that slough off necrotic tissue. These enzymes are derived from micro-organisms such as Clostridium histolyticum or from plants; examples include collagenase, varidase, papain, and bromelain. Some of these enzymatic debriders are selective, while some are not. This method works well on wounds, especially burns, with a large amount of necrotic debris or with eschar formation. However, the results are mixed and the effectiveness is variable, so this type of debridement is used sparingly and is not considered a standard of care for burn treatment. Mechanical debridement When removal of tissue is necessary for the treatment of wounds, hydrotherapy, which performs selective mechanical debridement, can be used. Examples include directed wound irrigation and therapeutic irrigation with suction. Baths with whirlpool water flow should not be used to manage wounds because a whirlpool will not selectively target the tissue to be removed and can damage all tissue. 
Whirlpools also create an unwanted risk of bacterial infection, can damage fragile body tissue, and, in the case of treating arms and legs, bring a risk of complications from edema. Hydrosurgery uses a high-pressure, water-based jet system to remove burnt skin, leaving behind the unburned, healthy skin. A 2019 Cochrane systematic review aimed to find out whether burns treated with hydrosurgery heal more quickly and with fewer infections than burns treated with a knife. The review authors found only one randomised controlled trial (RCT), with very low-certainty evidence, that investigated this. Based on this trial, they concluded that it is uncertain whether hydrosurgery is better than conventional surgery for early treatment of mid-depth burns, and that more RCTs are needed to fully answer this question. Allowing a dressing to proceed from moist to dry, then manually removing the dressing, causes a form of non-selective debridement. This method works best on wounds with moderate amounts of necrotic debris (i.e. dead tissue). Maggot therapy In maggot therapy, a number of small maggots are introduced to a wound in order to consume necrotic tissue, and they do so far more precisely than is possible in a normal surgical operation. Larvae of the green bottle fly (Lucilia sericata) are used, which primarily feed on the necrotic (dead) tissue of the living host without attacking living tissue. Maggots can debride a wound in one or two days. The maggots derive nutrients through a process known as "extracorporeal digestion": they secrete a broad spectrum of proteolytic enzymes that liquefy necrotic tissue and absorb the semi-liquid result within a few days. In an optimal wound environment, maggots molt twice within a period of 3–4 days, increasing in length from 1–2 mm to 8–10 mm and growing in girth as they ingest necrotic tissue, leaving a clean wound free of necrotic tissue when they are removed.
Biology and health sciences
Surgery
Health
712576
https://en.wikipedia.org/wiki/Passive%20smoking
Passive smoking
Passive smoking is the inhalation of tobacco smoke, called passive smoke, secondhand smoke (SHS) or environmental tobacco smoke (ETS), by individuals other than the active smoker. It occurs when tobacco smoke diffuses into the surrounding atmosphere as an aerosol pollutant, which leads to its inhalation by nearby bystanders within the same environment. Exposure to secondhand tobacco smoke causes many of the same health effects caused by active smoking, although at a lower prevalence due to the reduced concentration of smoke that enters the airway. According to a WHO report published in 2023, more than 1.3 million deaths are attributed to passive smoking worldwide every year. The health risks of secondhand smoke are a matter of scientific consensus, and have been a major motivation for smoking bans in workplaces and indoor venues, including restaurants, bars and night clubs, as well as some open public spaces. Concerns around secondhand smoke have played a central role in the debate over the harms and regulation of tobacco products. Since the early 1970s, the tobacco industry has viewed public concern over secondhand smoke as a serious threat to its business interests. Despite being aware of the harms of secondhand smoke as early as the 1980s, the industry manufactured a scientific controversy with the purpose of stopping regulation of its products. Terminology Fritz Lickint created the term "passive smoking" ("Passivrauchen") in a German-language publication in the 1930s. The term "environmental tobacco smoke" refers to the airborne matter itself, while "involuntary smoking" and "passive smoking" refer to exposure to secondhand smoke. The term "environmental tobacco smoke" can be traced back to a 1974 industry-sponsored meeting held in Bermuda, while the term "passive smoking" was first used in the title of a scientific paper in 1970. The Surgeon General of the United States prefers the phrase "secondhand smoke" over "environmental tobacco smoke", stating that "The descriptor 'secondhand' captures the involuntary nature of the exposure, while 'environmental' does not." Most researchers consider the term "passive smoking" to be synonymous with "secondhand smoke". In contrast, a 2011 commentary in Environmental Health Perspectives argued that research into "thirdhand smoke" renders it inappropriate to refer to passive smoking with the term "secondhand smoke", which the authors stated constitutes a pars pro toto. The term "sidestream smoke" is sometimes used to refer to smoke that goes into the air directly from a burning cigarette, cigar, or pipe, while "mainstream smoke" refers to smoke that a smoker exhales. Health effects Secondhand smoke causes many of the same diseases as direct smoking, including cardiovascular diseases, lung cancer, and respiratory diseases. These include:
Cancer:
General: overall increased risk; reviewing the evidence accumulated on a worldwide basis, the International Agency for Research on Cancer concluded in 2004 that "Involuntary smoking (exposure to secondhand or 'environmental' tobacco smoke) is carcinogenic to humans." The Centers for Disease Control and Prevention reports that about 70 chemicals present in secondhand smoke are carcinogenic.
Lung cancer: Passive smoking is a risk factor for lung cancer. In the United States, secondhand smoke is estimated to cause more than 7,000 deaths from lung cancer a year among non-smokers. A quarter of all cases occur in people who have never smoked. 
Breast cancer: The California Environmental Protection Agency concluded in 2005 that passive smoking increases the risk of breast cancer in younger, primarily premenopausal females by 70%, and the US Surgeon General has concluded that the evidence is "suggestive" but still insufficient to assert such a causal relationship. In contrast, the International Agency for Research on Cancer concluded in 2004 that there was "no support for a causal relation between involuntary exposure to tobacco smoke and breast cancer in never-smokers." A 2015 meta-analysis found that the evidence that passive smoking moderately increased the risk of breast cancer had become "more substantial than a few years ago".
Cervical cancer: A 2015 overview of systematic reviews found that exposure to secondhand smoke increased the risk of cervical cancer.
Bladder cancer: A 2016 systematic review and meta-analysis found that secondhand smoke exposure was associated with a significant increase in the risk of bladder cancer.
Circulatory system: risk of heart disease and reduced heart rate variability. Epidemiological studies have shown that both active and passive cigarette smoking increase the risk of atherosclerosis. Passive smoking is strongly associated with an increased risk of stroke, and this increased risk is disproportionately high at low levels of exposure.
Lung problems:
Risk of asthma.
Risk of chronic obstructive pulmonary disease (COPD).
According to a 2015 review, passive smoking may increase the risk of tuberculosis infection and accelerate the progression of the disease, but the evidence remains weak.
The majority of studies on the association between secondhand smoke exposure and sinusitis have found a significant association between the two.
Cognitive impairment and dementia: Exposure to secondhand smoke may increase the risk of cognitive impairment and dementia in adults 50 and over. Children exposed to secondhand smoke show reduced vocabulary and reasoning skills when compared with non-exposed children, as well as more general cognitive and intellectual deficits.
Mental health: Exposure to secondhand smoke is associated with an increased risk of depressive symptoms.
During pregnancy:
Miscarriage: a 2014 meta-analysis found that maternal secondhand smoke exposure increased the risk of miscarriage by 11%.
Low birth weight (part B, ch. 3).
Premature birth (part B, ch. 3; evidence of the causal link is described only as "suggestive" by the US Surgeon General in his 2006 report). Laws limiting smoking decrease premature births.
Stillbirth and congenital malformations in children. Recent studies comparing females exposed to secondhand smoke with non-exposed females demonstrate that females exposed while pregnant have higher risks of delivering a child with congenital abnormalities, longer lengths, smaller head circumferences, and neural tube defects.
General:
Worsening of asthma, allergies, and other conditions. A 2014 systematic review and meta-analysis found that passive smoking was associated with a slightly increased risk of allergic diseases among children and adolescents; the evidence for an association was weaker for adults.
Type 2 diabetes. It remains unclear whether the association between passive smoking and diabetes is causal.
Risk of carrying Neisseria meningitidis or Streptococcus pneumoniae.
A possible increased risk of periodontitis.
Overall increased risk of death in both adults, where it was estimated to kill 53,000 nonsmokers per year in the U.S. in 1991, and in children. 
The World Health Organization has estimated that passive smoking causes about 600,000 deaths a year and accounts for about 1% of the global burden of disease. As of 2017, passive smoking caused about 900,000 deaths a year, about one-eighth of all deaths caused by smoking. Skin conditions: A 2016 systematic review and meta-analysis found that passive smoking was associated with a higher rate of atopic dermatitis. Risk to children Sudden infant death syndrome (SIDS). In his 2006 report, the US Surgeon General concluded: "The evidence is sufficient to infer a causal relationship between exposure to secondhand smoke and sudden infant death syndrome." Secondhand smoking has been estimated to be associated with 430 SIDS deaths in the United States annually. Asthma. Secondhand smoke exposure is also associated with an almost doubled risk of hospitalization for asthma exacerbation among children with asthma. Lung infections, including more severe bronchiolitis and bronchitis with worse outcomes, as well as an increased risk of developing tuberculosis if exposed to a carrier. In the United States, it is estimated that secondhand smoke is associated with between 150,000 and 300,000 lower respiratory tract infections each year in infants and children under 18 months of age, resulting in between 7,500 and 15,000 hospitalizations. Impaired respiratory function and slowed lung growth. Allergies. Maternal passive smoking increases the risk of non-syndromic orofacial clefts by 50% in their children. Learning difficulties, developmental delays, executive function problems, and neurobehavioral effects. Animal models suggest a role for nicotine and carbon monoxide in neurocognitive problems. Increased risk of middle ear infections. Invasive meningococcal disease. Anesthesia complications and some negative surgical outcomes. Sleep disordered breathing: most studies have found a significant association between passive smoking and sleep disordered breathing in children, but further studies are needed to determine whether this association is causal. Adverse effects on the cardiovascular system of children. Evidence Epidemiological studies show that non-smokers exposed to secondhand smoke are at risk for many of the health problems associated with direct smoking. In 1992, a review estimated that secondhand smoke exposure was responsible for 35,000 to 40,000 deaths per year in the United States in the early 1980s. The absolute risk increase of heart disease due to ETS was 2.2%, while the attributable risk percent was 23%. A 1997 meta-analysis found that secondhand smoke exposure increased the risk of heart disease by a quarter, and two 1999 meta-analyses reached similar conclusions. Evidence shows that inhaled sidestream smoke, the main component of secondhand smoke, is about four times more toxic than mainstream smoke. This fact has been known to the tobacco industry since the 1980s, though it kept its findings secret. Some scientists believe that the risk of passive smoking, in particular the risk of developing coronary heart disease, may have been substantially underestimated. In 1997, a meta-analysis on the relationship between secondhand smoke exposure and lung cancer concluded that such exposure caused lung cancer. The increase in risk was estimated to be 24% among non-smokers who lived with a smoker. In 2000, Copas and Shi reported that there was clear evidence of publication bias in the studies included in this meta-analysis. 
They further concluded that, after correcting for publication bias and assuming that 40% of all studies are unpublished, this increased risk decreased from 24% to 15%. This conclusion has been challenged on the basis that the assumption that 40% of all studies are unpublished was "extreme". In 2006, Takagi et al. reanalyzed the data from this meta-analysis to account for publication bias and estimated that the relative risk of lung cancer among those exposed to secondhand smoke was 1.19, slightly lower than the original estimate. A 2000 meta-analysis found a relative risk of 1.48 for lung cancer among men exposed to secondhand smoke, and a relative risk of 1.16 among those exposed to it at work. Another meta-analysis the following year confirmed the finding of an increased risk of lung cancer among women with spousal exposure to secondhand smoke, reporting a relative risk of 1.29 for women exposed to secondhand smoke from their spouses. A 2014 meta-analysis noted that "the association between exposure to secondhand smoke and lung cancer risk is well established." A minority of epidemiologists have found it hard to understand how secondhand smoke, which is more diluted than actively inhaled smoke, could have an effect that is such a large fraction of the added risk of coronary heart disease among active smokers. One proposed explanation is that secondhand smoke is not simply a diluted version of "mainstream" smoke, but has a different composition, with more toxic substances per gram of total particulate matter. Passive smoking appears to be capable of precipitating the acute manifestations of cardiovascular disease (atherothrombosis) and may also have a negative impact on the outcome of patients who have acute coronary syndromes. In 2004, the International Agency for Research on Cancer (IARC) of the World Health Organization (WHO) reviewed all significant published evidence related to tobacco smoking and cancer and concluded that involuntary smoking is carcinogenic to humans. Subsequent meta-analyses have confirmed these findings. The National Asthma Council of Australia cites studies showing that secondhand smoke is probably the most important indoor pollutant, especially around young children: smoking by either parent, particularly by the mother, increases the risk of asthma in children; the outlook for early childhood asthma is less favourable in smoking households; children with asthma who are exposed to smoking in the home generally have more severe disease; many adults with asthma identify ETS as a trigger for their symptoms; doctor-diagnosed asthma is more common among non-smoking adults exposed to ETS than among those not exposed; and among people with asthma, higher ETS exposure is associated with a greater risk of severe attacks. In France, exposure to secondhand smoke has been estimated to cause between 3,000 and 5,000 premature deaths per year, with the larger figure cited by Prime Minister Dominique de Villepin during his announcement of a nationwide smoke-free law: "That makes more than 13 deaths a day. It is an unacceptable reality in our country in terms of public health." There is good observational evidence that smoke-free legislation reduces the number of hospital admissions for heart disease. Exposure and risk levels The International Agency for Research on Cancer of the World Health Organization concluded in 2004 that there was sufficient evidence that secondhand smoke caused cancer in humans. Those who work in environments where smoke is not regulated are at higher risk. 
Workers particularly at risk of exposure include those in installation, repair and maintenance, construction and extraction, and transportation. Much research has come from studies of nonsmokers who are married to a smoker. The US Surgeon General, in his 2006 report, estimated that living or working in a place where smoking is permitted increases a non-smoker's risk of developing heart disease by 25–30% and lung cancer by 20–30%. Similarly, children who are exposed to environmental tobacco smoke experience a range of adverse effects and a higher risk of becoming smokers later in life. The WHO has identified reduction of exposure to environmental tobacco smoke as a key element of actions to encourage healthy child development. The US Centers for Disease Control and Prevention monitors the extent of, and trends in, exposure to environmental tobacco smoke by measuring serum cotinine in national health surveys. The prevalence of secondhand smoke exposure among U.S. nonsmokers declined from 87.5% in 1988 to 25.2% in 2014. However, nearly half of Black Americans and of people living in poverty were still exposed in 2014. Interventions to reduce environmental tobacco smoke A systematic review compared smoking control programmes and their effects on smoke exposure in children. The review distinguished between community-based, ill-child, and healthy-child settings; the most common interventions were counselling or brief advice during clinical visits. The review did not find superior outcomes for any intervention, and the authors caution that evidence from adult settings may not generalise well to children. Biomarkers Environmental tobacco smoke can be evaluated either by directly measuring tobacco smoke pollutants found in the air or by using biomarkers, an indirect measure of exposure. Carbon monoxide monitored through breath, nicotine, cotinine, thiocyanates, and proteins are the most specific biological markers of tobacco smoke exposure. Biochemical tests are a much more reliable biomarker of secondhand smoke exposure than surveys. Certain groups of people, especially pregnant women and parents of young children, are reluctant to disclose their smoking status and exposure to tobacco smoke because their smoking is socially unacceptable. It may also be difficult for individuals to recall their exposure to tobacco smoke. A 2007 study in the journal Addictive Behaviors found a positive correlation between secondhand tobacco smoke exposure and concentrations of nicotine and/or biomarkers of nicotine in the body, at levels associated with behaviour changes due to nicotine consumption. Cotinine Cotinine, the metabolite of nicotine, is a biomarker of secondhand smoke exposure. Typically, cotinine is measured in the blood, saliva, and urine. Hair analysis has recently become a new, noninvasive measurement technique. Cotinine accumulates in hair during hair growth, which provides a measure of long-term, cumulative exposure to tobacco smoke. Urinary cotinine levels have been a reliable biomarker of tobacco exposure and have been used as a reference in many epidemiological studies. However, cotinine levels found in the urine reflect exposure only over the preceding 48 hours. Cotinine levels in keratinized tissues, such as hair and nails, reflect tobacco exposure over the previous three months and are a more reliable biomarker. 
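To make the cutoff-based reading of cotinine measurements concrete, the following is a minimal sketch in Python. The threshold values are illustrative assumptions only, since published cutoffs vary by assay, matrix (serum, saliva, or urine), and population; neither the numbers nor the function come from the studies cited above.

```python
# Hypothetical interpretation of a serum cotinine measurement (ng/mL).
# Cutoffs are illustrative assumptions, not values from the cited studies;
# real laboratories calibrate them per assay and population.
LIMIT_OF_DETECTION = 0.05      # assumed assay detection limit, ng/mL
ACTIVE_SMOKING_CUTOFF = 10.0   # assumed smoker/non-smoker discrimination level

def classify_serum_cotinine(ng_per_ml: float) -> str:
    """Crude three-way classification of tobacco smoke exposure."""
    if ng_per_ml < LIMIT_OF_DETECTION:
        return "no detectable exposure"
    if ng_per_ml < ACTIVE_SMOKING_CUTOFF:
        return "consistent with secondhand smoke exposure"
    return "consistent with active smoking"

for level in (0.01, 1.2, 250.0):
    print(f"{level:7.2f} ng/mL -> {classify_serum_cotinine(level)}")
```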
Carbon monoxide (CO) Carbon monoxide monitored via breath is also a reliable biomarker of secondhand smoke exposure, as well as of tobacco use. With high sensitivity and specificity, it not only provides an accurate measure, but the test is also non-invasive, highly reproducible, and low in cost. Breath CO monitoring measures the concentration of CO in an exhalation in parts per million, which can be directly correlated with the blood CO concentration (carboxyhemoglobin). Breath CO monitors can also be used by emergency services to identify patients who are suspected of having CO poisoning. Pathophysiology A 2004 study by the International Agency for Research on Cancer of the World Health Organization concluded that non-smokers are exposed to the same carcinogens as active smokers. Sidestream smoke contains more than 4,000 chemicals, including 69 known carcinogens. Of special concern are polycyclic aromatic hydrocarbons, tobacco-specific N-nitrosamines, and aromatic amines, such as 4-aminobiphenyl, all known to be highly carcinogenic. Mainstream smoke, sidestream smoke, and secondhand smoke contain largely the same components; however, the concentrations vary depending on the type of smoke. Several well-established carcinogens have been shown by the tobacco companies' own research to be present at higher concentrations in sidestream smoke than in mainstream smoke. Secondhand smoke has been shown to produce more particulate-matter (PM) pollution than an idling low-emission diesel engine. In an experiment conducted by the Italian National Cancer Institute, three cigarettes were left smoldering, one after the other, in a 60 m³ garage with limited air exchange. The cigarettes produced PM pollution exceeding outdoor limits, as well as PM concentrations up to 10-fold that of the idling engine. Secondhand tobacco smoke exposure has immediate and substantial effects on blood and blood vessels in a way that increases the risk of a heart attack, particularly in people already at risk. Exposure to tobacco smoke for 30 minutes significantly reduces coronary flow velocity reserve in healthy nonsmokers. Secondhand smoke is also associated with impaired vasodilation among adult nonsmokers. Secondhand smoke exposure also affects platelet function, vascular endothelium, and myocardial exercise tolerance at levels commonly found in the workplace. Pulmonary emphysema can be induced in rats through acute exposure to sidestream tobacco smoke (30 cigarettes per day) over a period of 45 days. Degranulation of mast cells contributing to lung damage has also been observed. The term "third-hand smoke" was recently coined to identify the residual tobacco smoke contamination that remains after the cigarette is extinguished and secondhand smoke has cleared from the air. Preliminary research suggests that by-products of third-hand smoke may pose a health risk, though the magnitude of risk, if any, remains unknown. In October 2011, it was reported that Christus St. Frances Cabrini Hospital in Alexandria, Louisiana, would seek to eliminate third-hand smoke beginning in July 2012, and that employees whose clothing smelled of smoke would not be allowed to work. This prohibition was enacted because third-hand smoke poses a special danger for the developing brains of infants and small children. In 2008, there were more than 161,000 deaths attributed to lung cancer in the United States. Of these deaths, an estimated 10% to 15% were caused by factors other than first-hand smoking, equivalent to 16,000 to 24,000 deaths annually. 
Slightly more than half of the lung cancer deaths caused by factors other than first-hand smoking were found in nonsmokers. Lung cancer in non-smokers is thus among the more common causes of cancer mortality in the United States. Clinical epidemiology of lung cancer has identified exposure to secondhand tobacco smoke, carcinogens such as radon, and other indoor air pollutants as the primary factors tied to lung cancer in non-smokers. Opinion of public health authorities There is widespread scientific consensus that exposure to secondhand smoke is harmful. The link between passive smoking and health risks is accepted by every major medical and scientific organisation, including: the World Health Organization; the U.S. National Institutes of Health; the Centers for Disease Control; the United States Surgeon General; the U.S. National Cancer Institute; the United States Environmental Protection Agency; the California Environmental Protection Agency; the American Heart Association, American Lung Association, and American Cancer Society; the American Medical Association; the American Academy of Pediatrics; the Australian National Health and Medical Research Council; and the United Kingdom Scientific Committee on Tobacco and Health. Public opinion Recent major surveys conducted by the U.S. National Cancer Institute and the Centers for Disease Control have found widespread public awareness that secondhand smoke is harmful. In both the 1992 and 2000 surveys, more than 80% of respondents agreed with the statement that secondhand smoke was harmful. A 2001 study found that 95% of adults agreed that secondhand smoke was harmful to children, and 96% considered tobacco-industry claims that secondhand smoke was not harmful to be untruthful. A 2007 Gallup poll found that 56% of respondents felt that secondhand smoke was "very harmful", a number that had held relatively steady since 1997. Another 29% believed that secondhand smoke was "somewhat harmful"; 10% answered "not too harmful", while 5% said "not at all harmful". Controversy over harm As part of its attempt to prevent or delay tighter regulation of smoking, the tobacco industry funded a number of scientific studies and, where the results cast doubt on the risks associated with secondhand smoke, sought wide publicity for those results. The industry also funded libertarian and conservative think tanks, such as the Cato Institute in the United States and the Institute of Public Affairs in Australia, which criticised both scientific research on passive smoking and policy proposals to restrict smoking. New Scientist and the European Journal of Public Health have identified these industry-wide coordinated activities as one of the earliest expressions of corporate denialism. Further, they state that the disinformation spread by the tobacco industry has created a tobacco denialism movement, sharing many characteristics of other forms of denialism, such as HIV-AIDS denialism. Industry-funded studies and critiques Enstrom and Kabat A 2003 study by James Enstrom and Geoffrey Kabat, published in the British Medical Journal, argued that the harms of passive smoking had been overstated. Their analysis reported no statistically significant relationship between passive smoking and lung cancer, coronary heart disease (CHD), or chronic obstructive pulmonary disease, though the accompanying editorial noted that "they may overemphasise the negative nature of their findings." This paper was widely promoted by the tobacco industry as evidence that the harms of passive smoking were unproven. 
The American Cancer Society (ACS), whose database Enstrom and Kabat used to compile their data, criticized the paper as "neither reliable nor independent", stating that scientists at the ACS had repeatedly pointed out serious flaws in Enstrom and Kabat's methodology prior to publication. Notably, the study had failed to identify a comparison group of "unexposed" persons. Enstrom's ties to the tobacco industry also drew scrutiny; in a 1997 letter to Philip Morris, Enstrom requested a "substantial research commitment... in order for me to effectively compete against the large mountain of epidemiologic data and opinions that already exist regarding the health effects of ETS and active smoking." In a US racketeering lawsuit against tobacco companies, the Enstrom and Kabat paper was cited by the US District Court as "a prime example of how nine tobacco companies engaged in criminal racketeering and fraud to hide the dangers of tobacco smoke." The Court found that the study had been funded and managed by the Center for Indoor Air Research, a tobacco industry front group tasked with "offsetting" damaging studies on passive smoking, as well as by Philip Morris, which stated that Enstrom's work was "clearly litigation-oriented". A 2005 paper in Tobacco Control argued that the disclosure section in the Enstrom and Kabat BMJ paper, although it met the journal's requirements, "does not reveal the full extent of the relationship the authors had with the tobacco industry." In 2006, Enstrom and Kabat published a meta-analysis of studies regarding passive smoking and coronary heart disease in which they reported a very weak association between passive smoking and heart disease mortality. They concluded that exposure to secondhand smoke increased the risk of death from CHD by only 5%, although this analysis has been criticized for including two previous industry-funded studies that suffered from widespread exposure misclassification. Gori Gio Batta Gori, a tobacco industry spokesman and consultant and an expert on risk utility and scientific research, wrote in the libertarian Cato Institute's magazine Regulation that "...of the 75 published studies of ETS and lung cancer, some 70% did not report statistically significant differences of risk and are moot. Roughly 17% claim an increased risk and 13% imply a reduction of risk." Milloy Steven Milloy, the "junk science" commentator for Fox News and a former Philip Morris consultant, claimed that "of the 19 studies" on passive smoking "only 8— slightly more than 42%— reported statistically significant increases in heart disease incidence." Another component of criticism cited by Milloy focused on relative risk and epidemiological practices in studies of passive smoking. Milloy, who has a master's degree from the Johns Hopkins School of Hygiene and Public Health, argued that studies yielding relative risks of less than 2 were meaningless junk science. This approach to epidemiological analysis was criticized in the American Journal of Public Health. The tobacco industry and affiliated scientists also put forward a set of "Good Epidemiology Practices" which would have the practical effect of obscuring the link between secondhand smoke and lung cancer; the privately stated goal of these standards was to "impede adverse legislation". However, this effort was largely abandoned when it became clear that no independent epidemiological organization would agree to the standards proposed by Philip Morris et al. 
Levois and Layard In 1995, Levois and Layard, both tobacco industry consultants, published two analyses in the journal Regulatory Toxicology and Pharmacology regarding the association between spousal exposure to secondhand smoke and heart disease. Both of these papers reported no association between secondhand smoke and heart disease. These analyses have been criticized for failing to distinguish between current and former smokers, despite the fact that former smokers, unlike current ones, are not at a significantly increased risk of heart disease. World Health Organization controversy A 1998 study by the International Agency for Research on Cancer (IARC) on environmental tobacco smoke (ETS) found "weak evidence of a dose–response relationship between risk of lung cancer and exposure to spousal and workplace ETS." In March 1998, before the study was published, reports appeared in the media alleging that the IARC and the World Health Organization (WHO) were suppressing information. The reports, appearing in the British Sunday Telegraph and The Economist, among other outlets, alleged that the WHO had withheld from publication its own report, which supposedly failed to demonstrate an association between passive smoking and a number of diseases, lung cancer in particular. In response, the WHO issued a press release stating that the results of the study had been "completely misrepresented" in the popular press and were in fact very much in line with similar studies demonstrating the harms of passive smoking. The study was published in the Journal of the National Cancer Institute in October of the same year; its authors found "no association between childhood exposure to ETS and lung cancer risk" but "did find weak evidence of a dose–response relationship between risk of lung cancer and exposure to spousal and workplace ETS." An accompanying editorial summarized the findings. With the release of formerly classified tobacco industry documents through the Tobacco Master Settlement Agreement, it was found (by Elisa Ong and Stanton Glantz) that the controversy over the WHO's alleged suppression of data had been engineered by Philip Morris, British American Tobacco, and other tobacco companies in an effort to discredit scientific findings which would harm their business interests. A WHO inquiry, conducted after the release of the tobacco-industry documents, found that this controversy was generated by the tobacco industry as part of its larger campaign to cut the WHO's budget, distort the results of scientific studies on passive smoking, and discredit the WHO as an institution. This campaign was carried out using a network of ostensibly independent front organizations and international and scientific experts with hidden financial ties to the industry. EPA lawsuit In 1993, the United States Environmental Protection Agency (EPA) issued a report estimating that 3,000 lung cancer related deaths in the United States were caused by passive smoking annually. Philip Morris, R.J. Reynolds Tobacco Company, and groups representing growers, distributors and marketers of tobacco took legal action, claiming that the EPA had manipulated this study and ignored accepted scientific and statistical practices. The United States District Court for the Middle District of North Carolina ruled in favor of the tobacco industry in 1998, finding that the EPA had failed to follow proper scientific and epidemiologic practices and had "cherry picked" evidence to support conclusions to which it had committed in advance. 
The court stated in part: "EPA publicly committed to a conclusion before research had begun...adjusted established procedure and scientific norms to validate the Agency's public conclusion... In conducting the ETS Risk Assessment, disregarded information and made findings on selective information; did not disseminate significant epidemiologic information; deviated from its Risk Assessment Guidelines; failed to disclose important findings and reasoning..." In 2002, the EPA successfully appealed this decision to the United States Court of Appeals for the Fourth Circuit. The EPA's appeal was upheld on the preliminary grounds that the report had no regulatory weight, and the earlier finding was vacated. In 1998, the U.S. Department of Health and Human Services, through the publication by its National Toxicology Program of the 9th Report on Carcinogens, listed environmental tobacco smoke among the known carcinogens, observing of the EPA assessment that "The individual studies were carefully summarized and evaluated." Tobacco-industry funding of research The tobacco industry's role in funding scientific research on secondhand smoke has been controversial. A review of published studies found that tobacco-industry affiliation was strongly correlated with findings exonerating secondhand smoke; researchers affiliated with the tobacco industry were 88 times more likely than independent researchers to conclude that secondhand smoke was not harmful. In a specific example which came to light with the release of tobacco-industry documents, Philip Morris executives successfully encouraged an author to revise his industry-funded review article to downplay the role of secondhand smoke in sudden infant death syndrome. The 2006 U.S. Surgeon General's report criticized the tobacco industry's role in the scientific debate. The industry's strategy was outlined at an international meeting of tobacco companies in 1988, at which Philip Morris proposed to set up a team of scientists, organized by company lawyers, to "carry out work on ETS to keep the controversy alive." All scientific research was subject to oversight and "filtering" by tobacco-industry lawyers; Philip Morris reported that it was putting "...vast amounts of funding into these projects... in attempting to coordinate and pay so many scientists on an international basis to keep the ETS controversy alive." Tobacco industry response Measures to tackle secondhand smoke pose a serious economic threat to the tobacco industry, having broadened the definition of smoking beyond a personal habit to something with a social impact. In a confidential 1978 report, the tobacco industry described increasing public concerns about secondhand smoke as "the most dangerous development to the viability of the tobacco industry that has yet occurred." In United States of America v. Philip Morris et al., the District Court for the District of Columbia found that the tobacco industry "... recognized from the mid-1970s forward that the health effects of passive smoking posed a profound threat to industry viability and cigarette profits," and that the industry responded with "efforts to undermine and discredit the scientific consensus that ETS causes disease." Accordingly, the tobacco industry has developed several strategies to minimise the impact on its business: The industry has sought to position the secondhand smoke debate as essentially concerned with civil liberties and smokers' rights rather than with health, by funding groups such as FOREST. 
Funding bias in research: in all reviews of the effects of secondhand smoke on health published between 1980 and 1995, the only factor associated with concluding that secondhand smoke is not harmful was whether an author was affiliated with the tobacco industry. However, not all studies that failed to find evidence of harm were by industry-affiliated authors. Delaying and discrediting legitimate research (for example, the industry attempted to discredit Takeshi Hirayama's landmark study, and to delay and discredit a major Australian report on passive smoking). Promoting "good epidemiology" and attacking so-called junk science (a term popularised by industry lobbyist Steven Milloy): attacking the methodology behind research showing health risks as flawed, and attempting to promote "sound science". Ong and Glantz (2001) cite an internal Philip Morris memo giving evidence of this as company policy. Creation of outlets for favourable research: in 1989, the tobacco industry established the International Society of the Built Environment, which published the peer-reviewed journal Indoor and Built Environment. This journal did not require conflict-of-interest disclosures from its authors. Through documents made available by the Master Settlement, it was found that the executive board of the society and the editorial board of the journal were dominated by paid tobacco-industry consultants. The journal published a large amount of material on passive smoking, much of which was "industry-positive". Citing the tobacco industry's production of biased research and efforts to undermine scientific findings, the 2006 U.S. Surgeon General's report concluded that the industry had "attempted to sustain controversy even as the scientific community reached consensus... industry documents indicate that the tobacco industry has engaged in widespread activities... that have gone beyond the bounds of accepted scientific practice." The U.S. District Court, in U.S.A. v. Philip Morris et al., found that "...despite their internal acknowledgment of the hazards of secondhand smoke, Defendants have fraudulently denied that ETS causes disease." Position of major tobacco companies The positions of major tobacco companies on the issue of secondhand smoke are somewhat varied. In general, tobacco companies have continued to focus on questioning the methodology of studies showing that secondhand smoke is harmful. Some (such as British American Tobacco and Philip Morris) acknowledge the medical consensus that secondhand smoke carries health risks, while others continue to assert that the evidence is inconclusive. Several tobacco companies advocate the creation of smoke-free areas within public buildings as an alternative to comprehensive smoke-free laws. US racketeering lawsuit against tobacco companies On September 22, 1999, the U.S. Department of Justice filed a racketeering lawsuit against Philip Morris and other major cigarette manufacturers. Almost seven years later, on August 17, 2006, U.S. District Court Judge Gladys Kessler found that the Government had proven its case and that the tobacco company defendants had violated the Racketeer Influenced and Corrupt Organizations Act (RICO). 
In particular, Judge Kessler found that PM and other tobacco companies had: conspired to minimize, distort and confuse the public about the health hazards of smoking; publicly denied, while internally acknowledging, that secondhand tobacco smoke is harmful to nonsmokers; and destroyed documents relevant to litigation. The ruling found that tobacco companies undertook joint efforts to undermine and discredit the scientific consensus that secondhand smoke causes disease, notably by controlling research findings via paid consultants. The ruling also concluded that tobacco companies were fraudulently continuing to deny the health effects of ETS exposure. On May 22, 2009, a three-judge panel of the U.S. Court of Appeals for the District of Columbia Circuit unanimously upheld the lower court's 2006 ruling. Smoke-free laws As a consequence of the health risks associated with secondhand smoke, many national and local governments have outlawed smoking in indoor public places, including restaurants, cafés, and nightclubs, as well as some outdoor open areas. Ireland was the first country in the world to institute a comprehensive national ban on smoking in all indoor workplaces, on 29 March 2004. Since then, many others have followed suit. The countries which have ratified the WHO Framework Convention on Tobacco Control (FCTC) have a legal obligation to implement effective legislation "for protection from exposure to tobacco smoke in indoor workplaces, public transport, indoor public places and, as appropriate, other public places" (Article 8 of the FCTC). The parties to the FCTC have further adopted Guidelines on the Protection from Exposure to Secondhand Smoke, which state that "effective measures to provide protection from exposure to tobacco smoke ... require the total elimination of smoking and tobacco smoke in a particular space or environment in order to create a 100% smoke-free environment." Opinion polls have shown considerable support for smoke-free laws. In June 2007, a survey of 15 countries found 80% approval for such laws. A survey in France, reputedly a nation of smokers, showed 70% support. Effects Smoking bans by governments result in decreased harm from secondhand smoke, including fewer admissions for acute coronary syndrome. In the first 18 months after the town of Pueblo, Colorado, enacted a smoke-free law in 2003, hospital admissions for heart attacks dropped 27%. Admissions in neighbouring towns without smoke-free laws showed no change, and the decline in heart attacks in Pueblo was attributed to the resulting reduction in secondhand smoke exposure. A 2004 smoking ban instituted in Massachusetts workplaces decreased workers' secondhand smoke exposure from 8% of workers in 2003 to 5.4% in 2010. A 2016 review also found that bans and policy changes in specific locations such as hospitals or universities can lead to reduced smoking rates. In prison settings, bans might lead to reduced mortality and to lower exposure to secondhand smoke. In 2001, a systematic review for the Guide to Community Preventive Services found strong evidence of the effectiveness of smoke-free policies and restrictions in reducing exposure to secondhand smoke. A follow-up review examined whether smoking bans also reduced the prevalence of tobacco use, drawing on articles published up to 2005. 
The examined studies provided sufficient evidence that smoke-free policies reduce tobacco use among workers when implemented in worksites or by communities. While a number of studies funded by the tobacco industry have claimed a negative economic impact from smoke-free laws, no independently funded research has shown any such impact. A 2003 review reported that independently funded, methodologically sound research consistently found either no economic impact or a positive impact from smoke-free laws. Air nicotine levels were measured in Guatemalan bars and restaurants before and after a smoke-free law was implemented in 2009. Nicotine concentrations decreased significantly in both the bars and the restaurants measured, and employees' support for a smoke-free workplace increased substantially in the post-implementation survey compared with the pre-implementation survey. Public opinion Recent surveys taken by the Society for Research on Nicotine and Tobacco demonstrate public support for smoke-free policies in outdoor areas. A vast majority of the public supports restricting smoking in various outdoor settings. Respondents supported the policies for varying reasons, such as litter control, establishing positive smoke-free role models for youth, reducing youths' opportunities to smoke, and avoiding exposure to secondhand smoke. Alternative forms Alternatives to smoke-free laws have also been proposed as a means of harm reduction, particularly in bars and restaurants. For example, critics of smoke-free laws cite studies suggesting ventilation as a means of reducing tobacco smoke pollutants and improving air quality. Ventilation has also been heavily promoted by the tobacco industry as an alternative to outright bans, via a network of ostensibly independent experts with often undisclosed ties to the industry. However, not all critics have connections to the industry. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) officially concluded in 2005 that while completely isolated smoking rooms do eliminate the risk to nearby non-smoking areas, smoking bans are the only means of eliminating the health risks associated with indoor exposure. It further concluded that no system of dilution or cleaning was effective at eliminating risk. The U.S. Surgeon General and the European Commission Joint Research Centre have reached similar conclusions. The implementation guidelines for the WHO Framework Convention on Tobacco Control state that engineering approaches, such as ventilation, are ineffective and do not protect against secondhand smoke exposure. However, this does not necessarily mean that such measures are useless in reducing harm, only that they fall short of the goal of reducing exposure completely to zero. Others have suggested a system of tradable smoking pollution permits, similar to the cap-and-trade pollution permit systems used by the United States Environmental Protection Agency in recent decades to curb other types of pollution. This would guarantee that a portion of the bars and restaurants in a jurisdiction would be smoke-free, while leaving the decision to the market. In animals Multiple studies have been conducted to determine the carcinogenicity of environmental tobacco smoke to animals. These studies typically fall into the categories of simulated environmental tobacco smoke, administration of condensates of sidestream smoke, or observational studies of cancer among pets. 
To simulate environmental tobacco smoke, scientists expose animals to sidestream smoke, that which emanates from the cigarette's burning cone and through its paper, or to a combination of mainstream and sidestream smoke. The IARC monographs conclude that mice with prolonged exposure to simulated environmental tobacco smoke (six hours a day, five days a week, for five months, with a subsequent four-month interval before dissection) have a significantly higher incidence and multiplicity of lung tumors than control groups. The IARC monographs also concluded that sidestream smoke condensates had a significantly higher carcinogenic effect on mice than did mainstream smoke condensates. Observational studies Secondhand smoke is popularly recognised as a risk factor for cancer in pets. A study conducted by the Tufts University School of Veterinary Medicine and the University of Massachusetts Amherst linked the occurrence of feline oral cancer to exposure to environmental tobacco smoke through an overexpression of the p53 gene. Another study conducted at the same universities concluded that cats living with a smoker were more likely to develop feline lymphoma; the risk increased with the duration of exposure to secondhand smoke and the number of smokers in the household. A study by Colorado State University researchers, looking at cases of canine lung cancer, was generally inconclusive, though the authors reported a weak relation for lung cancer in dogs exposed to environmental tobacco smoke. The number of smokers within the home, the number of packs smoked in the home per day, and the amount of time that the dog spent within the home had no effect on the dog's risk of lung cancer.
Biology and health sciences
Health and fitness: General
Health
712625
https://en.wikipedia.org/wiki/Scree
Scree
Scree is a collection of broken rock fragments at the base of a cliff or other steep rocky mass that has accumulated through periodic rockfall. Landforms associated with these materials are often called talus deposits. The term scree is applied both to an unstable steep mountain slope composed of rock fragments and other debris, and to the mixture of rock fragments and debris itself. It is loosely synonymous with talus, material that accumulates at the base of a projecting mass of rock, or talus slope, a landform composed of talus. The term scree is sometimes used more broadly for any sheet of loose rock fragments mantling a slope, while talus is used more narrowly for material that accumulates at the base of a cliff or other rocky slope from which it has obviously eroded. Scree is formed by rockfall, which distinguishes it from colluvium. Colluvium is rock fragments or soil deposited by rainwash, sheetwash, or slow downhill creep, usually at the base of gentle slopes or hillsides. However, the terms scree, talus, and sometimes colluvium tend to be used interchangeably. The term talus deposit is sometimes used to distinguish the landform from the material of which it is made. The exact definition of scree in the primary literature is somewhat relaxed, and it often overlaps with both talus and colluvium. Etymology The term scree comes from the Old Norse word for landslide, skriða, while the term talus is a French word meaning a slope or embankment. Description Talus deposits typically have a concave-upward form, with the maximum inclination corresponding to the angle of repose of the mean debris particle size. Scree slopes are often assumed to be close to the angle of repose, the slope at which a pile of granular material becomes mechanically unstable. However, careful examination of scree slopes shows that only those that are either rapidly accumulating new material, or experiencing rapid removal of material from their bases, are close to the angle of repose. Most scree slopes are less steep, and they often show a concave shape, so that the foot of the slope is less steep than the top. Scree with large, boulder-sized rock fragments may form talus caves, human-sized passages between the boulders. Formation The formation of scree and talus deposits results from physical and chemical weathering acting on a rock face and erosive processes transporting the material downslope. In high-altitude arctic and subarctic regions, scree slopes and talus deposits are typically adjacent to hills and river valleys. These steep slopes usually originate from late-Pleistocene periglacial processes. There are five main stages of scree slope evolution: accumulation, consolidation, weathering, encroaching vegetation, and slope degradation. Scree slopes form as a result of accumulated loose, coarse-grained material. Within the scree slope itself, however, there is generally good sorting of sediment by size: larger particles accumulate more rapidly at the bottom of the slope. Cementation occurs as fine-grained material fills the gaps between debris. The speed of consolidation depends on the composition of the slope; clayey components bind debris together faster than sandy ones. Should weathering outpace the supply of sediment, plants may take root. Plant roots diminish the cohesive forces between the coarse and fine components, degrading the slope. 
The predominant processes that degrade a rock slope depend largely on the regional climate (see below), but also on the thermal and topographic stresses governing the parent rock material. Example process domains include physical weathering, chemical weathering, biotic processes, thermal stresses, and topographic stresses. Physical weathering processes Scree formation is commonly attributed to the formation of ice within mountain rock slopes. The presence of joints, fractures, and other heterogeneities in the rock wall can allow precipitation, groundwater, and surface runoff to flow through the rock. If the temperature drops below the freezing point of the fluid contained within the rock, during particularly cold evenings, for example, this water can freeze. Since water expands by 9% when it freezes, it can generate large forces that either create new cracks or wedge blocks into an unstable position. Special boundary conditions (rapid freezing and water confinement) may be required for this to happen. Freeze-thaw scree production is thought to be most common during the spring and fall, when daily temperatures fluctuate around the freezing point of water and snowmelt produces ample free water. The efficiency of freeze-thaw processes in scree production is a subject of ongoing debate. Many researchers believe that ice formation in large open fracture systems cannot generate high enough pressures to force parent rocks apart, and instead suggest that the water and ice simply flow out of the fractures as pressure builds. Many argue that frost heaving, like that known to act in soil in permafrost areas, may play an important role in cliff degradation in cold places. Eventually, a rock slope may be completely covered by its own scree, so that production of new material ceases. The slope is then said to be "mantled" with debris. However, since these deposits are still unconsolidated, the deposit slopes themselves may fail. If the talus deposit pile shifts and the particles exceed the angle of repose, the scree itself may slide and fail. Chemical weathering processes Phenomena such as acid rain may also contribute to the chemical degradation of rocks and produce more loose sediments. Biotic weathering processes Biotic processes often intersect with both physical and chemical weathering regimes, as the organisms that interact with rocks can mechanically or chemically alter them. Lichens frequently grow on the surface of, or within, rocks. Particularly during the initial colonization process, a lichen often inserts its hyphae into small fractures or mineral cleavage planes in the host rock. As the lichen grows, the hyphae expand and force the fractures to widen. This increases the potential for fragmentation, possibly leading to rockfalls. During the growth of the lichen thallus, small fragments of the host rock can be incorporated into the biological structure and weaken the rock. Freeze-thaw action of the entire lichen body due to microclimatic changes in moisture content can alternately cause thermal contraction and expansion, which also stresses the host rock. Lichens also produce a number of organic acids as metabolic byproducts. These often react with the host rock, dissolving minerals and breaking down the substrate into unconsolidated sediments. Interaction with glaciers Scree often collects at the base of glaciers, concealing them from their environment. 
For example, Lech dl Dragon, in the Sella group of the Dolomites, is derived from the meltwater of a glacier and is hidden under a thick layer of scree. Debris cover on a glacier affects the energy balance and, therefore, the melting process. Whether the glacier ice melts more rapidly or more slowly is determined by the thickness of the layer of scree on its surface. The amount of energy reaching the surface of the ice below the debris can be estimated via the one-dimensional, homogeneous-material form of Fourier's law: Q = k(Ts − Ti)/d, where k is the thermal conductivity of the debris material, Ts is the ambient temperature above the debris surface, Ti is the temperature at the lower surface of the debris, and d is the thickness of the debris layer (a short numerical sketch of this estimate is given at the end of this article). Debris with a low thermal conductivity, or a high thermal resistivity, will not efficiently transfer energy through to the glacier, meaning the amount of heat energy reaching the ice surface is substantially lessened. This can act to insulate the glacier from incoming radiation. Albedo (radiation reflection) The albedo, or the ability of a material to reflect incoming radiation energy, is also an important quality to consider. Generally, the debris will have a lower albedo than the glacier ice it covers, and will thus reflect less incoming solar radiation. Instead, the debris will absorb radiation energy and transfer it through the cover layer to the debris-ice interface. If the ice is covered by a relatively thin layer of debris (less than around 2 centimeters thick), the albedo effect is most important: as scree accumulates atop the glacier, the ice's albedo decreases, more incoming solar radiation is absorbed and transferred to the ice surface, and the ice uses that energy in the process of melting. However, once the debris cover reaches 2 or more centimeters in thickness, the albedo effect begins to dissipate, and the debris blanket instead acts to insulate the glacier, preventing incoming radiation from penetrating the scree and reaching the ice surface. In addition to rocky debris, thick snow cover can form an insulating blanket between the cold winter atmosphere and subnivean spaces in screes. As a result, soil, bedrock, and also subterranean voids in screes do not freeze at high elevations. Microclimates A scree has many small interstitial voids, while an ice cave has a few large hollows. Due to cold-air seepage and air circulation, the bottoms of scree slopes have a thermal regime similar to that of ice caves. Because subsurface ice is separated from the surface by thin, permeable sheets of sediment, screes experience cold-air seepage from the bottom of the slope, where the sediment is thinnest. This freezing circulating air maintains internal scree temperatures 6.8–9.0 °C colder than external scree temperatures. These sub-0 °C thermal anomalies occur up to 1,000 m below sites with mean annual air temperatures of 0 °C. Patchy permafrost, which forms under conditions below 0 °C, probably exists at the bottom of some scree slopes despite mean annual air temperatures of 6.8–7.5 °C. Biodiversity Scree microclimates maintained by circulating freezing air create microhabitats that support taiga plants and animals that could not otherwise survive regional conditions. 
A Czech Academy of Sciences research team led by physical chemist Vlastimil Růžička, analyzing 66 scree slopes, published a paper in the Journal of Natural History in 2012, reporting that: "This microhabitat, as well as interstitial spaces between scree blocks elsewhere on this slope, supports an important assemblage of boreal and arctic bryophytes, pteridophytes, and arthropods that are disjunct from their normal ranges far to the north. This freezing scree slope represents a classic example of a palaeo refugium that significantly contributes to [the] protection and maintenance of regional landscape biodiversity." Ice Mountain, a massive scree in West Virginia, supports distinctly different distributions of plant and animal species than northern latitudes. Scree running (activity) Scree running is the activity of running down a scree slope, which can be very quick, as the scree moves with the runner. Some scree slopes can no longer be run because the stones have shifted towards the bottom.
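As a rough numerical illustration of the Fourier's-law estimate described in the glacier section above, here is a minimal sketch in Python. The conductivity and temperature values are round numbers assumed for illustration, not field measurements.

```python
# Sub-debris heat flux Q = k * (Ts - Ti) / d (one-dimensional Fourier's law,
# homogeneous debris layer), as described in the glacier section above.

def sub_debris_heat_flux(k: float, t_surface: float, t_ice: float,
                         thickness: float) -> float:
    """Conductive heat flux (W/m^2) through a homogeneous debris layer."""
    return k * (t_surface - t_ice) / thickness

K_DEBRIS = 1.0    # assumed thermal conductivity of rocky debris, W/(m*K)
T_SURFACE = 5.0   # assumed debris-surface temperature, deg C
T_ICE = 0.0       # melting ice surface, deg C

# Doubling the debris thickness halves the energy reaching the ice,
# which is why a thick scree cover insulates the glacier beneath it.
for d in (0.02, 0.10, 0.50):
    q = sub_debris_heat_flux(K_DEBRIS, T_SURFACE, T_ICE, d)
    print(f"debris {d:4.2f} m thick -> Q = {q:6.1f} W/m^2")
```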
Physical sciences
Montane landforms
Earth science
712675
https://en.wikipedia.org/wiki/Operator%20theory
Operator theory
In mathematics, operator theory is the study of linear operators on function spaces, beginning with differential operators and integral operators. The operators may be presented abstractly by their characteristics, such as bounded linear operators or closed operators, and consideration may be given to nonlinear operators. The study, which depends heavily on the topology of function spaces, is a branch of functional analysis. If a collection of operators forms an algebra over a field, then it is an operator algebra. The description of operator algebras is part of operator theory. Single operator theory Single operator theory deals with the properties and classification of operators, considered one at a time. For example, the classification of normal operators in terms of their spectra falls into this category. Spectrum of operators The spectral theorem is any of a number of results about linear operators or about matrices (Sunder, V. S., Functional Analysis: Spectral Theory, Birkhäuser Verlag, 1997). In broad terms, the spectral theorem provides conditions under which an operator or a matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This concept of diagonalization is relatively straightforward for operators on finite-dimensional spaces, but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modelled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras.
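To ground the finite-dimensional case described above, here is a minimal numerical sketch in Python (using NumPy). It verifies that a Hermitian (hence normal) matrix is unitarily diagonalizable with real eigenvalues; the matrix itself is randomly generated and purely illustrative.

```python
import numpy as np

# Build a random Hermitian matrix: A = (B + B*) / 2 equals its own
# conjugate transpose, hence it is normal.
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2

# Finite-dimensional spectral theorem: A = V diag(w) V*, with w real
# (the spectrum) and V unitary (its columns form an orthonormal eigenbasis).
w, V = np.linalg.eigh(A)
A_reconstructed = V @ np.diag(w) @ V.conj().T

assert np.allclose(A, A_reconstructed)          # the diagonalization holds
assert np.allclose(V.conj().T @ V, np.eye(4))   # V is unitary
print("spectrum:", np.round(w, 3))
```

In infinite dimensions the diagonal matrix is replaced by a multiplication operator, as noted in the text above.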
Mathematics
Mathematical analysis
null
712694
https://en.wikipedia.org/wiki/Push-up
Push-up
The push-up (press-up in British English) is a common calisthenics exercise beginning from the prone position. By raising and lowering the body using the arms, push-ups exercise the pectoral muscles, triceps, and anterior deltoids, with ancillary benefits to the rest of the deltoids, serratus anterior, coracobrachialis, and the midsection as a whole. Push-ups are a basic exercise used in civilian athletic training and physical education, and commonly in military physical training. They are also a common form of punishment in the military, school sport, and some martial arts disciplines, both as a means of humiliation and because they require no equipment. Variations of push-ups, such as wide-arm and diamond push-ups, target specific muscle groups and provide further challenges. Etymology The American English term push-up was first used between 1905 and 1910, while the British press-up was first recorded in 1920. Body mass supported during push-ups According to a study published in The Journal of Strength and Conditioning Research, test subjects supported with their hands, on average, 69.16% of their body mass in the up position and 75.04% in the down position during traditional push-ups. In modified push-ups, where the knees are used as the pivot point, subjects supported 53.56% and 61.80% of their body mass in the up and down positions, respectively (a short worked example based on these figures appears at the end of this article). Muscles worked While the push-up primarily targets the muscles of the chest, arms, and shoulders, the support required from other muscles results in a wider range of muscles being integrated into the exercise. Abdominals The rectus abdominis and transversus abdominis contract continually while performing push-ups to hold the body off the floor and keep the legs and torso aligned. The rectus abdominis spans the front of the abdomen and is the most prominent of the abdominal muscles. The transversus abdominis lies deep within the abdomen, wrapping around the entire abdominal area. Both muscles compress the abdomen, and the rectus abdominis also flexes the spine forward, although it does not execute this function when performing push-ups. Deltoid The anterior portion of the deltoid muscle is one of the major shoulder-joint horizontal adductors, moving the upper arms toward the chest during the upward phase of a push-up. It also helps control the speed of movement during the downward phase. The deltoid attaches to parts of the clavicle and scapula, just above the shoulder joint, on one end, and to the outside of the humerus bone on the other. Along with horizontal adduction, the anterior deltoid assists with flexion and internal rotation of the humerus within the shoulder socket. Chest muscles The push-up requires the work of many muscle groups, with one of the primary muscle groups being the chest muscles, the pectoralis major and minor. These are the two large chest muscles and the main pushing muscle group of the upper body. When pushing and lowering the body during a push-up, the pectoralis major is doing most of the work. As a result, these muscles become very strong and can become defined as lean muscle after doing push-ups regularly. Stabilizers: back body The push-up depends on stabilizer muscles as the body is pushed and lowered. The erector spinae, the main stabilizer group in the back, is made up of three muscles: the spinalis, the longissimus, and the iliocostalis. The spinalis runs adjacent to the spine, the longissimus runs adjacent to the spinalis, and the iliocostalis runs adjacent to the longissimus and over the ribs. 
Two muscles, the gluteus medius and gluteus minimus, stabilize the upper leg. The medius and minimus sit under the gluteus maximus, the largest muscle of the buttocks. Triceps brachii While the anterior deltoids and pectoralis major muscles work to horizontally adduct the upper arms during the upward phase of a push-up, the triceps brachii muscles, or triceps for short, are also hard at work extending the elbow joints so the arms can be fully extended. The triceps also control the speed of elbow-joint flexion during the downward phase of the exercise. The closer together the hands are placed during a push-up, the harder the triceps work. The muscle is divided into three heads: the lateral head, the long head, and the medial head. The lateral and medial heads attach to the back of the humerus bone, and the long head attaches just behind the shoulder socket on one end; all three heads combine and attach to the back of the elbow on the other. Forearms Stabilizers include wrist and forearm muscles, the knee extensors, and the hip/spine flexors, which all work isometrically to maintain a proper plank position in the standard prone push-up. Biceps During the push-up exercise, the short head of the biceps brachii muscle acts as a dynamic stabilizer. This means the muscle activates at both ends, the elbow and the shoulder, to help stabilize the joints. Joints and tendons Inner muscles that support the operation of the fingers, wrists, forearms, and elbows are also worked isometrically. Some push-up modifications that place the arms at different heights effectively engage the rotator cuff. Variations In the "full push-up", the back and legs are straight and off the floor. There are several variations besides the common push-up. These include bringing the thumbs and index fingers of both hands together (a "diamond push-up") as well as having the elbows pointed towards the knees. These variations are intended to put greater emphasis on the triceps or shoulders, rather than the chest muscles. When both hands are unbalanced or on uneven surfaces, the exercise works the body core. Raising the feet or hands onto elevated surfaces during the exercise emphasizes the upper (minor) or lower (major) pectorals, respectively. Raising the hands with the aid of push-up bars or a dumbbell allows for a greater range of motion, providing further stress for the muscles. Weighted push-ups Classic push-ups can be progressively overloaded using barbell plates, resistance bands, or any other form of weight, usually positioned on the upper back. Although very effective, this exercise is not commonly performed because of the difficulty of loading the human body in that position. An alternative way to add weight is to place the hands on high handle bars and elevate the feet on a high surface to get into a suspended push-up position. Because of the distance between the pelvis and the floor, a dipping belt can then be used to hang weights from the pelvis, which makes adding extra weight to the push-up more efficient. Knee push-ups "Modified" or "knee" push-ups are performed by supporting the lower body on the knees instead of the toes, which reduces the difficulty. These are sometimes used in fitness tests for women, corresponding to regular push-ups for men. They are useful for warm-ups and cool-downs, pyramids/drop sets, endurance training, and rehabilitation. They can also be used to train in a more explosive plyometric manner (as in clapping push-ups) when one cannot yet perform such movements on the toes. 
It can also be used with the one-arm variations as a transition. However, the intense pressure on the knees can be harmful. Planche push-ups An extremely difficult variation is to perform a push-up using only the hands, without resting the feet on the floor, i.e., starting from and returning to the planche position. These are known as "planche push-ups". To do this variation, the body's center of gravity must be kept over the hands while performing the push-up by leaning forward while the legs are elevated in the air, which requires great strength and a high level of balance. The entire body weight is lifted in this variation. Tandem push-ups Tandem push-ups are a variation of traditional push-ups, performed by two people working together. Each person faces a different direction, with one of the athletes lying face down on top of the other. It is considered a very challenging variation of the regular push-up because it requires the two people to coordinate with perfect balance, placing their feet on each other's shoulders and pressing up. Tandem knuckle push-ups Tandem knuckle push-ups are a more challenging variation performed by two people together, using their knuckles instead of their palms. They offer a greater challenge compared to tandem push-ups. Hand release push-ups Hand release push-ups are a much more challenging variation of traditional push-ups, involving lifting the hands off the floor between each repetition. This way the athlete is forced to do a full and complete repetition. Lifting the hands off the ground completely resets the movement, eliminating momentum. This variation builds core and shoulder strength in addition to the benefits of the regular push-up. Push-ups on medicine balls Push-ups on medicine balls are a variation of push-ups that involves performing the exercise on top of three medicine balls instead of on the floor. This modification adds an element of instability and core engagement to the exercise, making it more challenging and effective for building upper body strength and stability. More difficult variations include push-ups on medicine balls with one leg raised and decline push-ups on medicine balls. Knuckle push-ups Another variation is to perform push-ups on the knuckles of the fists, rather than with the palms of the hands on the floor. This method is also commonly used in martial arts, such as Karate and Tae Kwon Do, and may be used in boxing training while wearing boxing gloves. The intent, in addition to building strength and conditioning, is to toughen the knuckles, wrist, and forearm in the punching position. This variation also reduces the amount of strain in the wrist compared to the typical "palms on floor" approach, and so it is sometimes used by those with wrist injuries. Such practitioners will usually perform their knuckle push-ups on a padded floor or a rolled-up towel, unlike martial artists, who may do bare-knuckle push-ups on hard floors. Maltese push-up The Maltese push-up is a gymnastic variation of the push-up, in which the hands are positioned further down towards the hips (as opposed to roughly alongside the pectorals), but with a wide distance between them. Hindu push-up The most basic form of Hindu push-up starts from the downward dog yoga position (hands and feet on the floor with the posterior raised) and transitions to an upward dog position (hands and feet on the floor with the torso arched forwards and the legs close to the floor). 
It is also known as a dand, and is still widely known by this title, especially in India, where it originated. It is a common exercise in Indian physical culture and martial arts, particularly Pehlwani. The famous martial artist Bruce Lee also used it in his training regimen and referred to it as a cat stretch, influenced by The Great Gama. It is an effective core strength exercise because it dynamically involves both the anterior and posterior chains in a harmonious fashion. There are numerous variations of the Hindu push-up, although most incorporate the two postures used in the most basic version. It may also be known as a Hanuman push-up, judo push-up, or dive-bomber push-up. Guillotine push-up The guillotine push-up is a form of push-up done from an elevated position (either with the hands on elevated platforms or, traditionally, on medicine balls) wherein the practitioner lowers the chest, head, and neck (thus the name) past the plane of the hands. The goal is to stretch the shoulders and put extra emphasis on the muscles there. Backhanded push-up The backhanded push-up is a form of push-up performed using the backs of the hands, rather than the palms. The current record holder for backhanded push-ups is Bill Kathan, who broke the world record in 2010 by performing 2,396 on Valentine's Day. One-arm versions Many of the push-up variations can be done using one arm instead of two. This further increases the resistance put upon the trainee. Single-leg push-up To do a single-leg push-up, lift one leg off the ground and do a set, then repeat with the other leg. Narrow-grip push-up Do a normal push-up with the hands just a few inches apart from each other underneath the chest. Wide-grip push-up Similar to a normal push-up, but with the hands wider than shoulder width. This works the chest and shoulders more. Clap push-up At the peak of the push-up, push the body up off the ground and quickly clap the hands in midair. The fast jolting force of clap push-ups helps develop explosive power while also bulking up the pectoral muscles. Spider-Man push-up Do a normal push-up but raise one knee toward the elbow of the same side as the body rises. Switch knees with each rep. More stress can be added to the abs with a two-second hold. Declined push-up A declined, or leg-elevated, push-up is done as a normal push-up but with the feet on a bench or a step. Keep the back straight and lower the body down and back up during the exercise. The more the feet are elevated, the more difficult this variation becomes. The declined push-up, with its downward angle, adds additional work to the front shoulder and upper pectoral muscles. Other versions There are some less difficult versions, which reduce the effort by supporting some of the body weight in some way. One can move on to the standard push-up after progress is made. "Wall" push-ups are performed by standing close to a wall and pushing away from the wall with the arms; one can increase the difficulty by moving one's feet further from the wall. "Table" or "chair" push-ups are performed by pushing away from a table, chair, or other object. The lower the object, the more difficult the push-up. One should be sure that the object is securely stationary before attempting to push up from it. "Three-phase" push-ups involve simply breaking a standard push-up into three components and doing each one slowly and deliberately. Participants usually start face down on the floor with hands outstretched either perpendicular or parallel to the body. 
The first phase involves the arms being brought palms down, bent at a 90-degree angle at the elbows. The second phase involves the body being pushed into the up position. The third phase is returning to the starting position. This technique is commonly used after a large block of regular push-ups, as it poses less stress and requires less effort. "Diamond" or "triceps" push-ups are done by placing both palms on the ground and touching together both thumbs and index fingers. This technique requires stronger triceps muscles than regular push-ups because, at the bottom of the stroke, the forearm is nearly parallel to the ground and the elbow is almost completely flexed, resulting in a much higher mechanical load on the triceps. There is a special sub-set of the diamond push-up (so named for the diamond-shaped space between the hands when the thumb and forefinger of the left hand are placed on the floor up against the thumb and forefinger of the right hand) in which the diamond is placed directly below the nose instead of the solar plexus. The nose must almost touch the floor in the center of the diamond. This special diamond push-up is done by the United States Marine Corps. The lips must come within 1 inch of the floor while keeping the neck in line with the straight spine to qualify as a valid push-up. This can be verified by placing a 1-inch foam disposable earplug on the floor in the center of the diamond and picking it up with the lips. "Hollow-body" push-ups are performed in the position gymnasts call the "hollow body". In the plank version of the hollow body, the shoulders are protracted into a pronounced curve in the upper back while the abdominal muscles are tightened and the legs are locked and squeezed together. This variation requires full-body tension to execute and results in greater integration of the hips, shoulders, and core. Plyometrics Two platforms are placed beside the trainee, one on either side. The exercise begins with the hands on either platform supporting the body; the subject then drops to the ground and explosively rebounds with a push-up, extending the torso and arms completely off the ground and returning the hands to the platforms. Another is simply an explosive push-up in which a person attempts to push quickly and with enough force to raise his or her hands several centimeters off the ground, with the body completely suspended on the feet for a moment, a variation of the drop push. This is necessary for performing 'clap push-ups', i.e. clapping the hands while in the air. Aztec push-ups The Aztec push-up is one of the most difficult plyometric push-ups. A person performs an Aztec push-up by beginning in the normal push-up starting position and exploding upward with both the hands and feet, driving the entire body into the air. While in the air, the body is bent at the waist and the hands quickly touch the toes. The body is then quickly straightened and the hands and feet break the fall, returning the body to the normal push-up position for another repetition. 360 push-ups The 360 push-up is a variation of the superman push-up in which one rotates 360 degrees while in the air. Falling and explosive rebound push-ups Here one falls to the ground from a standing position and then, using an explosive push-up, gets back to the standing position. With push-ups, many possibilities for customization and increased intensity are possible. 
Some examples are: One hand can be set on a higher platform than the other, or further away from the other, to give more weight to the opposite arm/side of the body and also exercise many diverse muscles. One can perform push-ups using only the tips of the fingers and thumbs. For increased difficulty, push-ups can be performed on one arm or with added weights. Push-ups between chairs form an integral part of the "Dynamic Tension" course devised by Charles Atlas, and of similar systems. Record breakers and attempts The Guinness world record for the most push-ups in one hour is 3,054, set by Jarrad Young on 11 June 2021 in Queensland, Australia. The most push-ups in 24 hours is 46,001, achieved by Charles Servizio on 25 April 1993. In the animal kingdom Zoologists have observed that certain animals perform a push-up-like action. Most notably, various taxa of fence lizards exhibit this display, primarily involving the male adopting postures to attract females. The western fence lizard is one species that engages in this behavior. (It may be noted that in Mexican Spanish, push-ups are called "lagartijas", which means "lizards".)
Biology and health sciences
Physical fitness
Health
712760
https://en.wikipedia.org/wiki/Trichoplax
Trichoplax
Trichoplax adhaerens is one of the four named species in the phylum Placozoa. The others are Hoilungia hongkongensis, Polyplacotoma mediterranea and Cladtertia collaboinventa. Placozoa is a basal group of multicellular animals and a possible relative of Cnidaria. Trichoplax are very flat organisms, commonly less than 4 mm in diameter, lacking any organs or internal structures. They have two cellular layers: the top epitheloid layer is made of ciliated "cover cells" flattened toward the outside of the organism, and the bottom layer is made up of cylinder cells that possess cilia used in locomotion, and gland cells that lack cilia. Between these layers is the fibre syncytium, a liquid-filled cavity strutted open by star-like fibres. Trichoplax feed by absorbing food particles (mainly microbes) with their underside. They generally reproduce asexually, by dividing or budding, but can also reproduce sexually. Though Trichoplax has a small genome in comparison to other animals, nearly 87% of its 11,514 predicted protein-coding genes are identifiably similar to known genes in other animals. Discovery Trichoplax was discovered in 1883 by the German zoologist Franz Eilhard Schulze, in a seawater aquarium at the Zoological Institute in Graz, Austria. The generic name is derived from the classical Greek θρίξ (thrix), "hair", and πλάξ (plax), "plate". The specific epithet adhaerens is Latin for "adherent", reflecting its propensity to stick to the glass slides and pipettes used in its examination. Although from the very beginning most researchers who studied Trichoplax in any detail realized that it had no close relationship to other animal phyla, the zoologist Thilo Krumbach published a hypothesis in 1907 that Trichoplax is a form of the planula larva of the anemone-like hydrozoan Eleutheria krohni. Although this was refuted in print by Schulze and others, Krumbach's analysis became the standard textbook explanation, and nothing was printed in zoological journals about Trichoplax until the 1960s. In the 1960s and 1970s, new interest among researchers led to acceptance of Placozoa as a new animal phylum. Among the new discoveries were studies of the early phases of the animals' embryonic development and evidence that the animals people had been studying are adults, not larvae. This newfound interest also included study of the organism in nature (as opposed to aquariums). Morphology Trichoplax generally has a thinly flattened, plate-like body around half a millimetre across, occasionally up to two or three millimetres. The body is usually only about 25 μm thick. Because they are so thin and fragile, and because the cilia they use for locomotion are only loosely coordinated, they are constantly being split into two or three separate clones when their cilia move in opposite directions, causing microfractures in the animal's epithelium. One hypothesis is that the larger a motile animal lacking a nervous system is, the less coordinated its locomotion becomes, placing an upper limit on the possible size of such animals. These grayish organisms are so thin that they are transparent when illuminated from behind, and in most cases are barely visible to the naked eye. Like the single-celled amoebae, which they superficially resemble, they continually change their external shape. In addition, spherical phases occasionally form. These may facilitate movement to new habitats. 
Trichoplax lacks tissues and organs; there is also no manifest body symmetry, so it is not possible to distinguish anterior from posterior or left from right. It is made up of a few thousand cells of six types in three distinct layers: dorsal epithelial cells and ventral epithelial cells, each with a single cilium ("monociliate"), ventral gland cells, syncytial fiber cells, lipophils, and crystal cells (each containing a birefringent crystal, arrayed around the rim). Lacking sensory and muscle cells, it moves using the cilia on its external surface. The collective movements of the cilia are coordinated entirely by mechanical interactions. Signal processing There are no neurons present, but in the absence of a nervous system the animal uses short chains of amino acids known as peptides for cell communication, in a manner resembling the way animals with neurons use neuropeptides for the same purpose. These specialized cells are called peptidergic cells, but unlike neurons they do not use electrical impulses, and their messaging is restricted to sending signals to other nearby cells only; they are unable to both send and receive signals. Individual cells contain and secrete a variety of small peptides, made up of between four and 20 amino acids, which are detected by neighbouring cells. Each peptide can be used individually to send a signal to other cells, but also sequentially or together in different combinations, creating a huge number of different types of signals. This allows for a relatively complex behavioural repertoire, including behaviours such as "crinkling", turning, flattening, and internal "churning". The genome of Trichoplax codes for eighty-five neurotransmitter receptors, more than in any other sequenced animal. Epitheloid Both structurally and functionally, it is possible to distinguish a back or dorsal side from a belly or ventral side in Trichoplax adhaerens. Both consist of a single layer of cells coated on the outside with slime and are reminiscent of epithelial tissue, primarily due to the junctions (belt desmosomes) between the cells. In contrast to true epithelium, however, the cell layers of the Placozoa possess no basal lamina, a thin layer of extracellular material underlying epithelium that stiffens it and separates it from the body's interior. The absence of this structure, which is otherwise found in all animals except the sponges, can be explained in terms of function: a rigid separating layer would make the amoeboid changes in the shape of Trichoplax adhaerens impossible. Instead of an epithelium, therefore, we speak of an epitheloid in the Placozoa. A mature individual consists of up to a thousand cells that can be divided into four different cell types. The monociliated cells of the dorsal epitheloid are flattened and contain lipid bodies. The cells on the ventral side likewise possess a single cilium, while their elongated columnar shape, with a small cross-section at the surface, packs them very closely together, causing the cilia to be very closely spaced on the ventral side and to form a ciliated "crawling sole". Interspersed among these ventral epitheloid cells are unciliated gland cells thought to be capable of synthesizing digestive enzymes. 
Fibre syncytium Between the two layers of cells is a liquid-filled interior space which, except for the immediate zones of contact with the ventral and dorsal sides, is pervaded by a star-shaped fibre syncytium: a fibrous network that consists essentially of a single cell but contains numerous nuclei that, while separated by internal crosswalls (septa), do not have true cell membranes between them. Similar structures are also found in the sponges (Porifera) and many fungi. On both sides of the septa are liquid-filled capsules that cause the septa to resemble synapses, i.e. the nerve-cell junctions that occur in fully expressed form only in animals with tissues (Eumetazoa). Striking accumulations of calcium ions, which may have a function related to the propagation of stimuli, likewise suggest a possible role as protosynapses. This view is supported by the fact that fluorescent antibodies against cnidarian neurotransmitters, i.e. precisely those signal carriers that are transferred in synapses, bind in high concentrations in certain cells of Trichoplax adhaerens and thus indicate the existence of comparable substances in the Placozoa. The fibre syncytium also contains molecules of actin and probably also of myosin, which occur in the muscle cells of eumetazoans. In the placozoans, they ensure that the individual fibres can relax or contract and thus help determine the animals' shape. In this way, the fibre syncytium assumes the functions of nerve and muscle tissues. Moreover, at least a portion of digestion occurs here. On the other hand, no gelatinous extracellular matrix exists of the kind observed, as mesoglea, in cnidarians and ctenophores. Pluripotent cells, which can differentiate into other cell types, have not yet been demonstrated unambiguously in T. adhaerens, in contrast to the case of the Eumetazoa. The conventional view is that dorsal and ventral epithelioid cells arise only from other cells of the same type. Genetics The Trichoplax genome contains about 98 million base pairs and 11,514 predicted protein-coding genes. All nuclei of placozoan cells contain six pairs of chromosomes that are only about two to three micrometres in size. Three pairs are metacentric, meaning that the centromere, the attachment point for the spindle fibers in cell division, is located at the center of each chromosome; the other pairs are acrocentric, with the centromere at an extreme end. The cells of the fiber syncytium can be tetraploid, i.e. contain a quadruple complement of chromosomes. A single complement of chromosomes in Trichoplax adhaerens contains a total of fewer than fifty million base pairs and thus forms the smallest animal genome; the number of base pairs in the intestinal bacterium Escherichia coli is smaller by a factor of only ten. The genetic complement of Trichoplax adhaerens has not yet been very well researched; it has, however, already been possible to identify several genes, such as Brachyury and TBX2/TBX3, which are homologous to corresponding base-pair sequences in eumetazoans. Of particular significance is Trox-2, a placozoan gene known under the name Cnox-2 in cnidarians and as Gsx in the bilaterally symmetrical Bilateria. As a homeobox or Hox gene, it plays a role in organization and differentiation along the axis of symmetry in the embryonic development of eumetazoans; in cnidarians, it appears to determine the position of the mouth-facing (oral) and opposite-facing (aboral) sides of the organism. 
Since placozoans possess no axes of symmetry, exactly where the gene is transcribed in the body of Trichoplax is of special interest. Antibody studies have shown that the gene's product occurs only in the transition zones of the dorsal and ventral sides, perhaps in a fifth cell type that has not yet been characterized. It is not yet clear whether these cells, contrary to traditional views, are stem cells, which play a role in cell differentiation. In any case, Trox-2 can be considered a possible candidate for a proto-Hox gene, from which the other genes in this important family could have arisen through gene duplication and variation. Initially, molecular-biology methods were applied unsuccessfully to test the various theories regarding the Placozoa's position in the Metazoa system. No clarification was achieved with standard markers such as 18S rDNA/RNA: the marker sequence was apparently "garbled", i.e. rendered uninformative as the result of many mutations. Nevertheless, this negative result supported the suspicion that Trichoplax might represent an extremely primitive lineage of metazoans, since a very long period of time had to be assumed for the accumulation of so many mutations. Of the 11,514 genes identified in the six chromosomes of Trichoplax, 87% are identifiably similar to genes in cnidarians and bilaterians. In those Trichoplax genes for which equivalent genes can be identified in the human genome, over 80% of the introns (the regions within genes that are removed from RNA molecules before their sequences are translated in protein synthesis) are found in the same locations as in the corresponding human genes. The arrangement of genes in groups on chromosomes is also conserved between the Trichoplax and human genomes. This contrasts with other model systems, such as fruit flies and soil nematodes, which have experienced a paring down of non-coding regions and a loss of ancestral genome organization. Relationship with animals The phylogenetic relationship between Trichoplax and other animals has been debated for some time. A variety of hypotheses have been advanced based on the few morphological characteristics of this simple organism that could be identified. More recently, a comparison of the Trichoplax mitochondrial genome suggested that Trichoplax is a basal metazoan, less closely related to all other animals, including sponges, than they are to each other. This implies that the Placozoa would have arisen relatively soon after the evolutionary transition from unicellular to multicellular forms. But an even more recent analysis of the much larger Trichoplax nuclear genome instead supports the hypothesis that Trichoplax is a basal eumetazoan, that is, more closely related to Cnidaria and other animals than any of those animals are to sponges. This is consistent with the presence in Trichoplax of cell layers reminiscent of epithelial tissue (see above). Distribution and habitat Trichoplax was first discovered on the walls of a marine aquarium, and is rarely observed in its natural habitat. Trichoplax has been collected, among other places, in the Red Sea, the Mediterranean, and the Caribbean, off Hawaii, Guam, Samoa, Japan, Vietnam, Brazil, and Papua New Guinea, and on the Great Barrier Reef off the east coast of Australia. Field specimens tend to be found in the coastal tidal zones of tropical and subtropical seas, on such substrates as the trunks and roots of mangroves, shells of molluscs, fragments of stony corals, or simply on pieces of rock. 
One study was able to detect seasonal population fluctuations, the causes of which have not yet been deduced. Feeding and symbionts Trichoplax adhaerens feeds on small algae, particularly on green algae (Chlorophyta) of the genus Chlorella, cryptomonads (Cryptophyta) of the genera Cryptomonas and Rhodomonas, and blue-green bacteria (Cyanobacteria) such as Phormidium inundatum, but also on detritus from other organisms. In feeding, one or several small pockets form around particles of nutrients on the ventral side, into which digestive enzymes are released by the gland cells; the organisms thus develop a temporary "external stomach", so to speak. The enclosed nutrients are then taken up by pinocytosis ("cell-drinking") by the ciliated cells located on the ventral surface. Entire single-celled organisms can also be ingested through the upper epitheloid (that is, the "dorsal surface" of the animal). This mode of feeding could be unique in the animal kingdom: the particles, collected in a slime layer, are drawn through the intercellular gaps (cellular interstices) of the epitheloid by the fibre cells and then digested by phagocytosis ("cell-eating"). Such "collecting" of nutrient particles through an intact tegument is only possible because some "insulating" elements (specifically, a basal lamina under the epitheloid and certain types of cell-cell junctions) are not present in the Placozoa. When the concentration of algae is high, the animals are more likely to engage in social feeding behavior. Not all bacteria in the interior of the Placozoa are digested as food: in the endoplasmic reticulum, an organelle of the fibre syncytium, bacteria are frequently found that appear to live in symbiosis with Trichoplax adhaerens. These endosymbionts, which are no longer able to survive outside their host, are transferred from one generation to the next through both vegetative and sexual reproduction. Locomotion Placozoa can move in two different ways on solid surfaces: first, their ciliated crawling sole lets them glide slowly across the substrate; second, they can change location by modifying their body shape, as an amoeba does. These movements are not centrally coordinated, since no muscle or nerve tissues exist. Indeed, an individual may move simultaneously in two different directions and consequently divide into two parts. It has been possible to demonstrate a close connection between body shape and the speed of locomotion, which is also a function of available food: at low nutrient density, the spread-out area fluctuates slightly but irregularly, and speed remains relatively constant at about 15 micrometres per second. If nutrient density is high, however, the area covered oscillates with a stable period of about 8 minutes, in which the greatest extent reached by the organism can be as much as twice the smallest. Its speed, which remains consistently below 5 micrometres per second, varies with the same period. In this case, a high speed always corresponds to a reduced area, and vice versa. Since the transition is not smooth but happens abruptly, the two modes of extension can be very clearly separated from one another. The following is a qualitative explanation of the animal's behavior: at low nutrient density, Trichoplax maintains a constant speed in order to uncover food sources without wasting time. Once such a source is identified by high nutrient density, the organism increases its area in regular increments and thereby enlarges the surface in contact with the substrate. 
This enlarges the surface through which nutrients can be ingested. The animal reduces its speed at the same time in order to actually consume all of the available food. Once this is nearly completed, Trichoplax reduces its area again to move on. Because food sources such as algal mats are often relatively extensive, it is reasonable for such an animal to stop moving after a brief period in order to flatten out again and absorb nutrients. Thus Trichoplax progresses relatively slowly in this phase. The actual direction in which Trichoplax moves each time is random: if we measure how fast an individual animal moves away from an arbitrary starting point, we find a linear relationship between elapsed time and the mean square distance between the starting point and the present location. Such a relationship is also characteristic of the random Brownian motion of molecules, which thus can serve as a model for locomotion in the Placozoa. Small animals are also capable of swimming actively with the aid of their cilia. As soon as they come into contact with a possible substrate, a dorsoventral response occurs: the dorsal cilia continue to beat, whereas the cilia of the ventral cells stop their rhythmic beating. At the same time, the ventral surface tries to make contact with the substrate; small protrusions and invaginations, the microvilli found on the surface of the columnar cells, help in attaching to the substrate via their adhesive action. Using T. adhaerens as a model, oscillations at 0.002–0.02 Hz in locomotory and feeding patterns were observed and taken as evidence of complex multicellular integration, dependent on endogenous secretion of signal molecules. Evolutionarily conserved low-molecular-weight transmitters (glutamate, aspartate, glycine, GABA, and ATP) acted as coordinators of distinct locomotory and feeding patterns. Specifically, L-glutamate induced and partially mimicked endogenous feeding cycles, whereas glycine and GABA suppressed feeding. ATP modified feeding in a complex fashion, first causing feeding-like cycles and then suppressing feeding. Trichoplax locomotion was modulated by glycine, GABA, and, surprisingly, by the animals' own mucus trails: mucus triples locomotory speed compared to clean substrates, while glycine and GABA increased the frequency of turns. Regeneration A notable characteristic of the Placozoa is that they can regenerate themselves from extremely small groups of cells. Even when large portions of the organism are removed in the laboratory, a complete animal develops again from the remainder. It is also possible to rub Trichoplax adhaerens through a strainer in such a manner that individual cells are not destroyed but are separated from one another to a large extent. In the test tube, they then find their way back together again to form complete organisms. If this procedure is performed on several previously strained individuals simultaneously, the same thing occurs. In this case, however, cells that previously belonged to a particular individual can suddenly show up as part of another. Reproduction The Placozoa normally propagate asexually, dividing down the middle to produce two (or sometimes three) roughly equal-sized daughters. These remain loosely connected for a while after fission. More rarely, budding processes are observed: spherules of cells separate from the dorsal surface; each of these combines all known cell types and subsequently grows into an individual of its own. Sexual reproduction is thought to be triggered by excessive population density. 
As a result, the animals absorb liquid, begin to swell, and separate from the substrate so that they float freely in the water. In the protected interior space, the ventral cells form an ovum surrounded by a special envelope, the fertilisation membrane; the ovum is supplied with nutrients by the surrounding syncytium, allowing energy-rich yolk to accumulate in its interior. Once maturation of the ovum is complete, the rest of the animal degenerates, liberating the ovum itself. Small, unciliated cells that form at the same time are interpreted to be spermatozoa. It has not yet been possible to observe fertilisation itself; the existence of the fertilisation membrane is, however, currently taken as evidence that it has taken place. Putative eggs have been observed, but they degrade, typically at the 32–64 cell stage. Neither embryonic development nor sperm have been observed. Despite the lack of observation of sexual reproduction in the laboratory, the genetic structure of populations in the wild is compatible with a sexual mode of reproduction, at least for species of the analysed genotype H5. Usually even before its liberation, the ovum initiates cleavage processes in which it becomes completely pinched through at the middle. A ball of cells characteristic of animals, the blastula, is ultimately produced in this manner, with a maximum of 256 cells. Development beyond this 256-cell stage has not yet been observed. Trichoplax lacks a homologue of the Boule protein, which appears to be ubiquitous and conserved in males of all other animal species tested. If its absence implies the species has no males, then its "sexual" reproduction may perhaps be a case of the above-described process of regeneration, combining cells from two separate organisms into one. Because it can clone itself by asexual propagation without limit, the life span of the Placozoa is potentially unlimited; in the laboratory, several lines descended from a single organism have been maintained in culture for an average of 20 years without the occurrence of sexual processes. Role as a model organism Long ignored as an exotic, marginal phenomenon, Trichoplax adhaerens is today viewed as a potential biological model organism. In particular, research is needed to determine how a group of cells that cannot be considered full-fledged epithelial tissue organizes itself, how locomotion and coordination occur in the absence of true muscle and nerve tissue, and how the absence of a concrete body axis affects the animal's biology. At the genetic level, the way in which Trichoplax adhaerens protects against damage to its genome needs to be studied, particularly with regard to the existence of special DNA-repair processes. T. adhaerens can tolerate high levels of radiation damage that are lethal to other animals. Tolerance to X-ray exposure was found to depend on the expression of genes involved in DNA repair and apoptosis, including the gene Mdm2. Complete decoding of the genome should also clarify the placozoans' place in evolution, which continues to be controversial. Its ability to fight cancer through a combination of aggressive DNA repair and ejection of damaged cells makes it a promising organism for cancer research. In addition to basic research, this animal could also be suitable for studying wound-healing and regeneration processes; its as yet unidentified metabolic products should also be researched. Finally, Trichoplax adhaerens is also being considered as an animal model for testing compounds and antibacterial drugs. 
The related lineage Trichoplax sp. H2 has been suggested to be a more suitable model organism than T. adhaerens, due to its abundance and ease of culture. Systematics Francesco Saverio Monticelli described another species in 1893, which he found in the waters around Naples, naming it Treptoplax reptans. However, it has not been observed since 1896, and most zoologists today doubt its existence. Significant genetic differences have been observed between collected specimens matching the morphological description of T. adhaerens, leading scientists to suggest in 2004 that it may be a cryptic species complex. At least twenty haplotypes have since been assigned based on the 16S mitochondrial DNA fragment, with T. adhaerens being equated to the lineage H1. While most haplotypes have not been formally described as species, they have been (with the exception of the morphologically distinct H0, Polyplacotoma mediterranea) provisionally placed into seven distinct clades. The genus Trichoplax was redefined as comprising clades I and II, including haplotypes H1, H2, H3 and H17. A later study defined Trichoplax more restrictively as only clade I (haplotypes H1, H2 and H17), with H3 being suggested to belong to a separate undescribed genus in the family Trichoplacidae. Placozoan haplotypes are not necessarily equivalent to species, and several haplotypes of the related placozoan genus Hoilungia have been found to belong to the same species. Nonetheless, haplotype H2 is usually considered to be a separate undescribed species, referred to as Trichoplax sp. H2. It has been reported to be more robust and abundant than T. adhaerens, and easier to culture, making it a better fit for experimental research. Trichoplax sp. H2 is also distinguished by the presence of an additional cell type, termed "epithelia upper-like", giving it a total of 29 cell types compared to the 28 in T. adhaerens. Comparative genetic studies of Trichoplax adhaerens and the Panama strain of Trichoplax sp. H2 have suggested that their genetic similarity might be due to an interbreeding event having happened in the wild at least several decades ago, with one of them being the result of hybridization between the other and a third unknown strain. Analysis of bacterial endosymbionts supports this as a possible hypothesis, as the endosymbiont found in the Panama strain is closer to the one in T. adhaerens than to the one in the Hawaii strain of Trichoplax sp. H2.
Biology and health sciences
Other
Animals
713497
https://en.wikipedia.org/wiki/Wireless%20mesh%20network
Wireless mesh network
A wireless mesh network (WMN) is a communications network made up of radio nodes organized in a mesh topology. It can also be a form of wireless ad hoc network. A mesh refers to rich interconnection among devices or nodes. Wireless mesh networks often consist of mesh clients, mesh routers and gateways. Mobility of nodes is infrequent: if nodes constantly or frequently move, the mesh spends more time updating routes than delivering data. In a wireless mesh network, the topology tends to be more static, so that route computation can converge and data can be delivered to its destination. Hence, a WMN is a low-mobility, centralized form of wireless ad hoc network. Also, because it sometimes relies on static nodes to act as gateways, it is not a truly all-wireless ad hoc network. Mesh clients are often laptops, cell phones, and other wireless devices. Mesh routers forward traffic to and from the gateways, which may or may not be connected to the Internet. The coverage area of all radio nodes working as a single network is sometimes called a mesh cloud. Access to this mesh cloud depends on the radio nodes working together to create a radio network. A mesh network is reliable and offers redundancy: when one node can no longer operate, the rest of the nodes can still communicate with each other, directly or through one or more intermediate nodes. Wireless mesh networks can self-form and self-heal. They work with different wireless technologies, including 802.11, 802.15, 802.16 and cellular technologies, and need not be restricted to any one technology or protocol. History Wireless mesh radio networks were originally developed for military applications, such that every node could dynamically serve as a router for every other node. In that way, even in the event of the failure of some nodes, the remaining nodes could continue to communicate with each other and, if necessary, serve as uplinks for the other nodes. Early wireless mesh network nodes had a single half-duplex radio that, at any one instant, could either transmit or receive, but not both at the same time. This was accompanied by the development of shared mesh networks. It was subsequently superseded by more complex radio hardware that could receive packets from an upstream node and transmit packets to a downstream node simultaneously (on a different frequency or a different CDMA channel), which allowed the development of switched mesh networks. As the size, cost, and power requirements of radios declined further, nodes could be cost-effectively equipped with multiple radios. This, in turn, permitted each radio to handle a different function, for instance one radio for client access and another for backhaul services. Work in this field has been aided by the use of game-theory methods to analyze strategies for the allocation of resources and routing of packets. Features Architecture Wireless mesh architecture is a first step towards providing cost-effective, low-mobility connectivity over a specific coverage area. Wireless mesh infrastructure is, in effect, a network of routers minus the cabling between nodes. It is built of peer radio devices that do not have to be cabled to a wired port like traditional WLAN access points (AP) do. Mesh infrastructure carries data over large distances by splitting the distance into a series of short hops. Intermediate nodes not only boost the signal, but cooperatively pass data from point A to point B by making forwarding decisions based on their knowledge of the network; that is, they perform routing by first deriving the topology of the network. 
A wireless mesh network has a relatively stable topology, except for the occasional failure of nodes or the addition of new nodes. The path of traffic, being aggregated from a large number of end users, changes infrequently. Practically all the traffic in an infrastructure mesh network is either forwarded to or from a gateway, whereas in wireless ad hoc networks or client mesh networks the traffic flows between arbitrary pairs of nodes. If the rate of mobility among nodes is high, i.e., link breaks happen frequently, wireless mesh networks start to break down and have low communication performance. Management This type of infrastructure can be decentralized (with no central server) or centrally managed (with a central server). Both are relatively inexpensive, and can be very reliable and resilient, as each node needs only transmit as far as the next node. Nodes act as routers to transmit data from nearby nodes to peers that are too far away to reach in a single hop, resulting in a network that can span larger distances. The topology of a mesh network must be relatively stable, i.e., not too much mobility. If one node drops out of the network, due to hardware failure or any other reason, its neighbors can quickly find another route using a routing protocol. Applications Mesh networks may involve either fixed or mobile devices. The solutions are as diverse as communication needs, for example in difficult environments such as emergency situations, tunnels, oil rigs, battlefield surveillance, high-speed mobile-video applications on board public transport, real-time racing-car telemetry, or self-organizing Internet access for communities. An important possible application for wireless mesh networks is VoIP. By using a quality-of-service scheme, the wireless mesh may support routing local telephone calls through the mesh. Most applications in wireless mesh networks are similar to those in wireless ad hoc networks. Some current applications: U.S. military forces are now using wireless mesh networking to connect their computers, mainly ruggedized laptops, in field operations. Electric smart meters now being deployed on residences transfer their readings from one to another and eventually to the central office for billing, without the need for human meter readers or for connecting the meters with cables. The laptops in the One Laptop per Child program use wireless mesh networking to enable students to exchange files and get on the Internet even though they lack wired, cell phone, or other physical connections in their area. Smart home devices such as Google Wi-Fi, Google Nest Wi-Fi, and Google OnHub support Wi-Fi mesh (i.e., Wi-Fi ad hoc) networking. Several manufacturers of Wi-Fi routers began offering mesh routers for home use in the mid-2010s. Some communications satellite constellations operate as mesh networks, with wireless links between adjacent satellites. Calls between two satellite phones are routed through the mesh, from one satellite to another across the constellation, without having to go through an earth station. This makes for a shorter travel distance for the signal, reducing latency, and also allows the constellation to operate with far fewer earth stations than would be required for an equal number of traditional communications satellites. The Iridium satellite constellation consists of 66 active satellites in a polar orbit and operates as a mesh network providing global coverage. 
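The redundancy and self-healing behaviour described above can be made concrete with a small sketch. The following Python fragment is illustrative only: the node names, topology and link costs are invented for the example. It computes a shortest path across a mesh with Dijkstra's algorithm, then removes a failed router and recomputes, which is essentially what a mesh routing protocol automates in a distributed fashion.

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra's algorithm over an adjacency dict {node: {neighbour: cost}}."""
    dist = {src: 0}   # best known cost from src to each node
    prev = {}         # predecessor on the best known path
    heap = [(0, src)]
    visited = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in visited:
            continue
        visited.add(node)
        if node == dst:
            # walk predecessors back to src to recover the route
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return list(reversed(path))
        for nbr, cost in graph.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    return None  # destination unreachable

# Hypothetical mesh: gateway G, mesh routers A, B and D, client C1.
mesh = {
    "G":  {"A": 1, "B": 1},
    "A":  {"G": 1, "B": 1, "C1": 3},
    "B":  {"G": 1, "A": 1, "D": 1},
    "D":  {"B": 1, "C1": 1},
    "C1": {"A": 3, "D": 1},
}

print(shortest_path(mesh, "C1", "G"))   # ['C1', 'D', 'B', 'G']
# Router D fails: drop it and every link pointing at it, then re-route.
for nbr in mesh.pop("D"):
    mesh[nbr].pop("D", None)
print(shortest_path(mesh, "C1", "G"))   # ['C1', 'A', 'G']
```

In a real WMN, of course, no single node runs this computation over a global view; protocols such as OLSR or HWMP distribute link-state or path information so that each node converges on equivalent routes.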
Operation The principle is similar to the way packets travel around the wired Internet – data hops from one device to another until it eventually reaches its destination. Dynamic routing algorithms implemented in each device allow this to happen. To implement such dynamic routing protocols, each device needs to communicate routing information to the other devices in the network. Each device then determines what to do with the data it receives – either pass it on to the next device or keep it, depending on the protocol. The routing algorithm used should attempt to ensure that the data always takes the most appropriate (fastest) route to its destination. Multi-radio mesh Multi-radio mesh refers to using different radios operating at different frequencies to interconnect nodes in a mesh. This means there is a unique frequency for each wireless hop and thus a dedicated CSMA collision domain. With more radio bands, communication throughput is likely to increase as a result of more available communication channels. This is similar to providing dual or multiple radio paths to transmit and receive data. Research topics One of the more often cited papers on wireless mesh networks identified the following areas as open research problems in 2005: New modulation schemes Achieving higher transmission rates requires new wideband transmission schemes other than OFDM and UWB. Advanced antenna processing Advanced antenna processing, including directional, smart and multiple-antenna technologies, is being further investigated, since its complexity and cost are still too high for wide commercialization. Flexible spectrum management Considerable research into frequency-agile techniques is being carried out for increased efficiency. Cross-layer optimization Cross-layer research is a popular current research topic, in which information is shared between different communications layers to increase knowledge of the current state of the network. This could facilitate the development of new and more efficient protocols. A joint protocol that addresses various design problems (routing, scheduling, channel assignment, etc.) can achieve higher performance, since these problems are strongly correlated. Note that careless cross-layer design can lead to code that is difficult to maintain and extend. Software-defined wireless networking Centralized, distributed, or hybrid? A new SDN architecture for WMNs has been explored that eliminates the need for multi-hop flooding of route information and therefore enables WMNs to expand easily. The key idea is to split network control and data forwarding by using two separate frequency bands. The forwarding nodes and the SDN controller exchange link-state information and other network control signaling in one of the bands, while actual data forwarding takes place in the other band. Security A WMN can be seen as a group of nodes (clients or routers) that cooperate to provide connectivity. Such an open architecture, where clients serve as routers to forward data packets, is exposed to many types of attacks that can interrupt the whole network and cause denial of service (DoS) or distributed denial of service (DDoS). Examples A number of wireless community networks have been started as grassroots projects across the world at various points in time. Other projects, often proprietary or tied to a single institution, are: ALOHAnet was first used in Hawaii in 1971 to connect the islands. 
Amateur radio operators began experimenting with VHF and later UHF digital communications networks in Canada in 1978 and the US in 1980. By 1984, the volunteer-operated Amateur Packet Radio Network (AMPRNet) of 'digipeaters' spanned most of North America. The emerging network allowed a licensed operator using merely an early laptop computer, such as the TRS-80 Model 100, and a compatible handheld FM transceiver operating in the 1.25-meter band or 2-meter band to accomplish wireless transcontinental digital communications. With the development of the Internet, portals into and out of other IP networks facilitated 'tunnels' to reach packet networks in other parts of the world. In 1998–1999, a field implementation of a campus-wide wireless network using 802.11 WaveLAN 2.4 GHz wireless interfaces on several laptops was successfully completed, and several real applications involving mobility and data transmission were demonstrated. Mesh networks were useful for the military market because of the radio capability, and because not all military missions involve frequently moving nodes. The Pentagon launched the DoD JTRS program in 1997, with the ambition of using software to control radio functions such as frequency, bandwidth, modulation and security, which had previously been baked into the hardware. This approach would allow the DoD to build a family of radios with a common software core, capable of handling functions that were previously split among separate hardware-based radios: VHF voice radios for infantry units; UHF voice radios for air-to-air and ground-to-air communications; long-range HF radios for ships and ground troops; and a wideband radio capable of transmitting data at megabit speeds across a battlefield. However, the JTRS program was shut down in 2012 by the US Army because the radios made by Boeing had a 75% failure rate. Google Home, Google Wi-Fi, and Google OnHub all support Wi-Fi mesh networking. In rural Catalonia, Guifi.net was developed in 2004 as a response to the lack of broadband Internet, where commercial Internet providers offered either no connection or a very poor one. Nowadays, with more than 30,000 nodes, it is only about halfway to being a fully connected network, but following a peer-to-peer agreement it has remained an open, free and neutral network with extensive redundancy. In 2004, TRW Inc. engineers from Carson, California, successfully tested a multi-node mesh wireless network using 802.11a/b/g radios on several high-speed laptops running Linux, with new features such as route precedence and preemption capability, different priorities for traffic service classes during packet scheduling and routing, and quality of service. Their work concluded that the data rate can be greatly enhanced using MIMO technology at the radio front end to provide multiple spatial paths. Zigbee digital radios are incorporated into some consumer appliances, including battery-powered appliances. Zigbee radios spontaneously organize a mesh network, using specific routing algorithms; transmission and reception are synchronized. This means the radios can be off much of the time and thus conserve power. Zigbee is intended for low-power, low-bandwidth application scenarios. Thread is a consumer wireless networking protocol built on open standards and IPv6/6LoWPAN protocols. Thread's features include a secure and reliable mesh network with no single point of failure, simple connectivity and low power. Thread networks are easy to set up and secure to use, with banking-class encryption to close security holes that exist in other wireless protocols. 
In 2014, Google Inc.'s Nest Labs announced a working group with the companies Samsung, ARM Holdings, Freescale, Silicon Labs, Big Ass Fans and the lock company Yale to promote Thread. In early 2007, the US-based firm Meraki launched a mini wireless mesh router. The 802.11 radio within the Meraki Mini was optimized for long-distance communication, providing coverage over 250 metres. In contrast to multi-radio long-range mesh networks with tree-based topologies and their advantages in O(n) routing, the Meraki had only one radio, which it used for both client access and backhaul traffic. In 2012, Meraki was acquired by Cisco. The Naval Postgraduate School, Monterey, CA, demonstrated such wireless mesh networks for border security. In a pilot system, aerial cameras kept aloft by balloons relayed real-time high-resolution video to ground personnel via a mesh network. SPAWAR, a division of the US Navy, is prototyping and testing a scalable, secure disruption-tolerant mesh network to protect strategic military assets, both stationary and mobile. Machine-control applications running on the mesh nodes "take over" when Internet connectivity is lost. Use cases include Internet of Things applications, e.g. smart drone swarms. An MIT Media Lab project has developed the XO-1 laptop, or "OLPC" (One Laptop per Child), which is intended for disadvantaged schools in developing nations and uses mesh networking (based on the IEEE 802.11s standard) to create a robust and inexpensive infrastructure. The instantaneous connections made by the laptops are claimed by the project to reduce the need for an external infrastructure such as the Internet to reach all areas, because a connected node can share its connection with nodes nearby. A similar concept has also been implemented by Greenpacket with its application called SONbuddy. In Cambridge, UK, on 3 June 2006, mesh networking was used at the "Strawberry Fair" to run mobile live television, radio and Internet services to an estimated 80,000 people. Broadband-Hamnet, a mesh networking project used in amateur radio, is "a high-speed, self-discovering, self-configuring, fault-tolerant, wireless computer network" with very low power consumption and a focus on emergency communication. The Champaign-Urbana Community Wireless Network (CUWiN) project is developing mesh networking software based on open-source implementations of the Hazy-Sighted Link State Routing Protocol and the Expected Transmission Count metric. Additionally, the Wireless Networking Group at the University of Illinois at Urbana-Champaign is developing a multichannel, multi-radio wireless mesh testbed called Net-X as a proof-of-concept implementation of some of the multichannel protocols being developed in that group. The implementations are based on an architecture that allows some of the radios to switch channels to maintain network connectivity, and include protocols for channel allocation and routing. FabFi is an open-source, city-scale wireless mesh networking system originally developed in 2009 in Jalalabad, Afghanistan, to provide high-speed Internet to parts of the city, and designed for high performance across multiple hops. It is an inexpensive framework for sharing wireless Internet from a central provider across a town or city. A second, larger implementation followed a year later near Nairobi, Kenya, with a freemium pay model to support network growth. Both projects were undertaken by the Fablab users of the respective cities. 
SMesh is an 802.11 multi-hop wireless mesh network developed by the Distributed Systems and Networks Lab at Johns Hopkins University. A fast handoff scheme allows mobile clients to roam in the network without interruption in connectivity, a feature suitable for real-time applications such as VoIP. Many mesh networks operate across multiple radio bands. For example, Firetide and Wave Relay mesh networks have the option to communicate node-to-node on 5.2 GHz or 5.8 GHz, but node-to-client on 2.4 GHz (802.11). This is accomplished using software-defined radio (SDR). The SolarMESH project examined the potential of powering 802.11-based mesh networks using solar power and rechargeable batteries. Legacy 802.11 access points were found to be inadequate due to the requirement that they be continuously powered. The IEEE 802.11s standardization efforts are considering power-save options, but solar-powered applications might involve single-radio nodes where relay-link power saving would be inapplicable. The WING project (sponsored by the Italian Ministry of University and Research and led by CREATE-NET and Technion) developed a set of novel algorithms and protocols for enabling wireless mesh networks as the standard access architecture for the next-generation Internet. Particular focus was given to interference- and traffic-aware channel assignment, multi-radio/multi-interface support, and opportunistic scheduling and traffic aggregation in highly volatile environments. WiBACK Wireless Backhaul Technology was developed by the Fraunhofer Institute for Open Communication Systems (FOKUS) in Berlin. Powered by solar cells and designed to support all existing wireless technologies, WiBACK networks were due to be rolled out to several countries in sub-Saharan Africa in summer 2012. Recent standards for wired communications have also incorporated concepts from mesh networking. An example is ITU-T G.hn, a standard that specifies a high-speed (up to 1 Gbit/s) local area network using existing home wiring (power lines, phone lines and coaxial cables). In noisy environments such as power lines (where signals can be heavily attenuated and corrupted by noise), it is common that mutual visibility between devices in a network is not complete. In those situations, one of the nodes has to act as a relay and forward messages between those nodes that cannot communicate directly, effectively creating a "relaying" network. In G.hn, relaying is performed at the data link layer. Protocols Routing protocols There are more than 70 competing schemes for routing packets across mesh networks. Some of these include: Associativity-Based Routing (ABR) AODV (Ad hoc On-Demand Distance Vector) B.A.T.M.A.N. 
(Better Approach To Mobile Ad hoc Networking) Babel (protocol) (a distance-vector routing protocol for IPv6 and IPv4 with fast convergence properties) Dynamic NIx-Vector Routing|DNVR DSDV (Destination-Sequenced Distance-Vector Routing) DSR (Dynamic Source Routing) HSLS (Hazy-Sighted Link State) HWMP (Hybrid Wireless Mesh Protocol, the default mandatory routing protocol of IEEE 802.11s) Infrastructure Wireless Mesh Protocol (IWMP) for Infrastructure Mesh Networks by GRECO UFPB-Brazil ODMRP (On-Demand Multicast Routing Protocol) OLSR (Optimized Link State Routing protocol) OORP (OrderOne Routing Protocol) (OrderOne Networks Routing Protocol) OSPF (Open Shortest Path First Routing) Routing Protocol for Low-Power and Lossy Networks (IETF ROLL RPL protocol, ) PWRP (Predictive Wireless Routing Protocol) TORA (Temporally-Ordered Routing Algorithm) ZRP (Zone Routing Protocol) The IEEE has developed a set of standards under the title 802.11s. A less thorough list can be found at list of ad hoc routing protocols. Autoconfiguration protocols Standard autoconfiguration protocols, such as DHCP or IPv6 stateless autoconfiguration may be used over mesh networks. Mesh network specific autoconfiguration protocols include: Ad Hoc Configuration Protocol (AHCP) Proactive Autoconfiguration (Proactive Autoconfiguration Protocol) Dynamic WMN Configuration Protocol (DWCP) Communities and providers Anyfi AWMN CUWiN Freifunk (DE) / FunkFeuer (AT) / OpenWireless (CH) Firechat Firetide Guifi.net Netsukuku Ninux (IT) NYC Mesh Red Hook Wi-Fi
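The distance-vector idea underlying several of the protocols listed above (for example DSDV and Babel) can be shown with a minimal sketch. This is an illustration only, not an implementation of any named protocol: the topology, link costs and synchronous update loop are hypothetical, and real protocols add sequence numbers, loop avoidance and link-quality metrics.

    # Toy distance-vector routing: every node repeatedly relaxes its
    # distance table using its neighbours' tables until nothing changes.
    # Symmetric link costs between directly connected mesh nodes (made up).
    links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5, ("C", "D"): 2}

    def neighbours(node):
        for (u, v), cost in links.items():
            if u == node:
                yield v, cost
            elif v == node:
                yield u, cost

    nodes = {n for pair in links for n in pair}
    # dist[x][y] = current best-known cost from x to y.
    dist = {n: {m: (0 if m == n else float("inf")) for m in nodes} for n in nodes}

    changed = True
    while changed:  # mimics periodic routing updates
        changed = False
        for n in nodes:
            for nb, cost in neighbours(n):
                for dest in nodes:
                    if cost + dist[nb][dest] < dist[n][dest]:
                        dist[n][dest] = cost + dist[nb][dest]
                        changed = True

    print(dist["A"]["D"])  # 4: A -> B -> C -> D beats the direct A -> C link

The multi-hop path wins here because the sum of its link costs is lower than the single poor direct link, which is exactly the trade-off mesh routing metrics are designed to capture.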
Technology
Networks
null
713903
https://en.wikipedia.org/wiki/Taipan
Taipan
Taipans are snakes of the genus Oxyuranus in the elapid family. They are large, fast-moving, highly venomous, and endemic to Australia and New Guinea. Three species are recognised, one of which, the coastal taipan, has two subspecies. Taipans are some of the deadliest known snakes.

Taxonomy
The common name, taipan, was coined by anthropologist Donald Thomson after the word used by the Wik-Mungkan Aboriginal people of central Cape York Peninsula, Queensland, Australia. The Wik-Mungkan people used the name in reference to an ancestral creator being in Aboriginal Australian mythology known as the Rainbow Serpent. The genus name is from Greek ὀξῠ́ς (oxys: sharp, needle-like) and οὐρανός (ouranos: an arch, specifically the vault of the heavens), and refers to the needle-like anterior process on the arch of the palate, which Kinghorn noted separated the genus from all other elapids. The oft-quoted meaning "sharp-tailed" (based on a confusion with οὐρά, oura, "tail", and Latin anus) is both etymologically and morphologically incorrect. The three known species are the coastal taipan (Oxyuranus scutellatus), the inland taipan (O. microlepidotus), and a recently discovered third species, the Central Ranges taipan (O. temporalis). The coastal taipan has two subspecies: the coastal taipan (O. s. scutellatus), found along the northeastern coast of Queensland, and the Papuan taipan (O. s. canni), found on the southern coast of New Guinea. A 2016 genetic analysis showed that the speckled brown snake (Pseudonaja guttata) was an early offshoot of the lineage giving rise to the taipans, with the Central Ranges taipan being an offshoot of the common ancestor of the inland and coastal taipans.

Diet
Their diet consists primarily of small mammals, especially rats and bandicoots.

Venom
Species of this genus possess highly neurotoxic venom with some other toxic constituents that have multiple effects on victims. The venom is known to paralyse the victim's nervous system and clot the blood, which then blocks blood vessels and uses up clotting factors. Members of this genus are considered to be among the most venomous snakes in the world based on their murine LD50, an indicator of venom toxicity in mice. The inland taipan is considered to be the most venomous snake in the world, and the coastal taipan, which is arguably the largest Australian venomous snake, is the third-most venomous snake in the world. The Central Ranges taipan has been less researched than the other species of this genus, so the exact toxicity of its venom is still not clear, but it may be even more venomous than the other taipan species. Apart from venom toxicity, the quantity of venom delivered should also be taken into account when judging the danger posed: the coastal taipan is capable of injecting a large quantity of venom due to its large size. In 1950, Kevin Budden, an amateur herpetologist, was one of the first people to capture a taipan alive, although he was bitten in the process and died the next day. The snake, which itself died a few weeks later, was the first known taipan to have been milked for venom: Melbourne zoologist David Fleay and Dr. F. C. Morgan performed the milking, and the venom was used to develop an antivenom, which became available in 1955. The original preserved specimen is currently stored in the facilities of Museums Victoria. Two antivenoms are available: CSL polyvalent antivenom and CSL taipan antivenom, both from CSL Limited in Australia.
In his book Venom, which explores the development of a taipan antivenom in Australia in the 1940s and 1950s, author Brendan James Murray states that only one person is known to have survived an Oxyuranus bite without antivenom: George Rosendale, a Guugu Yimithirr person bitten at Hope Vale in 1949. Murray writes that Rosendale's condition was so severe that nurses later showed him extracted samples of his own blood that were completely black in colour. Temperament also varies from species to species. The inland taipan is generally shy, while the coastal taipan can be quite aggressive when cornered and actively defends itself.
Biology and health sciences
Snakes
Animals
714543
https://en.wikipedia.org/wiki/Br%C3%B8nsted%E2%80%93Lowry%20acid%E2%80%93base%20theory
Brønsted–Lowry acid–base theory
The Brønsted–Lowry theory (also called the proton theory of acids and bases) is an acid–base reaction theory which was first developed by Johannes Nicolaus Brønsted and Thomas Martin Lowry independently in 1923. The basic concept of this theory is that when an acid and a base react with each other, the acid forms its conjugate base, and the base forms its conjugate acid, by exchange of a proton (the hydrogen cation, H+). This theory generalises the Arrhenius theory.

Definitions of acids and bases
In the Arrhenius theory, acids are defined as substances that dissociate in aqueous solution to give H+ (hydrogen ions or protons), while bases are defined as substances that dissociate in aqueous solution to give OH− (hydroxide ions). In 1923, physical chemists Johannes Nicolaus Brønsted in Denmark and Thomas Martin Lowry in England both independently proposed the theory named after them. In the Brønsted–Lowry theory, acids and bases are defined by the way they react with each other, which generalises the earlier definitions. This is best illustrated by an equilibrium equation: acid + base ⇌ conjugate base + conjugate acid. With an acid, HA, the equation can be written symbolically as HA + B ⇌ A− + HB+. The equilibrium sign, ⇌, is used because the reaction can occur in both forward and backward directions (is reversible). The acid, HA, is a proton donor which can lose a proton to become its conjugate base, A−. The base, B, is a proton acceptor which can become its conjugate acid, HB+. Most acid–base reactions are fast, so the substances in the reaction are usually in dynamic equilibrium with each other.

Aqueous solutions
Consider the following acid–base reaction: CH3COOH + H2O ⇌ CH3COO− + H3O+. Acetic acid, CH3COOH, is an acid because it donates a proton to water (H2O) and becomes its conjugate base, the acetate ion (CH3COO−). H2O is a base because it accepts a proton from CH3COOH and becomes its conjugate acid, the hydronium ion (H3O+). The reverse of an acid–base reaction is also an acid–base reaction, between the conjugate acid of the base in the first reaction and the conjugate base of the acid. In the above example, ethanoate (acetate) is the base of the reverse reaction and the hydronium ion is the acid: H3O+ + CH3COO− ⇌ CH3COOH + H2O. One feature of the Brønsted–Lowry theory, in contrast to the Arrhenius theory, is that it does not require an acid to dissociate.

Amphoteric substances
The essence of Brønsted–Lowry theory is that an acid is only such in relation to a base, and vice versa. Water is amphoteric, as it can act as an acid or as a base: in the autoionization reaction, one molecule of H2O acts as a base and gains H+ to become H3O+, while the other acts as an acid and loses H+ to become OH−. Another example is illustrated by substances like aluminium hydroxide, Al(OH)3, which acts as an acid towards hydroxide and as a base towards a strong acid: Al(OH)3 + OH− ⇌ Al(OH)4−, and 3 H+ + Al(OH)3 ⇌ 3 H2O + Al3+(aq).

Non-aqueous solutions
The hydronium ion is a Brønsted–Lowry acid in aqueous solution, and the hydroxide ion is a base, because of the autoionization of water reaction H2O + H2O ⇌ H3O+ + OH−. An analogous reaction occurs in liquid ammonia: NH3 + NH3 ⇌ NH4+ + NH2−. Thus, the ammonium ion, NH4+, in liquid ammonia corresponds to the hydronium ion in water, and the amide ion, NH2−, in ammonia corresponds to the hydroxide ion in water. Ammonium salts behave as acids, and metal amides behave as bases. Some non-aqueous solvents can behave as bases, i.e. accept protons, in relation to Brønsted–Lowry acids: HA + S ⇌ A− + SH+, where S stands for a solvent molecule.
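The position of such an equilibrium is quantified by the acid dissociation constant, Ka. As a rough numerical sketch, the pH of a dilute weak acid in water can be estimated from Ka; the value used here (Ka ≈ 1.8 × 10−5 for acetic acid at 25 °C) is a standard textbook figure, not one given in this article, and activity effects are ignored.

    import math

    # Weak-acid equilibrium HA + H2O <-> A- + H3O+ with Ka = x**2 / (c - x),
    # where x = [H3O+] at equilibrium and c is the initial acid concentration.
    Ka = 1.8e-5   # assumed textbook value for acetic acid at 25 degrees C
    c = 0.10      # initial concentration, mol/L (illustrative)

    # Solve x**2 + Ka*x - Ka*c = 0 for the physical (positive) root.
    x = (-Ka + math.sqrt(Ka * Ka + 4 * Ka * c)) / 2
    print(f"[H3O+] = {x:.2e} mol/L, pH = {-math.log10(x):.2f}")  # pH ~ 2.88

The result, pH well below 7 even though only about 1% of the acid molecules have transferred a proton, reflects the point made above: Brønsted–Lowry acidity is about proton transfer to a base, not complete dissociation.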
The most important of such solvents are dimethyl sulfoxide (DMSO) and acetonitrile (CH3CN), as these solvents have been widely used to measure the acid dissociation constants of carbon-containing molecules. Because DMSO accepts protons more strongly than H2O, an acid becomes stronger in this solvent than in water. Indeed, many molecules behave as acids in non-aqueous solutions but not in aqueous solutions. An extreme case occurs with carbon acids, where a proton is extracted from a C−H bond. Some non-aqueous solvents can behave as acids. An acidic solvent will make dissolved substances more basic. For example, the compound CH3COOH is known as acetic acid because it behaves as an acid in water; however, it behaves as a base in liquid hydrogen fluoride, a much more acidic solvent: CH3COOH + 2 HF ⇌ CH3C(OH)2+ + HF2−.

Comparison with Lewis acid–base theory
In the same year that Brønsted and Lowry published their theory, G. N. Lewis proposed an alternative theory of acid–base reactions. The Lewis theory is based on electronic structure. A Lewis base is a compound that can give an electron pair to a Lewis acid, a compound that can accept an electron pair. Lewis's proposal explains the Brønsted–Lowry classification in terms of electronic structure: in the reaction HA + B ⇌ A− + BH+, both the base, B, and the conjugate base, A−, carry a lone pair of electrons, and the proton, which is a Lewis acid, is transferred between them. Lewis later wrote, "To restrict the group of acids to those substances that contain hydrogen interferes as seriously with the systematic understanding of chemistry as would the restriction of the term oxidizing agent to substances containing oxygen." In Lewis theory an acid, A, and a base, B, form an adduct, AB, in which the electron pair forms a dative covalent bond between A and B. This is shown when the adduct H3N−BF3 forms from ammonia and boron trifluoride, a reaction that cannot occur in water because boron trifluoride hydrolyses there: 4 BF3 + 3 H2O → B(OH)3 + 3 HBF4. This reaction illustrates that BF3 is an acid in both the Lewis and Brønsted–Lowry classifications and shows that the theories agree with each other. Boric acid is recognised as a Lewis acid because of the reaction B(OH)3 + H2O ⇌ B(OH)4− + H+; in this case the acid does not split up, but the base, H2O, does. A solution of B(OH)3 is acidic because hydrogen ions are given off in this reaction. There is strong evidence that dilute aqueous solutions of ammonia contain minute amounts of the ammonium ion, H2O + NH3 ⇌ OH− + NH4+, and that, when dissolved in water, ammonia functions as a Lewis base.

Comparison with the Lux–Flood theory
Reactions between oxides in the solid or liquid state are excluded from the Brønsted–Lowry theory. For example, the reaction 2 MgO + SiO2 → Mg2SiO4 is not covered by the Brønsted–Lowry definition of acids and bases. On the other hand, magnesium oxide acts as a base when it reacts with an aqueous solution of an acid: 2 H+ + MgO(s) → Mg2+(aq) + H2O. Dissolved silicon dioxide, SiO2, has been predicted to be a weak acid in the Brønsted–Lowry sense: SiO2(s) + 2 H2O ⇌ Si(OH)4 (in solution), followed by Si(OH)4 ⇌ Si(OH)3O− + H+. According to the Lux–Flood theory, oxides like MgO and SiO2 in the solid state may themselves be called bases or acids: the mineral olivine may be regarded as a compound of a basic oxide, MgO, and an acidic oxide, SiO2. This is important in geochemistry.
Physical sciences
Concepts
Chemistry
714710
https://en.wikipedia.org/wiki/Weber%20%28unit%29
Weber (unit)
In physics, the weber (symbol: Wb) is the unit of magnetic flux in the International System of Units (SI). The unit is derived (through Faraday's law of induction) from the relationship 1 Wb = 1 V·s (volt-second). A magnetic flux density of 1 Wb/m2 (one weber per square metre) is one tesla. The weber is named after the German physicist Wilhelm Eduard Weber (1804–1891).

Definition
The weber may be defined in terms of Faraday's law, which relates a changing magnetic flux through a loop to the electric field around the loop. A change in flux of one weber per second will induce an electromotive force of one volt (produce an electric potential difference of one volt across two open-circuited terminals). Officially, the weber is the magnetic flux that, linking a circuit of one turn, would produce in it an electromotive force of one volt if it were reduced to zero at a uniform rate in one second. That is, 1 Wb = 1 V·s. One weber is also the total magnetic flux across a surface of one square metre perpendicular to a magnetic flux density of one tesla; that is, 1 Wb = 1 T·m2. Expressed only in SI base units, 1 weber is 1 kg·m2·s−2·A−1. The weber is used in the definition of the henry as 1 weber per ampere, and consequently can be expressed as the product of those units: 1 Wb = 1 H·A. The weber is commonly expressed in a multitude of other units: Wb = Ω·C = V·s = H·A = T·m2 = J/A = N·m/A, where Ω is the ohm, C the coulomb, J the joule, and N the newton.

History
In 1861, the British Association for the Advancement of Science (known as "The BA") established a committee under William Thomson (later Lord Kelvin) to study electrical units. In a February 1902 manuscript, with handwritten notes of Oliver Heaviside, Giovanni Giorgi proposed a set of rational units of electromagnetism including the weber, noting that "the product of the volt into the second has been called the weber by the B. A." The International Electrotechnical Commission began work on terminology in 1909 and established Technical Committee 1 in 1911, its oldest established committee, "to sanction the terms and definitions used in the different electrotechnical fields and to determine the equivalence of the terms used in the different languages." In 1930, TC1 decided that the magnetic field strength (H) is of a different nature from the magnetic flux density (B), and took up the question of naming the units for these fields and related quantities, among them the integral of magnetic flux density. In 1935, TC1 recommended names for several electrical units, including the weber for the practical unit of magnetic flux (and the maxwell for the CGS unit). Also in 1935, TC1 passed responsibility for "electric and magnetic magnitudes and units" to the new TC24. This "led eventually to the universal adoption of the Giorgi system, which unified electromagnetic units with the MKS dimensional system of units, the whole now known simply as the SI system." In 1938, TC24 recommended as a connecting link from mechanical to electrical units the permeability of free space. This group also recognized that any one of the practical units already in use (ohm, ampere, volt, henry, farad, coulomb, and weber) could equally serve as the fourth fundamental unit. "After consultation, the ampere was adopted as the fourth unit of the Giorgi system in Paris in 1950."

Multiples
Like other SI units, the weber can be modified by adding a prefix that multiplies it by a power of 10.

Conversions
One maxwell (Mx), the CGS unit of magnetic flux, equals 10−8 Wb.
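The defining relationship and the maxwell conversion above reduce to simple arithmetic. The following is a minimal sketch; the flux values and timing are invented for illustration, and only the magnitude of the induced EMF is computed.

    # Unit relationships for the weber, as arithmetic.
    WEBER_PER_MAXWELL = 1e-8  # 1 Mx = 1e-8 Wb (CGS to SI)

    def emf_volts(delta_flux_wb, delta_t_s):
        """Average EMF induced in a single loop by a uniform flux change
        (Faraday's law, magnitude only): volts = webers / seconds."""
        return delta_flux_wb / delta_t_s

    # A flux reduced from 1 Wb to zero over 1 s induces 1 V, per the definition.
    print(emf_volts(1.0, 1.0))          # 1.0 (volt)
    print(2.5e6 * WEBER_PER_MAXWELL)    # 0.025 Wb from 2.5 million maxwells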
Physical sciences
Electromagnetism
null
714908
https://en.wikipedia.org/wiki/Ridge
Ridge
A ridge is a long, narrow, elevated geomorphologic landform, structural feature, or combination of the two, separated from the surrounding terrain by steep sides. The sides of a ridge slope away from a narrow top, the crest or ridgecrest, with the terrain dropping down on either side. The crest, if narrow, is also called a ridgeline. There are no agreed limits on the dimensions of a ridge: its height above the surrounding terrain can vary from less than a meter to hundreds of meters. A ridge can be depositional, erosional, tectonic, or a combination of these in origin, and can consist of bedrock, loose sediment, lava, or ice depending on that origin. A ridge can occur as an isolated, independent feature or as part of a larger geomorphological or structural feature. Frequently, a ridge can be further subdivided into smaller geomorphic or structural elements.

Classification
As with landforms in general, there is no commonly agreed classification or typology of ridges. They can be defined and classified on the basis of a variety of factors, including genesis, morphology, composition, statistical analysis of remote sensing data, or some combination of these. An example of ridge classification is that of Schoeneberger and Wysocki, which provides a relatively simple and straightforward system used by the USA National Cooperative Soil Survey Program to classify ridges and other landforms. This system uses the dominant geomorphic process or setting to classify different groups of landforms into two major groups, Geomorphic Environments and Other Groupings, with a total of 16 subgroups. The groups and their subgroups are not mutually exclusive; landforms, including ridges, can belong to multiple subgroups. In this classification, ridges are found in the Aeolian, Coastal Marine and Estuarine, Lacustrine, Glacial, Volcanic and Hydrothermal, Tectonic and Structural, Slope, and Erosional subgroups.

Aeolian ridge
Aeolian dune ridge: An aeolian dune ridge is a ridge of sand piled up by the wind; a sand dune in general can be either a hill or a ridge of wind-piled sand. A single sand dune can range in length from less than one meter to several tens of kilometers, and its height can vary from a few tens of centimeters to 150 meters. Megadunes or draas are very large dunes, which can have smaller dunes superimposed on them.

Coastal ridges
Beach ridge: A beach ridge is a low, essentially continuous ridge of beach or beach-and-dune sediments piled up by the action of waves and currents on a shoreline beyond the present limit of storm waves and the reach of ordinary tides. Beach ridges occur singly or as one of a series of approximately parallel ridges that are roughly parallel to the shoreline.

Erosional ridges
Dendritic ridge: In typical dissected plateau terrain, the stream drainage valleys leave intervening ridges. These are by far the most common ridges. They usually represent slightly more erosion-resistant rock, but not always; they often remain because there were more joints where the valleys formed, or because of other chance occurrences. This type of ridge is generally somewhat random in orientation, often changing direction frequently, and often has knobs at intervals along the crest.
Strike ridge: A strike ridge is an asymmetric ridge created by the differential erosion of a hard, erosion-resistant, dipping layer of rock sandwiched between layers of weaker, more easily eroded rock.
A strike ridge has a distinctly gentler sloping side (the dip slope) that roughly parallels the inclined layer of erosion-resistant rock. The opposite side of a strike ridge is a relatively short, steep or cliff-like slope (the scarp slope) that cuts across the tilted layers of rock. In fold belts such as the Ridge-and-Valley Appalachians, strike ridges form series of long, parallel, straight to arcuate ridges. Strike ridges are subdivided into cuestas, flatirons, homoclinal ridges, and hogbacks.
Reef: A term applied by early explorers and settlers in the western United States to ridges that formed a rocky barrier to land travel, by analogy with ocean reefs as barriers to sea travel. Examples include Capitol Reef National Park and the San Rafael Reef. The usage may have originated with sailors during the Australian gold rushes to describe the gold-bearing ridges of Bendigo, Australia.

Glacial ridges
Moraines and eskers: Glacial activity may leave ridges in the form of moraines and eskers. An arête is a thin ridge of rock formed by glacial erosion.
Pressure ridge (ice): An ice pressure ridge is a ridge of deformed ice that forms along the boundaries of individual ice floes when floes on a lake or ocean collide and compress their edges. The average height of a sea ice pressure ridge is between 5 and 30 meters.

Tectonic and Structural ridges
Oceanic spreading ridge: In tectonic spreading zones around the world, such as at the Mid-Atlantic Ridge, volcanic activity forms new land between tectonic boundaries, creating volcanic ridges at the spreading zone. Isostatic settling and erosion gradually reduce the elevations moving away from the zone.
Impact crater ridge: Large asteroid strikes typically form large impact craters bordered by rims that are circular ridges.
Shutter ridge: A shutter ridge is a ridge that has moved along a fault line, blocking or diverting drainage. Typically, a shutter ridge creates a valley corresponding to the alignment of the fault that produces it.

Volcanic and Hydrothermal ridges
Pressure ridge (lava): A specific kind of pressure ridge, also known as a tumulus, usually develops in lava flows, especially when slow-moving lava beneath a solidified crust wells upward. The brittle crust usually buckles to accommodate the inflating core of the flow, creating a central crack along the length of the tumulus.
Volcanic crater/caldera ridges: Large volcanoes often have a central crater or caldera or both, bordered by rims that form circular ridges.
Volcanic subglacial ridges: Subglacial volcanic eruptions can create volcanic ridges, known as tindars, that vary from tens of meters up to 250 meters in height. Tindars are piles of volcanic ash generated by explosive subaqueous eruptions in a glacial meltwater-filled vault or lake within a glacier or ice sheet.
Physical sciences
Montane landforms
Earth science
715028
https://en.wikipedia.org/wiki/Chain%20%28unit%29
Chain (unit)
The chain (abbreviated ch) is a unit of length equal to 66 feet (22 yards), used in both the US customary and imperial unit systems. It is subdivided into 100 links. There are 10 chains in a furlong, and 80 chains in one statute mile. In metric terms, it is 20.1168 m long. By extension, chainage (running distance) is the distance along a curved or straight survey line from a fixed commencing point, as given by an odometer. The chain has been used since the early 17th century in England, and was brought by British settlers during the colonial period to other countries around the globe. In the United Kingdom, there were 80 chains to the mile, but until the early nineteenth century the Scottish and Irish customary miles were longer than the statute mile; consequently a Scots chain was about 74 (imperial) feet and an Irish chain 84 feet. These longer chains became obsolete following the adoption of the imperial system of units in 1824. In India, "metric chains" of exact metric lengths are used, along with fractions thereof (see Measuring instruments below).

Definition
The UK statute chain is 22 yards, which is 20.1168 m. This unit is a statute measure in the United Kingdom, defined in the Weights and Measures Act 1985. One link is a hundredth part of a chain, which is 7.92 inches (0.201168 m). The surveyor's chain first appears in an illustration in a Dutch map of 1607, and in an English book for surveyors of 1610. In 1593 the English mile was redefined by a statute of Queen Elizabeth I as 5,280 feet, to tie in with agricultural practice. In 1620, the polymath Edmund Gunter developed a method of accurately surveying land using a surveyor's chain 66 feet long with 100 links. The 66-foot unit, which was four perches or rods, took on the name of the chain, and by 1675 it was accepted as standard. From Gunter's system, the chain and the link became standard surveyors' units of length and crossed to the colonies. The thirteen states of America were expanding westward, and the public land had to be surveyed for a cadastre. In 1784 Thomas Jefferson wrote a report for the Continental Congress proposing the rectangular survey system; it was adopted, with some changes, as the Land Ordinance of 1785 on 20 May the following year. The report mandated the use of the chain as a unit of measurement and defined the chain.

Modern use and historic cultural references

United Kingdom
In the United Kingdom, the chain is no longer used for practical survey work. However, it is still used on the railways as a location identifier. When railways were designed, the location of features such as bridges and stations was indicated by a cumulative longitudinal "mileage", using miles and chains, from a zero point at the origin or headquarters of the railway, or the originating junction of a new branch line. Since railways are linear in topology, the "mileage" or "chainage" is sufficient to identify a place uniquely on any given route. Thus, a given bridge location may be indicated as 112 miles and 63 chains (181.51 km) from the origin; one such bridge, near Keynsham, lies at that distance from London Paddington station. The indication "MLN" after the mileage is the Engineer's Line Reference describing the route as the Great Western Main Line, which is needed to determine the bridge uniquely, as there may be points at 112 miles 63 chains on other routes. On new railway lines built in the United Kingdom, such as High Speed 1, the position along the alignment is still referred to as "chainage", although the value is now expressed in metres.
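Converting a miles-and-chains mileage to metric is straightforward, since both the chain and the foot have exact metric definitions. A minimal sketch, reproducing the Keynsham bridge figure quoted above:

    # Railway chainage: miles and chains to kilometres. All constants exact.
    FEET_PER_CHAIN = 66
    FEET_PER_MILE = 5280        # 80 chains of 66 ft
    METRES_PER_FOOT = 0.3048    # exact by definition

    def chainage_km(miles, chains):
        feet = miles * FEET_PER_MILE + chains * FEET_PER_CHAIN
        return feet * METRES_PER_FOOT / 1000

    print(round(chainage_km(112, 63), 2))  # 181.51, matching the bridge example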
North America
The use of the chain was mandatory in laying out US townships. A federal law passed in 1785 (the Public Land Survey Ordinance) required that all official government surveys be done with a Gunter's (surveyor's) chain. Chains and links are commonly encountered in older metes-and-bounds legal descriptions. Distances on township plat maps made by the US General Land Office are shown in chains. Under the US Public Land Survey System, parcels of land are often described in terms of the section, quarter-section, and quarter-quarter-section. Respectively, these square divisions of land are approximately 80 chains (one mile or 1.6 km), 40 chains (half a mile or 800 m), and 20 chains (a quarter mile or 400 m) on a side. The chain is still used in agriculture: measuring wheels with a circumference of 0.1 chain (diameter ≈ 2.1 ft or 64 cm) are still readily available in Canada and the United States. For a rectangular tract, multiplying the number of turns of a chain wheel for each of two adjacent sides and dividing by 1,000 gives the area in acres (a worked sketch appears under Measuring instruments below). In Canada, road allowances were originally 1 chain wide and are now 20 metres. The unit was also used in mapping the United States along train routes in the 19th century, though railroads in the United States have long since used decimal fractions of a mile. Some subways, such as the New York City Subway and the Washington Metro, were designed with, and continue to use, a chaining system based on the 100-foot engineer's chain. In the United States, the chain is also used as the measure of the rate of spread of wildfires (chains per hour), both in the predictive National Fire Danger Rating System and in after-action reports, and the term is used by wildland firefighters in day-to-day operations as a unit of distance.

Australia and New Zealand
In Australia and New Zealand, most building lots in the past were a quarter of an acre, measuring one chain by two and a half chains, and other lots would be multiples or fractions of a chain. The street frontages of many houses in these countries are one chain wide, and urban roads were almost always one chain wide, sometimes more. Laneways would be half a chain (10.1 m). In rural areas the roads were wider, up to several chains where a stock route was required. The widest roads were surveyed as major roads or highways between larger towns, narrower ones linked smaller localities, and the narrowest were local roads in farming communities. Roads named Three Chain Road etc. persist today. The "Queen's Chain" is a concept that has long existed in New Zealand: a strip of public land, usually 20 metres (one chain in pre-metric measure) wide from the high-water mark, set aside for public use along the coast, around many lakes, and along all or part of many rivers. These strips exist in various forms (including road reserves, esplanade reserves, esplanade strips, marginal strips and reserves of various types), but not as extensively and consistently as is often assumed.

Cricket pitches
The chain also survives as the length of a cricket pitch, being the distance between the stumps.

Measuring instruments
Civil engineers and surveyors use various instruments to measure distance, including chains, tapes and bands. A steel band is also known as a "band chain".
Surveyor's chain (Gunter's chain): In 1620, the polymath Edmund Gunter developed a method of accurately surveying land using a 100-link chain, 22 yards (66 feet) long, called Gunter's chain.
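The chain-wheel rule quoted under North America above works because one acre is exactly ten square chains. A minimal sketch with invented turn counts:

    # Each wheel turn covers 0.1 chain, so for a rectangle:
    # area = (t1*0.1 chains) * (t2*0.1 chains) = t1*t2/100 square chains
    #      = t1*t2/1000 acres, since 1 acre = 10 square chains.
    def acres_from_wheel_turns(turns_side_a, turns_side_b):
        return turns_side_a * turns_side_b / 1000

    # A field 10 chains by 5 chains (100 and 50 wheel turns) is 5 acres.
    print(acres_from_wheel_turns(100, 50))  # 5.0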
Other surveyors' chains have been used historically.
Engineer's chain (Ramsden's chain): A longer chain of 100 feet, with a hundred links, was devised in the UK in the late 18th century by Jesse Ramsden, though it never supplanted Gunter's chain. Surveyors also sometimes used such a device, calling it the engineer's chain.
Vara chain, or Texas chain: In the Southwestern United States, a chain of 20 varas (16.9164 m, or 55.5 ft), also called the Texas chain, was used in surveying Spanish and later Mexican land grants, such as the major Fisher–Miller and Paisano Grants in Texas, several similarly large ones in New Mexico, and over 200 smaller ones in California.
Metric chains: Metric chains, of lengths 5 m, 10 m, 20 m and 30 m, are widely used in India. Tolerances are ±3 mm for 5 m and 10 m chains, ±5 mm for a 20 m chain, and ±8 mm for a 30 m chain.
Revenue chain: In India, a revenue chain of 16 links is used in cadastral surveys.
Other instruments: Also in North America, a variant of the chain is used in forestry for traverse surveys. This modern chain is a static cord (thin rope) 50 metres long, marked with a small tag at each metre, and also marked every decimetre in the first metre. When working in dense bush, a short axe or hatchet is commonly tied to the end of the chain and thrown through the bush in the direction of the traverse. Another version used extensively in forestry and surveying is the hip-chain: a small box containing a string counter, worn on the hip. The user ties off the spooled string to a stake or tree, and the counter tallies distance as the user walks away in a straight line. These instruments are available in both feet and metres.

Use in popular culture
The lyrics of Three Chain Road, by Lee Kernaghan, include the line "He lived out on the three chain road"; Three Chain Road is the name of many Australian roads, referring to the width of the road reserve.
Physical sciences
English
Basics and measurement
715308
https://en.wikipedia.org/wiki/Whirlpool%20Galaxy
Whirlpool Galaxy
The Whirlpool Galaxy, also known as Messier 51a (M51a) or NGC 5194, is an interacting grand-design spiral galaxy with a Seyfert 2 active galactic nucleus. It lies in the constellation Canes Venatici and was the first galaxy to be classified as a spiral galaxy. It is 31 million light-years (9.5 megaparsecs) away. The galaxy and its companion, NGC 5195, are easily observed by amateur astronomers, and the two galaxies may be seen with binoculars. The Whirlpool Galaxy has been extensively observed by professional astronomers, who study it and its pairing with the dwarf galaxy NGC 5195 to understand galaxy structure (particularly structure associated with the spiral arms) and galaxy interactions. The pair is among the most famous and relatively close interacting systems, and thus is a favorite subject of galaxy interaction models.

Discovery
William Parsons, 3rd Earl of Rosse, employing a reflecting telescope at Birr Castle, Ireland, found that the Whirlpool possessed a spiral structure, the first "nebula" known to have one. These "spiral nebulae" were not recognized as galaxies until Edwin Hubble was able to observe Cepheid variables in some of them, which provided evidence that they were so far away that they must be entirely separate galaxies. The Whirlpool Galaxy itself had been discovered on October 13, 1773, by Charles Messier while searching for objects that might confuse comet hunters, and was later cataloged as M51 in Messier's list of astronomical objects. The advent of radio astronomy and subsequent radio images of M51 unequivocally demonstrated that the Whirlpool and its companion galaxy are indeed interacting. Sometimes the designation M51 is used to refer to the pair of galaxies, in which case the individual galaxies may be referred to as M51a (NGC 5194) and M51b (NGC 5195).

Visual appearance
Deep in the constellation Canes Venatici, M51 is often found by locating Alkaid, the easternmost star of the Big Dipper, and going 3.5° southwest. Its declination is, rounded, +47°, making it circumpolar (never setting) for observers above the 43rd parallel north; it reaches a high altitude throughout the northern hemisphere, making it an accessible object from the early hours in November through to the end of May, after which observation at modest latitudes becomes more difficult as the Sun approaches and then recedes from its right ascension. M51 is visible through binoculars under dark sky conditions, and it can be resolved in detail with modern amateur telescopes. Seen through a 100 mm telescope, the basic outlines of M51 (limited to 5×6') and its companion are visible. Under dark skies, and with a moderate eyepiece through a 150 mm telescope, M51's intrinsic spiral structure can be detected. With larger (>300 mm) instruments under dark sky conditions, the various spiral bands are apparent, with HII regions visible, and M51 can be seen to be attached to M51B. As is usual for galaxies, the true extent of its structure can only be gathered from inspecting photographs; long exposures reveal a large nebula extending beyond the visible circular appearance. In 1984, using the high-speed image-photon-counting system (IPCS) developed jointly by the CNRS Laboratoire d'Astronomie Spatiale (L.A.S.-CNRS) and the Observatoire de Haute-Provence (O.H.P.), along with the excellent seeing at the Canada-France-Hawaii Telescope (C.F.H.T.)
3.60 m Cassegrain focus on the summit of Mauna Kea in Hawaii, Hua et al. detected the double component of the very nucleus of the Whirlpool Galaxy. In January 2005 the Hubble Heritage Project constructed an 11,477 × 7,965-pixel composite image of M51 using Hubble's ACS instrument. The image highlights the galaxy's spiral arms and shows detail of some of the structures inside them.

Properties
The Whirlpool Galaxy lies at a distance of 23 to 31 million light-years from Earth. Based on the 1991 measurement by the Third Reference Catalogue of Bright Galaxies, using the D25 isophote in the B-band, the galaxy is about 88% the size of the Milky Way. Its mass is estimated to be 160 billion solar masses, around 10.3% of the mass of the Milky Way, and it is believed to be an estimated 400 million years old. A black hole, once thought to be surrounded by a ring of dust but now believed to be partially occluded by dust instead, exists at the heart of the spiral. A pair of ionization cones extend from the active galactic nucleus.

Spiral structure
The Whirlpool Galaxy has two very prominent spiral arms that wind clockwise, one of which deviates significantly from a constant pitch angle. The pronounced spiral structure of the Whirlpool Galaxy is believed to be the result of the close interaction between it and its companion galaxy NGC 5195, which may have passed through the main disk of M51 about 500 to 600 million years ago. In this proposed scenario, NGC 5195 came from behind M51 through the disk towards the observer, made another disk crossing as recently as 50 to 100 million years ago, and is now slightly behind M51, where we observe it today.

Tidal features
As a result of the Whirlpool Galaxy's interaction with NGC 5195, a variety of tidal features have been created. The largest of these is the so-called Northwest plume, which extends far out from the galaxy's center. This plume is uniform in color and, consisting of diffuse gas, likely originated from the Whirlpool Galaxy itself. Adjacent to it are two other plumes with a slightly bluer color, referred to as the Western plumes after their location. In 2015, a study discovered two new tidal features caused by the interaction between the Whirlpool Galaxy and NGC 5195, the "Northeast plume" and the "South plume". The study remarks that a simulation taking into account only one passage of NGC 5195 through the Whirlpool Galaxy fails to produce an analogue of the Northeast plume, whereas the multiple-passage simulations of Salo and Laurikainen reproduce it.

Star formation
The central region of M51 appears to be undergoing a period of enhanced star formation. The present efficiency of star formation, defined as the ratio of the mass of new stars to the mass of star-forming gas, is only ~1%, quite comparable to the global value for the Milky Way and other galaxies. It is estimated that the current high rate of star formation can last no more than another 100 million years or so. Similarly, the spiral arms, and the space along them, are experiencing high levels of star formation.

Transient events
Three supernovae have been observed in the Whirlpool Galaxy. In 1994, SN 1994I was observed; it was classified as type Ic, indicating that its progenitor star was very massive and had already shed much of its mass, and its brightness peaked at apparent magnitude 12.91.
In June 2005 the type II supernova SN 2005cs was observed in the Whirlpool Galaxy, peaking at apparent magnitude 14. On 31 May 2011 a type II supernova, designated SN 2011dh, was detected in the Whirlpool Galaxy, peaking at magnitude 12.1. It showed a spectrum much bluer than average, with P Cygni profiles in its hydrogen Balmer lines, which indicate rapidly expanding material. The progenitor was probably a yellow supergiant, not a red or blue supergiant, which are thought to be the most common supernova progenitors. On 22 January 2019, a supernova impostor, designated AT 2019abn, was discovered in Messier 51. The transient was later identified as a luminous red nova. The progenitor star was detected in archival Spitzer Space Telescope infrared images; no object could be seen at its position in archival Hubble images, indicating that the star was heavily obscured by interstellar dust. AT 2019abn peaked at apparent magnitude 17.

Planet candidate
In September 2020, the detection by the Chandra X-ray Observatory of a candidate exoplanet, named M51-ULS-1b, orbiting the high-mass X-ray binary M51-ULS-1 in this galaxy was announced. If confirmed, it would be the first known instance of an extragalactic planet: a planet outside the Milky Way. The candidate was detected through eclipses of the X-ray source (XRS), which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. The planet would be slightly smaller than Saturn and would orbit at a distance of some tens of astronomical units.

Companion
NGC 5195 (also known as Messier 51b or M51b) is a dwarf galaxy that is interacting with the Whirlpool Galaxy (also known as M51a or NGC 5194). Both galaxies are located in the constellation Canes Venatici, some 31 million light-years away, and are among the most extensively researched pairs of interacting galaxies.

Galaxy group information
The Whirlpool Galaxy is the brightest galaxy in the M51 Group, a small group of galaxies that also includes M63 (the Sunflower Galaxy), NGC 5023, and NGC 5229. This small group may actually be a subclump at the southeast end of a large, elongated group that includes the M101 Group and the NGC 5866 Group, although most group identification methods and catalogs identify the three groups as separate entities.
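The apparent magnitudes and distances quoted above can be tied together with the standard distance-modulus relation, M = m − 5 log10(d/10 pc). The sketch below is a textbook calculation, not one performed in the article: it uses the article's ~9.5 Mpc distance and SN 2011dh's peak apparent magnitude of 12.1, and it ignores interstellar extinction.

    import math

    def absolute_magnitude(apparent_mag, distance_pc):
        # Distance modulus: M = m - 5*log10(d / 10 pc)
        return apparent_mag - 5 * math.log10(distance_pc / 10)

    # SN 2011dh at peak, assuming the galaxy's quoted ~9.5 Mpc distance.
    print(round(absolute_magnitude(12.1, 9.5e6), 1))  # about -17.8

An absolute magnitude near −18 is typical of a core-collapse supernova, which is consistent with the event's type II classification.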
Physical sciences
Notable galaxies
null
715446
https://en.wikipedia.org/wiki/Picea%20sitchensis
Picea sitchensis
Picea sitchensis, the Sitka spruce, is a large, coniferous, evergreen tree growing to about 100 m (330 ft) tall, with a trunk diameter at breast height that can exceed 5 m (16 ft). It is by far the largest species of spruce and the fifth-largest conifer in the world (behind giant sequoia, coast redwood, kauri, and western red cedar), and the third-tallest conifer species (after coast redwood and South Tibetan cypress). The Sitka spruce is one of only four species documented to exceed 90 m (300 ft) in height. Its name is derived from the community of Sitka in southeast Alaska, where it is prevalent. Its range hugs the western coast of Canada and the US and continues south into northern California.

Description
The bark is thin and scaly, flaking off in small, circular plates. The inner bark is reddish-brown. The crown is broad conic in young trees, becoming cylindric in older trees; old trees may have no branches on the lower part of the trunk. The shoots are very pale buff-brown, almost white, and glabrous (hairless), but with prominent pulvini. The leaves are stiff, sharp, and needle-like, 15–25 millimeters long, flattened in cross-section, dark glaucous blue-green above with two or three thin lines of stomata, and blue-white below with two dense bands of stomata. The cones are pendulous and slender cylindrical, opening broader when mature. They have thin, flexible scales; the bracts just above the scales are the longest of any spruce, occasionally just exserted and visible on the closed cones. The cones are green or reddish, maturing pale brown 5–7 months after pollination. The seeds are black, with a slender pale brown wing.

Size
More than a century of logging has left only a remnant of the spruce forest. The largest trees were cut long before careful measurements could be made. Exceptionally tall trees may still be seen in Pacific Rim National Park and Carmanah Walbran Provincial Park on Vancouver Island, British Columbia (the Carmanah Giant there is the tallest tree in Canada), and in Olympic National Park, Washington, and Prairie Creek Redwoods State Park, California, the latter of which houses the tallest individual, measuring 100.2 meters (329 feet) tall; two others at the same site are just over 96 m tall. The Queets Spruce, located near the Queets River in Olympic National Park, is the largest in the world by trunk volume. Another specimen, from Klootchy Creek Park, Oregon, was previously recorded as the largest.

Age
Sitka spruce is a long-lived tree, with individuals over 700 years old known. Because it grows rapidly under favorable conditions, large size may not indicate exceptional age. The Queets Spruce has been estimated to be only 350 to 450 years old, but adds more than a cubic meter of wood each year.

Root system
Because it grows in extremely wet and poorly drained soil, the Sitka spruce has a shallow root system with long lateral roots and few branchings. This also makes it susceptible to windthrow.

Taxonomy
DNA analysis has shown that only P. breweriana has a more basal position than Sitka spruce relative to the rest of the spruces. The other 33 species of spruce are more derived, which suggests that Picea originated in North America.

Distribution and habitat
Sitka spruce is native to the west coast of North America, with its northwestern limit on the Kenai Peninsula, Alaska, and its southeastern limit near Fort Bragg in northern California.
It is closely associated with the temperate rainforests and is found within a few kilometers of the coast in the southern portion of its range. North of Oregon, its range extends inland along river floodplains, but seldom far from the Pacific Ocean and its inlets. It grows at relatively low elevations above sea level throughout its range, and forests containing the species receive high annual rainfall. It is tolerant of the salty spray common in coastal dune habitat, such as at Cape Disappointment State Park in Washington, and prefers soils high in magnesium, calcium, and phosphorus. Sitka spruce has been introduced to Europe as a lumber tree, and was first planted there in the 19th century. Sitka spruce plantations have become a dominant forest type in Great Britain and Ireland, making up 25% of forest cover in the former and 52% in the latter. Sitka spruce woodland is also present in France and Denmark, and the tree was introduced to Iceland and Norway in the early 20th century. Observations of Sitka spruce along the Norwegian coast have shown the species growing 25–100% faster than the native Norway spruce there, even as far north as Vesterålen, and Sitka spruces planted along the southwest coast of Norway are growing fastest among the Sitka plantations in Europe. A 9-metre-tall, 100-year-old Sitka spruce growing in the middle of the permanently uninhabited subantarctic Campbell Island has been recognised by Guinness World Records as the "most remote tree in the world".

Ecology

Value to wildlife
Sitka spruce provides critical habitat for a large variety of mammals, birds, reptiles, and amphibians. Its thick, sharp needles are poor browse for ungulates, and only the new spring growth is eaten; however, in Alaska and British Columbia the needles of Picea sitchensis comprise up to 90% of the winter diet of blue grouse. The tree provides cover and hiding places for a large variety of mammals, and good nesting and roosting habitat for birds. The lichen-forming fungus Helocarpon lesdainii is found on Picea sitchensis trees in Harris Beach State Park, Oregon, USA. Sitka deer require old-growth Sitka spruce forests for winter habitat, as the extensive foliage holds a significant percentage of fallen snow in a given area, thus allowing for better understory browsing and easier migration for terrestrial animals. Cavity-nesting birds favor Sitka spruce snags, and the tree is used by bald eagles and peregrine falcons as nesting habitat.

Successional status
Sitka spruce is shade-tolerant, though not as much as its competitors, and prefers full sun where possible. It is a pioneer on landslides, sand dunes, uplifted beaches, and deglaciated terrain. However, it is a climax species in coastal forests, where it can become dominant.

Fire ecology
Because Sitka spruce grows in cool, wet climates, its thin bark and shallow root system are not adapted to resist fire damage, and it is thus very susceptible. Sitka spruce forests have a fire regime of severe crown or surface fires at long intervals (150 to 350+ years), which results in total stand replacement. Sitka spruce recolonizes burned sites via wind-dispersed seed from adjacent unburned forests.

Uses
The root bark of Sitka spruce trees is used in Native Alaskan basket-weaving designs and for rain hats. The pitch was used for caulking, chewing, and its medicinal properties. Native Americans heated and plied the roots to make cord. The resin was used as glue and for waterproofing.
Native Americans and pioneers split shakes from the wood for construction use. The wood is light and relatively strong, and Sitka spruce is of major importance in forestry for timber and paper production. Outside its native range, it is particularly valued for its fast growth on poor soils and exposed sites where few other trees can prosper; in ideal conditions, young trees grow very rapidly. It is naturalized in some parts of Ireland and Great Britain, where it was introduced in 1831 by David Douglas, and in New Zealand, though not so extensively as to be considered invasive there. Sitka spruce is also planted extensively in Denmark, Norway, and Iceland. In Norway, where it was introduced in the early 1900s, extensive plantings have been made, mainly along the coast from Vest-Agder in the south to Troms in the north; it is more tolerant of wind and saline ocean air, and grows faster, than the native Norway spruce. In Norway, however, the Sitka spruce is now considered an invasive species, and efforts are being made to eliminate it. The resonant wood is used widely in piano, harp, violin, and guitar manufacture, as its high strength-to-weight ratio and regular, knot-free rings make it an excellent conductor of sound. For these reasons, the wood is also an important material for sailboat spars and aircraft wing spars (including those of flying models). The Wright brothers' Flyer was built using Sitka spruce, as were many aircraft before World War II; during that war, aircraft such as the British Mosquito used it as a substitute for strategically important aluminium. Newly grown tips of Sitka spruce branches are used to flavor spruce beer and are boiled to make syrup.

Indigenous culture
A unique specimen with golden foliage that used to grow on Haida Gwaii, known as Kiidk'yaas or "The Golden Spruce", is sacred to the Haida First Nations people. It was illegally felled in 1997 by Grant Hadwin, although saplings grown from cuttings can now be found near its original site. In the Lushootseed language, spoken in what is now Washington state, the tree is known as c̓əlaqayac.

Chemistry
The stilbene glucosides astringin, isorhapontin, and piceid can be found in the bark of the Sitka spruce.

Burls
In the Olympic National Forest in Washington, Sitka spruce trees near the ocean sometimes develop burls. According to a guidebook entitled Olympic Peninsula, "Damage to the tip or the bud of a Sitka spruce causes the growth cells to divide more rapidly than normal to form this swelling or burl. Even though the burls may look menacing, they do not affect the overall tree growth."
Biology and health sciences
Pinaceae
Plants
715930
https://en.wikipedia.org/wiki/Cider%20apple
Cider apple
Cider apples are a group of apple cultivars grown for their use in the production of cider (referred to as "hard cider" in the United States). Cider apples are distinguished from "cookers" and "eaters", or dessert apples, by their bitterness or dryness of flavour, qualities which make the fruit unpalatable but can be useful in cidermaking. Some apples are considered to occupy more than one category. In the United Kingdom, the Long Ashton Research Station categorised cider apples in 1903 into four main types according to the proportion of tannins and malic acid in the fruit. For cider production, it is important that the fruit contain high sugar levels, which encourage fermentation and raise the final alcohol level; cider apples therefore often have higher sugar levels than dessert and cooking apples. It is also considered important for cider apples to contribute tannins, which add depth to the finished cider's flavour.

Classification of cider apples

Long Ashton Research Station classification system
In 1903, Professor B. T. P. Barker, the first director of the Long Ashton Research Station (LARS) in Bristol, England, established an analytical classification system for cider apples based on tannin and malic acid percentages in pressed juice. This system is divided into four categories: sweets (low acid, low tannin), sharps (high acid, low tannin), bittersweets (low acid, high tannin), and bittersharps (high acid, high tannin); a small illustrative sketch follows below. Long Ashton's classification system also included a three-level grading of tannin: "full" for an apple with pronounced tannins (e.g. a "full bittersweet" such as Chisel Jersey), "mild" for light tannins (such as Cummy Norman), and "medium" (such as Dabinett). Tannins are sometimes further categorised as "hard" or "soft", for bitter and astringent tannins respectively. British cidermakers normally blend juice from apples of multiple categories to ensure a finished cider with a balanced flavour and the best and most consistent quality. While traditional ciders were made from whatever apples were available locally, the blend of sugar, acid and tannin required for a successful cider is difficult to achieve from any single cultivar, with the possible exception of some bittersharps. As bittersharps are rare, a common modern approach is to use a range of bittersweet varieties with some sharps, or a cooking apple such as the readily available Bramley, to balance the acidity. Sharps, with their high acid content, also keep the cider's pH below 3.8 to prevent spoilage; sweets help provide adequate sugar for fermentation to the proper alcohol content.

French and Spanish classification systems
In addition to the Long Ashton Research Station classification, Charles Neal has written about a French classification system. In France and Spain, the system has an intermediate category, called acidulée or acidulada respectively, which is sometimes used to classify cider apples that are semi-tart and have low tannin content. As in the English system, acidity and tannins are the qualities considered. In the US, there are four regions where cider apples are grown in orchards: the Northeast, Mid-Atlantic, Midwest, and Northwest. Of the twenty most commonly grown cider apple varieties, half originate from England, two come from France, and the rest originate in America. Most special cider cultivars for European ciders are bittersweets and bittersharps, which have high tannin content. Few high-tannin cultivars are readily available in the U.S., and most ciders in the United States are made from culled dessert apples, which are generally sweets and sharps.
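The Long Ashton four-way scheme reduces to two threshold tests on the pressed juice. In the minimal sketch below, the threshold values (0.45% malic acid and 0.2% tannin) are the commonly cited LARS figures and are an assumption here, not numbers stated in this article; the example juice values are likewise invented.

    # Long Ashton (LARS) cider apple classification sketch.
    # Thresholds are the commonly cited values, assumed for illustration:
    # acid above 0.45% -> "sharp" character; tannin above 0.2% -> "bitter".
    def lars_class(malic_acid_pct, tannin_pct):
        sharp = malic_acid_pct > 0.45
        bitter = tannin_pct > 0.2
        if bitter:
            return "bittersharp" if sharp else "bittersweet"
        return "sharp" if sharp else "sweet"

    print(lars_class(0.20, 0.29))  # "bittersweet" (illustrative juice figures)
    print(lars_class(0.60, 0.05))  # "sharp"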
There is no systematic classification of North American apple cultivars for cider-making purposes. However, there is a database of apple varieties, the U.S. National Plant Germplasm System (NPGS).

Other classification considerations
Beyond the Long Ashton (English) and French systems for classifying cider apples, there are other considerations for characterisation. Other measurements taken of apple varieties for use in cider classification include pH, polyphenol composition, yeast assimilable nitrogen (YAN), and soluble solids concentration (°Brix). The sharpness of an apple is affected by pH and titratable acidity. Most cultivars must reach pH levels of around 3.3 to 3.8 to aid the fermentation process, and additions of malic acid may be necessary if the cider apple is over this desired threshold. Soluble solids, measured in degrees Brix, can be used to quantify the potential alcohol that a yeast can ferment from the initial juice of the cider apple (a rough conversion sketch is given below). This is carefully considered for cultivars from areas with tax regulations on the percentage of alcohol by volume in these products. In the United States, "hard cider" legally falls in the 0.5% to 8.5% alcohol-by-volume tax bracket; cideries whose juice exceeds a soluble solids level of 17 °Brix will be subject to the higher tax levels classified under cider wine. In the United Kingdom, cider falls in two duty brackets, with a flat rate up to 7.4% ABV and a higher duty rate for ciders between 7.4% and 8.5% ABV. Foaming is an intricate yet essential characteristic that can be used to assess the overall quality of a cider and to distinguish between natural and sparkling ciders. Chemically, hydrophobic polypeptides contribute to the initial foam, bubble size, the extent to which the foam persists, the number of nucleation sites, and the froth of the foam (the foam collar). These characteristics are quantitatively measured through metrics such as foam height, foam stability height, and stability time. The olfactory sensory profile is used to determine the specific aroma of the cider. Research is still ongoing in this field, but the aromas that contribute to the sensory perception of cider come mainly from the phenols 4-ethylguaiacol and 4-ethylphenol.

Styles
Cider is made in several countries and can be made from any apples. Historically, the flavours preferred and the varieties used to produce cider have varied by region. Many of the most traditional apple varieties used for cider come from, or are derived from those of, Devon, Somerset and Herefordshire in England, Normandy in France, and Asturias in Spain, and these areas are considered to have their own broad cider styles, although the many exceptions make this something of a historic footnote. Normandy cider is usually naturally carbonated and clear. Asturian cider apple varieties are mainly sharps or mild bittersweets, producing a mildly acidic cider which is customarily served by being poured from a height into the glass to oxygenate it. In the UK there are two broad styles of cider, determined by the types of apple available. The style associated with the east of England (East Anglia, Kent, Sussex) used surplus dessert and cooking apples and was therefore characterised by an acidic, light-bodied cider. The other style, using specific cider apple cultivars with higher tannin levels, is usually associated with the West Country, particularly Somerset, and the Three Counties.
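Returning to the °Brix measure discussed under Other classification considerations above: the sketch below relates juice sugar to potential alcohol if the cider ferments to dryness. Both conversion factors are common craft-cidermaking approximations, not figures from this article, and real results vary with yeast and fermentation conditions.

    # Rough potential-alcohol estimate from juice sugar content.
    def potential_abv(brix):
        og = 1 + 0.004 * brix         # approximate original specific gravity
        return (og - 1.000) * 131.25  # approximate ABV for full attenuation

    print(round(potential_abv(17), 1))  # ~8.9%, above the 8.5% ceiling of the
                                        # US "hard cider" bracket noted above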
Within these broad types there are also a number of more specific regional styles. The ciders of Devon were often made largely from sweets, the cultivars low in acid and tannins that typified the county's orchards. Devon cidermakers also specialised in "keeved", or "matched", cider, in which fermentation was slowed to produce a naturally sweet finish, though such ciders were usually intended for the London market and a fully fermented, dry "rough" cider was preferred for home consumption. Somerset ciders, by contrast, have tended to be stronger and more tannic. Bittersweet cultivars, locally known as "Jersey" apples, were typical of Somerset, although the county's most famous apple, Kingston Black, was a mild bittersharp. The West Midland county of Gloucestershire traditionally favoured bittersharp apples, giving strong ciders with a higher bite of acidity and tannins; neighbouring Worcestershire and Herefordshire also favoured acidic cider apples, but their growers also planted dual-purpose apples to take advantage of markets in nearby industrial centres. Single varietal cider cultivars Historically, ciders have almost invariably been made by blending apple varieties, and the practice of making single-variety ciders is considered a largely modern approach. Only a very small number of apple varieties are considered capable of making a good single-variety cider. These fruit are designated as having "vintage" quality, a term first introduced by Robert Hogg in 1888 and further popularised by Barker at Long Ashton; it should be understood as referring to the cultivar's ability to produce complex and interesting flavours, rather than in the sense "vintage" is used in winemaking. Sweet Coppin is a sweet originating in Devon; Sweet Alford is another Devon sweet variety; Crimson King is a sharp, first grown in Somerset; Yarlington Mill is a bittersweet, named after the mill in Somerset where it was found; Dabinett is a bittersweet named after William Dabinett, and is from Middle Lambrook, South Petherton, Somerset; Major is an old bittersweet variety, found in orchards in South Devon and east of the Blackdown Hills in south Somerset; Broxwood Foxwhelp is a Herefordshire bittersharp, probably a sport of the old variety Foxwhelp; Kingston Black is a bittersharp, probably named after the village of Kingston, near Taunton, Somerset; Stoke Red is a bittersharp originating from the village of Rodney Stoke in Somerset. Although considered suitable for single-variety ciders, these varieties can also contribute well to blends. Cider apple composition Polyphenols and tannins Polyphenols are an important component of ciders, contributing astringency, bitterness, colloidal stability and colour. The content in apples varies depending on cultivar, production practices, and the part of the fruit, with the peel of an apple having more polyphenols than the flesh. The primary polyphenols in apples are procyanidins, followed by hydroxycinnamic acids in the flesh and flavonols in the peel. Many of the polyphenols in the fruit are not pressed into the juice: when the cell wall is ruptured during the pressing process, they bind to polysaccharides in the fruit cell wall and remain bound to the pomace. Procyanidins are especially prone to binding to the pomace, with only about 30% extracted into the juice. Cider apples can have five times the total phenolic content of dessert apples, but there is a limited supply of bittersweet and bittersharp apples in the U.S. to meet the needs of the fast-growing cider industry.
Some cider makers add exogenous tannins to improve phenolic characteristics, and researchers are working on improving polyphenol extraction technology. In countries with more well-established cider industries, such as the U.K. and France, there is an adequate supply of high-tannin cider apples; about one half of the apples processed for cider in Europe are bittersweet fruit. Orchard design Traditional orchard design The end of the 1950s saw a major turn in cider apple orchard design; before then, traditional styles of orchard had been maintained for centuries. Traditional orchards are now uncommon, though they can still be found in places like Spain, where most growers have maintained traditional systems. Traditional orchards were designed with large spacing between individual large trees (6–12 meters tall and spaced about 7.6–9 meters apart), typically fewer than 150 trees per hectare. Trees within an orchard were more variable in age; individual trees would be grown until they died, and a new tree would be planted in each one's place. Older trees in traditional orchards can grow gnarled and hollow over the tree's lifespan. The large (7.6 meter) spherical canopies of traditional methods differ from the various modern planting systems that use conic, flat planar or v-shaped styles. Traditional orchards were often intercropped: it was particularly common to use a silvopastoral system that combined fruit trees and pasture. The natural grasses forming the orchard's undergrowth were often grazed by sheep or cows; the English "grass orchard" was particularly associated with cider-producing districts. Management techniques did not use fertiliser or chemicals, other than the natural fertilisation from the dung of grazing cattle, and generally required less training than modern, high-density systems. Budding of scions took place high up in the tree, typically using vigorous rootstocks or seedlings. Traditional orchards have been found to produce apples with lower nitrogen content and higher polyphenolic levels. In recent years, there has been a decline in the number of traditional cider orchards and a corresponding loss of orchard design knowledge between generations of apple growers. Traditional orchards have, for example, decreased by about 20% since 1994 in parts of Germany. The decline is partly attributed to the high maintenance demands of large trees and the physical limitations they pose for apple pickers, the low yield (10–12 tons per hectare), the slow cropping of trees (15 years, compared to the average 8 years of high-density orchards), and historical changes in regional alcohol preferences. During the 1950s, France subsidised growers who converted to high-density orchards, and by the 1990s most of France no longer used traditional orchard styles. By the 1970s, traditional-style orchards were used for making only 25% of the cider in the United Kingdom. Bush orchards In response to the rising demand for cider apples in the United Kingdom in the 1950s, the Long Ashton Research Station developed the bush orchard system commonly used in the UK today. Cider apple varieties are grafted onto semi-dwarfing rootstocks and reach a maximum height of 15 to 20 feet (4.5 to 6 m). Trees are planted at a density of approximately 750 per hectare, with trees spaced 2–3 m (6.5–10 ft) apart in rows 5.5 m (18 ft) wide. Although a bush orchard is more densely planted than a traditional orchard, the rows are still wide enough for tractors, harvesters, and other machinery to access them.
Unlike in a high-density orchard, trees in a bush orchard are free-standing and are not supported by a trellis. Bush orchards can yield 2–3 times as much as a traditional orchard, up to 35–50 tons per hectare. The bush orchard style became especially popular in the 1970s after the H.P. Bulmer and Taunton Cider companies established Incentive Planting Schemes, which rewarded farmers for planting bush orchards of cider apple varieties. Today, approximately two thirds of cider apples in the United Kingdom are grown in bush orchards. High density orchards High-density planting became popular in the 1960s and 1970s, and is a common method of growing cider apples outside the United Kingdom. The average high-density orchard contains about 1,000 trees per acre, although some orchards in Europe and the Pacific Northwest may contain up to 9,000 trees per acre. Trees in high-density orchards are grafted onto a precocious dwarfing rootstock that keeps the tree small and encourages early fruit production, with trees often bearing within two to three years of planting. This allows growers to bring new varieties of apple to market more quickly than they could with traditional, more widely spaced orchard designs that are slower to mature. Because trees grown on a dwarfing rootstock are small and thin, they must be supported by a trellis system. Rows are spaced depending on the height of the mature tree, usually half the tree height plus three feet (approximately 1 m). High-density orchards are more labour-efficient than traditional orchards, as workers do not need to climb ladders during maintenance or harvest. Pesticide application is also more efficient, as chemicals can be applied by over-the-row sprayers, fixed in-canopy systems, or other devices that reduce pesticide waste. Tree types and planting systems With the move to higher-density plantings, different tree types and planting systems have been developed and are used around the world. These systems include the following. Central leader trees are commonly grown in a conical shape, with a central vertical shoot (the central leader) and larger horizontal branches at the bottom, decreasing to smaller branches near the top. Central leader trees grown on standard or semi-dwarf rootstocks are large and free-standing, unlike modern high-density plantings. The central leader system has been adjusted in recent years to suit the requirements of modern orchard designs and high-density plantings. An example of this is the slender spindle. While there are different forms, slender spindle trees share the same tapered design. Top branches are regularly renewed by pruning, or weakened by bending. A less vigorous rootstock is used to limit growth, creating a smaller tree, usually individually staked to support heavy cropping. Solaxe and vertical axis systems are similar to both central leader and slender spindle, and have been used as a transition from low-density to high-density plantings. Tree size is determined by the rootstock, ranging from semi-dwarf to fully dwarf, and the trees require a form of support. These systems aim to create an equilibrium between fruiting and vegetative growth, receiving minimal pruning. Solaxe uses limb bending to control vigour, a modification of the vertical axis system, which uses periodic pruning. The super spindle orchard design utilises high-density planting, with up to or over 2,000 trees per acre. The benefits of high density include high early yields with reduced inputs, such as lower labour requirements due to reduced manual work, and the ability to achieve high-output picking during harvest.
High-density plantings are grown with a trellis system for tree support. Tall spindle shares many of the same high-density benefits as super spindle, and is a combination of the slender spindle, vertical axis, solaxe and super spindle systems. It utilises high-density planting on dwarfing rootstocks, with a range between 2,500 and 3,300 trees per acre. Tall spindle systems utilise minimal pruning at planting, branch bending to control growth, and limb pruning to renew branches as they become too large. If tree height exceeds 90% of the row spacing, fruit quality at the lower parts of the tree may be reduced.
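The planting densities quoted for the different orchard systems follow directly from tree and row spacing, as the short sketch below shows; the high-density spacings used in the second example are illustrative assumptions, not figures from the text.

    def trees_per_hectare(in_row_m: float, row_width_m: float) -> int:
        """One tree per (in-row spacing x row width); 10,000 m^2 per hectare."""
        return round(10_000 / (in_row_m * row_width_m))

    # Bush orchard as described above: 2-3 m in-row spacing, 5.5 m rows.
    print(trees_per_hectare(2.5, 5.5))  # ~727, close to the quoted ~750/ha

    # An assumed high-density planting: 1 m in-row spacing, 3.5 m rows.
    print(trees_per_hectare(1.0, 3.5))  # ~2857/ha, roughly 1,150 trees per acre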
Biology and health sciences
Pomes
Plants
715934
https://en.wikipedia.org/wiki/Snakebite
Snakebite
A snakebite is an injury caused by the bite of a snake, especially a venomous snake. A common sign of a bite from a venomous snake is the presence of two puncture wounds from the animal's fangs. Venom may sometimes be injected with the bite. This may result in redness, swelling, and severe pain at the area, which may take up to an hour to appear. Vomiting, blurred vision, tingling of the limbs, and sweating may result. Most bites are on the hands, arms, or legs. Fear following a bite is common, with symptoms of a racing heart and feeling faint. The venom may cause bleeding, kidney failure, a severe allergic reaction, tissue death around the bite, or breathing problems. Bites may result in the loss of a limb, other chronic problems, or even death. The outcome depends on the type of snake, the area of the body bitten, the amount of snake venom injected, the general health of the person bitten, and whether or not anti-venom serum has been administered by a doctor in a timely manner. Problems are often more severe in children than adults, due to their smaller size. Allergic reactions to snake venom can further complicate outcomes and can include anaphylaxis, requiring additional treatment and in some cases resulting in death. Snakes bite both as a method of hunting and as a means of protection. Risk factors for bites include working outside with one's hands, such as in farming, forestry, and construction. Snakes commonly involved in envenomations include elapids (such as kraits, cobras and mambas), vipers, and sea snakes. The majority of snake species do not have venom and kill their prey by constriction (squeezing them). Venomous snakes can be found on every continent except Antarctica. Determining the type of snake that caused a bite is often not possible. The World Health Organization says snakebites are a "neglected public health issue in many tropical and subtropical countries", and in 2017 the WHO categorized snakebite envenomation as a Neglected Tropical Disease (Category A), with the purpose of encouraging research, expanding the accessibility of antivenoms, and improving snakebite management in "developing countries". The WHO also estimates that between 4.5 and 5.4 million people are bitten each year, and that of those, 40–50% develop some kind of clinical illness as a result. Furthermore, the death toll from such injuries could range between 80,000 and 130,000 people per year. Prevention of snake bites can involve wearing protective footwear, avoiding areas where snakes live, and not handling snakes. Treatment partly depends on the type of snake. Washing the wound with soap and water and holding the limb still is recommended. Trying to suck out the venom, cutting the wound with a knife, or using a tourniquet is not recommended. Antivenom is effective at preventing death from bites; however, antivenoms frequently have side effects. The type of antivenom needed depends on the type of snake involved. When the type of snake is unknown, antivenom is often given based on the types known to be in the area. In some areas of the world, getting the right type of antivenom is difficult, and this partly contributes to why they sometimes do not work. An additional issue is the cost of these medications. Antivenom has little effect on the area around the bite itself. Supporting the person's breathing is sometimes also required. The number of venomous snakebites that occur each year may be as high as five million. They result in about 2.5 million envenomations and 20,000 to 125,000 deaths.
The frequency and severity of bites vary greatly among different parts of the world. They occur most commonly in Africa, Asia, and Latin America, with rural areas more greatly affected. Deaths are relatively rare in Australia, Europe and North America. For example, in the United States, about seven to eight thousand people per year are bitten by venomous snakes (about one in 40 thousand people) and about five people die (about one death per 65 million people). Signs and symptoms The most common first symptom of all snakebites is an overwhelming fear, which may contribute to other symptoms, including nausea and vomiting, diarrhea, vertigo, fainting, tachycardia, and cold, clammy skin. Snake bites can produce a variety of different signs and symptoms depending on the species. Dry snakebites and those inflicted by a non-venomous species may still cause severe injury. The bite may become infected from the snake's saliva. The fangs sometimes harbor pathogenic microbial organisms, including Clostridium tetani, and a bite may require an updated tetanus immunization. Most snakebites, whether from a venomous or a non-venomous snake, will have some type of local effect. Minor pain and redness occur in over 90 percent of cases, although this varies depending on the site. Bites by vipers and some cobras may be extremely painful, with the local tissue sometimes becoming tender and severely swollen within five minutes. This area may also bleed and blister and may lead to tissue necrosis. Other common initial symptoms of pit viper and viper bites include lethargy, bleeding, weakness, nausea, and vomiting. Symptoms may become more life-threatening over time, developing into hypotension, tachypnea, severe tachycardia, severe internal bleeding, altered sensorium, kidney failure, and respiratory failure. Bites by some snakes, such as the kraits, coral snake, Mojave rattlesnake, and the speckled rattlesnake, may cause little or no pain, despite their serious and potentially life-threatening venom. Some people report experiencing a "rubbery", "minty", or "metallic" taste after being bitten by certain species of rattlesnake. Spitting cobras and rinkhalses can spit venom in a person's eyes. This results in immediate pain, ophthalmoparesis, and sometimes blindness. Some Australian elapid and most viper envenomations will cause coagulopathy, sometimes so severe that a person may bleed spontaneously from the mouth, nose, and even old, seemingly healed wounds. Internal organs may bleed, including the brain and intestines, and ecchymosis (bruising) of the skin is often seen. The venom of elapids, including sea snakes, kraits, cobras, the king cobra, mambas, and many Australian species, contains toxins which attack the nervous system, causing neurotoxicity. The person may present with strange disturbances to their vision, including blurriness. Paresthesia throughout the body, as well as difficulty in speaking and breathing, may be reported. Nervous system problems can cause a huge array of symptoms, and those provided here are not exhaustive. If not treated immediately, the person may die from respiratory failure. Venom from some types of cobras, almost all vipers, and some sea snakes causes necrosis of muscle tissue. Muscle tissue will begin to die throughout the body, a condition known as rhabdomyolysis. Rhabdomyolysis can result in damage to the kidneys as a result of myoglobin accumulation in the renal tubules. This, coupled with hypotension, can lead to acute kidney injury and, if left untreated, eventually death.
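A quick back-of-envelope check of the US figures quoted above; the population value is an assumed round figure, not from the text.

    us_population = 320_000_000   # assumed round figure
    bites_per_year = 7_500        # midpoint of "seven to eight thousand"
    deaths_per_year = 5

    # ~1 bite per 43,000 people, matching "about one in 40 thousand"
    print(round(us_population / bites_per_year))
    # ~1 death per 64 million people, matching "about one death per 65 million"
    print(round(us_population / deaths_per_year))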
Snakebite is also known to cause depression and post-traumatic stress disorder in a high proportion of people who survive. Cause In the developing world most snakebites occur in those who work outside, such as farmers, hunters, and fishermen. They often happen when a person steps on the snake or approaches it too closely. In the United States and Europe, snakebites most commonly occur in those who keep snakes as pets. The type of snake that most often delivers serious bites depends on the region of the world. In Africa, it is mambas, Egyptian cobras, puff adders, and carpet vipers. In the Middle East, it is carpet vipers and elapids. In Latin America, it is snakes of the Bothrops and Crotalus types, the latter including rattlesnakes. In North America, rattlesnakes are the primary concern, and up to 95% of all snakebite-related deaths in the United States are attributed to the western and eastern diamondback rattlesnakes. The greatest number of bites are inflicted on the hands: people are bitten while handling snakes, or outdoors when putting their hands in the wrong places. The next largest number of bites occur on the ankles, as snakes are often hidden or extremely well camouflaged. Most bite victims are bitten by surprise; it is a comfortable fiction that rattlesnakes always forewarn their victims, and often the bite is the first indication that a snake is near. Since most venomous snakes move about during dawn, dusk, or night, one may expect more encounters during the early morning or late afternoon, though many species, such as the western diamondback, may be encountered at any time of day. Most bites occur during the month of April, when both snakes and humans are out and about and encounter one another while hiking, in yards, or on pathways. Children playing within short distances of their homes crawl under porches, jump into bushes, or pull boards of wood from a pile, and are bitten. Most bites, however, occur when people handle rattlesnakes. In South Asia, it was previously believed that Indian cobras, common kraits, Russell's viper, and carpet vipers were the most dangerous; other snakes, however, may also cause significant problems in this area of the world. Pathophysiology Since envenomation is completely voluntary, all venomous snakes are capable of biting without injecting venom into a person. Snakes may deliver such a "dry bite" rather than waste their venom on a creature too large for them to eat, a behaviour called venom metering. However, the percentage of dry bites varies among species: 80 percent of bites inflicted by sea snakes, which are normally timid, do not result in envenomation, whereas for pit viper bites the number is closer to 25 percent. Furthermore, some snake genera, such as rattlesnakes, can internally regulate the amount of venom they inject. There is a wide variance in the composition of venoms from one species of venomous snake to another. Some venoms may have their greatest effect on a victim's respiration or circulatory system; others may damage or destroy tissues. This variance has imparted to the venom of each species a distinct chemistry, and antivenins sometimes have to be developed for individual species. For this reason, standard therapeutic measures will not work in all cases. Some dry bites may also be the result of imprecise timing on the snake's part, as venom may be prematurely released before the fangs have penetrated the person.
Even without venom, some snakes, particularly large constrictors such as those belonging to the Boidae and Pythonidae families, can deliver damaging bites; large specimens often cause severe lacerations when the snake or the person pulls away, the needle-sharp recurved teeth tearing the flesh. While not as life-threatening as a bite from a venomous species, such a bite can be at least temporarily debilitating and could lead to dangerous infections if improperly dealt with. While most snakes must open their mouths before biting, African and Middle Eastern snakes belonging to the family Atractaspididae can fold their fangs to the side of their head without opening their mouth and jab a person. Snake venom It has been suggested that snakes evolved the mechanisms necessary for venom formation and delivery sometime during the Miocene epoch. During the mid-Tertiary, most snakes were large ambush predators belonging to the superfamily Henophidia, which use constriction to kill their prey. As open grasslands replaced forested areas in parts of the world, some snake families evolved to become smaller and thus more agile. However, subduing and killing prey became more difficult for the smaller snakes, leading to the evolution of snake venom. The most likely hypothesis holds that venom glands evolved from specialized salivary glands. The venom itself evolved through the process of natural selection, retaining and emphasizing the qualities that made it useful in killing or subduing prey. Various snake species can be found today in stages of this hypothesized development. There are highly efficient envenoming machines, like the rattlesnakes, with large-capacity venom storage, hollow fangs that swing into position immediately before the snake bites, and spare fangs ready to replace those damaged or lost. Other research on Toxicofera, a hypothetical clade thought to be ancestral to most living reptiles, suggests an earlier time frame for the evolution of snake venom, possibly on the order of tens of millions of years earlier, during the Late Cretaceous. Snake venom is produced in modified parotid glands normally responsible for secreting saliva. It is stored in structures called alveoli behind the animal's eyes and ejected voluntarily through its hollow tubular fangs. Venom in many snakes, such as pit vipers, affects virtually every organ system in the human body and can be a combination of many toxins, including cytotoxins, hemotoxins, neurotoxins, and myotoxins, allowing for an enormous variety of symptoms. Snake venom may cause cytotoxicity as various enzymes, including hyaluronidases, collagenases, proteinases and phospholipases, lead to breakdown (dermonecrosis) and injury of local tissue and inflammation, which leads to pain, edema and blister formation. Metalloproteinases further lead to breakdown of the extracellular matrix (releasing inflammatory mediators) and cause microvascular damage, leading to hemorrhage, skeletal muscle damage (necrosis), blistering and further dermonecrosis. The metalloproteinase release of inflammatory mediators leads to pain, swelling, and white blood cell (leukocyte) infiltration. The lymphatic system may be damaged by the various enzymes contained in the venom, leading to edema; the lymphatic system may also allow the venom to be carried systemically. Snake venom may cause muscle damage, or myotoxicity, via the enzyme phospholipase A2, which disrupts the plasma membrane of muscle cells.
This damage to muscle cells may cause rhabdomyolysis, respiratory muscle compromise, or both. Other venom components, such as bradykinin-potentiating peptides, natriuretic peptides, vascular endothelial growth factors, and proteases, can also cause hypotension, or low blood pressure. Toxins in snake venom can also cause kidney damage (nephrotoxicity) via the same inflammatory cytokines. The toxins cause direct damage to the glomeruli in the kidneys, as well as causing protein deposits in Bowman's capsule. The kidneys may also be indirectly damaged by envenomation, due to shock or the clearance of toxic substances such as immune complexes, blood degradation products, or products of muscle breakdown (rhabdomyolysis). In venom-induced consumption coagulopathy, toxins in snake venom promote hemorrhage via activation, consumption, and subsequent depletion of clotting factors in the blood. These clotting factors normally work as part of the coagulation cascade in the blood to form blood clots and prevent hemorrhage. Toxins in snake venom, especially the venom of New World pit vipers (the subfamily Crotalinae), may also cause low platelets (thrombocytopenia) or altered platelet function, also leading to bleeding. Snake venom is known to cause neuromuscular paralysis, usually as a flaccid, descending paralysis: it starts at the facial muscles, causing ptosis (drooping eyelids) and dysarthria (poor articulation of speech), and descends to the respiratory muscles, causing respiratory compromise. The neurotoxins can either bind to and block membrane receptors at the post-synaptic neurons, or they can be taken up into the pre-synaptic neuron cells and impair neurotransmitter release. Venom toxins that are taken up intracellularly, into the cells of the pre-synaptic neurons, are much more difficult to reverse using antivenom, as they are inaccessible to the antivenom once intracellular. The strength of venom differs markedly between species, and even more so between families, as measured by median lethal dose (LD50) in mice. Subcutaneous LD50 varies by over 140-fold within elapids and by more than 100-fold in vipers. The amount of venom produced also differs among species, with the Gaboon viper potentially able to deliver from 450 to 600 milligrams of venom in a single bite, the most of any snake. Opisthoglyphous colubrids have venom ranging from life-threatening (in the case of the boomslang) to barely noticeable (as in Tantilla). Prevention Snakes are most likely to bite when they feel threatened, are startled, are provoked, or when they have been cornered. Snakes are likely to approach residential areas when attracted by prey, such as rodents; regular pest control can reduce the threat of snakes considerably. It is beneficial to know the species of snake that are common in local areas, or while travelling or hiking. Africa, Australia, the Neotropics, and South Asia in particular are populated by many dangerous species of snake. Being aware of, and ultimately avoiding, areas known to be heavily populated by dangerous snakes is strongly recommended. When in the wilderness, treading heavily creates ground vibrations and noise, which will often cause snakes to flee from the area. However, this generally only applies to vipers, as some larger and more aggressive snakes in other parts of the world, such as mambas and cobras, will respond more aggressively. In a direct encounter, it is best to remain silent and motionless. If the snake has not yet fled, it is important to step away slowly and cautiously.
The use of a flashlight when engaged in camping activities, such as gathering firewood at night, can be helpful. Snakes may also be unusually active during especially warm nights with elevated ambient temperatures. It is advised not to reach blindly into hollow logs, flip over large rocks, or enter old cabins or other potential snake hiding places. When rock climbing, it is not safe to grab ledges or crevices without examining them first, as snakes are cold-blooded and often sunbathe atop rock ledges. In the United States, more than 40 percent of people bitten by snakes intentionally put themselves in harm's way by attempting to capture wild snakes or by carelessly handling their dangerous pets; 40 percent of that number had a blood alcohol level of 0.1 percent or more. It is also important to avoid snakes that appear to be dead, as some species will roll over on their backs and stick out their tongue to fool potential threats. A snake's detached head can immediately act by reflex and potentially bite, and the induced bite can be just as severe as that of a live snake. As a dead snake is incapable of regulating the venom it injects, a bite from a dead snake can often contain large amounts of venom. Treatment It may be difficult to determine whether a bite by any species of snake is life-threatening. A bite by a North American copperhead on the ankle is usually a moderate injury to a healthy adult, but a bite to a child's abdomen or face by the same snake may be fatal. The outcome of all snakebites depends on a multitude of factors: the type of snake; the size, physical condition, and temperature of the snake; the age and physical condition of the person; the area and tissue bitten (e.g., foot, torso, vein or muscle); the amount of venom injected; the time it takes for the person to find treatment; and finally the quality of that treatment. An overview of systematic reviews on different aspects of snakebite management found that the evidence base for the majority of treatment modalities is of low quality. An analysis of World Health Organization guidelines found that they are of low quality, with inadequate stakeholder involvement and poor methodological rigour. In addition, access to effective treatment modalities is a major challenge in some regions, particularly in most African countries. Snake identification Identification of the snake is important in planning treatment in certain areas of the world, but is not always possible. Ideally the dead snake would be brought in with the person, but in areas where snake bite is more common, local knowledge may be sufficient to recognize the snake. However, in regions where polyvalent antivenoms are available, such as North America, identification of the snake is not a high priority. Attempting to catch or kill the offending snake also puts one at risk of re-envenomation or of creating a second bite victim, and is generally not recommended. The three types of venomous snakes that cause the majority of major clinical problems are vipers, kraits, and cobras. Knowledge of what species are present locally can be crucial, as is knowledge of the typical signs and symptoms of envenomation by each type of snake. A scoring system can be used to try to determine the biting snake based on clinical features, but these scoring systems are extremely specific to particular geographical areas and might be compromised by the presence of escaped or released non-native species.
First aid Snakebite first aid recommendations vary, in part because different snakes have different types of venom. Some have little local effect but life-threatening systemic effects, in which case containing the venom in the region of the bite by pressure immobilization is desirable. Other venoms instigate localized tissue damage around the bitten area, and immobilization may increase the severity of the damage in this area while reducing the total area affected; whether this trade-off is desirable remains a point of controversy. Because snakes vary from one country to another, first aid methods also vary. Many organizations, including the American Medical Association and American Red Cross, recommend washing the bite with soap and water. Australian recommendations for snake bite treatment advise against cleaning the wound, because traces of venom left on the skin or bandages from the strike can be used in combination with a snake bite identification kit to identify the species of snake, which speeds the determination of which antivenom to administer in the emergency room. Pressure immobilization As of 2008, clinical evidence for pressure immobilization via the use of an elastic bandage was limited. It is recommended for snakebites that have occurred in Australia (due to elapids, whose venom is neurotoxic), but is not recommended for bites from non-neurotoxic snakes such as those found in North America and other regions of the world. The British military recommends pressure immobilization in all cases where the type of snake is unknown. The object of pressure immobilization is to contain venom within a bitten limb and prevent it from moving through the lymphatic system to the vital organs. This therapy has two components: pressure to prevent lymphatic drainage, and immobilization of the bitten limb to prevent the pumping action of the skeletal muscles. Antivenom Until the advent of antivenom, bites from some species of snake were almost universally fatal. Despite huge advances in emergency therapy, antivenom is often still the only effective treatment for envenomation. The first antivenom was developed in 1895 by French physician Albert Calmette for the treatment of Indian cobra bites. Antivenom is made by injecting a small amount of venom into an animal (usually a horse or sheep) to initiate an immune system response; the resulting antibodies are then harvested from the animal's blood. Antivenom is injected into the person intravenously and works by binding to and neutralizing venom enzymes. It cannot undo damage already caused by venom, so antivenom treatment should be sought as soon as possible. Modern antivenoms are usually polyvalent, making them effective against the venom of numerous snake species. Pharmaceutical companies that produce antivenom target their products against the species native to a particular area. The availability of antivenom is a major concern in some areas, including most of Africa, for economic reasons (the antivenom crisis). In Sub-Saharan Africa the efficacy of antivenom is often poorly characterised, and some of the few available products have even been found to lack effectiveness. Although some people may develop serious adverse reactions to antivenom, such as anaphylaxis, in emergency situations this is usually treatable in a hospital setting, and hence the benefit outweighs the potential consequences of not using antivenom. Giving adrenaline (epinephrine) to prevent adverse reactions to antivenom before they occur might be reasonable in cases where such reactions occur commonly.
Antihistamines do not appear to provide any benefit in preventing adverse reactions. Chronic Complications Chronic health effects of snakebite include, but are not limited to, non-healing and chronic ulcers, musculoskeletal disorders, amputations, chronic kidney disease, and other neurological and endocrine complications. The treatment of chronic complications of snakebite has not been well researched, and a systems approach consisting of a multi-component intervention has been proposed. Outmoded The following treatments, while once recommended, are now considered of no use or harmful: tourniquets, incisions, suction, application of cold, and application of electricity. Cases in which these treatments appear to work may be the result of dry bites. Application of a tourniquet to the bitten limb is generally not recommended. There is no convincing evidence that it is an effective first-aid tool as ordinarily applied. Tourniquets have been found to be completely ineffective in the treatment of Crotalus durissus bites, but some positive results have been seen with properly applied tourniquets for cobra venom in the Philippines. Uninformed tourniquet use is dangerous, since reducing or cutting off circulation can lead to gangrene, which can be fatal. The use of a compression bandage is generally as effective, and much safer. Cutting open the bitten area, an action often taken before suction, is not recommended, since it causes further damage and increases the risk of infection; the subsequent cauterization of the area with fire or silver nitrate (also known as infernal stone) is also potentially threatening. Sucking out venom, either by mouth or with a pump, does not work and may harm the affected area directly. Suction started after three minutes removes a clinically insignificant quantity, less than one-thousandth of the venom injected, as shown in a human study. In a study with pigs, suction not only caused no improvement but led to necrosis in the suctioned area. Suctioning by mouth presents a risk of further poisoning through the mouth's mucous tissues, and the helper may also release bacteria into the person's wound, leading to infection. Other outmoded treatments include immersion in warm water or sour milk followed by the application of snake-stones (also known as la Pierre Noire), which are believed to draw off the poison in much the way a sponge soaks up water; application of a one-percent solution of potassium permanganate or chromic acid to the cut, exposed area (the latter substance being notably toxic and carcinogenic); and drinking abundant quantities of alcohol following the cauterization or disinfection of the wound area. Electroshock therapy has been shown in animal tests to be useless and potentially dangerous. In extreme cases, in remote areas, all of these misguided attempts at treatment have resulted in injuries far worse than an otherwise mild to moderate snakebite. In worst-case scenarios, thoroughly constricting tourniquets have been applied to bitten limbs, completely shutting off blood flow to the area; by the time such people finally reached appropriate medical facilities, their limbs had to be amputated. In development Several new drugs and treatments are under development for snakebite. For instance, the metal chelator dimercaprol has recently been shown to potently antagonize the activity of Zn2+-dependent snake venom metalloproteinases in vitro. New monoclonal antibodies, polymer gels and a small molecule inhibitor called Varespladib are in development.
A core outcome set (a minimal list of consensus outcomes that should be used in future intervention research) for snakebite in South Asia is being developed. Epidemiology Earlier estimates of snakebites per year vary from 1.2 to 5.5 million, with 421,000 to 2.5 million envenomings and 20,000 to 125,000 deaths. More recent modelling estimates that in 2019 about 63,400 people died globally from snakebite, with 51,100 of these deaths happening in India. Since reporting is not mandatory in much of the world, the data on the frequency of snakebites is not precise. Many people who survive bites have permanent tissue damage caused by venom, leading to disability. Most snake envenomings and fatalities occur in South Asia, Southeast Asia, and sub-Saharan Africa, with India reporting the most snakebite deaths of any country. Available evidence on the effect of climate change on the epidemiology of snakebite is limited, but a geographic shift in the risk of snakebite is expected: northwards in North America, southwards in South America and Mozambique, and an increase in the incidence of bites in Sri Lanka. Most snakebites are caused by non-venomous snakes. Of the roughly 3,000 known species of snake found worldwide, only 15% are considered dangerous to humans. Snakes are found on every continent except Antarctica. The most diverse and widely distributed snake family, the colubrids, has approximately 700 venomous species, but only five genera (boomslangs, twig snakes, keelback snakes, green snakes, and slender snakes) have caused human fatalities. Worldwide, snakebites occur most frequently in the summer season, when snakes are active and humans are outdoors. Agricultural and tropical regions report more snakebites than anywhere else. In the United States, those bitten are typically male and between 17 and 27 years of age. Children and the elderly are the most likely to die. Mechanics When venomous snakes bite a target, they secrete venom through their venom delivery system. The venom delivery system generally consists of two venom glands, a compressor muscle, venom ducts, a fang sheath, and fangs. The primary and accessory venom glands store the venom quantities required during envenomation. The compressor muscle contracts during bites to increase the pressure throughout the venom delivery system. The pressurized venom travels through the primary venom duct to the secondary venom duct, which leads down through the fang sheath and fang. The venom is then expelled through the exit orifice of the fang. The total volume and flow rate of venom administered into a target vary widely, sometimes by as much as an order of magnitude. One of the largest factors is snake species and size; larger snakes have been shown to administer larger quantities of venom. Predatory vs. defensive bites Snake bites are classified as either predatory or defensive. During defensive strikes, the rate of venom expulsion and the total volume of venom expelled are much greater than during predatory strikes: defensive strikes can expel 10 times as much venom volume at 8.5 times the flow rate. This can be explained by the snake's need to quickly subdue a threat. While employing similar venom expulsion mechanics, predatory strikes are quite different from defensive strikes. Snakes usually release the prey shortly after the envenomation, allowing the prey to run away and die. Releasing prey prevents retaliatory damage to the snake. The venom scent allows the snake to relocate the prey once it is deceased.
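One small consequence of the defensive-strike figures above: since expulsion time is volume divided by flow rate, delivering 10 times the venom at 8.5 times the flow rate implies an expulsion lasting only about 18% longer, a rough illustration that the extra venom comes mostly from increased flow rather than a longer strike.

    volume_ratio = 10.0  # defensive vs. predatory venom volume (from the text)
    flow_ratio = 8.5     # defensive vs. predatory flow rate (from the text)

    # duration = volume / flow, so the ratio of expulsion durations is:
    print(round(volume_ratio / flow_ratio, 2))  # ~1.18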
The amount of venom injected has been shown to increase with the mass of the prey animal. Larger venom volumes allow snakes to effectively kill larger prey while remaining economical during strikes against smaller prey. This is an important ability, as venom is a metabolically expensive resource. Venom Metering Venom metering is the ability of a snake to exert neurological control over the amount of venom released into a target during a strike, based on situational cues. This ability would prove useful, as venom is a limited resource, larger animals are less susceptible to its effects, and various situations require different levels of force. There is considerable evidence to support the venom metering hypothesis. For example, snakes frequently use more venom during defensive strikes, administer more venom to larger prey, and are capable of dry biting. A dry bite is a bite from a venomous snake that results in very little or no venom expulsion, leaving the target asymptomatic. However, there is debate among many academics about venom metering in snakes. The alternative to venom metering is the pressure balance hypothesis, which cites the retraction of the fang sheath as the main mechanism for producing outward venom flow from the venom delivery system. When isolated, fang sheath retraction has experimentally been shown to induce very high pressures in the venom delivery system. A similar method was used to stimulate the compressor musculature, the main muscle responsible for the contraction and squeezing of the venom gland, while measuring the induced pressures. It was determined that the pressure created by fang sheath retraction was at times an order of magnitude greater than that created by the compressor musculature. Snakes do not have direct neurological control of the fang sheath; it can only be retracted as the fangs enter a target, with the target's skin and body providing the substantial resistance needed to retract the sheath. For these reasons, the pressure balance hypothesis concludes that external factors, mainly the bite and its physical mechanics, are responsible for the quantity of venom expelled. Venom Spitting Venom spitting is another venom delivery method, unique to some Asiatic and African cobras. In venom spitting, a stream of venom is propelled outwards at very high pressure, up to 3 meters. The venom stream is usually aimed at the eyes and face of the target as a deterrent for predators. Non-spitting cobras provide useful information on the unique mechanics behind venom spitting: unlike the elongated oval-shaped exit orifices of non-spitting cobras, spitting cobras have a circular exit orifice at their fang tips. This, combined with the ability to partially retract the fang sheath by displacing the palato-maxillary arch and contracting the adductor mandibulae, allows spitting cobras to create large pressures within the venom delivery system. While venom spitting is a less common venom delivery system, the venom can still cause effects if ingested. Society and culture Snakes were both revered and feared by early civilizations. The ancient Egyptians recorded prescribed treatments for snakebites as early as the Thirteenth Dynasty in the Brooklyn Papyrus, which includes at least seven venomous species common to the region today, such as the horned vipers. In Judaism, the Nehushtan was a pole with a snake made of copper fixed upon it.
The object was regarded as a divinely empowered instrument of God that could bring healing to Jews bitten by venomous snakes while they were wandering in the desert after their exodus from Egypt. Healing was said to occur by merely looking at the object as it was held up by Moses. Historically, snakebites were seen as a means of execution in some cultures. Reportedly, in Southern Han during China's Five Dynasties and Ten Kingdoms period, and in India, a form of capital punishment was to throw people into snake pits, leaving them to die from multiple venomous bites. According to popular belief, the Egyptian queen Cleopatra VII committed suicide by letting herself be bitten by an asp, likely an Egyptian cobra, after hearing of Mark Antony's death, while some ancient authors instead assumed a direct application of poison. Snakebite as a surreptitious form of murder has been featured in stories such as Sir Arthur Conan Doyle's The Adventure of the Speckled Band, but actual occurrences are virtually unheard of, with only a few documented cases. It has been suggested that Boris III of Bulgaria, who was allied to Nazi Germany during World War II, may have been killed with snake venom, although there is no definitive evidence. At least one attempted suicide by snakebite has been documented in the medical literature, involving a puff adder bite to the hand. In Jainism, the goddess Padmāvatī has been associated with curing snakebites. Research In 2018, the World Health Organization listed snakebite envenoming as a neglected tropical disease. In 2019, it launched a strategy to prevent and control snakebite envenoming, which involved a program targeting affected communities and their health systems. A policy analysis, however, found that the placement of snakebite on the WHO's global health agenda is fragile, due to reluctance to accept the disease in the neglected tropical disease community and the perceived colonial nature of the network driving the agenda. Key institutions conducting snakebite research are the George Institute for Global Health, the Liverpool School of Tropical Medicine, and the Indian Institute of Science. Other animals Several animals have acquired immunity against the venom of snakes that occur in the same habitat, and this has been documented in some humans as well.
Biology and health sciences
Types
Health
716184
https://en.wikipedia.org/wiki/JAXA
JAXA
The Japan Aerospace Exploration Agency (JAXA) is the Japanese national air and space agency. Through the merger of three previously independent organizations, JAXA was formed on 1 October 2003. JAXA is responsible for research, technology development and the launch of satellites into orbit, and is involved in many more advanced missions such as asteroid exploration and possible human exploration of the Moon. Its motto is One JAXA and its corporate slogan is Explore to Realize (formerly Reaching for the skies, exploring space). History On 1 October 2003, three organizations were merged to form the new JAXA: Japan's Institute of Space and Astronautical Science (ISAS), the National Aerospace Laboratory of Japan (NAL), and the National Space Development Agency of Japan (NASDA). JAXA was formed as an Independent Administrative Institution administered by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) and the Ministry of Internal Affairs and Communications (MIC). Before the merger, ISAS was responsible for space and planetary research, while NAL was focused on aviation research. ISAS had been most successful in its space program in the field of X-ray astronomy during the 1980s and 1990s. Another successful area for Japan has been Very Long Baseline Interferometry (VLBI) with the HALCA mission. Additional success was achieved with solar observation and research of the magnetosphere, among other areas. NASDA, which was founded on 1 October 1969, had developed rockets and satellites, and also built the Japanese Experiment Module. The old NASDA headquarters were located at the current site of the Tanegashima Space Center, on Tanegashima Island, 115 kilometers south of Kyūshū. NASDA was mostly active in the field of communication satellite technology; however, since the satellite market of Japan is completely open, the first time a Japanese company won a contract for a civilian communication satellite was only in 2005. Another prime focus of NASDA was Earth climate observation. NASDA also trained the Japanese astronauts who flew with the US Space Shuttles. The Basic Space Law was passed in 2008, and jurisdictional authority over JAXA moved from MEXT to the Strategic Headquarters for Space Development (SHSD) in the Cabinet, led by the Prime Minister. In 2016, the National Space Policy Secretariat (NSPS) was set up by the Cabinet. JAXA was awarded the Space Foundation's John L. "Jack" Swigert Jr. Award for Space Exploration in 2008. Planning interplanetary research missions can take many years, and due to the lag time between these interplanetary events and mission planning, opportunities to gain new knowledge about the cosmos might be lost. To prevent this, JAXA began flying smaller and faster missions from 2010 onward. In 2012, new legislation extended JAXA's remit from peaceful purposes only to include some military space development, such as missile early warning systems. Political control of JAXA passed from MEXT to the Prime Minister's Cabinet Office through a new Space Strategy Office. Rockets JAXA uses the H-IIA (H "two" A) rocket, inherited from the former NASDA body, as a medium-lift launch vehicle. JAXA has also developed a new medium-lift vehicle, the H3. For smaller launch needs, JAXA uses the Epsilon rocket. For experiments in the upper atmosphere, JAXA uses the SS-520, S-520, and S-310 sounding rockets. Historical, now retired, JAXA orbital rockets include the Mu rocket family (M-V) and the H-IIB. Launch development Japan launched its first satellite, Ohsumi, in 1970, using ISAS' L-4S rocket.
Prior to the merger, ISAS used the small Mu rocket family of solid-fueled launch vehicles, while NASDA developed larger liquid-fueled launchers. In the beginning, NASDA used licensed American models. The first model of liquid-fueled launch vehicle developed domestically in Japan was the H-II, introduced in 1994. NASDA developed the H-II with two goals in mind: to be able to launch satellites using only its own technology, as ISAS did, and to dramatically improve its launch capability over previous licensed models. To achieve these two goals, a staged combustion cycle was adopted for the first-stage engine, the LE-7. The combination of a liquid-hydrogen staged combustion cycle first-stage engine and solid rocket boosters was carried over to its successors, the H-IIA and H-IIB, and became the basic configuration of Japan's liquid-fuel launch vehicles for 30 years, from 1994 to 2024. In 2003, JAXA was formed by merging Japan's three space agencies to streamline Japan's space program, and JAXA took over operations of the H-IIA liquid-fueled launch vehicle, the M-V solid-fuel launch vehicle, and several observation rockets from each agency. The H-IIA is a launch vehicle that improved reliability while reducing costs by making significant improvements to the H-II, and the M-V was the world's largest solid-fuel launch vehicle at the time. In November 2003, JAXA's first launch after its inauguration, H-IIA No. 6, failed, but all other H-IIA launches were successful; as of February 2024, the H-IIA had succeeded in 47 of its 48 launches. JAXA plans to end H-IIA operations with H-IIA Flight No. 50 and retire it by March 2025. JAXA operated the H-IIB, an upgraded version of the H-IIA, from September 2009 to May 2020 and successfully launched the H-II Transfer Vehicle six times. This cargo spacecraft was responsible for resupplying the Kibo Japanese Experiment Module on the International Space Station. To be able to launch smaller missions, JAXA developed a new solid-fueled rocket, the Epsilon, as a replacement for the retired M-V. The maiden flight took place successfully in 2013. So far, the rocket has flown six times, with one launch failure. In January 2017, JAXA attempted and failed to put a miniature satellite into orbit atop one of its SS-520 series rockets. A second attempt on 2 February 2018 was successful, putting a four-kilogram CubeSat into Earth orbit. The rocket, known as the SS-520-5, is the world's smallest orbital launcher. In 2023, JAXA began operating the H3, which will replace the H-IIA and H-IIB; the H3 is a liquid-fueled launch vehicle developed from a completely new design like the H-II, rather than an improved development like the H-IIA and H-IIB, which were based on the H-II. The design goal of the H3 is to increase launch capability at a lower cost than the H-IIA and H-IIB. To achieve this, an expander bleed cycle was used in a first-stage engine for the first time in the world. Lunar and interplanetary missions Japan's first missions beyond Earth orbit were the 1985 Halley's Comet observation spacecraft Sakigake (MS-T5) and Suisei (PLANET-A). To prepare for future missions, ISAS tested Earth swing-by orbits with the Hiten lunar mission in 1990. The first Japanese interplanetary mission was the Mars orbiter Nozomi (PLANET-B), which was launched in 1998. It passed Mars in 2003 but failed to reach Mars orbit due to maneuvering-system failures earlier in the mission. Currently, interplanetary missions remain with the ISAS group under the JAXA umbrella.
However, for FY 2008, JAXA planned to set up an independent working group within the organization, to be headed by Hayabusa project manager Kawaguchi. Active Missions: PLANET-C, IKAROS, Hayabusa2, BepiColombo, SLIM Under Development: MMX, DESTINY+ Retired: PLANET-B, SELENE, MUSES-C, LEV-1, LEV-2 Cancelled: LUNAR-A Small body exploration: Hayabusa mission On 9 May 2003, Hayabusa (meaning peregrine falcon) was launched on an M-V rocket. The goal of the mission was to collect samples from a small near-Earth asteroid named 25143 Itokawa. The craft rendezvoused with the asteroid in September 2005. It was confirmed that the spacecraft successfully landed on the asteroid in November 2005, after some initial confusion regarding the incoming data. Hayabusa returned to Earth with samples from the asteroid on 13 June 2010. Hayabusa was the world's first spacecraft to return asteroid samples to Earth and the world's first spacecraft to make a round trip to a celestial body farther from Earth than the Moon. Hayabusa2 was launched in 2014 and returned samples from the asteroid 162173 Ryugu to Earth in 2020. Lunar exploration After Hiten in 1990, JAXA planned a lunar penetrator mission called LUNAR-A, but after delays due to technical problems, the project was terminated in January 2007. The seismometer penetrator design for LUNAR-A may be reused in a future mission. On 14 September 2007, JAXA succeeded in launching the lunar orbit explorer Kaguya, also known as SELENE, on an H-IIA rocket (costing 55 billion yen including the launch vehicle), the largest such mission since the Apollo program. Its mission was to gather data on the Moon's origin and evolution. It entered lunar orbit on 4 October 2007. After 1 year and 8 months, it impacted the lunar surface on 10 June 2009 at 18:25 UTC. JAXA launched its first lunar surface mission, SLIM (Smart Lander for Investigating Moon), in 2023. It successfully soft-landed on 19 January 2024 at 15:20 UTC, making Japan the 5th country to do so. The main goal of SLIM was to improve the accuracy of spacecraft landings on the Moon and to land a spacecraft within 100 meters of its target, which no spacecraft had achieved before. SLIM landed 55 meters from the target landing site, and JAXA announced that it was the world's first successful "pinpoint landing". Although it landed successfully, it came to rest with its solar panels oriented westwards, facing away from the Sun at the start of the lunar day, and thereby failed to generate enough power. The lander operated on internal battery power, which was fully drained that day. The mission's operators hoped that the lander would wake up after a few days, when sunlight should hit the solar panels. Two rovers, LEV-1 and LEV-2, deployed while the lander hovered just before the final landing, worked as expected, with LEV-1 communicating independently with the ground stations. LEV-1 conducted seven hops over 107 minutes on the lunar surface. Images taken by LEV-2 show that the lander came to rest in the wrong attitude, having lost an engine nozzle during descent, and that it may even have sustained damage to its Earth-bound antenna, which is not pointed towards Earth. The mission was considered fully successful after confirmation that its primary goal, landing within 100 meters of the target, was achieved, despite the subsequent issues. On 29 January, the lander resumed operations after being shut down for a week. JAXA said it re-established contact with the lander and that its solar cells were working again after a shift in lighting conditions allowed them to catch sunlight.
After that, SLIM was put into sleep mode ahead of the approaching harsh lunar night and its extremely low temperatures. SLIM was expected to operate for only one lunar daylight period, which lasts 14 Earth days, and its on-board electronics were not designed to withstand the nighttime temperatures on the Moon. On 25 February 2024, JAXA sent wake-up calls and found that SLIM had successfully survived the night on the lunar surface while maintaining communication capabilities. Because it was then solar noon on the Moon and the temperature of the communications equipment was extremely high, communication was terminated after only a short time. JAXA is now preparing to resume operations once the temperature has fallen sufficiently. The feat of surviving the lunar night without a radioisotope heater unit had previously been achieved only by some landers of the Surveyor program. Planetary exploration Japan's planetary missions have so far been limited to the inner Solar System, with an emphasis on magnetospheric and atmospheric research. The Mars explorer Nozomi (PLANET-B), which ISAS launched prior to the merger of the three aerospace institutes, became one of the earliest difficulties the newly formed JAXA faced. Nozomi ultimately passed 1,000 km from the surface of Mars. On 20 May 2010, the Venus Climate Orbiter Akatsuki (PLANET-C) and the IKAROS solar sail demonstrator were launched by an H-2A launch vehicle. On 7 December 2010, Akatsuki was unable to complete its Venus orbit insertion maneuver. Akatsuki finally entered Venus orbit on 7 December 2015, making it the first Japanese spacecraft to orbit another planet, sixteen years after the originally planned orbital insertion of Nozomi. One of Akatsuki's main goals is to uncover the mechanism behind the super-rotation of Venus's atmosphere, a phenomenon in which the cloud-top winds in the troposphere circulate around the planet faster than Venus itself rotates. A thorough explanation for this phenomenon has yet to be found. JAXA/ISAS was part of the international Laplace Jupiter mission proposal from its foundation. A Japanese contribution was sought in the form of an independent orbiter to research Jupiter's magnetosphere, JMO (Jupiter Magnetospheric Orbiter). Although JMO never left the conception phase, ISAS scientists will see their instruments reach Jupiter on the ESA-led JUICE (Jupiter Icy Moons Explorer) mission. JUICE is a reformulation of the ESA Ganymede orbiter from the Laplace project. JAXA's contribution includes providing components of the RPWI (Radio & Plasma Wave Investigation), PEP (Particle Environment Package), and GALA (GAnymede Laser Altimeter) instruments. JAXA is reviewing a new spacecraft mission to the Martian system: a sample return mission to Phobos called MMX (Martian Moons eXploration). First revealed on 9 June 2015, MMX's primary goal is to determine the origin of the Martian moons. Alongside collecting samples from Phobos, MMX will perform remote sensing of Deimos and may also observe the atmosphere of Mars. As of December 2023, MMX is to be launched in fiscal year 2026. Solar sail research On 9 August 2004, ISAS successfully deployed two prototype solar sails from a sounding rocket. A clover-type sail was deployed at 122 km altitude and a fan-type sail was deployed at 169 km altitude. Both sails used 7.5-micrometer-thick film. ISAS tested a solar sail again as a sub-payload on the Akari (ASTRO-F) launch on 22 February 2006. However, the solar sail did not deploy fully. 
ISAS tested a solar sail again as a sub-payload on the SOLAR-B launch on 23 September 2006, but contact with the probe was lost. The IKAROS solar sail was launched in May 2010 and successfully demonstrated solar sail technology that July, making IKAROS the world's first spacecraft to demonstrate solar sailing in interplanetary space. The goal is to follow it with a solar sail mission to Jupiter after 2020. Astronomy program The first Japanese astronomy mission was the X-ray satellite Hakucho (CORSA-b), launched in 1979. Later, ISAS moved into solar observation, radio astronomy through space VLBI, and infrared astronomy. Active Missions: SOLAR-B, MAXI, SPRINT-A, CALET, XRISM Under Development: Retired: HALCA, ASTRO-F, ASTRO-EII, and ASTRO-H Cancelled (C)/Failed (F): ASTRO-E (F), ASTRO-G (C) Infrared astronomy Japan's infrared astronomy began with the 15 cm IRTS telescope, part of the SFU multipurpose satellite in 1995. ISAS also provided ground support for the ESA Infrared Space Observatory (ISO) infrared mission. JAXA's first infrared astronomy satellite was the Akari spacecraft, with the pre-launch designation ASTRO-F. This satellite was launched on 21 February 2006. Its mission was infrared astronomy with a 68 cm telescope, the first all-sky survey since the first infrared mission, IRAS, in 1983. (A 3.6 kg nanosatellite named CUTE-1.7 was also released from the same launch vehicle.) JAXA also carried out further R&D to increase the performance of the mechanical coolers for its planned infrared mission, SPICA, which would have enabled a warm launch without liquid helium. SPICA was to be the same size as the ESA Herschel Space Observatory mission, but with a planned operating temperature of just 4.5 K it would have been much colder. Unlike Akari, which had a geocentric orbit, SPICA was to be located at Sun–Earth L2. The launch was expected in 2027 or 2028 on JAXA's new H3 launch vehicle, with ESA and NASA each possibly contributing an instrument; however, the mission was never fully funded, and SPICA was cancelled in 2020. X-ray astronomy Starting in 1979 with Hakucho (CORSA-b), Japan maintained nearly two decades of continuous X-ray observation. However, in 2000 the launch of ISAS's X-ray observation satellite ASTRO-E failed (as it failed at launch, it never received a proper name). Then, on 10 July 2005, JAXA was finally able to launch a new X-ray astronomy mission named Suzaku (ASTRO-EII). This launch was important for JAXA, because in the five years since the launch failure of the original ASTRO-E satellite, Japan had been without an X-ray telescope. Three instruments were included in this satellite: an X-ray spectrometer (XRS), an X-ray imaging spectrometer (XIS), and a hard X-ray detector (HXD). However, the XRS was rendered inoperable by a malfunction that caused the satellite to lose its supply of liquid helium. The next JAXA X-ray mission was the Monitor of All-sky X-ray Image (MAXI). MAXI continuously monitors astronomical X-ray objects over a broad energy band (0.5 to 30 keV). MAXI is installed on the Japanese external module of the ISS. On 17 February 2016, Hitomi (ASTRO-H) was launched as the successor to Suzaku, which had completed its mission a year before. Solar observation Japan's solar astronomy started in the early 1980s with the launch of the Hinotori (ASTRO-A) X-ray mission. The Hinode (SOLAR-B) spacecraft, the follow-on to the joint Japan/US/UK Yohkoh (SOLAR-A) spacecraft, was launched on 23 September 2006 by JAXA. A SOLAR-C can be expected sometime after 2020. 
However, no details have been worked out yet, other than that it will not be launched on the former ISAS's Mu rockets; instead, an H-2A from Tanegashima could launch it. As the H-2A is more powerful, SOLAR-C could either be heavier or be stationed at L1 (Lagrange point 1). Radio astronomy In 1997, Japan launched the HALCA (MUSES-B) mission, the world's first spacecraft dedicated to conducting space VLBI observations, of pulsars among other targets. To do so, ISAS set up a ground network around the world through international cooperation. The observation part of the mission lasted until 2003, and the satellite was retired at the end of 2005. In FY 2006, Japan funded ASTRO-G as the succeeding mission; ASTRO-G was cancelled in 2011. Communication, positioning and technology tests One of the primary duties of the former NASDA body was the testing of new space technologies, mostly in the field of communication. The first test satellite was ETS-I, launched in 1975. However, during the 1990s NASDA was afflicted by problems surrounding the ETS-VI and COMETS missions. In February 2018, JAXA announced a research collaboration with Sony to test a laser communication system from the Kibo module in late 2018. In January 2025, it was reported that JAXA is collaborating with Japan Airlines and O-Well Corporation to test a riblet-shaped coating on the airline's Boeing 787-9 Dreamliner that would reduce aerodynamic drag and improve fuel efficiency. The coating is capable of reducing drag by 0.24%, leading to savings of 119 tons of fuel and 381 tons of CO2 emissions per plane per year. Testing of communication technologies remains one of JAXA's key duties, in cooperation with NICT. Active Missions: INDEX, QZS-1, SLATS, QZS-2, QZS-3, QZS-4, QZS-1R Under Development: ETS-IX Retired: OICETS, ETS-VIII, WINDS i-Space: ETS-VIII, WINDS and QZS-1 To upgrade Japan's communication technology, the Japanese state launched the i-Space initiative with the ETS-VIII and WINDS missions. ETS-VIII was launched on 18 December 2006. The purpose of ETS-VIII is to test communication equipment built around two very large antennas, and to test an atomic clock. On 26 December both antennas were successfully deployed. This was not unexpected, since JAXA had tested the deployment mechanism beforehand with the LDREX-2 mission, launched on 14 October 2006 on the European Ariane 5; that test was successful. On 23 February 2008, JAXA launched the Wideband InterNetworking engineering test and Demonstration Satellite (WINDS), also called "KIZUNA". WINDS aimed to facilitate experiments with faster satellite Internet connections. The launch, using H-IIA launch vehicle 14, took place from Tanegashima Space Center. WINDS was decommissioned on 27 February 2019. On 11 September 2010, JAXA launched QZS-1 (Michibiki-1), the first satellite of the Quasi-Zenith Satellite System (QZSS), a regional complement to the Global Positioning System (GPS). Three more followed in 2017, and a replacement for QZS-1, QZS-1R, was launched in late 2021. A next-generation set of three satellites, able to operate independently of GPS, is scheduled to begin launching in 2023. OICETS and INDEX On 24 August 2005, JAXA launched the experimental satellites OICETS and INDEX on a Ukrainian Dnepr rocket. OICETS (Kirari) is a mission tasked with testing optical links with the European Space Agency (ESA) ARTEMIS satellite, which orbits around 40,000 km away from OICETS. The experiment succeeded on 9 December of that year, when the link was established. 
In March 2006, JAXA used OICETS to establish the world's first optical link between a LEO satellite and a ground station, in Japan; in June 2006 it followed with the first such link to a mobile ground station, in Germany. INDEX (Reimei) is a small 70 kg satellite for testing various equipment, and functions as an aurora observation mission as well. The Reimei satellite is currently in its extended mission phase. Earth observation program Japan's first Earth observation satellites were MOS-1a and MOS-1b, launched in 1987 and 1990. During the 1990s and the new millennium this NASDA program came under heavy fire, because the Adeos (Midori) and Adeos 2 (Midori 2) satellites each failed after just ten months in orbit. Active Missions: GOSAT, GCOM-W, ALOS-2, GCOM-C, GOSAT-2 Retired/Failed (R/F): ALOS (R), ALOS-3 (F) ALOS In January 2006, JAXA successfully launched the Advanced Land Observation Satellite (ALOS/Daichi). Communication between ALOS and the ground station in Japan was relayed through the Kodama Data Relay Satellite, which had been launched in 2002. The project was under intense pressure due to the shorter-than-expected lifetime of the ADEOS II (Midori 2) Earth observation mission. For the missions following Daichi, JAXA opted to split the follow-on into a radar satellite (ALOS-2) and an optical satellite (ALOS-3). The ALOS-2 SAR (synthetic aperture radar) satellite was launched in May 2014. The ALOS-3 satellite was launched aboard an H3 rocket in March 2023, but was lost when the second stage failed to ignite. ALOS-4, the SAR successor to ALOS-2, was launched successfully in July 2024. A true successor to ALOS-3 is planned to launch around 2027. Rainfall observation Since Japan is an island nation that is struck by typhoons every year, research into the dynamics of the atmosphere is a very important issue. For this reason, in 1997 Japan launched the TRMM (Tropical Rainfall Measuring Mission) satellite, in cooperation with NASA, to observe the tropical rainfall seasons. For further research, NASDA launched the ADEOS and ADEOS II missions in 1996 and 2003; however, for various reasons, both satellites had much shorter lifetimes than expected. On 28 February 2014, an H-2A rocket launched the GPM Core Observatory, a satellite jointly developed by JAXA and NASA. The GPM mission is the successor to the TRMM mission, which by the time of the GPM launch had been noted as highly successful. JAXA provided the Global Precipitation Measurement/Dual-frequency Precipitation Radar (GPM/DPR) instrument for this mission. Global Precipitation Measurement itself is a satellite constellation, whilst the GPM Core Observatory provides a new calibration standard for the other satellites in the constellation. Other countries and agencies, such as France, India, and ESA, provide the sub-satellites. The aim of GPM is to measure global rainfall in unprecedented detail. Monitoring of carbon dioxide At the end of the 2008 fiscal year, JAXA launched the satellite GOSAT (Greenhouse gases Observing SATellite) to help scientists determine and monitor the density distribution of carbon dioxide in the atmosphere. The satellite was jointly developed by JAXA and Japan's Ministry of the Environment: JAXA built the satellite, while the Ministry is in charge of the data it collects. 
Since ground-based carbon dioxide observatories are too few to monitor enough of the world's atmosphere and are distributed unevenly across the globe, GOSAT can gather more accurate data and fill in the gaps where there are no observatories on the ground. Sensors for methane and other greenhouse gases were also considered for the satellite, although those plans were not finalized. The satellite weighs approximately 1650 kg and is expected to have a life span of five years. The successor satellite, GOSAT-2, was launched in October 2018. GCOM series The next funded Earth observation mission after GOSAT was the GCOM (Global Change Observation Mission) program, a successor to ADEOS II (Midori 2) and the Aqua mission. To reduce risk and allow a longer observation time, the mission was split across smaller satellites; altogether GCOM will be a series of six satellites. The first satellite, GCOM-W (Shizuku), was launched on 17 May 2012 on the H-IIA. The second satellite, GCOM-C (Shikisai), was launched in 2017. Satellites for other agencies For weather observation, Japan launched the Multi-Functional Transport Satellite 1R (MTSAT-1R) in February 2005. The success of this launch was critical for Japan, since the original MTSAT-1 could not be put into orbit because of an H-2 launch failure in 1999. Until then, Japan had relied for weather forecasting on an old satellite that was already beyond its useful life, and on American systems. On 18 February 2006, JAXA, by then responsible for the H-IIA, successfully launched MTSAT-2 aboard an H-2A rocket. MTSAT-2 is the backup to MTSAT-1R. MTSAT-2 uses the DS2000 satellite bus developed by Mitsubishi Electric. The DS2000 is also used for the DRTS Kodama, ETS-VIII, and the Superbird 7 communication satellite, making it Japan's first commercially successful satellite bus. As a secondary mission, both MTSAT-1R and MTSAT-2 help to direct air traffic. Other JAXA satellites currently in use GEOTAIL magnetosphere observation satellite (since 1992) DRTS (Kodama) Data Relay Satellite, since 2002 (projected life span of seven years) Ongoing joint missions with NASA are the Aqua Earth observation satellite and the Global Precipitation Measurement (GPM) Core satellite. JAXA also provided the Light Particle Telescope (LPT) for the 2008 Jason-2 satellite of the French space agency CNES. On 11 May 2018, JAXA deployed the first satellite developed in Kenya from the Japanese Experiment Module of the International Space Station. The satellite, 1KUNS-PF, was created by the University of Nairobi. 
Completed missions ASTRO-H X-Ray Astronomy Mission 2016 (failed) Tropical Rainfall Measuring Mission (TRMM) 1997–2015 (decommissioned) Akebono Aurora Observation 1989–2015 (decommissioned) Suzaku X-Ray Astronomy 2005–2015 (decommissioned) ALOS Earth Observation 2006–2011 (decommissioned) Akari, Infrared astronomy mission 2006–2011 (decommissioned) Hayabusa Asteroid sample return mission 2003–2010 (decommissioned) OICETS, Technology Demonstration 2005–2009 (decommissioned) SELENE, Moon probe 2007–2009 (decommissioned) Micro Lab Sat 1, Small engineering mission, launched 2002 (decommissioned) HALCA, Space VLBI 1997–2005 (decommissioned) Nozomi, Mars Mission 1998–2003 (failed) MDS-1, Technology Demonstration 2002–2003 (decommissioned) ADEOS 2 (Midori 2) Earth Observation 2002–2003 (lost) Future missions Launch schedule FY 2024 ALOS-4 GOSAT-GW QZS-5 Nano-JASMINE (uncertain) FY 2025 ETS-IX HTV-X1 Innovative Satellite Technology Demonstration-4 QZS-6 QZS-7 FY 2026 HTV-X2 HTV-X3 MMX: Remote sensing of Deimos, sample return from Phobos FY 2027 Innovative Satellite Technology Demonstration-5 FY 2028 DESTINY+: Small-scale technology demonstrator which will also conduct scientific observations of asteroid 3200 Phaethon JASMINE: an astrometric telescope similar to the Gaia mission but operating in the infrared (2.2 μm) and specifically targeting the Galactic plane and center, where Gaia's results are impaired by dust absorption. LUPEX: joint lunar lander and rover with ISRO SOLAR-C EUVST FY 2029 Comet Interceptor (ESA-led mission; Japan provides one of the secondary spacecraft) Innovative Satellite Technology Demonstration-6 FY 2031 Innovative Satellite Technology Demonstration-7 FY 2032 LiteBIRD: a mission to study CMB B-mode polarization and cosmic inflation, based at the Sun–Earth L2 Lagrangian point Other missions For the 2023 EarthCARE mission with ESA, JAXA will provide the radar system on the satellite. JAXA will provide the Auroral Electron Sensor (AES) for the Taiwanese FORMOSAT-5. XEUS: joint X-ray telescope with ESA, originally planned for launch after 2015; cancelled and replaced by ATHENA. Proposals Human Lunar Systems, conceptual system study on the future human lunar outpost OKEANOS, a mission to Jupiter and the Trojan asteroids utilizing "hybrid propulsion" of solar sail and ion engines SPICA, a 2.5 meter infrared telescope to be placed at L2 FORCE, small-scale hard X-ray observation with high sensitivity DIOS, small-scale X-ray observation mission to survey the warm–hot intergalactic medium APPROACH, small-scale lunar penetrator mission HiZ-GUNDAM, small-scale gamma-ray burst observation mission B-DECIGO, gravitational wave observation test mission Hayabusa Mk2/Marco Polo Space Solar Power System (SSPS), space-based solar power prototype launch in 2020, aiming for a full-power system in 2030 Human spaceflight program Japan has ten astronauts but has not yet developed its own crewed spacecraft, and is not currently developing one officially. The HOPE-X project, a potentially crewed spaceplane to be launched on the conventional H-II launcher, was developed for several years (including test flights of the HYFLEX/OREX prototypes) but was postponed. The simpler crewed capsule Fuji was proposed but not adopted. Projects for a single-stage-to-orbit, horizontal-takeoff-and-landing reusable launch vehicle (ASSTS) and a vertical-takeoff-and-landing vehicle (Kankoh-maru) also exist but have not been adopted. 
The first Japanese citizen to fly in space was Toyohiro Akiyama, a journalist sponsored by TBS, who flew on the Soviet Soyuz TM-11 in December 1990. He spent more than seven days in space on the Mir space station, in what the Soviets called their first commercial spaceflight, which earned them $14 million. Japan participates in US and international crewed space programs, including flights of Japanese astronauts on Russian Soyuz spacecraft to the ISS. One Space Shuttle mission (STS-47) in September 1992 was partially funded by Japan. This flight included JAXA's first astronaut in space, Mamoru Mohri, as the payload specialist for Spacelab-J, flown in one of the European-built Spacelab modules. Three other NASA Space Shuttle missions (STS-123, STS-124, STS-127) in 2008–2009 delivered parts of the Japanese-built Kibō laboratory module to the ISS. Japanese plans for a crewed lunar landing were in development but were shelved in early 2010 due to budget constraints. In June 2014, Japan's science and technology ministry said it was considering a space mission to Mars. In a ministry paper, it indicated that uncrewed exploration, crewed missions to Mars, and long-term settlement on the Moon were objectives for which international cooperation and support were going to be sought. On 18 October 2017, JAXA announced the discovery of a "tunnel"-like lava tube under the surface of the Moon. The tunnel appears to be suitable as a location for a base of operations for peaceful crewed space missions, according to JAXA. Supersonic aircraft development Besides the H-IIA/B and Epsilon rockets, JAXA is also developing technology for a next-generation supersonic transport that could become the commercial replacement for the Concorde. The design goal of the project (working name: Next Generation Supersonic Transport) is to develop a jet that can carry 300 passengers at Mach 2. A subscale model of the jet underwent aerodynamic testing in September and October 2005 in Australia. In 2015, JAXA performed tests aimed at reducing the effects of supersonic flight under the D-SEND program. The economic success of such a project is still unclear, and as a consequence the project has so far been met with limited interest from Japanese aerospace companies such as Mitsubishi Heavy Industries. Reusable launch vehicles Until 2003, JAXA (ISAS) conducted research on a reusable launch vehicle under the Reusable Vehicle Testing (RVT) project. Organization JAXA is composed of the following organizations: Space Technology Directorate I Space Technology Directorate II Human Spaceflight Technology Directorate Research and Development Directorate Aeronautical Technology Directorate Institute of Space and Astronautical Science (ISAS) Space Exploration Innovation Hub Center (TansaX) JAXA has research centers in many locations in Japan and some offices overseas. Its headquarters are in Chōfu, Tokyo. It also has the Earth Observation Research Center (EORC) in Tokyo; the Earth Observation Center (EOC) in Hatoyama, Saitama; the Noshiro Testing Center (NTC) in Noshiro, Akita – established in 1962, it carries out development and testing of rocket engines; the Sanriku Balloon Center (SBC) – balloons have been launched from this site since 1971; the Kakuda Space Center (KSPC) in Kakuda, Miyagi – which leads the development of rocket engines, working mainly on liquid-fuel engines; and the Sagamihara Campus (ISAS) in Sagamihara, Kanagawa – development of experimental equipment for rockets and satellites, as well as administrative buildings. 
Tanegashima Space Center in Tanegashima, Kagoshima – currently the launch site for the H-IIA and H3 rockets. Tsukuba Space Center (TKSC) in Tsukuba, Ibaraki – the center of Japan's space network. It is involved in research and development of satellites and rockets, and in the tracking and control of satellites. It develops experimental equipment for the Japanese Experiment Module ("Kibo"), and astronaut training also takes place here. For International Space Station operations, the Japanese Flight Control Team is located at the Space Station Integration & Promotion Center (SSIPC) in Tsukuba. SSIPC communicates regularly with ISS crew members via S-band audio. Uchinoura Space Center in Kimotsuki, Kagoshima – currently the launch site for the Epsilon rocket. Communication ground stations for interplanetary spacecraft Usuda Deep Space Center (UDSC) is a spacecraft tracking station in Saku, Nagano (originally in Usuda, Nagano; Usuda merged into Saku in 2005). It was the first deep-space antenna constructed with beam-waveguide technology and, for many years, Japan's only ground station for communication with interplanetary spacecraft in deep space. Opened in 1984, its 64-meter antenna, built by Mitsubishi Electric, operates primarily in the X and S bands. Upon its completion in 2021, MDSS succeeded UDSC as the primary antenna of JAXA's deep space network. Misasa Deep Space Station (MDSS), also in Saku, Nagano (just over one kilometer northwest of UDSC) and also known as GREAT (Ground Station for Deep Space Exploration and Telecommunication), was completed in 2021 at a cost of over ten billion yen. It is equipped with a 54-meter dish, also built by Mitsubishi Electric, communicating with spacecraft in the X and Ka bands. Phase 2 (GREAT2), intended to improve performance and reliability over the first phase in support of future projects, is now in progress. Other tracking stations in Okinawa, Masuda, and Katsuura are used for satellite tracking and control. Collaboration with other space agencies: JAXA has worked closely with other space agencies in support of their respective deep-space projects. Notably, in 2015 NASA's Deep Space Network provided communication and tracking services to the Akatsuki Venus probe through its 34-meter antennas. In October 2021, JAXA provided NASA with data it had received at Misasa from Juno during its flyby of Jupiter's moon Europa. As part of ongoing joint support of deep-space missions, JAXA, ESA, and NASA are engaged in an effort to improve the X/Ka celestial reference frame, as well as a unified X/Ka terrestrial frame to be shared by the three agencies. The 54-meter dish at MDSS enhances X/Ka sensitivity, having an aperture area two and a half times larger than that of the equivalent antennas in the NASA and ESA networks. MDSS also improves the network geometry with the first direct north–south baseline (Japan–Australia) in the X/Ka VLBI network, providing four new baselines with optimal geometry for improving declination measurements.
16-cell
In geometry, the 16-cell is the regular convex 4-polytope (four-dimensional analogue of a Platonic solid) with Schläfli symbol {3,3,4}. It is one of the six regular convex 4-polytopes first described by the Swiss mathematician Ludwig Schläfli in the mid-19th century. It is also called C16, the hexadecachoron, or the hexadecahedroid. It is the 4-dimensional member of an infinite family of polytopes called cross-polytopes, orthoplexes, or hyperoctahedrons, which are analogous to the octahedron in three dimensions. It is Coxeter's β4 polytope. The dual polytope is the tesseract (4-cube), with which it can be combined to form a compound figure. The cells of the 16-cell are dual to the 16 vertices of the tesseract. Geometry The 16-cell is the second in the sequence of 6 convex regular 4-polytopes (in order of size and complexity). Each of its 4 successor convex regular 4-polytopes can be constructed as the convex hull of a polytope compound of multiple 16-cells: the 16-vertex tesseract as a compound of two 16-cells, the 24-vertex 24-cell as a compound of three 16-cells, the 120-vertex 600-cell as a compound of fifteen 16-cells, and the 600-vertex 120-cell as a compound of seventy-five 16-cells. Coordinates The 16-cell is the 4-dimensional cross polytope (4-orthoplex), which means its vertices lie in opposite pairs on the 4 axes of a (w, x, y, z) Cartesian coordinate system. The eight vertices are (±1, 0, 0, 0), (0, ±1, 0, 0), (0, 0, ±1, 0), (0, 0, 0, ±1). All vertices are connected by edges except opposite pairs. The edge length is √2. The vertex coordinates form 6 orthogonal central squares lying in the 6 coordinate planes. Squares in opposite planes that do not share an axis (e.g. in the xy and wz planes) are completely disjoint (they do not intersect at any vertices). The 16-cell constitutes an orthonormal basis for the choice of a 4-dimensional reference frame, because its vertices exactly define the four orthogonal axes. Structure The Schläfli symbol of the 16-cell is {3,3,4}, indicating that its cells are regular tetrahedra {3,3} and its vertex figure is a regular octahedron {3,4}. There are 8 tetrahedra, 12 triangles, and 6 edges meeting at every vertex. Its edge figure is a square: there are 4 tetrahedra and 4 triangles meeting at every edge. The 16-cell is bounded by 16 cells, all of which are regular tetrahedra. It has 32 triangular faces, 24 edges, and 8 vertices. The 24 edges bound 6 orthogonal central squares lying on great circles in the 6 coordinate planes (3 pairs of completely orthogonal great squares). At each vertex, 3 great squares cross perpendicularly. The 6 edges meet at the vertex the way 6 edges meet at the apex of a canonical octahedral pyramid. The 6 orthogonal central planes of the 16-cell can be divided into 4 orthogonal central hyperplanes (3-spaces), each forming an octahedron with 3 orthogonal great squares. Rotations Rotations in 4-dimensional Euclidean space can be seen as the composition of two 2-dimensional rotations in completely orthogonal planes. The 16-cell is a simple frame in which to observe 4-dimensional rotations, because each of the 16-cell's 6 great squares has another completely orthogonal great square (there are 3 pairs of completely orthogonal squares). Many rotations of the 16-cell can be characterized by the angle of rotation in one of its great square planes (e.g. the xy plane) and another angle of rotation in the completely orthogonal great square plane (the wz plane). 
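Because the coordinates above are so concrete, the element counts quoted in this section can be checked mechanically. The following Python sketch (illustrative only; all names are ours) enumerates the 8 vertices, takes edges to be the non-antipodal vertex pairs, and uses the fact that in a cross-polytope every set of mutually adjacent vertices spans a face, to confirm 8 vertices, 24 edges, 32 triangular faces, and 16 tetrahedral cells:

from itertools import combinations

# The 8 vertices of a unit-radius 16-cell: +/-1 on each of the 4 axes (w, x, y, z).
vertices = [tuple(sign if i == axis else 0 for i in range(4))
            for axis in range(4) for sign in (1, -1)]

def dist2(p, q):                       # squared Euclidean distance
    return sum((a - b) ** 2 for a, b in zip(p, q))

# Every pair of vertices is joined by an edge except antipodal pairs.
edges = [e for e in combinations(vertices, 2) if dist2(*e) == 2]   # length sqrt(2)
faces = [f for f in combinations(vertices, 3)
         if all(dist2(p, q) == 2 for p, q in combinations(f, 2))]  # triangles
cells = [c for c in combinations(vertices, 4)
         if all(dist2(p, q) == 2 for p, q in combinations(c, 2))]  # tetrahedra

print(len(vertices), len(edges), len(faces), len(cells))           # 8 24 32 16

The same distance filter also confirms the edge length: every non-antipodal pair of vertices is √2 apart, while antipodal pairs are 2 apart.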
Completely orthogonal great squares have disjoint vertices: 4 of the 16-cell's 8 vertices rotate in one plane, and the other 4 rotate independently in the completely orthogonal plane. In 2 or 3 dimensions a rotation is characterized by a single plane of rotation; this kind of rotation taking place in 4-space is called a simple rotation, in which only one of the two completely orthogonal planes rotates (the angle of rotation in the other plane is 0). In the 16-cell, a simple rotation in one of the 6 orthogonal planes moves only 4 of the 8 vertices; the other 4 remain fixed. (In a simple rotation whose plane is not one of the 6 orthogonal basis planes, all 8 vertices move.) In a double rotation both sets of 4 vertices move, but independently: the angles of rotation may be different in the 2 completely orthogonal planes. If the two angles happen to be the same, a maximally symmetric isoclinic rotation takes place. In the 16-cell an isoclinic rotation by 90 degrees of any pair of completely orthogonal square planes takes every square plane to its completely orthogonal square plane. Constructions Octahedral dipyramid The simplest construction of the 16-cell is on the 3-dimensional cross polytope, the octahedron. The octahedron has 3 perpendicular axes and 6 vertices in 3 opposite pairs (its Petrie polygon is the hexagon). Add another pair of vertices, on a fourth axis perpendicular to all 3 of the other axes. Connect each new vertex to all 6 of the original vertices, adding 12 new edges. This raises two octahedral pyramids on a shared octahedron base that lies in the 16-cell's central hyperplane. The octahedron that the construction starts with has three perpendicular intersecting squares (which appear as rectangles in the hexagonal projections). Each square intersects with each of the other squares at two opposite vertices, with two of the squares crossing at each vertex. Then two more points are added in the fourth dimension (above and below the 3-dimensional hyperplane). These new vertices are connected to all the octahedron's vertices, creating 12 new edges and three more squares (which appear edge-on as the 3 diameters of the hexagon in the projection), and three more octahedra. Something unprecedented has also been created. Notice that each square no longer intersects with all of the other squares: it does intersect with four of them (with three of the squares crossing at each vertex now), but each square has one other square with which it shares no vertices: it is not directly connected to that square at all. These two separate perpendicular squares (there are three pairs of them) are like the opposite edges of a tetrahedron: perpendicular, but non-intersecting. They lie opposite each other (parallel, in some sense), and they don't touch, but they also pass through each other like two perpendicular links in a chain (but unlike links in a chain they have a common center). They are an example of Clifford parallel planes, and the 16-cell is the simplest regular polytope in which they occur. Clifford parallelism of objects of more than one dimension (more than just curved lines) emerges here and occurs in all the subsequent 4-dimensional regular polytopes, where it can be seen as the defining relationship among disjoint concentric regular 4-polytopes and their corresponding parts. It can occur between congruent (similar) polytopes of 2 or more dimensions. 
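As a quick numerical illustration of the isoclinic case described above (a sketch under the unit-radius coordinates introduced earlier, not taken from any source), the following Python snippet applies a 90° rotation in the wz plane together with a 90° rotation in the xy plane, and verifies that the 8 vertices are permuted among themselves with no vertex left fixed — the signature of a double rotation with equal angles:

import numpy as np

# Vertices of the unit-radius 16-cell, coordinates ordered (w, x, y, z).
V = {tuple(int(x) for x in row) for row in np.vstack([np.eye(4), -np.eye(4)])}

c, s = 0.0, 1.0  # cos 90 deg, sin 90 deg
# Equal-angle (isoclinic) double rotation: 90 deg in the wz plane
# (coordinates 0 and 3) and 90 deg in the xy plane (coordinates 1 and 2).
R = np.array([[c, 0,  0, -s],
              [0, c, -s,  0],
              [0, s,  c,  0],
              [s, 0,  0,  c]])

image = {tuple(int(round(x)) for x in R @ np.array(v)) for v in V}
print(image == V)                                                  # True: vertex set preserved
print(all(tuple(int(round(x)) for x in R @ np.array(v)) != v for v in V))  # True: no fixed vertex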
For example, as noted above, all the subsequent convex regular 4-polytopes are compounds of multiple 16-cells; those 16-cells are Clifford parallel polytopes. Tetrahedral constructions The 16-cell has two Wythoff constructions from regular tetrahedra, a regular form and an alternated form, shown here as nets, the second represented by tetrahedral cells of two alternating colors. The alternated form is a lower-symmetry construction of the 16-cell called the demitesseract. Wythoff's construction replicates the 16-cell's characteristic 5-cell in a kaleidoscope of mirrors. Every regular 4-polytope has its characteristic 4-orthoscheme, an irregular 5-cell. There are three regular 4-polytopes with tetrahedral cells: the 5-cell, the 16-cell, and the 600-cell. Although all are bounded by regular tetrahedron cells, their characteristic 5-cells (4-orthoschemes) are different tetrahedral pyramids, all based on the same characteristic irregular tetrahedron. They share the same characteristic tetrahedron (3-orthoscheme) and characteristic right triangle (2-orthoscheme) because they have the same kind of cell. The characteristic 5-cell of the regular 16-cell is represented by its Coxeter–Dynkin diagram, which can be read as a list of the dihedral angles between its mirror facets. It is an irregular tetrahedral pyramid based on the characteristic tetrahedron of the regular tetrahedron. The regular 16-cell is subdivided by its symmetry hyperplanes into 384 instances of its characteristic 5-cell, all meeting at its center. The characteristic 5-cell (4-orthoscheme) has four more edges than its base characteristic tetrahedron (3-orthoscheme), joining the four vertices of the base to its apex (the fifth vertex of the 4-orthoscheme, at the center of the regular 16-cell). If the regular 16-cell has unit radius and edge length 𝒍 = √2, its characteristic 5-cell's ten edges comprise three edges around its exterior right-triangle face (the edges opposite the characteristic angles 𝟀, 𝝉, 𝟁), plus the three other edges of the exterior 3-orthoscheme facet, the characteristic tetrahedron (these edges are the characteristic radii of the regular tetrahedron), plus the four edges which are the characteristic radii of the regular 16-cell. The 4-edge path along orthogonal edges of the orthoscheme runs first from a 16-cell vertex to a 16-cell edge center, then turns 90° to a 16-cell face center, then turns 90° to a 16-cell tetrahedral cell center, then turns 90° to the 16-cell center. Helical construction A 16-cell can be constructed (three different ways) from two Boerdijk–Coxeter helixes of eight chained tetrahedra, each bent in the fourth dimension into a ring. The two circular helixes spiral around each other, nest into each other and pass through each other, forming a Hopf link. The 16 triangle faces can be seen in a 2D net within a triangular tiling, with 6 triangles around every vertex. The purple edges represent the Petrie polygon of the 16-cell. The eight-cell ring of tetrahedra contains three octagrams of different colors, eight-edge circular paths that wind twice around the 16-cell on every third vertex of the octagram. The orange and yellow edges are two four-edge halves of one octagram, which join their ends to form a Möbius strip. Thus the 16-cell can be decomposed into two cell-disjoint circular chains of eight tetrahedrons each, four edges long, one spiraling to the right (clockwise) and the other spiraling to the left (counterclockwise). 
The left-handed and right-handed cell rings fit together, nesting into each other and entirely filling the 16-cell, even though they are of opposite chirality. This decomposition can be seen in a 4-4 duoantiprism construction of the 16-cell: Schläfli symbol {2}⨂{2} or s{2}s{2}, symmetry [4,2+,4], order 64. Three eight-edge paths (of different colors) spiral along each eight-cell ring, making 90° angles at each vertex. (In the Boerdijk–Coxeter helix before it is bent into a ring, the angles in different paths vary, but are not 90°.) Three paths (with three different colors and apparent angles) pass through each vertex. When the helix is bent into a ring, the segments of each eight-edge path (of various lengths) join their ends, forming a Möbius strip eight edges long along its single-sided circumference of 4𝝅, and one edge wide. The six four-edge halves of the three eight-edge paths each make four 90° angles, but they are not the six orthogonal great squares: they are open-ended squares, four-edge 360° helices whose open ends are antipodal vertices. The four edges come from four different great squares, and are mutually orthogonal. Combined end-to-end in pairs of the same chirality, the six four-edge paths make three eight-edge Möbius loops, helical octagrams. Each octagram is both a Petrie polygon of the 16-cell and the helical track along which all eight vertices rotate together, in one of the 16-cell's distinct isoclinic rotations. Each eight-edge helix is a skew octagram {8/3} that winds three times around the 16-cell and visits every vertex before closing into a loop. Its eight edges are chords of an isocline, a helical arc on which the 8 vertices circle during an isoclinic rotation. All eight 16-cell vertices are √2 apart except for opposite (antipodal) vertices, which are 2 apart. A vertex moving on the isocline visits three other vertices that are √2 apart before reaching the fourth vertex, which is 2 away. The eight-cell ring is chiral: there is a right-handed form which spirals clockwise, and a left-handed form which spirals counterclockwise. The 16-cell contains one of each, so it also contains a left and a right isocline; the isocline is the circular axis around which the eight-cell ring twists. Each isocline visits all eight vertices of the 16-cell. Each eight-cell ring contains half of the 16 cells, but all 8 vertices; the two rings share the vertices, as they nest into each other and fit together. They also share the 24 edges, though left and right octagram helices are different eight-edge paths. Because there are three pairs of completely orthogonal great squares, there are three congruent ways to compose a 16-cell from two eight-cell rings. The 16-cell contains three left-right pairs of eight-cell rings in different orientations, with each cell ring containing its axial isocline. Each left-right pair of isoclines is the track of a left-right pair of distinct isoclinic rotations: the rotations in one pair of completely orthogonal invariant planes of rotation. At each vertex, there are three great squares and six octagram isoclines that cross at the vertex and share a 16-cell axis chord. As a configuration This configuration matrix represents the 16-cell. The rows and columns correspond to vertices, edges, faces, and cells. The diagonal numbers say how many of each element occur in the whole 16-cell. The nondiagonal numbers say how many of the column's element occur in or at the row's element:

  [ 8  6 12  8 ]
  [ 2 24  4  4 ]
  [ 3  3 32  2 ]
  [ 4  6  4 16 ]

Tessellations One can tessellate 4-dimensional Euclidean space by regular 16-cells. 
This is called the 16-cell honeycomb and has Schläfli symbol {3,3,4,3}. Hence, the 16-cell has a dihedral angle of 120°. Each 16-cell has 16 neighbors with which it shares a tetrahedron, 24 neighbors with which it shares only an edge, and 72 neighbors with which it shares only a single point. Twenty-four 16-cells meet at any given vertex in this tessellation. The dual tessellation, the 24-cell honeycomb, {3,4,3,3}, is made of regular 24-cells. Together with the tesseractic honeycomb {4,3,3,4}, these are the only three regular tessellations of R4. Projections The cell-first parallel projection of the 16-cell into 3-space has a cubical envelope. The closest and farthest cells are projected to inscribed tetrahedra within the cube, corresponding to the two possible ways to inscribe a regular tetrahedron in a cube. Surrounding each of these tetrahedra are 4 other (non-regular) tetrahedral volumes that are the images of the 4 surrounding tetrahedral cells, filling up the space between the inscribed tetrahedron and the cube. The remaining 6 cells are projected onto the square faces of the cube. In this projection of the 16-cell, all its edges lie on the faces of the cubical envelope. The cell-first perspective projection of the 16-cell into 3-space has a triakis tetrahedral envelope. The layout of the cells within this envelope is analogous to that of the cell-first parallel projection. The vertex-first parallel projection of the 16-cell into 3-space has an octahedral envelope. This octahedron can be divided into 8 tetrahedral volumes by cutting along the coordinate planes. Each of these volumes is the image of a pair of cells in the 16-cell. The closest vertex of the 16-cell to the viewer projects onto the center of the octahedron. Finally, the edge-first parallel projection has a shortened octahedral envelope, and the face-first parallel projection has a hexagonal bipyramidal envelope. 4-sphere Venn diagram A 3-dimensional projection of the 16-cell and 4 intersecting spheres (a Venn diagram of 4 sets) are topologically equivalent. Symmetry constructions The 16-cell's symmetry group is denoted B4. There is a lower-symmetry form of the 16-cell, called a demitesseract or 4-demicube, a member of the demihypercube family, represented by h{4,3,3}. It can be drawn bicolored with alternating tetrahedral cells. It can also be seen in a lower-symmetry form as a tetrahedral antiprism, constructed from 2 parallel tetrahedra in dual configurations, connected by 8 (possibly elongated) tetrahedra; this form is represented by s{2,4,3}. It can also be seen as a snub 4-orthotope, represented by s{21,1,1}. With the tesseract constructed as a 4-4 duoprism, the 16-cell can be seen as its dual, a 4-4 duopyramid. Related complex polygons The Möbius–Kantor polygon is a regular complex polygon 3{3}3 in ℂ² which shares the same vertices as the 16-cell. It has 8 vertices and 8 3-edges. The regular complex polygon 2{4}4 in ℂ² has a real representation as a 16-cell in 4-dimensional space, with 8 vertices and 16 2-edges, only half of the edges of the 16-cell. Its symmetry is 4[4]2, order 32. Related uniform polytopes and honeycombs The regular 16-cell and tesseract are the regular members of a set of 15 uniform 4-polytopes with the same B4 symmetry. The 16-cell is also one of the uniform polytopes of D4 symmetry. 
The 16-cell is also related to the cubic honeycomb, the order-4 dodecahedral honeycomb, and the order-4 hexagonal tiling honeycomb, which all have octahedral vertex figures. It belongs to the sequence of {3,3,p} 4-polytopes which have tetrahedral cells. The sequence includes three regular 4-polytopes of Euclidean 4-space, the 5-cell {3,3,3}, 16-cell {3,3,4}, and 600-cell {3,3,5}, and the order-6 tetrahedral honeycomb {3,3,6} of hyperbolic space. It is first in a sequence of quasiregular polytopes and honeycombs h{4,p,q}, and in a half-symmetry sequence of regular forms {p,3,4}.
Notosuchia
Notosuchia is a suborder of primarily Gondwanan mesoeucrocodylian crocodylomorphs that lived during the Jurassic and Cretaceous. Some phylogenies recover Sebecosuchia as a clade within Notosuchia, others as a sister group (see below); if Sebecosuchia is included within Notosuchia, the temporal range of the group extends into the Middle Miocene, about 11 million years ago. Fossils have been found in South America, Africa, Asia, and Europe. Notosuchia was a clade of terrestrial crocodilians that evolved a range of feeding behaviours, including herbivory (Chimaerasuchus), omnivory (Simosuchus), and terrestrial hypercarnivory (Baurusuchus). It included many members with highly derived traits unusual for crocodylomorphs, including mammal-like teeth, flexible bands of shield-like body armor similar to those of armadillos (Armadillosuchus), and possibly fleshy cheeks and pig-like snouts (Notosuchus). The suborder was first named in 1971 by Zulma Gasparini and has since undergone many phylogenetic revisions. Description Notosuchians were generally small, with slender bodies and erect limbs. The most distinctive characteristics are usually seen in the skull. Notosuchian skulls are generally short and deep. While most are relatively narrow, some are very broad. Simosuchus has a broadened skull and jaw that resemble those of a pug, while Anatosuchus has a broad, flat snout like that of a duck. The teeth vary greatly between different genera. Many have heterodont dentitions that vary in shape across the jaw. Often, there are large canine-like teeth protruding from the front of the mouth and broader molar-like teeth in the back. Some genera, such as Yacarerani and Pakasuchus, have extremely mammal-like teeth. Their molars are complex and multicuspid, and are able to occlude or fit with one another. Some forms, such as Malawisuchus, had jaw joints that enabled them to move the jaw back and forth in a shearing motion, rather than just up and down. A derived group of notosuchians, the baurusuchids, differ considerably from other forms. They are very large in comparison to other notosuchians and are exclusively carnivorous. Baurusuchids have deep skulls and prominent canine-like teeth. Recent research found that Araripesuchus wegeneri, Armadillosuchus arrudai, Baurusuchus, Iberosuchus macrodon, and Stratiotosuchus maxhechti were ectothermic organisms. Classification Taxonomy Genera The evolutionary interrelationships of Notosuchia are in flux, but the following genera are generally considered notosuchians: Phylogeny The clade Notosuchia has undergone many recent phylogenetic revisions. In 2000, Notosuchia was proposed to be one of two groups within the clade Ziphosuchia, the other being Sebecosuchia, which included deep-snouted forms such as baurusuchids and sebecids. The definition of Notosuchia by Sereno et al. (2001) is similar to that of Ziphosuchia, as it includes within it Sebecosuchia. Pol (2003) also includes Sebecosuchia within Notosuchia. More recently, a phylogenetic analysis by Larsson and Sues (2007) resulted in the naming of a new clade, Sebecia, to include sebecids and peirosaurids. Baurusuchidae was considered to be polyphyletic in this study, with Pabwehshi being a basal member of Sebecia and Baurusuchus being the sister taxon to the clade containing Neosuchia and Sebecia. Thus, Sebecosuchia was no longer within Notosuchia and was not considered to be a true clade, while Notosuchia was found to be a basal clade of Metasuchia. 
The following cladogram is simplified after the most comprehensive analysis of notosuchians as of 2014, presented by Pol et al. in 2014. It is based mainly on the data matrix published by Pol et al. (2012), which is itself a modified version of previous analyses. Thirty-one additional characters were added from other comprehensive analyses of notosuchians, e.g. Turner and Sertich (2010), Andrade et al. (2011), Montefeltro et al. (2011), Larsson and Sues (2007), and Novas et al. (2009), and 34 characters were novel, resulting in a matrix that includes 109 crocodyliforms and outgroup taxa scored for 412 morphological traits. This cladogram represents the results of the most comprehensive analysis of notosuchian relationships to date, performed in the description of Antaeusuchus taouzensis by Nicholl et al. 2021. It is largely based on the matrix from the above Pol et al. 2014 study, but also adds character scores from Leardi et al. 2015, Fiorelli et al. 2016, Leardi et al. 2018, and Martinez et al. 2018. The final matrix consisted of 121 taxa scored for 443 morphological traits.
Binary multiplier
A binary multiplier is an electronic circuit used in digital electronics, such as a computer, to multiply two binary numbers. A variety of computer arithmetic techniques can be used to implement a digital multiplier. Most techniques involve computing the set of partial products, which are then summed together using binary adders. This process is similar to long multiplication, except that it uses a base-2 (binary) numeral system. History Between 1947 and 1949, Arthur Alec Robinson worked for English Electric, as a student apprentice and then as a development engineer. During this period he also studied for a PhD degree at the University of Manchester, where he worked on the design of the hardware multiplier for the early Mark 1 computer. However, until the late 1970s, most minicomputers did not have a multiply instruction, and so programmers used a "multiply routine" which repeatedly shifts and accumulates partial results, often written using loop unwinding. Mainframe computers had multiply instructions, but they did the same sorts of shifts and adds as a "multiply routine". Early microprocessors also had no multiply instruction. Though the multiply instruction became common with the 16-bit generation, at least two 8-bit processors have a multiply instruction: the Motorola 6809, introduced in 1978, and the Intel MCS-51 family, developed in 1980; later, the modern Atmel AVR 8-bit microprocessors present in the ATMega, ATTiny and ATXMega microcontrollers also gained one. As more transistors per chip became available due to larger-scale integration, it became possible to put enough adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle each partial product one at a time. Because some common digital signal processing algorithms spend most of their time multiplying, digital signal processor designers sacrifice considerable chip area in order to make the multiply as fast as possible; a single-cycle multiply–accumulate unit often used up most of the chip area of early DSPs. Binary long multiplication The method taught in school for multiplying decimal numbers is based on calculating partial products, shifting them to the left and then adding them together. The most difficult part is to obtain the partial products, as that involves multiplying a long number by one digit (from 0 to 9):

      123
    × 456
    =====
      738    (this is 123 × 6)
     615     (this is 123 × 5, shifted one position to the left)
  + 492      (this is 123 × 4, shifted two positions to the left)
    =====
    56088

A binary computer does exactly the same long multiplication, but with binary numbers. In binary encoding each long number is multiplied by one digit (either 0 or 1), and that is much easier than in decimal, as the product by 0 or 1 is just 0 or the same number. Therefore, the multiplication of two binary numbers comes down to calculating partial products (which are 0 or the first number), shifting them left, and then adding them together (a binary addition, of course):

        1011    (this is binary for decimal 11)
      × 1110    (this is binary for decimal 14)
      ======
        0000    (this is 1011 × 0)
       1011     (this is 1011 × 1, shifted one position to the left)
      1011      (this is 1011 × 1, shifted two positions to the left)
   + 1011       (this is 1011 × 1, shifted three positions to the left)
   =========
    10011010    (this is binary for decimal 154)

This is much simpler than in the decimal system, as there is no table of multiplication to remember: just shifts and adds. 
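The shift-and-add procedure above translates directly into software. Here is a minimal Python sketch (illustrative only, not taken from any particular CPU's multiply routine) of how the old software "multiply routines" worked on unsigned integers:

def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two unsigned integers by repeated shift-and-add."""
    product = 0
    while b:
        if b & 1:            # current multiplier bit is 1:
            product += a     # add the (shifted) multiplicand
        a <<= 1              # shift the multiplicand left one position
        b >>= 1              # move on to the next multiplier bit
    return product

assert shift_add_multiply(11, 14) == 154   # the worked example above

Each loop iteration corresponds to one row of the written-out binary multiplication: an AND of one multiplier bit with the whole multiplicand, followed by a shifted addition.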
This method is mathematically correct and has the advantage that a small CPU may perform the multiplication by using the shift and add features of its arithmetic logic unit rather than a specialized circuit. The method is slow, however, as it involves many intermediate additions. These additions are time-consuming. Faster multipliers may be engineered in order to do fewer additions; a modern processor can multiply two 64-bit numbers with 6 additions (rather than 64), and can do several steps in parallel. The second problem is that the basic school method handles the sign with a separate rule ("+ with + yields +", "+ with − yields −", etc.). Modern computers embed the sign of the number in the number itself, usually in the two's complement representation. That forces the multiplication process to be adapted to handle two's complement numbers, and that complicates the process a bit more. Similarly, processors that use ones' complement, sign-and-magnitude, IEEE-754 or other binary representations require specific adjustments to the multiplication process. Unsigned integers For example, suppose we want to multiply two unsigned 8-bit integers together: a[7:0] and b[7:0]. We can produce eight partial products by performing eight 1-bit multiplications, one for each bit in multiplicand a:

  p0[7:0] = a[0] × b[7:0] = {8{a[0]}} & b[7:0]
  p1[7:0] = a[1] × b[7:0] = {8{a[1]}} & b[7:0]
  p2[7:0] = a[2] × b[7:0] = {8{a[2]}} & b[7:0]
  p3[7:0] = a[3] × b[7:0] = {8{a[3]}} & b[7:0]
  p4[7:0] = a[4] × b[7:0] = {8{a[4]}} & b[7:0]
  p5[7:0] = a[5] × b[7:0] = {8{a[5]}} & b[7:0]
  p6[7:0] = a[6] × b[7:0] = {8{a[6]}} & b[7:0]
  p7[7:0] = a[7] × b[7:0] = {8{a[7]}} & b[7:0]

where {8{a[0]}} means repeating a[0] (the 0th bit of a) 8 times (Verilog notation). In order to obtain our product, we then need to add up all eight of our partial products, as shown here:

                                                  p0[7] p0[6] p0[5] p0[4] p0[3] p0[2] p0[1] p0[0]
                                          + p1[7] p1[6] p1[5] p1[4] p1[3] p1[2] p1[1] p1[0]     0
                                  + p2[7] p2[6] p2[5] p2[4] p2[3] p2[2] p2[1] p2[0]     0      0
                          + p3[7] p3[6] p3[5] p3[4] p3[3] p3[2] p3[1] p3[0]     0      0      0
                  + p4[7] p4[6] p4[5] p4[4] p4[3] p4[2] p4[1] p4[0]     0      0      0      0
          + p5[7] p5[6] p5[5] p5[4] p5[3] p5[2] p5[1] p5[0]     0      0      0      0      0
  + p6[7] p6[6] p6[5] p6[4] p6[3] p6[2] p6[1] p6[0]     0      0      0      0      0      0
+ p7[7] p7[6] p7[5] p7[4] p7[3] p7[2] p7[1] p7[0]     0     0      0      0      0      0      0
-------------------------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10]  P[9]  P[8]  P[7]  P[6]  P[5]  P[4]  P[3]  P[2]  P[1]  P[0]

In other words, P[15:0] is produced by summing p0, p1 << 1, p2 << 2, and so forth, to produce our final unsigned 16-bit product. Signed integers If b had been a signed integer instead of an unsigned integer, then the partial products would need to have been sign-extended up to the width of the product before summing. If a had been a signed integer, then partial product p7 would need to be subtracted from the final sum, rather than added to it. 
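A direct software rendering of this partial-product array (a sketch that uses Python integers to stand in for the hardware bit vectors; the function name is ours) makes the summation explicit:

def array_multiply_unsigned(a: int, b: int, n: int = 8) -> int:
    """Sum of n partial products, each an AND-masked copy of b."""
    product = 0
    for i in range(n):
        bit = (a >> i) & 1               # a[i]
        p_i = (b & 0xFF) if bit else 0   # {8{a[i]}} & b[7:0]
        product += p_i << i              # partial product, shifted i places
    return product & 0xFFFF              # the 16-bit product P[15:0]

assert array_multiply_unsigned(0b1011, 0b1110) == 154
# Exhaustive check against ordinary multiplication over all 8-bit inputs:
assert all(array_multiply_unsigned(x, y) == x * y
           for x in range(256) for y in range(256))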
The above array multiplier can be modified to support two's complement notation signed numbers by inverting several of the product terms and inserting a one to the left of the first partial product term:

                                               1 ~p0[7]  p0[6]  p0[5]  p0[4]  p0[3]  p0[2]  p0[1]  p0[0]
                                        + ~p1[7]  p1[6]  p1[5]  p1[4]  p1[3]  p1[2]  p1[1]  p1[0]      0
                                + ~p2[7]  p2[6]  p2[5]  p2[4]  p2[3]  p2[2]  p2[1]  p2[0]      0      0
                        + ~p3[7]  p3[6]  p3[5]  p3[4]  p3[3]  p3[2]  p3[1]  p3[0]      0      0      0
                + ~p4[7]  p4[6]  p4[5]  p4[4]  p4[3]  p4[2]  p4[1]  p4[0]      0      0      0      0
        + ~p5[7]  p5[6]  p5[5]  p5[4]  p5[3]  p5[2]  p5[1]  p5[0]      0      0      0      0      0
+ ~p6[7]  p6[6]  p6[5]  p6[4]  p6[3]  p6[2]  p6[1]  p6[0]      0      0      0      0      0      0
+ 1 p7[7] ~p7[6] ~p7[5] ~p7[4] ~p7[3] ~p7[2] ~p7[1] ~p7[0]     0      0      0      0      0      0      0
---------------------------------------------------------------------------------------------------------
P[15] P[14] P[13] P[12] P[11] P[10]  P[9]  P[8]  P[7]  P[6]  P[5]  P[4]  P[3]  P[2]  P[1]  P[0]

where ~p represents the complement (opposite value) of p. There are many simplifications in the bit array above that are not shown and are not obvious. The sequences of one complemented bit followed by noncomplemented bits implement a two's complement trick to avoid sign extension. The sequence for p7 (a noncomplemented bit followed by all complemented bits) appears because we are subtracting this term, so all of its bits start out negated (and a 1 is added in the least significant position). For both types of sequences, the last bit is flipped and an implicit −1 should be added directly below the MSB. When the +1 from the two's complement negation for p7 in bit position 0 (LSB) and all the −1's in bit columns 7 through 14 (where each of the MSBs are located) are added together, they can be simplified to the single 1 that "magically" is floating out to the left. For an explanation and proof of why flipping the MSB saves us the sign extension, see a computer arithmetic book. Floating-point numbers A binary floating-point number contains a sign bit, significant bits (known as the significand) and exponent bits (for simplicity, we do not consider the base and combination fields). The sign bits of each operand are XORed to get the sign of the answer. Then, the two exponents are added to get the exponent of the result. Finally, multiplication of each operand's significand will return the significand of the result. However, if the result of the binary multiplication is higher than the total number of bits for a specific precision (e.g. 32, 64, 128), rounding is required and the exponent is changed appropriately. Hardware implementation The process of multiplication can be split into 3 steps: generating partial products, reducing partial products, and computing the final product. Older multiplier architectures employed a shifter and accumulator to sum each partial product, often one partial product per cycle, trading off speed for die area. Modern multiplier architectures use the (Modified) Baugh–Wooley algorithm, Wallace trees, or Dadda multipliers to add the partial products together in a single cycle. The performance of the Wallace tree implementation is sometimes improved by modified Booth encoding one of the two multiplicands, which reduces the number of partial products that must be summed. For speed, shift-and-add multipliers require a fast adder (something faster than ripple-carry). A "single cycle" multiplier (or "fast multiplier") is pure combinational logic. In a fast multiplier, the partial-product reduction process usually contributes the most to the delay, power, and area of the multiplier. 
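To see that the inverted terms and the two inserted 1s really do produce a correct two's complement product, the array can be simulated bit for bit. The following Python sketch (an illustrative check of the scheme shown above, not a hardware description; the function names are ours) builds exactly the rows of the modified array and compares the result with ordinary signed multiplication:

def baugh_wooley_8x8(a: int, b: int) -> int:
    """Signed 8x8 multiply using the modified (Baugh-Wooley) array above."""
    abit = lambda i: (a >> i) & 1
    bbit = lambda j: (b >> j) & 1
    total = 0
    for i in range(7):                          # rows p0..p6
        row = 0
        for j in range(7):                      # uncomplemented bits
            row |= (abit(i) & bbit(j)) << j
        row |= (1 ^ (abit(i) & bbit(7))) << 7   # ~pi[7]
        total += row << i
    row = (abit(7) & bbit(7)) << 7              # p7[7], uncomplemented
    for j in range(7):
        row |= (1 ^ (abit(7) & bbit(j))) << j   # ~p7[j]
    total += row << 7
    total += (1 << 8) + (1 << 15)               # the two inserted 1s
    return total & 0xFFFF                       # 16-bit two's complement P

def to_signed16(x: int) -> int:
    return x - 0x10000 if x & 0x8000 else x

# Exhaustive check over every pair of signed 8-bit operands:
assert all(to_signed16(baugh_wooley_8x8(a & 0xFF, b & 0xFF)) == a * b
           for a in range(-128, 128) for b in range(-128, 128))

The two constants add 2^8 + 2^15, which (modulo 2^16) exactly cancels the bias introduced by replacing the subtracted terms with their complements, as the surrounding text describes.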
For speed, the "reduce partial product" stages are typically implemented as a carry-save adder composed of compressors and the "compute final product" step is implemented as a fast adder (something faster than ripple-carry). Many fast multipliers use full adders as compressors ("3:2 compressors") implemented in static CMOS. To achieve better performance in the same area or the same performance in a smaller area, multiplier designs may use higher order compressors such as 7:3 compressors; implement the compressors in faster logic (such transmission gate logic, pass transistor logic, domino logic); connect the compressors in a different pattern; or some combination. Example circuits
Technology
Digital logic
null
4535852
https://en.wikipedia.org/wiki/Carbon%20sequestration
Carbon sequestration
Carbon sequestration is the process of storing carbon in a carbon pool. It plays a crucial role in limiting climate change by reducing the amount of carbon dioxide in the atmosphere. There are two main types of carbon sequestration: biologic (also called biosequestration) and geologic. Biologic carbon sequestration is a naturally occurring process as part of the carbon cycle. Humans can enhance it through deliberate actions and use of technology. Carbon dioxide (CO2) is naturally captured from the atmosphere through biological, chemical, and physical processes. These processes can be accelerated, for example, through changes in land use and agricultural practices, called carbon farming. Artificial processes have also been devised to produce similar effects. This approach is called carbon capture and storage. It involves using technology to capture and sequester (store) CO2 that is produced from human activities, underground or under the sea bed. Plants, such as forests and kelp beds, absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores may be temporary carbon sinks, as long-term sequestration cannot be guaranteed. Wildfires, disease, economic pressures, and changing political priorities may release the sequestered carbon back into the atmosphere. Carbon dioxide that has been removed from the atmosphere can also be stored in the Earth's crust by injecting it underground, or in the form of insoluble carbonate salts. The latter process is called mineral sequestration. These methods are considered non-volatile because they not only remove carbon dioxide from the atmosphere but also sequester it indefinitely. This means the carbon is "locked away" for thousands to millions of years. To enhance carbon sequestration processes in oceans the following chemical or physical technologies have been proposed: ocean fertilization, artificial upwelling, basalt storage, mineralization and deep-sea sediments, and adding bases to neutralize acids. However, none have achieved large scale application so far. Large-scale seaweed farming, on the other hand, is a biological process and could sequester significant amounts of carbon. The potential growth of seaweed for carbon farming would see the harvested seaweed transported to the deep ocean for long-term burial. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" on seaweed farming as a mitigation tactic. Terminology The term carbon sequestration is used in different ways in the literature and media. The IPCC Sixth Assessment Report defines it as "The process of storing carbon in a carbon pool". Subsequently, a pool is defined as "a reservoir in the Earth system where elements, such as carbon and nitrogen, reside in various chemical forms for a period of time". The United States Geological Survey (USGS) defines carbon sequestration as follows: "Carbon sequestration is the process of capturing and storing atmospheric carbon dioxide." Therefore, the difference between carbon sequestration and carbon capture and storage (CCS) is sometimes blurred in the media. The IPCC, however, defines CCS as "a process in which a relatively pure stream of carbon dioxide (CO2) from industrial sources is separated, treated and transported to a long-term storage location". Roles In nature Carbon sequestration is part of the natural carbon cycle by which carbon is exchanged among the biosphere, pedosphere (soil), geosphere, hydrosphere, and atmosphere of Earth.
Carbon dioxide is naturally captured from the atmosphere through biological, chemical, or physical processes, and stored in long-term reservoirs. Plants, such as forests and kelp beds, absorb carbon dioxide from the air as they grow, and bind it into biomass. However, these biological stores are considered volatile carbon sinks as long-term sequestration cannot be guaranteed. Events such as wildfires or disease, economic pressures, and changing political priorities can result in the sequestered carbon being released back into the atmosphere. In climate change mitigation and policies Carbon sequestration, when acting as a carbon sink, helps to mitigate climate change and thus reduce its harmful effects. It helps to slow the atmospheric and marine accumulation of greenhouse gases, mainly carbon dioxide released by burning fossil fuels. Carbon sequestration, when applied for climate change mitigation, can either build on enhancing naturally occurring carbon sequestration or use technology for carbon sequestration processes. Within the carbon capture and storage approaches, carbon sequestration refers to the storage component. Artificial carbon storage technologies can be applied, such as gaseous storage in deep geological formations (including saline formations and exhausted gas fields), and solid storage by reaction of CO2 with metal oxides to produce stable carbonates. For carbon to be sequestered artificially (i.e. not using the natural processes of the carbon cycle) it must first be captured, or it must be significantly delayed or prevented from being re-released into the atmosphere (by combustion, decay, etc.) from an existing carbon-rich material, by being incorporated into an enduring usage (such as in construction). Thereafter it can be passively stored or remain productively utilized over time in a variety of ways. For instance, upon harvesting, wood (as a carbon-rich material) can be incorporated into construction or a range of other durable products, thus sequestering its carbon over years or even centuries. In industrial production, engineers typically capture carbon dioxide from emissions from power plants or factories. For example, in the United States, Executive Order 13990 (officially titled "Protecting Public Health and the Environment and Restoring Science to Tackle the Climate Crisis") from 2021 includes several mentions of carbon sequestration via conservation and restoration of carbon sink ecosystems, such as wetlands and forests. The document emphasizes the importance of farmers, landowners, and coastal communities in carbon sequestration. It directs the Treasury Department to promote conservation of carbon sinks through market-based mechanisms. Biological carbon sequestration on land Biological carbon sequestration (also called biosequestration) is the capture and storage of the atmospheric greenhouse gas carbon dioxide by continual or enhanced biological processes. This form of carbon sequestration occurs through increased rates of photosynthesis via land-use practices such as reforestation and sustainable forest management. Land-use changes that enhance natural carbon capture have the potential to capture and store large amounts of carbon dioxide each year. These include the conservation, management, and restoration of ecosystems such as forests, peatlands, wetlands, and grasslands, in addition to carbon sequestration methods in agriculture. Methods and practices exist to enhance soil carbon sequestration in both agriculture and forestry.
Forestry Forests are an important part of the global carbon cycle because trees and plants absorb carbon dioxide through photosynthesis. Therefore, they play an important role in climate change mitigation. By removing the greenhouse gas carbon dioxide from the air, forests function as terrestrial carbon sinks, meaning they store large amounts of carbon in the form of biomass, encompassing roots, stems, branches, and leaves. Throughout their lifespan, trees continue to sequester carbon, storing atmospheric CO2 long-term. Sustainable forest management, afforestation, and reforestation are therefore important contributions to climate change mitigation. An important consideration in such efforts is that forests can turn from sinks to carbon sources. In 2019 forests took up a third less carbon than they did in the 1990s, due to higher temperatures, droughts and deforestation. The typical tropical forest may become a carbon source by the 2060s. Researchers have found that, in terms of environmental services, it is better to avoid deforestation than to allow deforestation and subsequently reforest, as the latter leads to irreversible effects in terms of biodiversity loss and soil degradation. Furthermore, the probability that legacy carbon will be released from soil is higher in younger boreal forest. Global greenhouse gas emissions caused by damage to tropical rainforests may have been substantially underestimated until around 2019. Additionally, the effects of afforestation and reforestation will be farther in the future than keeping existing forests intact. It takes much longer, several decades, for the global warming benefits of newly planted trees to match the carbon sequestration benefits already provided by mature trees in tropical forests, and hence the benefits of limiting deforestation. Therefore, scientists consider "the protection and recovery of carbon-rich and long-lived ecosystems, especially natural forests" to be "the major climate solution". The planting of trees on marginal crop and pasture lands helps to incorporate carbon from atmospheric CO2 into biomass. For this carbon sequestration process to succeed the carbon must not return to the atmosphere from biomass burning or rotting when the trees die. To this end, land allotted to the trees must not be converted to other uses. Alternatively, the wood from them must itself be sequestered, e.g., via biochar, bioenergy with carbon capture and storage, landfill or stored by use in construction. Earth offers enough room to plant an additional 0.9 billion ha of tree canopy cover, although this estimate has been criticized, and the true area that has a net cooling effect on the climate when accounting for biophysical feedbacks like albedo is 20-80% lower. Planting and protecting these trees would sequester 205 billion tons of carbon if the trees survive future climate stress to reach maturity. To put this number into perspective, this is about 20 years of current global carbon emissions (as of 2019). This level of sequestration would represent about 25% of the atmosphere's carbon pool in 2019. Life expectancy of forests varies throughout the world, influenced by tree species, site conditions, and natural disturbance patterns. In some forests, carbon may be stored for centuries, while in other forests, carbon is released with frequent stand replacing fires. Forests that are harvested prior to stand replacing events allow for the retention of carbon in manufactured forest products such as lumber.
However, only a portion of the carbon removed from logged forests ends up as durable goods and buildings. The remainder ends up as sawmill by-products such as pulp, paper, and pallets. If all new construction globally utilized 90% wood products, largely via adoption of mass timber in low rise construction, this could sequester 700 million net tons of carbon per year. This is in addition to the elimination of carbon emissions from the displaced construction material such as steel or concrete, which are carbon-intense to produce. A meta-analysis found that mixed species plantations would increase carbon storage alongside other benefits of diversifying planted forests. Although a bamboo forest stores less total carbon than a mature forest of trees, a bamboo plantation sequesters carbon at a much faster rate than a mature forest or a tree plantation. Therefore, the farming of bamboo timber may have significant carbon sequestration potential. The Food and Agriculture Organization (FAO) reported that "The total carbon stock in forests decreased from 668 gigatonnes in 1990 to 662 gigatonnes in 2020". In Canada's boreal forests as much as 80% of the total carbon is stored in the soils as dead organic matter. The IPCC Sixth Assessment Report says: "Secondary forest regrowth and restoration of degraded forests and non-forest ecosystems can play a large role in carbon sequestration (high confidence) with high resilience to disturbances and additional benefits such as enhanced biodiversity." Impacts on temperature are affected by the location of the forest. For example, reforestation in boreal or subarctic regions has less impact on climate. This is because it substitutes a high-albedo, snow-dominated region with a lower-albedo forest canopy. By contrast, tropical reforestation projects lead to a positive change such as the formation of clouds. These clouds then reflect the sunlight, lowering temperatures. Planting trees in tropical climates with wet seasons has another advantage. In such a setting, trees grow more quickly (fixing more carbon) because they can grow year-round. Trees in tropical climates have, on average, larger, brighter, and more abundant leaves than non-tropical climates. A study of the girth of 70,000 trees across Africa has shown that tropical forests fix more carbon dioxide pollution than previously realized. The research suggested almost one-fifth of fossil fuel emissions are absorbed by forests across Africa, Amazonia and Asia. Simon Lewis stated, "Tropical forest trees are absorbing about 18% of the carbon dioxide added to the atmosphere each year from burning fossil fuels, substantially buffering the rate of change." Wetlands Wetland restoration involves restoring a wetland's natural biological, geological, and chemical functions through re-establishment or rehabilitation. It is an effective way to mitigate climate change. Wetland soil, particularly in coastal wetlands such as mangroves, sea grasses, and salt marshes, is an important carbon reservoir; 20–30% of the world's soil carbon is found in wetlands, while only 5–8% of the world's land is composed of wetlands. Studies have shown that restored wetlands can become productive sinks and many are being restored. Aside from climate benefits, wetland restoration and conservation can help preserve biodiversity, improve water quality, and aid with flood control. The plants that make up wetlands absorb carbon dioxide (CO2) from the atmosphere and convert it into organic matter.
The waterlogged nature of the soil slows down the decomposition of organic material, leading to the accumulation of carbon-rich sediments, acting as a long-term carbon sink. Also, anaerobic conditions in waterlogged soils hinder the complete breakdown of organic matter, promoting the conversion of carbon into more stable forms. As with forests, for the sequestration process to succeed, the wetland must remain undisturbed. If it is disturbed the carbon stored in the plants and sediments will be released back into the atmosphere, and the ecosystem will no longer function as a carbon sink. Additionally, some wetlands can release non-CO2 greenhouse gases, such as methane and nitrous oxide, which could offset potential climate benefits. The amounts of carbon sequestered via blue carbon by wetlands can also be difficult to measure. Wetland soil is an important carbon sink; 14.5% of the world's soil carbon is found in wetlands, while only 5.5% of the world's land is composed of wetlands. Not only are wetlands a great carbon sink, they have many other benefits like collecting floodwater, filtering out air and water pollutants, and creating a home for numerous birds, fish, insects, and plants. Climate change could alter wetland soil carbon storage, changing it from a sink to a source. With rising temperatures comes an increase in greenhouse gases from wetlands, especially locations with permafrost. When this permafrost melts it increases the available oxygen and water in the soil. Because of this, bacteria in the soil would create large amounts of carbon dioxide and methane to be released into the atmosphere. The link between climate change and wetlands is still not fully known. It is also not clear how restored wetlands manage carbon while still being a contributing source of methane. However, preserving these areas would help prevent further release of carbon into the atmosphere. Peatlands, mires and peat bogs Despite occupying only 3% of the global land area, peatlands hold approximately 30% of the carbon in our ecosystem, twice the amount stored in the world's forests. Most peatlands are situated in high latitude areas of the northern hemisphere, with most of their growth occurring since the last ice age, but they are also found in tropical regions, such as the Amazon and Congo Basin. Peatlands grow steadily over thousands of years, accumulating dead plant material – and the carbon contained within it – due to waterlogged conditions which greatly slow rates of decay. If peatlands are drained, for farmland or development, the plant material stored within them decomposes rapidly, releasing stored carbon. These degraded peatlands account for 5-10% of global carbon emissions from human activities. The loss of one peatland could potentially produce more carbon than 175–500 years of methane emissions. Peatland protection and restoration are therefore important measures to mitigate carbon emissions, and also provide benefits for biodiversity, freshwater provision, and flood risk reduction. Agriculture Compared to natural vegetation, cropland soils are depleted in soil organic carbon (SOC). When soil is converted from natural land or semi-natural land, such as forests, woodlands, grasslands, steppes, and savannas, the SOC content in the soil reduces by about 30–40%. This loss is due to harvesting, as the removed plant material contains carbon. When land use changes, the carbon in the soil will either increase or decrease, and this change will continue until the soil reaches a new equilibrium.
Deviations from this equilibrium can also be caused by climate variability. The decrease of SOC content can be counteracted by increasing the carbon input. This can be done with several strategies, e.g. leaving harvest residues on the field, using manure as fertilizer, or including perennial crops in the rotation. Perennial crops have a larger below-ground biomass fraction, which increases the SOC content. Perennial crops reduce the need for tillage and thus help mitigate soil erosion, and may help increase soil organic matter. Globally, soils are estimated to contain more than 8,580 gigatons of organic carbon, about ten times the amount in the atmosphere and much more than in vegetation. Researchers have found that rising temperatures can lead to population booms in soil microbes, converting stored carbon into carbon dioxide. In laboratory experiments heating soil, fungi-rich soils released less carbon dioxide than other soils. Following carbon dioxide (CO2) absorption from the atmosphere, plants deposit organic matter into the soil. This organic matter, derived from decaying plant material and root systems, is rich in carbon compounds. Microorganisms in the soil break down this organic matter, and in the process, some of the carbon becomes further stabilized in the soil as humus, a process known as humification. On a global basis, it is estimated that soil contains about 2,500 gigatons of carbon. This is more than three times the carbon found in the atmosphere and four times that found in living plants and animals. About 70% of the global soil organic carbon in non-permafrost areas is found in deeper soil within the upper metre and is stabilized by mineral-organic associations. Carbon farming Prairies Prairie restoration is a conservation effort to restore prairie lands that were destroyed due to industrial, agricultural, commercial, or residential development. The primary aim is to return areas and ecosystems to their previous state before their depletion. The mass of SOC able to be stored in these restored plots is typically greater than the previous crop, acting as a more effective carbon sink. Biochar Biochar is charcoal created by pyrolysis of biomass waste. The resulting material is added to a landfill or used as a soil improver to create terra preta. Adding biochar may increase the soil-C stock for the long term and so mitigate global warming by offsetting the atmospheric C (up to 9.5 gigatons C annually). In the soil, the biochar carbon is unavailable for oxidation to CO2 and consequential atmospheric release. However, concerns have been raised about biochar potentially accelerating release of the carbon already present in the soil. Terra preta, an anthropogenic, high-carbon soil, is also being investigated as a sequestration mechanism. By pyrolysing biomass, about half of its carbon can be reduced to charcoal, which can persist in the soil for centuries, and makes a useful soil amendment, especially in tropical soils (biochar or agrichar). Burial of biomass Burying biomass (such as trees) directly mimics the natural processes that created fossil fuels. The global potential for carbon sequestration using wood burial is estimated to be 10 ± 5 GtC/yr, with the largest rates in tropical forests (4.2 GtC/yr), followed by temperate (3.7 GtC/yr) and boreal forests (2.1 GtC/yr).
In 2008, Ning Zeng of the University of Maryland estimated that 65 GtC lies on the floor of the world's forests as coarse woody material which could be buried, and that costs for wood burial carbon sequestration run at US$50/tC, much lower than carbon capture from, for example, power plant emissions. CO2 fixation into woody biomass is a natural process carried out through photosynthesis. This is a nature-based solution, and methods being trialled include the use of "wood vaults" to store the wood-containing carbon under oxygen-free conditions. In 2022, a certification organization published methodologies for biomass burial. Other biomass storage proposals have included the burial of biomass deep underwater, including at the bottom of the Black Sea. Geological carbon sequestration Underground storage in suitable geologic formations Geological sequestration refers to the storage of CO2 underground in depleted oil and gas reservoirs, saline formations, or deep coal beds unsuitable for mining. Once CO2 is captured from a point source, such as a cement factory, it can be compressed to ≈100 bar into a supercritical fluid. In this form, the CO2 could be transported via pipeline to the place of storage. The CO2 could then be injected deep underground, where it would be stable for hundreds to millions of years. Under these storage conditions, the density of supercritical CO2 is 600 to 800 kg/m3. The important parameters in determining a good site for carbon storage are: rock porosity, rock permeability, absence of faults, and geometry of rock layers. The medium in which the CO2 is to be stored ideally has a high porosity and permeability, such as sandstone or limestone. Sandstone can have a permeability ranging from 1 to 10⁻⁵ Darcy, with a porosity as high as ≈30%. The porous rock must be capped by a layer of low permeability which acts as a seal, or caprock, for the CO2. Shale is an example of a very good caprock, with a permeability of 10⁻⁵ to 10⁻⁹ Darcy. Once injected, the CO2 plume will rise via buoyant forces, since it is less dense than its surroundings. Once it encounters a caprock, it will spread laterally until it encounters a gap. If there are fault planes near the injection zone, there is a possibility the CO2 could migrate along the fault to the surface, leaking into the atmosphere, which would be potentially dangerous to life in the surrounding area. Another risk related to carbon sequestration is induced seismicity. If the injection of CO2 creates pressures underground that are too high, the formation will fracture, potentially causing an earthquake. Structural trapping is considered the principal storage mechanism: impermeable or low-permeability rocks such as mudstone, anhydrite, halite, or tight carbonates act as a barrier to the upward buoyant migration of CO2, resulting in the retention of CO2 within a storage formation. While trapped in a rock formation, CO2 can be in the supercritical fluid phase or dissolve in groundwater/brine. It can also react with minerals in the geologic formation to become carbonates. Mineral sequestration Mineral sequestration aims to trap carbon in the form of solid carbonate salts. This process occurs slowly in nature and is responsible for the deposition and accumulation of limestone over geologic time. Carbonic acid in groundwater slowly reacts with complex silicates to dissolve calcium, magnesium, alkalis and silica and leave a residue of clay minerals.
The dissolved calcium and magnesium react with bicarbonate to precipitate calcium and magnesium carbonates, a process that organisms use to make shells. When the organisms die, their shells are deposited as sediment and eventually turn into limestone. Limestones have accumulated over billions of years of geologic time and contain much of Earth's carbon. Ongoing research aims to speed up similar reactions involving alkali carbonates. Zeolitic imidazolate frameworks (ZIFs) are metal–organic frameworks similar to zeolites. Because of their porosity, chemical stability and thermal resistance, ZIFs are being examined for their capacity to capture carbon dioxide. Mineral carbonation CO2 exothermically reacts with metal oxides, producing stable carbonates (e.g. calcite, magnesite). This process (CO2-to-stone) occurs naturally over periods of years and is responsible for much surface limestone. Olivine is one such metal oxide. Rocks rich in metal oxides that react with CO2, such as MgO and CaO as contained in basalts, have been proven as a viable means to achieve carbon-dioxide mineral storage. The reaction rate can in principle be accelerated with a catalyst or by increasing pressures, or by mineral pre-treatment, although this method can require additional energy. Ultramafic mine tailings are a readily available source of fine-grained metal oxides that could serve this purpose. Accelerating passive CO2 sequestration via mineral carbonation may be achieved through microbial processes that enhance mineral dissolution and carbonate precipitation. Carbon, in the form of CO2, can be removed from the atmosphere by chemical processes, and stored in stable carbonate mineral forms. This process (CO2-to-stone) is known as "carbon sequestration by mineral carbonation" or mineral sequestration. The process involves reacting carbon dioxide with abundantly available metal oxides – either magnesium oxide (MgO) or calcium oxide (CaO) – to form stable carbonates. These reactions are exothermic and occur naturally (e.g., the weathering of rock over geologic time periods): CaO + CO2 → CaCO3 and MgO + CO2 → MgCO3. Calcium and magnesium are found in nature typically as calcium and magnesium silicates (such as forsterite and serpentinite) and not as binary oxides. For forsterite and serpentine the reactions are: Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2 and Mg3Si2O5(OH)4 + 3 CO2 → 3 MgCO3 + 2 SiO2 + 2 H2O. These reactions are slightly more favorable at low temperatures. This process occurs naturally over geologic time frames and is responsible for much of the Earth's surface limestone. The reaction rate can be made faster, however, by reacting at higher temperatures and/or pressures, although this method requires some additional energy. Alternatively, the mineral could be milled to increase its surface area, and exposed to water and constant abrasion to remove the inert silica, as could be achieved naturally by dumping olivine in the high energy surf of beaches. When CO2 is dissolved in water and injected into hot basaltic rocks underground, it has been shown that the CO2 reacts with the basalt to form solid carbonate minerals. A test plant in Iceland started up in October 2017, extracting up to 50 tons of CO2 a year from the atmosphere and storing it underground in basaltic rock. Sequestration in oceans Several start-ups are attempting ocean-based sequestration at scale. Marine carbon pumps The ocean naturally sequesters carbon through different processes. The solubility pump moves carbon dioxide from the atmosphere into the surface ocean where it reacts with water molecules to form carbonic acid.
The solubility of carbon dioxide increases with decreasing water temperatures. Thermohaline circulation moves dissolved carbon dioxide to cooler waters where it is more soluble, increasing carbon concentrations in the ocean interior. The biological pump moves dissolved carbon dioxide from the surface ocean to the ocean's interior through the conversion of inorganic carbon to organic carbon by photosynthesis. Organic matter that survives respiration and remineralization can be transported through sinking particles and organism migration to the deep ocean. The low temperatures, high pressure, and reduced oxygen levels in the deep sea slow down decomposition processes, preventing the rapid release of carbon back into the atmosphere and acting as a long-term storage reservoir. Vegetated coastal ecosystems Seaweed farming and algae Seaweed grows in shallow and coastal areas, and captures significant amounts of carbon that can be transported to the deep ocean by oceanic mechanisms; seaweed reaching the deep ocean sequesters carbon and prevents it from exchanging with the atmosphere over millennia. Growing seaweed offshore with the purpose of sinking the seaweed in the depths of the sea to sequester carbon has been suggested. In addition, seaweed grows very fast and can theoretically be harvested and processed to generate biomethane, via anaerobic digestion to generate electricity, via cogeneration/CHP or as a replacement for natural gas. One study suggested that if seaweed farms covered 9% of the ocean they could produce enough biomethane to supply Earth's equivalent demand for fossil fuel energy, remove 53 gigatonnes of CO2 per year from the atmosphere and sustainably produce 200 kg per year of fish, per person, for 10 billion people. Ideal species for such farming and conversion include Laminaria digitata, Fucus serratus and Saccharina latissima. Both macroalgae and microalgae are being investigated as possible means of carbon sequestration. Marine phytoplankton perform half of the global photosynthetic CO2 fixation (net global primary production of ~50 Pg C per year) and half of the oxygen production despite amounting to only ~1% of global plant biomass. Because algae lack the complex lignin associated with terrestrial plants, the carbon in algae is released into the atmosphere more rapidly than carbon captured on land. Algae have been proposed as a short-term storage pool of carbon that can be used as a feedstock for the production of various biogenic fuels. Large-scale seaweed farming could sequester significant amounts of carbon. Wild seaweed will sequester large amounts of carbon through dissolved particles of organic matter being transported to deep ocean seafloors, where it will become buried and remain for long periods of time. With respect to carbon farming, the potential growth of seaweed for carbon farming would see the harvested seaweed transported to the deep ocean for long-term burial. Seaweed farming occurs mostly in the Asian Pacific coastal areas, where it has been a rapidly increasing market. The IPCC Special Report on the Ocean and Cryosphere in a Changing Climate recommends "further research attention" on seaweed farming as a mitigation tactic. Ocean fertilization Artificial upwelling Artificial upwelling or downwelling is an approach that would change the mixing layers of the ocean. Encouraging various ocean layers to mix can move nutrients and dissolved gases around.
Mixing may be achieved by placing large vertical pipes in the oceans to pump nutrient rich water to the surface, triggering blooms of algae, which store carbon when they grow and export carbon when they die. This produces results somewhat similar to iron fertilization. One side-effect is a short-term rise in surface CO2, which limits its attractiveness. Mixing layers involve transporting the denser and colder deep ocean water to the surface mixed layer. As the ocean temperature decreases with depth, more carbon dioxide and other compounds are able to dissolve in the deeper layers. This can be induced by reversing the oceanic carbon cycle through the use of large vertical pipes serving as ocean pumps, or a mixer array. When the nutrient rich deep ocean water is moved to the surface, an algal bloom occurs, resulting in a decrease in carbon dioxide due to carbon intake from phytoplankton and other photosynthetic eukaryotic organisms. The transfer of heat between the layers will also cause seawater from the mixed layer to sink and absorb more carbon dioxide. This method has not gained much traction as algal blooms harm marine ecosystems by blocking sunlight and releasing harmful toxins into the ocean. The sudden increase in carbon dioxide at the surface level will also temporarily decrease the pH of the seawater, impairing the growth of coral reefs. The production of carbonic acid through the dissolution of carbon dioxide in seawater hinders marine biogenic calcification and causes major disruptions to the oceanic food chain. Basalt storage Carbon dioxide sequestration in basalt involves injecting CO2 into deep-sea basalt formations. The CO2 first mixes with seawater and then reacts with the basalt, both of which are rich in alkaline elements. This reaction results in the release of calcium and magnesium ions, forming stable carbonate minerals. Underwater basalt offers a good alternative to other forms of oceanic carbon storage because it has a number of trapping measures to ensure added protection against leakage. These measures include "geochemical, sediment, gravitational and hydrate formation." Because CO2 hydrate is denser than CO2 in seawater, the risk of leakage is minimal. Injecting the CO2 at sufficiently great depths ensures that it has a greater density than seawater, causing it to sink. One possible injection site is the Juan de Fuca Plate. Researchers at the Lamont–Doherty Earth Observatory found that this plate at the western coast of the United States has a possible storage capacity of 208 gigatons. This could cover the entire current U.S. carbon emissions for over 100 years (as of 2009). This process is undergoing tests as part of the CarbFix project, with 95% of the injected 250 tonnes of CO2 solidifying into calcite within two years, using 25 tonnes of water per tonne of CO2. Mineralization and deep sea sediments Similar to mineralization processes that take place within rocks, mineralization can also occur under the sea. The rate of dissolution of carbon dioxide from atmosphere to oceanic regions is determined by the circulation period of the ocean and buffering ability of subducting surface water. Researchers have demonstrated that marine storage of carbon dioxide at depths of several kilometres could be viable for up to 500 years, but this depends on the injection site and conditions. Several studies have shown that although it may fix carbon dioxide effectively, carbon dioxide may be released back to the atmosphere over time. However, this is unlikely for at least a few more centuries.
The neutralization of CaCO3, or balancing the concentration of CaCO3 on the seafloor, land and in the ocean, can be measured on a timescale of thousands of years. More specifically, the predicted time is 1,700 years for the ocean and approximately 5,000 to 6,000 years for land. Further, the dissolution time for CaCO3 can be improved by injecting near or downstream of the storage site. In addition to carbon mineralization, another proposal is deep sea sediment injection. It injects liquid carbon dioxide deep below the surface, directly into ocean sediments, to generate carbon dioxide hydrate. Two regions are defined for exploration: 1) the negative buoyancy zone (NBZ), the region between where liquid carbon dioxide is denser than the surrounding water and where it has neutral buoyancy, and 2) the hydrate formation zone (HFZ), which typically has low temperatures and high pressures. Several research models have shown that the optimal depth of injection requires consideration of intrinsic permeability and any changes in liquid carbon dioxide permeability. The formation of hydrates decreases liquid carbon dioxide permeability, and injection below the HFZ is more energetically favored than within the HFZ. If the NBZ is a greater column of water than the HFZ, the injection should happen below the HFZ and directly into the NBZ. In this case, liquid carbon dioxide will sink to the NBZ and be stored below the buoyancy and hydrate cap. Carbon dioxide leakage can occur if there is dissolution into pore fluid or via molecular diffusion. However, this occurs over thousands of years. Adding bases to neutralize acids Carbon dioxide forms carbonic acid when dissolved in water, so ocean acidification is a significant consequence of elevated carbon dioxide levels, and limits the rate at which it can be absorbed into the ocean (the solubility pump). A variety of different bases have been suggested that could neutralize the acid and thus increase absorption. For example, adding crushed limestone to oceans enhances the absorption of carbon dioxide. Another approach is to add sodium hydroxide, produced by electrolysis of salt water or brine, to oceans, while eliminating the waste hydrochloric acid by reaction with a volcanic silicate rock such as enstatite, effectively increasing the rate of natural weathering of these rocks to restore ocean pH. Single-step carbon sequestration and storage Single-step carbon sequestration and storage is a saline water-based mineralization technology extracting carbon dioxide from seawater and storing it in the form of solid minerals. Abandoned ideas Direct deep-sea carbon dioxide injection It was once suggested that CO2 could be stored in the oceans by direct injection into the deep ocean and storing it there for some centuries. At the time, this proposal was called "ocean storage", but more precisely it was known as "direct deep-sea carbon dioxide injection". However, interest in this avenue of carbon storage has greatly diminished since about 2001 because of concerns about the unknown impacts on marine life, high costs and concerns about its stability or permanence. The "IPCC Special Report on Carbon Dioxide Capture and Storage" in 2005 did include this technology as an option. However, the IPCC Fifth Assessment Report in 2014 no longer mentioned the term "ocean storage" in its report on climate change mitigation methods.
The most recent IPCC Sixth Assessment Report in 2022 also no longer includes any mention of "ocean storage" in its "Carbon Dioxide Removal taxonomy". Costs The cost of carbon sequestration (not including capture and transport) varies but is below US$10 per tonne in some cases where onshore storage is available. For example, Carbfix's cost is around US$25 per tonne of CO2. A 2020 report estimated sequestration in forests (so including capture) at US$35 per tonne for small quantities to US$280 per tonne for 10% of the total required to keep to 1.5 °C of warming. But there is a risk of forest fires releasing the carbon.
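As a back-of-the-envelope postscript to the mineral carbonation reactions described earlier (our own arithmetic, using rounded molar masses, not figures from the text), the forsterite reaction implies that roughly 1.6 tonnes of olivine are needed per tonne of CO2 mineralized:

# Approximate molar masses in g/mol (rounded)
M_CO2 = 44.01
M_FORSTERITE = 140.69   # Mg2SiO4

# Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2:
# one mole of forsterite binds two moles of CO2
tonnes_olivine_per_tonne_co2 = M_FORSTERITE / (2 * M_CO2)
print(f"~{tonnes_olivine_per_tonne_co2:.2f} t forsterite per t CO2")  # ~1.60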
Physical sciences
Earth science basics: General
Earth science
1157422
https://en.wikipedia.org/wiki/Virial%20expansion
Virial expansion
The virial expansion is a model of thermodynamic equations of state. It expresses the pressure p of a gas in local equilibrium as a power series of the density ρ. This equation may be represented in terms of the compressibility factor Z as Z ≡ p/(ρRT) = A + Bρ + Cρ² + ⋯. This equation was first proposed by Kamerlingh Onnes. The terms A, B, and C represent the virial coefficients. The leading coefficient A is defined as the constant value of 1, which ensures that the equation reduces to the ideal gas expression as the gas density approaches zero. Second and third virial coefficients The second, B, and third, C, virial coefficients have been studied extensively and tabulated for many fluids for more than a century. Two of the most extensive compilations are in the books by Dymond and the National Institute of Standards and Technology's Thermo Data Engine Database and its Web Thermo Tables. Tables of second and third virial coefficients of many fluids are included in these compilations. Casting equations of state into virial form Most equations of state can be reformulated and cast in virial equations to evaluate and compare their implicit second and third virial coefficients. The seminal van der Waals equation of state was proposed in 1873: p = RT/(Vm − b) − a/Vm², where Vm is molar volume. It can be rearranged by expanding 1/(Vm − b) into a Taylor series: Z = 1 + (b − a/RT)ρ + b²ρ² + b³ρ³ + ⋯. In the van der Waals equation, the second virial coefficient B = b − a/RT has roughly the correct behavior, as it decreases monotonically when the temperature is lowered. The third and higher virial coefficients (C = b², and so on) are independent of temperature, and are not correct, especially at low temperatures. Almost all subsequent equations of state derived from the van der Waals equation, such as those from Dieterici, Berthelot, Redlich-Kwong, and Peng-Robinson, suffer from the singularity introduced by 1/(V − b). Other equations of state, started by Beattie and Bridgeman, are more closely related to virial equations, and prove to be more accurate in representing the behavior of fluids in both gaseous and liquid phases. The Beattie-Bridgeman equation of state, proposed in 1928, can likewise be rearranged into virial form. The Benedict-Webb-Rubin equation of state of 1940 represents isotherms below the critical temperature better. More improvements were achieved by Starling in 1972. Plots of reduced second and third virial coefficients against reduced temperature can be drawn according to Starling. The exponential terms in the last two equations correct the third virial coefficient so that the isotherms in the liquid phase can be represented correctly. The exponential term converges rapidly as ρ increases, and if only the first two terms in its Taylor expansion series, 1 − γρ², are taken and multiplied with the accompanying density term, the result contributes one term to the third virial coefficient and one term to the eighth virial coefficient, which can be ignored. After the expansion of the exponential terms, the Benedict-Webb-Rubin and Starling equations of state take the form of virial equations. Cubic virial equation of state The three-term virial equation, or cubic virial equation of state, Z = 1 + Bρ + Cρ², has the simplicity of the Van der Waals equation of state without its singularity at V = b. Theoretically, the second virial coefficient represents bimolecular attraction forces, and the third virial term represents the repulsive forces among three molecules in close contact. With this cubic virial equation, the coefficients B and C can be solved in closed form. Imposing the critical conditions (∂p/∂V)T = 0 and (∂²p/∂V²)T = 0 at the critical point, the cubic virial equation can be solved to yield B = −Vc, C = Vc²/3, and Zc = pcVc/(RTc) = 1/3. Zc is therefore 0.333, compared to 0.375 from the Van der Waals equation.
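To make the van der Waals example concrete, the following Python sketch (illustrative only; the nitrogen-like constants are our own assumptions, not values from the text) compares the exact van der Waals compressibility factor with the three-term virial truncation Z ≈ 1 + (b − a/RT)ρ + b²ρ²:

R = 8.314                 # gas constant, J/(mol K)
a, b = 0.1370, 3.87e-5    # approximate van der Waals constants for N2 (SI units)

def z_vdw(T, rho):
    """Exact van der Waals compressibility factor, Z = p/(rho R T)."""
    return 1.0 / (1.0 - b * rho) - a * rho / (R * T)

def z_virial(T, rho):
    """Three-term virial truncation with B = b - a/(R T) and C = b**2."""
    B = b - a / (R * T)
    return 1.0 + B * rho + b**2 * rho**2

T, rho = 300.0, 100.0     # K, mol/m^3 (dilute gas)
print(z_vdw(T, rho), z_virial(T, rho))   # nearly identical at low density

At higher densities the truncation drifts away from the exact curve, which is the point of the discussion above: the virial form exposes the low-order coefficients while discarding the singular behavior near V = b.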
Between the critical point and the triple point is the saturation region of fluids. In this region, the gaseous phase coexists with the liquid phase under the saturation pressure ps and the saturation temperature Ts. Under the saturation pressure, the liquid phase has a molar volume of Vl, and the gaseous phase has a molar volume of Vg. The corresponding molar densities are ρl = 1/Vl and ρg = 1/Vg. These are the saturation properties needed to compute second and third virial coefficients. A valid equation of state must produce an isotherm which crosses the horizontal line of ps at Vl and Vg on the p–V plane. Under ps and Ts, gas is in equilibrium with liquid. This means that the pρT isotherm has three roots at ps. The cubic virial equation of state at Ts is ps = RTs(1/V + B/V² + C/V³). It can be rearranged as V³ − Vid V² − Vid B V − Vid C = 0, where the factor Vid = RTs/ps is the volume of saturated gas according to the ideal gas law and can be given this unique name. In the saturation region, the cubic equation has three roots, and can be written alternatively as (V − Vl)(V − Vm)(V − Vg) = 0, which can be expanded as V³ − (Vl + Vm + Vg)V² + (VlVm + VlVg + VmVg)V − VlVmVg = 0. Here Vm is the volume of an unstable state between Vl and Vg. The cubic equations are identical. Therefore, by matching the quadratic terms in these equations, Vm can be solved: Vm = Vid − Vl − Vg. From the linear terms, B can be solved: B = −(VlVm + VlVg + VmVg)/Vid. And from the constant terms, C can be solved: C = VlVmVg/Vid. Since ps, Vl and Vg have been tabulated for many fluids with Ts as a parameter, B and C can be computed in the saturation region of these fluids. The results are generally in agreement with those computed from Benedict-Webb-Rubin and Starling equations of state. A short numerical sketch of this procedure follows.
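A minimal sketch of the procedure just described, assuming made-up saturation values purely to show the shape of the computation (the numbers are not real fluid data):

R = 8.314  # gas constant, J/(mol K)

def virial_BC_from_saturation(p_s, T_s, V_l, V_g):
    """Solve the cubic virial equation's B and C from saturation data,
    using the root relations derived above."""
    V_id = R * T_s / p_s        # ideal-gas volume of the saturated gas
    V_m = V_id - V_l - V_g      # unstable middle root
    B = -(V_l * V_m + V_l * V_g + V_m * V_g) / V_id
    C = V_l * V_m * V_g / V_id
    return V_m, B, C

# hypothetical saturation state (illustrative numbers only)
V_m, B, C = virial_BC_from_saturation(p_s=1.0e5, T_s=300.0, V_l=1.0e-4, V_g=2.0e-2)
print(V_m, B, C)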
Physical sciences
Thermodynamics
Physics
1157601
https://en.wikipedia.org/wiki/Rake%20%28tool%29
Rake (tool)
A rake (Old English raca, cognate with Dutch hark, German Rechen, from the root meaning "to scrape together", "heap up") is a broom for outside use; a horticultural implement consisting of a toothed bar fixed transversely to a handle, or tines fixed to a handle, and used to collect leaves, hay, grass, etc., and in gardening, for loosening the soil, light weeding and levelling, removing dead grass from lawns, and generally for purposes performed in agriculture by the harrow. Large mechanized versions of rakes used in farming, called hay rakes, are built in many different forms (e.g. star-wheel rakes, rotary rakes, etc.). Nonmechanized farming may be done with various forms of a hand rake. Types Modern hand-rakes usually have steel, plastic, or bamboo teeth or tines, though historically they have been made with wood or iron. The handle is typically a haft made of wood, bamboo, steel or fiberglass. Plastic rakes are generally lighter weight and lower cost. Because they can be fabricated in greater widths, they are more suitable for leaves which have recently been deposited. Metal tined rakes are better suited for spring raking when the debris is often wet or rotted and can best be collected when the metal tines penetrate to the thatch layer. Leaf rakes, used to gather leaves, cut grass and debris, have long, flat teeth bent into an L-shape and fanned out from the point of attachment. This permits some flexibility to allow the teeth to conform to terrain, while also being light to minimize damage to vegetation. Compact, telescoping leaf rakes allow the teeth to be withdrawn by sliding a movable fixture point up the shaft. Garden rakes typically have steel teeth and are intended for heavier use in soil and larger debris. They have long, stiff teeth which must be able to withstand abrasion and bending forces. Bow rakes are a subset of garden rakes which separate the handle and bar with a bow-shaped extension which allows the flat back of the bar to be used for levelling and scraping. These may have somewhat finer and shorter teeth and can perform a myriad of gardening and landscape tasks, and due to their more costly construction are likely to be a professional tool. Alternatively, a second set of differently shaped teeth may be added to the back of the bar. A landscaping rake resembles an oversized garden rake, with a longer head. A landscaping rake serves to smooth and grade extensive soil or earth areas. It distinguishes itself from traditional leaf rakes or soil clod breakers due to its substantial width. Typically, a landscaping rake boasts a head measuring 30 to 38 inches or even broader, featuring steel tines set at a 90-degree angle to the handle. A stone rake is similar to a landscape rake, but with a narrower head of about 18 to 28 inches and is constructed from steel or aluminum. The head sits at a 90-degree angle to the handle. A thatch rake's primary function is to eliminate thatch, an organic layer situated between the lawn and the soil surface. Diverging from the typical structure of rakes, a thatch rake is equipped with sharp blades on both sides of its head. One side effectively breaks up the thatch, while the other side facilitates its removal. When left unaddressed, a dense thatch layer can impede the penetration of air and sunlight to the base of grass blades, potentially leading to lawn diseases.
The removal of a substantial thatch layer, particularly if it measures 1/4-inch thick or thicker, proves beneficial for enhancing the overall health and vitality of the lawn. A reliable thatch rake stands out as an indispensable tool for effectively performing this task. A heavy rake is for conditioning and dethatching soil as well as moving larger pieces of debris. Most weeds have weaker and shallower roots than grass, so dethatching, followed by the necessary sunlight, fertilizer and seed, and later any necessary remedial chemicals, makes for a good crop of grass. Larger tools (or lawnmower attachments) are more often used for large areas of de-thatching or soil preparation. A concrete rake is a heavy-duty tool with a flat edge for spreading and smoothing wet concrete and a curved side for scooping. Made of durable materials, it is essential for leveling concrete surfaces in construction. A roof rake features a long handle and a flat, scoop-like head for removing debris or snow from rooftops, preventing gutter clogs and structural damage, especially in snowy regions. Berry-picking rakes are tools for collecting berries. Fire rakes, a heavy-duty variant of the normal rake, are used for fire prevention. Cultural associations If a rake lies on the ground with the teeth facing upwards and someone accidentally steps on the teeth, the rake's handle can swing rapidly upwards, colliding with the victim's face. This is often seen in slapstick comedy and cartoons, such as Tom and Jerry and The Simpsons episode "Cape Feare", wherein a series of rakes become what Sideshow Bob describes as his "arch-nemesis". There is a Russian saying "to step on the same rake", which means "to repeat the same silly mistake"; the word "rake" in Russian slang also means "troubles". In the 16th century Chinese novel Journey to the West, the major character Zhu Bajie wields a rake, his "Nine-toothed rake" (Jiǔchǐdīngpá), as his signatory weapon. In Japanese folklore, the Kumade (熊手, lit. 'bear hand') is a rake; a smaller, handheld, decorated version is sold as an engimono, often during Tori-no-Ichi (lit. "Rooster markets") which take place throughout Japan each November, and is believed to be able to, literally, rake in good fortune or rake out bad fortune for the user.
Technology
Agricultural tools
null
1157828
https://en.wikipedia.org/wiki/ROSAT
ROSAT
ROSAT (short for Röntgensatellit; in German X-rays are called Röntgenstrahlen, in honour of Wilhelm Röntgen) was a German Aerospace Center-led satellite X-ray telescope, with instruments built by West Germany, the United Kingdom and the United States. It was launched on 1 June 1990, on a Delta II rocket from Cape Canaveral, on what was initially designed as an 18-month mission, with provision for up to five years of operation. ROSAT operated for over eight years, finally shutting down on 12 February 1999. In February 2011, it was reported that the satellite was unlikely to burn up entirely while re-entering the Earth's atmosphere due to the large amount of ceramics and glass used in construction. Heavy parts could survive re-entry and impact the surface. ROSAT eventually re-entered the Earth's atmosphere on 23 October 2011 over the Bay of Bengal. Overview The Roentgensatellit (ROSAT) was a joint German, U.S. and British X-ray astrophysics project. ROSAT carried a German-built imaging X-ray Telescope (XRT) with three focal plane instruments: two German Position Sensitive Proportional Counters (PSPC) and the US-supplied High Resolution Imager (HRI). The X-ray mirror assembly was a grazing incidence four-fold nested Wolter I telescope with an 84-cm diameter aperture and 240-cm focal length. The angular resolution was less than 5 arcseconds at half-energy width (the angle within which half of the electromagnetic radiation is focused). The XRT assembly was sensitive to X-rays between 0.1 and 2 keV (kiloelectronvolts). In addition, a British-supplied extreme ultraviolet (XUV) telescope, the Wide Field Camera (WFC), was coaligned with the XRT and covered the energy band from 0.042 to 0.21 keV (30 to 6 nm). ROSAT's unique strengths were high spatial resolution, low-background, soft X-ray imaging for the study of the structure of low surface brightness features, and for low-resolution spectroscopy. The ROSAT spacecraft was a three-axis stabilized satellite which could be used for pointed observations, for slewing between targets, and for performing scanning observations on great circles perpendicular to the plane of the ecliptic. ROSAT was capable of fast slews (180 deg. in ~15 min.), which made it possible to observe two targets on opposite hemispheres during each orbit. The pointing accuracy was 1 arcminute, with stability less than 5 arcsec per sec and a jitter radius of ~10 arcsec. Two CCD star sensors were used for optical position sensing of guide stars and attitude determination of the spacecraft. The post facto attitude determination accuracy was 6 arcsec. The ROSAT mission was divided into two phases: After a two-month on-orbit calibration and verification period, an all-sky survey was performed for six months using the PSPC in the focus of the XRT, and in two XUV bands using the WFC. The survey was carried out in the scan mode. The second phase consisted of the remainder of the mission and was devoted to pointed observations of selected astrophysical sources. In ROSAT's pointed phase, observing time was allocated to Guest Investigators from all three participating countries through peer review of submitted proposals. ROSAT had a design life of 18 months, but was expected to operate beyond its nominal lifetime. Instruments X-ray Telescope (XRT) The main assembly was a German-built imaging X-ray Telescope (XRT) with three focal plane instruments: two German Position Sensitive Proportional Counters (PSPC) and the US-supplied High Resolution Imager (HRI).
The X-ray mirror assembly was a grazing incidence four-fold nested Wolter I telescope with an 84-cm diameter aperture and 240-cm focal length. The angular resolution was less than 5 arcsec at half-energy width. The XRT assembly was sensitive to X-rays between 0.1 and 2 keV. Position Sensitive Proportional Counters (two) (PSPC) There were two Position Sensitive Proportional Counters (PSPC), PSPC-B and PSPC-C, mounted on a carousel within the focal plane turret of ROSAT. PSPC-C was intended to be the primary detector for the mission and was used for the bulk of the All-Sky Survey until it was destroyed during the AMCS glitch on 25 January 1991. After the glitch, PSPC-B was used for all further observations. Two more PSPCs (PSPC-A and PSPC-D) were mounted on ROSAT for ground calibration. Each PSPC was a thin-window gas counter. Each incoming X-ray photon produced an electron cloud whose position and charge were detected using two wire grids. The photon position was determined with an accuracy of about 120 micrometers. The electron cloud's charge corresponded to the photon energy, with a nominal spectral bandpass of 0.1-2.4 keV. High Resolution Imager (HRI) The US-supplied High Resolution Imager used a crossed grid detector with a position accuracy of 25 micrometers. The instrument was damaged by solar exposure on 20 September 1998. Wide Field Camera (WFC) The Wide Field Camera (WFC) was a UK-supplied extreme ultraviolet (XUV) telescope co-aligned with the XRT, covering the wave band between 300 and 60 angstroms (0.042 to 0.21 keV). Highlights X-ray all-sky survey catalog, more than 150,000 objects XUV all-sky survey catalog (479 objects) Source catalogs from the pointed phase (PSPC and HRI) containing ~ 100,000 serendipitous sources Detailed morphology of supernova remnants and clusters of galaxies. Detection of shadowing of diffuse X-ray emission by molecular clouds. Detection of pulsations from Geminga. Detection of isolated neutron stars. Discovery of X-ray emission from comets. Observation of X-ray emission from the collision of Comet Shoemaker-Levy with Jupiter. Catalogues 1RXS – an acronym which is the prefix used for the First ROSAT X-ray Survey (1st ROSAT X-ray Survey), a catalogue of astronomical objects visible to ROSAT in the X-ray spectrum.
Technology
Space-based observatories
null
1157853
https://en.wikipedia.org/wiki/Bellis%20perennis
Bellis perennis
Bellis perennis, the daisy, is a European species of the family Asteraceae, often considered the archetypal species of the name daisy. To distinguish this species from other plants known as daisies, it is sometimes qualified or known as common daisy, lawn daisy or English daisy. Description Bellis perennis is a low-growing perennial herbaceous plant. It has short creeping rhizomes and rosettes of small rounded or spoon-shaped leaves that grow flat to the ground. The species habitually colonises lawns, and is difficult to eradicate by mowing, hence the term 'lawn daisy'. It blooms from March to September and exhibits the phenomenon of heliotropism, in which the flowers follow the position of the sun in the sky. The flowerheads are composite, in the form of a pseudanthium, consisting of many sessile flowers with white ray florets (often tipped red) and yellow disc florets. Each inflorescence is borne on a single leafless stem. The capitulum, or disc of florets, is surrounded by two rows of green bracts known as "phyllaries". The achenes are without pappus. Etymology Bellis may come from bellus, Latin for "pretty", and perennis is Latin for "everlasting". The name "daisy", possibly originating with this plant, is considered a corruption of "day's eye", because the whole head closes at night and opens in the morning. Geoffrey Chaucer called it "eye of the day". In medieval times, Bellis perennis or the English daisy was commonly known as "Mary's Rose". Historically, the plant has also been widely known as bruisewort, and occasionally woundwort (although this name is now more closely associated with the genus Stachys). It is also known as bone flower. Distribution and habitat Bellis perennis is native to western, central and northern Europe, including remote islands such as the Faroe Islands, but has become widely naturalised in most temperate regions, including the Americas and Australasia. It prefers field-like habitats. Cultivation The species generally blooms from early to midsummer, although when grown under ideal conditions, it has a very long flowering season and will even produce a few flowers in the middle of mild winters. It can generally be grown in full sun to partial shade conditions, and requires little or no maintenance. It has no known serious insect or disease problems and can generally be grown in most well-drained soils. The plant may be propagated either by seed after the last frost, or by division after flowering. Though not native to the United States, the species is still considered a valuable ground cover in certain garden settings (e.g., as part of English or cottage inspired gardens, as well as spring meadows where low growth and some colour is desired in parallel with minimal care and maintenance while helping to crowd out noxious weeds once established and naturalised). Numerous single- and double-flowered varieties are in cultivation, producing flat or spherical blooms in a range of sizes and colours (red, pink and white). They are generally grown from seed as biennial bedding plants. They can also be purchased as plugs in spring. It has been reported to be mostly self-fertilizing, but some plants may be self-sterile. Uses Bellis perennis may be used as a potherb. Young leaves can be eaten raw in salads, or cooked, though the leaves become increasingly astringent with age. Flower buds and petals can be eaten raw in sandwiches, soups and salads.
It is also used as a tea and as a vitamin supplement. B. perennis has astringent properties and has been used in herbal medicine. Daisies have traditionally been used for making daisy chains in children's games.
Culture
Daisy is used as a feminine name, and sometimes as a nickname for people named Margaret, after the French name for the oxeye daisy, marguerite. The daisy is the national flower of the Netherlands.
Biology and health sciences
Asterales
null
1158125
https://en.wikipedia.org/wiki/DNA%20sequencing
DNA sequencing
DNA sequencing is the process of determining the nucleic acid sequence – the order of nucleotides in DNA. It includes any method or technology that is used to determine the order of the four bases: adenine, guanine, cytosine, and thymine. The advent of rapid DNA sequencing methods has greatly accelerated biological and medical research and discovery.
Knowledge of DNA sequences has become indispensable for basic biological research, DNA Genographic Projects, and numerous applied fields such as medical diagnosis, biotechnology, forensic biology, virology and biological systematics. Comparing healthy and mutated DNA sequences can help diagnose diseases, including various cancers, characterize the antibody repertoire, and guide patient treatment. Having a quick way to sequence DNA allows for faster and more individualized medical care to be administered, and for more organisms to be identified and cataloged. The rapid advancements in DNA sequencing technology have played a crucial role in sequencing complete genomes of various life forms, including humans, as well as numerous animal, plant, and microbial species.
The first DNA sequences were obtained in the early 1970s by academic researchers using laborious methods based on two-dimensional chromatography. Following the development of fluorescence-based sequencing methods with a DNA sequencer, DNA sequencing has become easier and orders of magnitude faster.
Applications
DNA sequencing may be used to determine the sequence of individual genes, larger genetic regions (i.e. clusters of genes or operons), full chromosomes, or entire genomes of any organism. DNA sequencing is also the most efficient way to indirectly sequence RNA or proteins (via their open reading frames). In fact, DNA sequencing has become a key technology in many areas of biology and other sciences such as medicine, forensics, and anthropology.
Molecular biology
Sequencing is used in molecular biology to study genomes and the proteins they encode. Information obtained using sequencing allows researchers to identify changes in genes and noncoding DNA (including regulatory sequences), associations with diseases and phenotypes, and potential drug targets.
Evolutionary biology
Since DNA is an informative macromolecule in terms of transmission from one generation to another, DNA sequencing is used in evolutionary biology to study how different organisms are related and how they evolved. In February 2021, scientists reported, for the first time, the sequencing of DNA from animal remains more than a million years old – a mammoth, in this instance – the oldest DNA sequenced to date.
Metagenomics
The field of metagenomics involves identification of organisms present in a body of water, sewage, dirt, debris filtered from the air, or swab samples from organisms. Knowing which organisms are present in a particular environment is critical to research in ecology, epidemiology, microbiology, and other fields. Sequencing enables researchers to determine which types of microbes may be present in a microbiome, for example.
Virology
As most viruses are too small to be seen by a light microscope, sequencing is one of the main tools in virology to identify and study the virus. Viral genomes can be based in DNA or RNA. RNA viruses are more time-sensitive for genome sequencing, as they degrade faster in clinical samples.
Traditional Sanger sequencing and next-generation sequencing are used to sequence viruses in basic and clinical research, as well as for the diagnosis of emerging viral infections, molecular epidemiology of viral pathogens, and drug-resistance testing. There are more than 2.3 million unique viral sequences in GenBank. Recently, NGS has surpassed traditional Sanger sequencing as the most popular approach for generating viral genomes. During the 1997 avian influenza outbreak, viral sequencing determined that the influenza sub-type originated through reassortment between quail and poultry. This led to legislation in Hong Kong that prohibited selling live quail and poultry together at market. Viral sequencing can also be used to estimate when a viral outbreak began, by using a molecular clock technique.
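The molecular clock idea can be illustrated with back-of-the-envelope arithmetic: if sequences accumulate substitutions at a roughly constant rate, the number of differences between two genomes estimates the time since their common ancestor. A minimal Python sketch under that assumption follows; the rate and counts are made-up illustrative values, not data from any real outbreak.
# Toy molecular-clock estimate: time since divergence ~ differences /
# (substitution rate x genome length x 2 lineages). Values below are
# illustrative assumptions, not measurements.
def divergence_time_years(differences: int, genome_length: int,
                          rate_per_site_per_year: float) -> float:
    # Both lineages accumulate substitutions, hence the factor of 2.
    return differences / (2 * rate_per_site_per_year * genome_length)

# e.g. 30 differences between two 30,000-base viral genomes at an
# assumed 1e-3 substitutions/site/year -> ~0.5 years since divergence.
print(divergence_time_years(30, 30_000, 1e-3))  # ~0.5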
Medicine
Medical technicians may sequence genes (or, theoretically, full genomes) from patients to determine whether there is a risk of genetic disease. This is a form of genetic testing, though some genetic tests may not involve DNA sequencing. As of 2013, DNA sequencing was increasingly used to diagnose and treat rare diseases. As more and more genes that cause rare genetic diseases are identified, molecular diagnoses for patients become more mainstream. DNA sequencing allows clinicians to identify genetic diseases, improve disease management, provide reproductive counseling, and deliver more effective therapies. Gene sequencing panels are used to identify multiple potential genetic causes of a suspected disorder. DNA sequencing may also be useful for identifying a specific bacterium, allowing more precise antibiotic treatment and thereby reducing the risk of creating antimicrobial resistance in bacterial populations.
Forensic investigation
DNA sequencing may be used along with DNA profiling methods for forensic identification and paternity testing, and DNA testing has evolved significantly over the past few decades to the point where a DNA print can be linked to what is under investigation. DNA patterns in fingerprints, saliva, hair follicles and other bodily material uniquely separate each living organism from another, making DNA an invaluable tool in forensic science. DNA testing detects specific genomic regions in a DNA strand to produce a unique, individualized pattern that can be used to identify individuals or determine their relationships. Advances in DNA sequencing technology have made it possible to analyze and compare large amounts of genetic data quickly and accurately, allowing investigators to gather evidence and solve crimes more efficiently. The technology is used for forensic identification, paternity testing, and human identification in cases where traditional identification methods are unavailable or unreliable, and it has led to new forensic techniques such as DNA phenotyping, which allows investigators to predict an individual's physical characteristics from genetic data.
Beyond forensic science, DNA sequencing is used in medical research and diagnosis, where it enables scientists to identify genetic mutations and variations associated with particular diseases and disorders, allowing more accurate diagnoses and targeted treatments; it is also used in conservation biology to study the genetic diversity of endangered species and to develop strategies for their conservation. The technology has likewise raised ethical and legal considerations, including concerns about the privacy and security of genetic data and the potential for misuse of, or discrimination based on, genetic information, prompting ongoing debate about regulations and guidelines to ensure its responsible use.
The four canonical bases
The canonical structure of DNA has four bases: thymine (T), adenine (A), cytosine (C), and guanine (G). DNA sequencing is the determination of the physical order of these bases in a molecule of DNA. However, other bases may be present in a molecule. In some viruses (specifically, bacteriophages), cytosine may be replaced by 5-hydroxymethylcytosine or glucosylated 5-hydroxymethylcytosine. In mammalian DNA, variant bases bearing methyl groups may be found, and some bacterial genomes carry phosphorothioate modifications of the backbone. Depending on the sequencing technique, a particular modification, e.g., the 5mC (5-methylcytosine) common in humans, may or may not be detected. In almost all organisms, DNA is synthesized in vivo using only the four canonical bases; modifications that occur after replication create other bases such as 5-methylcytosine. However, some bacteriophages can incorporate a non-standard base directly. In addition to modifications, DNA is under constant assault by environmental agents such as UV light and oxygen radicals. At present, such damaged bases are not detected by most DNA sequencing methods, although PacBio has published work on detecting them.
History
Discovery of DNA structure and function
Deoxyribonucleic acid (DNA) was first discovered and isolated by Friedrich Miescher in 1869, but it remained under-studied for many decades because proteins, rather than DNA, were thought to hold the genetic blueprint to life. This situation changed after 1944 as a result of experiments by Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrating that purified DNA could change one strain of bacteria into another. This was the first time that DNA was shown capable of transforming the properties of cells.
In 1953, James Watson and Francis Crick put forward their double-helix model of DNA, based on X-ray diffraction data collected by Rosalind Franklin. According to the model, DNA is composed of two strands of nucleotides coiled around each other, linked together by hydrogen bonds and running in opposite directions. Each strand is composed of four complementary nucleotides – adenine (A), cytosine (C), guanine (G) and thymine (T) – with an A on one strand always paired with T on the other, and C always paired with G. They proposed that such a structure allowed each strand to be used to reconstruct the other, an idea central to the passing on of hereditary information between generations.
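Because the two strands are complementary, either strand determines the other. A minimal Python sketch of this reconstruction (reading one strand and producing its reverse complement) follows; the function name and example sequence are illustrative only.
# Reconstruct one DNA strand from its complement: a toy illustration
# of Watson-Crick base pairing.
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(strand: str) -> str:
    """Return the sequence of the antiparallel partner strand, 5'->3'."""
    return "".join(COMPLEMENT[base] for base in reversed(strand))

print(reverse_complement("ATGCGT"))  # -> "ACGCAT"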
The foundation for sequencing proteins was first laid by the work of Frederick Sanger, who by 1955 had completed the sequence of all the amino acids in insulin, a small protein secreted by the pancreas. This provided the first conclusive evidence that proteins were chemical entities with a specific molecular pattern, rather than a random mixture of material suspended in fluid. Sanger's success in sequencing insulin spurred on x-ray crystallographers, including Watson and Crick, who by now were trying to understand how DNA directed the formation of proteins within a cell. Soon after attending a series of lectures given by Frederick Sanger in October 1954, Crick began developing a theory which argued that the arrangement of nucleotides in DNA determined the sequence of amino acids in proteins, which in turn helped determine the function of a protein. He published this theory in 1958.
RNA sequencing
RNA sequencing was one of the earliest forms of nucleotide sequencing. The major landmarks of RNA sequencing are the sequence of the first complete gene and of the complete genome of bacteriophage MS2, identified and published by Walter Fiers and his coworkers at the University of Ghent (Ghent, Belgium) in 1972 and 1976, respectively. Traditional RNA sequencing methods require the creation of a cDNA molecule, which must then be sequenced, and involve several steps:
1) Reverse transcription: the RNA molecule is converted into a complementary DNA (cDNA) molecule using the enzyme reverse transcriptase.
2) Amplification: the cDNA is amplified by the polymerase chain reaction (PCR) to produce multiple copies.
3) Sequencing: the amplified cDNA is sequenced using a technique such as Sanger sequencing or Maxam–Gilbert sequencing.
Traditional RNA sequencing methods have several limitations: they require the creation of a cDNA molecule, which can be time-consuming and labor-intensive; they are prone to errors and biases, which can affect the accuracy of the results; and they are limited in their ability to detect rare or low-abundance transcripts. In recent years, advances such as next-generation sequencing (NGS) and single-molecule real-time (SMRT) sequencing have addressed some of these limitations, enabling faster, more accurate, and more cost-effective sequencing of RNA molecules. These advances have opened up new possibilities for studying gene expression, identifying new genes, and understanding the regulation of gene expression.
Early DNA sequencing methods
The first method for determining DNA sequences involved a location-specific primer extension strategy established by Ray Wu, a geneticist at Cornell University, in 1970. DNA polymerase catalysis and specific nucleotide labeling, both of which figure prominently in current sequencing schemes, were used to sequence the cohesive ends of lambda phage DNA. Between 1970 and 1973, Wu, scientist Radha Padmanabhan and colleagues demonstrated that this method can be employed to determine any DNA sequence using synthetic location-specific primers. Walter Gilbert and Allan Maxam at Harvard also developed sequencing methods, including one for "DNA sequencing by chemical degradation". In 1973, Gilbert and Maxam reported the sequence of 24 base pairs using a method known as wandering-spot analysis.
Advancements in sequencing were aided by the concurrent development of recombinant DNA technology, allowing DNA samples to be isolated from sources other than viruses. In 1977, Frederick Sanger adopted a primer-extension strategy to develop more rapid DNA sequencing methods at the MRC Centre, Cambridge, UK. The technique was related to his earlier "plus and minus" strategy, but was based upon the selective incorporation of chain-terminating dideoxynucleotides (ddNTPs) by DNA polymerase during in vitro DNA replication. Sanger published this method in the same year.
Sequencing of full genomes
The first full DNA genome to be sequenced was that of bacteriophage φX174 in 1977. Medical Research Council scientists deciphered the complete DNA sequence of the Epstein–Barr virus in 1984, finding it contained 172,282 nucleotides. Completion of the sequence marked a significant turning point in DNA sequencing because it was achieved with no prior knowledge of the genetic profile of the virus. A non-radioactive method for transferring the DNA molecules of sequencing reaction mixtures onto an immobilizing matrix during electrophoresis was developed by Herbert Pohl and co-workers in the early 1980s. This was followed by the commercialization of the DNA sequencer "Direct-Blotting-Electrophoresis-System GATC 1500" by GATC Biotech, which was used intensively in the framework of the EU genome-sequencing programme to determine the complete DNA sequence of yeast Saccharomyces cerevisiae chromosome II. Leroy E. Hood's laboratory at the California Institute of Technology announced the first semi-automated DNA sequencing machine in 1986. This was followed by Applied Biosystems' marketing of the first fully automated sequencing machine, the ABI 370, in 1987, and by DuPont's Genesis 2000, which used a novel fluorescent labeling technique enabling all four dideoxynucleotides to be identified in a single lane. By 1990, the U.S. National Institutes of Health (NIH) had begun large-scale sequencing trials on Mycoplasma capricolum, Escherichia coli, Caenorhabditis elegans, and Saccharomyces cerevisiae at a cost of US$0.75 per base. Meanwhile, sequencing of human cDNA sequences called expressed sequence tags began in Craig Venter's lab, an attempt to capture the coding fraction of the human genome. In 1995, Venter, Hamilton Smith, and colleagues at The Institute for Genomic Research (TIGR) published the first complete genome of a free-living organism, the bacterium Haemophilus influenzae. The circular chromosome contains 1,830,137 bases, and its publication in the journal Science marked the first published use of whole-genome shotgun sequencing, eliminating the need for initial mapping efforts. By 2001, shotgun sequencing methods had been used to produce a draft sequence of the human genome.
High-throughput sequencing (HTS) methods
Several new methods for DNA sequencing were developed in the mid to late 1990s and were implemented in commercial DNA sequencers by 2000. Together these were called the "next-generation" or "second-generation" sequencing (NGS) methods, to distinguish them from earlier methods, including Sanger sequencing. In contrast to the first generation of sequencing, NGS technology is typically characterized by being highly scalable, allowing the entire genome to be sequenced at once. Usually, this is accomplished by fragmenting the genome into small pieces, randomly sampling fragments, and sequencing them using one of a variety of technologies, such as those described below.
An entire genome is possible because multiple fragments are sequenced at once (giving the approach the name "massively parallel" sequencing) in an automated process. NGS technology has tremendously empowered researchers to look for insights into health, has allowed anthropologists to investigate human origins, and is catalyzing the "personalized medicine" movement. However, it has also opened the door to more room for error. There are many software tools to carry out the computational analysis of NGS data, often compiled at online platforms such as CSI NGS Portal, each with its own algorithm. Even the parameters within one software package can change the outcome of the analysis. In addition, the large quantities of data produced by DNA sequencing have required the development of new methods and programs for sequence analysis. Several efforts to develop standards in the NGS field have been attempted to address these challenges, most of which have been small-scale efforts arising from individual labs. Most recently, a large, organized, FDA-funded effort has culminated in the BioCompute standard.
On 26 October 1990, Roger Tsien, Pepi Ross, Margaret Fahnestock and Allan J. Johnston filed a patent describing stepwise ("base-by-base") sequencing with removable 3' blockers on DNA arrays (blots and single DNA molecules). In 1996, Pål Nyrén and his student Mostafa Ronaghi at the Royal Institute of Technology in Stockholm published their method of pyrosequencing. On 1 April 1997, Pascal Mayer and Laurent Farinelli submitted patents to the World Intellectual Property Organization describing DNA colony sequencing. The DNA sample preparation and random surface-polymerase chain reaction (PCR) arraying methods described in this patent, coupled to Roger Tsien et al.'s "base-by-base" sequencing method, are now implemented in Illumina's HiSeq genome sequencers.
In 1998, Phil Green and Brent Ewing of the University of Washington described their phred quality score for sequencer data analysis, a landmark analysis technique that gained widespread adoption and remains the most common metric for assessing the accuracy of a sequencing platform; the score's definition is illustrated below. Lynx Therapeutics published and marketed massively parallel signature sequencing (MPSS) in 2000. This method incorporated a parallelized, adapter/ligation-mediated, bead-based sequencing technology and served as the first commercially available "next-generation" sequencing method, though no DNA sequencers were sold to independent laboratories.
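The phred score Q relates to the probability p that a base call is wrong by Q = -10 log10(p), and FASTQ files commonly store the score as the ASCII character with code Q + 33 (the widespread "Phred+33" convention). The short Python sketch below, written here purely for illustration, converts between the two representations.
import math

# Phred quality: Q = -10 * log10(p), where p is the probability that
# the base call is wrong. FASTQ commonly stores Q as chr(Q + 33).
def error_prob_to_phred(p: float) -> int:
    return round(-10 * math.log10(p))

def decode_fastq_quality(quality_string: str) -> list[int]:
    """Decode a Phred+33 quality string into integer scores."""
    return [ord(ch) - 33 for ch in quality_string]

print(error_prob_to_phred(0.001))      # 30: 1 error in 1,000 calls
print(decode_fastq_quality("II?5"))    # [40, 40, 30, 20]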
The fragments in the four reactions are electrophoresed side by side in denaturing acrylamide gels for size separation. To visualize the fragments, the gel is exposed to X-ray film for autoradiography, yielding a series of dark bands, each corresponding to a radiolabeled DNA fragment, from which the sequence may be inferred. This method is mostly obsolete as of 2023.
Chain-termination methods
The chain-termination method developed by Frederick Sanger and coworkers in 1977 soon became the method of choice, owing to its relative ease and reliability. When invented, the chain-terminator method used fewer toxic chemicals and lower amounts of radioactivity than the Maxam and Gilbert method. Because of its comparative ease, the Sanger method was soon automated and was the method used in the first generation of DNA sequencers. Sanger sequencing is the method which prevailed from the 1980s until the mid-2000s. Over that period, great advances were made in the technique, such as fluorescent labelling, capillary electrophoresis, and general automation. These developments allowed much more efficient sequencing, leading to lower costs. The Sanger method, in mass production form, is the technology which produced the first human genome in 2001, ushering in the age of genomics. However, later in the decade, radically different approaches reached the market, bringing the cost per genome down from $100 million in 2001 to $10,000 in 2011.
Sequencing by synthesis
The objective of sequential sequencing by synthesis (SBS) is to determine the sequence of a DNA sample by detecting the incorporation of nucleotides by a DNA polymerase. An engineered polymerase is used to synthesize a copy of a single strand of DNA, and the incorporation of each nucleotide is monitored. The principle of real-time sequencing by synthesis was first described in 1993, with improvements published some years later. The key parts are highly similar for all embodiments of SBS and include (1) amplification of DNA (to enhance the subsequent signal) and attachment of the DNA to be sequenced to a solid support, (2) generation of single-stranded DNA on the solid support, (3) incorporation of nucleotides using an engineered polymerase, and (4) real-time detection of the incorporation of nucleotides. Steps 3–4 are repeated and the sequence is assembled from the signals obtained in step 4. This principle of real-time sequencing-by-synthesis has been used for almost all massively parallel sequencing instruments, including those of 454, PacBio, Ion Torrent, Illumina and MGI.
Large-scale sequencing and de novo sequencing
Large-scale sequencing often aims at sequencing very long DNA pieces, such as whole chromosomes, although large-scale sequencing can also be used to generate very large numbers of short sequences, such as found in phage display. For longer targets such as chromosomes, common approaches consist of cutting (with restriction enzymes) or shearing (with mechanical forces) large DNA fragments into shorter DNA fragments. The fragmented DNA may then be cloned into a DNA vector and amplified in a bacterial host such as Escherichia coli. Short DNA fragments purified from individual bacterial colonies are individually sequenced and assembled electronically into one long, contiguous sequence. Studies have shown that adding a size-selection step to collect DNA fragments of uniform size can improve sequencing efficiency and the accuracy of the genome assembly. In these studies, automated sizing has proven to be more reproducible and precise than manual gel sizing.
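Electronic assembly of overlapping fragments into a contiguous sequence can be illustrated with a deliberately simplified greedy merge: repeatedly join the pair of reads with the largest suffix-prefix overlap. The Python toy below is a sketch of that idea only; real assemblers must additionally handle sequencing errors, repeats, and reverse-complement reads, none of which this sketch attempts.
# Toy greedy assembly: repeatedly merge the two reads with the longest
# exact suffix-prefix overlap. Illustrative only.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def assemble(reads: list[str]) -> str:
    reads = reads[:]
    while len(reads) > 1:
        best = max(((overlap(a, b), a, b)
                    for a in reads for b in reads if a is not b),
                   key=lambda t: t[0])
        n, a, b = best
        reads.remove(a)
        reads.remove(b)
        reads.append(a + b[n:])  # merge the pair along the overlap
    return reads[0]

print(assemble(["GGCTA", "CTAAGT", "AGTCC"]))  # -> "GGCTAAGTCC"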
The term "de novo sequencing" specifically refers to methods used to determine the sequence of DNA with no previously known sequence. De novo translates from Latin as "from the beginning". Gaps in the assembled sequence may be filled by primer walking. The different strategies have different tradeoffs in speed and accuracy; shotgun methods are often used for sequencing large genomes, but its assembly is complex and difficult, particularly with sequence repeats often causing gaps in genome assembly. Most sequencing approaches use an in vitro cloning step to amplify individual DNA molecules, because their molecular detection methods are not sensitive enough for single molecule sequencing. Emulsion PCR isolates individual DNA molecules along with primer-coated beads in aqueous droplets within an oil phase. A polymerase chain reaction (PCR) then coats each bead with clonal copies of the DNA molecule followed by immobilization for later sequencing. Emulsion PCR is used in the methods developed by Marguilis et al. (commercialized by 454 Life Sciences), Shendure and Porreca et al. (also known as "polony sequencing") and SOLiD sequencing, (developed by Agencourt, later Applied Biosystems, now Life Technologies). Emulsion PCR is also used in the GemCode and Chromium platforms developed by 10x Genomics. Shotgun sequencing Shotgun sequencing is a sequencing method designed for analysis of DNA sequences longer than 1000 base pairs, up to and including entire chromosomes. This method requires the target DNA to be broken into random fragments. After sequencing individual fragments using the chain termination method, the sequences can be reassembled on the basis of their overlapping regions. High-throughput methods High-throughput sequencing, which includes next-generation "short-read" and third-generation "long-read" sequencing methods, applies to exome sequencing, genome sequencing, genome resequencing, transcriptome profiling (RNA-Seq), DNA-protein interactions (ChIP-sequencing), and epigenome characterization. The high demand for low-cost sequencing has driven the development of high-throughput sequencing technologies that parallelize the sequencing process, producing thousands or millions of sequences concurrently. High-throughput sequencing technologies are intended to lower the cost of DNA sequencing beyond what is possible with standard dye-terminator methods. In ultra-high-throughput sequencing as many as 500,000 sequencing-by-synthesis operations may be run in parallel. Such technologies led to the ability to sequence an entire human genome in as little as one day. , corporate leaders in the development of high-throughput sequencing products included Illumina, Qiagen and ThermoFisher Scientific. Long-read sequencing methods Single molecule real time (SMRT) sequencing SMRT sequencing is based on the sequencing by synthesis approach. The DNA is synthesized in zero-mode wave-guides (ZMWs) – small well-like containers with the capturing tools located at the bottom of the well. The sequencing is performed with use of unmodified polymerase (attached to the ZMW bottom) and fluorescently labelled nucleotides flowing freely in the solution. The wells are constructed in a way that only the fluorescence occurring by the bottom of the well is detected. The fluorescent label is detached from the nucleotide upon its incorporation into the DNA strand, leaving an unmodified DNA strand. 
According to Pacific Biosciences (PacBio), the SMRT technology developer, this methodology allows detection of nucleotide modifications (such as cytosine methylation) through the observation of polymerase kinetics. The approach allows reads of 20,000 nucleotides or more, with average read lengths of 5 kilobases. In 2015, Pacific Biosciences announced the launch of a new sequencing instrument called the Sequel System, with 1 million ZMWs compared to 150,000 ZMWs in the PacBio RS II instrument. SMRT sequencing is referred to as "third-generation" or "long-read" sequencing.
Nanopore DNA sequencing
DNA passing through a nanopore changes the ionic current through the pore. This change depends on the shape, size and length of the DNA sequence: each type of nucleotide blocks the ion flow through the pore for a different period of time. The method does not require modified nucleotides and is performed in real time. Nanopore sequencing is referred to as "third-generation" or "long-read" sequencing, along with SMRT sequencing. Early industrial research into this method was based on a technique called 'exonuclease sequencing', where the readout of electrical signals occurred as nucleotides passed by alpha(α)-hemolysin pores covalently bound with cyclodextrin. However, the subsequent commercial method, 'strand sequencing', sequences DNA bases in an intact strand.
Two main areas of nanopore sequencing in development are solid-state nanopore sequencing and protein-based nanopore sequencing. Protein nanopore sequencing utilizes membrane protein complexes such as α-hemolysin, MspA (Mycobacterium smegmatis porin A) or CsgG, which show great promise given their ability to distinguish between individual and groups of nucleotides. In contrast, solid-state nanopore sequencing utilizes synthetic materials such as silicon nitride and aluminum oxide, and is preferred for its superior mechanical ability and thermal and chemical stability. The fabrication method is essential for this type of sequencing, given that the nanopore array can contain hundreds of pores with diameters smaller than eight nanometers. The concept originated from the idea that single-stranded DNA or RNA molecules can be electrophoretically driven in a strict linear sequence through a biological pore less than eight nanometers across, and can be detected given that the molecules release an ionic current while moving through the pore. The pore contains a detection region capable of recognizing different bases, with each base generating a time-specific signal corresponding to the sequence of bases as they cross the pore; these signals are then evaluated. Precise control over the DNA transport through the pore is crucial for success, and various enzymes such as exonucleases and polymerases have been used to moderate this process by positioning them near the pore's entrance.
Short-read sequencing methods
Massively parallel signature sequencing (MPSS)
The first of the high-throughput sequencing technologies, massively parallel signature sequencing (or MPSS, also called next-generation sequencing), was developed in the 1990s at Lynx Therapeutics, a company founded in 1992 by Sydney Brenner and Sam Eletr. MPSS was a bead-based method that used a complex approach of adapter ligation followed by adapter decoding, reading the sequence in increments of four nucleotides. This made the method susceptible to sequence-specific bias or loss of specific sequences.
Because the technology was so complex, MPSS was only performed 'in-house' by Lynx Therapeutics and no DNA sequencing machines were sold to independent laboratories. Lynx Therapeutics merged with Solexa (later acquired by Illumina) in 2004, leading to the development of sequencing-by-synthesis, a simpler approach acquired from Manteia Predictive Medicine, which rendered MPSS obsolete. However, the essential properties of the MPSS output were typical of later high-throughput data types, including hundreds of thousands of short DNA sequences. In the case of MPSS, these were typically used for sequencing cDNA for measurements of gene expression levels.
Polony sequencing
The polony sequencing method, developed in the laboratory of George M. Church at Harvard, was among the first high-throughput sequencing systems and was used to sequence a full E. coli genome in 2005. It combined an in vitro paired-tag library with emulsion PCR, an automated microscope, and ligation-based sequencing chemistry to sequence an E. coli genome at an accuracy of >99.9999% and a cost approximately 1/9 that of Sanger sequencing. The technology was licensed to Agencourt Biosciences, subsequently spun out into Agencourt Personal Genomics, and eventually incorporated into the Applied Biosystems SOLiD platform. Applied Biosystems was later acquired by Life Technologies, now part of Thermo Fisher Scientific.
454 pyrosequencing
A parallelized version of pyrosequencing was developed by 454 Life Sciences, which has since been acquired by Roche Diagnostics. The method amplifies DNA inside water droplets in an oil solution (emulsion PCR), with each droplet containing a single DNA template attached to a single primer-coated bead that then forms a clonal colony. The sequencing machine contains many picoliter-volume wells, each containing a single bead and sequencing enzymes. Pyrosequencing uses luciferase to generate light for detection of the individual nucleotides added to the nascent DNA, and the combined data are used to generate sequence reads. This technology provides intermediate read length and price per base compared to Sanger sequencing on one end and Solexa and SOLiD on the other.
Illumina (Solexa) sequencing
Solexa, now part of Illumina, was founded by Shankar Balasubramanian and David Klenerman in 1998, and developed a sequencing method based on reversible dye-terminator technology and engineered polymerases. The reversible terminated chemistry concept was invented by Bruno Canard and Simon Sarfati at the Pasteur Institute in Paris. It was developed internally at Solexa by those named on the relevant patents. In 2004, Solexa acquired the company Manteia Predictive Medicine in order to gain a massively parallel sequencing technology invented in 1997 by Pascal Mayer and Laurent Farinelli. It is based on "DNA clusters" or "DNA colonies", which involves the clonal amplification of DNA on a surface. The cluster technology was co-acquired with Lynx Therapeutics of California. Solexa Ltd. later merged with Lynx to form Solexa Inc.
In this method, DNA molecules and primers are first attached on a slide or flow cell and amplified with polymerase so that local clonal DNA colonies, later coined "DNA clusters", are formed. To determine the sequence, four types of reversible terminator bases (RT-bases) are added and non-incorporated nucleotides are washed away. A camera takes images of the fluorescently labeled nucleotides.
Then the dye, along with the terminal 3' blocker, is chemically removed from the DNA, allowing the next cycle to begin. Unlike pyrosequencing, the DNA chains are extended one nucleotide at a time, and image acquisition can be performed at a delayed moment, allowing very large arrays of DNA colonies to be captured in sequential images taken from a single camera. Decoupling the enzymatic reaction and the image capture allows for optimal throughput and theoretically unlimited sequencing capacity. With an optimal configuration, the ultimately reachable instrument throughput is dictated solely by the analog-to-digital conversion rate of the camera, multiplied by the number of cameras and divided by the number of pixels per DNA colony required for visualizing them optimally (approximately 10 pixels/colony). In 2012, with cameras operating at more than 10 MHz A/D conversion rates and available optics, fluidics and enzymatics, throughput could reach multiples of 1 million nucleotides/second, corresponding roughly to 1 human genome equivalent at 1x coverage per hour per instrument, and 1 human genome re-sequenced (at approx. 30x) per day per instrument (equipped with a single camera). The arithmetic behind these figures is sketched below.
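As a rough check of those numbers, a back-of-the-envelope Python sketch using the figures quoted above (the genome size is an assumed round value):
# Back-of-the-envelope check of the throughput figures quoted above.
adc_rate_pixels_per_s = 10e6   # 10 MHz A/D conversion, single camera
pixels_per_colony = 10         # ~10 pixels needed per DNA colony

bases_per_second = adc_rate_pixels_per_s / pixels_per_colony  # 1e6
genome_size = 3.2e9            # approximate human genome, bases

hours_per_1x_genome = genome_size / bases_per_second / 3600
print(round(hours_per_1x_genome, 1))   # ~0.9 h for 1x coverage

hours_per_30x_genome = 30 * hours_per_1x_genome
print(round(hours_per_30x_genome))     # ~27 h, i.e. roughly one day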
Combinatorial probe anchor synthesis (cPAS)
This method is an upgraded modification of the combinatorial probe anchor ligation (cPAL) technology described by Complete Genomics, which became part of the Chinese genomics company BGI in 2013. The two companies have refined the technology to allow for longer read lengths, reaction time reductions and faster time to results. In addition, data are now generated as contiguous full-length reads in the standard FASTQ file format and can be used as-is in most short-read-based bioinformatics analysis pipelines. The two technologies that form the basis for this high-throughput sequencing technology are DNA nanoballs (DNBs) and patterned arrays for nanoball attachment to a solid surface. DNA nanoballs are formed by denaturing double-stranded, adapter-ligated libraries and ligating the forward strand only to a splint oligonucleotide to form a ssDNA circle. Faithful copies of the circles containing the DNA insert are produced using rolling circle amplification, which generates approximately 300–500 copies. The long strand of ssDNA folds upon itself to produce a three-dimensional nanoball structure that is approximately 220 nm in diameter. Making DNBs replaces the need to generate PCR copies of the library on the flow cell, and as such can remove large proportions of duplicate reads, adapter–adapter ligations and PCR-induced errors. The patterned array of positively charged spots is fabricated through photolithography and etching techniques followed by chemical modification to generate a sequencing flow cell. Each spot on the flow cell is approximately 250 nm in diameter and the spots are separated by 700 nm (centre to centre); this allows easy attachment of a single negatively charged DNB to each spot, reducing under- or over-clustering on the flow cell. Sequencing is then performed by addition of an oligonucleotide probe that attaches in combination to specific sites within the DNB. The probe acts as an anchor that then allows one of four single reversibly inactivated, labelled nucleotides to bind after flowing across the flow cell. Unbound nucleotides are washed away before laser excitation of the attached labels; the labels then emit fluorescence, and the signal is captured by cameras and converted to a digital output for base calling. The attached base has its terminator and label chemically cleaved at completion of the cycle. The cycle is repeated with another flow of free, labelled nucleotides across the flow cell to allow the next nucleotide to bind and have its signal captured. This process is completed a number of times (usually 50 to 300 times) to determine the sequence of the inserted piece of DNA at a rate of approximately 40 million nucleotides per second as of 2018.
SOLiD sequencing
Applied Biosystems' (now a Life Technologies brand) SOLiD technology employs sequencing by ligation. Here, a pool of all possible oligonucleotides of a fixed length is labeled according to the sequenced position. Oligonucleotides are annealed and ligated; the preferential ligation by DNA ligase for matching sequences results in a signal informative of the nucleotide at that position. Each base in the template is sequenced twice, and the resulting data are decoded according to the 2-base encoding scheme used in this method. Before sequencing, the DNA is amplified by emulsion PCR. The resulting beads, each containing single copies of the same DNA molecule, are deposited on a glass slide. The result is sequences of quantities and lengths comparable to Illumina sequencing. This sequencing-by-ligation method has been reported to have some issues sequencing palindromic sequences.
Ion Torrent semiconductor sequencing
Ion Torrent Systems Inc. (now owned by Life Technologies) developed a system based on standard sequencing chemistry but with a novel, semiconductor-based detection system. This method of sequencing is based on the detection of hydrogen ions that are released during the polymerisation of DNA, as opposed to the optical methods used in other sequencing systems. A microwell containing a template DNA strand to be sequenced is flooded with a single type of nucleotide. If the introduced nucleotide is complementary to the leading template nucleotide, it is incorporated into the growing complementary strand. This causes the release of a hydrogen ion that triggers a hypersensitive ion sensor, which indicates that a reaction has occurred. If homopolymer repeats are present in the template sequence, multiple nucleotides will be incorporated in a single cycle. This leads to a corresponding number of released hydrogen ions and a proportionally higher electronic signal (a toy decoding of such per-flow signals is sketched after the next subsection).
DNA nanoball sequencing
DNA nanoball sequencing is a type of high-throughput sequencing technology used to determine the entire genomic sequence of an organism. The company Complete Genomics uses this technology to sequence samples submitted by independent researchers. The method uses rolling circle replication to amplify small fragments of genomic DNA into DNA nanoballs. Unchained sequencing by ligation is then used to determine the nucleotide sequence. This method of DNA sequencing allows large numbers of DNA nanoballs to be sequenced per run at low reagent costs compared to other high-throughput sequencing platforms. However, only short sequences of DNA are determined from each DNA nanoball, which makes mapping the short reads to a reference genome difficult.
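Because the signal in semiconductor sequencing scales with the number of bases incorporated in one flow, a read can in principle be decoded from per-flow signal intensities. The Python sketch below illustrates that idea on idealized, noise-free integer signals; the flow order and values are illustrative assumptions, and real instruments must call homopolymer lengths from noisy analog signals, which is the main source of their errors.
# Idealized decoding of semiconductor-sequencing flow signals: nucleotides
# are flowed in a fixed cyclic order, and each flow's signal is proportional
# to how many bases were incorporated. Noise-free toy example.
FLOW_ORDER = "TACG"  # assumed flow order, illustrative only

def decode_flows(signals: list[int]) -> str:
    read = []
    for i, signal in enumerate(signals):
        base = FLOW_ORDER[i % len(FLOW_ORDER)]
        read.append(base * signal)  # homopolymer of length `signal`
    return "".join(read)

# Flows T,A,C,G,T,A with signals 1,0,2,1,0,3 -> "TCCGAAA"
print(decode_flows([1, 0, 2, 1, 0, 3]))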
Heliscope single molecule sequencing
Heliscope sequencing is a method of single-molecule sequencing developed by Helicos Biosciences. It uses DNA fragments with added poly-A tail adapters, which are attached to the flow cell surface. The next steps involve extension-based sequencing with cyclic washes of the flow cell with fluorescently labeled nucleotides (one nucleotide type at a time, as with the Sanger method). The reads are performed by the Heliscope sequencer and are short, averaging 35 bp. What made this technology especially novel was that it was the first of its class to sequence non-amplified DNA, thus preventing any read errors associated with amplification steps. In 2009 a human genome was sequenced using the Heliscope; however, in 2012 the company went bankrupt.
Microfluidic systems
There are two main microfluidic systems used to sequence DNA: droplet-based microfluidics and digital microfluidics. Microfluidic devices address many of the limitations of current sequencing arrays. Abate et al. studied the use of droplet-based microfluidic devices for DNA sequencing. These devices can form and process picoliter-sized droplets at a rate of thousands per second. The devices were created from polydimethylsiloxane (PDMS) and used Förster resonance energy transfer (FRET) assays to read the sequences of DNA encompassed in the droplets. Each position on the array tested for a specific 15-base sequence.
Fair et al. used digital microfluidic devices to study DNA pyrosequencing. Significant advantages include the portability of the device, reagent volume, speed of analysis, mass-manufacturing abilities, and high throughput. This study provided a proof of concept showing that digital devices can be used for pyrosequencing; the study included synthesis, which involves enzyme-mediated extension and the addition of labeled nucleotides. Boles et al. also studied pyrosequencing on digital microfluidic devices. They used an electro-wetting device to create, mix, and split droplets. The sequencing used a three-enzyme protocol and DNA templates anchored with magnetic beads. The device was tested using two protocols and resulted in 100% accuracy based on raw pyrogram levels. The advantages of these digital microfluidic devices include size, cost, and achievable levels of functional integration. DNA sequencing research using microfluidics can also be applied to the sequencing of RNA, using similar droplet microfluidic techniques such as inDrops. This shows that many of these DNA sequencing techniques can be applied further and used to understand more about genomes and transcriptomes.
Methods in development
DNA sequencing methods currently under development include reading the sequence as a DNA strand transits through nanopores (a method that is now commercial, though subsequent generations such as solid-state nanopores are still in development), and microscopy-based techniques, such as atomic force microscopy or transmission electron microscopy, which are used to identify the positions of individual nucleotides within long DNA fragments (>5,000 bp) by nucleotide labeling with heavier elements (e.g., halogens) for visual detection and recording. Third-generation technologies aim to increase throughput and decrease the time to result and cost by eliminating the need for excessive reagents and harnessing the processivity of DNA polymerase.
Tunnelling currents DNA sequencing
Another approach uses measurements of the electrical tunnelling currents across single-strand DNA as it moves through a channel. Depending on its electronic structure, each base affects the tunnelling current differently, allowing differentiation between different bases.
The use of tunnelling currents has the potential to sequence orders of magnitude faster than ionic-current methods, and the sequencing of several DNA oligomers and micro-RNA has already been achieved.
Sequencing by hybridization
Sequencing by hybridization is a non-enzymatic method that uses a DNA microarray. A single pool of DNA whose sequence is to be determined is fluorescently labeled and hybridized to an array containing known sequences. Strong hybridization signals from a given spot on the array identify its sequence in the DNA being sequenced. This method of sequencing utilizes the binding characteristics of a library of short single-stranded DNA molecules (oligonucleotides), also called DNA probes, to reconstruct a target DNA sequence (a toy reconstruction of a sequence from its short subsequences is sketched below, after the microfluidic Sanger subsection). Non-specific hybrids are removed by washing and the target DNA is eluted. Hybrids are re-arranged such that the DNA sequence can be reconstructed. The benefit of this sequencing type is its ability to capture a large number of targets with homogeneous coverage. A large amount of chemicals and starting DNA is usually required. However, with the advent of solution-based hybridization, much less equipment and fewer chemicals are necessary.
Sequencing with mass spectrometry
Mass spectrometry may be used to determine DNA sequences. Matrix-assisted laser desorption ionization time-of-flight mass spectrometry, or MALDI-TOF MS, has specifically been investigated as an alternative method to gel electrophoresis for visualizing DNA fragments. With this method, DNA fragments generated by chain-termination sequencing reactions are compared by mass rather than by size. The mass of each nucleotide is different from the others, and this difference is detectable by mass spectrometry. Single-nucleotide mutations in a fragment can be more easily detected with MS than by gel electrophoresis alone. MALDI-TOF MS can more easily detect differences between RNA fragments, so researchers may indirectly sequence DNA with MS-based methods by converting it to RNA first. The higher resolution of DNA fragments permitted by MS-based methods is of special interest to researchers in forensic science, who may wish to find single-nucleotide polymorphisms in human DNA samples to identify individuals. These samples may be highly degraded, so forensic researchers often prefer mitochondrial DNA for its higher stability and its applications in lineage studies. MS-based sequencing methods have been used to compare the sequences of human mitochondrial DNA from samples in a Federal Bureau of Investigation database and from bones found in mass graves of World War I soldiers. Early chain-termination and TOF MS methods demonstrated read lengths of up to 100 base pairs. Researchers have been unable to exceed this average read size; like chain-termination sequencing alone, MS-based DNA sequencing may not be suitable for large de novo sequencing projects. Even so, a recent study did use the short sequence reads and mass spectroscopy to compare single-nucleotide polymorphisms in pathogenic Streptococcus strains.
Microfluidic Sanger sequencing
In microfluidic Sanger sequencing, the entire thermocycling amplification of DNA fragments as well as their separation by electrophoresis is done on a single glass wafer (approximately 10 cm in diameter), thus reducing reagent usage as well as cost. In some instances researchers have shown that they can increase the throughput of conventional sequencing through the use of microchips. Research is still needed to make this use of the technology effective.
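As an illustration of the reconstruction step in sequencing by hybridization (referenced above), the following Python sketch rebuilds a target sequence from the complete set of short subsequences (k-mers) it contains. It assumes every (k-1)-overlap is unique, i.e. a repeat-free target; real targets require graph-based methods, so this is a toy only.
# Toy sequencing-by-hybridization reconstruction: rebuild a target from
# the set of k-mers (probe sequences) it contains. Assumes a repeat-free
# target so that every (k-1)-overlap is unique. Illustrative only.
def reconstruct(kmers: set[str]) -> str:
    k = len(next(iter(kmers)))
    suffixes = {m[1:] for m in kmers}
    # The starting k-mer is the one no other k-mer leads into.
    current = next(m for m in kmers if m[:-1] not in suffixes)
    sequence = current
    remaining = set(kmers) - {current}
    while remaining:
        # Extend with the k-mer whose prefix matches the current suffix.
        current = next(m for m in remaining if m[:-1] == sequence[-(k - 1):])
        sequence += current[-1]
        remaining.remove(current)
    return sequence

print(reconstruct({"ATG", "TGG", "GGC", "GCG", "CGT"}))  # -> "ATGGCGT"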
Microscopy-based techniques
This approach directly visualizes the sequence of DNA molecules using electron microscopy. DNA base pairs within intact DNA molecules were first identified by enzymatically incorporating modified bases containing atoms of increased atomic number; direct visualization and identification of individually labeled bases has been demonstrated within a synthetic 3,272 base-pair DNA molecule and a 7,249 base-pair viral genome.
RNAP sequencing
This method is based on the use of RNA polymerase (RNAP), which is attached to a polystyrene bead. One end of the DNA to be sequenced is attached to another bead, with both beads placed in optical traps. RNAP motion during transcription brings the beads closer together and changes their relative distance, which can be recorded at single-nucleotide resolution. The sequence is deduced from four readouts obtained with lowered concentrations of each of the four nucleotide types, similarly to the Sanger method: a comparison is made between regions, and sequence information is deduced by comparing the known sequence regions to the unknown ones.
In vitro virus high-throughput sequencing
A method has been developed to analyze full sets of protein interactions using a combination of 454 pyrosequencing and an in vitro virus mRNA display method. Specifically, this method covalently links proteins of interest to the mRNAs encoding them, then detects the mRNA pieces using reverse transcription PCR. The mRNA may then be amplified and sequenced. The combined method was titled IVV-HiTSeq and can be performed under cell-free conditions, though its results may not be representative of in vivo conditions.
Market share
While there are many different ways to sequence DNA, only a few dominate the market. In 2022, Illumina had about 80% of the market; the rest was held by only a few players (PacBio, Oxford Nanopore, 454, MGI).
Sample preparation
The success of any DNA sequencing protocol relies upon the DNA or RNA sample extraction and preparation from the biological material of interest. A successful DNA extraction will yield a DNA sample with long, non-degraded strands. A successful RNA extraction will yield an RNA sample that should be converted to complementary DNA (cDNA) using reverse transcriptase – a DNA polymerase that synthesizes complementary DNA based on existing strands of RNA in a PCR-like manner. Complementary DNA can then be processed in the same way as genomic DNA. After DNA or RNA extraction, samples may require further preparation depending on the sequencing method: for Sanger sequencing, either cloning procedures or PCR are required prior to sequencing; for next-generation sequencing methods, library preparation is required before processing. Assessing the quality and quantity of nucleic acids both after extraction and after library preparation identifies degraded, fragmented, and low-purity samples and yields high-quality sequencing data.
Development initiatives
In October 2006, the X Prize Foundation established an initiative to promote the development of full-genome sequencing technologies, called the Archon X Prize, intending to award $10 million to "the first Team that can build a device and use it to sequence 100 human genomes within 10 days or less, with an accuracy of no more than one error in every 100,000 bases sequenced, with sequences accurately covering at least 98% of the genome, and at a recurring cost of no more than $10,000 (US) per genome."
Each year the National Human Genome Research Institute, or NHGRI, promotes grants for new research and developments in genomics. 2010 grants and 2011 candidates included continuing work in microfluidic, polony and base-heavy sequencing methodologies.
Computational challenges
The sequencing technologies described here produce raw data that needs to be assembled into longer sequences such as complete genomes (sequence assembly). There are many computational challenges to achieve this, such as the evaluation of the raw sequence data, which is done by programs and algorithms such as Phred and Phrap. Other challenges involve repetitive sequences, which often prevent complete genome assemblies because they occur in many places in the genome. As a consequence, many sequences may not be assigned to particular chromosomes. The production of raw sequence data is only the beginning of its detailed bioinformatical analysis, and new methods for sequencing and for correcting sequencing errors continue to be developed.
Read trimming
Sometimes, the raw reads produced by the sequencer are correct and precise only in a fraction of their length. Using the entire read may introduce artifacts in downstream analyses such as genome assembly, SNP calling, or gene expression estimation. Two classes of trimming programs have been introduced, based on the window-based and the running-sum classes of algorithms.
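A window-based trimmer can be illustrated in a few lines: slide a fixed-size window along the read and cut once the window's mean quality drops below a threshold. The Python sketch below is a minimal illustration of that idea, not the algorithm of any particular published tool; the window size and threshold are assumed values.
# Minimal window-based quality trimming: keep the read up to the first
# window whose mean Phred quality falls below the threshold.
# Illustrative only; published trimmers differ in details.
def trim_read(bases: str, quals: list[int],
              window: int = 4, threshold: float = 20.0) -> str:
    for start in range(0, len(bases) - window + 1):
        mean_q = sum(quals[start:start + window]) / window
        if mean_q < threshold:
            return bases[:start]  # cut before the first bad window
    return bases

read = "ACGTACGTAC"
quals = [38, 37, 36, 35, 30, 25, 12, 8, 5, 3]
print(trim_read(read, quals))  # -> "ACGT"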
Ethical issues
Human genetics has been included within the field of bioethics since the early 1970s, and the growth in the use of DNA sequencing (particularly high-throughput sequencing) has introduced a number of ethical issues. One key issue is the ownership of an individual's DNA and of the data produced when that DNA is sequenced. Regarding the DNA molecule itself, the leading legal case on this topic, Moore v. Regents of the University of California (1990), ruled that individuals have no property rights to discarded cells or any profits made using these cells (for instance, as a patented cell line). However, individuals have a right to informed consent regarding removal and use of cells. Regarding the data produced through DNA sequencing, Moore gives the individual no rights to the information derived from their DNA.
As DNA sequencing becomes more widespread, the storage, security and sharing of genomic data has also become more important. For instance, one concern is that insurers may use an individual's genomic data to modify their quote, depending on the perceived future health of the individual based on their DNA. In May 2008, the Genetic Information Nondiscrimination Act (GINA) was signed in the United States, prohibiting discrimination on the basis of genetic information with respect to health insurance and employment. In 2012, the US Presidential Commission for the Study of Bioethical Issues reported that existing privacy legislation for DNA sequencing data, such as GINA and the Health Insurance Portability and Accountability Act, was insufficient, noting that whole-genome sequencing data is particularly sensitive because it can be used to identify not only the individual from whom the data was created, but also their relatives.
In most of the United States, DNA that is "abandoned", such as that found on a licked stamp or envelope, coffee cup, cigarette, chewing gum, household trash, or hair that has fallen on a public sidewalk, may legally be collected and sequenced by anyone, including the police, private investigators, political opponents, or people involved in paternity disputes. As of 2013, eleven states have laws that can be interpreted to prohibit "DNA theft".
Ethical issues have also been raised by the increasing use of genetic variation screening, both in newborns and in adults, by companies such as 23andMe. It has been asserted that screening for genetic variations can be harmful, increasing anxiety in individuals who have been found to have an increased risk of disease. For example, in one case noted in Time, doctors screening an ill baby for genetic variants chose not to inform the parents of an unrelated variant linked to dementia due to the harm it would cause to the parents. However, a 2011 study in The New England Journal of Medicine showed that individuals undergoing disease risk profiling did not show increased levels of anxiety. In addition, the development of next-generation sequencing technologies such as nanopore-based sequencing has raised further ethical concerns.
Biology and health sciences
Genetics
Biology
1158235
https://en.wikipedia.org/wiki/Debye%20length
Debye length
In plasmas and electrolytes, the Debye length (Debye radius or Debye–Hückel screening length) is a measure of a charge carrier's net electrostatic effect in a solution and how far its electrostatic effect persists. With each Debye length the charges are increasingly electrically screened and the electric potential decreases in magnitude by a factor of 1/e. A Debye sphere is a volume whose radius is the Debye length. Debye length is an important parameter in plasma physics, electrolytes, and colloids (DLVO theory). The Debye length for a plasma consisting of particles with density $n$, charge $q$, and temperature $T$ is given by $\lambda_{\rm D} = \sqrt{\varepsilon_0 k_{\rm B} T / (n q^2)}$. The corresponding Debye screening wavenumber is given by $k_{\rm D} = 1/\lambda_{\rm D}$. The analogous quantities at very low temperatures ($T \to 0$) are known as the Thomas–Fermi length and the Thomas–Fermi wavenumber, respectively. They are of interest in describing the behaviour of electrons in metals at room temperature and warm dense matter. The Debye length is named after the Dutch-American physicist and chemist Peter Debye (1884–1966), a Nobel laureate in Chemistry. Physical origin The Debye length arises naturally in the description of a substance with mobile charges, such as a plasma, electrolyte solution, or semiconductor. In such a substance, charges naturally screen out electric fields induced in the substance, with a certain characteristic length. That characteristic length is the Debye length. Its value can be mathematically derived for a system of $N$ different species of charged particles, where the $j$-th species carries charge $q_j$ and has concentration $n_j(\mathbf{r})$ at position $\mathbf{r}$. The distribution of charged particles within this medium gives rise to an electric potential $\Phi(\mathbf{r})$ that satisfies Poisson's equation: $\varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^N q_j n_j(\mathbf{r}) - \rho_{\rm ext}(\mathbf{r})$, where $\varepsilon$ is the medium's permittivity, and $\rho_{\rm ext}$ is any static charge density that is not part of the medium. The mobile charges not only contribute to $\Phi(\mathbf{r})$, but are also affected by it due to the corresponding Coulomb force, $-q_j \nabla \Phi(\mathbf{r})$. If we further assume the system to be at temperature $T$, then the charge concentrations may be considered, under the assumptions of mean field theory, to tend toward the Boltzmann distribution, $n_j(\mathbf{r}) = n_j^0 \exp\left(-\frac{q_j \Phi(\mathbf{r})}{k_{\rm B} T}\right)$, where $k_{\rm B}$ is the Boltzmann constant and where $n_j^0$ is the mean concentration of charges of species $j$. Identifying the instantaneous concentrations and potential in the Poisson equation with their mean-field counterparts in the Boltzmann distribution yields the Poisson–Boltzmann equation: $\varepsilon \nabla^2 \Phi(\mathbf{r}) = -\sum_{j=1}^N q_j n_j^0 \exp\left(-\frac{q_j \Phi(\mathbf{r})}{k_{\rm B} T}\right) - \rho_{\rm ext}(\mathbf{r})$. Solutions to this nonlinear equation are known for some simple systems. Solutions for more general systems may be obtained in the high-temperature (weak coupling) limit, $q_j \Phi \ll k_{\rm B} T$, by Taylor expanding the exponential: $\exp\left(-\frac{q_j \Phi}{k_{\rm B} T}\right) \approx 1 - \frac{q_j \Phi}{k_{\rm B} T}$. This approximation yields the linearized Poisson–Boltzmann equation, which also is known as the Debye–Hückel equation: $\varepsilon \nabla^2 \Phi(\mathbf{r}) = \left(\sum_{j=1}^N \frac{n_j^0 q_j^2}{k_{\rm B} T}\right) \Phi(\mathbf{r}) - \sum_{j=1}^N n_j^0 q_j - \rho_{\rm ext}(\mathbf{r})$. The second term on the right-hand side vanishes for systems that are electrically neutral. The term in parentheses divided by $\varepsilon$ has the units of an inverse length squared, and by dimensional analysis leads to the definition of the characteristic length scale $\lambda_{\rm D} = \left(\frac{\varepsilon k_{\rm B} T}{\sum_{j=1}^N n_j^0 q_j^2}\right)^{1/2}$. Substituting this length scale into the Debye–Hückel equation and neglecting the second and third terms on the right side yields the much simplified form $\nabla^2 \Phi(\mathbf{r}) = \frac{\Phi(\mathbf{r})}{\lambda_{\rm D}^2}$. As the only characteristic length scale in the Debye–Hückel equation, $\lambda_{\rm D}$ sets the scale for variations in the potential and in the concentrations of charged species. All charged species contribute to the Debye length in the same way, regardless of the sign of their charges. To illustrate Debye screening, one can consider the example of a point charge placed in a plasma.
The external charge density is then $\rho_{\rm ext}(\mathbf{r}) = Q\,\delta(\mathbf{r})$, and the resulting potential is $\Phi(r) = \frac{Q}{4\pi\varepsilon r}\,e^{-r/\lambda_{\rm D}}$. The bare Coulomb potential is exponentially screened by the medium, over a distance of the Debye length: this is called Debye screening or shielding. The Debye length may be expressed in terms of the Bjerrum length $\lambda_{\rm B}$ as $\lambda_{\rm D} = \left(4\pi \lambda_{\rm B} \sum_{j=1}^N n_j^0 z_j^2\right)^{-1/2}$, where $z_j = q_j/e$ is the integer charge number that relates the charge on the $j$-th ionic species to the elementary charge $e$. In a plasma For a weakly collisional plasma, Debye shielding can be introduced in a very intuitive way by taking into account the granular character of such a plasma. Let us imagine a sphere about one of its electrons, and compare the number of electrons crossing this sphere with and without Coulomb repulsion. With repulsion, this number is smaller. Therefore, according to Gauss's theorem, the apparent charge of the first electron is smaller than in the absence of repulsion. The larger the sphere radius, the larger the number of deflected electrons, and the smaller the apparent charge: this is Debye shielding. Since the global deflection of particles includes the contributions of many other ones, the density of the electrons does not change, in contrast to the shielding at work next to a Langmuir probe (Debye sheath). Ions bring a similar contribution to shielding, because of the attractive Coulomb deflection of charges with opposite signs. This intuitive picture leads to an effective calculation of Debye shielding (see section II.A.2 of ). The assumption of a Boltzmann distribution is not necessary in this calculation: it works for whatever particle distribution function. The calculation also avoids approximating weakly collisional plasmas as continuous media. An N-body calculation reveals that the bare Coulomb acceleration of a particle by another one is modified by a contribution mediated by all other particles, a signature of Debye shielding (see section 8 of ). When starting from random particle positions, the typical time-scale for shielding to set in is the time for a thermal particle to cross a Debye length, i.e. the inverse of the plasma frequency. Therefore, in a weakly collisional plasma, collisions play an essential role by bringing about a cooperative self-organization process: Debye shielding. This shielding is important to get a finite diffusion coefficient in the calculation of Coulomb scattering (Coulomb collision). In a non-isothermal plasma, the temperatures for electrons and heavy species may differ while the background medium may be treated as the vacuum, and the Debye length is $\lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B}/q_e^2}{n_e/T_e + \sum_j z_j^2 n_j/T_i}}$, where λD is the Debye length, ε0 is the permittivity of free space, kB is the Boltzmann constant, qe is the charge of an electron, Te and Ti are the temperatures of the electrons and ions, respectively, ne is the density of electrons, and nj is the density of atomic species j, with positive ionic charge zjqe. Even in a quasineutral cold plasma, where the ion contribution might seem dominant because of the lower ion temperature, the ion term is actually often dropped, giving $\lambda_{\rm D} = \sqrt{\frac{\varepsilon_0 k_{\rm B} T_e}{n_e q_e^2}}$, although this is only valid when the mobility of ions is negligible compared to the process's timescale. A useful form of this equation is $\lambda_{\rm D} \approx 7.43 \times 10^2 \sqrt{T_e/n_e}$, where $\lambda_{\rm D}$ is in cm, $T_e$ in eV, and $n_e$ in cm−3. Typical values In space plasmas where the electron density is relatively low, the Debye length may reach macroscopic values, such as in the magnetosphere, solar wind, interstellar medium and intergalactic medium.
See the table here below: In an electrolyte solution In an electrolyte or a colloidal suspension, the Debye length for a monovalent electrolyte is usually denoted with the symbol κ−1: $\kappa^{-1} = \sqrt{\frac{\varepsilon_{\rm r} \varepsilon_0 k_{\rm B} T}{2 e^2 I}}$, where I is the ionic strength of the electrolyte in number/m3 units, ε0 is the permittivity of free space, εr is the dielectric constant, kB is the Boltzmann constant, T is the absolute temperature in kelvins, and $e$ is the elementary charge; or, for a symmetric monovalent electrolyte, $\kappa^{-1} = \sqrt{\frac{\varepsilon_{\rm r} \varepsilon_0 R T}{2 \times 10^3\, F^2 C_0}}$, where R is the gas constant, F is the Faraday constant, C0 is the electrolyte concentration in molar units (M or mol/L), and the factor 103 converts the concentration from mol/L to mol/m3. Alternatively, $\kappa^{-1} = \frac{1}{\sqrt{8\pi \lambda_{\rm B} N_{\rm A} \times 10^{-24}\, I}}$, where $\lambda_{\rm B}$ is the Bjerrum length of the medium in nm, and the factor $10^{-24}$ derives from transforming unit volume from cubic dm to cubic nm. For deionized water at room temperature, at pH = 7, κ−1 ≈ 1 μm. At room temperature (20 °C), one can consider in water the relation $\kappa^{-1}({\rm nm}) = \frac{0.304}{\sqrt{I({\rm M})}}$, where κ−1 is expressed in nanometres (nm) and I is the ionic strength expressed in molar (M or mol/L). There is a method of estimating an approximate value of the Debye length in liquids using conductivity, which is described in an ISO standard and the associated book. In semiconductors The Debye length has become increasingly significant in the modeling of solid state devices as improvements in lithographic technologies have enabled smaller geometries. The Debye length of semiconductors is given by $L_{\rm D} = \sqrt{\frac{\varepsilon k_{\rm B} T}{q^2 N_{\rm dop}}}$, where ε is the dielectric constant, kB is the Boltzmann constant, T is the absolute temperature in kelvins, q is the elementary charge, and Ndop is the net density of dopants (either donors or acceptors). When doping profiles exceed the Debye length, majority carriers no longer behave according to the distribution of the dopants. Instead, a measure of the profile of the doping gradients provides an "effective" profile that better matches the profile of the majority carrier density. In the context of solids, the Thomas–Fermi screening length may be required instead of the Debye length.
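As a numerical illustration of the formulas above, the following Python sketch evaluates the electron Debye length of a plasma and the Debye length of an aqueous electrolyte. The example parameter values (a dilute space-plasma-like density and a physiological saline concentration) are illustrative assumptions, not values taken from the text.

import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
KB = 1.380649e-23        # Boltzmann constant, J/K
E = 1.602176634e-19      # elementary charge, C
NA = 6.02214076e23       # Avogadro constant, 1/mol

def debye_length_plasma(n_e, T_e_eV):
    """Electron Debye length (m) for electron density n_e (1/m^3)
    and electron temperature T_e given in eV."""
    return math.sqrt(EPS0 * (T_e_eV * E) / (n_e * E**2))

def debye_length_electrolyte(conc_molar, eps_r=78.5, T=298.15):
    """Debye length (m) of a symmetric monovalent electrolyte of
    concentration conc_molar (mol/L) in a solvent of relative
    permittivity eps_r at temperature T (K)."""
    I = conc_molar * 1e3 * NA  # ionic strength in 1/m^3
    return math.sqrt(eps_r * EPS0 * KB * T / (2 * E**2 * I))

# Illustrative values: a tenuous 10 eV plasma and 0.15 M saline.
print(debye_length_plasma(n_e=1e7, T_e_eV=10))          # ~7.4 m
print(debye_length_electrolyte(conc_molar=0.15) * 1e9)  # ~0.8 nm

The second result agrees with the room-temperature shortcut above: 0.304/sqrt(0.15) is approximately 0.79 nm.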
Physical sciences
Electrostatics
Physics
1160429
https://en.wikipedia.org/wiki/Foreshock
Foreshock
A foreshock is an earthquake that occurs before a larger seismic event (the mainshock) and is related to it in both time and space. The designation of an earthquake as foreshock, mainshock or aftershock is only possible after the full sequence of events has happened. Occurrence Foreshock activity has been detected for about 40% of all moderate to large earthquakes, and for about 70% of events of M>7.0. They occur from a matter of minutes to days or even longer before the main shock; for example, the 2002 Sumatra earthquake is regarded as a foreshock of the 2004 Indian Ocean earthquake, with a delay of more than two years between the two events. Some great earthquakes (M>8.0) show no foreshock activity at all, such as the M8.6 1950 India–China earthquake. The increase in foreshock activity is difficult to quantify for individual earthquakes but becomes apparent when combining the results of many different events. From such combined observations, the increase before the mainshock is observed to be of inverse power law type (a numerical sketch is given at the end of this article). This may either indicate that foreshocks cause stress changes resulting in the mainshock or that the increase is related to a general increase in stress in the region. Mechanics The observation of foreshocks associated with many earthquakes suggests that they are part of a preparation process prior to nucleation. In one model of earthquake rupture, the process forms as a cascade, starting with a very small event that triggers a larger one, continuing until the main shock rupture is triggered. However, analysis of some foreshocks has shown that they tend to relieve stress around the fault. In this view, foreshocks and aftershocks are part of the same process. This is supported by an observed relationship between the rate of foreshocks and the rate of aftershocks for an event. In practice, there are two main conflicting theories about foreshocks: the earthquake triggering process (described in SOC models and ETAS-like models) and the loading process by aseismic slip (nucleation models). This debate about the prognostic value of foreshocks is well known as the foreshock hypothesis. Earthquake prediction An increase in seismic activity in an area has been used as a method of predicting earthquakes, most notably in the case of the 1975 Haicheng earthquake in China, where an evacuation was triggered by an increase in activity. However, most earthquakes lack obvious foreshock patterns, and this method has not proven useful, as most small earthquakes are not foreshocks, leading to probable false alarms. Earthquakes along oceanic transform faults do show repeatable foreshock behaviour, allowing the prediction of both the location and timing of such earthquakes. Examples of earthquakes with foreshock events The strongest recorded mainshock that followed a foreshock is the 1960 Valdivia earthquake, which had a magnitude of 9.5 MW. Note: dates are in local time
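The inverse power-law increase in stacked foreshock activity mentioned above can be sketched numerically with a rate of the inverse Omori form. The constants k, c and p in the Python snippet below are purely illustrative assumptions, not values fitted to any real earthquake catalogue.

# Illustrative inverse power-law (inverse Omori) foreshock rate:
# n(t) = k / (t_main - t + c)**p, where t_main is the mainshock time.
# k, c and p below are arbitrary illustrative values.

def foreshock_rate(t, t_main=100.0, k=5.0, c=0.5, p=1.0):
    """Stacked foreshock rate (events/day) at time t (days),
    rising as the mainshock at t_main is approached."""
    return k / (t_main - t + c) ** p

for t in (10, 50, 90, 99):
    print(f"t = {t:>3} d: rate = {foreshock_rate(t):.2f} events/day")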
Physical sciences
Seismology
Earth science
1161453
https://en.wikipedia.org/wiki/Clay%20mineral
Clay mineral
Clay minerals are hydrous aluminium phyllosilicates (e.g. kaolin, Al2Si2O5(OH)4), sometimes with variable amounts of iron, magnesium, alkali metals, alkaline earths, and other cations, found on or near some planetary surfaces. Clay minerals form in the presence of water and have been important to life, and many theories of abiogenesis involve them. They are important constituents of soils, and have been useful to humans since ancient times in agriculture and manufacturing. Properties Clay is a very fine-grained geologic material that develops plasticity when wet, but becomes hard, brittle and non-plastic upon drying or firing. It is a very common material, and is the oldest known ceramic. Prehistoric humans discovered the useful properties of clay and used it for making pottery. The chemistry of clay, including its capacity to retain nutrient cations such as potassium and ammonium, is important to soil fertility. Because the individual particles in clay are less than a few micrometres in size, they cannot be characterized by ordinary optical or physical methods. The crystallographic structure of clay minerals became better understood in the 1930s with advancements in the X-ray diffraction (XRD) technique, which proved indispensable for deciphering their crystal lattices. Clay particles were found to be predominantly sheet silicate (phyllosilicate) minerals, now grouped together as clay minerals. Their structure is based on flat hexagonal sheets similar to those of the mica group of minerals. Standardization in terminology arose during this period as well, with special attention given to similar words that resulted in confusion, such as sheet and plane. Because clay minerals are usually (but not necessarily) ultrafine-grained, special analytical techniques are required for their identification and study. In addition to X-ray crystallography, these include electron diffraction methods, various spectroscopic methods such as Mössbauer spectroscopy, infrared spectroscopy, Raman spectroscopy, and SEM-EDS or automated mineralogy processes. These methods can be augmented by polarized light microscopy, a traditional technique establishing fundamental occurrences or petrologic relationships. Occurrence Clay minerals are common weathering products (including weathering of feldspar) and low-temperature hydrothermal alteration products. Clay minerals are very common in soils, in fine-grained sedimentary rocks such as shale, mudstone, and siltstone, and in fine-grained metamorphic slate and phyllite. Given the requirement of water, clay minerals are relatively rare in the Solar System, though they occur extensively on Earth where water has interacted with other minerals and organic matter. Clay minerals have been detected at several locations on Mars, including Echus Chasma, Mawrth Vallis, the Memnonia quadrangle and the Elysium quadrangle. Spectroscopy has confirmed their presence on celestial bodies including the dwarf planet Ceres, asteroid 101955 Bennu, and comet Tempel 1, as well as Jupiter's moon Europa. Structure Like all phyllosilicates, clay minerals are characterised by two-dimensional sheets of corner-sharing tetrahedra or octahedra. The sheet units have the chemical composition Si2O5. Each silica tetrahedron shares three of its vertex oxygen ions with other tetrahedra, forming a hexagonal array in two dimensions. The fourth oxygen ion is not shared with another tetrahedron, and all of the tetrahedra "point" in the same direction; i.e. all of the unshared oxygen ions are on the same side of the sheet.
These unshared oxygen ions are called apical oxygen ions. In clays, the tetrahedral sheets are always bonded to octahedral sheets formed from small cations, such as aluminum or magnesium, coordinated by six oxygen atoms. The unshared vertex from the tetrahedral sheet also forms part of one side of the octahedral sheet, but an additional oxygen atom is located above the gap in the tetrahedral sheet at the center of the six tetrahedra. This oxygen atom is bonded to a hydrogen atom, forming an OH group in the clay structure. Clays can be categorized depending on the way that tetrahedral and octahedral sheets are packaged into layers. If there is only one tetrahedral and one octahedral group in each layer, the clay is known as a 1:1 clay. The alternative, known as a 2:1 clay, has two tetrahedral sheets with the unshared vertex of each sheet pointing towards each other and forming each side of the octahedral sheet. Bonding between the tetrahedral and octahedral sheets requires that the tetrahedral sheet becomes corrugated or twisted, causing ditrigonal distortion to the hexagonal array, and the octahedral sheet is flattened. This minimizes the overall bond-valence distortions of the crystallite. Depending on the composition of the tetrahedral and octahedral sheets, the layer will have no charge or will have a net negative charge. If the layers are charged, this charge is balanced by interlayer cations such as Na+ or K+ or by a lone octahedral sheet. The interlayer may also contain water. The crystal structure is formed from a stack of layers interspaced with the interlayers. Classification Clay minerals can be classified as 1:1 or 2:1. A 1:1 clay would consist of one tetrahedral sheet and one octahedral sheet; examples are kaolinite and serpentine. A 2:1 clay consists of an octahedral sheet sandwiched between two tetrahedral sheets; examples are talc, vermiculite, and montmorillonite. The layers in 1:1 clays are uncharged and are bonded by hydrogen bonds between layers, but 2:1 layers have a net negative charge and may be bonded together either by individual cations (such as potassium in illite or sodium or calcium in smectites) or by positively charged octahedral sheets (as in chlorites). Clay minerals include the following groups: Kaolin group, which includes the minerals kaolinite, dickite, halloysite, and nacrite (polymorphs of Al2Si2O5(OH)4). Some sources include the kaolinite-serpentine group due to structural similarities. Smectite group, which includes dioctahedral smectites, such as montmorillonite, nontronite and beidellite, and trioctahedral smectites, such as saponite. In 2013, analytical tests by the Curiosity rover found results consistent with the presence of smectite clay minerals on the planet Mars. Illite group, which includes the clay-micas. Illite is the only common mineral in this group. Chlorite group, which includes a wide variety of similar minerals with considerable chemical variation. Other 2:1 clay types exist, such as palygorskite (also known as attapulgite) and sepiolite, clays with long water channels internal to their structure. Mixed-layer clay variations exist for most of the above groups. Ordering is described as random or regular and is further described by the term reichweite, which is German for range or reach. Literature articles will refer to an R1 ordered illite-smectite, for example. This type would be ordered in an illite-smectite-illite-smectite (ISIS) fashion.
R0, on the other hand, describes random ordering, and other advanced ordering types are also found (R3, etc.). Mixed-layer clay minerals which are perfect R1 types often get their own names: R1 ordered chlorite-smectite is known as corrensite, and R1 illite-smectite is rectorite. X-ray d(001) is the spacing between layers (the basal spacing) in nanometers, as determined by X-ray crystallography. Glycol (mg/g) is the adsorption capacity for glycol, which occupies the interlayer sites when the clay is exposed to a vapor of ethylene glycol for eight hours. CEC is the cation exchange capacity of the clay. K2O (%) is the percent content of potassium oxide in the clay. DTA describes the differential thermal analysis curve of the clay. Clay and the origins of life The clay hypothesis for the origin of life was proposed by Graham Cairns-Smith in 1985. It postulates that complex organic molecules arose gradually on pre-existing, non-organic replication surfaces of silicate crystals in contact with an aqueous solution. The clay mineral montmorillonite has been shown to catalyze the polymerization of RNA in aqueous solution from nucleotide monomers, and the formation of membranes from lipids. In 1998, Hyman Hartman proposed that "the first organisms were self-replicating iron-rich clays which fixed carbon dioxide into oxalic acid and other dicarboxylic acids. This system of replicating clays and their metabolic phenotype then evolved into the sulfide rich region of the hot spring acquiring the ability to fix nitrogen. Finally phosphate was incorporated into the evolving system which allowed the synthesis of nucleotides and phospholipids." Biomedical applications of clays The structural and compositional versatility of clay minerals gives them interesting biological properties. Because of their disc shape and charged surfaces, clays interact with a range of drugs, proteins, polymers, DNA, and other macromolecules. Some of the applications of clays include drug delivery, tissue engineering, and bioprinting. Mortar applications Clay minerals can be incorporated in lime-metakaolin mortars to improve mechanical properties. Electrochemical separation helps to obtain modified saponite-containing products with high concentrations of smectite-group minerals, smaller mineral particle size, a more compact structure, and greater surface area. These characteristics open possibilities for the manufacture of high-quality ceramics and heavy-metal sorbents from saponite-containing products. Furthermore, tail grinding occurs during the preparation of the raw material for ceramics; this waste reprocessing is of high importance for the use of clay pulp as a neutralizing agent, as fine particles are required for the reaction. Experiments on histosol deacidification with the alkaline clay slurry demonstrated that neutralization to an average pH of 7.1 is reached at 30% of pulp added, and an experimental site with perennial grasses proved the efficacy of the technique. Moreover, the reclamation of disturbed lands is an integral part of the social and environmental responsibility of the mining company, and this scenario addresses community needs at both local and regional levels. The tests which verify that clay minerals are present The results of glycol adsorption, cation exchange capacity, X-ray diffraction, differential thermal analysis, and chemical tests all give data that may be used for quantitative estimations.
After the quantities of organic matter, carbonates, free oxides, and nonclay minerals have been determined, the percentages of clay minerals are estimated using the appropriate glycol adsorption, cation exchange capacity, K2O, and DTA data. The amount of illite is estimated from the K2O content, since illite is the only common clay mineral containing potassium (a rough numerical sketch is given at the end of this article). Argillaceous rocks Argillaceous rocks are those in which clay minerals are a significant component. For example, argillaceous limestones are limestones consisting predominantly of calcium carbonate, but including 10–40% of clay minerals: such limestones, when soft, are often called marls. Similarly, argillaceous sandstones, such as greywacke, are sandstones consisting primarily of quartz grains, with the interstitial spaces filled with clay minerals.
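As a rough illustration of the allocation step described above, the Python sketch below estimates the illite fraction of a clay sample from its measured K2O content. The reference K2O content assumed for pure illite is an illustrative assumption; published values vary, and the function name is hypothetical.

def estimate_illite_percent(k2o_measured, k2o_pure_illite=8.0):
    """Estimate weight-% illite from measured K2O (wt%), assuming
    illite is the only K-bearing clay mineral present and that pure
    illite carries k2o_pure_illite wt% K2O (an illustrative value)."""
    return 100.0 * k2o_measured / k2o_pure_illite

# A sample measuring 2.4 wt% K2O would be estimated as ~30% illite.
print(estimate_illite_percent(2.4))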
Physical sciences
Silicate minerals
Earth science
2444759
https://en.wikipedia.org/wiki/Shrimpfish
Shrimpfish
Shrimpfish, also called razorfish, are five small species of marine fishes in the subfamily Centriscinae of the family Centriscidae. The species in the genera Aeoliscus and Centriscus are found in relatively shallow tropical parts of the Indo-Pacific, while the banded bellowsfish, which often is placed in the subfamily Macroramphosinae instead, is restricted to deeper southern oceans. Shrimpfish are nearly transparent and flattened from side to side with long snouts and sharp-edged bellies. A thin, dark stripe runs along their bodies. These stripes and their shrimp-like appearance are the source of their name. They swim in a synchronized manner with their heads pointing downwards. Adult shrimpfish are up to long, including their snouts. The banded bellowsfish more closely resembles members of the subfamily Macroramphosinae (especially Notopogon) in both behaviour and body shape, and reaches a length of up to . Species
Biology and health sciences
Acanthomorpha
Animals
2445877
https://en.wikipedia.org/wiki/Bastion%20fort
Bastion fort
A bastion fort or trace italienne (a phrase derived from non-standard French, meaning 'Italian outline') is a fortification in a style developed during the early modern period in response to the ascendancy of gunpowder weapons such as cannon, which rendered earlier medieval approaches to fortification obsolete. It appeared in the mid-fifteenth century in Italy. Some types, especially when combined with ravelins and other outworks, resembled the related star fort of the same era. The design of the fort is normally a polygon with bastions at the corners of the walls. These outcroppings eliminated protected blind spots, called "dead zones", and allowed fire along the curtain wall from positions protected from direct fire. Many bastion forts also feature cavaliers, which are raised secondary structures based entirely inside the primary structure. Origins Their predecessors, medieval fortresses, were usually placed on high hills. From there, arrows were shot at the enemies. The enemies' hope was to either ram the gate or climb over the wall with ladders and overcome the defenders. For the invading force these fortifications proved quite difficult to overcome and, accordingly, fortresses occupied a key position in warfare. Passive ring-shaped (Enceinte) fortifications of the Medieval era proved vulnerable to damage or destruction when attackers directed cannon fire on to perpendicular masonry wall. In addition, attackers that could get close to the wall were able to conduct undermining operations in relative safety, as the defenders could not shoot at them from nearby walls, until the development of machicolation. In contrast, the bastion fortress was a very flat structure composed of many triangular bastions, specifically designed to cover each other, and a ditch. To counteract the cannonballs, defensive walls were made lower and thicker. To counteract the fact that lower walls were easier to climb, the ditch was widened so that attacking infantry were still exposed to fire from a higher elevation, including enfilading fire from the bastions. The outer side of the ditch was usually provided with a glacis to deflect cannonballs aimed at the lower part of the main wall. Further structures, such as ravelins, tenailles, hornworks or crownworks, and even detached forts could be added to create complex outer works to further protect the main wall from artillery, and sometimes provide additional defensive positions. They were built of many materials, usually earth and brick, as brick does not shatter on impact from a cannonball as stone does. Bastion fortifications were further developed in the late fifteenth and early sixteenth centuries, primarily in response to the French invasion of the Italian peninsula. The French army was equipped with new cannon and bombards that were easily able to destroy traditional fortifications built in the Middle Ages. Star forts were employed by Michelangelo in the defensive earthworks of Florence, and refined in the sixteenth century by Baldassare Peruzzi and Vincenzo Scamozzi. The design spread out of Italy in the 1530s and 1540s. It was employed heavily throughout Europe for the following three centuries. Italian engineers were heavily in demand throughout Europe to help build the new fortifications. The late-seventeenth-century architects Menno van Coehoorn and especially Vauban, Louis XIV's military engineer, are considered to have taken the form to its logical extreme. "Fortresses... 
acquired ravelins and redoubts, bonnettes and lunettes, tenailles and tenaillons, counterguards and crownworks and hornworks and curvettes and faussebrayes and scarps and cordons and banquettes and counterscarps..." The star-shaped fortification had a formative influence on the patterning of the Renaissance ideal city: "The Renaissance was hypnotized by one city type which for a century and a half—from Filarete to Scamozzi—was impressed upon all utopian schemes: this is the star-shaped city". In the nineteenth century, the development of the explosive shell changed the nature of defensive fortifications. Elvas, in Portugal is considered by some to be the best surviving example of the Dutch school of fortifications. Slopes When the newly-effective manoeuvrable siege cannon came into military strategy in the fifteenth century, the response from military engineers was to arrange for the walls to be embedded into ditches fronted by earthen slopes (glacis) so that they could not be attacked by destructive direct fire and to have the walls topped by earthen banks that absorbed and largely dissipated the energy of plunging fire. Where conditions allowed, as in Fort Manoel in Malta, the ditches were cut into the native rock, and the wall at the inside of the ditch was simply unquarried native rock. As the walls became lower, they also became more vulnerable to assault. Dead zone The rounded shape that had previously been dominant for the design of turrets created "dead space", or "dead zones", which were relatively sheltered from defending fire, because direct fire from other parts of the defences could not be directed around curved walls. To prevent this, what had previously been round or square turrets were extended into diamond-shaped points to eliminate potential cover for attacking troops. The ditches and walls channelled the attackers into carefully constructed zwinger, bailey, or similar "kill zone" areas where the attackers had no place to shelter from the fire of the defenders. Enfilade A further and more subtle change was to move from a passive model of defence to an active one. The lower walls were more vulnerable to being stormed, and the protection that the earthen banking provided against direct fire failed if the attackers could occupy the slope on the outside of the ditch and mount an attacking cannon there. Therefore, the shape was designed to make maximum use of enfilade (or flanking) fire against any attackers on the outer edge of the ditch and also any who should reach the base of any of the walls. The indentations in the base of each point on the star sheltered cannons. Those cannons would have a clear line of fire directly down the edge of the neighbouring points, while their point of the star was protected by fire from the base of those points. The evolution of these ideas can be seen in transitional fortifications such as Sarzana in northwest Italy. Other changes Thus forts evolved complex shapes that allowed defensive batteries of cannon to command interlocking fields of fire. Forward batteries commanded the slopes which defended walls deeper in the complex from direct fire. The defending cannon were not simply intended to deal with attempts to storm the walls, but to actively challenge attacking cannon and deny them approach close enough to the fort to engage in direct fire against the vulnerable walls. The key to the fort's defence moved to the outer edge of the ditch surrounding the fort, known as the covered way, or covert way. 
Defenders could move relatively safely in the cover of the ditch and could engage in active countermeasures to keep control of the glacis, the open slope that lay outside the ditch, by creating defensive earthworks to deny the enemy access to the glacis and thus to firing points that could bear directly onto the walls, and by digging counter mines to intercept and disrupt attempts to mine the fort walls. Compared to medieval fortifications, forts became both lower and larger in area, providing defence in depth, with tiers of defences that an attacker needed to overcome in order to bring cannon to bear on the inner layers of defences. Firing emplacements for defending cannon were heavily defended from bombardment by external fire, but open towards the inside of the fort, not only to diminish their usefulness to the attacker should they be overcome, but also to allow the large volumes of smoke that the defending cannon would generate to dissipate. Fortifications of this type continued to be effective while the attackers were armed only with cannon, where the majority of the damage inflicted was caused by momentum from the impact of solid shot. Because only low explosives such as black powder were available, explosive shells were largely ineffective against such fortifications. The development of mortars, high explosives, and the consequent large increase in the destructive power of explosive shells and thus plunging fire rendered the intricate geometry of such fortifications irrelevant. Warfare was to become more mobile. It took, however, many years to abandon the old fortress thinking. Construction Bastion forts were very expensive. Amsterdam's 22 bastions cost 11 million florins, and Siena in 1544 bankrupted itself to pay for its defences. For this reason, bastion forts were often improvised from earlier defences. Medieval curtain walls were torn down, and a ditch was dug in front of them. The earth from the excavation was piled behind the walls to create a solid structure. While purpose-built fortifications would often have a brick fascia because of the material's ability to absorb the shock of artillery fire, many improvised defences cut costs by leaving this stage out and instead opting for more earth. Improvisation could also consist of lowering medieval round towers and infilling them with earth to strengthen the structures. It was also often necessary to widen and deepen the ditch outside the walls to create a more effective barrier to frontal assault and mining. Engineers from the 1520s were also building massive, gently sloping banks of earth called glacis in front of ditches so that the walls were almost totally hidden from horizontal artillery fire. The main benefit of the glacis was to deny enemy artillery the ability to fire point-blank: the lower the angle of elevation of incoming shot, the higher the stopping power of the earthen bank. The first key instance of the trace italienne was at the Papal port of Civitavecchia, where the original walls were lowered and thickened because the stone tended to shatter under bombardment. Effectiveness The first major battle which truly showed the effectiveness of the trace italienne was the defence of Pisa in 1500 against a combined Florentine and French army. With the original medieval fortifications beginning to crumble to French cannon fire, the Pisans constructed an earthen rampart behind the threatened sector.
It was discovered that the sloping earthen rampart could be defended against escalade and was also much more resistant to cannon fire than the curtain wall it had replaced. The second siege was that of Padua in 1509. A monk engineer named Fra Giocondo, entrusted with the defence of the Venetian city, cut down the city's medieval wall and surrounded the city with a broad ditch that could be swept by flanking fire from gun ports set low in projections extending into the ditch. Finding that their cannon fire made little impression on these low ramparts, the French and allied besiegers made several bloody and fruitless assaults and then withdrew. The new type of fortification also played a role in the numerous Mediterranean wars, slowing down the Ottoman expansion. Although Rhodes had been partially upgraded to the new type of fortifications after the 1480 siege, it was still conquered in 1522; nevertheless it was a long and bloody siege, and the besieged had no hope of outside relief because the island was close to the Ottoman power base and far from any allies. On the other hand, the Ottomans failed to take Corfu in 1537 in no small part because of the new fortifications, and several attempts spanning almost two centuries (another major one was in 1716) also failed. Two star forts were built by the Order of Saint John on the island of Malta in 1552, Fort Saint Elmo and Fort Saint Michael. Fort Saint Elmo played a critical role in the Ottoman siege of 1565, when it managed to hold out against heavy bombardment for over a month. Eventually it fell, but the Ottoman casualties were very high, and it bought time for the relief force which arrived from Sicily to relieve the rest of the besieged island. The star fort therefore played a crucial and decisive role in the siege. After the fall of Venice to Napoleon, Corfu was occupied in 1797 by the French republican armies. The now ancient fortifications were still of some value at this point. A Russian–Ottoman–English alliance led at sea by Admiral Ushakov and with troops sent by Ali Pasha retook Corfu in 1799 after a four-month siege, when the garrison led by general Louis François Jean Chabot, being short of provisions and having lost the key island of Vido at the entrance of the port, surrendered and was allowed passage back to France. Theories about role in the Military Revolution The Military Revolution thesis was originally proposed by Michael Roberts in 1955, focusing on Sweden (1560–1660) and searching for major changes in the European way of war caused by the introduction of portable firearms. Roberts linked military technology with larger historical consequences, arguing that innovations in tactics, drill and doctrine by the Dutch and Swedes (1560–1660), which maximized the utility of firearms, led to a need for more trained troops and thus for permanent forces (standing armies). According to Geoffrey Parker in his article, The Military Revolution 1560–1660: A Myth?, the appearance of the trace italienne in early modern Europe, and the difficulty of taking such fortifications, is what resulted in a profound change in military strategy, most importantly, Parker argued, an increase in army sizes necessary to attack these forts. "Wars became a series of protracted sieges", Parker suggests, and open pitched battles became "irrelevant" in regions where the trace italienne existed. Ultimately, Parker argues, "military geography", in other words, the existence or absence of the trace italienne in a given area, shaped military strategy in the early modern period.
This is a profound alteration of the Military Revolution thesis. Parker's emphasis on the fortification as the key element has attracted substantial criticism from some academics, such as John A. Lynn and M. S. Kingra, particularly with respect to the claimed causal link between the new fortress design and increases in army sizes during this period. Obsolescence In the nineteenth century, with the development of more powerful artillery and explosive shells, star forts were replaced by simpler but more robust polygonal forts. In the twentieth century, with the development of tanks and aerial warfare during and after the First World War, fixed fortifications became and have remained less important than in previous centuries. Star forts reappeared during the early twenty-first-century French intervention in Mali where they were built by the 17th Parachute Engineer Regiment. Gallery
Technology
Fortification
null
1762926
https://en.wikipedia.org/wiki/Molecular%20machine
Molecular machine
Molecular machines are a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli, mimicking macroscopic devices such as switches and motors. Naturally occurring or biological molecular machines are responsible for vital living processes such as DNA replication and ATP synthesis. Kinesins and ribosomes are examples of molecular machines, and they often take the form of multi-protein complexes. For the last several decades, scientists have attempted, with varying degrees of success, to miniaturize machines found in the macroscopic world. The first example of an artificial molecular machine (AMM) was reported in 1994, featuring a rotaxane with a ring and two different possible binding sites. In 2016 the Nobel Prize in Chemistry was awarded to Jean-Pierre Sauvage, Sir J. Fraser Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines. AMMs have diversified rapidly over the past few decades, and their design principles, properties, and characterization methods have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules, such as rotation about single bonds or cis-trans isomerization. Different AMMs are produced by introducing various functionalities, such as the introduction of bistability to create switches. A broad range of AMMs has been designed, featuring different properties and applications; some of these include molecular motors, switches, and logic gates. A wide range of applications have been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions (such as materials research, homogeneous catalysis and surface chemistry). Terminology Several definitions describe a "molecular machine" as a class of molecules typically described as an assembly of a discrete number of molecular components intended to produce mechanical movements in response to specific stimuli. The expression is often more generally applied to molecules that simply mimic functions that occur at the macroscopic level. A few prime requirements for a molecule to be considered a "molecular machine" are: the presence of moving parts, the ability to consume energy, and the ability to perform a task. Molecular machines differ from other stimuli-responsive compounds that can produce motion (such as cis-trans isomers) in their relatively larger amplitude of movement (potentially due to chemical reactions) and the presence of a clear external stimulus to regulate the movements (as compared to random thermal motion). Piezoelectric, magnetostrictive, and other materials that produce a movement due to external stimuli on a macro-scale are generally not included, since despite the molecular origin of the motion the effects are not usable on the molecular scale. This definition generally applies to synthetic molecular machines, which have historically gained inspiration from naturally occurring biological molecular machines (also referred to as "nanomachines"). Biological machines are considered to be nanoscale devices (such as proteins) in a living system that convert various forms of energy to mechanical work in order to drive crucial biological processes such as intracellular transport, muscle contraction, ATP generation and cell division.
History Biological molecular machines have been known and studied for years given their vital role in sustaining life, and have served as inspiration for synthetically designed systems with similar useful functionality. The advent of conformational analysis, or the study of conformers to analyze complex chemical structures, in the 1950s gave rise to the idea of understanding and controlling relative motion within molecular components for further applications. This led to the design of "proto-molecular machines" featuring conformational changes such as cog-wheeling of the aromatic rings in triptycenes. By 1980, scientists could achieve desired conformations using external stimuli and utilize this for different applications. A major example is the design of a photoresponsive crown ether containing an azobenzene unit, which could switch between cis and trans isomers on exposure to light and hence tune the cation-binding properties of the ether. In his seminal 1959 lecture There's Plenty of Room at the Bottom, Richard Feynman alluded to the idea and applications of molecular devices designed artificially by manipulating matter at the atomic level. This was further substantiated by Eric Drexler during the 1970s, who developed ideas based on molecular nanotechnology such as nanoscale "assemblers", though their feasibility was disputed. Though these events served as inspiration for the field, the actual breakthrough in practical approaches to synthesize artificial molecular machines (AMMs) took place in 1991 with the invention of a "molecular shuttle" by Sir Fraser Stoddart. Building upon the assembly of mechanically linked molecules such as catenanes and rotaxanes as developed by Jean-Pierre Sauvage in the early 1980s, this shuttle features a rotaxane with a ring that can move across an "axle" between two ends or possible binding sites (hydroquinone units). This design realized the well-defined motion of a molecular unit across the length of the molecule for the first time. In 1994, an improved design allowed control over the motion of the ring by pH variation or electrochemical methods, making it the first example of an AMM. Here the two binding sites are a benzidine and a biphenol unit; the cationic ring typically prefers staying over the benzidine ring, but moves over to the biphenol group when the benzidine gets protonated at low pH or if it gets electrochemically oxidized. In 1998, a study could capture the rotary motion of a decacyclene molecule on a copper-base metallic surface using a scanning tunneling microscope. Over the following decade, a broad variety of AMMs responding to various stimuli were invented for different applications. In 2016, the Nobel Prize in Chemistry was awarded to Sauvage, Stoddart, and Bernard L. Feringa for the design and synthesis of molecular machines. Artificial molecular machines Over the past few decades, AMMs have diversified rapidly and their design principles, properties, and characterization methods have been outlined more clearly. A major starting point for the design of AMMs is to exploit the existing modes of motion in molecules. For instance, single bonds can be visualized as axes of rotation, as can be metallocene complexes. Bending or V-like shapes can be achieved by incorporating double bonds, that can undergo cis-trans isomerization in response to certain stimuli (typically irradiation with a suitable wavelength), as seen in numerous designs consisting of stilbene and azobenzene units. 
Similarly, ring-opening and -closing reactions such as those seen for spiropyran and diarylethene can also produce curved shapes. Another common mode of movement is the circumrotation of rings relative to one another, as observed in mechanically interlocked molecules (primarily catenanes). While this type of rotation cannot be accessed beyond the molecule itself (because the rings are confined within one another), rotaxanes can overcome this, as the rings can undergo translational movements along a dumbbell-like axis. Another line of AMMs consists of biomolecules such as DNA and proteins as part of their design, making use of phenomena like protein folding and unfolding. AMM designs have diversified significantly since the early days of the field. A major route is the introduction of bistability to produce molecular switches, featuring two distinct configurations for the molecule to convert between. This has been perceived as a step forward from the original molecular shuttle, which consisted of two identical sites for the ring to move between without any preference, in a manner analogous to the ring flip in an unsubstituted cyclohexane. If these two sites are different from each other in terms of features like electron density, this can give rise to weak or strong recognition sites as in biological systems; such AMMs have found applications in catalysis and drug delivery. This switching behavior has been further optimized to acquire useful work that gets lost when a typical switch returns to its original state. Inspired by the use of kinetic control to produce work in natural processes, molecular motors are designed to have a continuous energy influx to keep them away from equilibrium to deliver work. Various energy sources are employed to drive molecular machines today, but this was not the case during the early years of AMM development. Though the movements in AMMs were well defined relative to the random thermal motion generally seen in molecules, they could not at first be controlled or manipulated as desired. This led to the addition of stimuli-responsive moieties in AMM design, so that externally applied non-thermal sources of energy could drive molecular motion and hence allow control over the properties. Chemical energy (or "chemical fuels") was an attractive option at the beginning, given the broad array of reversible chemical reactions (heavily based on acid-base chemistry) for switching molecules between different states. However, this comes with the issue of practically regulating the delivery of the chemical fuel and the removal of the waste generated, so as to maintain the efficiency of the machine as in biological systems. Though some AMMs have found ways to circumvent this, more recently waste-free reactions such as those based on electron transfer or isomerization have gained attention (for example, redox-responsive viologens). Eventually, several different forms of energy (electric, magnetic, optical and so on) have become the primary energy sources used to power AMMs, even producing autonomous systems such as light-driven motors. Types Various AMMs are tabulated below along with indicative images: Biological molecular machines Many macromolecular machines are found within cells, often in the form of multi-protein complexes.
Examples of biological machines include motor proteins such as myosin, which is responsible for muscle contraction, kinesin, which moves cargo inside cells away from the nucleus along microtubules, and dynein, which moves cargo inside cells towards the nucleus and produces the axonemal beating of motile cilia and flagella. "[I]n effect, the [motile cilium] is a nanomachine composed of perhaps over 600 proteins in molecular complexes, many of which also function independently as nanomachines ... Flexible linkers allow the mobile protein domains connected by them to recruit their binding partners and induce long-range allostery via protein domain dynamics." Other biological machines are responsible for energy production, for example ATP synthase which harnesses energy from proton gradients across membranes to drive a turbine-like motion used to synthesise ATP, the energy currency of a cell. Still other machines are responsible for gene expression, including DNA polymerases for replicating DNA, RNA polymerases for producing mRNA, the spliceosome for removing introns, and the ribosome for synthesising proteins. These machines and their nanoscale dynamics are far more complex than any molecular machines that have yet been artificially constructed. Biological machines have potential applications in nanomedicine. For example, they could be used to identify and destroy cancer cells. Molecular nanotechnology is a speculative subfield of nanotechnology regarding the possibility of engineering molecular assemblers, biological machines which could re-order matter at a molecular or atomic scale. Nanomedicine would make use of these nanorobots, introduced into the body, to repair or detect damages and infections, but these are considered to be far beyond current capabilities. Research and applications Advances in this area are inhibited by the lack of synthetic methods. In this context, theoretical modeling has emerged as a pivotal tool to understand the self-assembly or -disassembly processes in these systems. Possible applications have been demonstrated for AMMs, including those integrated into polymeric, liquid crystal, and crystalline systems for varied functions. Homogenous catalysis is a prominent example, especially in areas like asymmetric synthesis, utilizing noncovalent interactions and biomimetic allosteric catalysis. AMMs have been pivotal in the design of several stimuli-responsive smart materials, such as 2D and 3D self-assembled materials and nanoparticle-based systems, for versatile applications ranging from 3D printing to drug delivery. AMMs are gradually moving from the conventional solution-phase chemistry to surfaces and interfaces. For instance, AMM-immobilized surfaces (AMMISs) are a novel class of functional materials consisting of AMMs attached to inorganic surfaces forming features like self-assembled monolayers; this gives rise to tunable properties such as fluorescence, aggregation and drug-release activity. Most of these "applications" remain at the proof-of-concept level. Challenges in streamlining macroscale applications include autonomous operation, the complexity of the machines, stability in the synthesis of the machines and the working conditions.
Technology
Basics
null
1763424
https://en.wikipedia.org/wiki/Turbidity%20current
Turbidity current
A turbidity current is most typically an underwater current of usually rapidly moving, sediment-laden water moving down a slope, although current research (2018) indicates that water-saturated sediment may be the primary actor in the process. Turbidity currents can also occur in other fluids besides water. Researchers from the Monterey Bay Aquarium Research Institute found that a layer of water-saturated sediment moved rapidly over the seafloor and mobilized the upper few meters of the preexisting seafloor. Plumes of sediment-laden water were observed during turbidity current events, but they believe that these were secondary to the pulse of seafloor sediment moving during the events. The belief of the researchers is that the water flow is the tail end of the process that starts at the seafloor. In the most typical case of oceanic turbidity currents, sediment-laden waters situated over sloping ground will flow downhill because they have a higher density than the adjacent waters. The driving force behind a turbidity current is gravity acting on the high density of the sediments temporarily suspended within a fluid. These semi-suspended solids make the average density of the sediment-bearing water greater than that of the surrounding, undisturbed water. As such currents flow, they often have a "snowballing effect", as they stir up the ground over which they flow and gather even more sedimentary particles in their current. Their passage leaves the ground over which they flow scoured and eroded. Once an oceanic turbidity current reaches the calmer waters of the flatter area of the abyssal plain (main oceanic floor), the particles borne by the current settle out of the water column. The sedimentary deposit of a turbidity current is called a turbidite. Seafloor turbidity currents are often the result of sediment-laden river outflows, and can sometimes be initiated by earthquakes, slumping and other soil disturbances. They are characterized by a well-defined advance front, also known as the current's head, which is followed by the current's main body. In terms of the more often observed and more familiar above-sea-level phenomenon, they somewhat resemble flash floods. Turbidity currents can sometimes result from submarine seismic instability, which is common with steep underwater slopes, and especially with submarine trench slopes of convergent plate margins, continental slopes and submarine canyons of passive margins. With an increasing continental shelf slope, current velocity increases; as the velocity of the flow increases, turbulence increases, and the current draws up more sediment. The increase in sediment also adds to the density of the current, and thus increases its velocity even further. Definition Turbidity currents are traditionally defined as those sediment gravity flows in which sediment is suspended by fluid turbulence. However, the term "turbidity current" was adopted to describe a natural phenomenon whose exact nature is often unclear. The turbulence within a turbidity current is not always the support mechanism that keeps the sediment in suspension; however, it is probable that turbulence is the primary or sole grain support mechanism in dilute currents (<3%). Definitions are further complicated by an incomplete understanding of the turbulence structure within turbidity currents, and the confusion between the terms turbulent (i.e. disturbed by eddies) and turbid (i.e. opaque with sediment).
Kneller & Buckee (2000) define a suspension current as 'flow induced by the action of gravity upon a turbid mixture of fluid and (suspended) sediment, by virtue of the density difference between the mixture and the ambient fluid'. A turbidity current is a suspension current in which the interstitial fluid is a liquid (generally water); a pyroclastic current is one in which the interstitial fluid is gas. Triggers Hyperpycnal plume When the concentration of suspended sediment at the mouth of a river is so large that the density of river water is greater than the density of sea water, a particular kind of turbidity current can form, called a hyperpycnal plume. The average concentration of suspended sediment for most river water that enters the ocean is much lower than the sediment concentration needed for entry as a hyperpycnal plume. However, some rivers have a continuously high sediment load that can create a continuous hyperpycnal plume, such as the Haile River (China), which has an average suspended sediment concentration of 40.5 kg/m3. The sediment concentration needed to produce a hyperpycnal plume in marine water is 35 to 45 kg/m3, depending on the water properties within the coastal zone (a numerical check of this threshold is given below). Most rivers produce hyperpycnal flows only during exceptional events, such as storms, floods, glacier outbursts, dam breaks, and lahar flows. In fresh water environments, such as lakes, the suspended sediment concentration needed to produce a hyperpycnal plume is quite low (1 kg/m3). Sedimentation in reservoirs The transport and deposition of sediments in narrow alpine reservoirs is often caused by turbidity currents. They follow the thalweg of the lake to the deepest area near the dam, where the sediments can affect the operation of the bottom outlet and the intake structures. Controlling this sedimentation within the reservoir can be achieved by using solid and permeable obstacles with the right design. Earthquake triggering Turbidity currents are often triggered by tectonic disturbances of the sea floor. The displacement of continental crust in the form of fluidization and physical shaking both contribute to their formation. Earthquakes have been linked to turbidity current deposition in many settings, particularly where physiography favors preservation of the deposits and limits the other sources of turbidity current deposition. Since the famous case of breakage of submarine cables by a turbidity current following the 1929 Grand Banks earthquake, earthquake-triggered turbidites have been investigated and verified along the Cascadia subduction zone, the northern San Andreas Fault, a number of European, Chilean and North American lakes, Japanese lacustrine and offshore regions, and a variety of other settings. Canyon-flushing When large turbidity currents flow into canyons they may become self-sustaining, and may entrain sediment that has previously been introduced into the canyon by littoral drift, storms or smaller turbidity currents. Canyon-flushing associated with surge-type currents initiated by slope failures may produce currents whose final volume may be several times that of the portion of the slope that has failed (e.g. Grand Banks). Slumping Sediment that has piled up at the top of the continental slope, particularly at the heads of submarine canyons, can create turbidity currents due to overloading and consequent slumping and sliding.
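To see why a suspended-sediment concentration of roughly 35 to 45 kg/m3 is needed for a river plume to plunge in seawater, as discussed under Hyperpycnal plume above, one can compare the bulk density of the sediment-water mixture with that of seawater. The densities in the Python sketch below are typical textbook values used for illustration, not figures taken from the text.

# Bulk density of a sediment-laden river outflow versus seawater.
# Typical illustrative densities (kg/m^3):
RHO_FRESH = 1000.0  # river water
RHO_SEA = 1025.0    # seawater
RHO_GRAIN = 2650.0  # quartz sediment grains

def mixture_density(conc, rho_fluid=RHO_FRESH, rho_grain=RHO_GRAIN):
    """Bulk density of water carrying conc kg of sediment per m^3,
    accounting for the volume the grains displace."""
    return rho_fluid + conc * (1.0 - rho_fluid / rho_grain)

for conc in (10, 35, 45):
    rho = mixture_density(conc)
    plunges = rho > RHO_SEA
    print(f"{conc:>2} kg/m^3 -> {rho:7.1f} kg/m^3, hyperpycnal: {plunges}")

With these values, 10 kg/m3 stays well short of seawater density, while the mixture crosses it between 35 and 45 kg/m3, consistent with the quoted threshold range.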
Convective sedimentation beneath river plumes A buoyant sediment-laden river plume can induce a secondary turbidity current on the ocean floor by the process of convective sedimentation. Sediment in the initially buoyant hypopycnal flow accumulates at the base of the surface flow, so that the dense lower boundary becomes unstable. The resulting convective sedimentation leads to a rapid vertical transfer of material to the sloping lake or ocean bed, potentially forming a secondary turbidity current. The vertical speed of the convective plumes can be much greater than the Stokes settling velocity of an individual particle of sediment. Most studies of this process have been carried out in the laboratory, but possible observational evidence of a secondary turbidity current was obtained in Howe Sound, British Columbia, where a turbidity current was periodically observed on the delta of the Squamish River. As the vast majority of sediment-laden rivers are less dense than the ocean, rivers cannot readily form plunging hyperpycnal flows. Convective sedimentation is therefore an important possible initiation mechanism for turbidity currents. Effect on ocean floor Large and fast-moving turbidity currents can carve gullies and ravines into the ocean floor of continental margins and cause damage to artificial structures such as telecommunication cables on the seafloor. Understanding where turbidity currents flow on the ocean floor can help to decrease the amount of damage to telecommunication cables by avoiding these areas or reinforcing the cables in vulnerable areas. When turbidity currents interact with regular ocean currents, such as contour currents, the turbidity currents can change direction. This ultimately shifts submarine canyons and sediment deposition locations. One example of this is located in the western part of the Gulf of Cadiz, where the ocean current leaving the Mediterranean Sea (also known as the Mediterranean outflow water) pushes turbidity currents westward. This has changed the shape of submarine valleys and canyons in the region to also curve in that direction. Deposits When the energy of a turbidity current wanes, its ability to keep sediment in suspension decreases and sediment deposition occurs. When the material comes to rest, the sand and other coarse material settles first, followed by mud and eventually the very fine particulate matter. It is this sequence of deposition that creates the so-called Bouma sequences that characterize turbidite deposits. Because turbidity currents occur underwater and happen suddenly, they are rarely seen as they happen in nature, so turbidites can be used to determine turbidity current characteristics. Some examples: grain size can give an indication of current velocity; grain lithology and the presence of foraminifera can indicate origins; grain distribution shows flow dynamics over time; and sediment thickness indicates sediment load and longevity. Turbidites are commonly used in the understanding of past turbidity currents. For example, the Peru–Chile Trench off southern central Chile (36°S–39°S) contains numerous turbidite layers that were cored and analysed. From these turbidites the history of turbidity currents in this area was reconstructed, increasing the overall understanding of these currents. Antidune deposits Some of the largest antidunes on Earth are formed by turbidity currents. One observed sediment-wave field is located on the lower continental slope off Guyana, South America.
This sediment-wave field covers an area of at least 29,000 km2 at a water depth of 4400–4825 meters. These antidunes have wavelengths of 110–2600 m and wave heights of 1–15 m. Turbidity currents responsible for wave generation are interpreted as originating from slope failures on the adjacent Venezuela, Guyana and Suriname continental margins. Simple numerical modelling has enabled turbidity current flow characteristics across the sediment waves to be estimated: internal Froude number = 0.7–1.1, flow thickness = 24–645 m, and flow velocity = 31–82 cm·s−1. Generally, on lower gradients beyond minor breaks of slope, flow thickness increases and flow velocity decreases, leading to an increase in wavelength and a decrease in height. Reversing buoyancy The behaviour of turbidity currents with buoyant fluid (such as currents with warm, fresh or brackish interstitial water entering the sea) has been investigated; their front speed is found to decrease more rapidly than that of currents with the same density as the ambient fluid. These turbidity currents ultimately come to a halt as sedimentation results in a reversal of buoyancy, and the current lifts off, the point of lift-off remaining constant for a constant discharge. The lofted fluid carries fine sediment with it, forming a plume that rises to a level of neutral buoyancy (if in a stratified environment) or to the water surface, and spreads out. Sediment falling from the plume produces a widespread fall-out deposit, termed a hemiturbidite. Experimental turbidity currents and field observations suggest that the shape of the lobe deposit formed by a lofting plume is narrower than that of a similar non-lofting plume. Prediction Prediction of erosion by turbidity currents, and of the distribution of turbidite deposits, such as their extent, thickness and grain size distribution, requires an understanding of the mechanisms of sediment transport and deposition, which in turn depends on the fluid dynamics of the currents. The extreme complexity of most turbidite systems and beds has promoted the development of quantitative models of turbidity current behaviour inferred solely from their deposits. Small-scale laboratory experiments therefore offer one of the best means of studying their dynamics. Mathematical models can also provide significant insights into current dynamics. In the long term, numerical techniques are most likely the best hope of understanding and predicting three-dimensional turbidity current processes and deposits. In most cases, there are more variables than governing equations, and the models rely upon simplifying assumptions in order to achieve a result. The accuracy of the individual models thus depends upon the validity and choice of the assumptions made. Experimental results provide a means of constraining some of these variables as well as providing a test for such models. Physical data from field observations, or, more practically, from experiments, are still required in order to test the simplifying assumptions necessary in mathematical models. Most of what is known about large natural turbidity currents (i.e. those significant in terms of sediment transfer to the deep sea) is inferred from indirect sources, such as submarine cable breaks and heights of deposits above submarine valley floors. During the 2003 Tokachi-oki earthquake, however, a large turbidity current was observed by a cabled observatory, providing the kind of direct observation that is rarely achieved.
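The internal Froude number quoted above relates flow velocity, flow thickness and the density contrast of the current. The sketch below shows the arithmetic with a single assumed reduced gravity; because the real modelling varies the density contrast from site to site, the numbers here are purely illustrative.

```python
import math

# Sketch: internal (densimetric) Froude number Fr = U / sqrt(g' * h).
# g' is assumed; U and h are picked from within the ranges reported
# for the Guyana sediment-wave field above.

def internal_froude(U, g_prime, h):
    """U in m/s, g' in m/s^2, h in m. Fr near 1 marks the transition
    between subcritical and supercritical flow, the regime in which
    antidunes form."""
    return U / math.sqrt(g_prime * h)

G_PRIME = 0.003  # reduced gravity of a very dilute current, m/s^2 (assumed)
for U, h in [(0.31, 24.0), (0.50, 100.0), (0.82, 645.0)]:
    print(f"U = {U:.2f} m/s, h = {h:6.1f} m -> Fr = {internal_froude(U, G_PRIME, h):.2f}")
```

With one fixed g' the computed values straddle the reported 0.7–1.1 range, and they illustrate why thicker, slower flows on gentle gradients sit at lower Froude numbers, consistent with the wavelength and height trends described above.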
Oil exploration Oil and gas companies are also interested in turbidity currents because the currents deposit organic matter that over geologic time becomes buried, compressed and transformed into hydrocarbons. Numerical modelling and flume experiments are commonly used to help understand these questions. Much of the modelling is used to reproduce the physical processes which govern turbidity current behaviour and deposits. Modeling approaches Shallow-water models The so-called depth-averaged or shallow-water models were initially introduced for compositional gravity currents and later extended to turbidity currents. The typical assumptions used along with the shallow-water models are: a hydrostatic pressure field, that clear fluid is not entrained (or detrained), and that particle concentration does not depend on the vertical location. Considering the ease of implementation, these models can typically predict flow characteristics such as front location or front speed in simplified geometries, e.g. rectangular channels, fairly accurately. Depth-resolved models With the increase in computational power, depth-resolved models have become a powerful tool to study gravity and turbidity currents. These models are, in general, focused on the solution of the Navier–Stokes equations for the fluid phase. For dilute suspensions of particles, an Eulerian approach has proved accurate in describing the evolution of particles in terms of a continuum particle concentration field. These models need none of the simplifying assumptions of shallow-water models, so accurate calculations and measurements can be performed to study these currents: the pressure field, energy budgets, vertical particle concentration profiles and accurate deposit heights, to mention a few. Both direct numerical simulation (DNS) and turbulence modeling are used to model these currents. Notable examples of turbidity currents Within minutes after the 1929 Grand Banks earthquake occurred off the coast of Newfoundland, transatlantic telephone cables began breaking sequentially, farther and farther downslope, away from the epicenter. Twelve cables were snapped in a total of 28 places. Exact times and locations were recorded for each break. Investigators suggested that an estimated 60-mile-per-hour (100 km/h) submarine landslide or turbidity current of water-saturated sediments swept 400 miles (600 km) down the continental slope from the earthquake's epicenter, snapping the cables as it passed. Subsequent research on this event has shown that continental slope sediment failures mostly occurred below 650 meters water depth. The slumping that occurred in shallow waters (5–25 meters) passed downslope into turbidity currents that evolved ignitively (i.e., became self-accelerating by entraining further sediment). The turbidity currents sustained flow for many hours due to the delayed retrogressive failure and the transformation of debris flows into turbidity currents through hydraulic jumps. The Cascadia subduction zone, off the northwestern coast of North America, has a record of earthquake-triggered turbidites that is well correlated with other evidence of earthquakes recorded in coastal bays and lakes during the Holocene. Forty-one Holocene turbidity currents have been correlated along all or part of the approximately 1000 km long plate boundary stretching from northern California to mid-Vancouver Island. The correlations are based on radiocarbon ages and subsurface stratigraphic methods.
The inferred recurrence interval of Cascadia great earthquakes is approximately 500 years along the northern margin, and approximately 240 years along the southern margin. Taiwan is a hot spot for submarine turbidity currents, as there are large amounts of sediment suspended in its rivers and it is seismically active, leading to large accumulations of seafloor sediment and frequent earthquake triggering. During the 2006 Pingtung earthquake off SW Taiwan, eleven submarine cables across the Kaoping canyon and Manila Trench were broken in sequence at depths from 1,500 to 4,000 m, as a consequence of the associated turbidity currents. From the timing of each cable break, the velocity of the current was determined to have a positive relationship with bathymetric slope: current velocities were highest on the steepest slopes and lowest on the shallowest slopes. One of the earliest observations of turbidity currents was by François-Alphonse Forel. In the late 1800s he made detailed observations of the plunging of the Rhône river into Lake Geneva at Port Valais. His papers were possibly the earliest identification of a turbidity current, and he discussed how the submarine channel formed from the delta. In this freshwater lake, it is primarily the cold water that leads to plunging of the inflow; the sediment load by itself is generally not high enough to overcome the summer thermal stratification in Lake Geneva. The longest turbidity current ever recorded occurred in January 2020 and flowed through the Congo Canyon over the course of two days, damaging two submarine communications cables. The current was a result of sediment deposited by the 2019–2020 Congo River floods.
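Velocity estimates such as those for the Grand Banks and Pingtung events come from the timing of successive cable breaks. The sketch below shows the underlying arithmetic on invented positions and times; none of the values correspond to the actual cable records.

```python
# Sketch: estimating turbidity current speed from sequential cable breaks.
# Distances along the flow path and break times are hypothetical.
breaks = [
    (100.0, 60.0),   # (km downslope from source, minutes after earthquake)
    (250.0, 180.0),
    (400.0, 330.0),
]

for (d1, t1), (d2, t2) in zip(breaks, breaks[1:]):
    speed_kmh = (d2 - d1) * 60.0 / (t2 - t1)  # average speed between breaks
    print(f"{d1:.0f}-{d2:.0f} km: {speed_kmh:.0f} km/h")
```

In this made-up example the speed falls from 75 km/h to 60 km/h between successive segments, the deceleration pattern expected as the slope flattens downcurrent.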
Physical sciences
Oceanography
Earth science
1764276
https://en.wikipedia.org/wiki/Autobahn
Autobahn
The Autobahn is the federal controlled-access highway system in Germany. The official term is Bundesautobahn (abbreviated BAB), which translates as 'federal motorway'. The literal meaning of the word is 'Federal Auto(mobile) Track'. Much of the system has no speed limit for some classes of vehicles. However, limits are posted and enforced in areas that are urbanised, substandard, accident-prone, or under construction. On speed-unrestricted stretches, an advisory speed limit (Richtgeschwindigkeit) of 130 km/h applies. While driving faster is not illegal in the absence of a speed limit, it can cause an increased liability in the case of a collision (which mandatory auto insurance has to cover); courts have ruled that an "ideal driver" who is exempt from absolute liability for "inevitable" tort under the law would not exceed the advisory speed limit. A 2017 report by the Federal Road Research Institute stated that in 2015, 70.4% of the Autobahn network had only the advisory speed limit, 6.2% had temporary speed limits due to weather or traffic conditions, and 23.4% had permanent speed limits. Measurements from the German state of Brandenburg in 2006 showed high average speeds on a 6-lane section of Autobahn in free-flowing conditions. Names Only federally built controlled-access highways with certain construction standards, including at least two lanes per direction, are called Bundesautobahn. They have their own white-on-blue signs and numbering system. In the 1930s, when construction began on the system, the official name was Reichsautobahn. Various other controlled-access highways exist on the federal (Bundesstraße), state (Landesstraße), district, and municipal level but are not part of the Autobahn network and are officially referred to as Kraftfahrstraße (with rare exceptions, like the A 995 Munich-Giesing–Brunntal until 2018). These highways are considered autobahnähnlich (autobahn-like) and are sometimes colloquially called Gelbe Autobahn (yellow autobahn) because most of them are Bundesstraßen (federal highways) with yellow signs. Some controlled-access highways are classified as "Bundesautobahn" in spite of not meeting the autobahn construction standard (for example, the A 62 near Pirmasens). As with some other German words, the term autobahn, when used in English, is usually understood to refer specifically to the national highway system of Germany, whereas in German the word autobahn applies to any controlled-access highway in any country. For this reason, in German, the more specific term Bundesautobahn is strongly preferred when the intent is to make specific reference to Germany's Autobahn network. Construction Similar to high-speed motorways in other countries, autobahns have multiple lanes of traffic in each direction, separated by a central barrier, with grade-separated junctions and access restricted to motor vehicles with a top speed greater than 60 km/h. Nearly all exits are to the right; rare left-hand exits result from incomplete interchanges where the "straight-on" lane leads into the exit. The earliest motorways were flanked by narrow shoulders constructed of varying materials; right-hand shoulders on many autobahns were later widened when it was realized cars needed the additional space to pull off the autobahn safely. In the postwar years, a thicker asphaltic concrete cross-section with fully paved hard shoulders came into general use. The top design speed in flat country was higher than the design speeds used in hilly or mountainous terrain.
A flat-country autobahn that was constructed to meet standards during the Nazi period could support high speeds on curves. Numbering system The current autobahn numbering system in use in Germany was introduced in 1974. All autobahns are named by using the capital letter A, which simply stands for "Autobahn", followed by a blank and a number (for example A 8). The main autobahns going all across Germany have a single-digit number. Shorter autobahns that are of regional importance (e.g. connecting two major cities or regions within Germany) have a double-digit number (e.g. A 24, connecting Berlin and Hamburg). The system is as follows: A 10 to A 19 are in eastern Germany (Berlin, Saxony-Anhalt, parts of Saxony and Brandenburg); A 20 to A 29 are in northern and northeastern Germany; A 30 to A 39 are in Lower Saxony (northwestern Germany) and Thuringia; A 40 to A 49 are in the Rhine-Ruhr and Frankfurt Rhine-Main regions; A 52 to A 59 are in the Lower Rhine region to Cologne; A 60 to A 67 are in Rhineland-Palatinate, Saarland, Hesse and northern Baden-Württemberg; A 70 to A 73 are in Thuringia, northern Bavaria and parts of Saxony; A 81 is in Baden-Württemberg; A 90 to A 99 are in (southern) Bavaria; A 98 is in Baden-Württemberg. There are also some very short autobahns built just for local traffic (e.g. ring roads or the A 555 from Cologne to Bonn) that usually have three digits for numbering. The first digit used is similar to the system above, depending on the region. East–west routes are even-numbered, north–south routes are odd-numbered. The north–south autobahns are generally numbered from west to east; that is to say, the more easterly roads are given higher numbers. Similarly, the east–west routes are numbered from north (lower numbers) to south (higher numbers). History Early years The idea for the construction of the autobahn was first conceived in the mid-1920s during the days of the Weimar Republic, but construction was slow, and most projected sections did not progress much beyond the planning stage due to economic problems and a lack of political support. One project was the private initiative HaFraBa, which planned a "car-only road" crossing Germany from Hamburg in the north via central Frankfurt am Main to Basel in Switzerland. Parts of the HaFraBa were completed in the late 1930s and early 1940s, but construction eventually was halted by World War II. The first public road of this kind was completed in 1932 between Cologne and Bonn and opened by Konrad Adenauer (Lord Mayor of Cologne and future Chancellor of West Germany) on 6 August 1932. Today, that road is the Bundesautobahn 555. This road was not yet called an Autobahn and lacked the centre median of modern motorways; it was instead termed a Kraftfahrstraße ("motor vehicle road"), with two lanes in each direction and without intersections, pedestrians, bicycles, or animal-powered transportation. 1930s Just days after the 1933 Nazi takeover, Adolf Hitler enthusiastically embraced an ambitious autobahn construction project, appointing Fritz Todt, the Inspector General of German Road Construction, to lead it. By 1936, 130,000 workers were directly employed in construction, as well as an additional 270,000 in the supply chain for construction equipment, steel, concrete, signage, maintenance equipment, etc. In rural areas, new camps to house the workers were built near construction sites. The job creation aspect of the program was not especially important because full employment was almost reached by 1936.
However, according to one source, autobahn workers were often conscripted through the compulsory Reich Labor Service (and thereby removed from the unemployment registry). The autobahns were not primarily intended as a major infrastructure improvement of special value to the military, as has sometimes been stated; their military value was limited, as all large-scale military transportation in Germany was done by train to save fuel. The propaganda ministry turned the construction of the autobahns into a major media event that attracted international attention. The autobahns formed the first limited-access, high-speed road network in the world, with the first section from Frankfurt am Main to Darmstadt opening in 1935. This straight section was used for high-speed record attempts by the Grand Prix racing teams of Mercedes-Benz and Auto Union until a fatal accident involving popular German race driver Bernd Rosemeyer in early 1938. The world record of 432.7 km/h set by Rudolf Caracciola on this stretch just prior to the accident remains one of the highest speeds ever achieved on a public motorway. In the 1930s, a ten-kilometre stretch of what is today Bundesautobahn 9 just south of Dessau—called the Dessauer Rennstrecke—had bridges with no piers and was designed for cars like the Mercedes-Benz T80 to attempt land speed records. The T80 was to make a record attempt in January 1940, but plans were abandoned after the outbreak of World War II in Europe in September 1939. World War II During World War II, many of Germany's workers were required for various war production tasks. Construction work on the autobahn system therefore increasingly relied on forced workers and concentration camp inmates, and working conditions were very poor. As of 1942, when the war turned against the Third Reich, only a fraction of the planned autobahn network had been completed. Meanwhile, the median strips of some autobahns were paved over to allow their conversion into auxiliary airstrips. Aircraft were either stashed in numerous tunnels or camouflaged in nearby woods. However, for the most part during the war, the autobahns were not militarily significant. Motor vehicles, such as trucks, could not carry goods or troops as quickly, in as much bulk, or in the same numbers as trains could, and the autobahns could not be used by tanks, as their weight and caterpillar tracks damaged the road surface. The general shortage of petrol in Germany during much of the war, as well as the low number of trucks and motor vehicles needed for direct support of military operations, further decreased the autobahn's significance. As a result, most military and economic freight was carried by rail. After the war, numerous sections of the autobahns were in bad shape, severely damaged by heavy Allied bombing and military demolition. Furthermore, thousands of kilometres of autobahn remained unfinished, their construction brought to a halt by 1943 due to the increasing demands of the war effort. West Germany: 1949–1990 In West Germany (FRG), most existing autobahns were repaired soon after the war. During the 1950s, the West German government restarted the construction program. It invested in new sections and in improvements to older ones. Finishing the incomplete sections took longer, with some stretches opened to traffic only by the 1980s. Some sections cut by the Iron Curtain in 1945 were only completed after German reunification in 1990. Others were never completed, as more advantageous routes were found.
An example is the section between Bad Brückenau and Gemünden am Main on the Fulda–Würzburg route, which was replaced by the A 7. East Germany: 1949–1990 The autobahns of East Germany (GDR) were neglected in comparison to those in West Germany after 1945. In 1956, the speed limit was set to 100 km/h in the new version of the Rules of the Road (Straßenverkehrsordnung), which adopted many rules that corresponded to the international standards of the time. The reasons for this speed limit are unknown. It is often argued that the roads were in a poor state; however, there is no proof that road conditions were a relevant factor in introducing the speed limit, especially since the roads were little used in the first 20 years after the Second World War and the majority of the road network was based on the Reichsautobahn of Nazi Germany, just as in West Germany, and was thus in a good state. Speed limit violations on the autobahns of the GDR were rare because most cars did not have the engine power to go much faster than the set limit. For example, the most common car of the GDR, the Trabant, could reach a top speed of only about 100 km/h. Reunification: 1990–present day The last of the remaining original Reichsautobahn, a section of A 11 northeast of Berlin near Gartz built in 1936—the westernmost remainder of the never-finished Berlinka—was scheduled for replacement around 2015. Its roadway condition has been described as "deplorable"; the concrete slabs, too long for proper expansion, are cracking under the weight of the traffic as well as the weather. Length Germany's autobahn network had a total length of about 13,200 km in 2021 and a density of 36 motorway kilometres per thousand square kilometres (Eurostat), which ranks it among the densest and longest controlled-access systems in the world, and fifth in density within the EU in 2016 (Netherlands 66, Finland 3). Longer similar systems can be found in the United States and in China. However, both the U.S. and China have an area nearly 30 times bigger than Germany, which demonstrates the high density of Germany's highway system. German-built Reichsautobahnen in other countries The first autobahn in Austria was the West Autobahn from Wals near Salzburg to Vienna. Building started by command of Adolf Hitler shortly after the Anschluss in 1938. It extended the Reichsautobahn 26 from Munich (the present-day A 8); however, only a section including the branch-off of the planned Tauern Autobahn was opened to the public on 13 September 1941. Construction work was discontinued the next year and was not resumed until 1955. There are sections of the former German Reichsautobahn system in the former eastern territories of Germany, i.e. East Prussia, Farther Pomerania, and Silesia; these territories became parts of Poland and the Soviet Union with the implementation of the Oder–Neisse line after World War II. Parts of the planned autobahn from Berlin to Königsberg (the Berlinka) were completed as far as Stettin (Szczecin) on 27 September 1936. After the war, they were incorporated as the A6 autostrada of the Polish motorway network. A single-carriageway section of the Berlinka east of the former "Polish Corridor" and the Free City of Danzig opened in 1938; today it forms the Polish S22 expressway from Elbląg (Elbing) to the border with the Russian Kaliningrad Oblast, where it is continued by the R516 regional road.
Also on 27 September 1936, a section from Breslau (Wrocław) to Liegnitz (Legnica) in Silesia was inaugurated; it is today part of the Polish A4 autostrada. It was followed the next year by the single-carriageway Reichsautobahn 9 from Bunzlau (Bolesławiec) to Sagan (Żagań), today part of the Polish A18 autostrada. After the German occupation of Czechoslovakia, plans for a motorway connecting Breslau with Vienna via Brno (Brünn) in the "Protectorate of Bohemia and Moravia" were carried out from 1939 until construction work was discontinued in 1942. A section of the former Strecke 88 near Brno is today part of the D52 motorway of the Czech Republic. Also, there is the isolated and abandoned twin-carriageway Borovsko Bridge southeast of Prague, on which construction started in July 1939 and halted after the assassination of Reinhard Heydrich by former Czechoslovak army soldiers at the end of May 1942. Current density Germany's autobahn network has a total length of about 13,200 km. Since 2009, Germany has embarked on a massive widening and rehabilitation project, expanding the lane count of many of its major arterial routes, such as the A 5 in the southwest and the A 8 going east–west. Most sections of Germany's autobahns have two or three, sometimes four, lanes in each direction in addition to an emergency lane (hard shoulder). A few sections have only two lanes in each direction without emergency lanes, and short slip roads and ramps. The motorway density in Germany was 36 kilometres per thousand square kilometres in 2016, close to that of the smaller countries nearby (Netherlands, Belgium, Luxembourg, Switzerland, Slovenia). Facilities Emergency telephones About 17,000 emergency telephones are distributed at regular intervals all along the autobahn network, with triangular stickers on the armco barriers pointing the way to the nearest one. Despite the increasing use of mobile phones, there are still about 150 calls made each day on average (down from some 700 in 2013). This still equals four calls per kilometre each year. The location of the caller is automatically sent to the operator. Parking, rest areas, and truck stops For breaks during longer journeys, parking sites, rest areas, and truck stops are distributed over the complete Autobahn network. Parking on the autobahn is prohibited in the strictest terms outside these designated areas. There is a distinction between "managed" and "unmanaged" rest areas (German: bewirtschaftet / unbewirtschaftet). Unmanaged rest areas are basically only parking spaces, sometimes with toilets. They form a part of the German highway system; the plots of land are federal property. Autobahn exits leading to such parking areas are marked well in advance with a blue sign bearing the white letter "P". They are usually found every few kilometres. Some of them bear local or historic names. A managed rest area (German: Autobahnraststätte, or Raststätte for short) usually also includes a filling station, charging station, lavatories, toilets, and baby-changing facilities. Most rest areas also have restaurants, shops, public telephones, Internet access, and a playground. Some have hotels. Mandated at regular intervals, rest areas are usually open all night. Both kinds of rest areas are directly on the autobahn, with their own exits, and any service roads connecting them to the rest of the road network are usually closed to general traffic. Apart from rare exceptions, the autobahn may be neither left nor entered at rest areas.
Truck stops (German: Autohof, plural Autohöfe) are large filling stations located at general exits, usually at a small distance from the autobahn, combined with fast-food facilities and/or restaurants, but with no ramps of their own. They mostly sell fuel at normal price levels, while fuel prices at the Raststätten are significantly higher. Rest areas and truck stops are marked several times as motorists approach, starting several kilometres in advance, and include large signs that often include icons announcing what kinds of facilities travellers can expect, such as hotels, filling stations, rest areas, etc. Speed limits Germany's autobahns are famous for being among the few public roads in the world without blanket speed limits for cars and motorcycles. As such, they are important German cultural identifiers, "often mentioned in hushed, reverential tones by motoring enthusiasts and looked at with a mix of awe and terror by outsiders." Some speed limits are implemented on different autobahns, and certain limits are imposed on some classes of vehicles. Additionally, speed limits are posted at most on- and off-ramps and interchanges and at other danger points, like sections under construction or in need of repair. Where no general limit exists, the advisory speed limit is 130 km/h, referred to in German as the Richtgeschwindigkeit. The advisory speed is not enforceable; however, being involved in an accident while driving at higher speeds can lead to the driver being deemed at least partially responsible due to "increased operating danger" (Erhöhte Betriebsgefahr). The Federal Road Research Institute (Bundesanstalt für Straßenwesen) solicited information about speed regulations on autobahns from the sixteen States and reported a comparison of the years 2006 and 2008. Except at construction sites, the general speed limits, where they apply, are usually between and ; construction sites usually have a speed limit of but the limit may be as low as . In rare cases, sections may have limits of , or on one ramp . Certain stretches have lower speed limits during wet weather. Some areas have a speed limit of in order to reduce noise pollution during overnight hours (usually 10 pm – 6 am) or because of increased traffic during daytime (6 am – 8 pm). Some limits were imposed to reduce pollution and noise. Limits can also be put into place temporarily through dynamic traffic guidance systems that display the corresponding message. More than half of the total length of the German autobahn network has no speed limit, about one third has a permanent limit, and the remaining parts have a temporary or conditional limit. Some cars with very powerful engines can reach speeds of well over . Major German car manufacturers, except Porsche, follow a gentlemen's agreement by electronically limiting the top speeds of their cars—with the exception of some top-of-the-range models or engines—to 250 km/h. These limiters can be deactivated, so speeds up to might arise on the German autobahn, but due to other traffic, such speeds are generally not attainable except during certain times, like between 10 p.m. and 6 a.m. or on Sundays (when truck drivers have to rest by law). Furthermore, certain autobahn sections are known for having light traffic, making such speeds attainable during most days (especially some of those located in eastern Germany). Most unlimited sections of the autobahn are located outside densely populated areas.
Vehicles with a top speed of less than 60 km/h (such as quads, low-end microcars, and agricultural/construction equipment) are not allowed to use the autobahn, nor are motorcycles and scooters with low engine capacity, regardless of top speed (mainly applicable to mopeds, which are typically speed-limited anyway). To comply with this rule, heavy-duty trucks in Germany (e.g. mobile cranes, tank transporters, etc.) often have a maximum design speed of 62 km/h (usually denoted by a round black-on-white sign with "62" on it), along with flashing orange beacons to warn approaching cars that they are travelling slowly. There is no general minimum speed, but drivers are not allowed to drive at an unnecessarily low speed, as this would lead to significant traffic disturbance and an increased collision risk. Public debate German national speed limits have a historical association with war-time restrictions and deprivations, the Nazi era, and the Soviet era in East Germany. After the Nazi dictatorship, German society was happy to overcome the traumas of war by freeing itself from most government restrictions, prohibitions and regulations. "Free driving for free citizens" ("freie Fahrt für freie Bürger"), promoted by the German Auto Club since the 1970s, is a popular slogan among those opposing autobahn speed restrictions. Tarek Al-Wazir, head of the Green Party in Hesse and currently the Hessian Transport Minister, has stated that "the speed limit in Germany has a similar status as the right to bear arms in the American debate. At some point, a speed limit will become reality here, and soon we will not be able to remember the time before. It's like the smoking ban in restaurants." Early history The Weimar Republic had no federally required speed limits. The first crossroads-free road for motorized vehicles only, now the A 555 between Bonn and Cologne, had a speed limit when it opened in 1932. In October 1939, the Nazis instituted the first national maximum speed limit, throttling speeds in order to conserve gasoline for the war effort. After the war, the four Allied occupation zones established their own speed limits until the divided East German and West German republics were constituted in 1949; initially, the Nazi speed limits were restored in both East and West Germany. After the World Wars In December 1952 the West German legislature voted to abolish all national speed limits, reverting to State-level decisions. National limits were reestablished incrementally. The urban limit of 50 km/h was enacted in 1956, effective in 1957. The 100 km/h limit on rural roads—except autobahns—became effective in 1972. Oil crisis of the 1970s Just prior to the 1973 oil crisis, Germany, Switzerland, and Austria all had no general speed restriction on autobahns. During the crisis, like other nations, Germany imposed temporary speed restrictions; for example, 100 km/h on autobahns effective 13 November 1973. Automakers projected a 20% plunge in sales, which they attributed in part to the lowered speed limits. The 100 km/h limit championed by Transportation Minister Lauritz Lauritzen lasted 111 days. Adjacent nations with unlimited-speed autobahns, Austria and Switzerland, imposed permanent limits after the crisis. However, after the crisis eased in 1974, the upper house of the German parliament, which was controlled by conservative parties, successfully resisted the imposition of a permanent mandatory limit supported by Chancellor Brandt. The upper house insisted on a recommended limit until a thorough study of the effects of a mandatory limit could be conducted.
Accordingly, the Federal Highway Research Institute conducted a multiple-year experiment, switching between mandatory and recommended limits on two test stretches of autobahn. In the final report, issued in 1977, the Institute stated that a mandatory speed limit could reduce the autobahn death toll but would have economic impacts, so a political decision had to be made due to the trade-offs involved. At that time, the federal government declined to impose a mandatory limit. The fatality rate trend on the German autobahn mirrored that of other nations' motorways that imposed a general speed limit. Environmental concerns of the 1980s In the mid-1980s, acid rain and sudden forest dieback renewed debate on whether or not a general speed limit should be imposed on autobahns. A car's fuel consumption increases with high speed, and fuel conservation is a key factor in reducing air pollution. Environmentalists argued that enforcing limits on autobahns and on other rural roads would save lives as well as the forest, reducing the annual death toll by 30% (250 lives) on autobahns and 15% (1,000 lives) on rural roads; the German motor vehicle death toll was about 10,000 at the time. The federal government sponsored a large-scale experiment with a speed limit in order to measure the impact of reduced speeds on emissions and compliance. Afterward, the federal government again declined to impose a mandatory limit, deciding that the modest measured emission reduction would have no meaningful effect on forest loss. By 1987, all restrictions on test sections had been removed, even in Hesse, where the state government was controlled by a "red-green" coalition. German reunification Prior to German reunification in 1990, eastern German states focused on restrictive traffic regulation, such as an autobahn speed limit of 100 km/h and lower limits on other rural roads. Within two years after the border opening, the availability of high-powered vehicles and a 54% increase in motorized traffic led to a doubling of annual traffic deaths, despite "interim arrangements [which] involved the continuation of the speed limit of on autobahns and of outside cities". An extensive program of the four Es (enforcement, education, engineering, and emergency response) brought the number of traffic deaths back to pre-unification levels after a decade of effort, while traffic regulations were conformed to western standards (e.g., freeway advisory limit, on other rural roads, and 0.05 percent BAC). Since reunification In 1993, the Social Democratic–Green Party coalition controlling the State of Hesse experimented with a limit on autobahns and on other rural roads. These limits were attempts to reduce ozone pollution. During his term of office (1998 to 2005) as Chancellor of Germany, Gerhard Schröder opposed an autobahn speed limit, famously referring to Germany as an Autofahrernation (a "nation of drivers"). In October 2007, at a party congress held by the Social Democratic Party of Germany, delegates narrowly approved a proposal to introduce a blanket speed limit of 130 km/h on all German autobahns. While this initiative was primarily a part of the SPD's general strategic outline for the future and, according to practice, not necessarily meant to affect immediate government policy, the proposal stirred up debate once again; Germany's chancellor since 2005, Angela Merkel, and leading cabinet members expressed outspoken disapproval of such a measure.
In 2008, the Social Democratic–Green Party coalition controlling Germany's smallest state, the paired city-state of Bremen and Bremerhaven, imposed a 120-kilometre-per-hour (75 mph) limit on its last stretch of speed-unlimited autobahn in hopes of leading other states to do likewise. In 2011, the first-ever Green minister-president of any German state, Winfried Kretschmann of Baden-Württemberg, initially argued for a similar, state-level limit. However, Baden-Württemberg is an important location for the German motor industry, including the headquarters of Daimler AG and Porsche; the ruling coalition ultimately decided against a state-level limit on its speed-unlimited roads, arguing for a nationwide speed limit instead. In 2014, the conservative-liberal ruling coalition of Saxony confirmed its rejection of a general speed limit on autobahns, instead advocating dynamic traffic controls where appropriate. Between 2010 and 2014 in the State of Hesse, transportation ministers Dieter Posch and his successor Florian Rentsch, both members of the Free Democratic Party, removed or raised speed limits on several sections of autobahn following regular five-year reviews of speed limit effectiveness, some of them just prior to the installation of Tarek Al-Wazir (Green Party) as Transportation Minister in January 2014 as part of an uneasy CDU–Green coalition government. In 2015, the left-green coalition government of Thuringia declared that a general autobahn limit was a federal matter; Thuringia would not unilaterally impose a general statewide limit, although the Thuringian environmental minister had recommended one. In late 2015, Winfried Hermann, Baden-Württemberg's Green minister of transportation, promised to impose a trial speed limit on about 10% of the state's autobahns beginning in May 2016. However, the ruling Green–Social Democratic coalition lost its majority in the March 2016 elections; while Hermann retained his post in the new Green–Christian Democratic government, he put aside preparations for a speed limit due to opposition from his new coalition partners. In 2019, the Green Party introduced a motion for a blanket 130 km/h speed limit on the autobahn, but it was defeated in the Bundestag. A second attempt to reopen debate on the issue was made by the Left Party in 2022; it was rejected by the majority of the opposition CDU/CSU and Alternative for Germany (AfD) and by the governing Free Democratic Party (FDP), while Alliance 90/The Greens and the SPD were obliged by their traffic-light coalition with the FDP to reject the proposal as well. Safety In 2014, autobahns carried 31% of motorized road traffic while accounting for 11% of Germany's traffic deaths. The autobahn fatality rate of 1.6 deaths per billion travel-kilometres compared favorably with the 4.6 rate on urban streets and the 6.5 rate on rural roads. However, these types of roads are not comparable, according to German traffic researcher Bernhard Schlag: "You don't have some of the problems that are accident-prone there at all. No cyclists, no pedestrians, no crossing traffic, hardly any direct oncoming traffic. In that sense, it's not surprising that autobahns are relatively safe roads [compared to other road types]." According to official statistics from 2018, highways without a speed limit in Germany accounted for about 71% of highway fatalities. However, autobahns without speed limits also account for 70% of the entire autobahn network, which puts the high proportion of collision fatalities on stretches without speed limits into perspective.
The conclusion often drawn from this, that speed limits would not make roads significantly safer, is nevertheless a fallacy, since it is precisely the roads with a high volume of traffic, and thus a high risk of collisions, that are given speed limits. According to Schlag, unsafe and older drivers in particular avoid the autobahn because they perceive the high speed differentials and very fast drivers as frightening, and instead congregate on rural roads, where the risk of collisions is higher anyway. In contrast to other road types, where the number of collisions has continuously decreased, the number of collisions on autobahns has remained relatively stable or has even increased for several years since 2009. According to a report by the Federal Statistical Office, fast driving is the main cause of collisions on autobahns. According to the 2018 edition of the European Road Safety Observatory's Traffic Safety Basic Facts report, an above-average number of accidents per 1,000 kilometres of highway end in fatalities in Germany compared with other EU countries. Although Germany has a very low total traffic-related death rate, if only the mortality rate on highways is considered, Germany ranks in the lower midfield in a Europe-wide comparison of the number of traffic fatalities per thousand kilometres driven on highways in 2016. In addition, Germany's percentage of fatalities that occur on highways is above the EU average. One evaluation shows that in 2016, statistically, 26% fewer people died per kilometre on autobahns with a speed limit than on autobahns without one. A similar trend could be observed in the number of serious injuries. Between 1970 and 2010, overall German road fatalities decreased by almost 80%, from 19,193 to 3,648; over the same time period, autobahn deaths halved, from 945 to 430. Statistics for 2013 show that total German traffic deaths had declined to the lowest count ever recorded: 3,340 (428 on autobahns); a representative of the Federal Statistical Office attributed the general decline to harsh winter weather that delayed the start of the motorcycle-riding season. In 2014, there was a total of 3,377 road fatalities, while autobahn deaths dropped to 375. In 2012, the leading cause of autobahn accidents was "excessive speed (for conditions)": 6,587 so-called "speed related" crashes claimed the lives of 179 people, which represents almost half (46.3%) of the 387 autobahn fatalities that year. However, "excessive speed" does not mean that a speed limit has been exceeded, but that police determined at least one party travelled too fast for existing road or weather conditions. On autobahns, 22 people died per 1,000 injury crashes, a lower rate than the 29 deaths per 1,000 injury crashes on conventional rural roads, which in turn is five times higher than the risk on urban roads—speeds are higher on rural roads and autobahns than on urban roads, increasing the severity potential of a crash. Safety: international comparison A few countries publish the safety record of their motorways; the Federal Highway Research Institute provided IRTAD statistics for the year 2012. For example, a person yearly traversing on regular roads and on motorways has an approximately chance of dying in a car accident on a German road in any particular year ( on an autobahn), compared to in Czech Republic, in Denmark, or in the United States.
However, there are many differences between countries in their geography, economy, traffic growth, highway system size, degree of urbanization and motorization, and so on. The European Union publishes statistics reported by its members. Travel speeds The federal government does not regularly measure or estimate travel speeds. One study reported in a transportation engineering journal offered a historical perspective on the increase in travel speeds over a decade (source: Kellermann, G: Geschwindigkeitsverhalten im Autobahnnetz 1992. Straße+Autobahn, Issue 5/1995). The Federal Environmental Office reported that, on a free-flowing section in 1992, the recorded average speed was , with 51% of drivers exceeding the recommended speed. In 2006, speeds were recorded using automated detection loops in the State of Brandenburg at two points: on a six-lane section of the A 9 near Niemegk with an advisory speed limit, and on a four-lane section of the A 10 bypassing Berlin near Groß Kreutz with a mandatory limit. At peak times on the "free-flowing" section of the A 9, over 60% of road users exceeded the recommended maximum speed, more than 30% of motorists exceeded , and more than 15% exceeded —in other words, the so-called "85th-percentile speed" was in excess of 170 km/h. Toll roads On 1 January 2005, a new system came into effect for mandatory tolls (Mautpflicht) on heavy trucks (those weighing more than 12 t) using the German autobahn system (LKW-Maut). The German government contracted with a private company, Toll Collect GmbH, to operate the toll collection system, which has involved the use of vehicle-mounted transponders and roadway-mounted sensors installed throughout Germany. The toll is calculated depending on the toll route as well as the pollution class of the vehicle, its weight, and the number of axles of the vehicle. Certain vehicles, such as emergency vehicles and buses, are exempt from the toll. An average user is charged €0.15 per kilometre, or about $0.31 per mile (Toll Collect, 2007). Traffic laws and enforcement Driving in Germany is regulated by the Straßenverkehrs-Ordnung (road traffic regulations, abbreviated StVO). Enforcement on the federal Autobahnen is handled by each state's highway patrol (Autobahnpolizei), often using unmarked police cars and motorcycles, usually equipped with video cameras, which allows easier enforcement of laws such as those against tailgating. Notable laws The right lane should be used when it is free (Rechtsfahrgebot), and the left lane is generally intended only for overtaking, unless traffic is too dense to justify driving only in the right lane. It is legal to give a short horn or light signal (flashing headlights, or Lichthupe) in order to indicate the intention of overtaking, but a safe distance to the vehicle in front must be maintained; otherwise, this might be regarded as an act of coercion. Trucks drive only in the right lane; this is common throughout Europe wherever two travel lanes in a direction are present. Trucks are well known for the "elephant race" (Elefantenrennen), which has little to do with actual elephants: it occurs when one truck tries to overtake another with a minimal speed difference. However, on sections with three or more travel lanes in a direction, trucks and buses are prohibited from using the far left lane. In some places, indicated by signs, truck drivers are not allowed to overtake at all.
Penalties for tailgating were increased in May 2006 to a maximum of €375 (now €400) and three months' license suspension: "drivers must keep a distance in metres that is equal to half their speed. For example, a driver going 100 km/h on the autobahn must keep a distance of at least 50 metres (165 feet)". The penalty increase followed uproar after an infamous fatal crash on the Autobahn 5 in 2003. In a traffic jam, drivers must form a rescue lane (Rettungsgasse) to allow emergency services to reach the scene of an accident. This emergency corridor is to be created on the dividing line between the two leftmost lanes; following the guiding principle of "if on the left, drive left, else drive right", vehicles may cross into another lane if need be. It is unlawful to stop for any reason on the autobahn, except in emergencies and when stopping is unavoidable, as in traffic jams or when involved in an accident. This includes stopping on emergency lanes. Running out of fuel is considered an avoidable occurrence, as by law there are petrol stations at regular intervals directly on the autobahn. Drivers may face fines and up to six months' suspension should they come to a stop that is deemed unnecessary by the police. In some cases (if there is a direct danger to life and limb or to property, e.g. cars and highway infrastructure), it may also be considered a crime, and the driver could receive a prison sentence of up to 5 years. Overtaking on the right (undertaking) is strictly forbidden, except when stuck in traffic jams. Up to a speed of , if traffic in the left lane is crowded or moving slowly, it is permitted to pass cars on the right side if the speed difference is not greater than or if the vehicle in the left lane is stationary. This is not referred to as overtaking, but as driving past. Even if the car being passed is illegally occupying the left-hand lane, this is not an acceptable excuse; in such cases, the police will routinely stop and fine both drivers. However, exceptions can be and sometimes have been made. In popular culture Film and television Alarm für Cobra 11 – Die Autobahnpolizei (Alarm for Cobra 11 – The Autobahn Police, 1996–), a famous German TV series focusing on the work of a team of motorway police officers and their investigations, set in the autobahn-intertwined Rhine-Ruhr metropolitan area. Reichsautobahn, a black-and-white documentary by Hartmut Bitomsky (West Germany, 1986). Music "Autobahn", a song and album by German electronic band Kraftwerk (1974). "Autobahn", a song by South Korean boy band Monsta X from their tenth extended play, No Limit (2021). Video games Need for Speed: ProStreet and Burnout Dominator use the autobahn as one of their tracks; Burnout Dominator divides it into two (Autobahn and Autobahn Loop). Euro Truck Simulator, German Truck Simulator, and Euro Truck Simulator 2 feature the Autobahn in their open-world maps. Need for Speed: Porsche Unleashed also had a track that had the player drive across different sections of the autobahn. In Gran Turismo 5, Gran Turismo 6 and Gran Turismo 7, a trophy is awarded to those who have driven a distance equal to the total length of the autobahn network. In December 2010, video game developer Synetic GmbH and Conspiracy Entertainment released the title Alarm für Cobra 11 – Die Autobahnpolizei, featuring real-world racing and mission-based gameplay. It is taken from the popular German television series about a two-man team of Autobahnpolizei, first set in Berlin and later in North Rhine-Westphalia.
Autobahn Police Simulator is a 2015 German police driving simulation game set on the Autobahn.
Technology
Ground transportation networks
null
1765204
https://en.wikipedia.org/wiki/Fuji%20%28apple%29
Fuji (apple)
The Fuji apple is an apple cultivar developed by growers at the Tohoku Research Station in Fujisaki, Aomori, Japan, in the late 1930s, and brought to market in 1962. It originated as a cross between two American apple varieties—the Red Delicious and the old Virginia Ralls Janet (sometimes cited as "Rawls Jennet") apples. According to the US Apple Association website, it is one of the nine most popular apple cultivars in the United States. Its name is derived from the first part of the name of the town where it was developed: Fujisaki. Overview Fuji apples are typically round and range from large to very large. They contain 9–11% sugars by weight and have a dense flesh that is sweeter and crisper than that of many other apple cultivars, making them popular with consumers around the world. Fuji apples also have a very long shelf life compared to other apples, even without refrigeration. With refrigeration, Fuji apples can remain fresh for up to a year. In Japan, Fuji apples continue to be an unrivaled best-seller. Japanese consumers prefer the crispy texture and sweetness of Fuji apples (somewhat reminiscent of the coveted Nashi pear) almost to the exclusion of other varieties, and Japan's apple imports remain low. Aomori Prefecture, home of the Fuji apple, is the best-known apple-growing region of Japan. Of the roughly 900,000 tons of Japanese apples produced annually, 500,000 tons come from Aomori. Outside Japan, the popularity of Fuji apples continues to grow. In 2016 and 2017, Fuji apples accounted for nearly 70% of the 43 million tons of apples grown in China. Since their introduction into the US market in the 1980s, Fuji apples have gained popularity with American consumers — as of 2016, Fuji apples ranked number 3 on the US Apple Association's list of most popular apples, trailing only Red Delicious and Gala. Fuji apples are grown in traditional apple-growing states such as Washington, Michigan, Pennsylvania, New York, and California. Washington State, where more than half of America's apple crop is grown, produces about 135,000 tons of Fuji apples each year, third in volume behind the Red Delicious and Gala varieties. In the United States and Canada, the price look-up code (PLU code) for Fuji apples is 4131. Mutant cultivars Many sports (mutant cultivars) of the Fuji apple have been recognized and propagated. In addition to those that have remained unpatented, twenty had received US plant patents by August 2008. Unpatented Fuji mutants include BC 2, Desert Rose Fuji, Nagafu 2, Nagafu 6, Nagafu 12, Redsport Type 1, and Redsport Type 2.
Biology and health sciences
Pomes
Plants
1765281
https://en.wikipedia.org/wiki/Climate%20sensitivity
Climate sensitivity
Climate sensitivity is a key measure in climate science and describes how much Earth's surface will warm for a doubling in the atmospheric carbon dioxide (CO2) concentration. Its formal definition is: "The change in the surface temperature in response to a change in the atmospheric carbon dioxide (CO2) concentration or other radiative forcing." This concept helps scientists understand the extent and magnitude of the effects of climate change. The Earth's surface warms as a direct consequence of increased atmospheric CO2, as well as of increased concentrations of other greenhouse gases such as nitrous oxide and methane. The increasing temperatures have secondary effects on the climate system. These secondary effects are called climate feedbacks. Self-reinforcing feedbacks include, for example, the melting of sunlight-reflecting ice as well as higher evapotranspiration. The latter effect increases average atmospheric water vapour, which is itself a greenhouse gas. Scientists do not know exactly how strong these climate feedbacks are. Therefore, it is difficult to predict the precise amount of warming that will result from a given increase in greenhouse gas concentrations. If climate sensitivity turns out to be on the high side of scientific estimates, the Paris Agreement goal of limiting global warming to below 2 °C will be even more difficult to achieve. There are two main kinds of climate sensitivity: the transient climate response is the initial rise in global temperature when CO2 levels double, and the equilibrium climate sensitivity is the larger long-term temperature increase after the planet adjusts to the doubling. Climate sensitivity is estimated by several methods: looking directly at temperature and greenhouse gas concentrations since the Industrial Revolution began around the 1750s, using indirect measurements from the Earth's distant past, and simulating the climate. Fundamentals The rate at which energy reaches Earth as sunlight and the rate at which Earth radiates heat to space must balance, or the total amount of heat energy on the planet at any one time will rise or fall, resulting in a planet that is warmer or cooler overall. A driver of an imbalance between the rates of incoming and outgoing radiation energy is called radiative forcing. A warmer planet radiates heat to space faster, and so a new balance is eventually reached, with a higher temperature and stored energy content. However, the warming of the planet also has knock-on effects, which create further warming in an exacerbating feedback loop. Climate sensitivity is a measure of how much temperature change a given amount of radiative forcing will cause. Radiative forcing Radiative forcings are generally quantified in watts per square meter (W/m2) and averaged over Earth's uppermost surface, defined as the top of the atmosphere. The magnitude of a forcing is specific to the physical driver and is defined relative to an accompanying time span of interest for its application. In the context of a contribution to long-term climate sensitivity from 1750 to 2020, the 50% increase in atmospheric CO2 is characterized by a forcing of about +2.1 W/m2. In the context of shorter-term contributions to Earth's energy imbalance (i.e. its heating/cooling rate), time intervals of interest may be as short as the interval between measurement or simulation data samplings, and are thus likely to be accompanied by smaller forcing values. Forcings from such investigations have also been analyzed and reported at decadal time scales.
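The forcing values quoted above are consistent with the widely used logarithmic approximation for CO2 forcing, ΔF ≈ 5.35 ln(C/C0) W/m2 (Myhre et al., 1998). The sketch below applies that simplified expression; it is a common approximation, not the exact calculation used in any particular assessment report.

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Approximate radiative forcing of CO2 in W/m^2 relative to a
    pre-industrial concentration c0, using dF = 5.35 * ln(C / C0)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

print(f"+50% CO2 (420 ppm): {co2_forcing(420.0):+.2f} W/m^2")    # about +2.2
print(f"doubled CO2 (560 ppm): {co2_forcing(560.0):+.2f} W/m^2")  # about +3.7
```

The doubling value of roughly 3.7 W/m2 reappears below as the canonical forcing from which climate sensitivity is defined.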
Radiative forcing leads to long-term changes in global temperature. A number of factors contribute to radiative forcing: increased downwelling radiation from the greenhouse effect, variability in solar radiation from changes in planetary orbit, changes in solar irradiance, direct and indirect effects caused by aerosols (for example changes in albedo from cloud cover), and changes in land use (deforestation or the loss of reflective ice cover). In contemporary research, radiative forcing by greenhouse gases is well understood, but large uncertainties remain for aerosols. Key numbers Carbon dioxide (CO2) levels rose from 280 parts per million (ppm) in the 18th century, when humans in the Industrial Revolution started burning significant amounts of fossil fuel such as coal, to over 415 ppm by 2020. As CO2 is a greenhouse gas, it hinders heat energy from leaving the Earth's atmosphere. In 2016, atmospheric CO2 levels had increased by 45% over preindustrial levels, and the radiative forcing caused by increased CO2 was already more than 50% higher than in pre-industrial times because of non-linear effects. Between the 18th-century start of the Industrial Revolution and the year 2020, the Earth's temperature rose by a little over one degree Celsius (about two degrees Fahrenheit). Societal importance Because the economics of climate change mitigation depend greatly on how quickly carbon neutrality needs to be achieved, climate sensitivity estimates can have important economic and policy-making implications. One study suggests that halving the uncertainty of the value for the transient climate response (TCR) could save trillions of dollars. A higher climate sensitivity would mean more dramatic increases in temperature, which makes it more prudent to take significant climate action. If climate sensitivity turns out to be on the high end of what scientists estimate, the Paris Agreement goal of limiting global warming to well below 2 °C cannot be achieved, and temperature increases will exceed that limit, at least temporarily. One study estimated that, if equilibrium climate sensitivity (the long-term measure) is at the high end of estimates, emissions cannot be reduced fast enough to meet the 2 °C goal. The more sensitive the climate system is to changes in greenhouse gas concentrations, the more likely it is to have decades when temperatures are much higher or much lower than the longer-term average. Factors that determine sensitivity The radiative forcing caused by a doubling of atmospheric CO2 levels (from the pre-industrial 280 ppm) is approximately 3.7 watts per square meter (W/m2). In the absence of feedbacks, the energy imbalance would eventually result in roughly 1 °C of global warming. That figure is straightforward to calculate by using the Stefan–Boltzmann law and is undisputed. A further contribution arises from climate feedbacks, both self-reinforcing and balancing. The uncertainty in climate sensitivity estimates is entirely from the modelling of feedbacks in the climate system, including water vapour feedback, ice–albedo feedback, cloud feedback, and lapse rate feedback. Balancing feedbacks tend to counteract warming by increasing the rate at which energy is radiated to space from a warmer planet. Exacerbating feedbacks increase warming; for example, higher temperatures can cause ice to melt, which reduces the ice area and the amount of sunlight the ice reflects, which in turn results in less solar energy being reflected back into space and more being absorbed. Climate sensitivity depends on the balance between those feedbacks.
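The back-of-the-envelope no-feedback figure can be checked directly. Linearising the Stefan–Boltzmann law around Earth's effective emission temperature gives the so-called Planck response; the sketch below assumes the standard textbook value of about 255 K for that temperature.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
T_EMIT = 255.0     # Earth's effective emission temperature in kelvin (textbook value)
F_2XCO2 = 3.7      # radiative forcing from a doubling of CO2, W m^-2

# Outgoing flux is F = SIGMA * T**4, so dF/dT = 4 * SIGMA * T**3 (the Planck response).
planck_response = 4 * SIGMA * T_EMIT**3      # ~3.76 W m^-2 per kelvin
print(round(F_2XCO2 / planck_response, 2))   # ~0.98 K: roughly 1 degree of warming before feedbacks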
Types Depending on the time scale, there are two main ways to define climate sensitivity: the short-term transient climate response (TCR) and the long-term equilibrium climate sensitivity (ECS), both of which incorporate the warming from exacerbating feedback loops. They are not discrete categories, but they overlap. Sensitivity to atmospheric CO2 increases is measured as the amount of temperature change for a doubling in the atmospheric CO2 concentration. Although the term "climate sensitivity" is usually used for the sensitivity to radiative forcing caused by rising atmospheric CO2, it is a general property of the climate system. Other agents can also cause a radiative imbalance. Climate sensitivity is the change in surface air temperature per unit change in radiative forcing, and the climate sensitivity parameter is therefore expressed in units of °C/(W/m2). Climate sensitivity is approximately the same whatever the reason for the radiative forcing (such as from greenhouse gases or solar variation). When climate sensitivity is expressed as the temperature change for a level of atmospheric CO2 double the pre-industrial level, its units are degrees Celsius (°C). Transient climate response The transient climate response (TCR) is defined as "the change in the global mean surface temperature, averaged over a 20-year period, centered at the time of atmospheric carbon dioxide doubling, in a climate model simulation" in which the atmospheric CO2 concentration increases at 1% per year. That estimate is generated by using shorter-term simulations. The transient response is lower than the equilibrium climate sensitivity because slower feedbacks, which exacerbate the temperature increase, take more time to respond in full to an increase in the atmospheric CO2 concentration. For instance, the deep ocean takes many centuries to reach a new steady state after a perturbation, and during that time it continues to serve as a heat sink, which cools the upper ocean. The IPCC literature assessment estimates that the TCR likely lies between 1 °C and 2.5 °C. A related measure is the transient climate response to cumulative carbon emissions (TCRE), which is the globally averaged surface temperature change after 1000 gigatonnes of carbon (GtC) has been emitted. As such, it includes not only temperature feedbacks to forcing but also the carbon cycle and carbon cycle feedbacks. Equilibrium climate sensitivity The equilibrium climate sensitivity (ECS) is the long-term temperature rise (equilibrium global mean near-surface air temperature) that is expected to result from a doubling of the atmospheric CO2 concentration (ΔT2×). It is a prediction of the new global mean near-surface air temperature once the CO2 concentration has stopped increasing and most of the feedbacks have had time to have their full effect. Reaching an equilibrium temperature can take centuries or even millennia after CO2 has doubled. ECS is higher than TCR because of the oceans' short-term buffering effects. Computer models are used for estimating the ECS. A comprehensive estimate requires modelling the whole time span during which significant feedbacks continue to change global temperatures in the model, such as the full equilibration of ocean temperatures, and so requires running a computer model that covers thousands of years. There are, however, less computing-intensive methods. The IPCC Sixth Assessment Report (AR6) stated that there is high confidence that ECS is within the range of 2.5 °C to 4 °C, with a best estimate of 3 °C.
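The two unit conventions above (a parameter in °C per W/m2 and a sensitivity in °C per doubling of CO2) are linked through the doubled-CO2 forcing of about 3.7 W/m2. A minimal sketch of the conversion, with the 0.81 °C/(W/m2) figure chosen here purely for illustration:

F_2XCO2 = 3.7  # W m^-2, radiative forcing from a doubling of CO2

def per_doubling(lambda_param):
    # Convert a climate sensitivity parameter in degrees C per (W/m2)
    # into a sensitivity in degrees C per CO2 doubling.
    return lambda_param * F_2XCO2

print(round(per_doubling(0.81), 1))  # ~3.0 degrees C, matching the AR6 best estimate of ECS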
The long time scales involved with ECS make it arguably a less relevant measure for policy decisions around climate change. Effective climate sensitivity A common approximation to ECS is the effective equilibrium climate sensitivity, an estimate of equilibrium climate sensitivity made by using data from a climate system, in a model or in real-world observations, that is not yet in equilibrium. Estimates assume that the net amplification effect of feedbacks, as measured after some period of warming, will remain constant afterwards. That is not necessarily true, as feedbacks can change with time. In many climate models, feedbacks become stronger over time, and so the effective climate sensitivity is lower than the real ECS. Earth system sensitivity By definition, equilibrium climate sensitivity does not include feedbacks that take millennia to emerge, such as long-term changes in Earth's albedo because of changes in ice sheets and vegetation. Although it does include the slow warming of the deep oceans, which also takes millennia, ECS fails to reflect the actual long-term warming that would occur if CO2 were stabilized at double pre-industrial values. Earth system sensitivity (ESS) incorporates the effects of these slower feedback loops, such as the change in Earth's albedo from the melting of large continental ice sheets (which covered much of the Northern Hemisphere during the Last Glacial Maximum and still cover Greenland and Antarctica). Changes in albedo as a result of changes in vegetation, as well as changes in ocean circulation, are also included. The longer-term feedback loops make the ESS larger than the ECS, possibly twice as large. Data from the geological history of Earth is used in estimating ESS. Differences between modern and long-ago climatic conditions mean that estimates of the future ESS are highly uncertain. As with ECS and TCR, the carbon cycle is not included in the definition of the ESS, but all other elements of the climate system are included. Sensitivity to nature of forcing Different forcing agents, such as greenhouse gases and aerosols, can be compared using their radiative forcing, the initial radiative imbalance averaged over the entire globe. Climate sensitivity is the amount of warming per unit of radiative forcing. To a first approximation, the cause of the radiative imbalance does not matter, whether it is greenhouse gases or something else. However, radiative forcing from sources other than CO2 can cause a somewhat larger or smaller surface warming than a similar radiative forcing from CO2. The amount of feedback varies, mainly because the forcings are not uniformly distributed over the globe. Forcings that initially warm the Northern Hemisphere, land, or polar regions are systematically more effective at changing temperatures than an equivalent forcing from CO2, which is more uniformly distributed over the globe. That is because those regions have more self-reinforcing feedbacks, such as the ice–albedo feedback. Several studies indicate that human-emitted aerosols are more effective than CO2 at changing global temperatures, and volcanic forcing is less effective. When climate sensitivity to CO2 forcing is estimated by using historical temperature and forcing (caused by a mix of aerosols and greenhouse gases), and that effect is not taken into account, climate sensitivity is underestimated.
State dependence Climate sensitivity has been defined as the short- or long-term temperature change resulting from any doubling of CO2, but there is evidence that the sensitivity of Earth's climate system is not constant. For instance, the planet has polar ice and high-altitude glaciers. Until the world's ice has completely melted, an exacerbating ice–albedo feedback loop makes the system more sensitive overall. Throughout Earth's history, there are thought to have been multiple periods during which snow and ice covered almost the entire globe. In most models of "Snowball Earth", parts of the tropics were at least intermittently free of ice cover. As the ice advanced or retreated, climate sensitivity must have been very high, as the large changes in the area of ice cover would have made for a very strong ice–albedo feedback. Changes in atmospheric composition caused by volcanic emissions are thought to have provided the radiative forcing needed to escape the snowball state. Throughout the Quaternary period (the most recent 2.58 million years), climate has oscillated between glacial periods, the most recent one being the Last Glacial Maximum, and interglacial periods, the most recent one being the current Holocene, but the period's climate sensitivity is difficult to determine. The Paleocene–Eocene Thermal Maximum, about 55.5 million years ago, was unusually warm and may have been characterized by above-average climate sensitivity. Climate sensitivity may further change if tipping points are crossed. It is unlikely that tipping points will cause short-term changes in climate sensitivity. If a tipping point is crossed, climate sensitivity is expected to change at the time scale of the subsystem that hits its tipping point. Especially if there are multiple interacting tipping points, the transition of the climate to a new state may be difficult to reverse. The two most common definitions of climate sensitivity specify the climate state: the ECS and the TCR are defined for a doubling with respect to the CO2 levels in the pre-industrial era. Because of potential changes in climate sensitivity, the climate system may warm by a different amount after a second doubling of CO2 than after a first doubling. The effect of any change in climate sensitivity is expected to be small or negligible in the first century after additional CO2 is released into the atmosphere. Estimation Using Industrial Age (1750–present) data Climate sensitivity can be estimated using the observed temperature increase, the observed ocean heat uptake, and the modelled or observed radiative forcing. The data are linked through a simple energy-balance model to calculate climate sensitivity. Radiative forcing is often modelled because the Earth observation satellites that measure it have existed for only part of the Industrial Age (only since the late 1950s). Estimates of climate sensitivity calculated by using these global energy constraints have consistently been lower than those calculated by using other methods, around 2 °C or lower. Estimates of the transient climate response (TCR) calculated from models and observational data can be reconciled if it is taken into account that fewer temperature measurements are taken in the polar regions, which warm more quickly than the Earth as a whole. If only regions for which measurements are available are used in evaluating the model, the differences in TCR estimates are negligible.
A very simple climate model could estimate climate sensitivity from Industrial Age data by waiting for the climate system to reach equilibrium and then measuring the resulting warming, ΔT (°C). Computation of the equilibrium climate sensitivity, S (°C), using the radiative forcing ΔF (W/m2) and the measured temperature rise, would then be possible. The radiative forcing resulting from a doubling of CO2, F2×, is relatively well known, at about 3.7 W/m2. Combining that information results in this equation: S = F2× × ΔT / ΔF. However, the climate system is not in equilibrium, since the actual warming lags the equilibrium warming, largely because the oceans take up heat and will take centuries or millennia to reach equilibrium. Estimating climate sensitivity from Industrial Age data requires an adjustment to the equation above. The actual forcing felt by the atmosphere is the radiative forcing minus the ocean's heat uptake, H (W/m2), and so climate sensitivity can be estimated as S = F2× × ΔT / (ΔF − H). The global temperature increase between the beginning of the Industrial Period (taken as 1750) and 2011 was about 0.85 °C. In 2011, the radiative forcing from CO2 and other long-lived greenhouse gases (mainly methane, nitrous oxide, and chlorofluorocarbons) that have been emitted since the 18th century was roughly 2.8 W/m2. The climate forcing, ΔF, also contains contributions from solar activity (+0.05 W/m2), aerosols (−0.9 W/m2), ozone (+0.35 W/m2), and other smaller influences, which brings the total forcing over the Industrial Period to 2.2 W/m2, according to the best estimate of the IPCC Fifth Assessment Report in 2014, with substantial uncertainty. The ocean heat uptake, estimated by the same report to be 0.42 W/m2, yields a value for S of about 1.8 °C. Other strategies In theory, Industrial Age temperatures could also be used to determine a time scale for the temperature response of the climate system and thus climate sensitivity: if the effective heat capacity of the climate system is known, and the timescale is estimated using autocorrelation of the measured temperature, an estimate of climate sensitivity can be derived. In practice, however, the simultaneous determination of the time scale and heat capacity is difficult. Attempts have been made to use the 11-year solar cycle to constrain the transient climate response. Solar irradiance is about 0.9 W/m2 higher during a solar maximum than during a solar minimum, and that effect can be observed in measured average global temperatures from 1959 to 2004. Unfortunately, the solar minima in the period coincided with volcanic eruptions, which have a cooling effect on the global temperature. Because the eruptions caused a larger and less well-quantified decrease in radiative forcing than the reduced solar irradiance, it is questionable whether useful quantitative conclusions can be derived from the observed temperature variations. Observations of volcanic eruptions have also been used to try to estimate climate sensitivity, but as the aerosols from a single eruption last at most a couple of years in the atmosphere, the climate system can never come close to equilibrium, and there is less cooling than there would be if the aerosols stayed in the atmosphere for longer. Therefore, volcanic eruptions give information only about a lower bound on transient climate sensitivity. Using data from Earth's past Historical climate sensitivity can be estimated by using reconstructions of Earth's past temperatures and CO2 levels.
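The energy-balance arithmetic above is easy to verify numerically; this sketch simply plugs in the AR5 best-estimate values quoted in the text.

F_2XCO2 = 3.7   # W m^-2, radiative forcing from a doubling of CO2
DELTA_T = 0.85  # degrees C, observed warming between 1750 and 2011 (AR5 best estimate)
DELTA_F = 2.2   # W m^-2, total forcing over the Industrial Period (AR5 best estimate)
H = 0.42        # W m^-2, ocean heat uptake (AR5 best estimate)

# Energy-balance estimate of equilibrium climate sensitivity:
S = F_2XCO2 * DELTA_T / (DELTA_F - H)
print(round(S, 1))  # ~1.8 degrees C, the value for S derived in the text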
Paleoclimatologists have studied different geological periods, such as the warm Pliocene (5.3 to 2.6 million years ago) and the colder Pleistocene (2.6 million to 11,700 years ago), and sought periods that are in some way analogous to or informative about current climate change. Climates further back in Earth's history are more difficult to study because fewer data are available about them. For instance, past CO2 concentrations can be derived from air trapped in ice cores, but the oldest continuous ice core is less than one million years old. Recent periods, such as the Last Glacial Maximum (LGM) (about 21,000 years ago) and the Mid-Holocene (about 6,000 years ago), are often studied, especially when more information about them becomes available. A 2007 estimate of sensitivity made using data from the most recent 420 million years is consistent with the sensitivities of current climate models and with other determinations. The Paleocene–Eocene Thermal Maximum (about 55.5 million years ago), a 20,000-year period during which massive amounts of carbon entered the atmosphere and average global temperatures increased by roughly 5 to 8 °C, also provides a good opportunity to study the climate system when it was in a warm state. Studies of the last 800,000 years have concluded that climate sensitivity was greater in glacial periods than in interglacial periods. As the name suggests, the Last Glacial Maximum was much colder than today, and good data on atmospheric CO2 concentrations and radiative forcing from that period are available. The period's orbital forcing was different from today's but had little effect on mean annual temperatures. Estimating climate sensitivity from the Last Glacial Maximum can be done in several different ways. One way is to use estimates of global radiative forcing and temperature directly. The set of feedback mechanisms active during the period, however, may be different from the feedbacks caused by a present doubling of CO2, which introduces additional uncertainty. In a different approach, a model of intermediate complexity is used to simulate conditions during the period. Several versions of this single model are run, with different values chosen for uncertain parameters, such that each version has a different ECS. Outcomes that best simulate the LGM's observed cooling probably produce the most realistic ECS values. Using climate models Climate models simulate the CO2-driven warming of the future as well as the past. They operate on principles similar to those underlying models that predict the weather, but they focus on longer-term processes. Climate models typically begin with a starting state and then apply physical laws and knowledge about biology to generate subsequent states. As with weather modelling, no computer has the power to model the complexity of the entire planet, and so simplifications are used to reduce that complexity to something manageable. An important simplification divides Earth's atmosphere into model cells. For instance, the atmosphere might be divided into cubes of air ten or one hundred kilometers on a side. Each model cell is treated as if it were homogeneous. Calculations for model cells are much faster than trying to simulate each molecule of air separately. A lower model resolution (large model cells and long time steps) takes less computing power but cannot simulate the atmosphere in as much detail. A model cannot simulate processes smaller than the model cells or shorter than a single time step.
The effects of the smaller-scale and shorter-term processes must therefore be estimated by using other methods. Physical laws contained in the models may also be simplified to speed up calculations. The biosphere must be included in climate models. The effects of the biosphere are estimated by using data on the average behaviour of the average plant assemblage of an area under the modelled conditions. Climate sensitivity is therefore an emergent property of these models. It is not prescribed, but it follows from the interaction of all the modelled processes. To estimate climate sensitivity, a model is run by using a variety of radiative forcings (doubling CO2 quickly, doubling it gradually, or following historical emissions), and the temperature results are compared to the forcing applied. Different models give different estimates of climate sensitivity, but they tend to fall within a similar range, as described above. Testing, comparisons, and climate ensembles Modelling of the climate system can lead to a wide range of outcomes. Models are often run with different plausible parameters in their approximation of physical laws and the behaviour of the biosphere, forming a perturbed physics ensemble, which attempts to model the sensitivity of the climate to different types and amounts of change in each parameter. Alternatively, structurally different models developed at different institutions are put together, creating an ensemble. By selecting only the simulations that can simulate some part of the historical climate well, a constrained estimate of climate sensitivity can be made. One strategy for obtaining more accurate results is placing more emphasis on climate models that perform well in general. A model is tested using observations, paleoclimate data, or both to see if it replicates them accurately. If it does not, inaccuracies in the physical model and parametrizations are sought, and the model is modified. For models used to estimate climate sensitivity, specific test metrics that are directly and physically linked to climate sensitivity are sought. Examples of such metrics are the global patterns of warming, the ability of a model to reproduce observed relative humidity in the tropics and subtropics, patterns of heat radiation, and the variability of temperature around long-term historical warming. Ensembles of climate models developed at different institutions tend to produce constrained estimates of ECS that are slightly higher than 3 °C. The models with an ECS slightly above 3 °C simulate the above situations better than models with a lower climate sensitivity. Many projects and groups exist to compare and to analyse the results of multiple models. For instance, the Coupled Model Intercomparison Project (CMIP) has been running since the 1990s. Historical estimates Svante Arrhenius in the 19th century was the first person to quantify global warming as a consequence of a doubling of the concentration of CO2. In his first paper on the matter, he estimated that global temperature would rise by around 5 to 6 °C if the quantity of CO2 was doubled. In later work, he revised that estimate to about 4 °C. Arrhenius used Samuel Pierpont Langley's observations of radiation emitted by the full moon to estimate the amount of radiation that was absorbed by water vapour and by CO2. To account for water vapour feedback, he assumed that relative humidity would stay the same under global warming.
The first calculation of climate sensitivity that used detailed measurements of absorption spectra, as well as the first calculation to use a computer for numerical integration of the radiative transfer through the atmosphere, was performed by Syukuro Manabe and Richard Wetherald in 1967. Assuming constant relative humidity, they computed an equilibrium climate sensitivity of 2.3 °C per doubling of CO2, which they rounded to 2 °C, the value most often quoted from their work, in the abstract of the paper. The work has been called "arguably the greatest climate-science paper of all time" and "the most influential study of climate of all time." A committee on anthropogenic global warming, convened in 1979 by the United States National Academy of Sciences and chaired by Jule Charney, estimated equilibrium climate sensitivity to be 3 °C, plus or minus 1.5 °C. The Manabe and Wetherald estimate (2 °C), James E. Hansen's estimate of 4 °C, and Charney's model were the only models available in 1979. According to Manabe, speaking in 2004, "Charney chose 0.5 °C as a reasonable margin of error, subtracted it from Manabe's number, and added it to Hansen's, giving rise to the range of likely climate sensitivity that has appeared in every greenhouse assessment since ...." In 2008, climatologist Stefan Rahmstorf said: "At that time [it was published], the [Charney report estimate's] range [of uncertainty] was on very shaky ground. Since then, many vastly improved models have been developed by a number of climate research centers around the world." Assessment reports of IPCC Despite considerable progress in the understanding of Earth's climate system, assessments continued to report similar uncertainty ranges for climate sensitivity for some time after the 1979 Charney report. The First Assessment Report of the Intergovernmental Panel on Climate Change (IPCC), published in 1990, estimated that the equilibrium climate sensitivity to a doubling of CO2 lay between 1.5 °C and 4.5 °C, with a "best guess in the light of current knowledge" of 2.5 °C. The report used models with simplified representations of ocean dynamics. The 1992 IPCC supplementary report, which used full ocean circulation models, saw "no compelling reason to warrant changing" the 1990 estimate, and the IPCC Second Assessment Report stated, "No strong reasons have emerged to change [these estimates]." In the reports, much of the uncertainty around climate sensitivity was attributed to insufficient knowledge of cloud processes. The 2001 IPCC Third Assessment Report also retained this likely range. Authors of the 2007 IPCC Fourth Assessment Report stated that confidence in estimates of equilibrium climate sensitivity had increased substantially since the Third Assessment Report. The IPCC authors concluded that ECS is very likely to be greater than 1.5 °C and likely to lie in the range of 2 °C to 4.5 °C, with a most likely value of about 3 °C. The IPCC stated that fundamental physical reasons and data limitations prevent a climate sensitivity higher than 4.5 °C from being ruled out, but the climate sensitivity estimates in the likely range agreed better with observations and the proxy climate data. The 2013 IPCC Fifth Assessment Report reverted to the earlier range of 1.5 °C to 4.5 °C (with high confidence), because some estimates using industrial-age data came out low. The report also stated that ECS is extremely unlikely to be less than 1 °C (high confidence) and is very unlikely to be greater than 6 °C (medium confidence). Those values were estimated by combining the available data with expert judgement.
In preparation for the 2021 IPCC Sixth Assessment Report, a new generation of climate models was developed by scientific groups around the world. Across 27 global climate models, estimates of climate sensitivity were higher than before: the values spanned 1.8 °C to 5.6 °C and exceeded 4.5 °C in 10 of them. The mean estimate for equilibrium climate sensitivity rose from 3.2 °C to 3.7 °C, and the mean estimate for the transient climate response from 1.8 °C to 2.0 °C. The cause of the increased ECS lies mainly in improved modelling of clouds. Temperature rises are now believed to cause sharper decreases in the number of low clouds, and fewer low clouds means more sunlight is absorbed by the planet and less is reflected to space. However, remaining deficiencies in the simulation of clouds may have led to overestimates, as models with the highest ECS values were not consistent with observed warming. A fifth of the models began to "run hot", predicting that global warming would produce significantly higher temperatures than is considered plausible. According to these models, known as hot models, average global temperatures in the worst-case scenario would rise by more than 5 °C above preindustrial levels by 2100, with a "catastrophic" impact on human society. In comparison, empirical observations combined with physics models indicate that the "very likely" range is between 2.3 °C and 4.7 °C. Models with a very high climate sensitivity are also known to be poor at reproducing known historical climate trends, such as warming over the 20th century or cooling during the last ice age. For these reasons, the predictions of hot models are considered implausible and were given less weight by the IPCC in 2022.
Physical sciences
Climate change
Earth science
1765503
https://en.wikipedia.org/wiki/Water%20deer
Water deer
The water deer (Hydropotes inermis) is a small deer species native to Korea and China. Its prominent tusks, similar to those of musk deer, have led to both subspecies being colloquially named vampire deer in English-speaking areas to which they have been imported. It was first described to the Western world by Robert Swinhoe in 1870. Taxonomy There are two subspecies: the Chinese water deer (H. i. inermis) and the Korean water deer (H. i. argyropus). The water deer is superficially more similar to a musk deer than a true deer; despite anatomical peculiarities, including a pair of prominent tusks (downward-pointing canine teeth) and its lack of antlers, it is classified as a cervid. Yet its unique anatomical characteristics have caused it to be classified in its own genus (Hydropotes) as well as, historically, in its own subfamily (Hydropotinae). However, studies of mitochondrial control region and cytochrome b DNA sequences placed it near Capreolus within an Old World section of the subfamily Capreolinae, and all later molecular analyses show that Hydropotes is a sister taxon of Capreolus. Etymology The genus name Hydropotes derives from the two ancient Greek words ὕδωρ (húdōr), meaning "water", and πότης (pótēs), meaning "drinker", and refers to the preference of this cervid for rivers and swamps. The species name corresponds to the Latin word inermis, meaning unarmed or defenceless (itself constructed from the prefix in-, meaning without, and the stem arma, meaning defensive arms or armor), and refers to the water deer's lack of antlers. Habitat and distribution Archaeological studies indicate that the water deer was once distributed over a much broader range than at present, during the Pleistocene and Holocene periods; records have been obtained from eastern Tibet in the west, Inner Mongolia and northeastern China in the north, the southeastern Korean Peninsula (Holocene) and the Japanese archipelago (Pleistocene) in the east, and southern China and northern Vietnam in the south. Water deer also inhabited Taiwan historically; however, this population presumably became extinct as late as the early 19th century. Water deer are indigenous to the lower reaches of the Yangtze River, coastal Jiangsu province (Yancheng Coastal Wetlands), and the islands of Zhejiang in east-central China, and to Korea, where the demilitarized zone has provided a protected habitat for a large number. The Korean water deer (Hydropotes inermis argyropus) is one of the two subspecies of water deer. While the Chinese subspecies is critically endangered in China, the Korean subspecies is known to number some 700,000 throughout South Korea. In China, water deer are found in the Zhoushan Islands of Zhejiang (600–800), Jiangsu (500–1,000), Hubei, Henan, Anhui (500), Guangdong, Fujian, Poyang Lake in Jiangxi (1,000), Shanghai, and Guangxi. They are now extinct in southern and western China. Since 2006, water deer have been reintroduced in Shanghai, with a population increase from 21 individuals in 2007 to 227–299 individuals in 2013. In Korea, water deer are found nationwide and are known as gorani (고라니). Water deer inhabit the land alongside rivers, where they are protected from sight by the tall reeds and rushes. They are also seen on mountains, in swamps, on grasslands, and even in open cultivated fields. Water deer are proficient swimmers and can swim several miles to reach remote river islands. An introduced population of Chinese water deer exists in the United Kingdom, and another was extirpated from France.
South Korea Despite being listed as "vulnerable" by the International Union for Conservation of Nature (IUCN), the animal is thriving in South Korea because of the extinction of its natural predators, such as Korean tigers and leopards. Since 1994, Korean water deer have been designated as "harmful wildlife", a term given by the Ministry of Environment to wild creatures that can cause harm to humans or their property. Currently, certain local governments offer bounties of 30,000 won ($30) to 50,000 won ($50) during the farming season. However, the hunting of water deer is not restricted to the warm season, as 18 hunting grounds were in operation in the winter of 2018. Britain Chinese water deer were first introduced into Great Britain in the 1870s. The animals were kept in the London Zoo until 1896, when the Duke of Bedford oversaw their transfer to Woburn Abbey, Bedfordshire. More of the animals were imported and added to the herd over the next three decades. In 1929 and 1930, 32 deer were transferred from Woburn to Whipsnade, also in Bedfordshire, and released into the park. The current population of Chinese water deer at Whipsnade is estimated to be more than 600, while the population at Woburn is probably more than 250. The majority of the current population of Chinese water deer in Britain derives from escapees, with the remainder being descended from deliberate releases. Most of these animals still reside close to Woburn Abbey. The deer's strong preference for a particular habitat (tall reed and grass areas in rich alluvial deltas) appears to have restricted its potential to colonize further afield. The main area of distribution is from Woburn east into Cambridgeshire, Norfolk, Suffolk, and North Essex, and south towards Whipsnade. There have been small colonies reported in other areas. The British Deer Society coordinated a survey of wild deer in the United Kingdom between 2005 and 2007 and identified the Chinese water deer as "notably increasing its range" since the previous census in 2000. France A small population existed in France, originating from animals that escaped an enclosure in 1960 in western France (Haute-Vienne, near Poitiers). The population was reinforced in 1965 and 1970, and the species has been protected since 1973. Despite efforts to locate the animals with the help of local hunters, there have been no sightings since 2000, and the population is assumed to be extinct. Russia On April 1, 2019, a water deer was spotted by a camera trap in the "Land of the Leopard" national park in the Khasan district of Primorsky Krai, Russia, 4.5 km from the border with China. In 2022, the population of water deer in Primorsky Krai was about 170 individuals. Thus, the water deer became the newest, and 327th, mammal species to be listed among the fauna of Russia. Morphology Physical attributes The water deer has narrow pectoral and pelvic girdles, long legs, and a long neck. The powerful hind legs are longer than the front legs, so that the haunches are carried higher than the shoulders. Water deer run with rabbit-like jumps. In the groin of each leg is an inguinal gland used for scent marking; this deer is the only member of the Cervidae to possess such glands. The short tail is only a few centimetres long and is almost invisible, except when it is held raised by the male during the rut. The ears are short and very rounded, and both sexes lack antlers. The coat is an overall golden brown color and may be interspersed with black hairs, while the undersides are white.
The strongly tapered face is reddish-brown or gray, and the chin and upper throat are cream-colored. The hair is longest on the flanks and rump. In the fall, the summer coat is gradually replaced by a thicker, coarse-haired winter coat that varies from light brown to grayish brown. Neither the head nor the tail poles are well differentiated, as they are in gregarious deer; consequently, this deer's coat is little differentiated. Young are born dark brown with white stripes and spots along their upper torso. Teeth The water deer has developed long canine teeth which protrude from the upper jaw like the canines of musk deer. The canines are fairly large in the bucks, ranging in length from about 5.5 cm on average to as long as 8 cm. Does, in comparison, have tiny canines that average about half a centimetre in length. The teeth usually erupt in the autumn of the deer's first year, at approximately 6–7 months of age. By early spring, the recently erupted tusks reach approximately 50% of their final length. As the tusks develop, the root remains open until the deer is about eighteen months to two years old. When fully grown, only about 60% of the tusk is visible below the gum. These canines are held loosely in their sockets, with their movements controlled by facial muscles. The buck can draw them backwards out of the way when eating. In aggressive encounters, he thrusts his canines out and draws in his lower lip to pull his teeth closer together. He then presents an impressive two-pronged weapon to rival males. It is due to these teeth that this animal is often referred to as a "vampire deer". Genetic diversity The mitochondrial DNA of samples from the native Chinese population and the introduced UK population was analysed to infer each population's genetic structure and genetic diversity. It was found that the UK population displays lower levels of genetic diversity and that there is genetic differentiation between the native and introduced populations. It was also found that the source population of the British deer is likely to be extinct. This has implications for the conservation of the different populations, especially as Hydropotes inermis is classified as Vulnerable in its native range according to the IUCN Red List. Behaviour Apart from mating during the rutting season, water deer are solitary animals, and males are highly territorial. Each buck marks out his territory with urine and feces. Sometimes a small pit is dug, and it is possible that in digging, the male releases scent from the interdigital glands on his feet. The male also scent-marks by holding a thin tree in his mouth behind the upper canines and rubbing his preorbital glands against it. Males may also bite off vegetation to delineate territorial boundaries. Water deer use their tusks for territorial fights, not for predation; they are not related to carnivores. Confrontations between males begin with the animals walking slowly and stiffly towards each other, before turning to walk in parallel some distance apart, to assess one another. At this point, one male may succeed in chasing off his rival, making clicking noises during the pursuit. However, if the conflict is not resolved at this early stage, the bucks will fight. Each tries to wound the other on the head, shoulders, or back by stabbing or tearing with his upper canines. The fight is ended by the loser, who either lays his head and neck flat on the ground or turns tail and is chased out of the territory. The numerous long scars and torn ears seen on males indicate that fighting is frequent.
The fights are seldom fatal but may leave the loser considerably debilitated. Tufts of hair are most commonly found on the ground in November and December, showing that encounters are heavily concentrated around the rut. Females do not seem to be territorial outside the breeding season and can be seen in small groups, although individual deer do not appear to be associated; they will disperse separately at any sign of danger. Females show aggression towards each other immediately before and after the birth of their young and will chase other females from their birth territories. Communication Water deer are capable of emitting several sounds. The main call is a bark, which has more of a growling tone than the sharper yap of a muntjac. The bark is used as an alarm, and water deer will bark repeatedly at people and each other for reasons unknown. If challenged during the rut, a buck will emit a clicking sound. It is uncertain how this unique sound is generated, although possibly by the buck using its molar teeth. During the rut, a buck following a doe will make a weak whistle or squeak. Does emit a soft pheep to call to their fawns, whilst an injured deer will emit a screaming wail. Reproduction During the annual rut in November and December, the male will seek out and follow females, giving soft squeaking contact calls and checking for signs of estrus by lowering his neck and rotating his head with ears flapping. Scent plays an important part in courtship, with both animals sniffing each other. Mating among water deer is polygynous, with most females being mated inside the buck's territory. After repeated mountings, copulation is brief. Water deer have been known to produce up to seven young, but two to three is normal for this species, the most prolific of all deer. The doe often gives birth to her spotted young in the open, but they are quickly taken to concealing vegetation, where they will remain most of the time for up to a month. During these first few weeks, the fawns come out to play. Once driven from the natal territory in late summer, young deer sometimes continue to associate with each other, later separating to begin their solitary existence. Young water deer are also known to grow faster and be more precocious than other similar species.
Biology and health sciences
Deer
Animals
1765998
https://en.wikipedia.org/wiki/Wildlife%20conservation
Wildlife conservation
Wildlife conservation refers to the practice of protecting wild species and their habitats in order to maintain healthy wildlife species or populations and to restore, protect, or enhance natural ecosystems. Major threats to wildlife include habitat destruction, degradation, fragmentation, overexploitation, poaching, pollution, climate change, and the illegal wildlife trade. The IUCN estimates that 42,100 of the species it has assessed are at risk of extinction. Extrapolating to all existing species, a 2019 UN report on biodiversity put this estimate even higher, at a million species. It is also increasingly acknowledged that a growing number of ecosystems on Earth containing endangered species are disappearing. To address these issues, there have been both national and international governmental efforts to preserve Earth's wildlife. Prominent conservation agreements include the 1973 Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) and the 1992 Convention on Biological Diversity (CBD). There are also numerous nongovernmental organizations (NGOs) dedicated to conservation, such as the Nature Conservancy, the World Wildlife Fund, and Conservation International. Threats to wildlife Habitat destruction Habitat destruction decreases the number of places where wildlife can live. Habitat fragmentation breaks up a continuous tract of habitat, often dividing large wildlife populations into several smaller ones. Human-caused habitat loss and fragmentation are primary drivers of species declines and extinctions. Key examples of human-induced habitat loss include deforestation, agricultural expansion, and urbanization. Habitat destruction and fragmentation can increase the vulnerability of wildlife populations by reducing the space and resources available to them and by increasing the likelihood of conflict with humans. Moreover, destruction and fragmentation create smaller habitats. Smaller habitats support smaller populations, and smaller populations are more likely to go extinct. The COVID-19 pandemic caused a significant shift in human behavior, resulting in mandatory and voluntary limitations on movement. As a result, people started utilizing green spaces, which were previously habitats for wildlife, more frequently. This increased human activity has damaged the natural habitat of various species. Deforestation Deforestation is the deliberate clearing and cutting down of forests. It is a cause of human-induced habitat destruction, as the habitats of many species are destroyed in the process of removing trees. Deforestation is done for several reasons, most often for agricultural purposes or for logging, the harvesting of timber and wood for use in construction or as fuel. Deforestation poses many threats to wildlife: it destroys habitat for the many animals that live in forests (more than 80% of the world's species live in forests), and it also contributes to further climate change. Deforestation is a main concern in the tropical forests of the world. Tropical forests, like the Amazon, are home to more biodiversity than any other biome, making deforestation there an even more pressing issue, especially in populated areas, where deforestation leads to habitat destruction and the endangerment of many species at once.
Some policies have been enacted to attempt to stop deforestation in different parts of the world, like the Wilderness Act of 1964, which designated specific areas as protected wilderness. Overexploitation Overexploitation is the harvesting of animals and plants at a rate that is faster than the species' ability to recover. While often associated with overfishing, overexploitation can apply to many groups, including mammals, birds, amphibians, reptiles, and plants. The danger of overexploitation is that if too many of a species' offspring are taken, the species may not recover. For example, overfishing of top marine predatory fish like tuna and salmon over the past century has led to a decline in fish sizes as well as fish numbers. Poaching Poaching for the illegal wildlife trade is a major threat to certain species, particularly endangered ones whose status makes them economically valuable. Such species include many large mammals like African elephants, tigers, and rhinoceroses (traded for their tusks, skins, and horns, respectively). Less well-known targets of poaching include the harvest of protected plants and animals for souvenirs, food, skins, pets, and more. Poaching causes already small populations to decline even further, as hunters tend to target threatened and endangered species because of their rarity and the large profits they bring. Ocean acidification As carbon dioxide concentrations increase in the atmosphere, they increase in the ocean as well. Typically, the ocean absorbs carbon from the atmosphere, from which it can be sequestered in the deep ocean and the sea floor; this process is called the biological pump. Increased carbon dioxide emissions and increased stratification (which slows the biological pump) decrease the ocean's pH, making it more acidic. Calcifying organisms such as corals are especially susceptible to decreased pH, resulting in mass bleaching events that destroy habitat for many of coral's diverse inhabitants. Research (conducted through methods such as the analysis of coral fossils and of carbon in ancient ice cores) suggests that ocean acidification has occurred in the geological past (likely at a slower pace) and correlates with past extinction events. Culling Culling is the deliberate and selective killing of wildlife by governments for various purposes. An example of this is shark culling, in which "shark control" programs in Queensland and New South Wales (in Australia) have killed thousands of sharks, as well as turtles, dolphins, whales, and other marine life. The Queensland "shark control" program alone has killed about 50,000 sharks and, in total, more than 84,000 marine animals. There are also examples of population culling in the United States, such as bison in Montana and swans, geese, and deer in New York and other places. Pollution A wide range of pollutants negatively impact wildlife health. For some pollutants, simple exposure is enough to do damage (e.g. pesticides). For others, harm comes through inhalation (e.g. air pollutants) or ingestion (e.g. toxic metals). Pollutants affect different species in different ways, so a pollutant that is bad for one species might not affect another. Air pollutants: Most air pollutants come from the burning of fossil fuels and industrial emissions. These have direct and indirect effects on the health of wildlife and their ecosystems. For example, high levels of sulfur oxides (SOx) can damage plants and stunt their growth. Sulfur oxides also contribute to acid rain, harming both terrestrial and aquatic ecosystems.
Other air pollutants, like smog, ground-level ozone, and particulate matter, decrease air quality. Heavy metals: Heavy metals like arsenic, lead, and mercury naturally occur at low levels in the environment, but when ingested in high doses they can cause organ damage and cancer. How toxic they are depends on the exact metal, how much was ingested, and the animal that ingested it. Human activities such as mining, smelting, the burning of fossil fuels, and various industrial processes have contributed to the rise in heavy metal levels in the environment. Toxic chemicals: There are many sources of toxic chemical pollution, including industrial wastewater, oil spills, and pesticides. Because there is a wide range of toxic chemicals, there is also a wide range of negative health effects. For example, synthetic pesticides and certain industrial chemicals are persistent organic pollutants. These pollutants are long-lived and can cause cancer, reproductive disorders, immune system problems, and nervous system problems. Climate change Humans are responsible for present-day climate change, which is changing Earth's environmental conditions. It is related to some of the aforementioned threats to wildlife, like habitat destruction and pollution. Rising temperatures, melting ice sheets, changes in precipitation patterns, severe droughts, more frequent heat waves, storm intensification, ocean acidification, and rising sea levels are some of the effects of climate change. Phenomena like droughts, wildfires, heatwaves, intense storms, ocean acidification, and rising sea levels directly lead to habitat destruction. For example, longer dry seasons, warmer springs, and drier soil have been observed to increase the length of the wildfire season in forests, shrublands, and grasslands. The increased severity and longevity of wildfires can wipe out entire ecosystems, which then take decades to fully recover. Wildfires are a prime example of the direct negative effect climate change has on wildlife and ecosystems. Meanwhile, a warming climate, fluctuating precipitation, and changing weather patterns will impact species ranges. Overall, the effects of climate change increase stress on ecosystems, and species unable to cope with the rapidly changing conditions will go extinct. While modern climate change is caused by humans, past climate change events occurred naturally and also led to extinctions. Illegal wildlife trade The illegal wildlife trade is the unlawful trading of plants and wildlife. This trade is worth an estimated $7–23 billion a year and involves an annual trade of around 100 million plants and animals. In 2021, it was found that this trade has caused a 60% decline in the abundance of traded species, and an 80% decline for endangered species. The trade can be devastating to both humans and animals. It has the capacity to spread zoonotic diseases to humans, as well as to contribute to local extinctions. Pathogens may be spread to humans through small animal vectors like ticks, or through the ingestion of food and water. Extinction can also be caused when introduced non-native species become invasive; one example of how this may happen is through by-catch. These new species can outcompete the native species and take over, causing the local or global extinction of a species. Because the fittest animals in a species are the ones hunted or poached, less fit organisms are left to mate, reducing the fitness of the generations to come.
In addition to lowering species fitness and thereby endangering species, the illegal wildlife trade has ecological costs. Sex-ratio balances may be tipped, or reproduction rates may be slowed, which can be detrimental to vulnerable species. The recovery of these populations may take longer because of the slower reproduction rates. The wildlife trade also causes issues for the natural resources that people use in their everyday lives. Ecotourism is how some people bring money into their communities, and depleting the wildlife may therefore take away jobs. The illegal wildlife trade has also become normalized through various social media outlets. There are TikTok accounts that have gone viral for their depictions of exotic pets, such as various monkey and bird species. These accounts show a cute and fun side of owning exotic pets, thereby indirectly encouraging the illegal wildlife trade. On March 30, 2021, TikTok joined the Coalition to End Wildlife Trafficking Online. It, along with other big social media companies, works to protect species from illegal, harmful trade online. Research has shown that machine learning can filter through social media posts to identify indications of illegal wildlife trade. Such a filtration system is able to search for keywords, pictures, and phrases that indicate illegal wildlife trade and report them. Species conservation It is estimated that, because of human activities, current species extinction rates are about 1000 times greater than the background extinction rate (the 'normal' extinction rate that occurs without additional influence). According to the IUCN, out of all species assessed, over 42,100 are at risk of extinction and should be under conservation. Of these, 25% are mammals, 14% are birds, and 40% are amphibians. However, because not all species have been assessed, these numbers could be even higher. A 2019 UN report assessing global biodiversity extrapolated IUCN data to all species and estimated that 1 million species worldwide could face extinction. Conservation of select species is often prioritized on the basis of several factors, including significant economic and ecological value as well as desirability or attractiveness. Yet, because resources are limited, sometimes it is not possible to give all species that need conservation due consideration. The species problem, which arises in some cases from natural hybridization, cryptic species, and the ongoing evolution of species, can be handled in species conservation by different approaches, such as multicriteria species approaches, subspecies, evolutionarily significant units, distinct population segments, or a species-population continuum. Leatherback sea turtle The leatherback sea turtle (Dermochelys coriacea) is the largest turtle in the world, is the only turtle without a hard shell, and is endangered. It is found throughout the central Pacific and Atlantic Oceans, but several of its populations are in decline across the globe (though not all). The leatherback sea turtle faces numerous threats, including being caught as bycatch, the harvest of its eggs, the loss of nesting habitat, and marine pollution. In the US, where the leatherback is listed under the Endangered Species Act, measures to protect it include reducing bycatch captures through fishing gear modifications, monitoring and protecting its habitat (both nesting beaches and in the ocean), and reducing damage from marine pollution. There is currently an international effort to protect the leatherback sea turtle.
Habitat conservation Habitat conservation is the practice of protecting a habitat in order to protect the species within it. This is sometimes preferable to focusing on a single species, especially if the species in question has very specific habitat requirements or lives in a habitat with many other endangered species. The latter is often true of species living in biodiversity hotspots, which are areas of the world with an exceptionally high concentration of endemic species (species found nowhere else in the world). Many of these hotspots are in the tropics, mainly in tropical forests like the Amazon. Habitat conservation is usually carried out by setting aside protected areas like national parks or nature reserves. Even when an area is not made into a park or reserve, it can still be monitored and maintained. Red-cockaded woodpecker The red-cockaded woodpecker (Picoides borealis) is an endangered bird of the southeastern US. It lives only in longleaf pine savannas, which are maintained by wildfires in mature pine forests. Today, this is a rare habitat (as fires have become rare and many pine forests have been cut down for agriculture) and is commonly found on land occupied by US military bases, where pine forests are kept for military training purposes and occasional bombings (also for training) set fires that maintain pine savannas. The woodpeckers live in tree cavities they excavate in the trunk. In an effort to increase woodpecker numbers, artificial cavities (essentially birdhouses planted within tree trunks) were installed to give the woodpeckers places to live. An active effort is made by the US military and workers to maintain this rare habitat used by red-cockaded woodpeckers. Conservation genetics Conservation genetics studies genetic phenomena that impact the conservation of a species. Most conservation efforts focus on managing population size, but conserving genetic diversity is typically a high priority as well. High genetic diversity increases survival because it means a greater capacity to adapt to future environmental changes. Meanwhile, effects associated with low genetic diversity, such as inbreeding depression and the loss of diversity from genetic drift, often decrease species survival by reducing the species' capacity to adapt or by increasing the frequency of genetic problems. Though not always the case, certain species are under threat because they have very low genetic diversity. As such, the best conservation action for them would be to restore their genetic diversity. Florida panther The Florida panther is a subspecies of cougar (specifically Puma concolor coryi) that resides in the state of Florida and is currently endangered. Historically, the Florida panther's range covered the entire southeastern US. In the early 1990s, only a single population with 20–25 individuals was left. The population had very low genetic diversity, was highly inbred, and suffered from several genetic issues, including kinked tails, cardiac defects, and low fertility. In 1995, eight female Texas cougars were introduced to the Florida population. The goal was to increase genetic diversity by introducing genes from a different, unrelated puma population. By 2007, the Florida panther population had tripled, and offspring of Florida and Texas individuals had higher fertility and fewer genetic problems. In 2015, the US Fish and Wildlife Service estimated there were 230 adult Florida panthers, and in 2017, there were signs that the population's range was expanding within Florida.
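The pace at which a small, closed population loses genetic diversity to drift can be illustrated with Wright's classic expectation H_t = H_0 (1 - 1/(2Ne))^t, where Ne is the effective population size. The numbers below are hypothetical and are not data for the Florida panther; the sketch only illustrates why very small populations worry conservation geneticists.

def expected_heterozygosity(h0, ne, generations):
    # Expected heterozygosity after t generations of genetic drift in an
    # idealised closed population of effective size Ne (Wright's formula).
    return h0 * (1 - 1 / (2 * ne)) ** generations

# Hypothetical illustration with Ne = 20, roughly the size the Florida panther
# population fell to: over a fifth of diversity is lost within ten generations.
print(round(expected_heterozygosity(1.0, 20, 10), 2))  # ~0.78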
Conservation methods Wildlife monitoring Monitoring of wildlife populations is an important part of conservation because it allows managers to gather information about the status of threatened species and to measure the effectiveness of management strategies. Monitoring can be local, regional, or range-wide, and can include one or many distinct populations. Metrics commonly gathered during monitoring include population numbers, geographic distribution, and genetic diversity, although many other metrics may be used. Monitoring methods can be categorized as either "direct" or "indirect". Direct methods rely on directly seeing or hearing the animals, whereas indirect methods rely on "signs" that indicate the animals are present. For terrestrial vertebrates, common direct monitoring methods include direct observation, mark-recapture, transects, and variable plot surveys. Indirect methods include track stations, fecal counts, food removal, open or closed burrow-opening counts, burrow counts, runway counts, knockdown cards, snow tracks, or responses to audio calls. For large terrestrial vertebrates, a popular method is to use camera traps for population estimation along with mark-recapture techniques (a sketch of the classic Lincoln–Petersen estimator appears below). This method has been used successfully with tigers, black bears, and numerous other species. Trail cameras can be triggered remotely and automatically via sound, infrared sensors, etc. Computer vision-based methods for re-identifying individual animals have been developed to automate such sight-resight calculations. Mark-recapture methods are also used with genetic data from non-invasive hair or fecal samples. Such information can be analyzed independently or in conjunction with photographic methods to get a more complete picture of population viability. When designing a wildlife monitoring strategy, it is important to minimize harm to the animals and to implement the 3Rs principles (Replacement, Reduction, Refinement). In wildlife research, this can be done through the use of non-invasive methods, sharing samples and data with other research groups, or optimizing traps to prevent injuries. Vaccine administration Vaccinating particularly vulnerable wildlife is useful in conservation to prevent or slow extreme disease-driven population declines and also to decrease the risk of a zoonotic spillover to humans. A pathogen to which a species has never been exposed over its evolutionary history can have detrimental impacts on the population. In most cases, these risks escalate in conjunction with other anthropogenic stressors, such as climate change or habitat loss, that would ultimately drive a population to extinction without human intervention. Methods of vaccination vary depending on both the extent and efficiency of limiting the transmission of disease, and vaccines can be applied orally, topically, intranasally (IN), or injected either subcutaneously (SC) or intramuscularly (IM). Conservation efforts involving vaccination often serve only to prevent disease-related extinction: rather than completely cleansing the population of the pathogen, infection rates are limited to a smaller percentage of the population. Case study: Ethiopian wolf The Ethiopian wolf (Canis simensis), a canid native to Ethiopia, is an endangered species with fewer than 440 wolves remaining in the wild. These wolves are primarily exposed to the rabies virus by domestic dogs and are facing extreme population declines, especially in the Bale Mountains of southern Ethiopia. 
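Returning to the mark-recapture monitoring described above: the classic Lincoln–Petersen estimator infers population size as N ≈ MC/R from M animals marked in a first session, C animals caught in a second session, and R marked recaptures among them. The counts below are invented for illustration; Chapman's bias-corrected variant is included because the plain estimator is undefined when R = 0.

```python
def lincoln_petersen(marked: int, caught: int, recaptured: int) -> float:
    """Classic Lincoln-Petersen abundance estimate: N = M * C / R."""
    if recaptured == 0:
        raise ValueError("no recaptures; the classic estimate is undefined")
    return marked * caught / recaptured

def chapman(marked: int, caught: int, recaptured: int) -> float:
    """Chapman's bias-corrected variant, defined even when R = 0."""
    return (marked + 1) * (caught + 1) / (recaptured + 1) - 1

# Invented survey: 40 tigers photo-identified, 50 detections in a second
# survey, 8 of them previously identified individuals.
print(lincoln_petersen(40, 50, 8))  # 250.0
print(chapman(40, 50, 8))           # ~231.3
```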
To counter these declines, oral vaccines are administered to these wolves within palatable bait that is widely distributed around their territories. The wolves consume the bait and with it ingest the vaccine, developing immunity to rabies as antibodies are produced at significant levels. Wolves within these packs that did not ingest the vaccine are protected by herd immunity, as fewer wolves are exposed to the virus (a simple threshold calculation appears at the end of this article). With continued periodic vaccinations, conservationists can spend more resources on further proactive efforts to help prevent the wolves' extinction. Government involvement In the US, the Endangered Species Act of 1973 was passed to protect US species deemed in danger of extinction. The concern at the time was that the country was losing species that were scientifically, culturally, and educationally important. In the same year, the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) was passed as part of an international agreement to prevent the global trade of endangered wildlife. In 1980, the World Conservation Strategy was developed by the IUCN with help from the UN Environment Programme, the World Wildlife Fund, the UN Food and Agriculture Organization, and UNESCO. Its purpose was to promote the conservation of living resources important to humans. In 1992, the Convention on Biological Diversity (CBD) was agreed on at the UN Conference on Environment and Development (often called the Rio Earth Summit) as an international accord to protect the Earth's biological resources and diversity. According to the National Wildlife Federation, wildlife conservation in the US gets a majority of its funding through appropriations from the federal budget, annual federal and state grants, and financial efforts from programs such as the Conservation Reserve Program, Wetlands Reserve Program, and Wildlife Habitat Incentives Program. A substantial amount of funding comes from the sale of hunting and fishing licenses, game tags, stamps, and excise taxes on the purchase of hunting equipment and ammunition. The Endangered Species Act maintains a continuously updated list of species that are endangered or threatened. Along with updating the list, the act implements actions to protect the species on it, and it also records the species that have recovered. Through its recovery plans and the protection it provides for threatened species, the act is estimated to have prevented the extinction of about 291 species, such as bald eagles and humpback whales, since its implementation. Non-government involvement In the late 1980s, as the public became dissatisfied with government environmental conservation efforts, people began supporting private-sector conservation efforts, which included several non-governmental organizations (NGOs). Seeing this rise in support for NGOs, the U.S. Congress made amendments to the Foreign Assistance Act in 1979 and 1986 “earmarking U.S. Agency for International Development (USAID) funds for [biodiversity]”. Since 1990, environmental conservation NGOs have become increasingly focused on the political and economic impact of USAID funds dispersed for preserving the environment and its natural resources. 
After the terrorist attacks on 9/11 and the start of former President Bush's War on Terror, maintaining and improving the quality of the environment and its natural resources became a “priority” to “prevent international tensions” according to the Legislation on Foreign Relations Through 2002 and section 117 of the 1961 Foreign Assistance Act. Non-governmental organizations Many NGOs exist to actively promote, or be involved with, wildlife conservation: The Nature Conservancy World Wide Fund for Nature (WWF) Conservation International Fauna and Flora International WildTeam Wildlife Conservation Society Audubon Society Traffic (conservation programme) Born Free Foundation African Wildlife Defence Force Save Cambodia's Wildlife WildEarth Guardians Zoological Society of London
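As a minimal illustration of the herd-immunity effect invoked in the Ethiopian wolf case study above, the standard threshold 1 − 1/R0 gives the fraction of a population that must be immune for each infection to cause, on average, fewer than one new infection. The R0 values below are purely illustrative assumptions, not measured values for rabies in wolf packs.

```python
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune so that each case
    infects fewer than one other host on average: 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # the pathogen already cannot sustain spread
    return 1.0 - 1.0 / r0

# Illustrative R0 values only; rabies transmission among wolves is not
# characterized by a single well-established R0.
for r0 in (1.5, 2.0, 4.0):
    print(f"R0={r0}: vaccinate at least {herd_immunity_threshold(r0):.0%}")
```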
Biology and health sciences
Ecology
null
1767352
https://en.wikipedia.org/wiki/Taung%20Child
Taung Child
The Taung Child (or Taung Baby) is the fossilised skull of a young Australopithecus africanus. It was discovered in 1924 by quarrymen working for the Northern Lime Company in Taung, South Africa. Raymond Dart described it as a new species in the journal Nature in 1925. The Taung skull is in repository at the University of Witwatersrand. Dean Falk, a specialist in brain evolution, has called it "the most important anthropological fossil of the twentieth century." Description The Taung Child was originally thought to have been about six years old at death because of the presence of deciduous teeth, but is now believed to have been about three or four, based on studies of rates of enamel deposition on the teeth. There was initially some debate over the age of the child because it was unclear whether the child grew at the speed of a human or of an ape: compared to an ape, the child would have been about 4 years old, and compared to a human, around 5–7 years old. The skull has a cranial capacity of 400–500 cc, which is comparable to that of a modern adult chimpanzee. Because mature brain size is attained within the first few years of life, the relatively small size is unlikely to be attributable to the specimen being a juvenile. The skull also possesses features more commonly found in humans than apes, including a rising forehead and round eye sockets. Although the lower portion of the nose resembles a chimpanzee's, the overall shorter shape is human-like. Likewise, the lower portion of the face is protruded, though to a lesser degree than in modern apes. A bony shelf found within the inner jaw of apes could not be found. Dart opted to describe the remains as a "man-ape" rather than as an "ape-man" to highlight the more human features present compared to the remains found of the more recent Pithecanthropus erectus. In 2006, Lee Berger announced that the Taung Child was probably killed by an eagle or other large predatory bird, citing the similarity of the damage to the skull and eye sockets of the Taung Child to that seen in modern primates known to have been killed by eagles. There are talon marks in the eye sockets as well as a depression along the skull that is common in creatures that have been preyed upon by eagles. History Background In the early 20th century, workers at limestone quarries in Southern Africa routinely uncovered fossils from the tufa formations that they mined. The tufa did not form consistently; over time, cavities were left open and became sheltered areas for animals, so many bones accumulated in them. These areas were mostly sandstone and stood in the way of successful mining, so miners would use explosives to clear them and discard the debris. Many fossils turned up this way, however, and were saved by the miners. Many were of extinct fauna, including baboons and other primates, and the more complete or otherwise interesting fossils were kept as curiosities by the Europeans who managed operations. Discovery In 1924, workers at the Buxton Limeworks, near Taung, showed a fossilized primate skull to Edwin Gilbert Izod, the visiting director of the Northern Lime Company, the managing company of the quarry. The director gave it to his son, Pat Izod, who displayed it on the mantle over the fireplace. 
When Josephine Salmons, a friend of the Izod family, paid a visit to Pat's home, she noticed the primate skull, identified it as from an extinct monkey, and realised its possible significance to her mentor, Raymond Dart. Salmons was the first female student of Dart, an anatomist at the University of Witwatersrand. Salmons was permitted to take the fossilized skull and presented it to Dart, who also recognized it as a significant find. Dart asked the company to send any more interesting fossilized skulls that were unearthed. When a consulting geologist, Robert Young, paid a visit to the quarry office, the director, A. E. Speirs, presented him with a collection of fossilised primate skulls that had been gathered by a miner, Mr. De Bruyn. Speirs was using one particular fossil as a paperweight, and Young asked him for this as well. Young sent some of the skulls back to Dart. When Dart examined the contents of the crates, he found a fossilized endocast of a skull showing the impression of a complex brain. He quickly searched through the rest of the fossils in the crates and matched it to a fossilized skull of a juvenile primate, which had a shallow face and fairly small teeth. Only forty days after he first saw the fossil, Dart completed a paper that named the species Australopithecus africanus, the "southern ape from Africa", and described it as "an extinct race of apes intermediate between living anthropoids and man". The paper appeared in the 7 February 1925 issue of the journal Nature. The fossil was soon nicknamed the Taung Child. Initial criticism of Dart's claims Reception Scientists were initially reluctant to accept that the Taung Child and the new genus Australopithecus were ancestral to modern humans. In the issue of Nature immediately following the one in which Dart's paper was published, several authorities in British paleoanthropology criticized Dart's conclusion. Three of the four scholars were members of the Piltdown Man committee: Sir Arthur Keith, Grafton Elliot Smith, and Sir Arthur Smith Woodward. They were much more skeptical about this fossil's place in evolutionary history and believed it deserved to be categorized as a chimpanzee or gorilla rather than a human ancestor. Dart did, however, retain the hesitant support of W.L.H. Duckworth, who nonetheless asked for more information on the brain to support the claim. Dart's former mentor, Keith, one of the most prominent anatomists of his time, claimed that there was insufficient evidence to accept Dart's claim that Australopithecus was transitional between apes and humans. Grafton Elliot Smith stated that he needed more evidence and a larger picture of the skull before he could judge the significance of the new fossil. Arthur Smith Woodward dismissed the Taung Child as having "little bearing" on the issue of "whether the direct ancestors of man are to be sought in Asia or Africa". The critiques became more fervent a few months later. Elliot Smith concluded that the Taung fossil was "essentially identical" to the skull of "the infant gorilla and chimpanzee", noting that infant apes appear more human-like because of the "shape of their forehead and the lack of fully developed brow ridges". Keith likewise dismissed the claim that the fossil was "the missing link between ape and human" in a letter to Nature. In 1926, a year after the publication of Dart's article, Aleš Hrdlička reviewed and approved German and Portuguese articles for the American Journal of Physical Anthropology. 
Both articles asserted that the Taung Child should not be placed within the human phylum due to a lack of justification for the classification. The next year, Hrdlička personally commented on another of Dart's articles, this time in Natural History, saying that the author "very ingeniously, but, it seems obvious, more or less artificially, endeavors to humanize the 'Australopithecus'. It is not known that this effort thus far has found favor with any other student who gave truly earnest and critical attention to the otherwise very interesting and important Taung relic." Reasons for dissent There were several reasons that it took decades for the field to accept Dart's claim that Australopithecus africanus was in the human line of descent. First and foremost was the fact that the British scientific establishment had been fooled by the hoax of the Piltdown Man, which had a large brain and ape-like teeth. Expecting human ancestors to have evolved a large brain very early, researchers found that the Taung Child's small brain and human-like teeth made it an unlikely ancestor to modern humans. Secondly, until the 1940s, most anthropologists believed that humans had evolved in Asia, not in Africa. A third reason is that, despite accepting that modern humans had emerged by evolution, many anthropologists believed that the genus Homo had split from the great apes as long as 30 million years ago, and so they felt uneasy about accepting that humans had a small-brained, ape-like ancestor, like Australopithecus africanus, only two million years ago. Lastly, many people disputed the significance of this fossil because of their religious affiliation. When Taung was first announced in February 1925, many anti-evolutionists rose up in protest against the fossil. Dart began receiving threats from members of various religious communities who proclaimed his ideas blasphemous. Some were able to reconcile the science with religious theology through the lens of "creation science", but there was still significant opposition. However, by this time, many other fossils such as Java Man, Neanderthal Man, and Rhodesian Man were being discovered, and the theory of evolution was becoming more difficult to refute. Solly Zuckerman, who had studied anatomy under Dart in South Africa, concluded as early as 1928 that Australopithecus was little more than an ape. He and a four-member team carried out further studies of the australopithecine family in the 1940s and 1950s. Using a "metrical and statistical approach" that he thought was superior to purely descriptive methods, he decided that the creatures had not walked on two legs and so were not an intermediate form between humans and apes. For the rest of his life, Zuckerman continued to deny that Australopithecus was part of the human family tree, even after that conclusion had become "universally accepted" by scientists. Acceptance Dart's claim that Australopithecus africanus, the species name that he had given to the Taung Child, was a transitional form between apes and humans was almost universally rejected. Robert Broom, a Scottish doctor who worked in South Africa, was one of the few scientists to believe Dart. Two weeks after Dart announced the discovery of the Taung Child in Nature, Broom visited Dart in Johannesburg to see the fossil. After he became a paleontologist in 1933, Broom found adult fossils of Australopithecus africanus and discovered more robust fossils, which were eventually renamed Australopithecus robustus (AKA Paranthropus robustus). 
Even after Dart chose to take a break from his work in anthropology, Broom undertook more excavations and slowly began to find more Australopithecus africanus specimens that proved Dart was correct in his analysis of the Taung Child; it did have human-like morphology. In 1946, Broom and his colleague Gerrit Schepers consolidated all the information they had found about Australopithecus africanus in a volume titled The South African Fossil Ape-Men: The Australopithecinae. In the late 1920s, American paleontologist William King Gregory also accepted that Australopithecus was part of the human family tree. Employed by the American Museum of Natural History in New York, Gregory supported Charles Darwin and Thomas Henry Huxley's then-unpopular view that humans were closely related to African apes. The director of the museum, however, was Henry Fairfield Osborn; despite being "the chief public defender of evolution in the United States" at the time of the Scopes Trial in 1925, he disagreed with Darwin's views on the origins of humanity. Gregory and Osborn repeatedly debated the issue in public forums, but Osborn's view that humans had evolved from early ancestors who did not look like apes prevailed among American anthropologists in the 1930s and 1940s. In 1938, Gregory visited South Africa and saw the Taung Child and the fossils that Broom had recently discovered. More convinced than ever that Dart and Broom were right, he called Australopithecus africanus "the missing link no longer missing". The turning point in the acceptance of Dart's analysis of the Taung Child came in 1947, when the prominent British anthropologist Wilfrid Le Gros Clark announced that he supported it. Le Gros Clark, who would also play an important role in exposing the fraud of the Piltdown Man in 1953, visited Johannesburg in late 1946 to study Dart's Taung skull and Broom's adult fossils, with the intention of proving that they were only apes. After two weeks of study and visits to the caves in which Broom had found his fossils (the Taung cave had been destroyed by miners soon after the discovery of the Taung skull), however, Clark became convinced that these fossils were hominids rather than pongids. In early January 1947, at the First Pan-African Congress on Prehistory, Le Gros Clark was the first anthropologist of such stature to call the Taung Child a "hominid": an early human. An anonymous article, published in Nature on 15 February 1947, announced Clark's conclusions to a wider public. On that day, Keith, who had been one of Dart's most virulent critics, composed a letter to the editor of Nature announcing that he supported Clark's analysis, agreeing that the new evidence, together with the Taung fossil, indicated that this creature was human-like in its posture, dental elements, and bipedal walk: "I was one of those who took the point of view that when the adult form [of Australopithecus] was discovered it would prove to be near akin to the living African anthropoids – the gorilla and the chimpanzee. I am now convinced... that Prof. Dart was right and that I was wrong. The Australopithecinae are in or near the line which culminated in the human form". As Roger Lewin put it in his book Bones of Contention, "a prompter and more thorough capitulation could hardly be imagined". 
Identification Dart drew conclusions that were unavoidably controversial due to the lack of more fossil evidence at the time. The idea that the skull belonged to a new genus was arrived at by comparison with the skulls of chimpanzees. The Taung skull was larger than a fully grown chimpanzee's. The forehead of the chimpanzee recedes to form a heavy browridge and a jutting jaw; the Taung Child's forehead recedes but leaves no browridge. Its foramen magnum, the opening in the cranium where the spinal cord is continuous with the brain, is beneath the cranium, so the creature must have stood upright. This is an indication of bipedal locomotion. Dean Falk, a specialist in neuroanatomy, noted that Dart had not fully considered certain apelike attributes of Taung. This mainly pertains to the lunate sulcus, which Dart had described as having a human-like placement. Upon further examination, however, Falk determined that these patterns were much more similar to those of an ape's similarly sized brain. This was the subject of great debate, as the sulcus was not clearly visible on the endocast, as is often the case in apes. Ralph Holloway stood in opposition to this idea, as he had long been known as a supporter of Dart's analysis of Taung. He believed that the sulcus would be in the area of the lambdoid suture. Falk, however, believed the sulcus was placed higher on the skull, in a more ape-like manner. Studies surrounding this question have been controversial, as there is no concrete place on the brain where these features can be located. Paleoneurologists are tasked with examining various depressions on endocasts and attempting to determine what they are, and they are often met with skepticism, as Falk was in her continued support of an ape-like placement of the lunate sulcus. Many professionals now believe that the sulcus is not visible in Taung and many other Australopithecus africanus specimens. However, a newer endocast specimen, titled Stw 505, has been examined, and many believe that it supports Dart's hypothesis; this aspect of Taung is still highly debated, and many still believe the sulcus has an ape-like placement. Subsequently, in the Archives of the University of Witwatersrand, Falk unearthed an unpublished manuscript that Dart completed in 1929, which provides a much more thorough description and analysis of the Taung endocast than Dart's earlier announcement in Nature. To Dart's dismay, it was barred from publication in 1931, and it remains unpublished in these archives. In this writing, Falk discovered that she and Dart had come to similar conclusions about the evolutionary process of the brain that Taung indicates. Whereas Dart had identified only two potential sulci on the Taung endocast in 1925, he identified and illustrated 14 additional sulci in this still-unpublished monograph. There, too, Dart detailed how Taung's endocast was expanded globally in three different regions, contrary to the suggestion that he believed hominin brains evolved back-end-first, in a so-called mosaic fashion. This goes against Holloway's interpretation, as he has indicated that the back area of the brain evolved before other regions, but it stands in agreement with Falk's belief that the brain evolved equally, in a coordinated fashion, instead.
Biology and health sciences
Australopithecines
Biology
19210530
https://en.wikipedia.org/wiki/Browsing%20%28herbivory%29
Browsing (herbivory)
Browsing is a type of herbivory in which a herbivore (or, more narrowly defined, a folivore) feeds on leaves, soft shoots, or fruits of high-growing, generally woody plants such as shrubs. This is contrasted with grazing, usually associated with animals feeding on grass or other lower vegetation. Alternatively, grazers are animals eating mainly grass, and browsers are animals eating mainly non-grasses, which include both woody and herbaceous dicots. In either case, an example of this dichotomy is goats (which are primarily browsers) and sheep (which are primarily grazers). Browse The plant material eaten is known as browse and is in nature taken directly from the plant, though owners of livestock such as goats and deer may cut twigs or branches for feeding to their stock. In temperate regions, owners take browse before leaf fall, then dry and store it as a winter feed supplement. In times of drought, herdsmen may cut branches from beyond the reach of their stock to serve as forage at ground level. In tropical regions, where population pressure leads owners to resort to this more often, there is a danger of permanent depletion of the supply. Animals in captivity may be fed browse as a replacement for their wild food sources; in the case of pandas, the browse may consist of bunches of banana leaves, bamboo shoots, slender pine, spruce, fir and willow branches, straw, and native grasses. If the population of browsers grows too high, all of the browse that they can reach may be devoured. The resulting level below which few or no leaves are found is known as the browse line. If over-browsing continues for too long, the ability of the ecosystem's trees to reproduce may be impaired, as young plants cannot survive long enough to grow too tall for browsers to reach. Overbrowsing Overbrowsing occurs when overpopulated or densely concentrated herbivores exert extreme pressure on plants, reducing the carrying capacity and altering the ecological functions of their habitat. Examples of overbrowsing herbivores around the world include koalas in Southern Australia, introduced mammals in New Zealand, and cervids in forests of North America and Europe. Overview Moose exclosures (fenced-off areas) are used to determine the ecological impacts of cervids, allowing scientists to compare flora, fauna, and soil in areas inside and outside of exclosures. Changes in plant communities in response to herbivory reflect the differential palatability of plants to the overabundant herbivore, as well as the variable ability of plants to tolerate high levels of browsing. The heights of plants preferred by herbivores can give indications of the local and regional herbivore density. Compositional and structural changes in forest vegetation can have cascading effects on the entire ecosystem, including impacts on soil quality and stability, micro- and macro-invertebrates, small mammals, songbirds, and perhaps even large predators. Causes There are several causes of overabundant herbivores and subsequent overbrowsing. In some cases, herbivores are introduced to landscapes in which native plants have not evolved to withstand browsing and predators have not adapted to hunt the invading species. In other cases, populations of herbivores exceed historic levels due to reduced hunting or predation pressure. For example, carnivores declined across North America throughout the 19th century while hunting regulations became stricter, contributing to increased cervid populations. 
Also, landscape changes due to human development, such as agriculture and forestry, can produce fragmented forest patches between which deer travel, browsing in early successional habitat at the periphery. Agricultural fields and young silvicultural stands provide deer with high-quality food, leading to overabundance and increased browsing pressure on forest understory plants. Impacts on plants Overbrowsing impacts plants at the individual, population, and community levels. The negative effects of browsing are greater among intolerant species, such as members of the genus Trillium, which have all photosynthetic tissues and reproductive organs at the apex of a single stem. This means that a deer may eat all the reproductive and photosynthetic tissues at once, reducing the plant's height, photosynthetic capabilities, and reproductive output. This is one example of how overbrowsing can lead to the loss of reproductive individuals in a population and a lack of recruitment of young plants. Plants also differ in their palatability to herbivores. At high densities of herbivores, plants that are highly selected as browse may be missing small and large individuals from the population. At the community level, intense browsing by deer in forests leads to reductions in the abundance of palatable understory herbaceous shrubs, and increases in the abundance of graminoids and bryophytes, which are released from competition for light. Browsing pressure and plant palatability The intensity of browsing pressure often varies depending on the palatability of plant species to herbivores. Some plant species may be heavily browsed due to their high palatability, while others may be avoided or less affected. Effects on plant reproduction Browsing can affect plant reproduction by reducing the availability of leaves for photosynthesis and flowers for pollination. Overbrowsing can lead to a decrease in seed production, hinder the recruitment of new individuals, and alter the genetic diversity of plant populations. Impacts on other animals Overbrowsing can change near-ground forest structure, plant species composition, vegetation density, and leaf litter, with consequences for other forest-dwelling animals. Many species of ground-dwelling invertebrates rely on near-ground vegetation cover and leaf litter layers for habitat; these invertebrates may be lost from areas with intense browsing. Further, preferential selection of certain plant species by herbivores can impact invertebrates closely associated with those plants. Migratory forest-dwelling songbirds depend on dense understory vegetation for nesting and foraging habitat; reductions in understory plant biomass caused by deer can lead to declines in forest songbird populations. Finally, loss of understory plant diversity associated with ungulate overbrowsing can impact small mammals that rely on this vegetation for cover and food. Management and recovery Overbrowsing can lead plant communities towards equilibrium states that are only reversible if herbivore numbers are greatly reduced for a sufficient period and actions are taken to restore the original plant communities. Management to reduce deer populations takes a three-part approach: (1) large areas of contiguous old forest with closed canopies are set aside, (2) predator populations are increased, and (3) hunting of the overabundant herbivore is increased. Encouraging tree recovery by promoting seed sources of native trees is an important aspect of managing recovery from overbrowsing. 
Refugia in the form of windthrow mounds, rocky outcrops, or horizontal logs elevated above the forest floor can provide plants with substrate protected from browsing by cervids. These refugia can contain a proportion of the plant community that would exist without browsing pressure, and may differ significantly from the flora found in nearby browsed areas. If management efforts were to reduce cervid populations in the landscape, these refugia could serve as a model for understory recovery in the surrounding plant community.
Biology and health sciences
Ethology
Biology
19217674
https://en.wikipedia.org/wiki/Mawsonia%20%28fish%29
Mawsonia (fish)
Mawsonia is an extinct genus of prehistoric coelacanth fish. It is amongst the largest of all coelacanths, with one quadrate specimen (DGM 1.048-P) possibly belonging to an individual measuring in length. It lived in freshwater and brackish environments from the late Jurassic to the mid-Cretaceous (Kimmeridgian to Cenomanian stages, about 152 to 96 million years ago) of South America, eastern North America, and Africa. Mawsonia was first described by British paleontologist Arthur Smith Woodward in 1907. Description The fish has six fins: two on the top of the body, two on the sides, one at the end of its tail, and one at the bottom of its tail. Rather than having teeth, the inside of the mouth was covered in small (1–2 mm) denticles. It reached at least in length, although a 2021 study suggests one specimen known from a fragmentary quadrate skull bone possibly exceeded . It was rivaled in size among coelacanths only by the related Trachymetopon. A 2024 study suggested that very large size estimates should be treated with caution, as they are based on fragmentary remains and uncertain scaling relationships between skull elements and total body length. Taxonomy The genus was named by Arthur Smith Woodward in 1907, from specimens found in the Early Cretaceous (Hauterivian) aged Ilhas Group of Bahia, Brazil. Fossils have been found on three continents; in South America they have been found in the Bahia Group, Romualdo, Alcântara, Brejo Santo and Missão Velha Formations of Brazil, and the Tacuarembó Formation of Uruguay. In Africa, they are known from the Continental Intercalaire of Algeria and Tunisia, the Ain el Guettar Formation of Tunisia, the Kem Kem Group of Morocco, and the Babouri Figuil Basin of Cameroon, spanning from the Late Jurassic to the early Late Cretaceous. Fossils assigned to Mawsonia have also been found in the Woodbine Formation of Texas, USA, then part of the island continent Appalachia. The type species is Mawsonia gigas, named and described in 1907. Numerous distinct species have been described since then. M. brasiliensis, M. libyca, M. minor, and M. ubangiensis have all been proposed to be synonyms of M. gigas, although Léo Fragoso's 2014 thesis on mawsoniids finds M. brasiliensis valid and cautions against synonymizing M. minor without further examination. Several recent publications consider M. brasiliensis to be valid as well. Although initially considered to belong to this genus, "Mawsonia" lavocati is most likely referable to Axelrodichthys instead. Ecology Mawsonia was native to freshwater and brackish ecosystems. The diet of Mawsonia and its mechanism of feeding are uncertain. It has been suggested that the denticles were used to crush hard-shelled organisms (durophagy) or that prey was swallowed whole using suction feeding.
Biology and health sciences
Prehistoric osteichthyans
Animals
5980254
https://en.wikipedia.org/wiki/Desert%20farming
Desert farming
Desert farming is the practice of developing agriculture in deserts. As agriculture depends upon irrigation and water supply, farming in arid regions where water is scarce is a challenge. However, desert farming has been practiced by humans for thousands of years. In the Negev, there is evidence to suggest agriculture as far back as 5000 BC. Today, the Imperial Valley in southern California, Australia, Saudi Arabia, and Israel are examples of modern desert agriculture. Water efficiency has been important to the growth of desert agriculture. Water reuse, desalination, and drip irrigation are all modern ways that regions and countries have expanded their agriculture despite being in an arid climate. History Humans have been practicing and refining agriculture for millennia. Many of the earliest civilizations, such as ancient Assyria, Ancient Egypt, and the Indus Valley Civilization, were founded in irrigated regions surrounded by desert. As these civilizations grew, the ability to rear crops in the desert became of increasing importance. There are also instances of civilizations that subsisted primarily in the desert with little irrigation or rainfall, such as various western American indigenous tribes. Negev desert Byzantine rule in the 4th century gave rise to agriculture-based cities in the Negev desert, and the population grew exponentially, with water harvesting efforts and general prosperity reaching a peak in the 6th century under Justinian, after which a sharp decline followed. German-Israeli researcher Michael Evenari has shown how novel techniques were developed, such as runoff rainwater collection and management systems, which harvested water from larger areas and directed it onto smaller plots. This allowed the cultivation of plants with much higher water needs than the given arid environment could provide for. Other techniques included wadi terracing, flash-flood dams, and imaginative features used for collecting and directing runoff water. The Tuleilat el-Anab, Arabic for 'grape mounds', i.e. the large piles of rocks dotting the desert, probably served two purposes: removing the rocks from the cultivated plots, and accelerating the erosion and water transportation of topsoil from the runoff collection area onto those plots. A massive rise in grape production in the northwestern Negev for the needs of the wine industry was documented for the early 6th century, by studying ancient trash mounds at the settlements of Shivta, Elusa and Nessana. There is a sharp peak in the presence of grape pips and broken "Gaza jars" used to export local sweet wine and other Levantine goods from the port of Gaza, after a slower rise during the fourth and fifth centuries, followed in the mid-6th century by a sudden fall. Two major calamities occurring in those days struck the empire and large parts of the world: the Late Antique Little Ice Age (536-545), caused by huge volcanic eruptions, resulting in the extreme weather events of 535–536; and the first outbreak of bubonic plague in the Old World, the Justinianic Plague of the 540s. These events likely resulted in a near-cessation of international trade in luxury goods such as Gaza wine, with grape production in the Negev settlements again giving way to subsistence farming focused on barley and wheat. This seems to show that the wine industry of the semiarid Negev could well be sustained over centuries through appropriate agricultural techniques, but that the grape monoculture was economically unsustainable in the long run. 
Native Americans The Native Americans practicing this agriculture included the ancient and no longer present Anasazi, the long-present Hopi, the Tewa people, Zuni people, and many other regional tribes, including the relatively recently arriving (about 1000 to 1400 CE) Navajo. These various tribes were characterized generally by the Spanish occupiers of the region as Sinagua Indians, sinagua meaning "without water", although this term is not applied to the modern Native Americans of the region. Owing to the great dependence upon weather, an element considered to be beyond human control, substantial religious beliefs, rites, and prayer evolved around the growing of crops, and in particular the growing of the four principal corn types of the region, characterized by their colors: red, yellow, blue, and white. The presence of corn as a spiritual symbol can often be seen in the hands of the "Yeh" spirit figures represented in Navajo carpets, in the rituals associated with the "Corn Maiden" and other kachinas of the Hopi, and in various fetish objects of tribes of the region. American Indians in the Sonoran Desert and elsewhere relied both on irrigation and on "Ak-Chin" farming, a type of farming that depended on "washes" (the seasonal floodplains fed by winter snows and summer rains). The Ak-Chin people employed this natural form of irrigation by planting downslope from a wash, allowing floodwaters to slide over their crops. In the Salt River Valley, now characterized by Maricopa County, Arizona, a vast canal system was created and maintained from about 600 AD to 1450 AD. Several hundred miles of canals fed crops of the area surrounding Phoenix, Tempe, Chandler and Mesa, Arizona. However, the intense irrigation increased the salinity of the topsoil, rendering it no longer fit for growing crops. This seems to have contributed to the abandonment of the canals and the adoption of Ak-Chin farming. The ancient canals served as a model for modern irrigation engineers, with the earliest "modern" historic canals being formed largely by cleaning out the Hohokam canals or being laid out over the top of ancient canals. The ancient ruins and canals of the Hohokam Indians were a source of pride to the early settlers, who envisioned their new agricultural society rising as the mythical phoenix bird from the ashes of Hohokam society, hence the name Phoenix, Arizona. The system is especially impressive because it was built without the use of metal implements or the wheel. It took remarkable knowledge of geography and hydrology for ancient engineers to lay out the canals, but it also took remarkable socio-political organization to plan workforce deployment, including meeting the physical needs of laborers and their families as well as maintaining and administering the water resources. Contemporary desert farming Desert agriculture is more important than ever before as the global population rises. Countries and regions that are not water-secure are no exception to increasing population and thus increasing demand for food. The Middle East and North Africa are perhaps the largest example of growing regions with little to no water security or food security. By 2025, it is estimated that 1.8 billion people will live in countries or regions with absolute water scarcity. Israel Agriculture in modern-day Israel has pioneered several techniques for desert agriculture. 
The invention of drip irrigation by Simcha Blass has led to a large expansion of agriculture in arid regions, and in many places drip irrigation is the de facto irrigation technique. Studies have consistently shown large reductions in water use with drip irrigation or fertigation, with one study reporting an 80% decrease in water use and a 100% increase in crop yields. The same study (conducted in a sub-Saharan African village) found that this improved the standard of living in the village by 80%. Another hurdle for many water-scarce nations is the consumption of water; Israel has chosen to focus on wastewater reuse to avoid losing its water resources. The small desert nation reused 86% of its wastewater as of 2011, and 40% of the total water used by agriculture was reclaimed wastewater. Desalinated, brackish, or effluent water also accounts for 44% of Israel's water supply, and the world's largest seawater desalination plant is the Sorek Desalination Plant, located near Tel Aviv. The plant is able to produce 624,000 m3 of water per day (a back-of-the-envelope calculation based on these figures appears at the end of this section). The agricultural output of Israel has increased sevenfold since the country's independence in 1948, and total farmland has increased from 165,000 hectares to 420,000 hectares. The country produces 70% of its own food (in dollar value). Imperial Valley The Imperial Valley is a valley in the Sonoran Desert in southern California that has been farmed for 90 years. Prior to the 20th century, the valley was largely unsettled, apart from a few small settlements in the 19th century. It is supplied with water via the All-American Canal, a canal from the Colorado River. It is estimated that around two-thirds of the vegetables consumed in the winter in the United States originate from the Imperial Valley. Imperial County leads the country in lamb and sheep production. Australia Despite Australia being a largely arid nation, agriculture has been a staple of the Australian economy since its founding. Australia produces cattle, wheat, milk, wool, barley, poultry, lamb, sugar cane, fruits, nuts, and vegetables. Agriculture provides 2.2% of Australia's total employment, and 47% of the total area of Australia is occupied by farms or stations.
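For scale, the figures quoted in this section can be combined in a quick back-of-the-envelope calculation. The inputs come from the text above; the arithmetic is only illustrative.

```python
# Figures from the text above.
daily_output_m3 = 624_000          # Sorek desalination plant, per day
annual_output_m3 = daily_output_m3 * 365
print(f"Sorek annual output: {annual_output_m3 / 1e6:.0f} million m^3")  # ~228

# If drip irrigation cuts water use by 80%, the water that once irrigated
# one hectare could, in principle, irrigate five.
conventional_use = 1.0
drip_use = conventional_use * (1 - 0.80)
print(f"Hectares per unit of water: {conventional_use / drip_use:.0f}")  # 5
```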
Technology
Agriculture_2
null
3350402
https://en.wikipedia.org/wiki/Cuckoo%20bee
Cuckoo bee
The term cuckoo bee is used for a variety of different bee lineages which have evolved the kleptoparasitic behaviour of laying their eggs in the nests of other bees, reminiscent of the behavior of cuckoo birds. The name is perhaps best applied to the apid subfamily Nomadinae, but is sometimes used in Europe to mean bumblebees (Bombus) in the subgenus Psithyrus. Females of cuckoo bees are easy to recognize in almost all cases, as they lack pollen-collecting structures (the scopa) and do not construct their own nests. They often have reduced body hair, abnormally thick and/or heavily sculptured exoskeleton, and saber-like mandibles, although this is not universally true; other less visible changes are also common. The number of times kleptoparasitic behavior has independently evolved within the bees is remarkable; Charles Duncan Michener (2000) lists 16 lineages in which parasitism of social species has evolved (mostly in the family Apidae), and 31 lineages that parasitize solitary hosts (mostly in Apidae, Megachilidae, and Halictidae), collectively representing several thousand species, and therefore a very large proportion of overall bee diversity. There are no cuckoo bees in the families Andrenidae, Melittidae, or Stenotritidae, and possibly the Colletidae (there are only unconfirmed suspicions that one group of Hawaiian Hylaeus species may be parasitic). Cuckoo bees typically enter the nests of pollen-collecting species, and lay their eggs in cells provisioned by the host bee. When the cuckoo bee larva hatches it consumes the host larva's pollen ball, and, if the female kleptoparasite has not already done so, kills and eats the host larva. In a few cases in which the hosts are social species (e.g., the subgenus Psithyrus of the genus Bombus, which are parasitic bumble bees, and infiltrate nests of non-parasitic species of Bombus), the kleptoparasite remains in the host nest and lays many eggs, sometimes even killing the host queen and replacing her – such species are often called social parasites, although a few of them are also what are referred to as "brood parasites." Many cuckoo bees are closely related to their hosts, and may bear similarities in appearance reflecting this relationship. This common pattern gave rise to the ecological principle known as "Emery’s Rule". Others parasitize bees in families different from their own, like Townsendiella, a nomadine apid, one species of which is a kleptoparasite of the melittid genus Hesperapis, whereas the other species in the same genus attack halictid bees.
Biology and health sciences
Hymenoptera
Animals
3352423
https://en.wikipedia.org/wiki/Hydrogen%20fluoride
Hydrogen fluoride
Hydrogen fluoride (fluorane) is an inorganic compound with the chemical formula HF. It is a very poisonous, colorless gas or liquid that dissolves in water to yield hydrofluoric acid. It is the principal industrial source of fluorine, often in the form of hydrofluoric acid, and is an important feedstock in the preparation of many important compounds, including pharmaceuticals and polymers such as polytetrafluoroethylene (PTFE). HF is also widely used in the petrochemical industry as a component of superacids. Due to strong and extensive hydrogen bonding, it boils near room temperature, a much higher temperature than other hydrogen halides. Hydrogen fluoride is an extremely dangerous gas, forming corrosive and penetrating hydrofluoric acid upon contact with moisture. The gas can also cause blindness by rapid destruction of the corneas. History In 1771, Carl Wilhelm Scheele prepared its aqueous solution, hydrofluoric acid, in large quantities, although hydrofluoric acid had been known in the glass industry before then. French chemist Edmond Frémy (1814–1894) is credited with discovering hydrogen fluoride (HF) while trying to isolate fluorine. Structure and reactions HF is diatomic in the gas phase. As a liquid, HF forms relatively strong hydrogen bonds, hence its relatively high boiling point. Solid HF consists of zig-zag chains of HF molecules. The HF molecules, with a short covalent H–F bond of 95 pm length, are linked to neighboring molecules by intermolecular H–F distances of 155 pm. Liquid HF also consists of chains of HF molecules, but the chains are shorter, consisting on average of only five or six molecules. Comparison with other hydrogen halides Hydrogen fluoride does not boil until 20 °C, in contrast to the heavier hydrogen halides, which boil between −85 °C (−120 °F) and −35 °C (−30 °F). This hydrogen bonding between HF molecules gives rise to high viscosity in the liquid phase and lower than expected pressure in the gas phase. Aqueous solutions HF is miscible with water (it dissolves in any proportion). In contrast, the other hydrogen halides exhibit limiting solubilities in water. Hydrogen fluoride forms a monohydrate HF·H2O with melting point −40 °C (−40 °F), which is 44 °C (79 °F) above the melting point of pure HF. Aqueous solutions of HF are called hydrofluoric acid. When dilute, hydrofluoric acid behaves like a weak acid, unlike the other hydrohalic acids, due to the formation of hydrogen-bonded ion pairs [H3O+·F−]. However, concentrated solutions are strong acids, because bifluoride anions are predominant instead of ion pairs. In liquid anhydrous HF, self-ionization occurs: 3 HF ⇌ H2F+ + HF2−, which forms an extremely acidic liquid. Reactions with Lewis acids Like water, HF can act as a weak base, reacting with Lewis acids to give superacids. A Hammett acidity function (H0) of −21 is obtained with antimony pentafluoride (SbF5), forming fluoroantimonic acid. Production Hydrogen fluoride is typically produced by the reaction between sulfuric acid and pure grades of the mineral fluorite: CaF2 + H2SO4 → 2 HF + CaSO4 (a worked stoichiometric example appears at the end of this article). About 20% of manufactured HF is a byproduct of fertilizer production, which generates hexafluorosilicic acid. This acid can be degraded to release HF thermally and by hydrolysis: H2SiF6 → 2 HF + SiF4, followed by SiF4 + 2 H2O → 4 HF + SiO2. Use In general, anhydrous hydrogen fluoride is more common industrially than its aqueous solution, hydrofluoric acid. Its main uses, on a tonnage basis, are as a precursor to organofluorine compounds and a precursor to cryolite for the electrolysis of aluminium. 
Precursor to organofluorine compounds HF reacts with chlorocarbons to give fluorocarbons. An important application of this reaction is the production of tetrafluoroethylene (TFE), precursor to Teflon. Chloroform is fluorinated by HF to produce chlorodifluoromethane (R-22): CHCl3 + 2 HF → CHClF2 + 2 HCl. Pyrolysis of chlorodifluoromethane (at 550–750 °C) yields TFE. HF is a reactive solvent in the electrochemical fluorination of organic compounds. In this approach, HF is oxidized in the presence of a hydrocarbon and the fluorine replaces C–H bonds with C–F bonds. Perfluorinated carboxylic acids and sulfonic acids are produced in this way. 1,1-Difluoroethane is produced by adding HF to acetylene using mercury as a catalyst. The intermediate in this process is vinyl fluoride (fluoroethylene), the monomeric precursor to polyvinyl fluoride. Precursor to metal fluorides and fluorine The electrowinning of aluminium relies on the electrolysis of aluminium fluoride in molten cryolite. Several kilograms of HF are consumed per ton of Al produced. Other metal fluorides are produced using HF, including uranium tetrafluoride. HF is the precursor to elemental fluorine, F2, by electrolysis of a solution of HF and potassium bifluoride. The potassium bifluoride is needed because anhydrous HF does not conduct electricity. Several thousand tons of F2 are produced annually. Catalyst HF serves as a catalyst in alkylation processes in refineries. It is used in the majority of the installed linear alkylbenzene production facilities in the world. The process involves dehydrogenation of n-paraffins to olefins, and subsequent reaction with benzene using HF as catalyst. For example, in oil refineries "alkylate", a component of high-octane petrol (gasoline), is generated in alkylation units, which combine C3 and C4 olefins and isobutane. Solvent Hydrogen fluoride is an excellent solvent. Reflecting the ability of HF to participate in hydrogen bonding, even proteins and carbohydrates dissolve in HF and can be recovered from it. In contrast, most non-fluoride inorganic chemicals react with HF rather than dissolving. Health effects Hydrogen fluoride is highly corrosive and a powerful contact poison. Exposure requires immediate medical attention. It can cause blindness by rapid destruction of the corneas. Breathing in hydrogen fluoride at high levels or in combination with skin contact can cause death from an irregular heartbeat or from pulmonary edema (fluid buildup in the lungs).
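As a worked example of the production reaction given above (CaF2 + H2SO4 → 2 HF + CaSO4), the sketch below computes the HF mass obtainable from a given mass of fluorite. The molar masses are standard values; the efficiency parameter is a hypothetical knob for illustration, not an industrial figure.

```python
# Molar masses in g/mol, rounded standard values
M_CAF2 = 78.07   # fluorite, CaF2
M_HF = 20.01     # hydrogen fluoride

def hf_yield_kg(fluorite_kg: float, efficiency: float = 1.0) -> float:
    """Mass of HF from CaF2 + H2SO4 -> 2 HF + CaSO4 at the given
    (hypothetical) conversion efficiency."""
    moles_caf2 = fluorite_kg * 1000 / M_CAF2  # kg -> g -> mol
    moles_hf = 2 * moles_caf2                 # 2 mol HF per mol CaF2
    return moles_hf * M_HF / 1000 * efficiency

print(f"{hf_yield_kg(100):.1f} kg HF from 100 kg fluorite")  # ~51.3 kg
```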
Physical sciences
Hydrogen compounds
Chemistry
3352536
https://en.wikipedia.org/wiki/Exotic%20star
Exotic star
An exotic star is a hypothetical compact star composed of exotic matter (something not made of electrons, protons, neutrons, or muons), and balanced against gravitational collapse by degeneracy pressure or other quantum properties. Types of exotic stars include quark stars (composed of quarks), strange stars (composed of strange quark matter, a condensate of up, down, and strange quarks), and preon stars (composed of speculative material made of preons, which are hypothetical particles and "building blocks" of quarks and leptons, should quarks be decomposable into component sub-particles). Of the various types of exotic star proposed, the most well evidenced and understood is the quark star, although its existence is not confirmed. In Newtonian mechanics, objects dense enough to trap any emitted light are called dark stars, as opposed to black holes in general relativity. However, the same name is used for hypothetical ancient "stars" that derived energy from dark matter. Exotic stars are hypothetical – partly because it is difficult to test in detail how such forms of matter may behave, and partly because, prior to the fledgling technology of gravitational-wave astronomy, there was no satisfactory means of detecting compact astrophysical objects that do not radiate electromagnetically or through known particles. While candidate objects are occasionally identified based on indirect evidence, it is not yet possible to distinguish their observational signatures from those of known objects. Quark stars and strange stars A quark star is a hypothesized object that results from the decomposition of neutrons into their constituent up and down quarks under gravitational pressure. It is expected to be smaller and denser than a neutron star, and may survive in this new state indefinitely, if no extra mass is added. Effectively, it is a single, very large hadron. Quark stars that contain strange matter are called strange stars. Based on observations released by the Chandra X-Ray Observatory on 10 April 2002, two objects, named RX J1856.5−3754 and 3C58, were suggested as quark star candidates. The former appeared to be much smaller and the latter much colder than expected for a neutron star, suggesting that they were composed of material denser than neutronium. However, these observations were met with skepticism by researchers who said the results were not conclusive. After further analysis, RX J1856.5−3754 was excluded from the list of quark star candidates. Electroweak stars An electroweak star is a hypothetical type of exotic star in which the gravitational collapse of the star is prevented by radiation pressure resulting from electroweak burning; that is, the energy released by the conversion of quarks into leptons through the electroweak force. This proposed process might occur in a volume at the star's core approximately the size of an apple, containing about two Earth masses, and reaching temperatures on the order of 10¹⁵ K (1 PK); a rough compactness check appears at the end of this article. Electroweak stars could be identified through the equal numbers of neutrinos of all three generations that they emit, taking into account neutrino oscillation. Preon stars A preon star is a proposed type of compact star made of preons, a group of hypothetical subatomic particles. Preon stars would be expected to have huge densities, exceeding  kg/m3. They may have greater densities than quark stars, and they would be heavier but smaller than white dwarfs and neutron stars. Preon stars could originate from supernova explosions or the Big Bang. 
Such objects could in principle be detected through gravitational lensing of gamma rays. Preon stars are a potential candidate for dark matter. However, current observations from particle accelerators speak against the existence of preons, or at least do not prioritize their investigation, since the only particle detector presently able to explore very high energies (the Large Hadron Collider) is not designed specifically for this and its research program is directed towards other areas, such as studying the Higgs boson, quark–gluon plasma, and evidence related to physics beyond the Standard Model. Boson stars A boson star is a hypothetical astronomical object formed out of particles called bosons (conventional stars are formed mostly from protons and electrons, which are fermions, but also contain a large proportion of helium-4 nuclei, which are bosons, and smaller amounts of various heavier nuclei, which can be either). For this type of star to exist, there must be a stable type of boson with self-repulsive interaction; one possible candidate particle is the still-hypothetical "axion" (which is also a candidate for the not-yet-detected "non-baryonic dark matter" particles, which appear to compose roughly 25% of the mass of the Universe). It is theorized that, unlike normal stars (which emit radiation due to gravitational pressure and nuclear fusion), boson stars would be transparent and invisible. The immense gravity of a compact boson star would bend light around the object, creating an empty region resembling the shadow of a black hole's event horizon. Like a black hole, a boson star would absorb ordinary matter from its surroundings, but because of the transparency, this matter (which would probably heat up and emit radiation) would be visible at its center. Simulations suggest that rotating boson stars would be torus-shaped, as centrifugal forces would give the bosonic matter that shape. There is no significant evidence that such stars exist. However, it may become possible to detect them by the gravitational radiation emitted by a pair of co-orbiting boson stars. GW190521, thought to be the most energetic black hole merger ever recorded, may instead have been the head-on collision of two boson stars. The invisible companion to a Sun-like star identified by the Gaia mission could be a black hole, a boson star, or another type of exotic star. Boson stars may have formed through gravitational collapse during the primordial stages of the Big Bang. At least in theory, a supermassive boson star could exist at the core of a galaxy, which might explain many of the observed properties of active galactic cores. Boson stars have also been proposed as candidate dark matter objects, and it has been hypothesized that the dark matter haloes surrounding most galaxies might be viewed as enormous "boson stars." Compact boson stars and boson shells are often studied using massive (or massless) complex scalar fields coupled to a U(1) gauge field and gravity, with a conical potential. The presence of a positive or negative cosmological constant in the theory facilitates the study of these objects in de Sitter and anti-de Sitter spaces. Boson stars composed of elementary particles with spin 1 have been labelled Proca stars. Braaten, Mohapatra, and Zhang have theorized that a new type of dense axion star may exist in which gravity is balanced by the mean-field pressure of the axion Bose–Einstein condensate.
The possibility that dense axion stars exist has been challenged by other work that does not support this claim. Planck stars In loop quantum gravity, a Planck star is a hypothetical astronomical object that is created when the energy density of a collapsing star reaches the Planck energy density. Under these conditions, assuming gravity and spacetime are quantized, there arises a repulsive "force" derived from Heisenberg's uncertainty principle. In other words, if gravity and spacetime are quantized, the accumulation of mass-energy inside the Planck star cannot collapse beyond this limit to form a gravitational singularity, because doing so would violate the uncertainty principle for spacetime itself. Q-stars Q-stars are hypothetical objects that originate from supernovae or the Big Bang. They are theorized to be massive enough to bend space-time to such a degree that some, but not all, light can escape from their surfaces. They are predicted to be denser than neutron stars or even quark stars.
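The light-trapping criterion invoked above for Newtonian dark stars and Q-stars reduces to elementary gravity: an object traps light once its escape velocity reaches the speed of light, equivalently once its radius shrinks inside the Schwarzschild radius r_s = 2GM/c². The following Python sketch illustrates the arithmetic; the constants are standard physical values, but the two-solar-mass, 10 km example object is purely illustrative and not a figure from this article.

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8            # speed of light, m/s
SOLAR_MASS = 1.989e30  # kg

def escape_velocity(mass_kg, radius_m):
    """Newtonian escape velocity from the surface, in m/s."""
    return (2 * G * mass_kg / radius_m) ** 0.5

def schwarzschild_radius(mass_kg):
    """Radius below which not even light escapes, in m."""
    return 2 * G * mass_kg / C ** 2

# Illustrative compact object: two solar masses within a 10 km radius.
m, r = 2 * SOLAR_MASS, 10_000.0
print(f"escape velocity: {escape_velocity(m, r) / C:.2f} c")             # ~0.77 c
print(f"Schwarzschild radius: {schwarzschild_radius(m) / 1000:.1f} km")  # ~5.9 km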
Physical sciences
Stellar astronomy
Astronomy
12502446
https://en.wikipedia.org/wiki/Fairchild%20Republic%20A-10%20Thunderbolt%20II
Fairchild Republic A-10 Thunderbolt II
The Fairchild Republic A-10 Thunderbolt II is a single-seat, twin-turbofan, straight-wing, subsonic attack aircraft developed by Fairchild Republic for the United States Air Force (USAF). In service since 1977, it is named after the Republic P-47 Thunderbolt, but is commonly referred to as the "Warthog" or simply "Hog". The A-10 was designed to provide close air support (CAS) to ground troops by attacking enemy armored vehicles, tanks, and other ground forces; it is the only production-built aircraft designed solely for CAS to have served with the U.S. Air Force. Its secondary mission is to direct other aircraft in attacks on ground targets, a role called forward air controller (FAC)-airborne; aircraft used primarily in this role are designated OA-10. The A-10 was intended to improve on the performance and firepower of the Douglas A-1 Skyraider. The Thunderbolt II's airframe was designed around the high-power 30 mm GAU-8 Avenger rotary autocannon. The airframe was designed for durability, with measures such as titanium armor to protect the cockpit and aircraft systems, enabling it to absorb damage and continue flying. Its ability to take off and land from relatively short or unpaved runways permits operation from airstrips close to the front lines, and its simple design enables maintenance with minimal facilities. It served in the Gulf War (Operation Desert Storm), the American-led intervention against Iraq's invasion of Kuwait, where the aircraft distinguished itself. The A-10 also participated in other conflicts, including in the Balkans, Afghanistan, and the Iraq War, and against the Islamic State in the Middle East. The A-10A single-seat variant was the only version produced, though one pre-production airframe was modified into the YA-10B twin-seat prototype to test an all-weather night-capable version. In 2005, a program was started to upgrade the remaining A-10A aircraft to the A-10C configuration, with modern avionics for use with precision weaponry. The U.S. Air Force had stated that the Lockheed Martin F-35 Lightning II would replace the A-10 as it entered service, but this remains highly contentious within the USAF and in political circles. The USAF gained congressional permission to start retiring A-10s in 2023, but further retirements were paused until the USAF can demonstrate that the A-10's close-air-support capabilities can be replaced. Development Background The development of conventionally armed attack aircraft in the United States stagnated after World War II, as design efforts for tactical aircraft focused on the delivery of nuclear weapons using high-speed designs such as the McDonnell F-101 Voodoo and Republic F-105 Thunderchief. As the U.S. military entered the Vietnam War, its main ground-attack aircraft was the Korean War-era Douglas A-1 Skyraider. A capable aircraft for its era, with a relatively large payload and long loiter time, the propeller-driven design had become slow, vulnerable (particularly to ground fire), and incapable of providing adequate firepower. The U.S. Air Force and Navy lost some 266 A-1s in action in Vietnam, largely to small-arms fire. The lack of modern conventional attack capability prompted calls for a specialized attack aircraft. On 7 June 1961, Secretary of Defense Robert McNamara ordered the USAF to develop two tactical aircraft: one for the long-range strike and interdictor role, and the other focused on the fighter-bomber mission.
The former was the Tactical Fighter Experimental (TFX), intended to be a common design for the USAF and the U.S. Navy, which emerged as the General Dynamics F-111 Aardvark, while the second was filled by a version of the U.S. Navy's McDonnell Douglas F-4 Phantom II. While the Phantom went on to be one of the most successful fighter designs of the 1960s and proved to be a capable fighter-bomber, its short loiter time was a major problem, as, to a lesser extent, was its poor low-speed performance. It was also expensive to buy and operate, with a flyaway cost of $2 million in FY1965 and operational costs of over $900 per hour. After a broad review of its tactical force structure, the USAF decided to adopt a low-cost aircraft to supplement the F-4 and F-111. It first focused on the Northrop F-5, which had air-to-air capability. A 1965 cost-effectiveness study shifted the focus from the F-5 to the less expensive A-7D variant of the LTV A-7 Corsair II, and a contract was awarded. However, the aircraft's cost doubled amid demands for an upgraded engine and new avionics. Army helicopter competition During this period, the United States Army had been introducing the Bell UH-1 Iroquois into service. First used in its intended role as a transport, it was soon modified in the field to carry more machine guns in what became known as the helicopter gunship role. This proved effective against the lightly armed enemy, and new gun and rocket pods were added. Soon the Bell AH-1 Cobra was introduced, an attack helicopter armed with long-range BGM-71 TOW missiles able to destroy tanks from outside the range of defensive fire. The helicopter was effective and prompted the U.S. military to change its defensive strategy in Europe to one of blunting any Warsaw Pact advance with anti-tank helicopters instead of the tactical nuclear weapons that had been the basis for NATO's battle plans since the 1950s. The Cobra was a quickly developed derivative of the UH-1 Iroquois, introduced in the mid-1960s as an interim design until the U.S. Army's "Advanced Aerial Fire Support System" helicopter could be delivered. For initial production, the Army selected the Lockheed AH-56 Cheyenne, a faster and more capable attack helicopter. The development of the anti-tank helicopter concerned the USAF; a 1966 USAF study of existing close air support (CAS) capabilities revealed gaps in the escort and fire suppression roles that the Cheyenne could fill. The study concluded that the service should acquire a simple, inexpensive, dedicated CAS aircraft at least as capable as the A-1, and that it should develop doctrine, tactics, and procedures for such aircraft to accomplish the missions for which the attack helicopters were provided. A-X program On 8 September 1966, General John P. McConnell, Chief of Staff of the USAF, ordered that a specialized CAS aircraft be designed, developed, and obtained. On 22 December, a Requirements Action Directive was issued for the A-X CAS airplane, and the Attack Experimental (A-X) program office was formed. On 6 March 1967, the USAF released a request for information on the A-X to 21 defense contractors. In May 1970, the USAF issued a modified, more detailed request for proposals (RFP) for the aircraft, as the threat of Soviet armored forces and all-weather attack operations had become more serious. The requirements now included that the aircraft be designed specifically around the 30 mm rotary cannon.
The RFP also specified requirements for maximum speed, takeoff distance, external load, and mission radius, and a unit cost of US$1.4 million. The A-X would be the first USAF aircraft designed exclusively for CAS. During this time, a separate RFP was released for the A-X's 30 mm cannon, with requirements for a high rate of fire (4,000 rounds per minute) and a high muzzle velocity. Six companies submitted aircraft proposals, with Northrop and Fairchild Republic of Germantown, Maryland, selected to build prototypes: the YA-9A and YA-10A, respectively. General Electric and Philco-Ford were selected to build and test GAU-8 cannon prototypes. Two YA-10 prototypes were built in the Republic factory in Farmingdale, New York, and first flown on 10 May 1972 by pilot Howard "Sam" Nelson. Production A-10s were built by Fairchild in Hagerstown, Maryland. After trials and a fly-off against the YA-9, the USAF announced the YA-10's selection for production on 18 January 1973. General Electric was selected to build the GAU-8 cannon in June 1973. The YA-10 had an additional fly-off in 1974 against the Ling-Temco-Vought A-7D Corsair II, the principal USAF attack aircraft at the time, to prove the need for a new attack aircraft. The first production A-10 flew in October 1975, and deliveries commenced in March 1976. One experimental two-seat A-10 Night Adverse Weather (N/AW) version was built by Fairchild, by converting the first Demonstration Testing and Evaluation (DT&E) A-10A, for consideration by the USAF. It included a second seat for a weapon systems officer responsible for electronic countermeasures (ECM), navigation, and target acquisition. The N/AW version did not interest the USAF or export customers. A two-seat trainer version was ordered by the USAF in 1981, but funding was canceled by the U.S. Congress and it was not produced. The only two-seat A-10 resides at Edwards Air Force Base's Flight Test Center Museum. Production On 10 February 1976, Deputy Secretary of Defense Bill Clements authorized full-rate production, and the first A-10 was accepted by the USAF Tactical Air Command on 30 March 1976. Production continued and reached a peak rate of 13 aircraft per month. By 1984, 715 airplanes, including two prototypes and six development aircraft, had been delivered. When full-rate production was first authorized, the A-10's planned service life was 6,000 hours. A small design reinforcement was quickly adopted after the airframe failed initial fatigue testing at 80% of the planned test duration; with the fix, the A-10 passed. Because 8,000-flight-hour service lives were becoming common at the time, fatigue testing of the A-10 continued against a new 8,000-hour target. Testing to this target quickly revealed serious cracks at Wing Station 23 (WS23), where the outboard portions of the wings are joined to the fuselage. The first production change addressed this problem by adding cold working at WS23. Soon after, the USAF found that real-world fleet fatigue was harsher than estimated, forcing a change to fatigue testing and the introduction of "spectrum 3" equivalent flight-hour testing. Spectrum 3 fatigue testing started in 1979 and quickly determined that more drastic reinforcement would be needed. The second change in production, starting with aircraft No. 442, was to increase the thickness of the lower skin on the outer wing panels.
A tech order was issued to retrofit the "thick skin" to the whole fleet, but it was rescinded after roughly 242 planes, leaving about 200 planes with the original "thin skin". Starting with aircraft No. 530, cold working was performed at WS0, and this change was also retrofitted to earlier aircraft. A fourth, even more drastic change was initiated with aircraft No. 582, again to address the problems discovered with spectrum 3 testing. This change increased the thickness of the lower skin on the center wing panel, but required modifications to the lower spar caps to accommodate the thicker skin. The USAF found it economically unfeasible to retrofit earlier planes with this modification. Upgrades The A-10 has received many upgrades since entering service. In 1978, it received the Pave Penny laser receiver pod, mounted on a pylon attached below the right side of the cockpit, which receives reflected laser radiation from laser designators to allow the aircraft to deliver laser-guided munitions. In 1980, the A-10 began receiving an inertial navigation system. In the early 1990s, the A-10 began to receive the Low-Altitude Safety and Targeting Enhancement (LASTE) upgrade, which provided computerized weapon-aiming equipment, an autopilot, and a ground-collision warning system. In 1999, aircraft began receiving Global Positioning System navigation systems and a multi-function display. The LASTE system was upgraded with an Integrated Flight & Fire Control Computer (IFFCC). Proposed further upgrades included integrated combat search and rescue locator systems and improved early warning and anti-jam self-protection systems; the USAF also recognized that the A-10's engine power was sub-optimal, and had planned since at least 2001 to replace the engines with more powerful ones at an estimated cost of $2 billion. HOG UP and Wing Replacement Program In 1987, Grumman Aerospace took over support for the A-10 program. In 1993, Grumman updated the damage tolerance assessment, the Force Structural Maintenance Plan, and the Damage Threat Assessment. Over the next few years, problems with wing structure fatigue, first noticed in production years earlier, began to come to the fore. Implementation of the maintenance plan was greatly delayed by the base realignment and closure (BRAC) process, which led to 80% of the original workforce being let go. During inspections in 1995 and 1996, cracks at the WS23 location were found on many A-10s; while many were in line with updated predictions from 1993, two were classified as "near-critical" in size, well beyond predictions. In August 1998, Grumman produced a new plan to address these issues and increase the life span to 16,000 hours. This led to the "HOG UP" program, which commenced in 1999. Additional aspects were added to HOG UP over time, including new fuel bladders, flight control system changes, and engine nacelle inspections. In 2001, the cracks were reclassified as "critical", meaning the associated work was considered repair rather than upgrade, which allowed normal acquisition channels to be bypassed for more rapid implementation. An independent review of the HOG UP program, presented in September 2003, concluded that the data on which the wing upgrade relied could no longer be trusted. Shortly thereafter, a test wing failed fatigue testing prematurely, and mounting problems with wings failing in-service inspections at an increasing rate became apparent. The USAF estimated that it would run out of serviceable wings by 2011.
In 2005, a business case was produced with three options to extend the fleet's life. The first two options involved expanding the service life extension program (SLEP), at a cost of $4.6 billion and $3.16 billion, respectively. The third option, to build 242 new wings and avoid the need to expand the SLEP, was the least expensive of the plans explored, at an initial cost of $741 million and a total cost of $1.72 billion over the program's life. In 2006, option 3 was chosen and Boeing won the contract. The base contract was for 117 wings, with options for 125 additional wings. In 2013, the USAF exercised a portion of the option to add 56 wings, putting 173 wings on order with options remaining for 69 more. In November 2011, two A-10s flew with the new wings fitted. The new wings improved mission readiness, decreased maintenance costs, and allowed the A-10 to be operated up to 2035 if necessary. Re-winging work was organized under the Thick-skin Urgent Spares Kitting (TUSK) Program. In 2014, as part of plans to retire the A-10, the USAF considered halting the wing replacement program to save an additional $500 million; however, by May 2015 the re-winging program was too far along for cancellation to be financially efficient. Boeing stated in February 2016 that the A-10 could operate until 2040 with the new TUSK wings. Modernization (A-10C) From 2005 to June 2011, the entire fleet of 356 A-10s and OA-10s was modernized under the Precision Engagement (PE) program and redesignated A-10C. Upgrades included all-weather combat capability, an improved fire-control system (FCS), electronic countermeasures (ECM), smart bomb targeting, a modern communications suite including a Link 16 radio and Satcom, and cockpit upgrades comprising two multifunction displays and a HOTAS configuration mixing the F-16's flight stick with the F-15's throttle. The Government Accountability Office in 2007 estimated the cost of the upgrading, refurbishing, and service life extension plans at a total of $2.25 billion through 2013. In July 2010, the USAF awarded Raytheon a contract to integrate a Helmet Mounted Integrated Targeting (HMIT) system into the A-10C. The LASTE system was replaced with the integrated flight and fire control computer (IFFCC) included in the PE upgrade. Throughout the aircraft's life, multiple software upgrades have been made. While this work was to be stopped under plans to retire the A-10 in February 2014, Secretary of the Air Force Deborah Lee James ordered that the latest upgrade, designated Suite 8, continue in response to congressional pressure. Suite 8 software includes IFF Mode 5, which modernizes the ability to identify the A-10 to friendly units. Additionally, the Pave Penny pods and pylons were removed, as their receive-only capability has been replaced by the AN/AAQ-28(V)4 LITENING AT and Sniper XR targeting pods, which both have laser designators and laser rangefinders. In 2012, Air Combat Command requested the testing of an external fuel tank which would extend the A-10's loitering time by 45–60 minutes; flight testing of such a tank had been conducted in 1997 but did not involve combat evaluation. Over 30 flight tests were conducted by the 40th Flight Test Squadron to gather data on the aircraft's handling characteristics and performance across different load configurations. It was reported that the tank slightly reduced stability in the yaw axis, but there was no decrease in aircraft tracking performance.
Design Overview The A-10 is a cantilever low-wing monoplane with a wide-chord wing. It has superior maneuverability at low speeds and altitudes owing to its large wing area, high wing aspect ratio, and large ailerons. The wing also allows short takeoffs and landings, permitting operations from austere forward airfields near front lines. The A-10 can loiter for extended periods and operate under low ceilings in poor visibility. It typically flies at a relatively low speed, which makes it a better platform for the ground-attack role than fast fighter-bombers, which often have difficulty targeting small, slow-moving targets. The leading edge of the wing has a honeycomb-panel construction, providing strength with minimal weight; similar panels cover the flap shrouds, elevators, rudders, and sections of the fins. The skin panels are integral with the stringers and are fabricated using computer-controlled machining, reducing production time and cost. Combat experience has shown that this type of panel is more resistant to damage. The skin is not load-bearing, so damaged skin sections can be easily replaced in the field, with makeshift materials if necessary. The ailerons are at the far ends of the wings for greater rolling moment and have two distinguishing features: they are larger than is typical, almost 50 percent of the wingspan, providing improved control even at slow speeds; and each aileron is split, making it a deceleron. The A-10 is designed to be refueled, rearmed, and serviced with minimal equipment, and its simple design enables maintenance at forward bases with limited facilities. An unusual feature is that many of the aircraft's parts are interchangeable between the left and right sides, including the engines, main landing gear, and vertical stabilizers. The sturdy landing gear, low-pressure tires, and large, straight wings allow operation from short, rough strips even with a heavy ordnance load, allowing the aircraft to operate from damaged airbases, taxiways, or even straight roadway sections. The front landing gear is offset to the aircraft's right to allow placement of the 30 mm cannon with its firing barrel along the centerline of the aircraft. During ground taxi, the offset front landing gear gives the A-10 dissimilar turning radii: turning right on the ground takes less distance than turning left. The wheels of the main landing gear partially protrude from their nacelles when retracted, making gear-up belly landings easier to control and less damaging. All landing gear retract forward; if hydraulic power is lost, a combination of gravity and aerodynamic drag can lower and lock the gear in place. Survivability The A-10 is able to survive direct hits from armor-piercing and high-explosive projectiles of up to 23 mm. It has double-redundant hydraulic flight systems, with a mechanical system as a backup if hydraulics are lost. Flight without hydraulic power uses the manual reversion control system; pitch and yaw control engage automatically, and roll control is pilot-selected. In manual reversion mode, the A-10 is sufficiently controllable under favorable conditions to return to base, though control forces are greater than normal. It is designed to be able to fly with one engine, half of the tail, one elevator, and half of a wing missing.
As the A-10 operates close to enemy positions, making it an easy target for man-portable air-defense systems (MANPADS), surface-to-air missiles (SAMs), and enemy aircraft, it carries both flares and chaff cartridges. The cockpit and parts of the flight-control systems are protected by titanium aircraft armor, referred to as a "bathtub". The armor has been tested to withstand strikes from cannon fire and some indirect hits from shell fragments. It is made up of titanium plates whose thicknesses were determined by a study of likely trajectories and deflection angles. The armor makes up almost six percent of the A-10's empty weight. Any interior surface of the tub directly exposed to the pilot is covered by a multi-layer nylon spall shield to protect against shell fragmentation, and the front windscreen and canopy are resistant to small-arms fire. This durability was demonstrated on 7 April 2003 when Captain Kim Campbell, flying over Baghdad during the 2003 invasion of Iraq, suffered extensive flak damage that damaged one engine and crippled the hydraulic system, requiring the stabilizer and flight controls to be operated via manual reversion mode. Despite this, Campbell's A-10 flew for nearly an hour and landed safely. The A-10 was intended to fly from forward air bases and semi-prepared runways, where the risk of foreign object damage to an aircraft's engines is normally high. The unusual location of the General Electric TF34-GE-100 turbofan engines decreases ingestion risk and also allows the engines to run while the aircraft is serviced and rearmed by ground crews, reducing turn-around time. The wings are also mounted closer to the ground, simplifying servicing and rearming operations. The heavy engines require strong support: four bolts connect the engine pylons to the airframe. The engines' high 6:1 bypass ratio contributes to a relatively small infrared signature, and their position directs exhaust over the tailplanes, further shielding it from detection by infrared homing surface-to-air missiles. To reduce the likelihood of damage to the fuel system, all four fuel tanks are located near the aircraft's center and are separated from the fuselage, so projectiles would need to penetrate the aircraft's skin before reaching a tank's outer skin. Compromised fuel transfer lines self-seal; if damage exceeds a tank's self-sealing capabilities, check valves prevent fuel from flowing into the compromised tank. Most fuel system components are inside the tanks so that component failure will not lead to fuel loss, and the refueling system is purged after use. Reticulated polyurethane foam lines both the inner and outer sides of the fuel tanks, retaining debris and restricting fuel spillage in the event of damage. The engines are shielded from the rest of the airframe by firewalls and fire-extinguishing equipment. If all four main tanks were lost, two self-sealing sump tanks contain fuel for 230 miles (370 km) of flight. Weapons The A-10's primary built-in weapon is the 30×173 mm GAU-8/A Avenger autocannon, one of the most powerful aircraft cannons ever flown. The GAU-8 is a hydraulically driven seven-barrel rotary cannon designed for the anti-tank role with a high rate of fire. The original design allowed the pilot to select a rate of 2,100 or 4,200 depleted uranium armor-piercing rounds per minute; this was later changed to a fixed rate of 3,900 rounds per minute.
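The firing-rate figure above, together with the spin-up behavior and drum capacity described in the following paragraph, implies some simple ammunition arithmetic. The Python sketch below illustrates it using only numbers quoted in this article (a 3,900 rounds-per-minute fixed rate, roughly 50 rounds in the first second of a burst, about 65 per second thereafter, and a typical 1,174-round load); it is an illustration of the quoted figures, not a model of the actual gun control system.

FIXED_RATE_RPM = 3_900                 # fixed rate after the design change
STEADY_RATE_RPS = FIXED_RATE_RPM / 60  # = 65 rounds per second
FIRST_SECOND_ROUNDS = 50               # reduced output while barrels spin up
TYPICAL_LOAD = 1_174                   # rounds usually carried in the drum

def rounds_fired(burst_seconds):
    """Approximate rounds expended in one burst of the given length."""
    if burst_seconds <= 1.0:
        return FIRST_SECOND_ROUNDS * burst_seconds
    return FIRST_SECOND_ROUNDS + STEADY_RATE_RPS * (burst_seconds - 1.0)

def bursts_per_load(burst_seconds):
    """How many such bursts a typical combat load supports."""
    return int(TYPICAL_LOAD // rounds_fired(burst_seconds))

for burst in (1.0, 2.0, 3.0):
    print(f"{burst:.0f} s burst: ~{rounds_fired(burst):.0f} rounds, "
          f"{bursts_per_load(burst)} bursts per load")

A two-second burst, for example, expends roughly 115 rounds, so a typical load supports about ten such bursts.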
The cannon takes about half a second to spin up to its maximum rate of fire, firing 50 rounds during the first second and 65 to 70 rounds per second thereafter. It is accurate enough to place 80 percent of its shots within a 40-foot (12.2 m) diameter circle from 4,000 feet (1,220 m) while in flight. The GAU-8 is optimized for this slant range, with the A-10 in a 30-degree dive. The aircraft's fuselage was designed around the cannon: the GAU-8 is mounted slightly to the port side, so that the barrel in the firing position sits on the starboard side, aligned with the aircraft's centerline. The gun's 5-foot-11.5-inch (1.816 m) ammunition drum can hold up to 1,350 rounds of 30 mm ammunition, but generally holds 1,174 rounds. To protect the rounds from enemy fire, armor plates of differing thicknesses between the aircraft skin and the drum are designed to detonate incoming shells. The A-10 commonly carries the AGM-65 Maverick air-to-surface missile. Targeted via electro-optical (TV-guided) or infrared systems, the Maverick can hit targets much farther away than the cannon can, and thus incurs less risk from anti-aircraft systems. During Desert Storm, in the absence of dedicated forward-looking infrared (FLIR) cameras for night vision, the Maverick's infrared camera was used for night missions as a "poor man's FLIR". Other weapons include cluster bombs and Hydra 70 rocket pods. The A-10 is equipped to carry GPS- and laser-guided bombs, such as the GBU-39 Small Diameter Bomb, Paveway series bombs, Joint Direct Attack Munitions (JDAM), the Wind Corrected Munitions Dispenser, and AGM-154 Joint Standoff Weapon glide bombs. A-10s usually fly with an ALQ-131 electronic countermeasures (ECM) pod under one wing and two AIM-9 Sidewinder air-to-air missiles for self-defense under the other. Colors and markings Aircraft camouflage is used to make the A-10 more difficult to see as it flies low to the ground at subsonic speeds. Many types of paint scheme have been tried, including a "peanut scheme" of sand, yellow, and field drab; black and white colors for winter operations; and a tan, green, and brown mixed pattern. The most common Cold War-era scheme was the European I woodland camouflage, whose dark green, medium green, and dark gray were meant to blend in with typical European forest terrain; it reflected the assumption that the threat from hostile fighter aircraft outweighed that from ground fire. After the 1991 Gulf War, the threat from ground fire was deemed more pressing than the air-to-air threat, leading to the "Compass Ghost" scheme, with darker gray on top and lighter gray on the underside of the aircraft. Many A-10s also had a false canopy painted in dark gray on the underside, just behind the gun; this form of automimicry is an attempt to confuse the enemy as to aircraft attitude and maneuver direction. Many A-10s feature nose art, such as shark mouths or warthog heads. Operational history Service entry The first unit to receive the A-10 was the 355th Tactical Training Wing, based at Davis-Monthan Air Force Base, Arizona, in March 1976. The first unit to achieve initial operating capability was the 354th Tactical Fighter Wing at Myrtle Beach Air Force Base, South Carolina, in October 1977. A-10 deployments followed at bases both at home and abroad, including England AFB, Louisiana; Eielson AFB, Alaska; Osan Air Base, South Korea; and RAF Bentwaters/RAF Woodbridge, England.
The 81st TFW at RAF Bentwaters/RAF Woodbridge operated rotating detachments of A-10s at four bases in Germany known as Forward Operating Locations (FOLs): Leipheim, Sembach Air Base, Nörvenich Air Base, and RAF Ahlhorn. A-10s were initially an unwelcome addition for many in the USAF; most pilots did not want to switch to the type, as fighter pilots traditionally favored speed and appearance. In 1987, many A-10s were shifted to the forward air control (FAC) role and redesignated OA-10. In the FAC role, the OA-10 is typically equipped with up to six pods of 2.75-inch (70 mm) Hydra rockets, usually with smoke or white phosphorus warheads used for target marking. OA-10s are physically unchanged and remain fully combat capable despite the redesignation. The 23rd TFW's A-10s were deployed to Bridgetown, Barbados, during Operation Urgent Fury, the 1983 American invasion of Grenada. They provided air cover for the U.S. Marine Corps landings on the island of Carriacou in late October 1983, but did not fire their weapons, as no resistance was met. Gulf War and Balkans The A-10 was used in combat for the first time during the Gulf War in 1991, with 132 aircraft deployed. A-10s shot down two Iraqi helicopters with the GAU-8 cannon; the first was shot down by Captain Robert Swain over Kuwait on 6 February 1991, the A-10's first air-to-air victory. Four A-10s were shot down during the war by surface-to-air missiles, and eleven more were hit by anti-aircraft artillery rounds. Another two battle-damaged A-10s and OA-10As returned to base but were written off, some having sustained additional damage in crash landings. At the beginning of the war, A-10s flew missions against the Iraqi Republican Guard, but due to heavy attrition, from 15 February they were restricted to targets within 20 nautical miles (37 km) of the southern border. A-10s also flew missions hunting Iraqi Scud missiles. The A-10 had a mission capable rate of 95.7 percent, flew 8,100 sorties, and launched 90 percent of the AGM-65 Maverick missiles fired in the conflict. Shortly after the Gulf War, the USAF abandoned the idea of replacing the A-10 with a CAS version of the F-16. A-10s fired approximately 10,000 30 mm rounds in Bosnia and Herzegovina in 1994–95. Following the seizure of heavy weapons by Bosnian Serbs from a warehouse in Ilidža, multiple sorties were launched to locate and destroy the captured equipment; on 5 August 1994, two A-10s located and strafed an anti-tank vehicle, after which the Serbs agreed to return the remaining heavy weapons. In August 1995, NATO launched an offensive called Operation Deliberate Force, during which A-10s flew CAS missions, attacking Bosnian Serb artillery and positions. In late September, A-10s began flying patrols again. A-10s returned to the Balkan region as part of Operation Allied Force in Kosovo beginning in March 1999, when they escorted and supported search and rescue helicopters in finding a downed F-117 pilot. Though initially deployed to support search and rescue missions, the A-10s gradually received more ground attack missions. The A-10's first successful attack in Operation Allied Force came on 6 April 1999, and A-10s remained in action until the end of combat in June 1999. Afghanistan, Iraq, Libya, and recent deployments A-10s did not take part in the initial stages of the 2001 invasion of Afghanistan. Beginning in March 2002, A-10 squadrons were deployed to Pakistan and Bagram Air Base, Afghanistan, for Operation Anaconda, the campaign against Taliban and Al-Qaeda forces.
Afterward, they remained in-country, fighting Taliban and Al-Qaeda remnants. Operation Iraqi Freedom began on 20 March 2003, with sixty OA-10/A-10s taking part in early combat. On 30 April 2003, United States Air Forces Central Command issued Operation Iraqi Freedom: By the Numbers, a declassified report about the aerial campaign in the conflict. During the initial invasion of Iraq, A-10s had a mission capable rate of 85 percent and fired 311,597 rounds of 30 mm ammunition. The type also flew 32 missions to airdrop propaganda leaflets. A single A-10 was shot down near Baghdad International Airport by Iraqi fire late in the campaign. In September 2007, the A-10C with the Precision Engagement upgrade reached initial operating capability. The A-10C first deployed to Iraq in 2007 with the 104th Fighter Squadron of the Maryland Air National Guard; its digital avionics and communications systems greatly reduced the time needed to acquire and attack CAS targets. A-10s flew 32 percent of combat sorties in Operation Iraqi Freedom and Operation Enduring Freedom, ranging from 27,800 to 34,500 sorties annually between 2009 and 2012. In the first half of 2013, they flew 11,189 sorties in Afghanistan. From the start of 2006 to October 2013, A-10s conducted 19 percent of CAS missions in Iraq and Afghanistan, more than the F-15E Strike Eagle or B-1B Lancer, but less than the 33 percent flown by F-16s. In March 2011, six A-10s were deployed as part of Operation Odyssey Dawn, the coalition intervention in Libya, where they participated in attacks on Libyan ground forces. The USAF 122nd Fighter Wing announced it would deploy to the Middle East in October 2014 with 12 A-10s. Although the deployment had been planned a year in advance in a support role, the timing coincided with the ongoing Operation Inherent Resolve against ISIL militants. From mid-November, U.S. commanders began sending A-10s to hit IS targets in central and northwestern Iraq on an almost daily basis. Over a two-month period, A-10s flew 11 percent of all USAF sorties since the start of operations in August 2014. On 15 November 2015, two days after the ISIL attacks in Paris, A-10s and AC-130s destroyed a convoy of over 100 ISIL-operated oil tanker trucks in Syria, part of an intensification of the U.S.-led intervention against ISIL called Operation Tidal Wave II (named after Operation Tidal Wave, a failed World War II raid on German-controlled oil fields), in an attempt to cut off oil smuggling as a source of funds for the group. The A-10 was involved in the deaths of 35 Afghan civilians between 2010 and 2015, more than any other U.S. military aircraft, and was also involved in the deaths of ten U.S. troops in four friendly fire incidents between 2001 and 2015. These incidents have been assessed as "inconclusive and statistically insignificant" in terms of the plane's capability. On 19 January 2018, 12 A-10s from the 303d Expeditionary Fighter Squadron were deployed to Kandahar Airfield, Afghanistan, to provide CAS, marking the first time in more than three years that A-10s had been deployed to Afghanistan. On 29 November and 3 December 2024, USAF A-10s struck targets in Syria to defend US forces in eastern Syria amid the ongoing Syrian civil war; the USAF said the strikes destroyed vehicles, mortars, and a T-64 tank. Concurrent with the fall of the Assad regime on 8 December, A-10s participated alongside B-52s and F-15Es in what the USAF said were "dozens" of airstrikes against over 75 ISIS targets.
The strikes were intended to prevent ISIS from benefitting from the political upheaval in Syria. Future The A-10's future remains a subject of debate. In 2007, the USAF expected it to remain in service until 2028 and possibly later, when it would likely be replaced by the Lockheed Martin F-35 Lightning II. Winslow Wheeler, director of the Straus Military Reform Project at the Project On Government Oversight and a critic of this plan, said that replacing the A-10 with the F-35 would be a "giant leap backwards" given the A-10's performance and the F-35's high costs. In 2012, the USAF considered the F-35B STOVL variant as a replacement CAS aircraft, but concluded that it could not generate sufficient sorties. In August 2013, Congress and the USAF examined various proposals, including the F-35 and the MQ-9 Reaper unmanned aerial vehicle, for filling the A-10's role. Proponents state that the A-10's armor and cannon are superior to aircraft such as the F-35 for ground attack, that guided munitions could be jammed, and that ground commanders commonly request A-10 support. In the USAF's FY 2015 budget, the service considered retiring the A-10 and other single-mission aircraft in favor of multi-mission aircraft; cutting a whole fleet and its infrastructure was seen as the only method of achieving major savings. The U.S. Army had expressed interest in obtaining some A-10s were the USAF to retire them, but later stated there was "no chance" of that happening. The USAF stated that retirement would save $3.7 billion from 2015 to 2019. Guided munitions allow more aircraft to perform CAS duties and reduce the need for specialized aircraft; since 2001, multirole aircraft and bombers have performed 80 percent of operational CAS missions. The USAF also said that the A-10 was increasingly vulnerable to modern anti-aircraft weapons, but the Army replied that the aircraft had proved invaluable due to its versatile weapons loads, psychological impact, and limited logistics needs. In January 2015, USAF officials told lawmakers that it would take 15 years to fully develop a new attack aircraft to replace the A-10; that year, General Herbert J. Carlisle, the head of Air Combat Command, stated that a follow-on weapon system for the A-10 might need development. The USAF planned for F-16s and F-15Es to take up CAS sorties initially, with the F-35A taking over once sufficient numbers became operationally available over the following decade. In July 2015, Boeing held initial discussions on the prospect of selling retired or stored A-10s in near-flyaway condition to international customers; however, the USAF stated that it would not permit any to be sold. Plans to develop a replacement aircraft were announced by US Air Combat Command in August 2015. In 2016, the USAF began studying future CAS aircraft to succeed the A-10 in low-intensity "permissive conflicts" such as counterterrorism and regional stability operations, noting that the F-35 would be too expensive to operate in day-to-day roles. Various platforms were considered, ranging from the low-end AT-6 Wolverine and A-29 Super Tucano turboprops and the Textron AirLand Scorpion, as basic off-the-shelf options, to sophisticated clean-sheet attack aircraft or "AT-X" derivatives of the T-X next-generation trainer as wholly new attack platforms. In January 2016, the USAF announced it was "indefinitely freezing" plans to retire the A-10.
Beyond congressional opposition, its use in anti-ISIS operations, deployments to Eastern Europe in response to Russia's military intervention in Ukraine, and the reevaluation of F-35 numbers necessitated its retention. In February 2016, the USAF deferred the final retirement date until 2022, with F-35s replacing the A-10 on a squadron-by-squadron basis. In October 2016, the USAF Materiel Command brought the depot maintenance line back to full capacity in preparation for re-winging the fleet. In June 2017, it was announced that the A-10 would be retained indefinitely. The 2022 Russian invasion of Ukraine led some observers to push for A-10s to be loaned to Ukraine, while critics noted the diplomatic and tactical complications involved. In an interview in December 2022, Ukrainian Defense Minister Oleksii Reznikov said that in late March he had asked U.S. Secretary of Defense Lloyd Austin for 100 surplus A-10s, noting their value against Russian tank columns. However, Austin reportedly told Reznikov that the plan was "impossible", and that the "old-fashioned and slow" A-10 would be a "squeaky target" for Russian air defenses. For many years, congressional opposition prevented the USAF from retiring the A-10. However, the Air Force's plan to divest 21 A-10s gained congressional approval in the 2023 National Defense Authorization Act (NDAA); the retired A-10s at Fort Wayne will be replaced by an equal number of F-16s. The 2024 NDAA is expected to retire an additional 42 aircraft, with Air Force Chief of Staff Charles Brown expecting all A-10s to be retired by 2028 or 2029. However, Congress would pause further cuts unless the Air Force demonstrates how other aircraft can fulfill the close air support missions currently undertaken by the A-10. According to Dan Grazier of the Project On Government Oversight, the Air Force is ill-prepared for this transition because it requires no close air support training for its F-35 pilots, despite the F-35 being advertised as the main replacement for the A-10. Other uses On 25 March 2010, an A-10 conducted the first flight of an aircraft with all engines powered by a biofuel blend, a 1:1 mixture of JP-8 and camelina-based fuel. On 28 June 2012, the A-10 became the first aircraft to fly using a new fuel blend derived from alcohol; known as ATJ (alcohol-to-jet), the fuel is cellulosic-based and can be produced from wood, paper, grass, or any cellulose-based material, which is fermented into alcohols before being hydro-processed into aviation fuel. ATJ is the third alternative fuel to be evaluated by the USAF as a replacement for petroleum-derived JP-8; previous types were a synthetic paraffinic kerosene derived from coal and natural gas, and a biomass fuel derived from plant oils and animal fats known as Hydroprocessed Renewable Jet. In 2011, the National Science Foundation granted $11 million to modify an A-10 for weather research by CIRPAS at the U.S. Naval Postgraduate School, in collaboration with scientists from the South Dakota School of Mines & Technology (SDSM&T), replacing SDSM&T's retired North American T-28 Trojan. In 2018, this plan was found to be too risky due to the costly modifications required, and the program was canceled. Variants YA-10A Pre-production variant. 12 were built. A-10A Single-seat close air support, ground-attack production version. OA-10A A-10As used for airborne forward air control. YA-10B Night/Adverse Weather (N/AW) Two-seat experimental prototype, for work at night and in bad weather.
The one YA-10B prototype was converted from an A-10A. A-10C A-10As updated under the incremental Precision Engagement (PE) program. A-10PCAS Proposed unmanned version developed by Raytheon and Aurora Flight Sciences as part of DARPA's Persistent Close Air Support program. The PCAS program eventually dropped the idea of using an optionally manned A-10. SPA-10 Proposed by the South Dakota School of Mines and Technology to replace its North American T-28 Trojan thunderstorm penetration aircraft. The A-10 would have its military engines, avionics, and oxygen system replaced by civilian versions. The engines and airframe would receive protection from hail, and the GAU-8 Avenger would be replaced with ballast or scientific instruments. The project was canceled after partial modification of a single A-10C. Operators The A-10 has been flown exclusively by the United States Air Force and its Air Reserve components, the Air Force Reserve Command (AFRC) and the Air National Guard (ANG). A total of 282 A-10C aircraft are reported as operational, divided as follows: 141 USAF, 55 AFRC, and 86 ANG. United States Air Force (USAF) Air Force Materiel Command (AFMC) 514th Flight Test Squadron (Hill AFB, Utah) (1993–present) 23rd Wing 74th Fighter Squadron (Moody AFB, Georgia) (1980–1992, 1996–present) 75th Fighter Squadron (Moody AFB, Georgia) (1980–1991, 1992–present) 51st Fighter Wing 25th Fighter Squadron (Osan AFB, South Korea) (1982–1989, 1993–present) 53rd Wing 422d Test and Evaluation Squadron (Nellis AFB, Nevada) (1977–present) 85th Test and Evaluation Squadron (Eglin AFB, Florida) (1977–present) 57th Wing 66th Weapons Squadron (Nellis AFB, Nevada) (1977–1981, 2003–present) 96th Test Wing 40th Flight Test Squadron (Eglin AFB, Florida) (1982–present) 124th Fighter Wing (Idaho ANG) 190th Fighter Squadron (Gowen Field ANGB, Idaho) (1996–present) 127th Wing (Michigan ANG) 107th Fighter Squadron (Selfridge ANGB, Michigan) (2008–present) 175th Wing (Maryland ANG) 104th Fighter Squadron (Warfield ANGB, Maryland) (1979–present) 355th Fighter Wing 357th Fighter Squadron (Davis-Monthan AFB, Arizona) (1979–present) 442nd Fighter Wing (AFRC) 303d Fighter Squadron (Whiteman AFB, Missouri) (1982–present) 476th Fighter Group (AFRC) 76th Fighter Squadron (Moody AFB, Georgia) (1981–1992, 2009–present) 495th Fighter Group 358th Fighter Squadron (Whiteman AFB, Missouri) (1979–2014, 2015–present) 924th Fighter Group (AFRC) 45th Fighter Squadron (Davis-Monthan AFB, Arizona) (1981–1994, 2009–present) 47th Fighter Squadron (Davis-Monthan AFB, Arizona) (1980–present) Former squadrons 18th Tactical Fighter Squadron (1982–1991) 23rd Tactical Air Support Squadron (1987–1991) (OA-10 unit) 55th Tactical Fighter Squadron (1994–1996) 70th Fighter Squadron (1995–2000) 78th Tactical Fighter Squadron (1979–1992) 81st Fighter Squadron (1994–2013) 91st Tactical Fighter Squadron (1978–1992) 92nd Tactical Fighter Squadron (1978–1993) 103rd Fighter Squadron (Pennsylvania ANG) (1988–2011) (OA-10 unit) 118th Fighter Squadron (Connecticut ANG) (1979–2008) 131st Fighter Squadron (Massachusetts ANG) (1979–2007) 138th Fighter Squadron (New York ANG) (1979–1989) 163rd Fighter Squadron (Indiana ANG) (2010–2023) 172nd Fighter Squadron (Michigan ANG) (1991–2009) 176th Tactical Fighter Squadron (Wisconsin ANG) (1981–1993) 184th Fighter Squadron (Arkansas ANG) (2007–2014) 353rd Tactical Fighter Squadron (1978–1992) 354th Fighter Squadron (Davis-Monthan AFB, Arizona) (1979–1982, 1991–2024) 355th Tactical Fighter Squadron (1978–1992, 1993–2007) 356th Tactical Fighter
Squadron (1977–1992) 509th Tactical Fighter Squadron (1979–1992) 510th Tactical Fighter Squadron (1979–1994) 511th Tactical Fighter Squadron (1980–1992) 706th Fighter Squadron (1982–1992, 1997–2007) Notable incidents On 8 December 1988, an A-10A of the U.S. Air Forces in Europe crashed into a residential area in the city of Remscheid, West Germany, striking the upper floor of an apartment complex. The pilot and six other people were killed, and fifty others were injured, many of them seriously. The cause of the accident was attributed to spatial disorientation after both the mishap aircraft and its flight lead encountered adverse weather conditions unsuitable for visual flying. The number of cancer cases in the vicinity of the accident rose disproportionately in the following years, raising the possibility that the aircraft may have been loaded with ammunition containing depleted uranium, contrary to U.S. statements. On 2 April 1997, Captain Craig D. Button was piloting a USAF A-10 when he inexplicably flew hundreds of miles off course without radio contact, appeared to maneuver purposefully, and did not attempt to eject before the crash. His death is regarded as a suicide because no other hypothesis explains the events. The incident caused widespread public speculation about Button's intentions and whereabouts until the crash site was found three weeks later. The aircraft carried live bombs, which have not been recovered. On 28 March 2003, British Lance-Corporal of Horse Matty Hull was killed, and five others were wounded, when U.S. A-10 aircraft of the 190th Fighter Squadron attacked their vehicles in the Blues and Royals friendly fire incident. Aircraft on display Germany A-10A 77-0264 – Spangdahlem AB, Bitburg. South Korea A-10A 76-0515 – Osan AB. United Kingdom A-10A 77-0259 – American Air Museum at Imperial War Museum Duxford 80-0219 – Bentwaters Cold War Museum United States YA-10A 71-1370 – Joint Base Langley-Eustis (Langley AFB), Hampton, Virginia. YA-10B 73-1664 – Air Force Flight Test Center Museum, Edwards AFB, California A-10A 73-1666 – Hill Aerospace Museum, Hill AFB, Utah 73-1667 – Flying Tiger Heritage Park at the former England AFB, Louisiana (repainted as 73-3667). 75-0263 – Empire State Aerosciences Museum, Glenville, New York 75-0270 – McChord Air Museum, McChord AFB, Washington 75-0293 – Wings of Eagles Discovery Center, Elmira, New York 75-0288 – Air Force Armament Museum, Eglin AFB, Florida 75-0289 – Heritage Park, Eielson AFB, Alaska. 75-0298 – Pima Air & Space Museum (adjacent to Davis-Monthan AFB), Tucson, Arizona 75-0305 – Museum of Aviation, Robins AFB, Warner Robins, Georgia 75-0308 – Moody Heritage Park, Moody AFB, Valdosta, Georgia. 75-0309 – Shaw AFB, Sumter, South Carolina. Marked as AF Ser. No. 81-0964, assigned to the 55 FS from 1994 to 1996. The represented aircraft was credited with downing an Iraqi Mi-8 Hip helicopter on 15 February 1991 while assigned to the 511 TFS. 76-0516 – Wings of Freedom Aviation Museum at the former NAS Willow Grove, Horsham, Pennsylvania 76-0530 – Whiteman AFB, Missouri 76-0535 – Cradle of Aviation, Garden City, New York 76-0540 – Aerospace Museum of California, McClellan Airport (former McClellan AFB), Sacramento, California 77-0199 – Stafford Air & Space Museum, Weatherford, Oklahoma. 77-0205 – USAF Academy collection, Colorado Springs, Colorado. 77-0228 – Grissom Air Museum, Grissom ARB (former Grissom AFB), Peru, Indiana 77-0244 – Wisconsin Air National Guard Museum, Volk Field ANGB, Wisconsin.
77-0252 – Cradle of Aviation, Garden City, New York (nose section only) 78-0681 – National Museum of the United States Air Force, Wright-Patterson AFB, Dayton, Ohio 78-0687 – Don F. Pratt Memorial Museum, Fort Campbell, Kentucky. 79-0097 – Warbird Park, former Myrtle Beach Air Force Base, South Carolina. 79-0100 – Barnes Air National Guard Base, Westfield, Massachusetts. 79-0103 – Bradley Air National Guard Base, Windsor Locks, Connecticut. 79-0116 – Warrior Park, Davis-Monthan AFB, Tucson, Arizona. 79-0173 – New England Air Museum, Windsor Locks, Connecticut 79-0195 – Russell Military Museum, Zion, Illinois 80-0168 – Fort Wayne Air National Guard Base, Fort Wayne, Indiana. 80-0247 – American Airpower Museum, Republic Airport, Farmingdale, New York 80-0708 – Selfridge Military Air Museum, Selfridge Air National Guard Base, Harrison Township, Michigan 81-0987 – Seymour Johnson Air Force Base, Goldsboro, North Carolina Nicknames The A-10 Thunderbolt II received its popular nickname "Warthog" from the pilots and crews of the USAF attack squadrons who flew and maintained it. The A-10 is the last of Republic's jet attack aircraft to serve with the USAF. The Republic F-84 Thunderjet was nicknamed the "Hog", the F-84F Thunderstreak the "Superhog", and the Republic F-105 Thunderchief the "Ultra Hog". The saying "Go Ugly Early" has been associated with the aircraft, referring to calling in the A-10 early to support troops in ground combat.
Technology
Specific aircraft
null
4543466
https://en.wikipedia.org/wiki/Future%20Air%20Navigation%20System
Future Air Navigation System
The Future Air Navigation System (FANS) is an avionics system which provides direct data link communication between the pilot and the air traffic controller. The communications include air traffic control clearances, pilot requests, and position reporting. In FANS-B-equipped Airbus A320 family aircraft, an Air Traffic Services Unit (ATSU) and a VHF Data Link radio (VDR3) in the avionics rack, and two data link control and display units (DCDUs) in the cockpit, enable the flight crew to read and answer the controller–pilot data link communications (CPDLC) messages received from the ground. Overview of FANS The world's air traffic control system still uses components defined in the 1940s, following the 1944 meeting in Chicago which launched the creation of the International Civil Aviation Organization (ICAO). This traditional ATC system uses analog radio systems for aircraft communication, navigation, and surveillance (CNS). Air traffic control's ability to monitor aircraft was being rapidly outpaced by the growth of flight as a mode of travel. In an effort to improve aviation communication, navigation, surveillance, and air traffic management, ICAO standards for a future system were created. This integrated system is known as the Future Air Navigation System (FANS) and allows controllers to play a more passive monitoring role through the use of increased automation and satellite-based navigation. In 1983, ICAO established the special committee on the Future Air Navigation System (FANS), charged with developing the operational concepts for the future of air traffic management (ATM). The FANS report was published in 1988 and laid the basis for the industry's future strategy for ATM through digital CNS using satellites and data links. Work then started on the development of the technical standards needed to realize the FANS concept. In the early 1990s, the Boeing Company announced a first-generation FANS product known as FANS-1. This was based on the early ICAO technical work for automatic dependent surveillance (ADS) and controller–pilot data link communications (CPDLC), and implemented as a software package on the flight management computer of the Boeing 747-400. It used existing satellite-based ACARS communications (the Inmarsat Data-2 service) and was targeted at operations in the South Pacific Oceanic region. The deployment of FANS-1 was originally justified by improving route choice and thereby reducing fuel burn. A similar product (FANS-A) was later developed by Airbus for the A340 and A330, and Boeing extended the range of aircraft supported to include the Boeing 777 and 767. Together, the two products are collectively known as FANS-1/A. The main industry standards describing the operation of the FANS-1/A products are ARINC 622 and EUROCAE ED-100/RTCA DO-258. Both the new Airbus A380 and Boeing 787 have FANS-1/A capability. ATC services are now provided to FANS-1/A-equipped aircraft in other oceanic airspaces, such as the North Atlantic. However, although many of FANS-1/A's known deficiencies with respect to its use in high-density airspace were addressed in later versions of the product (FANS-1/A+), it has never been fully adopted for use in continental airspace. ICAO's work continued after FANS-1 was announced, further developing the CNS/ATM concepts.
The ICAO standard for CPDLC using the Aeronautical Telecommunications Network (ATN) is preferred for continental airspace and is currently being deployed in the core European airspace by the EUROCONTROL Agency under the LINK2000+ Programme. Mandatory carriage of the ICAO compliant system is now the subject of an Implementing Rule (for aircraft flying above FL280) issued by the European Commission. This rule accommodates the use of FANS-1/A by long-haul aircraft. All other airspace users must be ICAO compliant. Several vendors provide ICAO ATN/CPDLC compliant products. The Airbus ICAO compliant product for the A320 family is known as FANS-B. Rockwell Collins, Honeywell and Spectralux provide ICAO compliant products for Boeing aircraft, such as the Boeing 737 and 767, and the Boeing 787 will also support ICAO ATN/CPDLC compliant communications. The main standards describing the operation of ICAO compliant products are the ICAO Technical Manual, ICAO Docs 9705 and 9896, Eurocae ED-110B/RTCA DO-280B and Eurocae ED-120/RTCA DO-290. Background Aircraft are operated using two major methods: positive control and procedural control. Positive control is used in areas which have radar and so is commonly referred to as radar control. The controller "sees" the airplanes in the control area and uses VHF voice to provide instructions to the flight crews to ensure separation. Because the position of the aircraft is updated frequently and VHF voice contact is timely, separation standards (the distance by which one aircraft must be separated from another) are smaller. This is because the air traffic controller can recognize problems and issue corrective directions to multiple airplanes in a timely fashion. Separation standards determine the number of airplanes which can occupy a certain volume of airspace. Procedural control is used in areas (oceanic or land) which do not have radar. The FANS concept was developed to improve the safety and efficiency of airplanes operating under procedural control. This method uses time-based procedures to keep aircraft separated. The separation standard is determined by the accuracy of the reported positions, the frequency of position reports, and the timeliness of communication with respect to intervention. Non-FANS procedural separation uses inertial navigation systems (INS) for position, flight crew voice reports of position (and time of next waypoint), and high frequency (HF) radio for communication. The INS systems have error introduced by drifting after initial alignment. This error can approach . HF radio communication involves contacting an HF operator who then transcribes the message and sends it to the appropriate ATC service provider. Responses from the ATC service provider go to the HF radio operator, who contacts the airplane. The voice quality of the connection is often poor, leading to repeated messages. The HF radio operator can also be saturated with requests for communication. This leads to procedures which keep airplanes separated by as much as laterally, 10 minutes in trail, and in altitude. These procedures reduce the number of airplanes which can operate in a given airspace. If market demand pushes airlines to operate at the same time on a given route, this can lead to airspace congestion, which is handled by delaying departures or separating the airplanes by altitude. The latter can lead to very inefficient operation due to longer flying times and increased fuel burn.
ATC using FANS The FANS concept involves improvements to communication, navigation and surveillance (CNS). Communication improvements This involved a transition from voice communications to digital communications. Specifically, ACARS was used as the communication medium. This allowed other application improvements. An application known as controller–pilot data link communications (CPDLC) was hosted on the airplane. This allows the flight crew to select from a menu of standard ATC communications, send the message, and receive a response. A peer application exists on the ground for the air traffic controller, who can select from a set of messages and send communications to the airplane. The flight crew will respond with a WILCO, STANDBY, or REJECT. The current standard for message delivery is under 60 seconds one way. Navigation improvements This involves a transition from inertial navigation to satellite navigation using GNSS satellites. This also introduced the concept of actual navigation performance (ANP). Previously, flight crews would be notified only of the system being used to calculate the position (radios, or inertial systems alone). Because of the deterministic nature of the satellites (constellation geometry), the navigation systems can calculate the worst-case error based on the number of satellites tuned and the geometry of those satellites. (They can also characterize the potential errors in other navigation modes.) So the improvement not only provides the airplane with a much more accurate position, it also provides an alert to the flight crew should the actual navigation performance not satisfy the required navigation performance (RNP). Surveillance improvements This involves the transition from voice reports (based on inertial position) to automatic digital reports. The application is known as ADS-C (automatic dependent surveillance – contract). In this system, an air traffic controller can set up a "contract" (software arrangement) with the airplane's navigational system to automatically send a position report on a specified periodic basis – every 5 minutes, for example. The controller can also set up a deviation contract, which would automatically send a position report if a certain lateral deviation were exceeded. These contracts are set up between ATC and the aircraft's systems, so the flight crew has no workload associated with set-up; a toy model of these exchanges is sketched below. FANS procedural control The improvements to CNS allow new procedures which reduce the separation standards for FANS controlled airspace. In the South Pacific, the target is 30/30 (lateral and in trail). This greatly increases airspace capacity. History ICAO The International Civil Aviation Organization (ICAO) first developed the high-level concepts, starting with the initiation of the Special Committee on Future Air Navigation Systems in 1983. The final report was released in 1991, with a plan released in 1993. Pacific engineering trials FANS as we know it today had its beginning in 1991 with the Pacific Engineering Trials (PET). During these trials, aircraft carried applications in their ACARS units which would automatically report positions. These trials demonstrated the potential benefits to the airlines and airspace managers. Implementation United Airlines, Cathay Pacific, Qantas, and Air New Zealand approached the Boeing Company in 1993 and requested that Boeing support the development of a FANS capability for the 747-400 airplane.
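The contract-based logic of CPDLC and ADS-C described above can be illustrated with a short sketch. The Python below models a simplified ADS-C endpoint holding one periodic and one lateral-deviation contract, an ANP-versus-RNP alert check, and the three permitted CPDLC crew responses; every class, function, and parameter name here is an invented illustration under those assumptions, not part of any real avionics API.

from dataclasses import dataclass

WILCO, STANDBY, REJECT = "WILCO", "STANDBY", "REJECT"  # the three CPDLC crew responses

@dataclass
class PeriodicContract:
    interval_s: float            # e.g. 300.0 = report every 5 minutes
    next_due_s: float = 0.0

    def due(self, now_s: float) -> bool:
        return now_s >= self.next_due_s

    def mark_sent(self, now_s: float) -> None:
        self.next_due_s = now_s + self.interval_s

@dataclass
class DeviationContract:
    max_lateral_nm: float        # report if cross-track error exceeds this

    def triggered(self, cross_track_nm: float) -> bool:
        return abs(cross_track_nm) > self.max_lateral_nm

def ads_c_reports(now_s, cross_track_nm, periodic, deviation):
    """Collect the automatic position reports owed to ATC at this instant."""
    reports = []
    if periodic.due(now_s):
        reports.append("periodic position report")
        periodic.mark_sent(now_s)
    if deviation.triggered(cross_track_nm):
        reports.append("lateral deviation report")
    return reports

def rnp_alert(anp_nm: float, rnp_nm: float) -> bool:
    """Alert the crew when actual navigation performance exceeds the RNP bound."""
    return anp_nm > rnp_nm

def cpdlc_response(accepted: bool, needs_time: bool) -> str:
    """Crew answer to an uplinked clearance: WILCO, STANDBY, or REJECT."""
    if needs_time:
        return STANDBY
    return WILCO if accepted else REJECT

# Example: a 5-minute periodic contract and a 2 NM deviation contract.
periodic = PeriodicContract(interval_s=300.0)
deviation = DeviationContract(max_lateral_nm=2.0)
print(ads_c_reports(now_s=300.0, cross_track_nm=2.5, periodic=periodic, deviation=deviation))
print(cpdlc_response(accepted=True, needs_time=False))

Real ADS-C also supports event and demand contracts and richer report contents; the sketch keeps only the two contract types the text describes.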
Boeing worked with the airlines to develop a standard which would control the interface between FANS-capable airplanes and air traffic service providers. The development of the FANS-capable aircraft systems proceeded simultaneously with the ATC ground system improvements necessary to make it work. These improvements were certified (using a Qantas airplane, VH-OJQ) on June 20, 1995. Both Boeing and Airbus continue to further develop their FANS implementations, Boeing on FANS-2 and Airbus on FANS-B. In the interim, Airbus came out with some enhancements to FANS-A, now referred to as FANS-A+. Various ground systems have been built, mainly by ATC organizations, to interoperate with FANS-1/A. FANS interoperability team The FANS interoperability team (FIT) was initiated in the South Pacific in 1998. The purpose of this team is to monitor the performance of the end-to-end system, identify problems, assign problems and ensure they are solved. The members include airframe manufacturers, avionics suppliers, communication service providers, and air navigation service providers. Since then, other regions have initiated FIT groups. Service providers Customers that operate aircraft need to get their FANS 1/A capable aircraft connected to both the ATN (Aeronautical Telecommunication Network) and to the Iridium and/or Inmarsat satellite network. Commercial aircraft operators typically get their long-haul fleet connected and have dedicated personnel to monitor and maintain the satellite and ground link, while business aircraft and military aircraft operators contact companies like AirSatOne to commission the system for the first time, conduct functionality testing and provide ongoing support. AirSatOne provides FANS 1/A services through its Flight Deck Connect portfolio of products. Flight Deck Connect includes a connection to the Iridium and/or Inmarsat satellites for FANS 1/A (via Datalink) and Safety Voice Services, along with ancillary services (AFIS/ACARS) such as weather information, engine/airframe health and fault reports. Operational approval Some service providers, such as AirSatOne and ARINC, offer FANS 1/A testing services. When an aircraft is outfitted with FANS 1/A equipment, either through the Type Certificate or STC process, the equipment must demonstrate compliance with AC 20-140B for operational approval. As an example, AirSatOne offers testing through the satellite and ATN network to support FANS 1/A functionality in accordance with RTCA DO-258A/ED-100A and provides test reports to meet the requirements of RTCA DO-258A/ED-100A, RTCA DO-306/ED-122 and FAA Advisory Circular AC 20-140B. AirSatOne also provides first-time system commissioning on each aircraft, troubleshooting, and pre-flight maintenance checks to test FANS 1/A functionality either monthly or prior to flight in the FANS environment. Milestones On June 20, 1995, a Qantas B747-400 (VH-OJQ) became the first aircraft to certify the Rolls-Royce FANS-1 package by remote type certification (RTC) in Sydney, Australia. It was followed by the first commercial flight from Sydney to Los Angeles on June 21. Subsequently, Air New Zealand certified the General Electric FANS-1 package, and United Airlines certified the Pratt & Whitney FANS-1 package. On May 24, 2004, a Boeing Business Jet completed the first North Atlantic flight by a business jet equipped with FANS. The airplane touched down at the European Business Aviation Convention and Exhibition (EBACE) in Geneva, Switzerland.
The non-stop, eight-hour flight originating from Gary/Chicago International Airport in Gary, Indiana, was part of a North Atlantic Traffic trial conducted by the FANS Central Monitoring Agency (FCMA). In August 2010, Aegean Airlines became the first airline to commit to upgrading its Airbus A320 fleet with a FANS-B+ retrofit system offered by Airbus.
Technology
Aircraft components
null
4544079
https://en.wikipedia.org/wiki/Songket
Songket
Songket or sungkit is a tenun fabric that belongs to the brocade family of textiles of Brunei, Indonesia, and Malaysia. It is hand-woven in silk or cotton, and intricately patterned with gold or silver threads. The metallic threads stand out against the background cloth to create a shimmering effect. In the weaving process the metallic threads are inserted in between the silk or cotton weft (latitudinal) threads in a technique called supplementary weft weaving. Songket is often associated with the Srivijaya Empire, regarded as the origin of the songket tradition; several popular types of songket are tied to locations that were once under Srivijaya rule, most prominently Palembang in South Sumatra, Indonesia, which is believed to have been the empire's capital. Besides Palembang, several areas in Sumatra are also renowned songket-producing locations, including areas in Minangkabau or West Sumatra such as Pandai Sikek, Silungkang, Koto Gadang, and Padang. Outside of Sumatra, songket is also produced in Bali, Lombok, Sambas, Sumba, Makassar, Sulawesi, and other areas of Indonesia. Owing to the legacy of the Srivijaya Empire, trade, and intermarriage, songket has also become popular across Maritime Southeast Asia, especially in the countries around Indonesia such as Brunei, Malaysia, and Singapore. Based on analysis of the statues at the Bumiayu temple, South Sumatra, songket appears to have been worn by the people of South Sumatra since the 8th century CE, when Srivijaya was based in Palembang. These statues were found at the Bumiayu Temple Archaeological Site, located on the downstream bank of the Lematang River, which empties into the Musi River, in Tanah Abang District, Penukal Abab Lematang Ilir Regency, approximately 120 km to the west of Palembang City. In Indonesia, five songket traditions are recognised as Intangible Cultural Heritage by the Indonesian Ministry of Education and Culture. They are the songket traditions of Palembang and Sambas, both appointed in 2013; the Pandai Sikek songket of West Sumatra, appointed in 2014; the songket tradition of Beratan, Bali, appointed in 2018; and the Silungkang songket tradition of West Sumatra, appointed in 2019. In 2021, UNESCO (United Nations Educational, Scientific and Cultural Organization) officially recognized songket as a Masterpiece of the Oral and Intangible Heritage of Humanity. Etymology The term songket is derived from the Malay word sungkit, which means "to hook". It refers to the method of songket making: to hook and pick a group of threads, and then slip the gold and silver threads through them. Another theory suggests that it was constructed from the combination of two terms, tusuk (prick) and cukit (pick), combined as sukit and modified further into songket. Some say that the word songket was derived from songka, a Palembang cap in which gold threads were first woven. The earliest confirmable written mentions of this clothing in Malay texts use sungkit rather than songket, for example the Hikayat Aceh of the 1620s and the Hikayat Banjar of the 1660s. The Malay word menyongket means 'to embroider with gold or silver threads'. Songket is a luxury product traditionally worn during ceremonial occasions as a sarong, shoulder cloth or head tie, including the tanjak, a songket headdress.
Songket was worn at the courts of the kingdoms of Sumatra, especially Srivijaya, the source and origin of Malay culture in Southeast Asia. In the era of the early kingdoms, songket was traditionally worn by the Malay royal families of Sumatra, such as the Deli Sultanate in Medan, the Serdang Sultanate, the Palembang Sultanate in Palembang and the recently restored royal house in Jambi, and by the sultanates of the Malay Peninsula such as Pattani, Kelantan and Terengganu. The fabric has even been mandated as part of the ceremonial court dress of Bruneian royalty since the time of Omar Ali Saifuddien III. Traditionally, women were the weavers of songket; in modern times, men are known to weave it as well. Songket is known by many names in vernacular Indonesian languages. Other than in Sumatra and the Malay Peninsula, it is also commonly known as songket in Bali and Java, while it is known as songke in Manggarai, Flores, and Bima in Sumbawa. The Karo Batak of North Sumatra call it jongkit. People in Ternate, Maluku, call it suje, the Buginese in South Sulawesi call it subbi' and arekare', and the Iban Dayak in West Kalimantan and Sarawak call it pilih or pileh. History Songket weaving traditions are historically associated with the Srivijaya empire, a wealthy 7th- to 13th-century maritime trading empire based in Sumatra. Palembang and the Minangkabau Pandai Sikek area are the best-known songket producers in Indonesia. According to a Palembang folk tradition that has been narrated for generations, songket originated with Chinese traders who brought silk threads, while Indian or Middle Eastern traders brought gold threads; woven together, the combination became the exquisitely shimmering golden songket. It is associated with areas of Malay settlement in Sumatra, and the production techniques could have been introduced by Indian or Arab merchants. Songket is a luxurious textile that required real gold leaf and gold threads to be hand-woven into exquisite fabrics, and it became a symbol of luxury and social status. Historically the gold mines were located in the Sumatran hinterland: Jambi and the Minangkabau Highlands. Although gold threads were found buried in the Srivijaya ruins in Sumatra, along with unpolished rubies and pieces of gold plate, there is no corroborating evidence that local weavers used gold threads as early as the 7th or early 8th century. Based on archaeological data, songket was known to the people of South Sumatra between the 8th and 9th centuries CE, as seen in the cloth motifs of ancient statues from the site of the Bumiayu temple complex in Penukal Abab Lematang Ilir Regency, South Sumatra Province, Indonesia. At that time the use of songket was reserved for the nobility, as seen from the statues, which were probably deified personifications of a king. Evidence for the existence of songket can be seen in the lepus motifs on the vest worn by Figure 1 at the Bumiayu temple complex. The lepus motif shows continuity, having been in use since the 9th century. A description of textiles reminiscent of songket can be found in a 10th-century Chinese source from the Song dynasty.
According to this Song chronicle, in 992 an envoy from She-po (Java) arrived at the Chinese court bearing many gifts, consisting of silk "woven with floral motifs made of gold threads", ivories, pearls, silk of various colours, fragrant sandalwood, cotton cloths of various colours, turtle shells, a betel nut preparation kit, a kris dagger with an exquisite hilt made of rhino horn and gold, a rattan mat with the image of a white cockatoo, and a small model of a house made of sandalwood adorned with valuable ornaments. Studies of Javanese statues dated to the Indonesian Hindu-Buddhist period between the 8th and 15th centuries provide a glimpse of the fashion of that period. These statues were decorated elaborately, including textile patterns. The details of the kain lower garment of Durga Mahisasuramardini from the 13th-century Singhasari temple near Malang show elaborately carved tassels which suggest goldwork decoration. The costume is completed with two sashes draped over the legs carved with bunga bintang or "star flower" motifs, a pattern that continues today in songket design. The precision of the stone-carved textiles suggests the designs were unlikely to be an invention of the sculptor's imagination and more likely replicated a cloth that existed at the time. Various Chinese and Arab accounts mention the presence of textiles produced within the region and emphasize the prevalence of weaving in the Malay Peninsula. According to Kelantan tradition, this weaving technique came from the north, somewhere in the Cambodia-Siam region, and expanded south into Pattani, finally reaching the Malay courts of Kelantan and Terengganu as early as the 16th century. The weaving of songket continues as a small cottage industry on the outskirts of Kota Bharu and Terengganu. However, Terengganu weavers believe that the songket weaving technique was introduced to Malaysia from India through Sumatra's Palembang and Jambi, where it probably originated during the time of Srivijaya (7th to 11th century). Nevertheless, Zani Bin Ismail put forth the argument that the origins of songket can be traced to China, subsequently spreading to Indochina, including Cambodia and Thailand. His assertion was based on the similarities observed in the handweaving looms of Terengganu, Cambodia, and Thailand. Another possible origin of songket, based on a Liang dynasty record (502–557), is the Langkasuka kingdom, an ancient kingdom in the Malay Peninsula. Its king dressed in 'rose-colored cloth with gold flowers', which could have been a songket of some kind, as red is the traditional color of songket. Documentation about the origins of songket is sketchy, but it is most likely that songket weaving was brought to Peninsular Malaysia through intermarriages between royal families, a common occurrence in the 15th century for sealing strategic alliances. Production was located in politically significant kingdoms because of the high cost of materials; the gold thread used was originally wound with real gold leaf. The songket vest with the lepus motif, as depicted on the Bumiayu temple statue, remained popular during the Islamic Palembang Sultanate period from the 16th to 19th centuries, and was limited to the upper class of society. After the collapse of the sultanate, songket began to spread among non-aristocrats. Songket as a king's dress was also mentioned in the writings of Abdullah bin Abdul Kadir in 1849.
Tradition Songket is traditionally considered an exquisite, luxurious and prestigious fabric, worn only for special occasions, religious festivals, and traditional social functions. It has become a required garment for brides and grooms at their weddings, as in the traditional wedding costumes of the Palembangese, Minangkabau and Balinese people. In Indonesian tradition, songket is a marker of social status, with certain motifs traditionally reserved for a particular station. For example, in the Palembang songket tradition, the lepus motifs were originally reserved for the bangsawan (royalty, nobles or aristocrats). Indeed, songket serves as a social marker of the wearer, even informing the wearer's marital status. In old Palembang, widows wore distinctive selendang (shoulder cloth) songket to disclose their social and marital status. There are two specific songket motifs for widows: the one for widows eligible for remarriage is called songket janda berias (dress-up widow songket), and the one for widow brides is called songket janda pengantin (widow bride songket). Today, songket is usually made from affordable materials, such as artificial gold threads made of nylon instead of pure gold threads. Nevertheless, there are a few rare songket actually made from real gold threads. These are precious textiles held as pusaka, or heirlooms, passed down for generations within a family. Today, songket is mostly worn in traditional settings, as traditional costume for weddings or other traditional ceremonies. Several efforts have been made to promote songket as a popular fabric for fashion, both locally and abroad. During the Dutch colonial era, West Sumatran songket were exhibited in the Netherlands. The Sawahlunto Songket Carnival was held in Sawahlunto, West Sumatra in August 2015. The songket carnival featured a parade and exhibition with participants from a number of songket studios across West Sumatra. The carnival, held on 28 August 2015, was recorded in the Indonesian Museum of Records for the most people wearing songket at the same time, with 17,290 people wearing Silungkang songket during the event. Several exhibitions have been held to preserve and promote the traditional art of songket making, such as the songket exhibition held in 2015 by the Jakarta Textile Museum, which showcased around 100 pieces of songket from various Indonesian provinces. Today, songket has become a source of inspiration for contemporary fashion designers who draw ideas from this traditional art. Songket Minangkabau Songket Minangkabau is a traditional songket woven cloth from West Sumatra that is an important part of cultural identity in the Minangkabau tradition. Songket is closely tied to the Minangkabau community because it has long been used as a material for traditional clothing and other core traditional crafts. There are various types of Minangkabau songket motifs and philosophies, each motif passed down from generation to generation for use in the Pepatih custom. Songket Minangkabau traces its history to Srivijaya, developing through the Sumatran kingdoms until it finally entered the Minang realm. Songket served as a means of expression: in ancient times the Minang people could not write, and instead expressed their feelings through songket, so that each songket carries a different meaning.
As the characteristic cloths of Minangkabau, the famous songket cloths in West Sumatra are Songket Pandai Sikek and Songket Silungkang. The two songkets are named after the places they come from, namely Pandai Sikek in Tanah Datar and Silungkang in Sawahlunto. Songket Minangkabau is a unique traditional art form. This weaving art is quite complicated and requires precision and perseverance in the weaving process. In addition, the ornaments or motifs of Minangkabau songket are not mere decoration. Each Minangkabau songket motif has a name and meaning, telling of the journey of Minangkabau culture and society. Songket Minangkabau motifs are displayed in the form of natural symbols, especially plants, which are rich in explicit and implied meanings. Songket motifs are often named after plants, animals or objects in the natural environment, for example Bungo Malur, Kudo-Kudo, Balapak Gadang, Ranggo Patai Pucuak, Pucuak Kelapa, and many more. The decorative motifs on the edge of the songket cloth are also named, such as Bungo Tanjung, Lintahu Ayahah, Bareh Diatua, Ula Gerang, and others. Like the motifs of batik, which are full of meaning, the Silungkang songket motifs are also studded with philosophy. The motif of Kaluak Paku (the curve of a young fern shoot) means "Before correcting others, we should look inside ourselves first", while the Ilalang Rabah motif (falling down) means "Vigilance, prudence and accuracy are the main things for a leader". The most popular and sacred motif for the Minangkabau community is the Pucuk Rebung motif, called Pucuak Rabuang in the local language, symbolizing a life that is useful throughout; it appears in the growth of bamboo shoots (bambu muda) toward maturity, reflecting the process of a human life becoming useful. Songket making Equipment and materials There are two categories of songket weaving equipment: the main weaving equipment, a loom made from a wooden or bamboo frame; and the supporting equipment, which includes a thread stretching tool, a motif making tool, and thread inserting and picking tools. The materials for making songket consist of cotton or silk threads or other fibers as the base fabric, and decoration threads of gold, silver or silk. It is believed that in ancient times real gold threads were used to create songket: the cotton threads were run along heated liquid gold, coating the cotton and creating gold thread. Today, however, because of the scarcity and expense of real gold threads, imitation gold or silver threads are commonly used instead. Technique The songket technique involves the insertion of decorative threads in between the wefts as they are woven into the warp, which is fixed to the loom. They are inserted as part of the weaving process but are not structurally necessary to the cloth. There are four types of supplementary weft weaving technique: continuous, discontinuous, inlaid and wrapped. Songket weaving is done in two stages: weaving the basic cloth with an even or plain weave, and weaving the decoration inserted into the basic cloth, a method called the "inlay weaving system", as sketched below. The shining gold, silver or silk threads are inserted and woven into the plain-weave base cloth in certain motifs, creating a shimmering golden pattern against a darker plain background. Songket weaving is traditionally done as a part-time occupation by young girls and older women in between their daily domestic chores.
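Since the two-stage inlay system can be hard to picture, here is a toy model in Python of how supplementary gold wefts sit on top of a plain ground weave; the grid, symbols and motif are invented for illustration and carry no claim about real songket drafts.

# Toy model of supplementary-weft (songket-style) patterning.
# '.' and '-' mark the plain ground weave (stage one); 'G' marks a pick
# where the gold supplementary weft floats over the warp (stage two).

MOTIF = [            # a small diamond, loosely echoing a pucuk rebung shape
    "..G..",
    ".GGG.",
    "GGGGG",
    ".GGG.",
    "..G..",
]

def weave(motif, repeats=4):
    """Print the cloth: ground weave everywhere, gold floats where the motif says."""
    for row_idx, motif_row in enumerate(motif):
        row = []
        for col_idx in range(len(motif_row) * repeats):
            if motif_row[col_idx % len(motif_row)] == "G":
                row.append("G")                      # decorative supplementary weft
            else:
                # alternating over/under of the plain ground weave
                row.append("." if (row_idx + col_idx) % 2 == 0 else "-")
        print("".join(row))

weave(MOTIF)

Removing every 'G' still leaves a complete plain weave, which is the sense in which the supplementary wefts are decorative rather than structural.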
The complicated process of songket making is believed to cultivate virtues, as it reflects the values of diligence, carefulness and patience. Patterns There are hundreds of songket motifs. In Palembang tradition, songket is inseparable from the lives of the people who wear it during important events such as births, marriages, and death. Palembang songket recognises several types of songket patterns; they are lepus, tretes, limar, tawur, bungo, and rumpak songkets. Examples of Palembang songket motifs are naga besaung, pucuk rebung, biji pare, bintang berante, bintang kayu apuy, bungo mawar, bungo melati, bungo cino, bungo jepang, bungo intan, bungo pacik, cantik manis, lepus berakam, pulir, nampan perak, tabur limar and tigo negeri. Production centers In Indonesia, songket is produced in Sumatra, Kalimantan, Bali, Sulawesi, Lombok and Sumbawa. In Sumatra the famous songket production centers is in Minangkabau Pandai Sikek in Tanah Datar Regency, and Koto Gadang in Agam Regency, also Silungkang area in Sawahlunto, West Sumatra, Jambi City, Jambi and Palembang, South Sumatra. In Bali, songket production villages can be found in Klungkung regency, especially at Sidemen and Gelgel village. The Klungkung Market is a popular spot to shop Balinese songket, as it offers wide collection of this traditional fabrics. While in the neighboring island of Lombok, the Sukarara village in Jonggat district, Central Lombok regency, is also famous for songket making. In this village, learning how to weave a good songket is an obligation for the Sasak women. Weaving songket is usually done by women during their spare time, and subsequently this traditional skill has enabled them to earn money for their family. In Malaysia production area included the east coast of the Malay Peninsula especially in the city of Kuala Terengganu, Terengganu and the Kota Bharu, Kelantan. Gallery
Technology
Weaving
null
4546155
https://en.wikipedia.org/wiki/Structural%20basin
Structural basin
A structural basin is a large-scale structural formation of rock strata formed by tectonic warping (folding) of previously flat-lying strata into a syncline fold. Structural basins are geological depressions, the inverse of domes. Elongated structural basins are a type of geological trough. Some structural basins are sedimentary basins, aggregations of sediment that filled up a depression or accumulated in an area; others were formed by tectonic events long after the sedimentary layers were deposited. Basins may appear on a geologic map as roughly circular or elliptical, with concentric layers. Because the strata dip toward the center, the exposed strata in a basin are progressively younger from the outside in, with the youngest rocks in the center (see the sketch after the list of examples below). Basins are often large in areal extent, sometimes hundreds of kilometers across. Structural basins are often important sources of coal, petroleum, and groundwater. Examples Europe Hampshire Basin, United Kingdom London Basin, United Kingdom Paris Basin, France Permian Basin, Poland, northern Germany, Denmark, the Netherlands, the North Sea, and Scotland Turgay Basin, Kazakhstan North America Canada Hudson Bay Trinidad and Tobago Southern Basin, Trinidad United States Albuquerque Basin, New Mexico Appalachian Basin, Eastern United States Big Horn Basin, Wyoming Black Warrior Basin, Alabama and Mississippi Delaware Basin, Texas and New Mexico Denver Basin, Colorado Illinois Basin, Illinois Los Angeles Basin, California Michigan Basin, Michigan North Park Basin, Colorado Paradox Basin, Utah and Colorado Permian Basin, Texas and New Mexico Piceance Basin, Colorado Powder River Basin, Wyoming and Montana Raton Basin, Colorado and New Mexico Sacramento Basin, California San Juan Basin, New Mexico and Colorado Uinta Basin, Utah Williston Basin, Montana and North Dakota Wind River Basin, Wyoming Oceania Australia Amadeus Basin Bowen Basin Cooper Basin Galilee Basin Great Artesian Basin Wilpena Pound South America Chaco Basin, Argentina, Bolivia and Paraguay Magallanes Basin, Chile Neuquén Basin, Argentina and Chile Paraná Basin, Argentina, Brazil, Paraguay and Uruguay Llanos Basin, Colombia
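The age rule stated above (younger rocks toward the center of a basin, older toward the center of a dome) can be expressed as a tiny classifier. This Python sketch is purely illustrative; the function name and the sample ages are invented, not data from any mapped basin.

def classify_structure(ages_rim_to_center):
    """Classify a concentric fold from strata ages (in Ma) read rim to center.

    Ages decreasing toward the center -> basin (youngest rocks innermost);
    ages increasing toward the center -> dome (oldest rocks innermost).
    """
    pairs = list(zip(ages_rim_to_center, ages_rim_to_center[1:]))
    if all(outer > inner for outer, inner in pairs):
        return "basin"
    if all(outer < inner for outer, inner in pairs):
        return "dome"
    return "indeterminate"

# Hypothetical traverse: 300 Ma at the rim down to 100 Ma at the center.
print(classify_structure([300, 250, 180, 100]))  # -> basin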
Physical sciences
Structural geology
Earth science
420229
https://en.wikipedia.org/wiki/Sarcomere
Sarcomere
A sarcomere (Greek σάρξ sarx "flesh", μέρος meros "part") is the smallest functional unit of striated muscle tissue. It is the repeating unit between two Z-lines. Skeletal muscles are composed of tubular muscle cells (called muscle fibers or myofibers) which are formed during embryonic myogenesis. Muscle fibers contain numerous tubular myofibrils. Myofibrils are composed of repeating sections of sarcomeres, which appear under the microscope as alternating dark and light bands. Sarcomeres are composed of long, fibrous proteins as filaments that slide past each other when a muscle contracts or relaxes. The costamere is a different component that connects the sarcomere to the sarcolemma. Two of the important proteins are myosin, which forms the thick filament, and actin, which forms the thin filament. Myosin has a long fibrous tail and a globular head that binds to actin. The myosin head also binds to ATP, which is the source of energy for muscle movement. Myosin can only bind to actin when the binding sites on actin are exposed by calcium ions. Actin molecules are bound to the Z-line, which forms the borders of the sarcomere. Other bands appear when the sarcomere is relaxed. The myofibrils of smooth muscle cells are not arranged into sarcomeres. Bands The sarcomeres give skeletal and cardiac muscle their striated appearance, which was first described by Van Leeuwenhoek. A sarcomere is defined as the segment between two neighbouring Z-lines (or Z-discs). In electron micrographs of cross-striated muscle, the Z-line (from the German "zwischen", meaning between) appears in between the I-bands as a dark line that anchors the actin myofilaments. Surrounding the Z-line is the region of the I-band (for isotropic); the I-band is the zone of thin filaments that is not superimposed by thick filaments (myosin). Following the I-band is the A-band (for anisotropic), named for its properties under a polarized light microscope. An A-band contains the entire length of a single thick filament, and contains both thick and thin filaments. Within the A-band is a paler region called the H-zone (from the German "heller", meaning brighter), named for its lighter appearance under a polarization microscope; the H-zone is the zone of the thick filaments that has no actin. Within the H-zone is a thin M-line (from the German "mittel", meaning middle), which appears in the middle of the sarcomere and is formed of cross-connecting elements of the cytoskeleton. The relationships between the proteins and the regions of the sarcomere are as follows: Actin filaments, the thin filaments, are the major component of the I-band and extend into the A-band. Myosin filaments, the thick filaments, are bipolar and extend throughout the A-band. They are cross-linked at the centre by the M-band. The giant protein titin (connectin) extends from the Z-line of the sarcomere, where it binds to the thin filament system, to the M-band, where it is thought to interact with the thick filaments. Titin (and its splice isoforms) is the biggest single highly elastic protein found in nature. It provides binding sites for numerous proteins and is thought to play an important role as a sarcomeric ruler and as a blueprint for the assembly of the sarcomere. Another giant protein, nebulin, is hypothesised to extend along the thin filaments and the entire I-band. Similar to titin, it is thought to act as a molecular ruler for thin filament assembly.
Several proteins important for the stability of the sarcomeric structure are found in the Z-line as well as in the M-band of the sarcomere. Actin filaments and titin molecules are cross-linked in the Z-disc via the Z-line protein alpha-actinin. The M-band proteins myomesin as well as C-protein crosslink the thick filament system (myosins) and the M-band part of titin (the elastic filaments). The M-line also binds creatine kinase, which facilitates the reaction of ADP and phosphocreatine into ATP and creatine. The interaction between actin and myosin filaments in the A-band of the sarcomere is responsible for muscle contraction (based on the sliding filament model). Contraction The protein tropomyosin covers the myosin-binding sites of the actin molecules in the muscle cell. For a muscle cell to contract, tropomyosin must be moved to uncover the binding sites on the actin. Calcium ions bind with troponin C molecules (which are dispersed throughout the tropomyosin protein) and alter the structure of the tropomyosin, forcing it to reveal the cross-bridge binding sites on the actin. The concentration of calcium within muscle cells is controlled by the sarcoplasmic reticulum, a unique form of endoplasmic reticulum in the sarcoplasm. Muscle cells are stimulated when a motor neuron releases the neurotransmitter acetylcholine, which travels across the neuromuscular junction (the synapse between the terminal button of the neuron and the muscle cell). Acetylcholine binds to a post-synaptic nicotinic acetylcholine receptor. A change in the receptor conformation allows an influx of sodium ions and initiation of a post-synaptic action potential. The action potential then travels along T-tubules (transverse tubules) until it reaches the sarcoplasmic reticulum. Here, the depolarized membrane activates voltage-gated L-type calcium channels present in the plasma membrane. The L-type calcium channels are in close association with ryanodine receptors present on the sarcoplasmic reticulum. The inward flow of calcium from the L-type calcium channels activates ryanodine receptors to release calcium ions from the sarcoplasmic reticulum. This mechanism is called calcium-induced calcium release (CICR). It is not understood whether the physical opening of the L-type calcium channels or the presence of calcium causes the ryanodine receptors to open. The outflow of calcium from the sarcoplasmic reticulum allows the myosin heads access to the actin cross-bridge binding sites, permitting muscle contraction. Muscle contraction ends when calcium ions are pumped back into the sarcoplasmic reticulum, allowing the contractile apparatus and, thus, the muscle cell to relax. Upon muscle contraction, the A-bands do not change their length (1.85 micrometers in mammalian skeletal muscle), whereas the I-bands and the H-zone shorten, causing the Z-lines to come closer together. Rest At rest, the myosin head is bound to an ATP molecule in a low-energy configuration and is unable to access the cross-bridge binding sites on the actin. However, the myosin head can hydrolyze ATP into adenosine diphosphate (ADP) and an inorganic phosphate ion. A portion of the energy released in this reaction changes the shape of the myosin head and promotes it to a high-energy configuration. Through the process of binding to the actin, the myosin head releases ADP and an inorganic phosphate ion, changing its configuration back to one of low energy. The myosin remains attached to actin in a state known as rigor, until a new ATP binds the myosin head.
This binding of ATP to myosin releases the actin by cross-bridge dissociation. The ATP-associated myosin is then ready for another cycle, beginning with the hydrolysis of the ATP. The A-band is visible as dark transverse lines across myofibers; the I-band is visible as lightly staining transverse lines, and the Z-line is visible as dark lines separating sarcomeres at the light-microscope level. Energy Storage Most muscle cells can only store enough ATP for a small number of muscle contractions. While muscle cells also store glycogen, most of the energy required for contraction is derived from phosphagens. One such phosphagen, creatine phosphate, is used to provide ADP with a phosphate group for ATP synthesis in vertebrates. Comparative structure The structure of the sarcomere affects its function in several ways. The overlap of actin and myosin gives rise to the length-tension curve, which shows how sarcomere force output decreases if the muscle is stretched so that fewer cross-bridges can form, or compressed until actin filaments interfere with each other (a simple piecewise model is sketched below). The length of the actin and myosin filaments (taken together as sarcomere length) affects force and velocity – longer sarcomeres have more cross-bridges and thus more force, but have a reduced range of shortening. Vertebrates display a very limited range of sarcomere lengths, with roughly the same optimal length (the length at peak length-tension) in all muscles of an individual as well as between species. Arthropods, however, show tremendous variation (over seven-fold) in sarcomere length, both between species and between muscles in a single individual. The reasons for the lack of substantial sarcomere variability in vertebrates are not fully known.
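The length-tension relationship described above is often captured with a piecewise-linear model of filament overlap, in the spirit of the classic Gordon-Huxley-Julian experiments. The Python sketch below uses illustrative breakpoint lengths of roughly the magnitudes reported for vertebrate skeletal muscle; the exact numbers are assumptions for the example, not measurements from this article.

def active_force_fraction(sl_um: float) -> float:
    """Normalized active force vs. sarcomere length (um), piecewise linear.

    Illustrative breakpoints:
      <1.27 um : thin filaments jammed against Z-lines, no force
      1.27-2.0 : ascending limb (thin filaments overlap and interfere)
      2.0-2.2  : plateau, optimal thick/thin filament overlap
      2.2-3.6  : descending limb, cross-bridge overlap falls linearly
      >3.6 um  : no filament overlap, no active force
    """
    if sl_um <= 1.27 or sl_um >= 3.6:
        return 0.0
    if sl_um < 2.0:                      # ascending limb
        return (sl_um - 1.27) / (2.0 - 1.27)
    if sl_um <= 2.2:                     # plateau
        return 1.0
    return (3.6 - sl_um) / (3.6 - 2.2)   # descending limb

for sl in (1.5, 2.1, 2.8, 3.4):
    print(f"{sl:.1f} um -> {active_force_fraction(sl):.2f} of max force")

The descending limb falls off linearly because each increment of stretch removes a proportional amount of thick/thin filament overlap, and with it a proportional number of available cross-bridges.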
Biology and health sciences
Muscular system
null
420275
https://en.wikipedia.org/wiki/Galactic%20bulge
Galactic bulge
In astronomy, a galactic bulge (or simply bulge) is a tightly packed group of stars within a larger star formation. The term almost exclusively refers to the central group of stars found in most spiral galaxies (see galactic spheroid). Bulges were historically thought to be elliptical galaxies that happened to have a disk of stars around them, but high-resolution images using the Hubble Space Telescope have revealed that many bulges lie at the heart of a spiral galaxy. It is now thought that there are at least two types of bulges: bulges that are like ellipticals and bulges that are like spiral galaxies. Classical bulges Bulges that have properties similar to those of elliptical galaxies are often called "classical bulges" due to their similarity to the historic view of bulges. These bulges are composed primarily of older, Population II stars, and hence have a reddish hue (see stellar evolution). These stars are also in orbits that are essentially random compared to the plane of the galaxy, giving the bulge a distinct spherical form. Due to the lack of dust and gas, bulges tend to have almost no star formation. The distribution of light is described by a Sersic profile (given below). Classical bulges are thought to be the result of collisions of smaller structures; the gravitational forces and torques of the collision disrupt the orbital paths of stars, resulting in the randomised bulge orbits. If either progenitor galaxy was gas-rich, the tidal forces can also cause inflows to the newly merged galaxy's nucleus. Following a major merger, gas clouds are more likely to convert into stars, due to shocks (see star formation). One study has suggested that about 80% of galaxies in the field lack a classical bulge, indicating that they have never experienced a major merger. The bulgeless galaxy fraction of the Universe has remained roughly constant for at least the last 8 billion years. In contrast, about two thirds of galaxies in dense galaxy clusters (such as the Virgo Cluster) do possess a classical bulge, demonstrating the disruptive effect of their crowding. Disk-like bulges Many bulges have properties more similar to those of the central regions of spiral galaxies than to elliptical galaxies. They are often referred to as pseudobulges or disky bulges. These bulges have stars that are not orbiting randomly, but rather orbit in an ordered fashion in the same plane as the stars in the outer disk. This contrasts greatly with elliptical galaxies. Subsequent studies (using the Hubble Space Telescope) show that the bulges of many galaxies are not devoid of dust, but rather show a varied and complex structure. This structure often looks similar to a spiral galaxy, but is much smaller. Giant spiral galaxies are typically 2–100 times the size of those spirals that exist in bulges. Where they exist, these central spirals dominate the light of the bulge in which they reside. Typically the rate at which new stars are formed in pseudobulges is similar to the rate at which stars form in disk galaxies. Sometimes bulges contain nuclear rings that are forming stars at a much higher rate (per area) than is typically found in outer disks, as shown in NGC 4314 (see photo). Properties such as spiral structure and young stars suggest that some bulges did not form through the same process that made elliptical galaxies and classical bulges. Yet the theories for the formation of pseudobulges are less certain than those for classical bulges.
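For reference, the Sersic profile mentioned above has a standard closed form, shown here in LaTeX; this is the conventional parameterization from the literature, not a formula quoted in this article:

I(R) = I_e \exp\left\{ -b_n \left[ \left( \frac{R}{R_e} \right)^{1/n} - 1 \right] \right\}

where I_e is the surface brightness at the effective (half-light) radius R_e, n is the Sersic index, and b_n is a constant chosen so that R_e encloses half the total light. Classical bulges tend toward high indices (n around 4, the de Vaucouleurs profile), while pseudobulges are usually closer to the exponential disk case, n near 1.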
Pseudobulges may be the result of extremely gas-rich mergers that happened more recently than the mergers that formed classical bulges (within the last 5 billion years). However, it is difficult for disks to survive the merging process, casting doubt on this scenario. Many astronomers suggest instead that bulges that appear similar to disks form out of the disk, and are not the product of a merging process. When left alone, disk galaxies can rearrange their stars and gas (as a response to instabilities). The products of this process (called secular evolution) are often observed in such galaxies; both spiral disks and galactic bars can result from the secular evolution of galaxy disks. Secular evolution is also expected to send gas and stars to the center of a galaxy. If this happens, it would increase the density at the center of the galaxy, and thus make a bulge that has properties similar to those of disk galaxies. If secular evolution, the slow, steady evolution of a galaxy, is responsible for the formation of a significant number of bulges, it follows that many galaxies have not experienced a merger since the formation of their disk. This would then mean that current theories of galaxy formation and evolution greatly over-predict the number of mergers in the past few billion years. Boxy/peanut bulge for edge-on galaxies Edge-on galaxies can sometimes have a boxy/peanut bulge with an X-shape. The boxy nature of the Milky Way bulge was revealed by the COBE satellite and later confirmed with the VVV survey with the help of red clump stars. The VVV survey also found two overlapping populations of red clump stars and an X-shape of the bulge. The WISE satellite later confirmed the X-shape of the bulge. The X-shape makes up 45% of the mass of the bulge in the Milky Way. Boxy/peanut bulges are in fact the bar of a galaxy seen edge-on; other edge-on galaxies can also show a boxy/peanut bar, sometimes with an X-shape. Central compact mass Most bulges and pseudo-bulges are thought to host a central relativistic compact mass, which is traditionally assumed to be a supermassive black hole. Such black holes by definition cannot be observed directly (light cannot escape them), but various pieces of evidence suggest their existence, both in the bulges of spiral galaxies and in the centers of ellipticals. The masses of the black holes correlate tightly with bulge properties. The M–sigma relation (see below) relates black hole mass to the velocity dispersion of bulge stars, while other correlations involve the total stellar mass or luminosity of the bulge, the central concentration of stars in the bulge, the richness of the globular cluster system orbiting in the galaxy's far outskirts, and the winding angle of the spiral arms. Until recently it was thought that one could not have a supermassive black hole without a surrounding bulge. Galaxies hosting supermassive black holes without accompanying bulges have now been observed. The implication is that the bulge environment is not strictly essential to the initial seeding and growth of massive black holes.
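The M–sigma relation referenced above is usually written as a power law; the form below, in LaTeX, is a representative modern calibration (values of this order appear in published fits such as Kormendy & Ho 2013), not a number taken from this article:

M_{\bullet} \approx \beta \left( \frac{\sigma}{200\,\mathrm{km\,s^{-1}}} \right)^{\alpha}, \qquad \alpha \approx 4\text{--}5, \quad \beta \sim \text{a few} \times 10^{8}\, M_{\odot}

so a bulge with a stellar velocity dispersion of about 200 km/s is expected to host a black hole of a few hundred million solar masses, and the steep exponent means a factor-of-two change in σ shifts the predicted mass by more than an order of magnitude.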
Physical sciences
Basics_2
Astronomy
420661
https://en.wikipedia.org/wiki/False%20killer%20whale
False killer whale
The false killer whale (Pseudorca crassidens) is a species of oceanic dolphin that is the only extant representative of the genus Pseudorca. It is found in oceans worldwide but mainly in tropical regions. It was first described in 1846 as a species of porpoise based on a skull, a classification that was revised when the first carcasses were observed in 1861. The name "false killer whale" comes from the similarity of its skull to that of the orca (Orcinus orca), or killer whale. The false killer whale reaches a maximum length of , though size can vary around the world. It is highly sociable, known to form pods of up to 50 members, and can also form pods with other dolphin species, such as the common bottlenose dolphin (Tursiops truncatus). It can form close bonds with other species and have sexual interactions with them, but it has also been known to eat other dolphins, though it typically eats squid and fish. It is a deep diver; its maximum known depth is ; its maximum speed is ~ . Several aquariums around the world keep one or more false killer whales, though its aggression toward other dolphins makes it less desirable. It is threatened by fishing operations, as it can become entangled in fishing gear. It is drive hunted in some Japanese villages. The false killer whale has a tendency to mass-strand given its highly social nature; the largest stranding consisted of over 800 individuals beached at Mar del Plata, Argentina, in 1946. Most of what is known of this species comes from examining stranded individuals. Taxonomy The false killer whale was first described by British paleontologist and biologist Richard Owen in his 1846 book, A history of British fossil mammals and birds, based on a fossil skull discovered in 1843. This specimen was unearthed from the Lincolnshire Fens near Stamford in England, a subfossil deposited in a marine environment that existed around 126,000 years ago. The skull was reported as present in a number of museum collections, but noted as lost by William Henry Flower in 1884. Owen compared the skull to those of the long-finned pilot whale (Globicephala melas), beluga whale (Delphinapterus leucas), and Risso's dolphin (Grampus griseus); in fact, he gave it the nickname "thick toothed grampus" in light of this and assigned the animal to the genus Phocaena (a genus of porpoises), to which Risso's dolphin was also assigned in 1846. The species name crassidens means "thick toothed". In 1846, zoologist John Edward Gray put the false killer whale in the genus Orcinus, the genus of the killer whale (Orcinus orca). Until 1861, when the first carcasses washed up on the shores of Kiel Bay, Denmark, the species was presumed extinct. Based on these and a pod that beached itself three months later in November, zoologist Johannes Theodor Reinhardt moved the species in 1862 to the newly erected genus Pseudorca, which established it as being neither a porpoise nor a killer whale. The name "false killer whale" comes from the apparent similarity between its skull and that of the killer whale. The false killer whale is in the family Delphinidae (oceanic dolphins). It is in the subfamily Globicephalinae; its closest living relatives are Risso's dolphin, the melon-headed whale (Peponocephala electra), the pygmy killer whale (Feresa attenuata), pilot whales (Globicephala spp.), and possibly snubfin dolphins (Orcaella spp.). William Henry Flower suggested in 1884, and later abandoned, a distinction between northern and southern false killer whales. Paules Edward Pieris Deraniyagala proposed a subspecies, P.
c. meridionalis, in 1945, though without sufficient justification. There are currently no recognized subspecies. Still, individuals in populations around the world can differ in skull structure and vary in average length, with Japanese false killer whales found to be 10–20% larger than South African ones. It can hybridize with the bottlenose dolphin (Tursiops truncatus) to produce fertile offspring called "wholphins". Description The false killer whale is black or dark gray, slightly lighter on the underside. It has a slender body with an elongated, tapered head and 44 teeth. The dorsal fin is sickle-shaped, and the flippers are narrow, short, and pointed, with a distinctive bulge on the leading edge of the flipper (the side closest to the head). False killer whales are large marine predators. They are the fourth-largest extant species of oceanic dolphin, exceeded in size only by the orca and the two species of pilot whales. Females reach a maximum size of in length and in weight, and males long and . Males are about 10–15% larger than females. Newborns can be long. Body temperature ranges from , increasing during activity. The teeth are conical, and there are 14–21 in the upper jaw and 16–24 in the lower. A false killer whale reaches physical maturity at 8 to 14 years; maximum age in captivity is 57 years for males and 62 for females. Sexual maturity occurs at 8 to 11 years. In one population, calving was at 7-year intervals; calving can occur year-round, though it usually occurs in late winter. Gestation takes ~15 months; lactation, 9 months to 2 years. The false killer whale is one of three toothed whales, the other two being the pilot whales, identified as having a sizable lifespan after menopause, which occurs at age 45 to 55. As a toothed whale, a false killer whale can echolocate, using its melon organ in the forehead to create sound, which it uses to navigate and find prey. The melon is larger in males than in females. Behaviour The false killer whale has been known to interact non-aggressively with some dolphins: the common bottlenose dolphin, the Pacific white-sided dolphin (Lagenorhynchus obliquidens), the rough-toothed dolphin (Steno bredanensis), the pilot whales, the melon-headed whale, the pantropical spotted dolphin (Stenella attenuata), the pygmy killer whale, and Risso's dolphin. They have been shown to engage in depredation at fisheries alongside killer whales (Orcinus orca), though their diets differ, with killer whales preferring swordfish and false killer whales smaller fish. A false killer whale may respond to distress calls and protect other species from predators, aid in childbirth by helping to remove the afterbirth, and has been known to interact sexually with bottlenose dolphins (see Wholphin) and pilot whales, including homosexually. It has been known to form mixed-species pods with those dolphins, probably due to shared feeding grounds. In Japan, these only occur in winter, suggesting the pods are tied to seasonal food shortages. A pod near Chile had a cruising speed of , and false killer whales in captivity were recorded to have a maximum speed of , similar to a bottlenose dolphin. Diving behavior is not well recorded, but one individual near Japan dove for 12 minutes to a depth of . In Japan, one individual had a documented dive of , and one in Hawaii , comparable to pilot whales and other similarly sized dolphins. Its maximum dive time is likely 18.5 minutes.
The false killer whale travels in large pods, as evidenced by mass strandings; groups usually have 10 to 20 members, though these smaller groups can be part of larger aggregations. It is highly social and can travel in groups of more than 500 whales. These large groups may break up into smaller family groups of 4 to 6 members while feeding. Members stay with the pod long-term, some recorded staying 15 years, and, as indicated by mass strandings, share strong bonds with other members. It is thought to have a matrifocal family structure, with mothers rather than fathers heading the pod, as in sperm whales and pilot whales. Different populations around the world have different vocalizations, similar to other dolphins. The false killer whale is probably polygynous, with males mating with multiple females. Ecology The false killer whale is an apex predator inhabiting tropical and subtropical waters. Generally, the false killer whale targets a wide array of squid and fish of various sizes during daylight hours. They typically target large species of fish, such as mahi-mahi, wahoo and tuna. They are also known to prey on marine mammals, such as some species of dolphins and whales. In captivity, it eats 3.4 to 4.3% of its body weight per day. A video taken in 2016 near Sydney shows a group hunting a juvenile shark. It sometimes discards the tail, gills, and stomach of captured fish, and pod members have been known to share food. In the Eastern Pacific, the false killer whale has been known to target smaller dolphins during tuna purse-seine fishing operations; there have been attacks on sperm whales (Physeter macrocephalus), and one recorded attack on a humpback whale (Megaptera novaeangliae) calf. Killer whales are known to prey on the false killer whale, and it may also face a threat from large sharks, though there are no documented instances. The false killer whale is known to host parasites: the trematode Nasitrema in the sinuses, the nematode Stenurus in the sinuses and lungs, an unidentified crassicaudine nematode in the sinuses, the stomach nematodes Anisakis simplex and Anisakis typica, the acanthocephalan worm Bolbosoma capitatum in the intestines, the whale lice Syncyamus pseudorcae and Isocyamus delphinii, and the whale barnacle Xenobalanus globicipitis. Some strandings, such as the 1976 and 1986 strandings in Florida, involved whales with large Bolbosoma infestations. Population and distribution The false killer whale appears to have a widespread presence in tropical and semitropical oceans. The species has been found in temperate waters, but these occurrences were possibly stray individuals, or associated with warm-water events. It generally does not go beyond 50°N or below 50°S. It usually inhabits the open ocean and deep-water areas, though it may frequent coastal areas near oceanic islands. Distinct populations inhabit the seas near the Hawaiian Islands and in the eastern North Pacific. The false killer whale is thought to be common around the world, though no total estimate has been made. The population in the Eastern Pacific is probably in the low tens of thousands, with ~16,000 near China and Japan. The population around Hawaii has been declining. Human interaction The false killer whale is known to be much more adaptable to captivity than other dolphins, being easily trained and highly sociable with other species, and as such it has been kept in several public aquariums around the world, such as in Japan, the United States, the Netherlands, Hong Kong, and Australia.
Individuals were mainly captured off California and Hawaii, and then off Japan and Taiwan after 1980. It has also been successfully bred in captivity. Chester, an orphaned calf that had been stranded near Tofino in 2014 and rescued by the Vancouver Aquarium, probably died from a bacterial erysipelas infection in 2017 at the age of approximately three and a half. The false killer has been known to approach and offer fish it has caught to humans diving or boating. It also takes fish off hooks, which sometimes leads to entanglement or to swallowing the hook. Entanglement can cause drowning, loss of circulation to an appendage, or impede the animal's ability to hunt, and a swallowed hook can puncture the digestive tract or cause a blockage. In Hawaii, this is likely driving the decline in local populations, which fell by 75% from 1989 to 2009. The false killer is more susceptible to organochlorine buildup than other dolphins, being higher up the food chain, and stranded individuals around the world show higher levels than other dolphins. It has been known to ride the wakes of large boats, which could put it at risk of hitting the propeller. In a few Japanese villages, the false killer is killed in drive hunts, which use sound to herd individuals together and cause a mass stranding, or corral them into nets before they are killed.
Beachings
The false killer whale regularly beaches itself, for reasons largely unknown, on coasts around the world, with the largest stranding consisting of 835 individuals on 9 October 1946 at Mar del Plata in Argentina. Unlike other dolphins, but similar to other globicephalines, the false killer usually mass strands in pods, leading to high mortality. These strandings can also occur in temperate waters outside its normal range, as with the mass strandings in Britain in 1927, 1935, and 1936. The 30 July 1986 mass stranding of 114 false killers in Flinders Bay, Western Australia was widely watched as volunteers and the newly created Department of Conservation and Land Management (CALM) saved 96 whales and founded an informal network for whale strandings. The 2 June 2005 Geographe Bay stranding of 120 whales in Western Australia, the fourth in the bay, was caused by a storm preventing the animals from seeing the shoreline; it prompted a rescue effort of 1,500 volunteers organized by CALM. New Zealand has seen repeated mass strandings, including seven involving more than one individual since 2005; the largest occurred on 8 April 1943 on the Māhia Peninsula with 300 stranded, and on 31 March 1978 in Manukau Harbour with 253 stranded. Whale strandings are rare in southern Africa, but mass strandings in this area are typically associated with the false killer, averaging 58 individuals. Hot spots for mass strandings exist along the coast of the Western Cape in South Africa; the most recent was on 30 May 2009 near the village of Kommetjie, with 55 individuals. On 14 January 2017, a pod of ~100 beached themselves in Everglades National Park, Florida, US; the remoteness of the area hampered rescue efforts, and 81 whales died. The other two Florida strandings were in 1986, when three whales from a pod of 40 beached at Cedar Key, and in 1980, when 28 stranded at Key West.
Conservation
The false killer whale is covered by the Agreement on the Conservation of Small Cetaceans of the Baltic, North East Atlantic, Irish and North Seas (ASCOBANS) and the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS). The species is further included in the Memorandum of Understanding Concerning the Conservation of the Manatee and Small Cetaceans of Western Africa and Macaronesia (Western African Aquatic Mammals MoU) and the Memorandum of Understanding for the Conservation of Cetaceans and Their Habitats in the Pacific Islands Region (Pacific Cetaceans MoU). No accurate global estimates for the false killer whale exist, so the species is listed as Near Threatened on the IUCN Red List. In November 2012, the United States' National Oceanic and Atmospheric Administration recognized the Hawaiian population of false killers, comprising about 150 whales, as endangered.
Ground squirrel
Ground squirrels are rodents of the squirrel family (Sciuridae) that generally live on the ground or in burrows, rather than in trees like the tree squirrels. The term is most often used for the medium-sized ground squirrels, as the larger ones are more commonly known as marmots (genus Marmota) or prairie dogs, while the smaller and less bushy-tailed ground squirrels tend to be known as chipmunks (genus Tamias). Together, they make up the "marmot tribe" of squirrels, Marmotini, a clade within the large and mainly ground-dwelling squirrel subfamily Xerinae, containing six living genera. Well-known members of this largely Holarctic group are the marmots (Marmota), including the American groundhog; the chipmunks; the susliks (Spermophilus); and the prairie dogs (Cynomys). They are highly variable in size and habitus, but most are remarkably able to rise up on their hind legs and stand fully erect comfortably for prolonged periods. They also tend to be far more gregarious than other squirrels, and many live in colonies with complex social structures. Most Marmotini are large, rather short-tailed squirrels. At up to or more, certain marmots are the heaviest squirrels. The chipmunks of the genus Tamias frequently spend time in trees, and being closer to typical squirrels in other respects as well, they are occasionally considered a tribe of their own (Tamiini).
Evolution and systematics
Palaeosciurus from Europe is the oldest known ground squirrel, and it does not seem to be particularly close to any of the two or three living lineages (subtribes) of Marmotini. The oldest fossils are from the Early Oligocene, more than 30 million years ago (Mya), but the genus probably persisted at least until the mid-Miocene, some 15 Mya. Where the Marmotini originated is unclear. The subtribes probably diverged in the early to mid-Oligocene, as primitive marmots and chipmunks are known from the Late Oligocene of North America. The fossil record of the "true" ground squirrels is less well known, beginning only in the mid-Miocene, when modern susliks and prairie dogs were already inhabiting their present-day ranges. Whether the Marmotini dispersed between North America and Eurasia via "island-hopping" across the Bering Strait or the Greenland region—both of which were temperate habitat at that time—and from which continent they dispersed to which, or whether both continents brought forth distinct subtribes that then spread to the other, is not known and would probably require more fossil material to resolve. In any case, the fairly comprehensive fossil record of Europe—at the relevant time separated from Asia by the Turgai Sea—lacks ancient Marmotini except for the indeterminate Palaeosciurus, which might be taken to indicate an East Asian or western North American origin, with trans-Beringia dispersal being the slightly more satisfying hypothesis. This is also supported by the enigmatic Chinese genus Sciurotamias, which may be the most ancient living lineage of this group, or—if the chipmunks are not included here—close to the common ancestor of the Tamiini and the Marmotini sensu stricto. In any case, expansion of the Marmotini into Africa was probably prevented by competitive exclusion by their close relatives the Protoxerini and Xerini—the native terrestrial and palm squirrels of that continent, which must have evolved at the same time as the Marmotini did.
Size
Ground squirrels can measure anywhere from about in height up to nearly . They can weigh between and .
Habitat
Their habitat comprises open areas including rocky outcrops, fields, pastures, and sparsely wooded hillsides. Ground squirrels also live in grassy areas such as pastures, golf courses, cemeteries, and parks.
Defense mechanisms
Ground squirrels have developed several defense mechanisms to protect themselves from predators. When threatened, they emit high-pitched warning calls to alert others in their colony. This alarm call serves as an early warning system, allowing nearby squirrels to seek cover. The squirrels spend about one-third of their time standing watch, and when a predator is in sight, they stop and watch 60% of the time. Ground squirrels are also known for their burrowing behavior. They dig intricate tunnel systems with multiple entrances, which provide escape routes from predators. When a threat approaches, they quickly retreat underground, where they are safe from most predators. Their burrows have multiple chambers and range between , making it challenging for predators to reach them. This combination of vocal warnings and burrow construction makes ground squirrels highly adapted to evade danger and survive in the wild.
Diet
Ground squirrels are omnivorous: they not only eat a diet rich in fungi, nuts, fruits, and seeds, but also occasionally eat insects, eggs, and other small animals.
Subtribes and genera
Basal and incertae sedis genera
Palaeosciurus (fossil)
Callospermophilus
Notocitellus
Otospermophilus (American rock squirrels)
Poliocitellus (Franklin's ground squirrel)
Sciurotamias (Chinese rock squirrels)
Urocitellus
Xerospermophilus
Subtribe Tamiina: chipmunks (might be a full tribe)
Eutamias
Neotamias
Nototamias (fossil)
Tamias
Subtribe Marmotina: marmots and prairie dogs
Arctomyoides (fossil)
Miospermophilus (fossil)
Paenemarmota (fossil)
Palaearctomys (fossil)
Protospermophilus (fossil)
Marmota
Cynomys (prairie dogs)
Subtribe Spermophilina: true ground squirrels
Spermophilinus (fossil)
Ammospermophilus
Ictidomys: Thirteen-lined ground squirrel and related species
Spermophilus
Cladogram
Below is a partial cladogram of ground squirrels (tribe Marmotini, but excluding the Tamiina subtribe and some basal genera) derived from maximum parsimony analysis.
Integrated pest management
Integrated pest management (IPM), also known as integrated pest control (IPC), is an approach that integrates both chemical and non-chemical practices for economic control of pests. The UN's Food and Agriculture Organization defines IPM as "the careful consideration of all available pest control techniques and subsequent integration of appropriate measures that discourage the development of pest populations and keep pesticides and other interventions to levels that are economically justified and reduce or minimize risks to human health and the environment. IPM emphasizes the growth of a healthy crop with the least possible disruption to agro-ecosystems and encourages natural pest control mechanisms." Entomologists and ecologists have urged the adoption of IPM since the 1970s. IPM is a safer pest control framework than reliance on chemical pesticides alone, mitigating risks such as insecticide-induced resurgence, pesticide resistance, and pesticide residues on crops (especially food crops).
History
Shortly after World War II, when synthetic insecticides were introduced, entomologists in California developed the concept of "supervised insect control". Around the same time, entomologists in the US Cotton Belt were advocating a similar approach. Under this scheme, insect control was "supervised" by qualified entomologists, and insecticide applications were based on conclusions reached from periodic monitoring of pest and natural-enemy populations. This was viewed as an alternative to calendar-based programs. Supervised control was based on knowledge of the ecology and analysis of projected trends in pest and natural-enemy populations. Supervised control formed much of the conceptual basis for the "integrated control" that University of California entomologists articulated in the 1950s. Integrated control sought to identify the best mix of chemical and biological controls for a given insect pest. Chemical insecticides were to be used in the manner least disruptive to biological control. The term "integrated" was thus synonymous with "compatible". Chemical controls were to be applied only after regular monitoring indicated that a pest population had reached a level that required treatment (the economic threshold) to prevent the population from reaching a level at which economic losses would exceed the cost of the control measures (the economic injury level). IPM extended the concept of integrated control to all classes of pests and was expanded to include all tactics. Controls such as pesticides were to be applied as in integrated control, but these now had to be compatible with tactics for all classes of pests. Other tactics, such as host-plant resistance and cultural manipulations, became part of the IPM framework. IPM brought together entomologists, plant pathologists, nematologists and weed scientists. In the United States, IPM was formulated into national policy in February 1972 as directed by President Richard Nixon. In 1979, President Jimmy Carter established an interagency IPM Coordinating Committee to ensure the development and implementation of IPM practices. Perry Adkisson and Ray F. Smith received the 1997 World Food Prize for encouraging the use of IPM.
Applications
IPM is used in agriculture, horticulture, forestry, human habitations, preventive conservation of cultural property and general pest control, including structural pest management, turf pest management and ornamental pest management. IPM practices help to prevent and slow the development of resistance, an approach known as resistance management.
Principles
An American IPM system is designed around six basic components:
Acceptable pest levels—The emphasis is on control, not eradication. IPM holds that wiping out an entire pest population is often impossible, and the attempt can be expensive and unsafe. IPM programmes first work to establish acceptable pest levels, called action thresholds, and apply controls if those thresholds are crossed. These thresholds are pest- and site-specific, meaning that it may be acceptable at one site to have a weed such as white clover, but not at another site. Allowing a pest population to survive at a reasonable threshold reduces selection pressure. This lowers the rate at which a pest develops resistance to a control: if almost all pests are killed, those that have resistance will provide the genetic basis of the future population, whereas retaining a significant number of unresistant specimens dilutes the prevalence of any resistant genes that appear. Similarly, the repeated use of a single class of controls will create pest populations that are more resistant to that class, whereas alternating among classes helps prevent this.
Preventive cultural practices—Selecting varieties best suited to local growing conditions and maintaining healthy crops is the first line of defense. Plant quarantine and 'cultural techniques' such as crop sanitation are next, e.g., removal of diseased plants, and cleaning pruning shears to prevent the spread of infections. Beneficial fungi and bacteria are added to the potting media of horticultural crops vulnerable to root diseases, greatly reducing the need for fungicides.
Monitoring—Regular observation is critically important. Observation is broken into inspection and identification. Visual inspection, insect and spore traps, and other methods are used to monitor pest levels. Record-keeping is essential, as is a thorough knowledge of target pest behavior and reproductive cycles. Since insects are cold-blooded, their physical development depends on area temperatures. Many insects have had their development cycles modeled in terms of degree-days, and the degree-days of an environment determine the optimal time for a specific insect outbreak (a sketch of this calculation appears in the Process section below). Plant pathogens follow similar patterns of response to weather and season. Automated systems based on AI have been developed to identify and monitor flies using e-trapping devices.
Mechanical controls—Should a pest reach an unacceptable level, mechanical methods are the first options. They include simple hand-picking, barriers, traps, vacuuming and tillage to disrupt breeding.
Biological controls—Natural biological processes and materials can provide control, with acceptable environmental impact, and often at lower cost. The main approach is to promote beneficial insects that eat or parasitize target pests. Biological insecticides, derived from naturally occurring microorganisms (e.g., Bt, entomopathogenic fungi and entomopathogenic nematodes), also fall into this category. Further 'biology-based' or 'ecological' techniques are under evaluation.
Responsible use—Synthetic pesticides are used as required and often only at specific times in a pest's life cycle. Many newer pesticides are derived from plants or naturally occurring substances (e.g., nicotine, pyrethrum and insect juvenile hormone analogues), but the toxophore or active component may be altered to provide increased biological activity or stability. Applications of pesticides must reach their intended targets.
Matching the application technique to the crop, the pest, and the pesticide is critical; for example, the use of low-volume spray equipment can considerably reduce overall pesticide use and operational costs. Although originally developed for agricultural pest management, IPM programmes now encompass diseases, weeds and other pests that interfere with management objectives for sites such as residential and commercial structures, lawn and turf areas, and home and community gardens. Predictive models have proved to be suitable tools supporting the implementation of IPM programmes.
Process
IPM is the selection and use of pest control actions that will ensure favourable economic, ecological and social consequences, and it is applicable to most agricultural, public health and amenity pest management situations. The IPM process starts with monitoring, which includes inspection and identification, followed by the establishment of economic injury levels. The economic injury levels set the economic threshold level. The economic injury level is the pest population at which the cost of crop damage exceeds the cost of treating the pest. An action threshold can also be set to define an unacceptable level that is not tied to economic injury. Action thresholds are more common in structural pest management and economic injury levels in classic agricultural pest management. As an example of an action threshold, one fly in a hospital operating room is not acceptable, but one fly in a pet kennel would be. Once the pest population crosses a threshold, action must be taken to reduce and control the pest. Integrated pest management employs a variety of actions, including cultural controls such as physical barriers, biological controls such as adding and conserving natural predators and enemies of the pest, and finally chemical controls or pesticides. Reliance on knowledge, experience, observation and the integration of multiple techniques makes IPM appropriate for organic farming (excluding synthetic pesticides). These may or may not include materials listed by the Organic Materials Review Institute (OMRI). Although the pesticides and particularly insecticides used in organic farming and organic gardening are generally safer than synthetic pesticides, they are not always safer or more environmentally friendly and can cause harm. For conventional farms, IPM can reduce human and environmental exposure to hazardous chemicals, and potentially lower overall costs. Risk assessment usually includes four issues: 1) characterization of biological control agents, 2) health risks, 3) environmental risks and 4) efficacy. Mistaken identification of a pest may result in ineffective actions. E.g., plant damage due to over-watering could be mistaken for fungal infection, since many fungal and viral infections arise under moist conditions. Monitoring begins immediately, before the pest's activity becomes significant. Monitoring of agricultural pests includes tracking soil/planting media fertility and water quality. Overall plant health and resistance to pests are greatly influenced by pH, alkalinity, dissolved mineral content and oxidation-reduction potential. Many diseases are waterborne, spread directly by irrigation water and indirectly by splashing. Once the pest is known, knowledge of its lifecycle provides the optimal intervention points. For example, weeds reproducing from last year's seed can be prevented with mulches and pre-emergent herbicide.
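The monitoring and threshold logic described above lends itself to a simple calculation. The sketch below is illustrative only and does not model any real pest: the base temperature, degree-day target, and cost figures are hypothetical placeholders, and the simple-average degree-day formula is just one of several methods used in practice.

```python
# A minimal sketch of two quantitative pieces of IPM: degree-day
# accumulation to time monitoring for an insect pest, and an
# economic-threshold test to decide whether treatment pays.
# T_BASE, DD_TARGET, and the cost figures are hypothetical.

T_BASE = 10.0      # assumed developmental base temperature (deg C)
DD_TARGET = 150.0  # assumed degree-days from biofix to, e.g., egg hatch

def degree_days(t_min, t_max, t_base=T_BASE):
    """Simple-average method: mean daily temperature above the base."""
    return max(0.0, (t_min + t_max) / 2.0 - t_base)

def day_target_reached(daily_temps):
    """Accumulate degree-days over (t_min, t_max) pairs; return the day
    the running total first crosses DD_TARGET, or None if it never does."""
    total = 0.0
    for day, (t_min, t_max) in enumerate(daily_temps, start=1):
        total += degree_days(t_min, t_max)
        if total >= DD_TARGET:
            return day
    return None

def intervention_warranted(expected_damage_cost, control_cost):
    """The economic-threshold rule: treat only when the expected cost
    of pest damage exceeds the cost of control."""
    return expected_damage_cost > control_cost

# Example: three weeks of forecast temperatures, then the decision.
forecast = [(12.0, 24.0)] * 21                # (t_min, t_max) per day
print(day_target_reached(forecast))           # -> 19 (8 degree-days/day)
print(intervention_warranted(1200.0, 400.0))  # -> True, treatment pays
```

In a real programme the degree-day target and base temperature would come from published phenology models for the specific pest, and the damage estimate from field scouting counts.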
Pest-tolerant crops such as soybeans may not warrant interventions unless the pests are numerous or rapidly increasing. Intervention is warranted if the expected cost of damage by the pest is more than the cost of control. Health hazards may require intervention that is not warranted by economic considerations. Specific sites may also have varying requirements. E.g., white clover may be acceptable on the sides of a tee box on a golf course, but unacceptable in the fairway, where it could cause confusion in the field of play. Possible interventions include mechanical/physical, cultural, biological and chemical controls. Mechanical/physical controls include picking pests off plants, or using netting or other material to exclude pests such as birds from grapes or rodents from structures. Cultural controls include keeping an area free of conducive conditions by removing waste or diseased plants, flooding, sanding, and the use of disease-resistant crop varieties. Biological controls are numerous. They include conservation or augmentation of natural predators and the sterile insect technique (SIT). Augmentation, inoculative release and inundative release are different methods of biological control that affect the target pest in different ways. Augmentative control includes the periodic introduction of predators. With inundative release, predators are collected, mass-reared and periodically released in large numbers into the pest area. This is used for an immediate reduction in host populations, generally for annual crops, but is not suitable for long-term use. With inoculative release, a limited number of beneficial organisms are introduced at the start of the growing season. This strategy offers long-term control, as the organism's progeny affect pest populations throughout the season, and is common in orchards. With seasonal inoculative release, the beneficials are collected, mass-reared and released seasonally to maintain the beneficial population. This is commonly used in greenhouses. In America and other Western countries, inundative releases are predominant, while Asia and eastern Europe more commonly use inoculation and occasional introductions. The sterile insect technique (SIT) is an area-wide IPM approach that introduces sterile male pests into the pest population to trick females into (unsuccessful) breeding encounters, providing a form of birth control and reducing reproduction rates. The biological controls mentioned above are appropriate only in extreme cases, because the introduction of new species, or the supplementation of naturally occurring species, can have detrimental ecosystem effects. Biological controls can be used to stop invasive species or pests, but they can also become an introduction path for new pests. Chemical controls include horticultural oils or the application of insecticides and herbicides. A green pest management IPM program uses pesticides derived from plants, such as botanicals, or other naturally occurring materials. Pesticides can be classified by their modes of action; rotating among materials with diverse modes of action minimizes pest resistance. Evaluation is the process of assessing whether the intervention was effective, whether it produced unacceptable side effects, and whether to continue, revise or abandon the program.
Southeast Asia
The Green Revolution of the 1960s and '70s introduced sturdier plants that could support the heavier grain loads resulting from intensive fertilizer use.
Pesticide imports by 11 Southeast Asian countries grew nearly sevenfold in value between 1990 and 2010, according to FAO statistics, with disastrous results. Rice farmers became accustomed to spraying soon after planting, triggered by signs of the leaf folder moth, which appears early in the growing season but causes only superficial damage and doesn't reduce yields. In 1986, Indonesia banned 57 pesticides and completely stopped subsidizing their use. Progress was reversed in the 2000s, when growing production capacity, particularly in China, reduced prices. Rice production in Asia more than doubled, but it left farmers believing more is better—whether it's seed, fertilizer, or pesticides. The brown planthopper, Nilaparvata lugens, the farmers' main target, has become increasingly resistant. Since 2008, outbreaks have devastated rice harvests throughout Asia, but not in the Mekong Delta, where reduced spraying allowed natural predators to neutralize planthoppers in Vietnam. In 2010 and 2011, massive planthopper outbreaks hit 400,000 hectares of Thai rice fields, causing losses of about $64 million. The Thai government is now pushing the "no spray in the first 40 days" approach. By contrast, early spraying kills the frogs, spiders, wasps and dragonflies that prey on the later-arriving and more dangerous planthopper, and it has produced resistant strains: planthoppers now require pesticide doses 500 times greater than originally needed. Overuse indiscriminately kills beneficial insects and decimates bird and amphibian populations. Pesticides are suspected of harming human health and became a common means for rural Asians to commit suicide. In 2001, 950 Vietnamese farmers tried IPM. In one plot, each farmer grew rice using their usual amounts of seed and fertilizer, applying pesticide as they chose. In a nearby plot, less seed and fertilizer were used and no pesticides were applied for 40 days after planting. Yields from the experimental plots were as good or better and costs were lower, generating 8% to 10% more net income. The experiment led to the "three reductions, three gains" campaign, claiming that cutting the use of seed, fertilizer and pesticide would boost yield, quality and income. The campaign used posters, leaflets, TV commercials and a 2004 radio soap opera featuring a rice farmer who gradually accepted the changes. It didn't hurt that a 2006 planthopper outbreak hit farmers using insecticides harder than those who didn't. Mekong Delta farmers cut insecticide spraying from five times per crop cycle to between zero and one. The Plant Protection Center and the International Rice Research Institute (IRRI) have been encouraging farmers to grow flowers, okra, and beans on rice paddy banks, instead of stripping vegetation, as was typical. The plants attract bees and wasps that eat planthopper eggs, while the vegetables diversify farm incomes. Agriculture companies offer bundles of pesticides with seeds and fertilizer, with incentives for volume purchases. A proposed law in Vietnam requires licensing pesticide dealers and government approval of advertisements to prevent exaggerated claims. Insecticides that target other pests, such as Scirpophaga incertulas (the stem borer, a moth whose larvae feed on rice plants), allegedly yield gains of 21% with proper use.
Vampire squid
The vampire squid (Vampyroteuthis infernalis, lit. 'vampire squid from hell') is a small cephalopod found throughout temperate and tropical oceans in extreme deep-sea conditions. The vampire squid uses its bioluminescent organs and its unique oxygen metabolism to thrive in the parts of the ocean with the lowest concentrations of oxygen. It has two long retractile filaments, located between the first two pairs of arms on its dorsal side, which distinguish it from both octopuses and squids and place it in its own order, Vampyromorphida, although its closest relatives are octopods. As a phylogenetic relict, it is the only known surviving member of its order. The first specimens were collected on the Valdivia Expedition and were originally described as an octopus in 1903 by German teuthologist Carl Chun, but later assigned to a new order together with several extinct taxa.
Discovery
The vampire squid was discovered during the Valdivia Expedition (1898–1899), led by Carl Chun. Chun was a zoologist who was inspired by the Challenger Expedition and wanted to verify that life does indeed exist below 300 fathoms (550 meters). Chun later classified the vampire squid into its own family, Vampyroteuthidae. The expedition was funded by the Gesellschaft Deutscher Naturforscher und Ärzte, a group of German scientists who believed there was life at depths greater than 550 meters, contrary to the Abyssus theory. The expedition's ship, the Valdivia, was fitted with equipment for the collection of deep-sea organisms, as well as laboratories and specimen jars, in order to analyze and preserve what was caught. The voyage began in Hamburg, Germany, followed by Edinburgh, and then traced around the west coast of Africa. After navigating around the southern point of Africa, the expedition studied deep areas of the Indian and Antarctic Oceans. Researchers had not previously discovered any species from this family that could be traced back to the Cenozoic. This suggests two possibilities: a notable preservation bias called the Lazarus effect may exist, or the timing of when vampire squids originally settled in the deep oceans has been inaccurately determined. The Lazarus effect may result from the scarcity of post-Cretaceous research regions or from the reduced abundance and distribution of vampire squids; in either case, such fossils are difficult to locate and analyze even where the search regions remain the same.
Description
The vampire squid can reach a maximum total length of around . Its gelatinous body varies in colour from velvety jet-black to pale reddish, depending on location and lighting conditions. A webbing of skin connects its eight arms, each lined with rows of fleshy spines or cirri; the inner side of this "cloak" is black. Only the distal halves (farthest from the body) of the arms have suckers. The name of the animal was inspired by its dark colour and cloaklike webbing rather than its habits; it feeds on detritus, not blood. Its limpid, globular eyes, which appear red or blue depending on lighting, are proportionately the largest in the animal kingdom at in diameter. The large eyes are accompanied by the similarly expanded optic lobes of the brain. Mature adults have a pair of small fins projecting from the lateral sides of the mantle. These earlike fins serve as the adult's primary means of propulsion: vampire squid move through the water by flapping their fins. The beaklike jaws are white. Within the webbing are two pouches wherein the tactile velar filaments are concealed.
The filaments are analogous to a true squid's tentacles, extending well past the arms, but differ in origin and represent the pair of limbs lost by the ancestral octopus. The vampire squid is almost entirely covered in light-producing organs called photophores, capable of producing disorienting flashes of light ranging in duration from fractions of a second to several minutes. The intensity and size of the photophores can also be modulated. Appearing as small, white discs, the photophores are larger and more complex at the tips of the arms and at the base of the two fins, but are absent from the undersides of the caped arms. Two larger, white areas on top of the head were initially believed to also be photophores, but are now identified as photoreceptors. The chromatophores (pigment organs) common to most cephalopods are poorly developed in the vampire squid. The animal is, therefore, incapable of changing its skin colour in the dramatic fashion of shallow-dwelling cephalopods, as such an ability would not be useful at the lightless depths where it lives.
Systematics
The Vampyromorphida is the extant sister taxon to all octopuses. Phylogenetic studies of cephalopods using multiple genes and mitochondrial genomes have shown that the Vampyromorphida were the first group of Octopodiformes to diverge from all others. The Vampyromorphida is characterized by derived characters such as the possession of photophores and of two velar filaments, which are most probably modified arms. It also shares an internal gladius with other coleoids, including squid, and eight webbed arms with cirrate octopods. Vampyroteuthis shares its eight cirrate arms with the Cirrata, in which lateral cirri, or filaments, alternate with the suckers. Vampyroteuthis differs in that its suckers are present only on the distal half of the arms while cirri run the entire length; in cirrate octopods, suckers and cirri alternate along the entire length. Also, a close relationship between Vampyroteuthis and the Jurassic-Cretaceous Loligosepiina is indicated by the similarity of their gladii, the internal stiffening structure. Vampyronassa rhodanica, from the Middle Jurassic of La Voulte-sur-Rhône, France, is considered a vampyroteuthid that shares some characters with Vampyroteuthis. The supposed vampyromorphids from the Kimmeridgian-Tithonian (156–146 mya) of Solnhofen, Plesioteuthis prisca, Leptotheuthis gigas, and Trachyteuthis hastiformis, cannot be positively assigned to this group; they are large species (from 35 cm in P. prisca to more than 1 m in L. gigas) and show features not found in vampyromorphids, being somewhat similar to the true squids, Teuthida.
Biology
The vampire squid's worldwide range is confined to the tropics and subtropics. This species is an extreme example of a deep-sea cephalopod, thought to reside at aphotic (lightless) depths from or more. Within this region of the world's oceans is a discrete habitat known as the oxygen minimum zone (OMZ). Within an OMZ, the saturation of oxygen is too low to support aerobic metabolism in most complex organisms. The vampire squid is the only cephalopod able to live its entire life cycle in the minimum zone, at oxygen saturations as low as 3%. What behavioral data are known have been gleaned from ephemeral encounters with remotely operated underwater vehicles (ROVs). Vampire squid are frequently injured during capture and survive up to two months in aquaria. It has been hypothesized that they can live for over eight years.
To cope with life in the suffocating depths, vampire squids have developed several adaptations. Of all deep-sea cephalopods, their mass-specific metabolic rate is the lowest. Their blue blood's hemocyanin binds and transports oxygen more efficiently than in other cephalopods, aided by gills possessing an especially large surface area. The animals have weak musculature and a greatly reduced shell, but maintain agility and buoyancy with little effort because of sophisticated statocysts (balancing organs akin to a human's inner ear) and ammonium-rich gelatinous tissues closely matching the density of the surrounding seawater. The vampire squid's ability to thrive in OMZs also keeps it safe from apex predators that require a large amount of oxygen to live. The vampire squid's large eyes and optic lobes may be an adaptation for greater sensitivity to distant bioluminescence, a sign of animals such as prey aggregations or potential mates. This sensitivity is useful when monitoring a vast area of the water column, which is largely featureless at these depths.
Antipredator behavior
Like many deep-sea cephalopods, the vampire squid lacks ink sacs. This, along with its low metabolic rate, has led it to adopt various alternative methods of defence. If disturbed, it will curl its arms up and outwards and wrap them around its body, turning itself inside-out in a way, making itself seem larger and exposing the spiny projections on its arms (the cirri). The underside of the cape is heavily pigmented, concealing most of the body's photophores. The glowing arm tips are clustered together far above the animal's head, diverting attack away from critical areas. This anti-predator behavior is dubbed the "pumpkin" or "pineapple" posture. The arm tips regenerate, so if they are bitten off, they can serve as a diversion, allowing the animal to escape while its predator is distracted. If highly agitated, it may eject a sticky cloud of bioluminescent mucus containing innumerable orbs of blue light from its arm tips. This luminous barrage, which may last nearly 10 minutes, would presumably serve to dazzle would-be predators and allow the vampire squid to disappear into the dark without the need to swim far. The glowing "ink" is also able to stick to the predator, creating what is called the "burglar alarm effect": making the vampire squid's would-be predator more visible to secondary predators, similar to the Atolla jellyfish's light display. The display is made only if the animal is very agitated, due to the metabolic cost of mucus regeneration. These bioluminescent "fireworks" are combined with the writhing of glowing arms and with erratic movements and escape trajectories, making it difficult for a predator to identify the squid itself among multiple sudden targets. The vampire squid's retractile filaments have also been suggested to play a larger role in predator avoidance, via both detection and escape mechanisms. Despite these defence mechanisms, vampire squids have been found among the stomach contents of large deepwater fish, including giant grenadiers, and deep-diving mammals, such as whales and sea lions.
Feeding
Like octopods, vampire squid have eight arms but lack feeding tentacles; instead they use two retractile filaments to capture food. These filaments bear small hairs, made up of many sensory cells, that help the animal detect and secure its prey. It combines waste with mucus secreted from the suckers to form balls of food.
As sedentary generalist feeders, they feed on detritus, including the remains of gelatinous zooplankton (such as salps, larvaceans, and medusae) and whole crustaceans, such as copepods, ostracods, amphipods, and isopods, as well as the faecal pellets of aquatic organisms living above. Vampire squids also use a unique luring method in which they purposefully agitate bioluminescent protists in the water to attract larger prey for them to consume.
Life cycle
If hypotheses may be drawn from knowledge of other deep-sea cephalopods, the vampire squid likely reproduces slowly by way of a small number of large eggs, a K-selected strategy. Ovulation is irregular, and minimal energy is devoted to the development of the gonads. Growth is slow, as nutrients are not abundant at the depths frequented by the animals. The vastness of their habitat and its sparse population make reproductive encounters a matter of chance. With iteroparity often seen in organisms with high adult survival rates, such as the vampire squid, many low-cost reproductive cycles would be expected for the species. Reproduction of the vampire squid is unlike that of any other coleoid cephalopod: the male passes a "packet" of sperm to the female, who accepts it and stores it in a special pouch inside her mantle. The female may store a male's hydraulically implanted spermatophore for long periods before she is ready to fertilize her eggs. Once she does, she may need to brood them for up to 400 days before they hatch. Their reproductive strategy appears to be iteroparous, an exception amongst the otherwise semelparous Coleoidea. Other coleoid cephalopods are thought to go through only one reproductive cycle during their lives, whereas vampire squid show evidence of multiple reproductive cycles: after releasing her eggs, the female returns to a resting state, and a new batch of eggs later forms. This process may repeat up to, and sometimes more than, twenty times in a lifespan. These spawning events are spaced quite far apart due to the vampire squid's low metabolic rate. Few specifics are known regarding the ontogeny of the vampire squid. Hatchlings are about 8 mm in length and are well-developed miniatures of the adults, with some differences: they are transparent, their arms lack webbing, their eyes are proportionally smaller, and their velar filaments are not fully formed. Their development progresses through three morphologic forms: the very young animals have a single pair of fins, an intermediate form has two pairs, and the mature form again has one pair. At the earliest and intermediate phases of development, a pair of fins is located near the eyes; as the animal develops, this pair gradually disappears as the other pair develops. As the animals grow and their surface area to volume ratio drops, the fins are resized and repositioned to maximize gait efficiency. Whereas the young propel themselves primarily by jet propulsion, mature adults prefer the more efficient means of flapping their fins. This unique ontogeny caused confusion in the past, with the varying forms identified as several species in distinct families. The hatchlings survive on a generous internal yolk supply for an unknown period before they begin to actively feed. The younger animals frequent much deeper waters, perhaps feeding on marine snow (falling organic detritus).
The mature vampire squid is also thought to be an opportunistic hunter of larger prey, as fish bones, squid flesh, and gelatinous matter have been recorded in mature vampire squid stomachs.
Relationship with humans
Conservation status
The vampire squid is currently not on any endangered or threatened species list, and it has no known impact on humans. Vampire squids are at increased risk from microplastic pollution because their diet is mostly marine snow. Microplastics can cause death by decreasing feeding activity: they take up space in the digestive tract, causing the animal's stomach to feel full without providing nutrients.
Popular culture
Following an article in Rolling Stone magazine by Matt Taibbi after the subprime mortgage crisis of 2008, the term "vampire squid" has been regularly used in popular culture to refer to Goldman Sachs, the American investment bank. Live vampire squids are shown in the "Ocean Deep" episode of Planet Earth. The Monterey Bay Aquarium (California, United States) became the first facility to put this species on display, in May 2014. Vampire squids can be caught and cooked in the 2023 video game Dave the Diver.
Spirit (rover)
Spirit, also known as MER-A (Mars Exploration Rover – A) or MER-2, is a Mars robotic rover, active from 2004 to 2010. Spirit was operational on Mars for sols or 3.3 Martian years ( days; ). It was one of two rovers of NASA's Mars Exploration Rover mission managed by the Jet Propulsion Laboratory (JPL). Spirit landed successfully within the impact crater Gusev on Mars at 04:35 Ground UTC on January 4, 2004, three weeks before its twin, Opportunity (MER-B), which landed on the other side of the planet. Its name was chosen through a NASA-sponsored student essay competition. The rover became stuck in a "sand trap" in 2009 at an angle that hampered recharging of its batteries; its last communication with Earth was on March 22, 2010. The rover completed its planned 90-sol mission (slightly less than 92.5 Earth days). Aided by cleaning events that resulted in more energy from its solar panels, Spirit went on to function effectively over twenty times longer than NASA planners expected. Spirit also logged of driving instead of the planned , allowing more extensive geological analysis of Martian rocks and planetary surface features. Initial scientific results from the first phase of the mission (the 90-sol prime mission) were published in a special issue of the journal Science. On May 1, 2009 (5 years, 3 months, 27 Earth days after landing; 21 times the planned mission duration), Spirit became stuck in soft sand. This was not the first of the mission's "embedding events", and for the following eight months NASA carefully analyzed the situation, running Earth-based theoretical and practical simulations, and finally programming the rover to make extrication drives in an attempt to free itself. These efforts continued until January 26, 2010, when NASA officials announced that the rover was likely irrecoverably obstructed by its location in soft sand, though it continued to perform scientific research from its current location. The rover continued in a stationary science platform role until communication with Spirit stopped on March 22, 2010 (sol ). JPL continued to attempt to regain contact until May 24, 2011, when NASA announced that efforts to communicate with the unresponsive rover had ended, calling the mission complete. A formal farewell took place at NASA headquarters shortly thereafter.
Objectives
The scientific objectives of the Mars Exploration Rover mission were to:
Search for and characterize a variety of rocks and soils that hold clues to past water activity. In particular, samples sought include those that have minerals deposited by water-related processes such as precipitation, evaporation, sedimentary cementation, or hydrothermal activity.
Determine the distribution and composition of minerals, rocks, and soils surrounding the landing sites.
Determine what geologic processes have shaped the local terrain and influenced the chemistry. Such processes could include water or wind erosion, sedimentation, hydrothermal mechanisms, volcanism, and cratering.
Perform calibration and validation of surface observations made by Mars Reconnaissance Orbiter (MRO) instruments. This will help determine the accuracy and effectiveness of various instruments that survey Martian geology from orbit.
Search for iron-containing minerals, and identify and quantify relative amounts of specific mineral types that contain water or were formed in water, such as iron-bearing carbonates.
Characterize the mineralogy and textures of rocks and soils to determine the processes that created them.
Search for geological clues to the environmental conditions that existed when liquid water was present. Assess whether those environments were conducive to life.
Mission timeline
The Opportunity and Spirit rovers were part of the Mars Exploration Rover program within the long-term Mars Exploration Program. The Mars Exploration Program's four principal goals were to determine whether the potential for life exists on Mars (in particular, whether recoverable water may be found on Mars), to characterize the Martian climate, to characterize its geology, and to prepare for a potential human mission to Mars. The Mars Exploration Rovers were to travel across the Martian surface and perform periodic geologic analyses to determine if water ever existed on Mars, as well as the types of minerals available, and to corroborate data taken by the Mars Reconnaissance Orbiter (MRO). Both rovers were designed with an expected 90-sol (92 Earth day) lifetime, but each lasted much longer than expected. The Spirit mission lasted 20 times longer than its expected lifetime, and its mission was declared ended on May 25, 2011, after it got stuck in soft sand and expended its power reserves trying to free itself. Opportunity lasted 55 times longer than its 90-sol planned lifetime, operating for days from landing to mission end. An archive of weekly updates on the rover's status can be found at the Opportunity Update Archive.
Launch and landing
MER-A (Spirit) and MER-B (Opportunity) were launched on June 10, 2003 and July 7, 2003, respectively. Both probes launched on Boeing Delta II rockets from Cape Canaveral Space Launch Complex 17 (CCAFS SLC-17), though MER-B was on the heavy version of that launch vehicle, needing the extra energy for trans-Mars injection. The launch vehicles were integrated onto pads right next to each other, with MER-A on CCAFS SLC-17A and MER-B on CCAFS SLC-17B. The dual pads allowed the 15- and 21-day planetary launch periods to be worked close together; the last possible launch day for MER-A was June 19, 2003 and the first day for MER-B was June 25, 2003. NASA's Launch Services Program managed the launch of both spacecraft. Spirit successfully landed on the surface of Mars at 04:35 Spacecraft Event Time (SCET) on January 4, 2004. This was the start of its 90-sol mission, but solar cell cleaning events would mean it was the start of a much longer mission, lasting until 2010. Spirit was targeted to a site that appears to have been affected by liquid water in the past, the crater Gusev, a possible former lake in a giant impact crater, about from the center of the target ellipse at . After the airbag-protected landing craft settled onto the surface, the rover rolled out to take panoramic images. These gave scientists the information they needed to select promising geological targets and drive to those locations to perform on-site scientific investigations. The MER team named the landing site "Columbia Memorial Station" in honor of the seven astronauts killed in the Space Shuttle Columbia disaster. On May 1, 2009 (sol ), the rover became stuck in soft sand, the machine resting upon a cache of iron(III) sulfate (jarosite) hidden under a veneer of normal-looking soil. Iron sulfate has very little cohesion, making it difficult for the rover's wheels to gain traction. On January 26, 2010 (sol ), after several months attempting to free the rover, NASA redefined the mobile-robot mission as a stationary research platform.
Efforts were then directed at orienting the platform more favourably in relation to the Sun, in an attempt to allow more efficient recharging of the platform's batteries. This was needed to keep some systems operational during the Martian winter. On March 30, 2010, Spirit skipped a planned communication session and, as anticipated from recent power-supply projections, had probably entered a low-power hibernation mode. The last communication with the rover was on March 22, 2010 (sol ), and there is a strong possibility the rover's batteries lost so much energy at some point that the mission clock stopped. In previous winters the rover was able to park on a Sun-facing slope and keep its internal temperature above , but since the rover was stuck on flat ground, it is estimated that its internal temperature dropped to . If Spirit had survived these conditions and there had been a cleaning event, there was a possibility that with the southern summer solstice in March 2011, solar energy would increase to a level that would wake up the rover. Spirit remains silent at its location, called "Troy," on the west side of Home Plate. There was no communication with the rover after March 22, 2010 (sol ). It is likely that Spirit experienced a low-power fault and turned off all subsystems, including communication, going into a deep sleep while trying to recharge its batteries. It is also possible that the rover experienced a mission clock fault. If that had happened, the rover would have lost track of time and tried to remain asleep until enough sunlight struck the solar arrays to wake it; this state is called "Solar Groovy." If the rover woke up from a mission clock fault, it would only listen. Starting on July 26, 2010 (sol ), a new procedure to address the possible mission clock fault was implemented.
End of mission
JPL continued attempts to regain contact with Spirit until May 25, 2011, when NASA announced the end of contact efforts and the completion of the mission. According to NASA, the rover likely experienced excessively cold "internal temperatures" due to "inadequate energy to run its survival heaters" that, in turn, was a result of "a stressful Martian winter without much sunlight." Many critical components and connections would have been "susceptible to damage from the cold." Assets that had been needed to support Spirit were transitioned to support its then still-active twin, Opportunity. The primary surface mission for Spirit was planned to last at least 90 sols. The mission received several extensions and lasted about 2,208 sols. On August 11, 2007, Spirit achieved the second-longest operational duration on the surface of Mars for a lander or rover, at 1282 sols, one sol longer than the Viking 2 lander. Viking 2 was powered by a nuclear power source, whereas Spirit was powered by solar arrays. Until Opportunity overtook it on May 19, 2010, the Mars probe with the longest operational period was Viking 1, which lasted 2245 sols on the surface of Mars. On March 22, 2010, Spirit sent its last communication, thus falling just over a month short of surpassing Viking 1's operational record. An archive of weekly updates on the rover's status can be found at the Spirit Update Archive. Spirit's total odometry is .
Design and construction
Spirit (and its twin, Opportunity) are six-wheeled, solar-powered robots standing high, wide and long and weighing . Six wheels on a rocker-bogie system enabled mobility over rough terrain. Each wheel had its own motor.
The vehicle was steered at front and rear and was designed to operate safely at tilts of up to 30 degrees. The maximum speed was , although the average speed was about . Both Spirit and Opportunity have pieces of metal from the fallen World Trade Center on them that were "turned into shields to protect cables on the drilling mechanisms". Spirit's onboard computer used a 20 MHz RAD6000 CPU with 128 MB of DRAM and 3 MB of EEPROM. The rover's operating temperature ranged from , and radioisotope heaters provided a base level of heating, assisted by electrical heaters when necessary. Communications depended on an omnidirectional low-gain antenna communicating at a low data rate and a steerable high-gain antenna, both in direct contact with Earth. A low-gain antenna was also used to relay data to spacecraft orbiting Mars.
Science payload
The science instruments included:
Panoramic Camera (Pancam) – examined the texture, color, mineralogy, and structure of the local terrain.
Navigation Camera (Navcam) – monochrome, with a higher field of view but lower resolution, for navigation and driving.
Miniature Thermal Emission Spectrometer (Mini-TES) – identified promising rocks and soils for closer examination, and determined the processes that formed them.
Hazcams – two B&W cameras with a 120-degree field of view, providing additional data about the rover's surroundings.
The rover arm held the following instruments:
Mössbauer spectrometer (MB) MIMOS II – used for close-up investigations of the mineralogy of iron-bearing rocks and soils.
Alpha particle X-ray spectrometer (APXS) – close-up analysis of the abundances of elements that make up rocks and soils.
Magnets – for collecting magnetic dust particles.
Microscopic Imager (MI) – obtained close-up, high-resolution images of rocks and soils.
Rock Abrasion Tool (RAT) – exposed fresh material for examination by instruments on board.
Spirit was 'driven' by several operators throughout its mission.
Power
The rover used a combination of solar cells and rechargeable chemical batteries. This class of rover has two rechargeable lithium batteries, each composed of 8 cells with 8 amp-hour capacity. At the start of the mission the solar panels could provide up to around 900 watt-hours (Wh) per sol to recharge the batteries and power the systems, but this could vary due to a variety of factors. In Eagle crater, Opportunity's cells were producing about 840 Wh per sol, but by sol 319 in December 2004 this had dropped to 730 Wh per sol. Like Earth, Mars has seasonal variations that reduce sunlight during winter; however, since the Martian year is longer than Earth's, the seasons cycle fully roughly once every two Earth years. By 2016, MER-B had endured seven Martian winters, during which power levels drop, which can mean the rover must avoid activities that use a lot of power. During its first winter, power levels dropped to under 300 Wh per sol for two months, but some later winters were not as bad. Another factor that can reduce received power is dust in the atmosphere, especially dust storms, which have occurred quite frequently when Mars is closest to the Sun. Global dust storms in 2007 reduced power levels for Opportunity and Spirit so much that they could only run for a few minutes each day.
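As a rough illustration of these figures, the sketch below estimates how much energy the quoted battery configuration could store and the daily margin between solar input and load. The nominal 3.6 V cell voltage, the series wiring, and the 250 Wh baseline load are assumptions made for illustration only; the cell counts, amp-hour ratings, and the 900 Wh and 300 Wh solar figures come from the text above.

```python
# A back-of-the-envelope energy budget for an MER-class rover.
# Battery cell count and capacity are from the text; cell voltage,
# series wiring, and the daily load are illustrative assumptions.

CELL_VOLTAGE = 3.6       # V, assumed nominal Li-ion cell voltage
CELLS_PER_BATTERY = 8    # from the text
CAPACITY_AH = 8.0        # amp-hours per battery, from the text
NUM_BATTERIES = 2        # from the text

# Energy stored if the 8 cells are wired in series (an assumption):
battery_wh = CELL_VOLTAGE * CELLS_PER_BATTERY * CAPACITY_AH
total_storage_wh = NUM_BATTERIES * battery_wh
print(f"Assumed battery storage: {total_storage_wh:.0f} Wh")  # ~461 Wh

def sol_margin(solar_input_wh, load_wh):
    """Net energy per sol: a surplus recharges the batteries,
    a deficit drains them."""
    return solar_input_wh - load_wh

# Early-mission vs. deep-winter sols, with a hypothetical 250 Wh load:
for label, solar in [("early mission", 900.0), ("winter low", 300.0)]:
    print(f"{label}: {sol_margin(solar, 250.0):+.0f} Wh per sol")
```

Under these assumptions a deep-winter deficit of even a few tens of watt-hours per sol would drain the packs within days, which is consistent with the rover's strategy of parking on Sun-facing slopes and curtailing activity in winter.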
Due to the 2018 dust storms on Mars, Opportunity entered hibernation mode on June 12, but it remained silent after the storm subsided in early October.
Discoveries
The rocks on the plains of Gusev are a type of basalt. They contain the minerals olivine, pyroxene, plagioclase, and magnetite. They look like volcanic basalt, as they are fine-grained with irregular holes (geologists would say they have vesicles and vugs). Much of the soil on the plains came from the breakdown of the local rocks. Fairly high levels of nickel were found in some soils, probably from meteorites. Analysis shows that the rocks have been slightly altered by tiny amounts of water. Outside coatings and cracks inside the rocks suggest water-deposited minerals, maybe bromine compounds. All the rocks contain a fine coating of dust and one or more harder rinds of material. One type can be brushed off, while another needed to be ground off by the Rock Abrasion Tool (RAT). The dust in Gusev Crater is the same as dust all around the planet, and all of it was found to be magnetic. Moreover, Spirit found the magnetism was caused by the mineral magnetite, especially magnetite that contained the element titanium. One magnet was able to completely divert all the dust; hence all Martian dust is thought to be magnetic. The spectra of the dust were similar to the spectra of bright, low-thermal-inertia regions like Tharsis and Arabia that have been detected by orbiting satellites. A thin layer of dust, maybe less than one millimeter thick, covers all surfaces, and something in it contains a small amount of chemically bound water.
Astronomy
Spirit pointed its cameras towards the sky and observed a transit of the Sun by Mars' moon Deimos (see Transit of Deimos from Mars). It also took the first photo of Earth from the surface of another planet, in early March 2004. In late 2005, Spirit took advantage of a favorable energy situation to make multiple nighttime observations of both of Mars' moons, Phobos and Deimos. These observations included a "lunar" (or rather phobian) eclipse, as Spirit watched Phobos disappear into Mars' shadow. Some of Spirit's stargazing was designed to look for a predicted meteor shower caused by Halley's Comet, and although at least four imaged streaks were suspected meteors, they could not be unambiguously differentiated from those caused by cosmic rays. A transit of Mercury from Mars took place on January 12, 2005, from about 14:45 UTC to 23:05 UTC. Theoretically, this could have been observed by both Spirit and Opportunity; however, camera resolution did not permit seeing Mercury's 6.1" angular diameter. The rovers were able to observe transits of Deimos across the Sun, but at 2' angular diameter, Deimos is about 20 times larger than Mercury's 6.1" angular diameter. Ephemeris data generated by JPL Horizons indicated that Opportunity would have been able to observe the transit from the start until local sunset at about 19:23 UTC Earth time, while Spirit would have been able to observe it from local sunrise at about 19:38 UTC until the end of the transit.
Equipment wear and failures
Both rovers passed their original mission time of 90 sols many times over. The extended time on the surface, and therefore additional stress on components, resulted in some issues developing. On March 13, 2006 (sol ), the right front wheel ceased working after having covered on Mars. Engineers began driving the rover backwards, dragging the dead wheel.
Although this resulted in changes to driving techniques, the dragging effect became a useful tool, partially clearing away soil on the surface as the rover traveled, thus allowing areas to be imaged that would normally be inaccessible. However, in mid-December 2009, to the surprise of the engineers, the right front wheel showed slight movement in a wheel-test on sol 2113 and clearly rotated with normal resistance on three of four wheel-tests on sol 2117, but stalled on the fourth. On November 29, 2009 (sol ), the right rear wheel also stalled and remained inoperable for the remainder of the mission. Scientific instruments also experienced degradation as a result of exposure to the harsh Martian environment and use over a far longer period than had been anticipated by the mission planners. Over time, the diamond in the resin grinding surface of the Rock Abrasion Tool wore down; after that, the device could only be used to brush targets. All of the other science instruments and engineering cameras continued to function until contact was lost; however, towards the end of Spirit's life, the MIMOS II Mössbauer spectrometer took much longer to produce results than it did earlier in the mission because of the decay of its cobalt-57 gamma-ray source, which has a half-life of 271 days. Legacy and honors To commemorate Spirit's great contribution to the exploration of Mars, the asteroid 37452 Spirit has been named after it. The name was proposed by Ingrid van Houten-Groeneveld, who along with Cornelis Johannes van Houten and Tom Gehrels discovered the asteroid on September 24, 1960. To honor the rover, the JPL team named an area near Endeavour Crater explored by the Opportunity rover 'Spirit Point'. A documentary film, Good Night Oppy, about Opportunity, Spirit, and their long missions was directed by Ryan White and included support from JPL and Industrial Light & Magic. It was released in 2022. Gallery The rover could take pictures with its different cameras, but only the Pancam had the ability to photograph a scene with different color filters. The panorama views were usually built up from Pancam images. Spirit transferred 128,224 pictures in its lifetime.
Technology
Rovers
null
421051
https://en.wikipedia.org/wiki/Opportunity%20%28rover%29
Opportunity (rover)
Opportunity, also known as MER-B (Mars Exploration Rover – B) or MER-1, and nicknamed Oppy, is a robotic rover that was active on Mars from 2004 until 2018. Opportunity was operational on Mars for sols ( on Earth). Launched on July 7, 2003, as part of NASA's Mars Exploration Rover program, it landed in Meridiani Planum on January 25, 2004, three weeks after its twin, Spirit (MER-A), touched down on the other side of the planet. With a planned 90-sol duration of activity (slightly less than 92.5 Earth days), Spirit functioned until it got stuck in 2009 and ceased communications in 2010, while Opportunity was able to stay operational for sols after landing, maintaining its power and key systems through continual recharging of its batteries using solar power, and hibernating during events such as dust storms to save power. This careful operation allowed Opportunity to operate for 57 times its designed lifespan, exceeding the initial plan by (in Earth time). By June 10, 2018, when it last contacted NASA, the rover had traveled a distance of . Mission highlights included the initial 90-sol mission, finding meteorites such as Heat Shield Rock (Meridiani Planum meteorite), and over two years of exploring and studying Victoria crater. The rover survived moderate dust storms and in 2011 reached Endeavour crater, which has been considered as a "second landing site". The Opportunity mission is considered one of NASA's most successful ventures. Due to the planetary 2018 dust storm on Mars, Opportunity ceased communications on June 10 and entered hibernation on June 12, 2018. It was hoped it would reboot once the weather cleared, but it did not, suggesting either a catastrophic failure or that a layer of dust had covered its solar panels. NASA hoped to re-establish contact with the rover, citing a recurring windy period which was forecast for November 2018 to January 2019, that could potentially clean off its solar panels. On February 13, 2019, NASA officials declared that the Opportunity mission was complete, after the spacecraft had failed to respond to over 1,000 signals sent since August 2018. Objectives The scientific objectives of the Mars Exploration Rover mission were to: Search for and characterize a variety of rocks and soils that hold clues to past water activity. In particular, samples sought include those that have minerals deposited by water-related processes such as precipitation, evaporation, sedimentary cementation, or hydrothermal activity. Determine the distribution and composition of minerals, rocks, and soils surrounding the landing sites. Determine what geologic processes have shaped the local terrain and influenced the chemistry. Such processes could include water or wind erosion, sedimentation, hydrothermal mechanisms, volcanism, and cratering. Perform calibration and validation of surface observations made by Mars Reconnaissance Orbiter (MRO) instruments. This will help determine the accuracy and effectiveness of various instruments that survey Martian geology from orbit. Search for iron-containing minerals, and to identify and quantify relative amounts of specific mineral types that contain water or were formed in water, such as iron-bearing carbonates. Characterize the mineralogy and textures of rocks and soils to determine the processes that created them. Search for geological clues to the environmental conditions that existed when liquid water was present. Assess whether those environments were conducive to life. 
Mission timeline The Opportunity and Spirit rovers were part of the Mars Exploration Rover program within NASA's long-term Mars Exploration Program. The Mars Exploration Program's four principal goals were to determine whether the potential for life exists on Mars (in particular, whether recoverable water may be found on Mars), to characterize the Martian climate, to characterize its geology, and to prepare for a potential human mission to Mars. The Mars Exploration Rovers were to travel across the Martian surface and perform periodic geologic analyses to determine if water ever existed on Mars as well as the types of minerals available, and to corroborate data taken by the Mars Reconnaissance Orbiter (MRO). Both rovers were designed with an expected lifetime of 90 sols (about 92 Earth days), but each lasted much longer than expected. Spirit's mission lasted 20 times longer than its expected lifetime, and it was declared ended on May 25, 2011, after the rover got stuck in soft sand and expended its power reserves trying to free itself. Opportunity lasted 55 times longer than its 90-sol planned lifetime, operating for days from landing to mission end. An archive of weekly updates on the rover's status can be found at the Opportunity Update Archive. Launch and landing Spirit and Opportunity were launched a month apart, on June 10 and July 8, 2003, and both reached the Martian surface by January 2004. Opportunity's launch was managed by NASA's Launch Services Program. This was the first launch of the Delta II Heavy. The launch period ran from June 25 to July 15, 2003. The first launch attempt occurred on June 28, 2003, but the spacecraft launched nine days later, on July 7, 2003, due to delays for range safety and winds, and then to replace items on the rocket (insulation and a battery). Each day had two instantaneous launch opportunities. On the day of launch, the launch was delayed to the second opportunity (11:18 p.m. EDT) in order to fix a valve. On January 25, 2004 (GMT) (January 24, 2004, PST), the airbag-protected landing craft settled onto the surface of Mars in Eagle crater. From its initial landing into an impact crater amidst an otherwise generally flat plain, Opportunity successfully investigated regolith and rock samples and took panoramic photos of its landing site. Its sampling allowed NASA scientists to make hypotheses concerning the presence of hematite and the past presence of water on the surface of Mars. Following this, it was directed to travel across the surface of Mars to investigate another crater site, Endurance crater, which it explored from June to December 2004. Subsequently, Opportunity examined the impact site of its own heat shield and discovered an intact meteorite, now known as Heat Shield Rock, on the surface of Mars. Opportunity was directed to proceed in a southerly direction to Erebus crater, a large, shallow, partially buried crater and a stopover on the way south towards Victoria crater, between October 2005 and March 2006. It experienced some mechanical problems with its robotic arm. In late September 2006, Opportunity reached Victoria crater and explored along the rim in a clockwise direction. In June 2007 it returned to Duck Bay, its original arrival point at Victoria crater; in September 2007 it entered the crater to begin a detailed study. In August 2008, Opportunity left Victoria crater for Endeavour crater, which it reached on August 9, 2011. At the rim of Endeavour crater, the rover moved around a geographic feature named Cape York.
The Mars Reconnaissance Orbiter had detected phyllosilicates there, and the rover analyzed the rocks with its instruments to check this sighting on the ground. This structure was analyzed in depth until summer 2013. In May 2013 the rover was heading south to a hill named Solander Point. Opportunity's total odometry by June 10, 2018 (sol 5111), was , while the atmospheric opacity (tau) had reached 10.8. Since January 2013, the solar array dust factor (one of the determinants of solar power production) varied from a relatively dusty 0.467 on December 5, 2013 (sol 3507), to a relatively clean 0.964 on May 13, 2014 (sol 3662). In December 2014, NASA reported that Opportunity was suffering from "amnesia" events in which the rover failed to write data, e.g. telemetry information, to non-volatile memory. The hardware failure was believed to be due to an age-related fault in one of the rover's seven memory banks. As a result, NASA had aimed to force the rover's software to ignore the failed memory bank; amnesia events continued to occur, however, which eventually resulted in vehicle resets. In light of this, on sol 4027 (May 23, 2015), the rover was configured to operate in RAM-only mode, completely avoiding the use of non-volatile memory for storage. End of mission (Figure: atmospheric opacity (tau) and Opportunity's energy reserve over time.) In early June 2018, a large planetary-scale dust storm developed, and within a few days the rover's solar panels were not generating enough power to maintain communications, with the last contact on June 10, 2018. NASA stated that they did not expect to resume communication until after the storm subsided, but the rover kept silent even after the storm ended in early October, suggesting either a catastrophic failure or a layer of dust covering its solar panels. The team remained hopeful that a windy period between November 2018 and January 2019 might clear the dust from its solar panels, as had happened before. Wind was detected nearby on January 8, and on January 26 the mission team announced a plan to begin broadcasting a new set of commands to the rover in case its radio receiver had failed. On February 12, 2019, past and present members of the mission team gathered in the Jet Propulsion Laboratory (JPL)'s Space Flight Operations Facility to watch the final commands being transmitted to Opportunity via the dish of the Goldstone Deep Space Communications Complex in California. Following 25 minutes of transmission of the final four sets of commands, communication attempts with the rover were handed off to Canberra, Australia. More than 835 recovery commands were transmitted between the loss of signal in June 2018 and the end of January 2019, and over 1,000 had been transmitted by February 13, 2019. NASA officials held a press conference on February 13 to declare an official end to the mission. NASA associate administrator Thomas Zurbuchen said, "It is therefore that I am standing here with a deep sense of appreciation and gratitude that I declare the Opportunity mission is complete." As NASA ended its attempts to contact the rover, the last data sent was the song "I'll Be Seeing You" performed by Billie Holiday. Assets that had been needed to support Opportunity were transitioned to support the Curiosity rover and the then-upcoming Perseverance rover.
The final communication from the rover came on June 10, 2018 (sol 5111) from Perseverance Valley, and indicated a solar array energy production of 22 watt-hours for the sol, and the highest atmospheric opacity (tau) ever measured on Mars: 10.8. Design and construction Opportunity (and its twin, Spirit) are six-wheeled, solar-powered robots standing high, wide and long and weighing . Six wheels on a rocker-bogie system enabled mobility over rough terrain. Each wheel had its own motor. The vehicle was steered at front and rear and was designed to operate safely at tilts of up to 30 degrees. The maximum speed was , although the average speed was about . Both Spirit and Opportunity have pieces of the fallen World Trade Center's metal on them that were "turned into shields to protect cables on the drilling mechanisms". Solar arrays generated about 140 watts for up to fourteen hours per sol, while rechargeable lithium-ion batteries stored energy for use at night. Opportunity's onboard computer used a 20 MHz RAD6000 CPU with 128 MB of DRAM and 3 MB of EEPROM. The rover's operating temperature ranged from to , and radioisotope heaters provided a base level of heating, assisted by electrical heaters when necessary. Communications depended on an omnidirectional low-gain antenna communicating at a low data rate and a steerable high-gain antenna, both in direct contact with Earth. A low-gain antenna was also used to relay data to spacecraft orbiting Mars. Science payload The science instruments included: Panoramic Camera (Pancam) – examined the texture, color, mineralogy, and structure of the local terrain. Navigation Camera (Navcam) – monochrome with a wider field of view but lower resolution, for navigation and driving. Miniature Thermal Emission Spectrometer (Mini-TES) – identified promising rocks and soils for closer examination, and determined the processes that formed them. Hazcams, two B&W cameras with a 120-degree field of view, that provided additional data about the rover's surroundings. The rover arm held the following instruments: Mössbauer spectrometer (MB) MIMOS II – used for close-up investigations of the mineralogy of iron-bearing rocks and soils. Alpha particle X-ray spectrometer (APXS) – close-up analysis of the abundances of elements that make up rocks and soils. Magnets – for collecting magnetic dust particles. Microscopic Imager (MI) – obtained close-up, high-resolution images of rocks and soils. Rock Abrasion Tool (RAT) – exposed fresh material for examination by instruments on board. Opportunity was 'driven' by several operators throughout its mission, including JPL roboticist Vandi Verma. Power The rover used a combination of solar cells and a rechargeable chemical battery. This class of rover has two rechargeable lithium batteries, each composed of 8 cells with 8 amp-hour capacity. At the start of the mission the solar panels could provide up to around 900 watt-hours (Wh) of energy per sol to recharge the battery and power the rover's systems, but this could vary due to a variety of factors. In Eagle crater the cells were producing about 840 Wh per day, but by sol 319 in December 2004, output had dropped to 730 Wh per day. Like Earth, Mars has seasonal variations that reduce sunlight during winter. However, since the Martian year is longer than Earth's, the seasons complete a full cycle roughly once every two Earth years. By 2016, MER-B had endured seven Martian winters, during which power levels drop, which can mean the rover avoids activities that use a lot of power.
During its first winter, power levels dropped to under 300 Wh per day for two months, but some later winters were not as bad. Another factor that can reduce received power is dust in the atmosphere, especially dust storms. Dust storms have occurred quite frequently when Mars is closest to the Sun. Global dust storms in 2007 reduced power levels for Opportunity and Spirit so much that they could only run for a few minutes each day. Due to the 2018 dust storms on Mars, Opportunity entered hibernation mode on June 12, but it remained silent after the storm subsided in early October. Scientific findings Opportunity provided substantial evidence in support of the mission's primary scientific goals: to search for and characterize a wide range of rocks and regolith that hold clues to past water activity on Mars. In addition to investigating the water, Opportunity also obtained astronomical observations and atmospheric data. Legacy and honors Following its launch, Opportunity was anthropomorphized by its operators: the rover was called a "she," drawing from nautical tradition, and given an affectionate nickname, "Oppy." One scientist, who worked with Opportunity for over a decade, attributed this to the rover's unexpectedly long lifespan, which he called a story of "an underdog beating the odds," and its "familiar, almost biologically inspired shape." The media attention surrounding Opportunity's shutdown spread this usage to the general public. With word on February 12, 2019, that NASA was likely to conclude the Opportunity mission, many media outlets and commentators issued statements praising the mission's success and stating their goodbyes to the rover. One journalist, Jacob Margolis, tweeted his translation of the last data transmission sent by Opportunity on June 10, 2018, as "My battery is low and it's getting dark." The phrase struck a chord with the public, inspiring a period of mourning, artwork, and tributes to the memory of Opportunity. When the quote became widely reported, some news reports mistakenly asserted that the rover had sent that exact message in English, resulting in NASA being inundated with additional questions. Margolis wrote a clarifying article on February 16, making it clear that he had taken statements from NASA officials who were interpreting the data sent by Opportunity, both on the state of its low power and Mars's high atmospheric opacity, and rephrased them in a poetic manner, never intending to imply that the rover had sent those specific words. Honoring Opportunity's great contribution to the exploration of Mars, an asteroid was named after it: 39382 Opportunity. The name was proposed by Ingrid van Houten-Groeneveld who, along with Cornelis Johannes van Houten and Tom Gehrels, discovered the asteroid on September 24, 1960. Opportunity's lander is the Challenger Memorial Station. On March 24, 2015, NASA celebrated Opportunity having traveled the distance of a marathon race, . The rover covered the distance in 11 years and 2 months. The JPL technicians celebrated the occasion by running a race. A documentary film, Good Night Oppy, about Opportunity, Spirit, and their long missions, was directed by Ryan White, and included support from JPL and Industrial Light & Magic. It was released in 2022. Images The rover could take pictures with its different cameras, but only the Pancam had the ability to photograph a scene with different color filters. The panorama views were usually built up from Pancam images. By February 3, 2018, Opportunity had returned 224,642 pictures.
A selection of panoramas from the mission:
Technology
Rovers
null
421068
https://en.wikipedia.org/wiki/Prenex%20normal%20form
Prenex normal form
A formula of the predicate calculus is in prenex normal form (PNF) if it is written as a string of quantifiers and bound variables, called the prefix, followed by a quantifier-free part, called the matrix. Together with the normal forms in propositional logic (e.g. disjunctive normal form or conjunctive normal form), it provides a canonical normal form useful in automated theorem proving. Every formula in classical logic is logically equivalent to a formula in prenex normal form. For example, if φ(y), ψ(z), and ρ(x) are quantifier-free formulas with the free variables shown, then ∀x ∃y ∀z (φ(y) ∨ (ψ(z) ∧ ρ(x))) is in prenex normal form with matrix φ(y) ∨ (ψ(z) ∧ ρ(x)), while ∀x ((∃y φ(y)) ∨ ((∀z ψ(z)) ∧ ρ(x))) is logically equivalent but not in prenex normal form. Conversion to prenex form Every first-order formula is logically equivalent (in classical logic) to some formula in prenex normal form. There are several conversion rules that can be recursively applied to convert a formula to prenex normal form. The rules depend on which logical connectives appear in the formula. Conjunction and disjunction The rules for conjunction and disjunction say that (∀x φ) ∧ ψ is equivalent to ∀x (φ ∧ ψ) under the (mild) additional condition ∃x ⊤, or, equivalently, ¬∀x ⊥ (meaning that at least one individual exists), (∃x φ) ∧ ψ is equivalent to ∃x (φ ∧ ψ); and (∀x φ) ∨ ψ is equivalent to ∀x (φ ∨ ψ), (∃x φ) ∨ ψ is equivalent to ∃x (φ ∨ ψ) under the additional condition ∃x ⊤. The equivalences are valid when x does not appear as a free variable of ψ; if x does appear free in ψ, one can rename the bound x in ∃x φ and obtain the equivalent ∃x′ (φ[x/x′] ∧ ψ). For example, in the language of rings, (∃x (x² = 1)) ∧ (0 = y) is equivalent to ∃x (x² = 1 ∧ 0 = y), but (∃x (x² = 1)) ∧ (0 = x) is not equivalent to ∃x (x² = 1 ∧ 0 = x), because the formula on the left is true in any ring when the free variable x is equal to 0, while the formula on the right has no free variables and is false in any nontrivial ring. So (∃x (x² = 1)) ∧ (0 = x) will be first rewritten as (∃x′ (x′² = 1)) ∧ (0 = x) and then put in prenex normal form ∃x′ (x′² = 1 ∧ 0 = x). Negation The rules for negation say that ¬(∃x φ) is equivalent to ∀x ¬φ and ¬(∀x φ) is equivalent to ∃x ¬φ. Implication There are four rules for implication: two that remove quantifiers from the antecedent and two that remove quantifiers from the consequent. These rules can be derived by rewriting the implication φ → ψ as ¬φ ∨ ψ and applying the rules for disjunction and negation above. As with the rules for disjunction, these rules require that the variable quantified in one subformula does not appear free in the other subformula. The rules for removing quantifiers from the antecedent are (note the change of quantifiers): (∀x φ) → ψ is equivalent to ∃x (φ → ψ) (under the assumption that ∃x ⊤), (∃x φ) → ψ is equivalent to ∀x (φ → ψ). The rules for removing quantifiers from the consequent are: φ → (∃x ψ) is equivalent to ∃x (φ → ψ) (under the assumption that ∃x ⊤), φ → (∀x ψ) is equivalent to ∀x (φ → ψ). For example, when the range of quantification is the non-negative natural numbers (viz. n ∈ ℕ), the statement (∀n (x < n)) → (x < 0) is logically equivalent to the statement ∃n ((x < n) → (x < 0)). The former statement says that if x is less than any natural number, then x is less than zero. The latter statement says that there exists some natural number n such that if x is less than n, then x is less than zero. Both statements are true. The former statement is true because if x is less than any natural number, it must be less than the smallest natural number (zero). The latter statement is true because n = 0 makes the implication a tautology. Note that the placement of brackets implies the scope of the quantification, which is very important for the meaning of the formula. Consider the following two statements: ∀n ((x < n) → (x < 0)) and its logically equivalent statement (∃n (x < n)) → (x < 0). The former statement says that for any natural number n, if x is less than n then x is less than zero.
The latter statement says that if there exists some natural number n such that x is less than n, then x is less than zero. Both statements are false. The former statement doesn't hold for n = 2, because x = 1 is less than n, but not less than zero. The latter statement doesn't hold for x = 1, because the natural number n = 2 satisfies x < n, but x = 1 is not less than zero. Example Suppose that φ, ψ, and ρ are quantifier-free formulas and no two of these formulas share any free variable. Consider the formula (φ ∨ ∃x ψ) → ∀z ρ. By recursively applying the rules starting at the innermost subformulas, the following sequence of logically equivalent formulas can be obtained: (φ ∨ ∃x ψ) → ∀z ρ, (∃x (φ ∨ ψ)) → ∀z ρ, ¬(∃x (φ ∨ ψ)) ∨ ∀z ρ, (∀x ¬(φ ∨ ψ)) ∨ ∀z ρ, ∀x (¬(φ ∨ ψ) ∨ ∀z ρ), ∀x ((φ ∨ ψ) → ∀z ρ), ∀x ∀z ((φ ∨ ψ) → ρ). This is not the only prenex form equivalent to the original formula. For example, by dealing with the consequent before the antecedent in the example above, the prenex form ∀z ∀x ((φ ∨ ψ) → ρ) can be obtained: ∀z ((φ ∨ ∃x ψ) → ρ), ∀z ((∃x (φ ∨ ψ)) → ρ), ∀z ∀x ((φ ∨ ψ) → ρ). The ordering of the two universal quantifiers with the same scope doesn't change the meaning/truth value of the statement. Intuitionistic logic The rules for converting a formula to prenex form make heavy use of classical logic. In intuitionistic logic, it is not true that every formula is logically equivalent to a prenex formula. The negation connective is one obstacle, but not the only one. The implication operator is also treated differently in intuitionistic logic than in classical logic; in intuitionistic logic, it is not definable using disjunction and negation. The BHK interpretation illustrates why some formulas have no intuitionistically-equivalent prenex form. In this interpretation, a proof of ∀x (φ(x) → ∃y ψ(y)) (1) is a function which, given a concrete x and a proof of φ(x), produces a concrete y and a proof of ψ(y). In this case it is allowable for the value of y to be computed from the given value of x. A proof of ∃y ∀x (φ(x) → ψ(y)) (2), on the other hand, produces a single concrete value of y and a function that converts any proof of φ(x) into a proof of ψ(y). If each x satisfying φ can be used to construct a y satisfying ψ but no such y can be constructed without knowledge of such an x, then formula (1) will not be equivalent to formula (2). The rules for converting a formula to prenex form that do fail in intuitionistic logic are: (1) ∀x (φ ∨ ψ) implies (∀x φ) ∨ ψ, (2) ∀x (φ ∨ ψ) implies φ ∨ (∀x ψ), (3) (∀x φ) → ψ implies ∃x (φ → ψ), (4) φ → (∃x ψ) implies ∃x (φ → ψ), (5) ¬∀x φ implies ∃x ¬φ (x does not appear as a free variable of ψ in (1) and (3); x does not appear as a free variable of φ in (2) and (4)). Use of prenex form Some proof calculi will only deal with a theory whose formulae are written in prenex normal form. The concept is essential for developing the arithmetical hierarchy and the analytical hierarchy. Gödel's proof of his completeness theorem for first-order logic presupposes that all formulae have been recast in prenex normal form. Tarski's axioms for geometry form a logical system whose sentences can all be written in universal–existential form, a special case of the prenex normal form that has every universal quantifier preceding any existential quantifier, so that all sentences can be rewritten in the form ∀u ∀v … ∃a ∃b … φ, where φ is a sentence that does not contain any quantifier. This fact allowed Tarski to prove that Euclidean geometry is decidable.
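To see how the classical conversion rules drive a mechanical procedure, here is a minimal Python sketch (an illustration, not part of the article): formulas are nested tuples, and, as in the example above, all bound variables are assumed distinct and not free in sibling subformulas, so no renaming step is needed. Implication is handled, as described above, by rewriting it with negation and disjunction.

```python
# Toy prenex converter under the stated assumptions; atoms are plain strings.
def to_prenex(f):
    if isinstance(f, str):                       # atom: already quantifier-free
        return f
    op = f[0]
    if op in ("forall", "exists"):               # recurse under the quantifier
        return (op, f[1], to_prenex(f[2]))
    if op == "not":                              # push negation through quantifiers
        g = to_prenex(f[1])
        if isinstance(g, tuple) and g[0] in ("forall", "exists"):
            dual = "exists" if g[0] == "forall" else "forall"
            return (dual, g[1], to_prenex(("not", g[2])))
        return ("not", g)
    if op in ("and", "or"):                      # pull a leading quantifier outward
        a, b = to_prenex(f[1]), to_prenex(f[2])
        for q, other, left in ((a, b, True), (b, a, False)):
            if isinstance(q, tuple) and q[0] in ("forall", "exists"):
                inner = (op, q[2], other) if left else (op, other, q[2])
                return (q[0], q[1], to_prenex(inner))
        return (op, a, b)
    if op == "implies":                          # rewrite as (not A) or B
        return to_prenex(("or", ("not", f[1]), f[2]))
    raise ValueError(f"unknown connective {op}")

# The article's example: (phi or exists x psi) -> forall z rho
f = ("implies", ("or", "phi", ("exists", "x", "psi")), ("forall", "z", "rho"))
print(to_prenex(f))
```

Running it on the example prints ∀x ∀z (¬(φ ∨ ψ) ∨ ρ), which is the ¬/∨ rendering of the prenex form ∀x ∀z ((φ ∨ ψ) → ρ) derived above.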
Mathematics
Mathematical logic
null
421074
https://en.wikipedia.org/wiki/Skolem%20normal%20form
Skolem normal form
In mathematical logic, a formula of first-order logic is in Skolem normal form if it is in prenex normal form with only universal first-order quantifiers. Every first-order formula may be converted into Skolem normal form while not changing its satisfiability via a process called Skolemization (sometimes spelled Skolemnization). The resulting formula is not necessarily equivalent to the original one, but is equisatisfiable with it: it is satisfiable if and only if the original one is satisfiable. Reduction to Skolem normal form is a method for removing existential quantifiers from formal logic statements, often performed as the first step in an automated theorem prover. Examples The simplest form of Skolemization is for existentially quantified variables that are not inside the scope of a universal quantifier. These may be replaced simply by creating new constants. For example, ∃x P(x) may be changed to P(c), where c is a new constant (c does not occur anywhere else in the formula). More generally, Skolemization is performed by replacing every existentially quantified variable y with a term f(x₁, …, xₙ) whose function symbol f is new. The variables x₁, …, xₙ of this term are as follows. If the formula is in prenex normal form, then x₁, …, xₙ are the variables that are universally quantified and whose quantifiers precede that of y. In general, they are the variables that are quantified universally (we assume we get rid of existential quantifiers in order, so all existential quantifiers before ∃y have been removed) and such that ∃y occurs in the scope of their quantifiers. The function f introduced in this process is called a Skolem function (or Skolem constant if it is of zero arity) and the term f(x₁, …, xₙ) is called a Skolem term. As an example, the formula ∀x ∃y ∀z P(x, y, z) is not in Skolem normal form because it contains the existential quantifier ∃y. Skolemization replaces y with f(x), where f is a new function symbol, and removes the quantification over y. The resulting formula is ∀x ∀z P(x, f(x), z). The Skolem term f(x) contains x, but not z, because the quantifier to be removed, ∃y, is in the scope of ∀x, but not in that of ∀z; since this formula is in prenex normal form, this is equivalent to saying that, in the list of quantifiers, x precedes y while z does not. The formula obtained by this transformation is satisfiable if and only if the original formula is. How Skolemization works Skolemization works by applying a second-order equivalence together with the definition of first-order satisfiability. The equivalence provides a way for "moving" an existential quantifier before a universal one: ∀x ∃y R(x, y) is equivalent to ∃f ∀x R(x, f(x)), where f is a function that maps x to y. Intuitively, the sentence "for every x there exists a y such that R(x, y)" is converted into the equivalent form "there exists a function f mapping every x into a y such that, for every x, it holds that R(x, f(x))". This equivalence is useful because the definition of first-order satisfiability implicitly existentially quantifies over functions interpreting the function symbols. In particular, a first-order formula is satisfiable if there exists a model and an evaluation of the free variables of the formula that evaluate the formula to true. The model contains the interpretation of all function symbols; therefore, Skolem functions are implicitly existentially quantified. In the example above, ∀x ∃y R(x, y) is satisfiable if and only if there exists a model M, which contains an interpretation for R, such that ∀x ∃y R(x, y) is true for some evaluation of its free variables (none in this case). This may be expressed in second order as ∃f ∀x R(x, f(x)). By the above equivalence, this is the same as the satisfiability of ∀x R(x, f(x)).
At the meta-level, first-order satisfiability of a formula Φ may be written with a little abuse of notation as ∃M ∃μ (M, μ ⊨ Φ), where M is a model, μ is an evaluation of the free variables, and ⊨ means that Φ is true in M under μ. Since first-order models contain the interpretation of all function symbols, any Skolem function that Φ contains is implicitly existentially quantified by ∃M. As a result, after replacing existential quantifiers over variables by existential quantifiers over functions at the front of the formula, the formula still may be treated as a first-order one by removing these existential quantifiers over functions. This final step of treating ∃f ∀x R(x, f(x)) as ∀x R(x, f(x)) may be completed because functions are implicitly existentially quantified by ∃M in the definition of first-order satisfiability. Correctness of Skolemization may be shown on the example formula F₁ = ∀x₁ … ∀xₙ ∃y R(x₁, …, xₙ, y) as follows. This formula is satisfied by a model M if and only if, for each possible value for x₁, …, xₙ in the domain of the model, there exists a value for y in the domain of the model that makes R(x₁, …, xₙ, y) true. By the axiom of choice, there exists a function f such that y = f(x₁, …, xₙ). As a result, the formula F₂ = ∀x₁ … ∀xₙ R(x₁, …, xₙ, f(x₁, …, xₙ)) is satisfiable, because it has the model obtained by adding the interpretation of f to M. This shows that F₁ is satisfiable only if F₂ is satisfiable as well. Conversely, if F₂ is satisfiable, then there exists a model M′ that satisfies it; this model includes an interpretation for the function f such that, for every value of x₁, …, xₙ, the formula R(x₁, …, xₙ, f(x₁, …, xₙ)) holds. As a result, F₁ is satisfied by the same model because one may choose, for every value of x₁, …, xₙ, the value y = f(x₁, …, xₙ), where f is evaluated according to M′. Uses of Skolemization One of the uses of Skolemization is within automated theorem proving. For example, in the method of analytic tableaux, whenever a formula whose leading quantifier is existential occurs, the formula obtained by removing that quantifier via Skolemization may be generated. For example, if ∃x Φ(x, y₁, …, yₙ) occurs in a tableau, where y₁, …, yₙ are the free variables of Φ, then Φ(f(y₁, …, yₙ), y₁, …, yₙ) may be added to the same branch of the tableau for a new function symbol f. This addition does not alter the satisfiability of the tableau: every model of the old formula may be extended, by adding a suitable interpretation of f, to a model of the new formula. This form of Skolemization is an improvement over "classical" Skolemization in that only variables that are free in the formula are placed in the Skolem term. This is an improvement because the semantics of tableaux may implicitly place the formula in the scope of some universally quantified variables that are not in the formula itself; these variables are not in the Skolem term, while they would be there according to the original definition of Skolemization. Another improvement that may be used is applying the same Skolem function symbol for formulae that are identical up to variable renaming. Another use is in the resolution method for first-order logic, where formulas are represented as sets of clauses understood to be universally quantified. (For an example see drinker paradox.) An important result in model theory is the Löwenheim–Skolem theorem, which can be proven via Skolemizing the theory and closing under the resulting Skolem functions. Skolem theories In general, if T is a theory and for each formula F with free variables there is an n-ary function symbol f that is provably a Skolem function for F, then T is called a Skolem theory. Every Skolem theory is model complete, i.e. every substructure of a model is an elementary substructure. Given a model M of a Skolem theory T, the smallest substructure of M containing a certain set A is called the Skolem hull of A.
The Skolem hull of A is an atomic prime model over A. History Skolem normal form is named after the late Norwegian mathematician Thoralf Skolem.
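To make the prefix-walking procedure described above concrete, here is a minimal Python sketch (an illustration under simplifying assumptions, not from the article): the input is already in prenex normal form, the matrix is manipulated as a plain string, and the generated symbols sk0, sk1, … are hypothetical names for the fresh Skolem functions.

```python
# Minimal Skolemization sketch for a formula already in prenex normal form.
import re

def skolemize(prefix, matrix):
    """Replace each existential variable with a Skolem term built from
    the universal variables whose quantifiers precede it."""
    universals = []   # universally quantified variables seen so far
    new_prefix = []
    counter = 0
    for quant, var in prefix:
        if quant == "forall":
            universals.append(var)
            new_prefix.append((quant, var))
        else:  # "exists": drop the quantifier and substitute a Skolem term
            if universals:
                term = f"sk{counter}({', '.join(universals)})"
            else:
                term = f"sk{counter}"  # Skolem constant: no preceding universals
            counter += 1
            matrix = re.sub(rf"\b{re.escape(var)}\b", term, matrix)
    return new_prefix, matrix

# The article's example: forall x exists y forall z . P(x, y, z)
prefix, matrix = skolemize(
    [("forall", "x"), ("exists", "y"), ("forall", "z")], "P(x, y, z)")
print(prefix, matrix)   # [('forall', 'x'), ('forall', 'z')] P(x, sk0(x), z)
```

On the article's example ∀x ∃y ∀z P(x, y, z) this produces the prefix ∀x ∀z and the matrix P(x, sk0(x), z), matching the hand-derived result above.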
Mathematics
Mathematical logic
null
421366
https://en.wikipedia.org/wiki/Mars%20rover
Mars rover
A Mars rover is a remote-controlled motor vehicle designed to travel on the surface of Mars. Rovers have several advantages over stationary landers: they examine more territory, they can be directed to interesting features, they can place themselves in sunny positions to weather winter months, and they can advance the knowledge of how to perform very remote robotic vehicle control. They serve a different purpose than orbital spacecraft like the Mars Reconnaissance Orbiter. A more recent development is the Mars helicopter. There have been six successful robotically operated Mars rovers; the first five, managed by the American NASA Jet Propulsion Laboratory, were (by date of Mars landing): Sojourner (1997), Spirit (2004–2010), Opportunity (2004–2018), Curiosity (2012–present), and Perseverance (2021–present). The sixth, managed by the China National Space Administration, is Zhurong (2021–2022). On January 24, 2016, NASA reported that then-current studies on Mars by Opportunity and Curiosity would be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on Mars is now a primary NASA objective. The Soviet probes Mars 2 and Mars 3 carried physically tethered rovers; Sojourner was dependent on the Mars Pathfinder base station for communication with Earth; Opportunity, Spirit and Curiosity were on their own. As of November 2023, Curiosity is still active, while Spirit, Opportunity, and Sojourner completed their missions before losing contact. On February 18, 2021, Perseverance, the newest American Mars rover, successfully landed. On May 14, 2021, China's Zhurong became the first non-American rover to successfully operate on Mars. Missions Multiple rovers have been dispatched to Mars: Active Curiosity, of NASA's Mars Science Laboratory (MSL) mission, was launched on November 26, 2011, and landed at the Aeolis Palus plain near Aeolis Mons (informally "Mount Sharp") in Gale Crater on August 6, 2012. The Curiosity rover is still operational as of 2025. Perseverance is NASA's rover based on the successful Curiosity design. Launched with the Mars 2020 mission on July 30, 2020, it landed on February 18, 2021. It carried the Mars helicopter Ingenuity attached to its belly. Although Ingenuity's mission has ended, Perseverance remains operational as of March 2024. Past Sojourner rover, Mars Pathfinder, landed successfully on July 4, 1997. Communications were lost on September 27, 1997. Sojourner had traveled a distance of just over . Spirit (MER-A), Mars Exploration Rover (MER), launched on June 10, 2003, and landed on January 4, 2004. Nearly six years after the original mission limit, Spirit had covered a total distance of , but its wheels became trapped in sand. The last communication received from the rover was on March 22, 2010, and NASA ceased attempts to re-establish communication on May 25, 2011. Opportunity (MER-B), Mars Exploration Rover, launched on July 7, 2003 and landed on January 25, 2004. Opportunity surpassed the previous records for longevity at 5,352 sols (5,498 Earth days from landing to mission end; 15 Earth years or 8 Martian years) and covered . The rover sent its last status on June 10, 2018, when a global 2018 Mars dust storm blocked the sunlight needed to recharge its batteries.
After hundreds of attempts to reactivate the rover, NASA declared the mission complete on February 13, 2019. Zhurong launched with the Tianwen-1 CNSA Mars mission on July 23, 2020, landed on May 14, 2021, in the southern region of Utopia Planitia, and deployed on May 22, 2021, dropping a remote selfie camera on June 1, 2021. Designed for a lifespan of 90 sols (93 Earth days), Zhurong had been active for 347 sols (356.5 days) since its deployment and traveled on Mars's surface for . On May 20, 2022, the rover was deactivated ahead of approaching sandstorms and the Martian winter, but a larger-than-expected build-up of dust covering its solar panels prevented it from reactivating itself. On April 25, 2023, mission designer Zhang Rongqiao announced that the build-up of dust since the last deactivation was greater than planned, indicating the rover could remain inactive "forever". Failed Mars 2, PrOP-M rover, 1971 – the Mars 2 landing failed, destroying the PrOP-M rover with it. The Mars 2 and 3 spacecraft from the Soviet Union carried identical PrOP-M rovers. They were to move on skis while connected to the landers with cables. Mars 3, PrOP-M rover – landed successfully on December 2, 1971, with the rover tethered to the Mars 3 lander. It was lost when the Mars 3 lander stopped communicating about 110 seconds after landing. The loss of communication may have been due to the extremely powerful Martian dust storm taking place at the time, or an issue with the Mars 3 orbiter's ability to relay communications. Planned ESA's ExoMars rover Rosalind Franklin was confirmed technically ready for launch in March 2022 and was planned to launch in September 2022, but due to the suspension of cooperation with Roscosmos this has been delayed until at least 2028. A fast-track study was started to determine alternative launch options. The Russian Moscow Aviation Institute and the Indian IIT are jointly developing a fixed-wing Mars UAV which is scheduled for launch in late 2025. Proposed The JAXA Melos rover was supposed to be launched in 2022. JAXA has not given an update since 2015. NASA Mars Geyser Hopper ISRO has proposed a Mars rover as part of the Mars Lander Mission, its second Mars mission, in 2030. Mars Tumbleweed Rover, a spherical wind-propelled rover. The concept was first investigated by NASA in the early 2000s. Since 2017, Team Tumbleweed has been developing a series of Tumbleweed Rovers. The research organization aims to land a swarm of 90 Tumbleweed rovers on the Martian surface by 2034. Undeveloped Marsokhod was proposed to be a part of the Russian Mars 96 mission. Astrobiology Field Laboratory, proposed in the 2000–2010 period as a follow-on to MSL.
Mars Astrobiology Explorer-Cacher (MAX-C), cancelled 2011 Mars Surveyor 2001 rover Cushion-air rovers Timeline of rover surface operations Examples of instruments Examples of instruments onboard landed rovers include: Alpha particle X-ray spectrometer (MPF + MER + MSL) CheMin (MSL) Chemistry and Camera complex (MSL) Dynamic Albedo of Neutrons (MSL) Hazcam (MER + MSL + M20) MarsDial (MER + MSL + M20) Materials Adherence Experiment (MPF) MIMOS II (MER) Mini-TES (MER) Mars Hand Lens Imager (MSL) Navcam (MER + MSL + M20+TW1) Pancam (MER) Rock Abrasion Tool (MER) Radiation assessment detector (MSL) Rover Environmental Monitoring Station (MSL) Sample Analysis at Mars (MSL) EDL cameras on Rover (MSL + M20+TW1) Cachecam (M20) Mastcam-Z (M20) MEDA (M20) Microphones (M20+TW1) MOXIE (M20) PIXL (M20) RIMFAX (M20) SHERLOC (M20) SuperCam (M20) Remote Camera (TW1) Mars landing locations NASA Mars rover goals Circa the 2010s, NASA had established certain goals for the rover program. NASA distinguishes between "mission" objectives and "science" objectives. Mission objectives are related to progress in space technology and development processes. Science objectives are met by the instruments during their mission in space. The science instruments are chosen and designed based on the science objectives and goals. The primary goal of the Spirit and Opportunity rovers was to investigate "the history of water on Mars". The four science goals of NASA's long-term Mars Exploration Program are: Determine whether life ever arose on Mars Characterize the climate of Mars Characterize the geology of Mars Prepare for human exploration of Mars Gallery
Technology
Rovers
null
421517
https://en.wikipedia.org/wiki/Dobsonian%20telescope
Dobsonian telescope
A Dobsonian telescope is an altazimuth-mounted Newtonian telescope design popularized by John Dobson in 1965 and credited with vastly increasing the size of telescopes available to amateur astronomers. Dobson's telescopes featured a simplified mechanical design that was easy to manufacture from readily available components to create a large, portable, low-cost telescope. The design is optimized for observing faint deep-sky objects such as nebulae and galaxies. This type of observation requires a large objective diameter (i.e. light-gathering power) of relatively short focal length and portability for travel to less light-polluted locations. Dobsonians are intended to be what is commonly called a "light bucket". Because they operate at low magnification, the design omits features found in other amateur telescopes such as equatorial tracking. Dobsonians are popular in the amateur telescope making community, where the design was pioneered and continues to evolve. A number of commercial telescope makers also sell telescopes based on this design. The term Dobsonian is currently used for a range of large-aperture Newtonian reflectors that use some of the basic Dobsonian design characteristics, regardless of the materials from which they are constructed. Origin and design It is hard to classify the Dobsonian telescope as a single invention. In the field of amateur telescope making most, if not all, of its design features had been used before. John Dobson, credited as having invented this design in 1965, pointed out that "for hundreds of years, wars were fought using cannon on 'Dobsonian' mounts". Dobson identified the characteristic features of the design as lightweight objective mirrors made from porthole glass, and mountings constructed from plywood, Teflon strips and other low-cost materials. Since he built these telescopes as aids in his avocation of instructional sidewalk astronomy, he preferred to call the design a "sidewalk telescope". Dobson combined all these innovations in a design focused towards one goal: building a very large, inexpensive, easy to use, portable telescope, one that could bring deep-sky astronomy to the masses. Dobson's design innovations Dobson's design allows a builder with minimal skills to make a very large telescope out of common items. Dobson optimized the design for observation of faint objects such as star clusters, nebulae, and galaxies (what amateur astronomers call deep sky objects). These dim objects require a large objective mirror able to gather a large amount of light. Because "deep sky" observing often requires travel to dark locations away from city lights, the design benefits from being more compact, portable, and rugged than standard large Newtonian telescopes of times past, which typically utilized massive German equatorial mounts. John Dobson's telescopes combined several innovations to meet these criteria, including: Nontraditional alt-azimuth mount: Instead of a standard mount using axial bearings, Dobson opted for a very stable design that was simple to build, and which had fewer mechanical limitations when used with large and heavy telescopes. He modified the classical fork mount into a free-standing three-piece construction, which holds the telescope steady on seven discrete support points and allows for easy and safe repositioning of a large and heavy telescope.
The classical Dobsonian mount (refer to Fig.1) consists of a flat horizontal "ground board" platform (Fig.1, black) on top of which are attached three of the seven supports (Fig.1, bottom yellow). Upon these three supports rests a box construction called a "rocker box" (Fig.1, dark blue). A loose center-bolt (Fig.1, dark green) keeps the rocker box centered and allows it to pivot above the ground board. On opposing sides of the rocker box, semicircular depressions are cut out from the top edge of each wall (the rocker box is open on the top and on the back). Each depression has a widely spaced pair of supports installed in the cut (Fig.1, top yellow). The telescope optical tube assembly (OTA, Fig.1, light blue) has two large round trunnions (or arc-shaped rails for larger telescopes) secured on the left and right sides (Fig.1, red). Their common axis intersects the center of gravity of the telescope OTA. The trunnions (commonly known as altitude bearings) rest atop the aforementioned four supports in the top cutouts of the rocker box. To raise the telescope (altitude), just lift the tube and the trunnions will slide over the four supports. To move the telescope left or right (azimuth), push or pull the top rim of the OTA (some have a dedicated handle) so that the pivoting rocker box slides over the ground board's three supports. Classical Dobsonian mount parts are typically made from plywood and other cheap materials which are glued, screwed, or even nailed together. In contrast to other telescope mount types, no precision-machined mechanical parts are required. For smooth sliding motions, small Teflon (PTFE) blocks are used for the seven supports. Their surface sizes can be precisely calculated for the particular OTA weight. To improve the smoothness and steady position-holding, the bottom of the rocker box is typically covered with micro-textured Formica. The altitude trunnions often have a large diameter, and can also be covered with textured material. For larger telescopes, semicircular wood pieces or arc-shaped rails can be used instead of round trunnions. The use of Teflon over textured material combines with gravity-produced wedging forces to create a unique smooth action, transitioning from rock steady to smooth motion and back. Thus a clamp mechanism is not needed to prevent unintentional motion of the telescope, unlike most other telescope mounts. The steadiness of the classical Dobsonian is unparalleled, as the telescope is actually not rotating on two axes as other mounts but instead statically standing on seven solid blocks (until pushed to a new position). Only the Ball-Scope mount invented later can rival the steady smoothness of a Dobsonian. Thin mirrors: Instead of costly Pyrex mirror blanks with the standard 1:6 thickness ratios (1 cm thick for every 6 cm in diameter) so they don't flex and sag out of shape under their own weight, Dobson used mirrors made out of glass from surplus ship porthole covers usually with 1:16 thickness ratios. Since the telescope design has an alt-azimuth mount, the mirror only has to be supported in a simple cell with a backing of indoor/outdoor carpet to evenly support the weight of the much thinner mirror. Construction tubes: Dobson replaced the traditional aluminum or fiberglass telescope tube with the thick compressed paper tubes used in construction to pour concrete columns. "Sonotubes", the leading brand employed by Dobson, are less expensive than commercially available telescope tubes and are available in a wide variety of sizes. 
For protection against moisture, the tubes were usually painted or coated with plastic. Sonotubes are claimed to be more rugged than aluminum or fiberglass tubes, which can dent or shatter from impacts during transport. They have the added advantage of being thermally stable and non-conductive, which minimizes unwanted convection currents in the light path caused by handling of the tube assembly. A square "mirrorbox": Dobson often used a plywood box for the tube base and mirror housing, into which the Sonotube was inserted. This gave a rigid flat surface to attach the mirror supports, and made it easy to attach the trunnions. The design of Dobsonian telescopes has evolved over the years, but most commercial or amateur-built "Dobsonian" telescopes follow many or most of the design concepts and features listed above. Characteristics The Dobsonian design has the following characteristics: Altazimuth mount: An equatorial telescope mount with clock drive was left out of the design. Equatorial mounts tend to be massive (less portable), expensive, complicated, and have the characteristic of putting the eyepiece of Newtonian telescopes in very hard to access positions. Altazimuth mounts cut the size, weight and cost of the total telescope and keep the eyepiece in a relatively easy to access position on the side of the telescope. The altazimuth mount design used in Dobsonian designs also adds to simplicity and portability; there is no added mass or need to transport counterweights, drive components, or tripods/pedestals. Setting up a hard-tube Dobsonian simply involves placing the mount on the ground and setting the tube on top of it. The weight of the Dobsonian-style altazimuth mount is distributed over large simple bearing surfaces so the telescope can move smoothly under finger pressure with minimal backlash. The altazimuth mount does have its own limitations. Un-driven altazimuth-mounted telescopes need to be "nudged" every few minutes along both axes to compensate for the rotation of the Earth to keep an object in view (as opposed to one axis for un-driven equatorial mounts), an exercise that becomes more difficult with higher magnifications. The altazimuth mount does not allow the use of conventional setting circles to help in aiming the telescope at the coordinates of known objects. Such telescopes are also known for being difficult to point at objects near the zenith, mainly because a large movement of the azimuth axis is needed to move the telescope pointing by even a small amount. Altazimuth mounts are also not well suited for astrophotography. Large objective diameter compared to mass/cost Low mass to objective size ratio: The Dobsonian design's structure as measured in volume and weight is relatively minimal for any given objective diameter when compared to other designs. Low cost to objective size ratio: From a cost perspective, a user typically gets more objective diameter per unit of cost with the Dobsonian design. Good "deep sky" telescope: The Dobsonian design of maximized objective diameter combined with portability makes the design ideal for observing dim star clusters, nebulae, and galaxies (deep sky objects), an activity that requires large objectives and travel to dark sky locations. Since these objects are relatively large, they are observed at low magnifications that do not require a clock-driven mount.
Balance Issues: Designs that have the telescope tube fixed in relationship to its altitude bearings can be put out of balance by the addition or subtraction of equipment such as cameras, finderscopes or even unusually heavy eyepieces. Most Dobsonian telescopes have enough friction in the bearings to resist a moderate amount of imbalance; however, this friction can also make it difficult to position the telescope accurately. To correct such imbalance, counterweights are sometimes hooked or bolted onto the back of the mirror box. Derivative designs From its inception, telescope makers have been modifying the Dobsonian design to fit their needs. The original design fit the needs and available supplies of one person—John Dobson. Other people devised variants that fit their own needs, abilities, and access to parts. This has led to significant diversity in "Dobsonian" design. Collapsible tube assemblies "Classic" design tube assemblies would require a large van for transport. Designers started coming up with disassembleable or collapsible variants that could be brought to the site with a small SUV, hatchback, or even a sedan. This innovation allowed the amateur astronomy community access to even larger apertures. The truss tube Many designs have combined the advantages of a light truss tube and a collapsible design. Collapsible "truss tube" Dobsonians appeared in the amateur telescope making community as early as 1982 and allow the optical tube assembly, the largest component, to be broken down. As the name implies, the "tube" of this design is actually composed of an upper cage assembly, which contains the secondary mirror, and focuser, held in place by several rigid poles over a mirror box which contains the objective mirror. The poles are held in place by quick-disconnecting clamps which allow the entire telescope to be easily broken down into its smaller components, facilitating their transport by vehicle or other means to an observing site. These truss tube designs are sometimes incorrectly called a Serrurier truss, but since the main truss is not built with an opposing mirror cell truss it only performs one function of that design, i.e. keeping the optics parallel. Modifications to the altazimuth mount (rocker box) The main attribute of a Dobsonian's mount is that it resembles a "gun carriage" configuration with a "rocker box" consisting of a horizontal trunnion style altitude axis and a broadly supported azimuth axis, both making use of material such as plastic, Formica, and Teflon to achieve smooth operation. Many derivative mount designs have kept this basic form while heavily modifying the materials and configuration. Compact "rocker box" mounts Many designs have increased portability by shrinking the altazimuth (rocker box) mount down to a small rotating platform. The altitude trunnion style bearing in these designs becomes a large radius roughly equal to or greater than the radius of the objective mirror, attached to or integrated into the tube assembly which lowers the overall profile of the mount. The advantage of this is that it reduces the total telescope weight, and the telescope's balance becomes less sensitive to changes in the weight loading of telescope tube from the use of heavier eyepieces or the addition of cameras etc. 
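As a concrete illustration of the balance issue just described, here is a simple statics sketch (an assumption for illustration, not from the original text): a counterweight hooked onto the mirror box rebalances the tube when its torque about the altitude bearings matches that of the added equipment.

```python
# Torque balance about the altitude bearing axis: m_cw * d_cw = m_eq * d_eq.
def counterweight_mass(m_equipment_kg, d_equipment_m, d_counterweight_m):
    """Mass to attach at the mirror-box end, given lever arms measured
    from the altitude bearing axis."""
    return m_equipment_kg * d_equipment_m / d_counterweight_m

# e.g. a 0.9 kg camera 0.8 m above the bearings, counterweight hung 0.5 m below
print(f"{counterweight_mass(0.9, 0.8, 0.5):.2f} kg")   # 1.44 kg
```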
Overcoming the limitations of the altazimuth mount Since the late 1990s many innovations in mount design and electronics by amateur telescope makers and commercial manufacturers have allowed users to overcome some of the limitations of the Dobsonian-style altazimuth mount. Digital setting circles: The invention of microprocessor-based digital setting circles has allowed any altazimuth-mounted telescope to be fitted or retrofitted with the ability to accurately display the coordinates of the telescope direction. These systems not only give the user a digital read-out for right ascension (RA) and declination (dec.), they also interface with digital devices such as laptop computers, tablet computers, and smartphones using live ephemeris-calculating/charting planetarium software to give a current graphical representation of where the telescope is pointing, allowing the user to quickly find an object. Equatorial platform: The use of equatorial platforms (such as the Poncet Platform) fitted under the altazimuth mount has given users limited equatorial tracking for visual and astrophotographic work. Such platforms can incorporate a clock drive for ease of tracking, and with careful polar alignment sub-arcsecond precision CCD imaging is entirely possible. Roeser Observatory, Luxembourg (MPC observatory code 163) has contributed hundreds of astrometric measurements to the Minor Planet Center using a home-built 20" Dobsonian on an equatorial platform. Commercial adaptations The original intent of the Dobsonian design was to provide an affordable, simple, and rugged large-aperture instrument at low cost. These same attributes facilitate their mass production. One of the first companies to offer Dobsonian telescopes commercially was the now-defunct Coulter Optical (now part of Murnaghan Instruments). In the 1980s, they helped popularize the design with "Odyssey" models of various sizes, with tubes made of Sonotube and following Dobson's original concept of simplicity. By the 1990s, Meade Instruments, Orion Telescopes and other manufacturers began to introduce upgraded Dobsonian models. These imported mass-produced scopes included such niceties as metal tubes and more refined hardware, and are still very affordable. Since the 1990s, manufactured Dobsonians using the truss tube design have become increasingly popular. The first commercial truss Dobsonian was released into the market by Obsession Telescopes in 1989. Later American manufacturers included StarStructure, Webster Telescopes, AstroSystems, Teeter's Telescopes, Hubble Optics, Waite Research, and New Moon Telescopes. These low-volume builders offer premium objective mirrors, high-end materials and custom craftsmanship, as well as optional computer-controlled GoTo systems. Some also produce "ultra-light" models that offer greater portability. In the 21st century, truss Dobsonian models are also mass-produced by Meade, Orion, Explore Scientific and others. Mostly manufactured in China, they offer good quality and value while being considerably less expensive than the premium scopes described above. In 2017, Sky-Watcher introduced its line of large Stargate models. Solid-tube commercial Dobsonians typically have a maximum aperture of 12 inches (305 mm) due to the size of the tube. Truss Dobsonians of 12 to 18 inches (305 to 457 mm) are the most popular sizes, as they offer substantial aperture yet can still be easily set up by one person. Several manufacturers offer models of 24 inch (610 mm) aperture and greater.
Truss Dobsonians are the largest telescopes commercially available today. A massive 36 inch (914 mm) aperture Hybrid model from New Moon Telescopes was displayed at the 2018 Northeast Astronomy Forum. In 2019, a huge 50 inch (1270 mm) aperture folded Newtonian from the Canadian manufacturer Optiques Fullum was installed in New Jersey. The Dobsonian's effect on amateur astronomy The Dobsonian design is considered revolutionary due to the sheer size of telescopes it made available to amateur astronomers. The inherent simplicity and large aperture of the design began to attract interest through the 1970s, since it offered the advantage of inexpensive large instruments that could be carried to dark sky locations and star parties in the back of a small car and set up in minutes. The result has been a proliferation of larger telescopes which would have been expensive to build or buy, and unwieldy to operate, using "traditional" construction methods. Whereas an 8 inch Newtonian telescope would have been considered large in the 1970s, today 16 inch systems are common, and 32 inch systems are no longer rare. In combination with other improvements in observing equipment, such as narrow-pass optical filters, improved eyepieces, and digital visible and infrared photography, the large apertures of the Dobsonian have dramatically increased the number of objects observed as well as the amount of detail in each object observed. Whereas the amateur astronomer of the 1970s and 1980s typically did not explore much beyond the Messier and brighter NGC objects, thanks in part to Dobsonians, modern amateur astronomers routinely observe dim objects listed in obscure catalogues, such as the IC, Abell, Kohoutek, Minkowski, and others, once considered reference works only for professional astronomers. When a Dobsonian is mounted on an equatorial platform, the difficulties of using it for short-exposure (≲ 1 hr) astrophotography are obviated. This has opened up the field of high-precision asteroid astrometry (and discovery) to the amateur wishing to contribute minor planet positions to the Minor Planet Center. It also makes possible searches for new, faint objects such as novae and supernovae in local galaxies, and comets (for reports to the Central Bureau for Astronomical Telegrams).
Technology
Telescope
null
421940
https://en.wikipedia.org/wiki/Bragg%27s%20law
Bragg's law
In many areas of science, Bragg's law, Wulff–Bragg's condition, or Laue–Bragg interference is a special case of Laue diffraction, giving the angles for coherent scattering of waves from a large crystal lattice. It describes how the superposition of wave fronts scattered by lattice planes leads to a strict relation between the wavelength and scattering angle. This law was initially formulated for X-rays, but it also applies to all types of matter waves including neutron and electron waves if there are a large number of atoms, as well as visible light with artificial periodic microscale lattices. History Bragg diffraction (also referred to as the Bragg formulation of X-ray diffraction) was first proposed by Lawrence Bragg and his father, William Henry Bragg, in 1913 after their discovery that crystalline solids produced surprising patterns of reflected X-rays (in contrast to those produced with, for instance, a liquid). They found that these crystals, at certain specific wavelengths and incident angles, produced intense peaks of reflected radiation. Lawrence Bragg explained this result by modeling the crystal as a set of discrete parallel planes separated by a constant parameter d. He proposed that the incident X-ray radiation would produce a Bragg peak if reflections off the various planes interfered constructively. The interference is constructive when the phase difference between the waves reflected off different atomic planes is a multiple of 2π; this condition (see the Bragg condition section below) was first presented by Lawrence Bragg on 11 November 1912 to the Cambridge Philosophical Society. Although simple, Bragg's law confirmed the existence of real particles at the atomic scale, as well as providing a powerful new tool for studying crystals. Lawrence Bragg and his father, William Henry Bragg, were awarded the Nobel Prize in Physics in 1915 for their work in determining crystal structures beginning with NaCl, ZnS, and diamond. They are the only father–son team to jointly win. The concept of Bragg diffraction applies equally to neutron diffraction and approximately to electron diffraction. In both cases the wavelengths are comparable with inter-atomic distances (~150 pm). Many other types of matter waves have also been shown to diffract, and also light from objects with a larger ordered structure such as opals. Bragg condition Bragg diffraction occurs when radiation of a wavelength comparable to atomic spacings is scattered in a specular fashion (mirror-like reflection) by planes of atoms in a crystalline material, and undergoes constructive interference. When the scattered waves are incident at a specific angle, they remain in phase and constructively interfere. The glancing angle θ (see figure on the right, and note that this differs from the convention in Snell's law where θ is measured from the surface normal), the wavelength λ, and the "grating constant" d of the crystal are connected by the relation nλ = 2d sin θ, where n is the diffraction order (n = 1 is first order, n = 2 is second order, n = 3 is third order). This equation, Bragg's law, describes the condition on θ for constructive interference. A map of the intensities of the scattered waves as a function of their angle is called a diffraction pattern. Strong intensities known as Bragg peaks are obtained in the diffraction pattern when the scattering angles satisfy the Bragg condition. This is a special case of the more general Laue equations, and the Laue equations can be shown to reduce to the Bragg condition with additional assumptions.
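A minimal sketch (with assumed wavelength and spacing, not values taken from the article) of solving the Bragg condition nλ = 2d sin θ for the glancing angles of successive diffraction orders:

```python
import math

def bragg_angles(wavelength_pm, d_pm, max_order=5):
    """Return (order, glancing angle in degrees) for each order satisfying
    n*wavelength <= 2*d (otherwise sin(theta) would exceed 1)."""
    angles = []
    for n in range(1, max_order + 1):
        s = n * wavelength_pm / (2.0 * d_pm)
        if s > 1.0:
            break
        angles.append((n, math.degrees(math.asin(s))))
    return angles

# Cu K-alpha X-rays (~154 pm) on planes spaced 282 pm (the NaCl 200 spacing):
for n, theta in bragg_angles(154.0, 282.0):
    print(f"n={n}: theta = {theta:.2f} deg")
```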
Derivation In Bragg's original paper he describes his approach as a Huygens' construction for a reflected wave. Suppose that a plane wave (of any type) is incident on planes of lattice points, with separation d, at an angle θ as shown in the Figure. Points A and C are on one plane, and B is on the plane below. Points ABCC' form a quadrilateral. There will be a path difference between the ray that gets reflected along AC' and the ray that gets transmitted along AB, then reflected along BC. This path difference is (AB + BC) − AC'. The two separate waves will arrive at a point (infinitely far from these lattice planes) with the same phase, and hence undergo constructive interference, if and only if this path difference is equal to any integer value of the wavelength, i.e. (AB + BC) − AC' = nλ, where n and λ are an integer and the wavelength of the incident wave respectively. Therefore, from the geometry AB = BC = d/sin θ and AC = 2d/tan θ, from which it follows that AC' = AC cos θ = (2d/tan θ) cos θ. Putting everything together, nλ = 2d/sin θ − (2d/tan θ) cos θ = 2d (1 − cos²θ)/sin θ, which simplifies to nλ = 2d sin θ, which is Bragg's law shown above. If only two planes of atoms were diffracting, as shown in the Figure, then the transition from constructive to destructive interference would be gradual as a function of angle, with gentle maxima at the Bragg angles. However, since many atomic planes are participating in most real materials, sharp peaks are typical. A rigorous derivation from the more general Laue equations is available (see page: Laue equations). Beyond Bragg's law The Bragg condition is correct for very large crystals. Because the scattering of X-rays and neutrons is relatively weak, in many cases quite large crystals with sizes of 100 nm or more are used. While there can be additional effects due to crystal defects, these are often quite small. In contrast, electrons interact thousands of times more strongly with solids than X-rays, and also lose energy (inelastic scattering). Therefore, samples used in transmission electron diffraction are much thinner. Typical diffraction patterns, for instance the Figure, show spots for different directions (plane waves) of the electrons leaving a crystal. The angles that Bragg's law predicts are still approximately right, but in general there is a lattice of spots which are close to projections of the reciprocal lattice that is at right angles to the direction of the electron beam. (In contrast, Bragg's law predicts that only one or perhaps two would be present, not simultaneously tens to hundreds.) With low-energy electron diffraction where the electron energies are typically 30–1000 electron volts, the result is similar with the electrons reflected back from a surface. Also similar is reflection high-energy electron diffraction which typically leads to rings of diffraction spots. With X-rays the effect of having small crystals is described by the Scherrer equation. This leads to broadening of the Bragg peaks which can be used to estimate the size of the crystals.
Periodic arrays of spherical particles give rise to interstitial voids (the spaces between the particles), which act as a natural diffraction grating for visible light waves, when the interstitial spacing is of the same order of magnitude as the incident lightwave. In these cases brilliant iridescence (or play of colours) is attributed to the diffraction and constructive interference of visible lightwaves according to Bragg's law, in a manner analogous to the scattering of X-rays in crystalline solids. The effects occur at visible wavelengths because the interplanar spacing is much larger than for true crystals. Precious opal is one example of a colloidal crystal with optical effects. Volume Bragg gratings Volume Bragg gratings (VBG) or volume holographic gratings (VHG) consist of a volume where there is a periodic change in the refractive index. Depending on the orientation of the refractive index modulation, VBG can be used either to transmit or reflect a small bandwidth of wavelengths. Bragg's law (adapted for volume holograms) dictates which wavelength λ_B will be diffracted; the relation involves the Bragg order m (a positive integer), the diffracted wavelength λ_B, the fringe spacing Λ of the grating, the angle θ between the incident beam and the normal (N) of the entrance surface, and the angle φ between the normal and the grating vector (K_G). Radiation that does not match Bragg's law will pass through the VBG undiffracted. The output wavelength can be tuned over a few hundred nanometers by changing the incident angle (θ). VBG are being used to produce widely tunable laser sources or perform global hyperspectral imagery (see Photon etc.). Selection rules and practical crystallography The measurement of the angles can be used to determine crystal structure; see X-ray crystallography for more details. As a simple example, Bragg's law, as stated above, can be used to obtain the lattice spacing of a particular cubic system through the following relation: d = a / √(h² + k² + l²), where a is the lattice spacing of the cubic crystal, and h, k, and l are the Miller indices of the Bragg plane. Combining this relation with Bragg's law gives: nλ = 2a sin θ / √(h² + k² + l²). One can derive selection rules for the Miller indices for different cubic Bravais lattices as well as many others; a few of the selection rules are given in the table below. These selection rules can be used for any crystal with the given crystal structure.
Bravais lattice – Allowed reflections – Forbidden reflections
Simple cubic – any h, k, l – none
Body-centered cubic – h + k + l even – h + k + l odd
Face-centered cubic – h, k, l all odd or all even – h, k, l of mixed parity
KCl has a face-centered cubic Bravais lattice. However, the K+ and the Cl− ion have the same number of electrons and are quite close in size, so that the diffraction pattern becomes essentially the same as for a simple cubic structure with half the lattice parameter. Selection rules for other structures can be referenced elsewhere, or derived. Lattice spacings for the other crystal systems can be found in standard references.
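A short sketch combining the cubic-lattice spacing formula and Bragg's law with the face-centered-cubic selection rule from the table above; the lattice parameter and wavelength are assumed example values:

```python
import math

def cubic_d_spacing(a_pm, h, k, l):
    """Interplanar spacing d = a / sqrt(h^2 + k^2 + l^2) for a cubic crystal."""
    return a_pm / math.sqrt(h*h + k*k + l*l)

def fcc_allowed(h, k, l):
    """FCC selection rule: h, k, l all odd or all even."""
    parities = {h % 2, k % 2, l % 2}
    return len(parities) == 1

a_nacl = 564.0      # assumed lattice parameter of NaCl, pm
wavelength = 154.0  # Cu K-alpha, pm
for hkl in [(1, 0, 0), (1, 1, 1), (2, 0, 0), (2, 2, 0)]:
    if not fcc_allowed(*hkl):
        print(hkl, "forbidden for FCC")
        continue
    d = cubic_d_spacing(a_nacl, *hkl)
    theta = math.degrees(math.asin(wavelength / (2.0 * d)))
    print(hkl, f"d = {d:.1f} pm, first-order theta = {theta:.2f} deg")
```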
Physical sciences
Waves
Physics
422180
https://en.wikipedia.org/wiki/Telescope%20mount
Telescope mount
A telescope mount is a mechanical structure which supports a telescope. Telescope mounts are designed to support the mass of the telescope and allow for accurate pointing of the instrument. Many sorts of mounts have been developed over the years, with the majority of effort being put into systems that can track the motion of the fixed stars as the Earth rotates. Fixed mounts Fixed telescope mounts are entirely fixed in one position, such as zenith telescopes that point only straight up and the National Radio Astronomy Observatory's Green Bank fixed radio 'horn' built to observe Cassiopeia A. Fixed altitude mounts Fixed-altitude mounts usually have the primary optics fixed at an altitude angle while rotating horizontally (in azimuth). They can cover the whole sky but only observe objects for the short time when that object passes a specific altitude and azimuth. Transit mounts Transit mounts are single-axis mounts fixed in azimuth while rotating in altitude, usually oriented on a north-south axis. This allows the telescope to view the whole sky, but only when the Earth's rotation allows the objects to cross (transit) through that narrow north-south line (the meridian). This type of mount is used in transit telescopes, designed for precision astronomical measurement. Transit mounts are also used to save on cost or where the instrument's mass makes movement on more than one axis very difficult, such as with large radio telescopes. Altazimuth mounts Altazimuth, altitude-azimuth, or alt-az mounts allow telescopes to be moved in altitude (up and down) or azimuth (side to side) as separate motions. This mechanically simple mount was used in early telescope designs and until the second half of the 20th century was used as a "less sophisticated" alternative to equatorial mounts, since it did not allow tracking of the night sky. This meant that until recently it was normally used in inexpensive commercial and hobby constructions. Since the invention of digital tracking systems, altazimuth mounts have come to be used in practically all modern large research telescopes. Digital tracking has also made it a popular telescope mount used in amateur astronomy. Besides the mechanical inability to easily follow celestial motion, the altazimuth mount has other limitations. The telescope's field of view rotates at a varying speed as the telescope tracks, while the telescope body does not, requiring a system to counter-rotate the field of view when the mount is used for astrophotography or other types of astronomical imaging. The mount also has a blind spot or "zenith hole", a region near the zenith where the tracking rate in the azimuth coordinate becomes too high to accurately follow equatorial motion (if the elevation is limited to +90 degrees); this is illustrated in the sketch below. Alt-alt (altitude-altitude) mounts Alt-alt mounts, or altitude-altitude mounts, are designs similar to horizontal equatorial yoke mounts or Cardan suspension gimbals. This mount is an alternative to the altazimuth mount that has the advantage of not having a blind spot near the zenith, and for objects near the celestial equator the field rotation is minimized. It has the disadvantage of having all the mass, complexity, and engineering problems of its equatorial counterpart, so it is only used in specialty applications such as satellite tracking. These mounts may include a third azimuth axis (an altitude-altitude-azimuth mount) to rotate the entire mount into an orientation that allows smoother tracking.
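To illustrate the "zenith hole" mentioned above: a sketch using the standard field-rotation-rate formula for altazimuth tracking (a textbook result assumed here, not stated in the article); the latitude and pointing values are likewise assumed:

```python
import math

# Field-rotation rate for an alt-az mount (standard spherical-astronomy result):
#   rate = omega_sidereal * cos(latitude) * cos(azimuth) / cos(altitude)
# The rate diverges as altitude -> 90 deg, which is the "zenith hole".

SIDEREAL_RATE = 15.041  # arcseconds of sky rotation per second of time

def field_rotation_rate(lat_deg, az_deg, alt_deg):
    lat, az, alt = map(math.radians, (lat_deg, az_deg, alt_deg))
    return SIDEREAL_RATE * math.cos(lat) * math.cos(az) / math.cos(alt)

# At latitude 40 deg, an object due south (az = 180): rate vs altitude
for alt in (30, 60, 85, 89.5):
    rate = abs(field_rotation_rate(40.0, 180.0, alt))
    print(f"alt {alt:>5} deg: {rate:8.1f} arcsec/s")
```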
Equatorial mounts The equatorial mount has a north-south "polar axis" tilted to be parallel to Earth's polar axis, which allows the telescope to swing in an east-west arc, with a second axis perpendicular to that to allow the telescope to swing in a north-south arc. Slewing or mechanically driving the mount's polar axis in a direction counter to the Earth's rotation allows the telescope to accurately follow the motion of the night sky (the required drive rate is sketched below). Equatorial mounts come in different forms, including German equatorial mounts (GEM for short), equatorial fork mounts, mixed variations on yoke or cross-axis mounts, and equatorial platforms such as the Poncet Platform. Tilting the polar axis adds a level of complexity to the mount. Mechanical systems have to be engineered to support one or both ends of this axis (such as in fork or yoke mounts). Designs such as German equatorial or cross-axis mounts also need large counterweights to counterbalance the mass of the telescope. Larger domes and other structures are also needed to cover the increased mechanical size and range of movement of equatorial mounts. Because of this, equatorial mounts become less viable in very large telescopes and have largely been replaced by altazimuth mounts for those applications. Hexapod-Telescope Instead of the classical mounting using two axes, the mirror is supported by six extendable struts (a Stewart–Gough platform). This configuration allows moving the telescope in all six spatial degrees of freedom and also provides strong structural integrity.
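A minimal sketch of the drive rate implied by driving the polar axis counter to Earth's rotation; the gearing figures are hypothetical:

```python
# Sidereal tracking: the polar axis must turn once per sidereal day.
SIDEREAL_DAY_S = 86164.1          # one rotation of the sky, in seconds

deg_per_hour = 360.0 / SIDEREAL_DAY_S * 3600.0
print(f"sidereal rate: {deg_per_hour:.2f} deg/hour")   # ~15.04 deg/hour

# For a hypothetical 180-tooth worm wheel driven by a 200-step/rev stepper
# with 16x microstepping, the required step frequency:
worm_teeth, steps_per_rev, microsteps = 180, 200, 16
steps_per_sky_rev = worm_teeth * steps_per_rev * microsteps
print(f"step rate: {steps_per_sky_rev / SIDEREAL_DAY_S:.2f} steps/s")  # ~6.68
```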
Technology
Telescope
null
422328
https://en.wikipedia.org/wiki/Napkin
Napkin
A napkin, serviette or face towelette is a square of cloth or paper tissue used at the table for wiping the mouth and fingers while eating. It is also sometimes used as a bib by tucking it into a shirt collar. It is usually small and folded, sometimes in intricate designs and shapes. Etymology and terminology The term 'napkin' dates from the 14th century, in the sense of a piece of cloth or paper used at mealtimes to wipe the lips or fingers and to protect clothing. The word derives from the Late Middle English nappekin, from Old French nappe (tablecloth, from Latin mappa), with the suffix -kin. A 'napkin' can also refer to a small cloth or towel, such as a handkerchief in dialectal British English, or a kerchief in Scotland. 'Napkin' may also be short for "sanitary napkin". Description Conventionally, the napkin is folded and placed to the left of the place setting, outside the outermost fork. In a restaurant setting or a caterer's hall, it may be folded into more elaborate shapes and displayed on the empty plate. Origami techniques can be used to create a three-dimensional design. A napkin may also be held together in a bundle with cutlery by a napkin ring. Alternatively, paper napkins may be contained within a napkin holder. History Summaries of napkin history often say that the ancient Greeks used bread to wipe their hands. This is suggested by a passage in one of Alciphron's letters (3:44), and by some remarks by the sausage seller in Aristophanes' play, The Knights. The bread in both texts is referred to as apomagdalia, which simply means bread from inside the crust, known as the crumb, and not special "napkin bread". Napkins were also used in ancient Roman times. One of the earliest references to table napkins in English dates to 1384–85. Paper napkins The use of paper napkins is documented in ancient China, where paper was invented in the 2nd century BC. Paper napkins were known as chih pha, folded in squares, and used for the serving of tea. Textual evidence of paper napkins appears in a description of the possessions of the Yu family, from the city of Hangzhou. Paper napkins were first imported to the US in the late 1800s but did not gain widespread acceptance until 1948, when Emily Post asserted, "It's far better form to use paper napkins than linen napkins that were used at breakfast." Leonardo da Vinci It has been claimed that Leonardo da Vinci invented the napkin in 1491. According to this claim, the Duke of Milan, Ludovico Sforza, used to tie live rabbits decorated with ribbons to the guests' chairs so they could wipe their hands on the animals' backs. Leonardo found this inappropriate and presented a cloth for each guest. The myth stems from Leonardo's Kitchen Notebooks (1987), by Jonathan Routh and Shelagh Routh, a prank book published as an April Fools' Day joke; it claims that a long-lost Codex Romanoff, which never really existed, was found in 1481.
Biology and health sciences
Hygiene products
Health
422481
https://en.wikipedia.org/wiki/Mass%E2%80%93energy%20equivalence
Mass–energy equivalence
In physics, mass–energy equivalence is the relationship between mass and energy in a system's rest frame, where the two quantities differ only by a multiplicative constant and the units of measurement. The principle is described by the physicist Albert Einstein's formula: E = mc². In a reference frame where the system is moving, its relativistic energy and relativistic mass (instead of rest mass) obey the same formula. The formula defines the energy E of a particle in its rest frame as the product of mass (m) with the speed of light squared (c²). Because the speed of light is a large number in everyday units (approximately 3×10⁸ metres per second), the formula implies that a small amount of mass corresponds to an enormous amount of energy. Rest mass, also called invariant mass, is a fundamental physical property of matter, independent of velocity. Massless particles such as photons have zero invariant mass, but massless free particles have both momentum and energy. The equivalence principle implies that when mass is lost in chemical reactions or nuclear reactions, a corresponding amount of energy will be released. The energy can be released to the environment (outside of the system being considered) as radiant energy, such as light, or as thermal energy. The principle is fundamental to many fields of physics, including nuclear and particle physics. Mass–energy equivalence arose from special relativity as a paradox described by the French polymath Henri Poincaré (1854–1912). Einstein was the first to propose the equivalence of mass and energy as a general principle and a consequence of the symmetries of space and time. The principle first appeared in "Does the inertia of a body depend upon its energy-content?", one of his Annus Mirabilis papers, published on 21 November 1905. The formula and its relationship to momentum, as described by the energy–momentum relation, were later developed by other physicists. Description Mass–energy equivalence states that all objects having mass, or massive objects, have a corresponding intrinsic energy, even when they are stationary. In the rest frame of an object, where by definition it is motionless and so has no momentum, the mass and energy are equal, or differ only by a constant factor, the speed of light squared (c²). In Newtonian mechanics, a motionless body has no kinetic energy, and it may or may not have other amounts of internal stored energy, like chemical energy or thermal energy, in addition to any potential energy it may have from its position in a field of force. These energies tend to be much smaller than the mass of the object multiplied by c², which is on the order of 10¹⁷ joules for a mass of one kilogram. Due to this principle, the mass of the atoms that come out of a nuclear reaction is less than the mass of the atoms that go in, and the difference in mass shows up as heat and light with the same equivalent energy as the difference. In analyzing these extreme events, Einstein's formula can be used with E as the energy released (removed), and m as the change in mass. In relativity, all the energy that moves with an object (i.e., the energy as measured in the object's rest frame) contributes to the total mass of the body, which measures how much it resists acceleration. If an isolated box of ideal mirrors could contain light, the individually massless photons would contribute to the total mass of the box by the amount equal to their energy divided by c².
For an observer in the rest frame, removing energy is the same as removing mass, and the formula m = E/c² indicates how much mass is lost when energy is removed. In the same way, when any energy is added to an isolated system, the increase in the mass is equal to the added energy divided by c². Mass in special relativity An object moves at different speeds in different frames of reference, depending on the motion of the observer. This implies the kinetic energy, in both Newtonian mechanics and relativity, is 'frame dependent', so that the amount of relativistic energy that an object is measured to have depends on the observer. The relativistic mass of an object is given by the relativistic energy divided by c². Because the relativistic mass is exactly proportional to the relativistic energy, relativistic mass and relativistic energy are nearly synonymous; the only difference between them is the units. The rest mass or invariant mass of an object is defined as the mass an object has in its rest frame, when it is not moving with respect to the observer. Physicists typically use the term mass, though experiments have shown an object's gravitational mass depends on its total energy and not just its rest mass. The rest mass is the same for all inertial frames; as it is independent of the motion of the observer, it is the smallest possible value of the relativistic mass of the object. Because of the attraction between components of a system, which results in potential energy, the rest mass is almost never additive; in general, the mass of an object is not the sum of the masses of its parts. The rest mass of an object is the total energy of all the parts, including kinetic energy, as observed from the center of momentum frame, and potential energy. The masses add up only if the constituents are at rest (as observed from the center of momentum frame) and do not attract or repel, so that they do not have any extra kinetic or potential energy. Massless particles are particles with no rest mass, and therefore have no intrinsic energy; their energy is due only to their momentum. Relativistic mass Relativistic mass depends on the motion of the object, so that different observers in relative motion see different values for it. The relativistic mass of a moving object is larger than the relativistic mass of an object at rest, because a moving object has kinetic energy. If the object moves slowly, the relativistic mass is nearly equal to the rest mass and both are nearly equal to the classical inertial mass (as it appears in Newton's laws of motion). If the object moves quickly, the relativistic mass is greater than the rest mass by an amount equal to the mass associated with the kinetic energy of the object. Massless particles also have relativistic mass derived from their kinetic energy, equal to their relativistic energy divided by c², or m = E/c². The speed of light is one in a system where length and time are measured in natural units, and the relativistic mass and energy would be equal in value and dimension. As it is just another name for the energy, the use of the term relativistic mass is redundant and physicists generally reserve mass to refer to rest mass, or invariant mass, as opposed to relativistic mass. A consequence of this terminology is that the mass is not conserved in special relativity, whereas the conservation of momentum and conservation of energy are both fundamental laws.
Conservation of mass and energy Conservation of energy is a universal principle in physics and holds for any interaction, along with the conservation of momentum. The classical conservation of mass, in contrast, is violated in certain relativistic settings. This concept has been experimentally proven in a number of ways, including the conversion of mass into kinetic energy in nuclear reactions and other interactions between elementary particles. While modern physics has discarded the expression 'conservation of mass', in older terminology a relativistic mass can also be defined to be equivalent to the energy of a moving system, allowing for a conservation of relativistic mass. Mass conservation breaks down when the energy associated with the mass of a particle is converted into other forms of energy, such as kinetic energy, thermal energy, or radiant energy. Massless particles Massless particles have zero rest mass. The Planck–Einstein relation for the energy of photons is given by the equation E = hf, where h is the Planck constant and f is the photon frequency. This frequency, and thus the relativistic energy, is frame-dependent. If an observer runs away from a photon in the direction the photon travels from a source, and it catches up with the observer, the observer sees it as having less energy than it had at the source. The faster the observer is traveling with regard to the source when the photon catches up, the less energy the photon would be seen to have. As an observer approaches the speed of light with regard to the source, the redshift of the photon increases, according to the relativistic Doppler effect. The energy of the photon is reduced, and as the wavelength becomes arbitrarily large, the photon's energy approaches zero, because of the massless nature of photons, which does not permit any intrinsic energy. Composite systems For closed systems made up of many parts, like an atomic nucleus, planet, or star, the relativistic energy is given by the sum of the relativistic energies of each of the parts, because energies are additive in these systems. If a system is bound by attractive forces, and the energy gained in excess of the work done is removed from the system, then mass is lost with this removed energy. The mass of an atomic nucleus is less than the total mass of the protons and neutrons that make it up. This mass decrease is also equivalent to the energy required to break up the nucleus into individual protons and neutrons. This effect can be understood by looking at the potential energy of the individual components. The individual particles have a force attracting them together, and forcing them apart increases the potential energy of the particles in the same way that lifting an object up on earth does. This energy is equal to the work required to split the particles apart. The mass of the Solar System is slightly less than the sum of its individual masses. For an isolated system of particles moving in different directions, the invariant mass of the system is the analog of the rest mass, and is the same for all observers, even those in relative motion. It is defined as the total energy (divided by c²) in the center of momentum frame. The center of momentum frame is defined so that the system has zero total momentum; the term center of mass frame is also sometimes used, where the center of mass frame is a special case of the center of momentum frame where the center of mass is put at the origin.
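A small sketch of the composite-system point above, assuming the exact SI values for h and c: two photons that are individually massless still form a system with nonzero invariant mass when their momenta cancel:

```python
import math

C = 299_792_458.0      # speed of light, m/s (exact)
H = 6.62607015e-34     # Planck constant, J*s (exact)

def invariant_mass(energies, momenta):
    """Invariant mass m = sqrt(E_total^2 - (p_total*c)^2) / c^2 for a 1-D
    system; energies in joules, momenta signed, in kg*m/s."""
    e_total = sum(energies)
    p_total = sum(momenta)
    return math.sqrt(max(e_total**2 - (p_total * C) ** 2, 0.0)) / C**2

e_photon = H * 5.45e14          # a green photon, E = h*f
p_photon = e_photon / C         # photon momentum magnitude, p = E/c

# Opposite directions: zero net momentum, invariant mass 2E/c^2 (nonzero)
print(invariant_mass([e_photon, e_photon], [p_photon, -p_photon]))
# Same direction: invariant mass is zero, like a single massless particle
print(invariant_mass([e_photon, e_photon], [p_photon, p_photon]))
```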
A simple example of an object with moving parts but zero total momentum is a container of gas. In this case, the mass of the container is given by its total energy (including the kinetic energy of the gas molecules), since the system's total energy and invariant mass are the same in the reference frame where the momentum is zero, and such a reference frame is also the only frame in which the object can be weighed. In a similar way, the theory of special relativity posits that the thermal energy in all objects, including solids, contributes to their total masses, even though this energy is present as the kinetic and potential energies of the atoms in the object, and it (in a similar way to the gas) is not seen in the rest masses of the atoms that make up the object. Similarly, even photons, if trapped in an isolated container, would contribute their energy to the mass of the container. Such extra mass, in theory, could be weighed in the same way as any other type of rest mass, even though individually photons have no rest mass. The property that trapped energy in any form adds weighable mass to systems that have no net momentum is one of the consequences of relativity. It has no counterpart in classical Newtonian physics, where energy never exhibits weighable mass. Relation to gravity Physics has two concepts of mass, the gravitational mass and the inertial mass. The gravitational mass is the quantity that determines the strength of the gravitational field generated by an object, as well as the gravitational force acting on the object when it is immersed in a gravitational field produced by other bodies. The inertial mass, on the other hand, quantifies how much an object accelerates if a given force is applied to it. The mass–energy equivalence in special relativity refers to the inertial mass. However, already in the context of Newtonian gravity, the weak equivalence principle is postulated: the gravitational and the inertial mass of every object are the same. Thus, the mass–energy equivalence, combined with the weak equivalence principle, results in the prediction that all forms of energy contribute to the gravitational field generated by an object. This observation is one of the pillars of the general theory of relativity. The prediction that all forms of energy interact gravitationally has been subject to experimental tests. One of the first observations testing this prediction, called the Eddington experiment, was made during the solar eclipse of May 29, 1919. During the eclipse, the English astronomer and physicist Arthur Eddington observed that the light from stars passing close to the Sun was bent. The effect is due to the gravitational attraction of light by the Sun. The observation confirmed that the energy carried by light is indeed equivalent to a gravitational mass. Another seminal experiment, the Pound–Rebka experiment, was performed in 1960. In this test a beam of light was emitted from the top of a tower and detected at the bottom. The frequency of the light detected was higher than the light emitted. This result confirms that the energy of photons increases when they fall in the gravitational field of the Earth. The energy, and therefore the gravitational mass, of photons is proportional to their frequency as stated by the Planck relation. Efficiency In some reactions, matter particles can be destroyed and their associated energy released to the environment as other forms of energy, such as light and heat.
One example of such a conversion takes place in elementary particle interactions, where the rest energy is transformed into kinetic energy. Such conversions between types of energy happen in nuclear weapons, in which the protons and neutrons in atomic nuclei lose a small fraction of their original mass, though the mass lost is not due to the destruction of any smaller constituents. Nuclear fission allows a tiny fraction of the energy associated with the mass to be converted into usable energy such as radiation; in the decay of uranium, for instance, about 0.1% of the mass of the original atom is lost. In theory, it should be possible to destroy matter and convert all of the rest-energy associated with matter into heat and light, but none of the theoretically known methods are practical. One way to harness all the energy associated with mass is to annihilate matter with antimatter. Antimatter is rare in the universe, however, and the known mechanisms of production require more usable energy than would be released in annihilation. CERN estimated in 2011 that over a billion times more energy is required to make and store antimatter than could be released in its annihilation. As most of the mass which comprises ordinary objects resides in protons and neutrons, converting all the energy of ordinary matter into more useful forms requires that the protons and neutrons be converted to lighter particles, or particles with no mass at all. In the Standard Model of particle physics, the number of protons plus neutrons is nearly exactly conserved. Despite this, Gerard 't Hooft showed that there is a process that converts protons and neutrons to antielectrons and neutrinos. This is the weak SU(2) instanton proposed by the physicists Alexander Belavin, Alexander Markovich Polyakov, Albert Schwarz, and Yu. S. Tyupkin. This process can, in principle, destroy matter and convert all the energy of matter into neutrinos and usable energy, but it is normally extraordinarily slow. It was later shown that the process occurs rapidly at extremely high temperatures that would only have been reached shortly after the Big Bang. Many extensions of the standard model contain magnetic monopoles, and in some models of grand unification, these monopoles catalyze proton decay, a process known as the Callan–Rubakov effect. This process would be an efficient mass–energy conversion at ordinary temperatures, but it requires making monopoles and anti-monopoles, whose production is expected to be inefficient. Another method of completely annihilating matter uses the gravitational field of black holes. The British theoretical physicist Stephen Hawking theorized it is possible to throw matter into a black hole and use the emitted heat to generate power. According to the theory of Hawking radiation, however, larger black holes radiate less than smaller ones, so that usable power can only be produced by small black holes. Extension for systems in motion Unlike a system's energy in an inertial frame, the relativistic energy (E) of a system depends on both the rest mass (m) and the total momentum of the system. The extension of Einstein's equation to these systems is given by: E² = (mc²)² + (pc)², or equivalently E = √((mc²)² + (pc)²), where the (pc)² term represents the square of the Euclidean norm (total vector length) of the various momentum vectors in the system, which reduces to the square of the simple momentum magnitude, if only a single particle is considered. This equation is called the energy–momentum relation and reduces to E = mc² when the momentum term is zero.
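A minimal numeric sketch of the energy–momentum relation just given; the electron mass and the momentum value below are assumed for illustration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def total_energy_J(rest_mass_kg, momentum_kg_m_s):
    """E = sqrt((m c^2)^2 + (p c)^2), the energy-momentum relation."""
    return math.hypot(rest_mass_kg * C**2, momentum_kg_m_s * C)

m_e = 9.109e-31                          # electron rest mass, kg
print(total_energy_J(m_e, 0.0))          # rest energy only: E = m c^2
print(total_energy_J(m_e, 2.0e-22))      # a relativistic electron
print(total_energy_J(0.0, 2.0e-22))      # massless case: E = p c
```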
For photons, where m = 0, the equation reduces to E = pc. Low-speed approximation Using the Lorentz factor γ = 1/√(1 − v²/c²), the energy–momentum relation can be rewritten as E = γmc² and expanded as a power series: E = mc² + (1/2)mv² + (3/8)m(v⁴/c²) + … For speeds much smaller than the speed of light, higher-order terms in this expression get smaller and smaller because v/c is small. For low speeds, all but the first two terms can be ignored: E ≈ mc² + (1/2)mv². In classical mechanics, both the mc² term and the high-speed corrections are ignored. The initial value of the energy is arbitrary, as only the change in energy can be measured, and so the mc² term is ignored in classical physics. While the higher-order terms become important at higher speeds, the Newtonian equation is a highly accurate low-speed approximation; adding in the third term yields: E ≈ mc² + (1/2)mv² + (3/8)m(v⁴/c²). The difference between the two approximations is given by 3v²/(4c²), a number very small for everyday objects. In 2018 NASA announced the Parker Solar Probe was the fastest spacecraft ever, with a speed of 246,960 km/h (153,454 mph). The difference between the approximations for the Parker Solar Probe in 2018 is 3v²/(4c²) ≈ 4×10⁻⁸, which accounts for an energy correction of four parts per hundred million. The gravitational constant, in contrast, has a standard relative uncertainty of about 2.2×10⁻⁵. Applications Application to nuclear physics The nuclear binding energy is the minimum energy that is required to disassemble the nucleus of an atom into its component parts. The mass of an atom is less than the sum of the masses of its constituents due to the attraction of the strong nuclear force. The difference between the two masses is called the mass defect and is related to the binding energy through Einstein's formula. The principle is used in modeling nuclear fission reactions, and it implies that a great amount of energy can be released by the nuclear fission chain reactions used in both nuclear weapons and nuclear power. A water molecule weighs a little less than two free hydrogen atoms and an oxygen atom. The minuscule mass difference is the energy needed to split the molecule into three individual atoms (divided by c²), which was given off as heat when the molecule formed (this heat had mass). Similarly, a stick of dynamite in theory weighs a little bit more than the fragments after the explosion; in this case the mass difference is the energy and heat that is released when the dynamite explodes. Such a change in mass may only happen when the system is open, and the energy and mass are allowed to escape. Thus, if a stick of dynamite is blown up in a hermetically sealed chamber, the mass of the chamber and fragments, the heat, sound, and light would still be equal to the original mass of the chamber and dynamite. If sitting on a scale, the weight and mass would not change. This would in theory also happen even with a nuclear bomb, if it could be kept in an ideal box of infinite strength, which did not rupture or pass radiation. Thus, a 21.5 kiloton (90 TJ) nuclear bomb produces about one gram of heat and electromagnetic radiation, but the mass of this energy would not be detectable in an exploded bomb in an ideal box sitting on a scale; instead, the contents of the box would be heated to millions of degrees without changing total mass and weight. If a transparent window passing only electromagnetic radiation were opened in such an ideal box after the explosion, and a beam of X-rays and other lower-energy light allowed to escape the box, it would eventually be found to weigh one gram less than it had before the explosion. This weight loss and mass loss would happen as the box was cooled by this process, to room temperature.
However, any surrounding mass that absorbed the X-rays (and other "heat") would gain this gram of mass from the resulting heating; thus, in this case, the mass "loss" would represent merely its relocation. Practical examples Einstein used the centimetre–gram–second system of units (cgs), but the formula is independent of the system of units. In natural units, the numerical value of the speed of light is set to equal 1, and the formula expresses an equality of numerical values: E = m. In the SI system (expressing the ratio E/m in joules per kilogram using the value of c in metres per second): E/m = c² = (299,792,458 m/s)² ≈ 9.0 × 10¹⁶ joules per kilogram. So the energy equivalent of one kilogram of mass is
89.9 petajoules
25.0 billion kilowatt-hours (≈ 25,000 GW·h)
21.5 trillion kilocalories (≈ 21 Pcal)
85.2 trillion BTUs
0.0852 quads
or the energy released by combustion of the following:
21,500 kilotons of TNT-equivalent energy (≈ 21 Mt)
about 2.6 billion litres (roughly 690 million US gallons) of automotive gasoline
(these conversions are reproduced in the sketch below). Any time energy is released, the process can be evaluated from an E = mc² perspective. For instance, the "gadget"-style bomb used in the Trinity test and the bombing of Nagasaki had an explosive yield equivalent to 21 kt of TNT. About 1 kg of the approximately 6.15 kg of plutonium in each of these bombs fissioned into lighter elements totaling almost exactly one gram less, after cooling. The electromagnetic radiation and kinetic energy (thermal and blast energy) released in this explosion carried the missing gram of mass. Whenever energy is added to a system, the system gains mass, as shown when the equation is rearranged: m = E/c².
A spring's mass increases whenever it is put into compression or tension. Its mass increase arises from the increased potential energy stored within it, which is bound in the stretched chemical (electron) bonds linking the atoms within the spring.
Raising the temperature of an object (increasing its thermal energy) increases its mass. For example, consider the world's primary mass standard for the kilogram, made of platinum and iridium. If its temperature is allowed to change by 1 °C, its mass changes by 1.5 picograms (1 pg = 10⁻¹² g).
A spinning ball has greater mass than when it is not spinning. Its increase of mass is exactly the equivalent of the mass of energy of rotation, which is itself the sum of the kinetic energies of all the moving parts of the ball. For example, the Earth itself is more massive due to its rotation than it would be with no rotation. The rotational energy of the Earth is greater than 10²⁴ joules, which is over 10⁷ kg.
History While Einstein was the first to have correctly deduced the mass–energy equivalence formula, he was not the first to have related energy with mass, though nearly all previous authors thought that the energy that contributes to mass comes only from electromagnetic fields. Once discovered, Einstein's formula was initially written in many different notations, and its interpretation and justification was further developed in several steps. Developments prior to Einstein Eighteenth century theories on the correlation of mass and energy included that devised by the English scientist Isaac Newton in 1717, who speculated that light particles and matter particles were interconvertible in "Query 30" of the Opticks, where he asks: "Are not the gross bodies and light convertible into one another, and may not bodies receive much of their activity from the particles of light which enter their composition?"
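The conversions in the Practical examples list above can be reproduced directly; a minimal sketch using standard conversion factors (the factors themselves are assumptions of the sketch, not values stated in the article):

```python
# Energy equivalent of 1 kg of mass, converted into the units listed above.
C = 299_792_458.0           # m/s
E = 1.0 * C**2              # joules per kilogram, E = m c^2

KWH = 3.6e6                 # joules per kilowatt-hour
KCAL = 4184.0               # joules per kilocalorie
BTU = 1055.06               # joules per BTU
KT_TNT = 4.184e12           # joules per kiloton of TNT

print(f"{E:.3e} J")                      # ~8.988e16 J (~89.9 PJ)
print(f"{E / KWH:.3e} kWh")              # ~2.50e10 kWh (25.0 billion)
print(f"{E / KCAL:.3e} kcal")            # ~2.15e13 kcal (21.5 trillion)
print(f"{E / BTU:.3e} BTU")              # ~8.52e13 BTU (85.2 trillion)
print(f"{E / KT_TNT:.0f} kt TNT")        # ~21,500 kt
```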
Swedish scientist and theologian Emanuel Swedenborg, in his Principia of 1734, theorized that all matter is ultimately composed of dimensionless points of "pure and total motion". He described this motion as being without force, direction or speed, but having the potential for force, direction and speed everywhere within it. During the nineteenth century there were several speculative attempts to show that mass and energy were proportional in various ether theories. In 1873 the Russian physicist and mathematician Nikolay Umov pointed out a relation between mass and energy for ether in the form of E = kmc², where 0.5 ≤ k ≤ 1. English engineer Samuel Tolver Preston in 1875 and the Italian industrialist and geologist Olinto De Pretto in 1903, following physicist Georges-Louis Le Sage, imagined that the universe was filled with an ether of tiny particles that always move at speed c. Each of these particles has a kinetic energy of mc² up to a small numerical factor, giving a mass–energy relation. In 1905, independently of Einstein, French polymath Gustave Le Bon speculated that atoms could release large amounts of latent energy, reasoning from an all-encompassing qualitative philosophy of physics. Electromagnetic mass There were many attempts in the 19th and the beginning of the 20th century—like those of British physicists J. J. Thomson in 1881 and Oliver Heaviside in 1889, and George Frederick Charles Searle in 1897, German physicists Wilhelm Wien in 1900 and Max Abraham in 1902, and the Dutch physicist Hendrik Antoon Lorentz in 1904—to understand how the mass of a charged object depends on the electrostatic field. This concept was called electromagnetic mass, and was considered as being dependent on velocity and direction as well. Lorentz in 1904 gave the following expressions for longitudinal and transverse electromagnetic mass: m_L = m₀/(1 − v²/c²)^(3/2) and m_T = m₀/(1 − v²/c²)^(1/2), where m₀ is the electromagnetic rest mass. Another way of deriving a type of electromagnetic mass was based on the concept of radiation pressure. In 1900, French polymath Henri Poincaré associated electromagnetic radiation energy with a "fictitious fluid" having momentum E/c and mass m = E/c². By that, Poincaré tried to save the center of mass theorem in Lorentz's theory, though his treatment led to radiation paradoxes. Austrian physicist Friedrich Hasenöhrl showed in 1904 that electromagnetic cavity radiation contributes the "apparent mass" m = (8/3)E/c² to the cavity's mass. He argued that this implies mass dependence on temperature as well. Einstein: mass–energy equivalence Einstein did not write the exact formula E = mc² in his 1905 Annus Mirabilis paper "Does the Inertia of an object Depend Upon Its Energy Content?"; rather, the paper states that if a body gives off the energy L by emitting light, its mass diminishes by L/c². This formulation relates only a change in mass to a change in energy without requiring the absolute relationship. The relationship convinced him that mass and energy can be seen as two names for the same underlying, conserved physical quantity. He stated that the laws of conservation of energy and conservation of mass are "one and the same". Einstein elaborated in a 1946 essay that "the principle of the conservation of mass… proved inadequate in the face of the special theory of relativity. It was therefore merged with the energy conservation principle—just as, about 60 years before, the principle of the conservation of mechanical energy had been combined with the principle of the conservation of heat [thermal energy].
We might say that the principle of the conservation of energy, having previously swallowed up that of the conservation of heat, now proceeded to swallow that of the conservation of mass—and holds the field alone." Mass–velocity relationship In developing special relativity, Einstein found that the kinetic energy of a moving body is E_k = γmc² − mc² = mc²(γ − 1), with v the velocity, m the rest mass, and γ the Lorentz factor. He included the second term on the right to make sure that for small velocities the energy would be the same as in classical mechanics, thus satisfying the correspondence principle: for v ≪ c, E_k ≈ (1/2)mv². Without this second term, there would be an additional contribution in the energy when the particle is not moving. Einstein's view on mass Einstein, following Lorentz and Abraham, used velocity- and direction-dependent mass concepts in his 1905 electrodynamics paper and in another paper in 1906. In Einstein's first 1905 paper on E = mc², he treated m as what would now be called the rest mass, and it has been noted that in his later years he did not like the idea of "relativistic mass". In older physics terminology, relativistic energy is used in lieu of relativistic mass and the term "mass" is reserved for the rest mass. Historically, there has been considerable debate over the use of the concept of "relativistic mass" and the connection of "mass" in relativity to "mass" in Newtonian dynamics. One view is that only rest mass is a viable concept and is a property of the particle, while relativistic mass is a conglomeration of particle properties and properties of spacetime. Another view, attributed to Norwegian physicist Kjell Vøyenli, is that the Newtonian concept of mass as a particle property and the relativistic concept of mass have to be viewed as embedded in their own theories and as having no precise connection. Einstein's 1905 derivation Already in his relativity paper "On the electrodynamics of moving bodies", Einstein derived the correct expression for the kinetic energy of particles: E_k = mc²(1/√(1 − v²/c²) − 1). Now the question remained open as to which formulation applies to bodies at rest. This was tackled by Einstein in his paper "Does the inertia of a body depend upon its energy content?", one of his Annus Mirabilis papers. Here, Einstein used c to represent the speed of light in vacuum and L to represent the energy lost by a body in the form of radiation. Consequently, the equation E = mc² was not originally written as a formula but as a sentence in German saying that "if a body gives off the energy L in the form of radiation, its mass diminishes by L/c²." A remark placed above it informed that the equation was approximated by neglecting "magnitudes of fourth and higher orders" of a series expansion. Einstein used a body emitting two light pulses in opposite directions, having energies of E₀ before and E₁ after the emission as seen in its rest frame. As seen from a moving frame, E₀ becomes H₀ and E₁ becomes H₁. Einstein obtained, in modern notation: (H₀ − E₀) − (H₁ − E₁) = L(γ − 1). He then argued that H − E can only differ from the kinetic energy K by an additive constant, which gives K₀ − K₁ = L(γ − 1). Neglecting effects higher than third order in v/c, after a Taylor series expansion of the right side this yields: K₀ − K₁ ≈ (1/2)(L/c²)v² (a symbolic check of this step is given below). Einstein concluded that the emission reduces the body's mass by L/c², and that the mass of a body is a measure of its energy content. The correctness of Einstein's 1905 derivation of E = mc² was criticized by German theoretical physicist Max Planck in 1907, who argued that it is only valid to first approximation.
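A quick symbolic check (a sketch using sympy, not part of the original derivation) that the kinetic-energy difference L(γ − 1) expands at low speed to (1/2)(L/c²)v², the Taylor-expansion step used above:

```python
import sympy as sp

# L(gamma - 1) expanded in v around 0, keeping terms below fourth order,
# reproduces the (1/2)*(L/c^2)*v^2 form: the body behaves as if its mass
# decreased by L/c^2.
L, v, c = sp.symbols("L v c", positive=True)
gamma = 1 / sp.sqrt(1 - v**2 / c**2)
expansion = sp.series(L * (gamma - 1), v, 0, 4).removeO()
print(expansion)   # -> L*v**2/(2*c**2)
```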
Another criticism was formulated by American physicist Herbert Ives in 1952 and the Israeli physicist Max Jammer in 1961, asserting that Einstein's derivation is based on begging the question. Other scholars, such as American and Chilean philosophers John Stachel and Roberto Torretti, have argued that Ives' criticism was wrong, and that Einstein's derivation was correct. American physics writer Hans Ohanian, in 2008, agreed with Stachel/Torretti's criticism of Ives, though he argued that Einstein's derivation was wrong for other reasons. Relativistic center-of-mass theorem of 1906 Like Poincaré, Einstein concluded in 1906 that the inertia of electromagnetic energy is a necessary condition for the center-of-mass theorem to hold. On this occasion, Einstein referred to Poincaré's 1900 paper and wrote: "Although the merely formal considerations, which we will need for the proof, are already mostly contained in a work by H. Poincaré2, for the sake of clarity I will not rely on that work." In Einstein's more physical, as opposed to formal or mathematical, point of view, there was no need for fictitious masses. He could avoid the perpetual motion problem because, on the basis of the mass–energy equivalence, he could show that the transport of inertia that accompanies the emission and absorption of radiation solves the problem. Poincaré's rejection of the principle of action–reaction can be avoided through Einstein's E = mc², because mass conservation appears as a special case of the energy conservation law. Further developments There were several further developments in the first decade of the twentieth century. In May 1907, Einstein explained that the expression for energy ε of a moving mass point assumes the simplest form when its expression for the state of rest is chosen to be ε₀ = μc² (where μ is the mass), which is in agreement with the "principle of the equivalence of mass and energy". In addition, Einstein used the formula μ = E₀/c², with E₀ being the energy of a system of mass points, to describe the energy and mass increase of that system when the velocity of the differently moving mass points is increased. Max Planck rewrote Einstein's mass–energy relationship as M = (E₀ + pV₀)/c² in June 1907, where p is the pressure and V₀ the volume, to express the relation between mass, its latent energy, and thermodynamic energy within the body. Subsequently, in October 1907, this was rewritten as M₀ = E₀/c² and given a quantum interpretation by German physicist Johannes Stark, who assumed its validity and correctness. In December 1907, Einstein expressed the equivalence in the form M = μ + E₀/c² and concluded: "A mass μ is equivalent, as regards inertia, to a quantity of energy μc². […] It appears far more natural to consider every inertial mass as a store of energy." American physical chemists Gilbert N. Lewis and Richard C. Tolman used two variations of the formula in 1909: m = E/c² and m₀ = E₀/c², with E being the relativistic energy (the energy of an object when the object is moving), E₀ the rest energy (the energy when not moving), m the relativistic mass (the rest mass and the extra mass gained when moving), and m₀ the rest mass. The same relations in different notation were used by Lorentz in 1913 and 1914, though he placed the energy on the left-hand side: ε = Mc² and ε₀ = mc², with ε being the total energy (rest energy plus kinetic energy) of a moving material point, ε₀ its rest energy, M the relativistic mass, and m the invariant mass. In 1911, German physicist Max von Laue gave a more comprehensive proof of M₀ = E₀/c² from the stress–energy tensor, which was later generalized by German mathematician Felix Klein in 1918.
Einstein returned to the topic once again after World War II, and this time he wrote E = mc² in the title of his article intended as an explanation for a general reader by analogy. Alternative version An alternative version of Einstein's thought experiment was proposed by American theoretical physicist Fritz Rohrlich in 1990, who based his reasoning on the Doppler effect. Like Einstein, he considered a body at rest with mass m. If the body is examined in a frame moving with nonrelativistic velocity v, it is no longer at rest, and in the moving frame it has momentum p = mv. Then he supposed the body emits two pulses of light to the left and to the right, each carrying an equal amount of energy E/2. In its rest frame, the object remains at rest after the emission since the two beams are equal in strength and carry opposite momentum. However, if the same process is considered in a frame that moves with velocity v to the left, the pulse moving to the left is redshifted, while the pulse moving to the right is blueshifted. The blue light carries more momentum than the red light, so that the momentum of the light in the moving frame is not balanced: the light is carrying some net momentum to the right. The object has not changed its velocity before or after the emission. Yet in this frame it has lost some right-momentum to the light. The only way it could have lost momentum is by losing mass. This also solves Poincaré's radiation paradox. The velocity is small, so the right-moving light is blueshifted by an amount equal to the nonrelativistic Doppler shift factor 1 + v/c. The momentum of the light is its energy divided by c, and it is increased by a factor of v/c. So the right-moving light is carrying an extra momentum Δp given by: Δp = (v/c)(E/2)/c = (E/2c²)v. The left-moving light carries a little less momentum, by the same amount Δp. So the total right-momentum in both light pulses is twice Δp. This is the right-momentum that the object lost. The momentum of the object in the moving frame after the emission is reduced to this amount: p′ = mv − 2Δp = (m − E/c²)v. So the change in the object's mass is equal to the total energy lost divided by c². Since any emission of energy can be carried out by a two-step process, where first the energy is emitted as light and then the light is converted to some other form of energy, any emission of energy is accompanied by a loss of mass. Similarly, by considering absorption, a gain in energy is accompanied by a gain in mass. Radioactivity and nuclear energy It was quickly noted after the discovery of radioactivity in 1897 that the total energy due to radioactive processes is about one million times greater than that involved in any known molecular change, raising the question of where the energy comes from. After eliminating the idea of absorption and emission of some sort of Lesagian ether particles, the existence of a huge amount of latent energy, stored within matter, was proposed by New Zealand physicist Ernest Rutherford and British radiochemist Frederick Soddy in 1903. Rutherford also suggested that this internal energy is stored within normal matter as well. He went on to speculate in 1904: "If it were ever found possible to control at will the rate of disintegration of the radio-elements, an enormous amount of energy could be obtained from a small quantity of matter." Einstein's equation does not explain the large energies released in radioactive decay, but can be used to quantify them.
The theoretical explanation for radioactive decay is given by the nuclear forces responsible for holding nuclei together, though these forces were still unknown in 1905. The enormous energy released from radioactive decay had previously been measured by Rutherford and was much more easily measured than the small change in the gross mass of materials as a result. Einstein's equation, in theory, can give these energies by measuring mass differences before and after reactions, but in practice, these mass differences in 1905 were still too small to be measured in bulk. Prior to this, the ease of measuring radioactive decay energies with a calorimeter was thought likely to allow measurement of changes in mass difference, as a check on Einstein's equation itself. Einstein mentions in his 1905 paper that mass–energy equivalence might perhaps be tested with radioactive decay, which was known by then to release enough energy to possibly be "weighed," when missing from the system. However, radioactivity seemed to proceed at its own unalterable pace, and even when simple nuclear reactions became possible using proton bombardment, the idea that these great amounts of usable energy could be liberated at will with any practicality proved difficult to substantiate. Rutherford was reported in 1933 to have declared that this energy could not be exploited efficiently: "Anyone who expects a source of power from the transformation of the atom is talking moonshine." This outlook changed dramatically in 1932 with the discovery of the neutron and its mass, allowing mass differences for single nuclides and their reactions to be calculated directly, and compared with the sum of masses for the particles that made up their composition. In 1933, the energy released from the reaction of lithium-7 plus protons giving rise to two alpha particles allowed Einstein's equation to be tested to an error of ±0.5%. However, scientists still did not see such reactions as a practical source of power, due to the energy cost of accelerating reaction particles. After the very public demonstration of huge energies released from nuclear fission after the atomic bombings of Hiroshima and Nagasaki in 1945, the equation became directly linked in the public eye with the power and peril of nuclear weapons. The equation was featured on page 2 of the Smyth Report, the official 1945 release by the US government on the development of the atomic bomb, and by 1946 the equation was linked closely enough with Einstein's work that the cover of Time magazine prominently featured a picture of Einstein next to an image of a mushroom cloud emblazoned with the equation. Einstein himself had only a minor role in the Manhattan Project: he had cosigned a letter to the U.S. president in 1939 urging funding for research into atomic energy, warning that an atomic bomb was theoretically possible. The letter persuaded Roosevelt to devote a significant portion of the wartime budget to atomic research. Without a security clearance, Einstein's only scientific contribution was an analysis of an isotope separation method in theoretical terms. It was inconsequential, because Einstein was not given sufficient information to fully work on the problem. While E = mc² is useful for understanding the amount of energy potentially released in a fission reaction, it was not strictly necessary to develop the weapon, once the fission process was known, and its energy measured at 200 MeV (which was directly possible, using a quantitative Geiger counter, at that time). 
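The 1933 lithium test mentioned above can be reproduced with a short calculation from E = mc². The following Python sketch uses modern tabulated atomic masses; the numerical values are assumptions taken from standard tables, not figures given in this article:

# Sketch: energy released in 7Li + p -> 2 alpha from mass differences,
# i.e. Q = (mass before - mass after) * c^2.
# Atomic masses in unified mass units (u), assumed from standard tables;
# the electron masses cancel in the balance.
M_LI7 = 7.016003   # u, lithium-7
M_H1  = 1.007825   # u, hydrogen-1
M_HE4 = 4.002602   # u, helium-4

U_TO_MEV = 931.494  # MeV released per u of mass defect (rounded CODATA value)

mass_defect = M_LI7 + M_H1 - 2 * M_HE4      # mass lost in the reaction
q_value = mass_defect * U_TO_MEV            # E = (delta m) * c^2

print(f"mass defect: {mass_defect:.6f} u")      # ~0.018624 u
print(f"energy released: {q_value:.2f} MeV")    # ~17.35 MeV per reaction

The result, roughly 17 MeV shared between the two alpha particles, is the kind of figure against which the 1933 measurements checked the equation.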
The physicist and Manhattan Project participant Robert Serber noted that somehow "the popular notion took hold long ago that Einstein's theory of relativity, in particular his equation E = mc², plays some essential role in the theory of fission. Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." There are other views on the equation's importance to nuclear reactions. In late 1938, the Austrian-Swedish and British physicists Lise Meitner and Otto Robert Frisch—while on a winter walk during which they solved the meaning of Otto Hahn's experimental results and introduced the idea that would be called atomic fission—directly used Einstein's equation to help them understand the quantitative energetics of the reaction that overcame the "surface tension-like" forces that hold the nucleus together, and allowed the fission fragments to separate to a configuration from which their charges could force them into an energetic fission. To do this, they used packing fraction, or nuclear binding energy values for elements. These, together with use of E = mc², allowed them to realize on the spot that the basic fission process was energetically possible. Einstein's equation written According to the Einstein Papers Project at the California Institute of Technology and Hebrew University of Jerusalem, there remain only four known copies of this equation as written by Einstein. One of these is a letter written in German to Ludwik Silberstein, which was in Silberstein's archives and sold at auction for $1.2 million, according to RR Auction of Boston, Massachusetts, on May 21, 2021.
Physical sciences
Basics_6
null
422569
https://en.wikipedia.org/wiki/Galena
Galena
Galena, also called lead glance, is the natural mineral form of lead(II) sulfide (PbS). It is the most important ore of lead and an important source of silver. Galena is one of the most abundant and widely distributed sulfide minerals. It crystallizes in the cubic crystal system, often showing octahedral forms. It is often associated with the minerals sphalerite, calcite and fluorite. Occurrence Galena is the main ore of lead, used since ancient times, as lead can be smelted from galena in an ordinary wood fire. Galena typically is found in hydrothermal veins in association with sphalerite, marcasite, chalcopyrite, cerussite, anglesite, dolomite, calcite, quartz, barite, and fluorite. It is also found in association with sphalerite in low-temperature lead-zinc deposits within limestone beds. Minor amounts are found in contact metamorphic zones, in pegmatites, and disseminated in sedimentary rock. In some deposits, the galena contains up to 0.5% silver, a byproduct that far surpasses the main lead ore in revenue. In these deposits significant amounts of silver occur as included silver sulfide mineral phases or as limited silver in solid solution within the galena structure. These argentiferous galenas have long been an important ore of silver. Silver-bearing galena is almost entirely of hydrothermal origin; galena in lead-zinc deposits contains little silver. Galena deposits are found worldwide in various environments. Noted deposits include those at Freiberg in Saxony; Cornwall, the Mendips in Somerset, Derbyshire, and Cumberland in England; the Linares mines in Spain, worked from before Roman times until the end of the 20th century; the Madan and Rhodope Mountains in Bulgaria; the Sullivan Mine of British Columbia; Broken Hill and Mount Isa in Australia; and the ancient mines of Sardinia. In the United States, it occurs most notably as lead-zinc ore in the Mississippi Valley type deposits of the Lead Belt in southeastern Missouri, which is the largest known deposit, and in the Driftless Area of Illinois, Iowa and Wisconsin, providing the origin of the name of Galena, Illinois, a historical settlement known for the material. Galena also was a major mineral of the zinc-lead mines of the tri-state district around Joplin in southwestern Missouri and the adjoining areas of Kansas and Oklahoma. Galena is also an important ore mineral in the silver mining regions of Colorado, Idaho, Utah and Montana. Of these, the Coeur d'Alene district of northern Idaho was the most prominent. Australia is the world's leading producer of lead as of 2021, most of which is extracted as galena. Argentiferous galena was accidentally discovered at Glen Osmond in 1841, and additional deposits were discovered near Broken Hill in 1876 and at Mount Isa in 1923. Most galena in Australia is found in hydrothermal deposits emplaced around 1680 million years ago, which have since been heavily metamorphosed. The largest documented crystal of galena is a composite cubo-octahedron from the Great Laxey Mine, Isle of Man, measuring . Importance Galena is the official state mineral of the U.S. states of Kansas, Missouri, and Wisconsin; the former mining communities of Galena, Kansas, Galena, Illinois, Galena, South Dakota and Galena, Alaska, take their names from deposits of this mineral. Structure Galena belongs to the octahedral sulfide group of minerals that have metal ions in octahedral positions, such as the iron sulfide pyrrhotite and the nickel arsenide niccolite. 
The galena group is named after its most common member, with other isometric members that include manganese-bearing alabandite and niningerite. Divalent lead (Pb) cations and sulfur (S) anions form a close-packed cubic unit cell much like the mineral halite of the halide mineral group. Zinc, cadmium, iron, copper, antimony, arsenic, bismuth and selenium also occur in variable amounts in galena. Selenium substitutes for sulfur in the structure, constituting a solid-solution series. The lead telluride mineral altaite has the same crystal structure as galena. Geochemistry Within the weathering or oxidation zone, galena alters to anglesite (lead sulfate) or cerussite (lead carbonate). Galena exposed to acid mine drainage can be oxidized to anglesite by naturally occurring bacteria and archaea, in a process similar to bioleaching. Uses One of the oldest uses of galena was to produce kohl, an eye cosmetic now regarded as toxic due to the risk of lead poisoning. In Ancient Egypt, this was applied around the eyes to reduce the glare of the desert sun and to repel flies, which were a potential source of disease. In pre-Columbian North America, galena was used by indigenous peoples as an ingredient in decorative paints and cosmetics, and widely traded throughout the eastern United States. Traces of galena are frequently found at the Mississippian city at Kincaid Mounds in present-day Illinois. The galena used at the site originated from deposits in southeastern and central Missouri and the Upper Mississippi Valley. Galena is the primary ore of lead, and is often mined for its silver content. It is used as a source of lead in ceramic glaze. Galena is a semiconductor with a small band gap of about 0.4 eV, which found use in early wireless communication systems. It was used as the crystal in crystal radio receivers, in which it served as a point-contact diode capable of rectifying alternating current to detect the radio signals. A sharp wire, known as a "cat's whisker", was held in contact with the galena crystal. In modern times, galena is primarily processed to extract its constituent metals. In addition to silver, it is the most important source of lead, for uses such as in lead-acid batteries.
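The halite-like cubic cell described in the Structure section above fixes galena's bulk density. The Python sketch below assumes commonly cited textbook values for the lattice parameter and molar masses; they are not figures from this article:

# Sketch: density of galena from its face-centred cubic, halite-like cell.
AVOGADRO = 6.02214e23      # formula units per mole
A_CELL   = 5.936e-8        # cm, commonly cited galena lattice parameter
Z        = 4               # PbS formula units per cubic unit cell
M_PBS    = 207.2 + 32.06   # g/mol, molar mass of Pb plus S

density = Z * M_PBS / (AVOGADRO * A_CELL**3)
print(f"{density:.2f} g/cm^3")   # ~7.6, close to galena's measured density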
Physical sciences
Minerals
Earth science
422698
https://en.wikipedia.org/wiki/Decomposer
Decomposer
Decomposers are organisms that break down dead organisms and release the nutrients from the dead matter into the environment around them. Decomposition relies on chemical processes similar to digestion in animals; in fact, many sources use the words digestion and decomposition interchangeably. In both processes, complex molecules are chemically broken down by enzymes into simpler, smaller ones. The term "digestion," however, is commonly used to refer to food breakdown that occurs within animal bodies, and results in the absorption of nutrients from the gut into the animal's bloodstream. This is contrasted with external digestion, meaning that, rather than swallowing food and then digesting it using enzymes located within a GI tract, an organism instead releases enzymes directly onto the food source. After allowing the enzymes time to digest the material, the decomposer then absorbs the nutrients from the environment into its cells. Decomposition is often erroneously conflated with this process of external digestion, probably because of the strong association between fungi, which are external digesters, and decomposition. The term "decomposer" refers to a role in an ecosystem, not to a particular class or type of organism, or even to a specific capacity of those organisms. The definition of "decomposer" therefore centers on the outcome of the decomposition process, rather than the types of organisms performing it. At the center of this definition are the organisms that benefit most directly from the increase in nutrient availability that results from decomposition; plants and other non-mobile (sessile) autotrophs cannot travel to seek out nutrients, and most cannot digest other organisms themselves. They must therefore rely on decomposers to free up nutrients from dead matter that they can then absorb. Note that this definition does not focus on where digestion takes place (i.e. inside or outside of an organism's body), but rather on where the products of that digestion end up. "Decomposer" as a category, therefore, would include not just fungi and bacteria, which perform external digestion, but also invertebrates such as earthworms, woodlice, and sea cucumbers that digest dead matter internally and release nutrients locally via their feces. In some definitions of decomposition that center on the means and location of digestion, these invertebrates, which digest their food internally, are set apart from decomposers and placed in a separate category called detritivores. These categories are not, in fact, mutually exclusive. "Detritivore" describes behavior and physiology, while "decomposer" describes an ecosystem role. Therefore, an organism can be both a detritivore and a decomposer. While there are also purely physical processes, like weathering and ultraviolet light, that contribute to decomposition, "decomposer" refers only to living organisms that contribute to the process, whether by physical or chemical breakdown of dead matter. Terrestrial decomposers In terrestrial environments, decomposition happens mainly in or on soil, and decomposers' activities lead to increased soil fertility. The main nutrients plants have to derive from soils are nitrogen, phosphorus, and potassium, and all three have to be available in forms that are accessible to and absorbable by the plants. Decomposition is the process of breaking large molecules in dead matter down into smaller molecules that nearby plants are able to take up through their roots. 
Some steps of the process occur via mechanical grinding and churning by things like earthworms and plant roots in a process called bioturbation. Further breakdown, beyond those physical means, requires the presence of enzymes. The paired processes are akin to what occurs in mammal digestive tracts: food is mechanically ground up by teeth, and then chemically broken down by enzymes. A given organism's ability to contribute to decomposition is largely dependent on what enzymes that organism possesses. Enzymes for the digestion of molecules like fats, proteins, and starch are widespread, and many organisms, from microbes to mammals, have them. As a result, those molecules are the first to decompose in the environment. Cellulose in dead plants is broken down by cellulase enzymes, which are present in far fewer organisms, and the enzymes needed to digest lignin, a chemically complex molecule in woody trees and shrubs, are present in fewer still. Fungi The primary decomposers of litter in many ecosystems are fungi. Unlike bacteria, which are unicellular organisms and are decomposers as well, most saprotrophic fungi grow as a branching network of hyphae. Bacteria are restricted to growing and feeding on the exposed surfaces of organic matter, but fungi can use their hyphae to penetrate larger pieces of organic matter below the surface. Additionally, only wood-decay fungi have evolved the lignin-modifying enzymes necessary to decompose lignin. These two factors make fungi the primary decomposers in forests, where litter has high concentrations of lignin and often occurs in large pieces like fallen trees and branches. Fungi decompose organic matter by releasing enzymes to break down the decaying material, after which they absorb the nutrients in the decaying material. Hyphae are used to break down matter and absorb nutrients and are also used in reproduction.
Biology and health sciences
Ecology
Biology
422936
https://en.wikipedia.org/wiki/Vector%20potential
Vector potential
In vector calculus, a vector potential is a vector field whose curl is a given vector field. This is analogous to a scalar potential, which is a scalar field whose gradient is a given vector field. Formally, given a vector field v, a vector potential is a vector field A such that ∇ × A = v. Consequence If a vector field v admits a vector potential A, then from the equality ∇ ⋅ (∇ × A) = 0 (divergence of the curl is zero) one obtains ∇ ⋅ v = 0, which implies that v must be a solenoidal vector field. Theorem Let v be a solenoidal vector field which is twice continuously differentiable. Assume that v(x) decreases at least as fast as 1/‖x‖ for ‖x‖ → ∞. Define A(x) = (1/4π) ∇ₓ × ∫ v(y)/‖x − y‖ d³y, where ∇ₓ × denotes curl with respect to the variable x. Then A is a vector potential for v. That is, ∇ × A = v. The integral domain can be restricted to any simply connected region Ω. That is, A′(x) = (1/4π) ∇ₓ × ∫_Ω v(y)/‖x − y‖ d³y also is a vector potential of v. A generalization of this theorem is the Helmholtz decomposition theorem, which states that any vector field can be decomposed as a sum of a solenoidal vector field and an irrotational vector field. By analogy with the Biot-Savart law, A″(x) = (1/4π) ∫_Ω v(y) × (x − y)/‖x − y‖³ d³y also qualifies as a vector potential for v. Substituting the current density j for v and the H-field H for A″ yields the Biot-Savart law. Let U be a star domain centered at the point p. Applying Poincaré's lemma for differential forms to vector fields, then A(x) = ∫₀¹ t v(p + t(x − p)) × (x − p) dt also is a vector potential for v. Nonuniqueness The vector potential admitted by a solenoidal field is not unique. If A is a vector potential for v, then so is A + ∇f, where f is any continuously differentiable scalar function. This follows from the fact that the curl of the gradient is zero. This nonuniqueness leads to a degree of freedom in the formulation of electrodynamics, or gauge freedom, and requires choosing a gauge.
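The theorem and the gauge freedom can be checked symbolically in a simple case. This Python sketch uses SymPy's vector module on the constant solenoidal field v = B k, a hypothetical example field chosen here rather than one from the article; for a constant field, the Poincaré-lemma construction above reduces to A = (1/2) v × r:

# Sketch: verify that A = (1/2) v x r is a vector potential of the
# constant field v = B k, and that adding a gradient changes nothing.
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence, gradient

N = CoordSys3D('N')
B = sp.symbols('B')

v = B * N.k                                # constant field, divergence-free
assert divergence(v) == 0

r = N.x*N.i + N.y*N.j + N.z*N.k            # position vector
A = sp.Rational(1, 2) * v.cross(r)         # Poincare-lemma construction
assert curl(A) == v                        # A is a vector potential of v

f = sp.sin(N.x * N.y)                      # arbitrary smooth scalar field
A2 = A + gradient(f)                       # gauge-transformed potential
assert curl(A2) == v                       # curl of a gradient vanishes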
Physical sciences
Classical mechanics
Physics
423063
https://en.wikipedia.org/wiki/Season%20extension
Season extension
Season extension in agriculture is any method that allows a crop to be grown beyond its normal outdoor growing season and harvesting time frame, or the extra time thus achieved. To extend the growing season into the colder months, one can use unheated techniques such as floating row covers, low tunnels, caterpillar tunnels, or hoophouses. However, even if colder temperatures are mitigated, most crops will stop growing when the days become shorter than 10 hours, and resume after winter as the daylight increases above 10 hours. A hothouse — a greenhouse which is heated and illuminated — creates an environment where plants are fooled into thinking it is their normal growing season. Though this is a form of season extension for the grower, it is not the usual meaning of the term. Season extension can apply to other climates, where conditions other than cold and a shortened period of sunlight end the growing year (e.g. a rainy season). Structures Unheated greenhouses (also known as cold houses) offer protection from the weather, such as sub-optimal temperatures, freezing or drying winds, damaging wind gusts, frost, snow and ice. Unheated greenhouses can extend the growing season of cold-hardy vegetables well into the fall and sometimes even through winter until spring. Sometimes supplementary heating is appropriate when temperatures inside the greenhouse drop below 32 degrees Fahrenheit. Passively heated or low-energy greenhouses: Using principles of passive solar building design and including thermal mass will help keep an otherwise unheated greenhouse several degrees warmer at night and on overcast days. Other systems such as ground-coupled heat exchangers, thermal chimneys, thermosiphons, or "climate batteries" can also be used to take ground-stored heat and use it to help heat a greenhouse. Polytunnels (hoop houses): Whereas a greenhouse has a frame and is glazed with glass or stiff polycarbonate sheets, polytunnels are built with thin polyethylene plastic sheeting stretched over curved frameworks, often extending as long "tunnels". Low tunnels are short enough that a person cannot walk inside them, perhaps 2 to 4 feet tall, and the plastic must be lifted to access the plants. High tunnels are commercial-sized buildings, tall enough to walk through without bending and sometimes tall enough to operate tractors inside. Sometimes polytunnels are built with two layers of plastic sheeting and air blown in between them; this increases the insulation factor, but also cuts down on the amount of sunlight reaching the plants. Row covers are lightweight fabrics placed over plants to retain heat and can provide several degrees of frost protection. Row covers, being fabric, allow rain to permeate the material, and also allow plants to transpire without holding in the moisture (as happens under plastic sheeting). Row cover material can be laid directly onto the crop (floating row covers), or laid over a framework of hoops or wires. Row covers can be set up outside of any protective structure or placed over crops within high tunnels or greenhouses. In their simplest function, row covers allow a light frost to form on the cover instead of on the leaves beneath. Outside row covers must be clipped or pinned in place or weighted down on the edges. Inside row covers may be draped to the ground without further attachment. Cold frames are transparent-roofed enclosures, built low to the ground, used to protect plants from cold weather. Cold frames are found in home gardens and in vegetable farming. 
They are most often used for growing seedlings that are later transplanted into open ground. A typical cold frame has traditionally been a rectangle of framing lumber with an old window placed over it. Since the advent of plastic sheeting, it is often used instead of old windows. Temporary coverings: In smaller gardens almost any type of cover, including glass cloches, newspaper cones, baskets, miscellaneous bits of plastic, and mulches such as hay, leaves, or straw, can be used as frost protection that is pulled on and off each day when frost is likely to occur overnight. Other methods Hotbeds: a mass of hot compost is used for the heat it gives off to warm a nearby plant. Typically a few centimetres of soil are placed on top of the compost mass, and the plant grows there, above the rising heat. Mulches: many materials placed on the soil around plants help retain heat. Organic mulches include straw, compost, etc. Synthetic mulch, typically plastic sheeting with slits through which plants grow, is used extensively in large-scale vegetable growing. Black plastic may absorb more solar heat, while clear plastic may provide a greenhouse effect; both concepts are touted in discussions of mulching, usually without citations of any field trials that might clarify which to choose. Organic mulches, in addition to retaining heat by insulating, can potentially also add some heat from their decomposition, although they must be properly chosen, as factors such as thermal or chemical "burning" (excess heat, acidity, or both) and coliform bacteria accompany animal manure used as row-crop mulch. One principle involved is to prefer aged compost over fresh compost for this purpose, as its earlier predigestion by soil microbes ends the early phase of intense heat, low pH, and gut-bacteria dominance but still leaves a bit more exothermic potential available. Raised beds: beds where the soil has been loosened and piled a few inches to more than a foot above the surrounding area heat more quickly in spring, allowing earlier planting.
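The 10-hour daylight threshold mentioned at the start of this article can be estimated for a given site with the standard sunrise equation. The Python sketch below uses a simple cosine approximation for solar declination; the latitude and dates are hypothetical examples, not values from the article:

# Sketch: estimate when day length drops below the ~10-hour growth threshold.
import math

def day_length_hours(latitude_deg: float, day_of_year: int) -> float:
    """Approximate day length in hours via the sunrise equation."""
    # Solar declination in degrees, simple cosine approximation:
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    phi, delta = math.radians(latitude_deg), math.radians(decl)
    cos_h = -math.tan(phi) * math.tan(delta)
    cos_h = max(-1.0, min(1.0, cos_h))   # clamp for polar day and night
    half_day_deg = math.degrees(math.acos(cos_h))  # sunset hour angle
    return 2.0 * half_day_deg / 15.0     # 15 degrees of hour angle per hour

# Example: find the autumn day the 10-hour cutoff is crossed at 45 degrees N.
for day in range(265, 366):
    if day_length_hours(45.0, day) < 10.0:
        print(f"day length first drops below 10 h around day {day}")
        break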
Technology
Basics_2
null
15295535
https://en.wikipedia.org/wiki/Snow%20leopard
Snow leopard
The snow leopard (Panthera uncia) is a species of large cat in the genus Panthera of the family Felidae. The species is native to the mountain ranges of Central and South Asia. It is listed as Vulnerable on the IUCN Red List because the global population is estimated to number fewer than 10,000 mature individuals and is expected to decline about 10% by 2040. It is mainly threatened by poaching and habitat destruction following infrastructural developments. It inhabits alpine and subalpine zones at elevations of , ranging from eastern Afghanistan, the Himalayas and the Tibetan Plateau to southern Siberia, Mongolia and western China. In the northern part of its range, it also lives at lower elevations. Taxonomically, the snow leopard was long classified in the monotypic genus Uncia. After phylogenetic studies revealed the relationships among Panthera species, it has been considered a member of that genus. Two subspecies were described based on morphological differences, but genetic differences between the two have not yet been confirmed. It is therefore regarded as a monotypic species. The species is widely depicted in Kyrgyz culture. Naming and etymology The Latin name uncia and the English word ounce both originate from the Old French word once, which was originally used for the Eurasian lynx (Lynx lynx). Once is believed to have derived from an earlier form of the word lynx through a process known as false splitting: the word was understood to be pronounced as l'once, where l' stands for the elided form of the French word la ('the'), and once was then understood to be the name of the animal. The word panther derives from the classical Latin panthēra, itself from the ancient Greek πάνθηρ pánthēr, which was used for spotted cats. Taxonomy Felis uncia was the scientific name used by Johann Christian Daniel von Schreber in 1777, who described a snow leopard based on an earlier description by Georges-Louis Leclerc, Comte de Buffon, assuming that the cat occurred along the Barbary Coast, in Persia, East India and China. The genus name Uncia was proposed by John Edward Gray in 1854 for Asian cats with a long and thick tail. Felis irbis, proposed by Christian Gottfried Ehrenberg in 1830, was based on a skin of a female snow leopard collected in the Altai Mountains. He also clarified that several leopard (P. pardus) skins were previously misidentified as snow leopard skins. Felis uncioides, proposed by Thomas Horsfield in 1855, was based on a snow leopard skin from Nepal in the collection of the Museum of the East India Company. Uncia uncia was used by Reginald Innes Pocock in 1930 when he reviewed skins and skulls of Panthera species from Asia. He also described morphological differences between snow leopard and leopard skins. Panthera baikalensis-romanii, proposed by a Russian scientist in 2000, was based on a dark brown snow leopard skin from the Petrovsk-Zabaykalsky District in southern Transbaikal. The snow leopard was long classified in the monotypic genus Uncia. It was subordinated to the genus Panthera based on the results of phylogenetic studies. Until spring 2017, there was no evidence available for the recognition of subspecies. Results of a phylogeographic analysis indicate that three subspecies should be recognised: P. u. uncia in the range countries of the Pamir Mountains, P. u. irbis in Mongolia, and P. u. uncioides in the Himalayas and Qinghai. This view has been both contested and supported by different researchers. 
Two possible European paleosubspecies have been named in the 2020s, Panthera uncia pyrenaica from France and Panthera uncia lusitana from Portugal, but the subspecific validity of the former is uncertain. Evolution Based on phylogenetic analysis of DNA sequences sampled across the living Felidae, the snow leopard forms a sister group with the tiger (P. tigris). The genetic divergence time of this group is estimated at . The snow leopard and the tiger probably diverged between . Panthera most likely originated in northern Central Asia. Panthera blytheae, excavated in western Tibet's Ngari Prefecture, was initially described as the oldest known Panthera species and exhibits skull characteristics similar to the snow leopard, though its taxonomic placement has been disputed by other researchers who suggest that the species likely belongs to a different genus. The mitochondrial genomes of the snow leopard, the leopard and the lion (P. leo) are more similar to each other than their nuclear genomes, indicating that their ancestors hybridised at some point in their evolution. The earliest known definitive record of the modern snow leopard is dated to the Late Pleistocene based on a specimen discovered in the Niuyan Cave of China. A Middle Pleistocene specimen from the Zhoukoudian Peking Man Site which is similar to the modern snow leopard has been referred to as P. aff. uncia. Putative fossils of the snow leopard found in the Pabbi Hills of Pakistan were dated to the Early Pleistocene, but the fossils might instead represent a leopard or belong to the genus Puma. It has also been suggested that the snow leopard had European paleosubspecies during the Middle Pleistocene. Panthera uncia pyrenaica was described in 2022 based on fossil material found in France that was dated to the early Middle Pleistocene around . Panthera uncia lusitana was described in 2025 based on fossil material discovered in Middle Pleistocene strata in Portugal, and the describers of P. u. lusitana assigned P. u. pyrenaica outside the modern snow leopard as P. pyrenaica due to the lack of similar traits, though it might represent a related basal species. Characteristics The snow leopard's fur is whitish to grey with black spots on the head and neck, with larger rosettes on the back, flanks and bushy tail. Its muzzle is short, its forehead domed, and its nasal cavities are large. The fur is thick with hairs measuring in length, and its underbelly is whitish. They are stocky, short-legged, and slightly smaller than other cats of the genus Panthera, reaching a shoulder height of , and ranging in head to body size from . Its tail is long. Males average , and females . Occasionally, large males reaching have been recorded, and small females under . Its canine teeth are long and are more slender than those of the other Panthera species. The snow leopard shows several adaptations for living in cold, mountainous environments. Its small rounded ears help to minimize heat loss, and its broad paws effectively distribute the body weight for walking on snow. Fur on the undersides of the paws enhances its grip on steep and unstable surfaces, and helps to minimize heat loss. Its long and flexible tail helps the cat to balance in rocky terrain. The tail is very thick due to fat storage, and is covered in a thick layer of fur, which allows the cat to use it like a blanket to protect its face when asleep. 
The snow leopard differs from the other Panthera species by a shorter muzzle, an elevated forehead, a vertical chin and a less developed posterior process of the lower jaw. Despite its partly ossified hyoid bone, a snow leopard cannot roar, as its short vocal folds provide little resistance to airflow. Its nasal openings are large in relation to the length of its skull and width of its palate; thanks to their size, the volume of air inhaled with each breath is optimised, and the cold dry air becomes warmer. It is not especially adapted to high-altitude hypoxia. Distribution and habitat The snow leopard is distributed from the west of Lake Baikal through southern Siberia, in the Kunlun Mountains, Altai Mountains, Sayan and Tannu-Ola Mountains, in the Tian Shan, through Tajikistan, Kyrgyzstan, Uzbekistan and Kazakhstan to the Hindu Kush in eastern Afghanistan, Karakoram in northern Pakistan, in the Pamir Mountains, the Tibetan Plateau and in the high elevations of the Himalayas in India, Nepal and Bhutan. In Mongolia, snow leopards inhabit the Mongolian and Gobi Altai Mountains and the Khangai Mountains. In Tibet, they occur up to the Altyn-Tagh in the north. They inhabit alpine and subalpine zones at elevations of , but also live at lower elevations in the northern part of their range. Potential snow leopard habitat in the Indian Himalayas is estimated at less than in Jammu and Kashmir, Ladakh, Uttarakhand, Himachal Pradesh, Sikkim and Arunachal Pradesh, of which about is considered good habitat, and 14.4% is protected. In the beginning of the 1990s, the Indian snow leopard population was estimated at 200–600 individuals living across about 25 protected areas. The Snow Leopard Population Assessment in India (SPAI) Programme counted snow leopards between 2019 and 2023 and found their number to be 718, with 477 in Ladakh, 124 in Uttarakhand, 51 in Himachal Pradesh, 36 in Arunachal Pradesh, 21 in Sikkim, and nine in Jammu and Kashmir. In summer, the snow leopard usually lives above the tree line on alpine meadows and in rocky regions at elevations of . In winter, it descends to elevations around . It prefers rocky, broken terrain, and can move in deep snow, but prefers to use existing trails made by other animals. Snow leopards were recorded by camera traps at 16 locations in northeastern Afghanistan's isolated Wakhan Corridor. Behavior and ecology The snow leopard's vocalizations include meowing, grunting, prusten and moaning. It can purr when exhaling. It is solitary and mostly active from dawn till early morning, and again in afternoons and early evenings. It mostly rests near cliffs and ridges that provide vantage points and shade. In Nepal's Shey Phoksundo National Park, the home ranges of five adult radio-collared snow leopards largely overlapped, though they rarely met. Their individual home ranges ranged from . Males moved between per day, and females between , measured in straight lines between survey points. Since they often zigzagged in the precipitous terrain, they actually moved up to in a single night. Up to 10 individuals inhabit an area of ; in habitats with sparse prey, an area of usually supports only five individuals. A study in the Gobi Desert from 2008 to 2014 revealed that adult males used a mean home range of , while adult females ranged in areas of . Their home ranges overlapped less than 20%. These results indicate that about 40% of the 170 protected areas in their range countries are smaller than the home range of a single male snow leopard. 
Snow leopards leave scent marks to indicate their territories and common travel routes. They scrape the ground with the hind feet before depositing urine or feces, but also spray urine onto rocks. Their urine contains many characteristic low molecular weight compounds with diverse functional groups, including pentanol, hexanol, heptanol, 3-octanone, nonanal and indole, which possibly play a role in chemical communication. Hunting and diet The snow leopard is a carnivore and actively hunts its prey. Its preferred wild prey species are Himalayan blue sheep (Pseudois nayaur), Himalayan tahr (Hemitragus jemlahicus), argali (Ovis ammon), markhor (Capra falconeri) and wild goat (C. aegagrus). It also preys on domestic livestock. It prefers prey ranging in weight from , but also hunts smaller mammals such as Himalayan marmot (Marmota himalayana), pika and vole species. Its diet depends on prey availability and varies across its range and season. In the Himalayas, it preys mostly on Himalayan blue sheep, Siberian ibex (C. sibirica), white-bellied musk deer (Moschus leucogaster) and wild boar (Sus scrofa). In the Karakoram, Tian Shan, Altai and Mongolia's Tost Mountains, its main prey consists of Siberian ibex, Thorold's deer (Cervus albirostris), Siberian roe deer (Capreolus pygargus) and argali. Snow leopard feces collected in northern Pakistan also contained remains of rhesus macaque (Macaca mulatta), masked palm civet (Paguma larvata), Cape hare (Lepus capensis), house mouse (Mus musculus), Kashmir field mouse (Apodemus rusiges), grey dwarf hamster (Cricetulus migratorius) and Turkestan rat (Rattus pyctoris). In 2017, a snow leopard was photographed carrying a freshly killed woolly flying squirrel (Eupetaurus cinereus) near Gangotri National Park. In Mongolia, domestic sheep comprise less than 20% of its diet, although wild prey has been reduced and interactions with people are common. It is capable of killing most ungulates in its habitat, with the probable exception of the adult male wild yak. It also eats grass and twigs. The snow leopard actively pursues prey down steep mountainsides, using the momentum of its initial leap to chase animals for up to . It then drags the prey to a safe location and consumes all edible parts of the carcass. It can survive on a single Himalayan blue sheep for two weeks before hunting again, and one adult individual apparently needs 20–30 adult blue sheep per year. Snow leopards have been recorded to hunt successfully in pairs, especially mating pairs. The snow leopard is easily driven away from livestock and readily abandons kills, often without defending itself. Only two attacks on humans have been reported, both near Almaty in Kazakhstan, and neither was fatal: in 1940, a rabid snow leopard attacked two men, and on the other occasion an old, toothless, emaciated individual attacked a person passing by. Reproduction and life cycle Snow leopards become sexually mature at two to three years, and normally live for 15–18 years in the wild. In captivity they can live for up to 25 years. Oestrus typically lasts five to eight days, and males tend not to seek out another partner after mating, probably because the short mating season does not allow sufficient time. Paired snow leopards mate in the usual felid posture, from 12 to 36 times a day. They are unusual among large cats in that they have a well-defined birth peak. They usually mate in late winter, marked by a noticeable increase in marking and calling. 
Females have a gestation period of 90–100 days, and the cubs are born between April and June. A litter usually consists of two to three cubs; in exceptional cases there can be up to seven. The female gives birth in a rocky den or crevice lined with fur shed from her underside. The cubs are born blind and helpless, although already with a thick coat of fur, and weigh . Their eyes open at around seven days, and the cubs can walk at five weeks and are fully weaned by 10 weeks. The cubs leave the den when they are around two to four months of age. Three radio-collared snow leopards in Mongolia's Tost Mountains gave birth between late April and late June. Two female cubs started to part from their mothers at the age of 20 to 21 months, but reunited with them several times for a few days over a period of 4–7 months. One male cub separated from his mother at the age of about 22 months, but stayed in her vicinity for a month and moved out of his natal range at 23 months of age. The snow leopard has a generation length of eight years. Threats Major threats to the population include poaching and illegal trade of its skins and body parts. Between 1999 and 2002, three live snow leopard cubs and 16 skins were confiscated, 330 traps were destroyed and 110 poachers were arrested in Kyrgyzstan. Undercover operations in the country revealed an illegal trade network with links to Russia and China via Kazakhstan. The major skin trade center in the region is the city of Kashgar in Xinjiang. In Tibet and Mongolia, skins are used for traditional dresses, and meat is used in traditional Tibetan medicine to cure kidney problems; bones are used in traditional Chinese and Mongolian medicine for treating rheumatism, injuries and pain of human bones and tendons. Between 1996 and 2002, 37 skins were found in wildlife markets and tourist shops in Mongolia. Between 2003 and 2016, 710 skins were traded, of which 288 skins were confiscated. In China, an estimated 103 to 236 animals are poached every year, in Mongolia between 34 and 53, in Pakistan between 23 and 53, in India from 21 to 45, and in Tajikistan 20 to 25. In 2016, a survey of Chinese websites revealed 15 advertisements for 44 snow leopard products; the dealers offered skins, canine teeth, claws and a tongue. In September 2014, nine snow leopard skins were found during a market survey in Afghanistan. Greenhouse gas emissions will likely cause a shift of the treeline in the Himalayas and a shrinking of the alpine zone, which may reduce snow leopard habitat by an estimated 30%. Where snow leopards prey on domestic livestock, they are subject to human–wildlife conflict. The loss of natural prey due to overgrazing by livestock, poaching, and defense of livestock are the major drivers of the ever decreasing snow leopard population. Livestock also cause habitat degradation, which, alongside the increasing use of forests for fuel, reduces snow leopard habitat. Conservation The snow leopard is listed in CITES Appendix I. It has been listed as threatened with extinction in Schedule I of the Convention on the Conservation of Migratory Species of Wild Animals since 1985. Hunting snow leopards has been prohibited in Kyrgyzstan since the 1950s. In India, the snow leopard is granted the highest level of protection under the Wildlife Protection Act, 1972, and hunting is punished with imprisonment of 3–7 years. In Nepal, snow leopards have been legally protected since 1973, with penalties of 5–15 years in prison and a fine for poaching and trading them. 
Since 1978, the species has been listed in the Soviet Union's Red Book, and it is still inscribed today in the Red Data Book of the Russian Federation as threatened with extinction. Hunting snow leopards is only permitted for the purposes of conservation and monitoring, and to eliminate a threat to the life of humans and livestock. Smuggling of snow leopard body parts is punished with imprisonment and a fine. Hunting snow leopards has been prohibited in Afghanistan since 1986. In China, they have been protected by law since 1989; hunting and trading snow leopards or their body parts constitute a criminal offence that is punishable by the confiscation of property, a fine and a sentence of at least 10 years in prison. They have been protected in Bhutan since 1995. At the end of 2020, 35 cameras were installed on the outskirts of Almaty, Kazakhstan in hopes of catching footage of snow leopards. In November 2021, the Russian World Wildlife Fund (WWF) announced that snow leopards had been spotted 65 times on these cameras in the Trans-Ili Alatau mountains since the cameras were installed. Global Snow Leopard Forum In 2013, government leaders and officials from all 12 countries encompassing the snow leopard's range (Afghanistan, Bhutan, China, India, Kazakhstan, Kyrgyzstan, Mongolia, Nepal, Pakistan, Russia, Tajikistan, and Uzbekistan) came together at the Global Snow Leopard Forum (GSLF), initiated by the then-President of Kyrgyzstan Almazbek Atambayev and the State Agency on Environmental Protection and Forestry under the government of Kyrgyzstan. The meeting was held in Bishkek, and all countries agreed that the snow leopard and its high mountain habitat need trans-boundary support to ensure a viable future for snow leopard populations, and to safeguard its fragile environment. The event brought together many partners, including NGOs like the Snow Leopard Conservancy, the Snow Leopard Trust, and the Nature and Biodiversity Conservation Union. Also supporting the initiative were the Snow Leopard Network, the World Bank's Global Tiger Initiative, the United Nations Development Programme, the World Wide Fund for Nature, the United States Agency for International Development, and the Global Environment Facility. In captivity The Moscow Zoo exhibited the first captive snow leopard in 1872, an animal that had been caught in Turkestan. In Kyrgyzstan, 420 live snow leopards were caught between 1936 and 1988 and exported to zoos around the world. The Bronx Zoo housed a live snow leopard in 1903; this was the first specimen ever exhibited in a North American zoo. The first captive-bred snow leopard cubs were born in the 1990s in the Beijing Zoo. The Snow Leopard Species Survival Plan was initiated in 1984; by 1986, American zoos held 234 individuals. Cultural significance The snow leopard is widely used in heraldry and as an emblem in Central Asia. The Aq Bars ('White Leopard') is a political symbol of the Tatars, Kazakhs, and Bulgars. A mythical winged Aq Bars is depicted on the national coat of arms of Tatarstan, the seal of the city of Samarqand, Uzbekistan and the old coat of arms of Astana. A snow leopard is depicted on the official seal of Almaty and on the former 10,000 Kazakhstani tenge banknote. In Kyrgyzstan, it is used in highly stylized form in the modern emblem of the capital Bishkek, and the same art has been integrated into the badge of the Kyrgyzstan Girl Scouts Association. It is also considered to be a sacred creature by the Kyrgyz people. 
A crowned snow leopard features in the arms of Shushensky District in Russia. It is the state animal of Ladakh and Himachal Pradesh in India. The 1978 book The Snow Leopard is an account by Peter Matthiessen about his two-month journey through the Dolpo region of the Nepal Himalayas in search of the snow leopard.
Biology and health sciences
Carnivora
null
15296588
https://en.wikipedia.org/wiki/Linear%20molecular%20geometry
Linear molecular geometry
The linear molecular geometry describes the geometry around a central atom bonded to two other atoms (or ligands) placed at a bond angle of 180°. Linear organic molecules, such as acetylene (HC≡CH), are often described by invoking sp orbital hybridization for their carbon centers. According to the VSEPR model (Valence Shell Electron Pair Repulsion model), linear geometry occurs at central atoms with two bonded atoms and zero or three lone pairs (AX₂ or AX₂E₃) in the AXE notation. Neutral molecules with linear geometry include beryllium fluoride (BeF₂) with two single bonds, carbon dioxide (CO₂) with two double bonds, and hydrogen cyanide (HCN) with one single and one triple bond. The most important linear molecule with more than three atoms is acetylene (HC≡CH), in which each of its carbon atoms is considered to be a central atom with a single bond to one hydrogen and a triple bond to the other carbon atom. Linear anions include azide (N₃⁻) and thiocyanate (SCN⁻), and a linear cation is the nitronium ion (NO₂⁺). Linear geometry also occurs in AX₂E₃ molecules, such as xenon difluoride (XeF₂) and the triiodide ion (I₃⁻) with one iodide bonded to the two others. As described by the VSEPR model, the five valence electron pairs on the central atom form a trigonal bipyramid in which the three lone pairs occupy the less crowded equatorial positions and the two bonded atoms occupy the two axial positions at the opposite ends of an axis, forming a linear molecule.
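Linearity can be confirmed numerically from atomic coordinates by computing the angle at the central atom. The Python sketch below uses an idealized CO₂ geometry (carbon at the origin, oxygens at ±1.16 Å along one axis; these are idealized example coordinates, not data from the article):

# Sketch: bond angle at a central atom from Cartesian coordinates.
import math

def bond_angle(a, b, c):
    """Angle a-b-c in degrees, with b the central atom."""
    u = [ai - bi for ai, bi in zip(a, b)]
    w = [ci - bi for ci, bi in zip(c, b)]
    dot = sum(ui * wi for ui, wi in zip(u, w))
    norm_u = math.sqrt(sum(ui * ui for ui in u))
    norm_w = math.sqrt(sum(wi * wi for wi in w))
    cos_angle = max(-1.0, min(1.0, dot / (norm_u * norm_w)))  # clamp rounding
    return math.degrees(math.acos(cos_angle))

# Idealized O=C=O geometry (angstroms): collinear, so the angle is 180.
o1, c, o2 = (-1.16, 0.0, 0.0), (0.0, 0.0, 0.0), (1.16, 0.0, 0.0)
print(bond_angle(o1, c, o2))   # 180.0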
Physical sciences
Bond structure
Chemistry
15296960
https://en.wikipedia.org/wiki/Bent%20molecular%20geometry
Bent molecular geometry
In chemistry, molecules with a non-collinear arrangement of two adjacent bonds have bent molecular geometry, also known as angular or V-shaped. Certain atoms, such as oxygen, will almost always set their two (or more) covalent bonds in non-collinear directions due to their electron configuration. Water (H2O) is an example of a bent molecule, as are its analogues. The bond angle between the two hydrogen atoms is approximately 104.45°. Nonlinear geometry is commonly observed for other triatomic molecules and ions containing only main group elements, prominent examples being nitrogen dioxide (NO2), sulfur dichloride (SCl2), and methylene (CH2). This geometry is almost always consistent with VSEPR theory, which usually explains non-collinearity of atoms by the presence of lone pairs. There are several variants of bending, of which the most common is AX2E2, where two covalent bonds and two lone pairs of the central atom (A) form a complete 8-electron shell. Such molecules have central angles from 104° to 109.5°, where the latter is consistent with a simplistic theory which predicts the tetrahedral symmetry of four sp3 hybridised orbitals. The most common actual angles are 105°, 107°, and 109°: they vary because of the different properties of the peripheral atoms (X). Other cases also experience orbital hybridisation, but in different degrees. AX2E1 molecules, such as SnCl2, have only one lone pair and a central angle of about 120° (the centre and two vertices of an equilateral triangle). They have three sp2 orbitals. There also exist sd-hybridised AX2 compounds of transition metals without lone pairs: they have a central angle of about 90° and are also classified as bent. (See further discussion at VSEPR theory#Complexes with strong d-contribution).
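The AX2En cases discussed in this article and the preceding one on linear geometry can be collected into a small lookup. The Python sketch below maps the number of lone pairs on an AX2 centre to its idealized shape and angle; the table and example species follow the two articles, and real angles deviate, as noted above for water:

# Sketch: idealized VSEPR shapes for AX2En centres (two bonded atoms,
# n lone pairs). Angles are the textbook ideal values.
AX2_SHAPES = {
    0: ("linear", 180.0),                       # e.g. CO2, BeF2
    1: ("bent, trigonal planar base", 120.0),   # e.g. SnCl2, NO2
    2: ("bent, tetrahedral base", 109.5),       # e.g. H2O (actual ~104.5)
    3: ("linear, trigonal bipyramidal base", 180.0),  # e.g. XeF2, I3-
}

def shape_for_ax2(lone_pairs: int) -> tuple:
    """Return (shape name, idealized central angle in degrees)."""
    return AX2_SHAPES[lone_pairs]

for n in range(4):
    name, angle = shape_for_ax2(n)
    print(f"AX2E{n}: {name}, ~{angle} degrees")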
Physical sciences
Bond structure
Chemistry
10152440
https://en.wikipedia.org/wiki/Dallasaurus
Dallasaurus
Dallasaurus ("Dallas lizard") is a basal mosasauroid from the Upper Cretaceous of North America. Along with Russellosaurus, Dallasaurus is one of the two oldest mosasauroid taxa currently known from North America. It is also one of the smallest known mosasaurines, measuring approximately in length. Specimens The genus is based upon two partial skeletons recovered from the Arcadia Park Shale (lower Middle Turonian), approximately 15 meters above its contact with the older Kamp Ranch Limestone in Dallas County in north-central Texas. The holotype specimen (TMM 43209-1, Texas Memorial Museum, University of Texas at Austin) consists of an incomplete and disarticulated skull, along with considerable portions of the postcranial skeleton, making up about 80 percent of the animal. The second referred specimen (DMNH 8121-8125, 8143-8149, and 8161-8180, Dallas Museum of Natural History) lacks any skull material and consists entirely of disarticulated postcranial remains. The strata containing these fossils were temporarily exposed during excavations for a housing development, and both sites have now been reburied by construction. The two specimens were discovered about 100 meters from one another; the first was found by an amateur collector, Van Turner, for whom the type species (Dallasaurus turneri) was named. The genus is named for Dallas County, where both specimens were found. Anatomy Polcyn and Bell diagnose Dallasaurus as follows: "Small, plesiopedal mosasauroid possessing the following autapomorphies: posterior maxillary teeth strongly recurved posteriorly, slightly inflated at the crown and bearing only posterior carinae that is slightly offset laterally; atlas neural arch mediolaterally compressed but not flattened at its base, condylar surfaces irregularly figure-eight shaped; cervical vertebra synapophyses protrude below the level of the ventral edge of the centrum; short, wide fossa excavated immediately below the ventral rim of the cotyle of at least one middle cervical vertebra; hypapophysis anteroventral edge terminating in short projections of irregular length; postglenoid process capped by bony epiphysis bearing a calcified cartilage apex." Bell and Polcyn use the term "plesiopedal" to indicate a "conservative ecologically adaptive grade" characterized by "small size, slightly modified swimming tail and [a] relatively plesiomorphic limb condition" compared to more derived mosasauroids. Polcyn and Bell note that plesiopedal mosasauroids tend to be relatively small lizards possessing limbs in which the "propodial elements [equals humerus, radius, and ulna] remain elongated, generally constituting one-half or more of the full length of the osseus limb", as compared to more derived "hydropedal" mosasaurs in which the propodial elements are stout and have been substantially shortened, constituting less than one-half of the full length of the ossueus limb. While hydropedal mosasaurs were probably entirely aquatic, plesiopedal mosasauroids were still capable of terrestrial locomotion and so likely lived an amphibious lifestyle. Classification Bell and Polcyn have conducted a cladistic analysis of Dallasaurus, concluding that this taxon should be placed within the subfamily Mosasaurinae, "well within [the family] Mosasauridae", and that, despite its small size and the "primitive" condition of its limbs, it should not be placed within the paraphyletic family Aigialosauridae. 
Dallasaurus is the sister group to more derived mosasaurine mosasaurs such as Clidastes, Prognathodon, Mosasaurus, and Plotosaurus. In the popular press, Dallasaurus has been hailed as a "missing link" uniting fully aquatic mosasaurs with their terrestrial ancestors.
Biology and health sciences
Prehistoric squamates
Animals
10153680
https://en.wikipedia.org/wiki/Fluoxetine
Fluoxetine
Fluoxetine, sold under the brand name Prozac, among others, is an antidepressant medication of the selective serotonin reuptake inhibitor (SSRI) class used for the treatment of major depressive disorder, anxiety, obsessive–compulsive disorder (OCD), panic disorder, premenstrual dysphoric disorder, and bulimia nervosa. It is also approved for treatment of major depressive disorder in adolescents and children 8 years of age and over. It has also been used to treat premature ejaculation. Fluoxetine is taken by mouth. Common side effects include loss of appetite, nausea, diarrhea, headache, trouble sleeping, dry mouth, and sexual dysfunction. Serious side effects include serotonin syndrome, mania, seizures, an increased risk of suicidal behavior in people under 25 years old, and an increased risk of bleeding. Antidepressant discontinuation syndrome is less likely to occur with fluoxetine than with other antidepressants, but it still happens in many cases. Fluoxetine taken during pregnancy is associated with a significant increase in congenital heart defects in newborns. It has been suggested that fluoxetine therapy may be continued during breastfeeding if it was used during pregnancy or if other antidepressants were ineffective. Fluoxetine was discovered by Eli Lilly and Company in 1972 and entered medical use in 1986. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 22nd most commonly prescribed medication in the United States, with more than 24 million prescriptions. Eli Lilly also markets fluoxetine in a fixed-dose combination with olanzapine as olanzapine/fluoxetine (Symbyax), which was approved by the U.S. FDA for the treatment of depressive episodes of bipolar I disorder in 2003 and for treatment-resistant depression in 2009. Medical uses Fluoxetine is frequently used to treat major depressive disorder, obsessive–compulsive disorder (OCD), post-traumatic stress disorder (PTSD), bulimia nervosa, panic disorder, premenstrual dysphoric disorder, and trichotillomania. It has also been used for cataplexy, obesity, and alcohol dependence, as well as binge eating disorder. Fluoxetine seems to be ineffective for social anxiety disorder. Studies do not support a benefit in children with autism, though there is tentative evidence for its benefit in adult autism. Fluoxetine together with fluvoxamine has shown some initial promise as a potential treatment for reducing COVID-19 severity if given early. Depression Fluoxetine is approved for the treatment of major depression in children and adults. A meta-analysis of trials in adults concluded that fluoxetine modestly outperforms placebo. Fluoxetine may be less effective than other antidepressants, but has high acceptability. For children and adolescents with moderate-to-severe depressive disorder, fluoxetine seems to be the best treatment (either with or without cognitive behavioural therapy, although fluoxetine alone does not appear to be superior to CBT alone), but more research is needed to be certain, as effect sizes are small and the existing evidence is of dubious quality. A 2022 systematic review and trial restoration of the two original blinded controlled trials used to approve the use of fluoxetine in children and adolescents with depression found that both of the trials were severely flawed, and therefore did not demonstrate the safety or efficacy of the medication. 
Obsessive–compulsive disorder Fluoxetine is effective in the treatment of obsessive–compulsive disorder (OCD) in adults. It is also effective for treating OCD in children and adolescents. The American Academy of Child and Adolescent Psychiatry states that SSRIs, including fluoxetine, should be used as first-line therapy in children, along with cognitive behavioral therapy (CBT), for the treatment of moderate to severe OCD. Panic disorder The efficacy of fluoxetine in the treatment of panic disorder was demonstrated in two 12-week randomized multicenter phase III clinical trials that enrolled patients diagnosed with panic disorder, with or without agoraphobia. In the first trial, 42% of subjects in the fluoxetine-treated arm were free of panic attacks at the end of the study, vs. 28% in the placebo arm. In the second trial, 62% of fluoxetine-treated patients were free of panic attacks at the end of the study, vs. 44% in the placebo arm. Bulimia nervosa A 2011 systematic review discussed seven trials that compared fluoxetine to a placebo in the treatment of bulimia nervosa, six of which found a statistically significant reduction in symptoms such as vomiting and binge eating. However, no difference was observed between treatment arms when fluoxetine and psychotherapy were compared to psychotherapy alone. Premenstrual dysphoric disorder Fluoxetine is used to treat premenstrual dysphoric disorder (PMDD), a condition where individuals have affective and somatic symptoms monthly during the luteal phase of menstruation. Taking fluoxetine 20 mg/d can be effective in treating PMDD, though doses of 10 mg/d have also been prescribed effectively. Impulsive aggression Fluoxetine is considered a first-line medication for the treatment of impulsive aggression of low intensity. Fluoxetine reduced low-intensity aggressive behavior in patients with intermittent explosive disorder and borderline personality disorder. Fluoxetine also reduced acts of domestic violence in alcoholics with a history of such behavior. Obesity and overweight adults In 2019, a systematic review compared the effects on weight of various doses of fluoxetine (60 mg/d, 40 mg/d, 20 mg/d, 10 mg/d) in obese and overweight adults. When compared to placebo, all dosages of fluoxetine appeared to contribute to weight loss but led to an increased risk of experiencing side effects, such as dizziness, drowsiness, fatigue, insomnia, and nausea, during the period of treatment. In the same review, when the effects of fluoxetine on weight were compared with those of other anti-obesity agents, omega-3 gel capsules, or no treatment, the authors could not reach conclusive results due to the poor quality of evidence; the conclusions about weight loss were likewise from low-certainty evidence. Special populations In children and adolescents, fluoxetine is the antidepressant of choice due to tentative evidence favoring its efficacy and tolerability. Evidence supporting an increased risk of major fetal malformations resulting from fluoxetine exposure is limited, although the Medicines and Healthcare products Regulatory Agency (MHRA) of the United Kingdom has warned prescribers and patients of the potential for fluoxetine exposure in the first trimester (during organogenesis, the formation of the fetal organs) to cause a slight increase in the risk of congenital cardiac malformations in the newborn. Furthermore, an association between fluoxetine use during the first trimester and an increased risk of minor fetal malformations was observed in one study. 
However, a systematic review and meta-analysis of 21 studies – published in the Journal of Obstetrics and Gynaecology Canada – concluded, "the apparent increased risk of fetal cardiac malformations associated with maternal use of fluoxetine has recently been shown also in depressed women who deferred SSRI therapy in pregnancy, and therefore most probably reflects an ascertainment bias. Overall, women who are treated with fluoxetine during the first trimester of pregnancy do not appear to have an increased risk of major fetal malformations." Per the FDA, infants exposed to SSRIs in late pregnancy may have an increased risk for persistent pulmonary hypertension of the newborn. Limited data support this risk, but the FDA recommends physicians consider tapering SSRIs such as fluoxetine during the third trimester. A 2009 review recommended against fluoxetine as a first-line SSRI during lactation, stating, "Fluoxetine should be viewed as a less-preferred SSRI for breastfeeding mothers, particularly with newborn infants, and in those mothers who consumed fluoxetine during gestation." Sertraline is often the preferred SSRI during pregnancy due to the relatively minimal fetal exposure observed and its safety profile while breastfeeding. Adverse effects Side effects observed in fluoxetine-treated persons in clinical trials with an incidence >5% and at least twice as common as in those who received a placebo include abnormal dreams, abnormal ejaculation, anorexia, anxiety, asthenia, diarrhea, dizziness, dry mouth, dyspepsia, fatigue, flu syndrome, impotence, insomnia, decreased libido, nausea, nervousness, pharyngitis, rash, sinusitis, somnolence, sweating, tremor, vasodilation, and yawning. Fluoxetine is considered the most stimulating of the SSRIs (that is, the most prone to causing insomnia and agitation). It also appears to be the SSRI most prone to producing dermatologic reactions (e.g., urticaria (hives), rash, and itchiness). Sexual dysfunction Sexual dysfunction, including loss of libido, erectile dysfunction, lack of vaginal lubrication, and anorgasmia, is among the most commonly encountered adverse effects of treatment with fluoxetine and other SSRIs. While early clinical trials suggested a relatively low rate of sexual dysfunction, more recent studies in which the investigator actively inquires about sexual problems suggest that the incidence is >70%. In 2019, the Pharmacovigilance Risk Assessment Committee of the European Medicines Agency recommended that packaging leaflets of selected SSRIs and SNRIs be amended to include information regarding a possible risk of persistent sexual dysfunction. Following the European assessment, a safety review by Health Canada "could neither confirm nor rule out a causal link... which was long lasting in rare cases", but recommended that "healthcare professionals inform patients about the potential risk of long-lasting sexual dysfunction despite discontinuation of treatment". Antidepressant discontinuation syndrome Fluoxetine's longer half-life makes antidepressant discontinuation syndrome less likely following cessation of therapy, especially compared with antidepressants with shorter half-lives such as paroxetine. Although gradual dose reductions are recommended for antidepressants with shorter half-lives, tapering may not be necessary with fluoxetine. It has been recommended as a treatment option for antidepressant discontinuation syndrome.
Pregnancy Antidepressant exposure (including fluoxetine) is associated with a shorter average duration of pregnancy (by three days), an increased risk of preterm delivery (by 55%), lower birth weight (by 75 g), and lower Apgar scores (by <0.4 points). There is a 30–36% increase in congenital heart defects among children whose mothers were prescribed fluoxetine during pregnancy, with fluoxetine use in the first trimester associated with a 38–65% increase in septal heart defects. Suicide On 14 September 1989, Joseph T. Wesbecker killed eight people and injured twelve before committing suicide. His relatives and victims blamed his actions on the fluoxetine he had begun taking 11 days previously. Eli Lilly settled the case. The incident set off a chain of lawsuits and public outcries, resulting in Eli Lilly paying out $50 million across 300 claims. Eli Lilly was accused of not doing enough to warn patients and doctors about the adverse effects, which it had described as "activation", years before the incident. It was revealed in a lawsuit by the family of Bill Forsyth Sr, who killed his wife and then himself on 11 March 1993, that the Federal Health Agency (BGA) of the Federal Republic of Germany had refused to license fluoxetine after examination of internal Eli Lilly documents showed there had been 16 suicide attempts, two of them successful, during clinical trials. The BGA considered fluoxetine administration to be causative, because people considered at risk of suicide had not been allowed to participate in the trials. On the basis of the internal statistical evidence gathered by Eli Lilly that emerged in this lawsuit, it was estimated by 1999 that there would have been 250,000 suicide attempts and 25,000 suicides globally among people taking the drug. In October 2004, the FDA added its most serious warning, a black box warning, to all antidepressant drugs regarding use in children. In 2006, the FDA extended the warning to adults aged 25 or younger. Statistical analyses conducted by two independent groups of FDA experts found a 2-fold increase in suicidal ideation and behavior in children and adolescents, and a 1.5-fold increase in suicidality in the 18–24 age group. Suicidality was slightly decreased for those older than 24, and statistically significantly lower in the 65-and-older group. In February 2018, the FDA ordered an update to the warnings based on statistical evidence from twenty-four trials in which the risk of such events was four percent with antidepressants versus two percent with placebo. A study published in May 2009 found that fluoxetine was more likely to increase overall suicidal behavior: 14.7% of the patients (n=44) on fluoxetine had suicidal events, compared with 6.3% in the psychotherapy group and 8.4% in the combined-treatment group. Similarly, an analysis conducted by the UK MHRA found a 50% increase in suicide-related events, not reaching statistical significance, in children and adolescents on fluoxetine compared with those on placebo. According to the MHRA data, fluoxetine did not change the rate of self-harm in adults and statistically significantly decreased suicidal ideation by 50%. QT prolongation Fluoxetine can affect the electrical currents that heart muscle cells use to coordinate their contraction, specifically the potassium currents Ito and IKs that repolarise the cardiac action potential.
Under certain circumstances, this can lead to prolongation of the QT interval, a measurement made on an electrocardiogram reflecting how long it takes for the heart to electrically recharge after each heartbeat. When fluoxetine is taken alongside other drugs that prolong the QT interval, or by those with a susceptibility to long QT syndrome, there is a small risk of potentially lethal abnormal heart rhythms such as torsades de pointes. A study completed in 2011, however, found that fluoxetine does not alter the QT interval and has no clinically meaningful effects on the cardiac action potential. Overdose In overdose, the most frequent adverse effects include nervous system effects (anxiety, nervousness, insomnia, drowsiness, fatigue or asthenia, tremor, and dizziness or lightheadedness), gastrointestinal effects (anorexia, nausea, diarrhea, and dry mouth), and other effects (vasodilation, abnormal vision, abnormal ejaculation, rash, sweating, and decreased libido). Interactions Contraindications include prior treatment (within the past 5–6 weeks, depending on the dose) with MAOIs such as phenelzine and tranylcypromine, due to the potential for serotonin syndrome. Its use should also be avoided in those with known hypersensitivities to fluoxetine or any of the other ingredients in the formulation used. Its use in those concurrently receiving pimozide or thioridazine is also advised against. When codeine is administered short-term for pain management, monitoring and dose adjustment are advised, as codeine might not provide sufficient analgesia when fluoxetine is co-administered. If opioid treatment is required, oxycodone use should be monitored, since oxycodone is metabolized by the cytochrome P450 (CYP) enzyme system and fluoxetine and paroxetine are potent inhibitors of CYP2D6 enzymes. This means that combinations of codeine or oxycodone with the antidepressant fluoxetine may lead to reduced analgesia. In some cases, use of dextromethorphan-containing cold and cough medications with fluoxetine is advised against, because fluoxetine increases serotonin levels and, as a cytochrome P450 2D6 inhibitor, slows the metabolism of dextromethorphan, increasing the risk of serotonin syndrome and other potential side effects of dextromethorphan. Patients who are taking NSAIDs, antiplatelet drugs, anticoagulants, omega-3 fatty acids, vitamin E, or garlic supplements must be careful when taking fluoxetine or other SSRIs, as these can sometimes increase the blood-thinning effects of those medications. Fluoxetine and norfluoxetine inhibit many isozymes of the cytochrome P450 system that are involved in drug metabolism. Both are potent inhibitors of CYP2D6 (which is also the chief enzyme responsible for their metabolism) and CYP2C19, and mild to moderate inhibitors of CYP2B6 and CYP2C9. In vivo, fluoxetine and norfluoxetine do not significantly affect the activity of CYP1A2 and CYP3A4. They also inhibit the activity of P-glycoprotein, a type of membrane transport protein that plays an important role in drug transport and metabolism; hence, P-glycoprotein substrates, such as loperamide, may have their central effects potentiated. This extensive effect on the body's pathways for drug metabolism creates the potential for interactions with many commonly used drugs.
Its use should also be avoided in those receiving other serotonergic drugs such as monoamine oxidase inhibitors, tricyclic antidepressants, methamphetamine, amphetamine, MDMA, triptans, buspirone, ginseng, dextromethorphan (DXM), linezolid, tramadol, serotonin–norepinephrine reuptake inhibitors, and other SSRIs, due to the potential for serotonin syndrome to develop as a result. Fluoxetine may also increase the risk of opioid overdose in some instances, in part due to its inhibitory effect on cytochrome P450. Just as fluoxetine can affect the metabolism of dextromethorphan, it may cause medications like oxycodone not to be metabolized at a normal rate, increasing the risk of serotonin syndrome as well as raising the concentration of oxycodone in the blood, which may lead to accidental overdose. A 2022 study that examined the health insurance claims of over 2 million Americans who began taking oxycodone while using SSRIs between 2000 and 2020 found that patients taking paroxetine or fluoxetine had a 23% higher risk of overdosing on oxycodone than those using other SSRIs. There is also the potential for interaction with highly protein-bound drugs, since fluoxetine may displace such drugs from plasma proteins, or vice versa, increasing serum concentrations of either fluoxetine or the offending agent. Pharmacology Pharmacodynamics Fluoxetine is a selective serotonin reuptake inhibitor (SSRI) and does not appreciably inhibit norepinephrine and dopamine reuptake at therapeutic doses. It does, however, delay the reuptake of serotonin, so that released serotonin persists longer. Large doses in rats have been shown to induce a significant increase in synaptic norepinephrine and dopamine. Thus, dopamine and norepinephrine may contribute to the antidepressant action of fluoxetine in humans at supratherapeutic doses (60–80 mg). This effect may be mediated by 5HT2C receptors, which are inhibited by higher concentrations of fluoxetine. Fluoxetine increases the concentration of circulating allopregnanolone, a potent GABAA receptor positive allosteric modulator, in the brain. Norfluoxetine, a primary active metabolite of fluoxetine, produces a similar effect on allopregnanolone levels in the brains of mice. Additionally, both fluoxetine and norfluoxetine are such modulators themselves, actions which may be clinically relevant. In addition, fluoxetine has been found to act as an agonist of the σ1-receptor, with a potency greater than that of citalopram but less than that of fluvoxamine. However, the significance of this property is not fully clear. Fluoxetine also functions as a channel blocker of anoctamin 1, a calcium-activated chloride channel. A number of other ion channels, including nicotinic acetylcholine receptors and 5-HT3 receptors, are also known to be blocked at similar concentrations. Fluoxetine has been shown to inhibit acid sphingomyelinase, a key regulator of ceramide levels which derives ceramide from sphingomyelin. Mechanism of action While it is unclear how fluoxetine exerts its effect on mood, it has been suggested that fluoxetine elicits an antidepressant effect by inhibiting serotonin reuptake in the synapse, binding to the reuptake pump on the neuronal membrane to increase serotonin availability and enhance neurotransmission.
Over time, this leads to a downregulation of pre-synaptic 5-HT1A receptors, which is associated with an improvement in passive stress tolerance, and a delayed downstream increase in expression of brain-derived neurotrophic factor, which may contribute to a reduction in negative affective biases. Norfluoxetine (desmethylfluoxetine), the principal metabolite of fluoxetine, also acts as a serotonin reuptake inhibitor, increasing the duration of action of the drug. Prolonged exposure to fluoxetine changes the expression of genes involved in myelination, a process that shapes brain connectivity and contributes to symptoms of psychiatric disorders. The regulation of genes involved with myelination may be partially responsible for the long-term therapeutic benefits of chronic SSRI exposure. Pharmacokinetics The bioavailability of fluoxetine is relatively high (72%), and peak plasma concentrations are reached in 6–8 hours. It is highly bound to plasma proteins, mostly albumin and α1-glycoprotein. Fluoxetine is metabolized in the liver by isoenzymes of the cytochrome P450 system, including CYP2D6. The role of CYP2D6 in the metabolism of fluoxetine may be clinically important, as there is great genetic variability in the function of this enzyme among people. CYP2D6 is responsible for converting fluoxetine to its only active metabolite, norfluoxetine. Both drugs are also potent inhibitors of CYP2D6. The extremely slow elimination of fluoxetine and its active metabolite norfluoxetine from the body distinguishes it from other antidepressants. With time, fluoxetine and norfluoxetine inhibit their own metabolism, so the elimination half-life of fluoxetine increases from 1–3 days after a single dose to 4–6 days after long-term use. Similarly, the half-life of norfluoxetine is longer (16 days) after long-term use. Therefore, the concentration of the drug and its active metabolite in the blood continues to grow through the first few weeks of treatment, and a steady concentration in the blood is achieved only after four weeks. Moreover, the brain concentration of fluoxetine and its metabolites keeps increasing through at least the first five weeks of treatment. For major depressive disorder, while onset of antidepressant action may be felt as early as 1–2 weeks, the full benefit of the current dose is not realized for at least a month. For example, in one 6-week study, the median time to achieving a consistent response was 29 days. Likewise, complete excretion of the drug may take several weeks. During the first week after treatment discontinuation, the brain concentration of fluoxetine decreases by only 50%. The blood level of norfluoxetine four weeks after treatment discontinuation is about 80% of the level registered by the end of the first treatment week, and norfluoxetine is still detectable in the blood seven weeks after discontinuation. Measurement in body fluids Fluoxetine and norfluoxetine may be quantitated in blood, plasma, or serum to monitor therapy, confirm a diagnosis of poisoning in hospitalized persons, or assist in a medicolegal death investigation. Blood or plasma fluoxetine concentrations are usually in a range of 50–500 μg/L in persons taking the drug for its antidepressant effects, 900–3000 μg/L in survivors of acute overdosage, and 1000–7000 μg/L in victims of fatal overdosage.
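The accumulation arithmetic described under Pharmacokinetics above follows from simple first-order kinetics and is easy to sanity-check. The sketch below is a minimal one-compartment illustration, not a clinical pharmacokinetic model; the function name and the choice of a 6-day half-life (the top of the 4–6-day range quoted above) are assumptions made for the example.

```python
import math

def fraction_of_steady_state(t_days: float, half_life_days: float) -> float:
    """Fraction of the eventual steady-state concentration reached after
    t_days of regular dosing, assuming simple first-order elimination."""
    k = math.log(2) / half_life_days  # elimination rate constant (per day)
    return 1.0 - math.exp(-k * t_days)

# Assumed 6-day effective half-life, the upper end of the range quoted above.
for weeks in range(1, 6):
    frac = fraction_of_steady_state(7 * weeks, half_life_days=6.0)
    print(f"after {weeks} week(s): {frac:.0%} of steady state")
```

Reaching about 90% of steady state takes roughly 3.3 half-lives, so a 4–6-day half-life implies two to four weeks of dosing, consistent with the four-week figure quoted above; the same arithmetic explains why washout after discontinuation is similarly slow.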
Norfluoxetine concentrations are approximately equal to those of the parent drug during chronic therapy but may be substantially less following acute overdosage, since it requires at least 1–2 weeks for the metabolite to achieve equilibrium. History The work which eventually led to the invention of fluoxetine began at Eli Lilly and Company in 1970 as a collaboration between Bryan Molloy and Ray Fuller. It was known at that time that the antihistamine diphenhydramine showed some antidepressant-like properties. 3-Phenoxy-3-phenylpropylamine, a compound structurally similar to diphenhydramine, was taken as a starting point. Molloy and fellow Eli Lilly chemist Klaus Schmiegel synthesized a series of dozens of its derivatives. Hoping to find a derivative inhibiting only serotonin reuptake, another Eli Lilly scientist, David T. Wong, proposed retesting the series for the in vitro reuptake of serotonin, norepinephrine and dopamine, using a technique developed by neuroscientist Solomon Snyder. This test showed the compound later named fluoxetine to be the most potent and selective inhibitor of serotonin reuptake of the series. The first article about fluoxetine was published in 1974, following talks given at FASEB and ASPET. A year later, it was given the official generic name fluoxetine, and Eli Lilly and Company gave it the brand name Prozac. In February 1977, Dista Products Company, a division of Eli Lilly & Company, filed an Investigational New Drug application with the U.S. Food and Drug Administration (FDA) for fluoxetine. Fluoxetine appeared on the Belgian market in 1986. In the U.S., the FDA gave its final approval in December 1987, and a month later Eli Lilly began marketing Prozac; annual sales in the U.S. reached $350 million within a year. Worldwide sales eventually reached a peak of $2.6 billion a year. Lilly tried several product line extension strategies, including extended-release formulations and paying for clinical trials to test the efficacy and safety of fluoxetine in premenstrual dysphoric disorder, rebranding fluoxetine for that indication as "Sarafem" after it was approved by the FDA in 2000, following the recommendation of an advisory committee in 1999. The use of fluoxetine to treat PMDD was invented by Richard Wurtman at MIT; the patent was licensed to his startup, Interneuron, which in turn sold it to Lilly. To defend its Prozac revenue from generic competition, Lilly also fought a five-year, multimillion-dollar battle in court with the generic company Barr Pharmaceuticals to protect its patents on fluoxetine, and lost the cases for its line-extension patents, other than those for Sarafem, opening fluoxetine to generic manufacturers starting in 2001. When Lilly's patent expired in August 2001, generic competition decreased Lilly's sales of fluoxetine by 70% within two months. In 2000, an investment bank had projected that annual sales of Sarafem could reach $250 million. Sales of Sarafem reached about $85 million a year in 2002, and in that year Lilly sold its assets connected with the drug for $295 million to Galen Holdings, a small Irish pharmaceutical company specializing in dermatology and women's health whose sales force called on gynecologists' offices; analysts found the deal sensible, since the annual sales of Sarafem made a material financial difference to Galen but not to Lilly. Bringing Sarafem to market harmed Lilly's reputation in some quarters.
The diagnostic category of PMDD has been controversial since it was first proposed in 1987, and Lilly's role in retaining it in the appendix of the DSM-IV-TR, the discussions for which got underway in 1998, has been criticized. Lilly was criticized for inventing a disease to make money, and for not innovating but rather just seeking ways to continue making money from existing drugs. It was also criticized by the FDA and groups concerned with women's health for marketing Sarafem too aggressively when it was first launched; the campaign included a television commercial featuring a harried woman at the grocery store who asks herself if she has PMDD. Society and culture Prescription trends In 2010, over 24.4 million prescriptions for generic fluoxetine were filled in the United States, making it the third-most prescribed antidepressant after sertraline and citalopram. In 2011, 6 million prescriptions for fluoxetine were filled in the United Kingdom. Between 1998 and 2017, along with amitriptyline, it was the most commonly prescribed first antidepressant for adolescents aged 12–17 years in England. Environmental effects Fluoxetine has been detected in aquatic ecosystems, especially in North America, and there is a growing body of research addressing the effects of fluoxetine (among other SSRIs) exposure on non-target aquatic species. In 2003, one of the first studies to address in detail the potential effects of fluoxetine on aquatic wildlife concluded that exposure at environmental concentrations posed little risk to aquatic systems if a hazard quotient approach was applied to risk assessment. However, the authors also stated the need for further research addressing sub-lethal consequences of fluoxetine, specifically focusing on study species' sensitivity, behavioural responses, and endpoints modulated by the serotonin system. Fluoxetine, like several other SSRIs, induces reproductive behavior in some shellfish at concentrations as low as 10−10 M, or about 30 parts per trillion. Since 2003, several studies have reported fluoxetine-induced impacts on many behavioural and physiological endpoints, including antipredator behaviour, reproduction, and foraging, at or below field-detected concentrations. However, a 2014 review on the ecotoxicology of fluoxetine concluded that, at that time, a consensus on the ability of environmentally realistic dosages to affect the behaviour of wildlife could not be reached. At environmentally realistic concentrations, fluoxetine alters insect emergence timing: Richmond et al., 2019 find that at low concentrations it accelerates the emergence of Diptera, while at unusually high concentrations it has no discernible effect. Several common plants are known to absorb fluoxetine. Several crops have been tested: Redshaw et al. 2008 find that cauliflower absorbs large amounts into the stem and leaf but not the head or root; Wu et al. 2012 find that lettuce and spinach also absorb detectable amounts; while Carter et al. 2014 find that radish (Raphanus sativus) and ryegrass (Lolium perenne) – and Wu et al. 2010 find that soybean (Glycine max) – absorb little. Wu tested all tissues of soybean, and all showed only low concentrations. By contrast, Reinhold et al. 2010 find that various duckweeds have a high uptake of fluoxetine and show promise for bioremediation of contaminated water, especially Lemna minor and Landoltia punctata. Ecotoxicity for organisms involved in aquaculture is well documented.
Fluoxetine affects both aquacultured invertebrates and vertebrates, and it inhibits soil microbes, with a large antibacterial effect. For applications of this, see the Research section below. Politics During the 1990 campaign for governor of Florida, it was disclosed that one of the candidates, Lawton Chiles, had depression and had resumed taking fluoxetine, leading his political opponents to question his fitness to serve as governor. American aircraft pilots Beginning in April 2010, fluoxetine became one of four antidepressant drugs that the FAA permitted for pilots with authorization from an aviation medical examiner. The other permitted antidepressants are sertraline (Zoloft), citalopram (Celexa), and escitalopram (Lexapro). These four remain the only antidepressants permitted by the FAA. Sertraline, citalopram, and escitalopram are the only antidepressants permitted for EASA medical certification, as of January 2019. Research The antibacterial effect described above could be applied against multiresistant biotypes in crop bacterial diseases and bacterial aquaculture diseases. In a glucocorticoid receptor-defective zebrafish (Danio rerio) mutant with reduced exploratory behavior, fluoxetine rescued the normal exploratory behavior, demonstrating a relationship between glucocorticoids, fluoxetine, and exploration in this fish. Fluoxetine has an anti-nematode effect: Choy et al., 1999 found that some of this effect is due to interference with certain transmembrane proteins. Veterinary use Fluoxetine is commonly used and effective in treating anxiety-related behaviours and separation anxiety in dogs, especially when given as a supplement to behaviour modification.
Biology and health sciences
Psychiatric drugs
Health
222676
https://en.wikipedia.org/wiki/Lewis%20acids%20and%20bases
Lewis acids and bases
A Lewis acid (named for the American physical chemist Gilbert N. Lewis) is a chemical species that contains an empty orbital which is capable of accepting an electron pair from a Lewis base to form a Lewis adduct. A Lewis base, then, is any species that has a filled orbital containing an electron pair which is not involved in bonding but may form a dative bond with a Lewis acid to form a Lewis adduct. For example, NH3 is a Lewis base, because it can donate its lone pair of electrons. Trimethylborane [(CH3)3B] is a Lewis acid as it is capable of accepting a lone pair. In a Lewis adduct, the Lewis acid and base share an electron pair furnished by the Lewis base, forming a dative bond. In the context of a specific chemical reaction between NH3 and Me3B, a lone pair from NH3 will form a dative bond with the empty orbital of Me3B to form an adduct NH3•BMe3. The terminology refers to the contributions of Gilbert N. Lewis. The terms nucleophile and electrophile are sometimes interchangeable with Lewis base and Lewis acid, respectively. These terms, especially their abstract noun forms nucleophilicity and electrophilicity, emphasize the kinetic aspect of reactivity, while Lewis basicity and Lewis acidity emphasize the thermodynamic aspect of Lewis adduct formation. Depicting adducts In many cases, the interaction between the Lewis base and Lewis acid in a complex is indicated by an arrow showing the Lewis base donating electrons toward the Lewis acid, using the notation of a dative bond – for example, Me3B←NH3. Some sources indicate the Lewis base with a pair of dots (the explicit electrons being donated), which allows consistent representation of the transition from the base itself to the complex with the acid. A center dot may also be used to represent a Lewis adduct, such as Me3B·NH3. Another example is boron trifluoride diethyl etherate, BF3·OEt2. In a slightly different usage, the center dot is also used to represent hydrate coordination in various crystals, as in MgSO4·7H2O for hydrated magnesium sulfate, irrespective of whether the water forms a dative bond with the metal. Although there have been attempts to use computational and experimental energetic criteria to distinguish dative bonding from non-dative covalent bonds, for the most part, the distinction merely makes note of the source of the electron pair, and dative bonds, once formed, behave simply as other covalent bonds do, though they typically have considerable polar character. Moreover, in some cases (e.g., sulfoxides and amine oxides, written R2S→O and R3N→O), the use of the dative bond arrow is just a notational convenience for avoiding the drawing of formal charges. In general, however, the donor–acceptor bond is viewed as simply somewhere along a continuum between idealized covalent bonding and ionic bonding. Lewis acids Lewis acids are diverse and the term is used loosely. Simplest are those that react directly with the Lewis base, such as boron trihalides and the pentahalides of phosphorus, arsenic, and antimony. In the same vein, the methyl cation CH3+ can be considered to be the Lewis acid in methylation reactions. However, the methyl cation never occurs as a free species in the condensed phase, and methylation reactions by reagents like CH3I take place through the simultaneous formation of a bond from the nucleophile to the carbon and cleavage of the bond between carbon and iodine (SN2 reaction). Textbooks disagree on this point: some assert that alkyl halides are electrophiles but not Lewis acids, while others describe alkyl halides (e.g. CH3Br) as a type of Lewis acid.
The IUPAC states that Lewis acids and Lewis bases react to form Lewis adducts, and defines electrophiles as Lewis acids. Simple Lewis acids Some of the most studied examples of such Lewis acids are the boron trihalides and organoboranes: BF3 + F− → BF4− In this adduct, all four fluoride centres (or more accurately, ligands) are equivalent. BF3 + OMe2 → BF3OMe2 Both BF4− and BF3OMe2 are Lewis base adducts of boron trifluoride. Many adducts violate the octet rule, such as the triiodide anion: I2 + I− → I3− The variability of the colors of iodine solutions reflects the variable abilities of the solvent to form adducts with the Lewis acid I2. Some Lewis acids bind with two Lewis bases, a famous example being the formation of hexafluorosilicate: SiF4 + 2 F− → SiF62− Complex Lewis acids Most compounds considered to be Lewis acids require an activation step prior to formation of the adduct with the Lewis base. Complex compounds such as Et3Al2Cl3 and AlCl3 are treated as trigonal planar Lewis acids but exist as aggregates and polymers that must be degraded by the Lewis base. A simpler case is the formation of adducts of borane. Monomeric BH3 does not exist appreciably, so the adducts of borane are generated by degradation of diborane: B2H6 + 2 H− → 2 BH4− In this case, an intermediate can be isolated. Many metal complexes serve as Lewis acids, but usually only after dissociating a more weakly bound Lewis base, often water. [Mg(H2O)6]2+ + 6 NH3 → [Mg(NH3)6]2+ + 6 H2O H+ as Lewis acid The proton (H+) is one of the strongest but is also one of the most complicated Lewis acids. It is convention to ignore the fact that a proton is heavily solvated (bound to solvent). With this simplification in mind, acid–base reactions can be viewed as the formation of adducts: H+ + NH3 → NH4+ H+ + OH− → H2O Applications of Lewis acids A typical example of a Lewis acid in action is in the Friedel–Crafts alkylation reaction. The key step is the acceptance by AlCl3 of a chloride ion lone pair, forming AlCl4− and creating the strongly acidic, that is, electrophilic, carbonium ion. RCl + AlCl3 → R+ + AlCl4− Lewis bases A Lewis base is an atomic or molecular species where the highest occupied molecular orbital (HOMO) is highly localized. Typical Lewis bases are conventional amines such as ammonia and alkyl amines. Other common Lewis bases include pyridine and its derivatives. Some of the main classes of Lewis bases are: amines of the formula NH3−xRx, where R = alkyl or aryl (related to these are pyridine and its derivatives); phosphines of the formula PR3−xArx; and compounds of O, S, Se and Te in oxidation state −2, including water, ethers, and ketones. The most common Lewis bases are anions. The strength of Lewis basicity correlates with the pKa of the parent acid: acids with high pKa's give good Lewis bases. As usual, a weaker acid has a stronger conjugate base. Examples of Lewis bases based on the general definition of electron pair donor include: simple anions, such as H− and F−; other lone-pair-containing species, such as H2O, NH3, HO−, and CH3−; complex anions, such as sulfate; and electron-rich π-system Lewis bases, such as ethyne, ethene, and benzene. The strength of Lewis bases has been evaluated for various Lewis acids, such as I2, SbCl5, and BF3. Applications of Lewis bases Nearly all electron pair donors that form compounds by binding transition elements can be viewed as ligands. Thus, a large application of Lewis bases is to modify the activity and selectivity of metal catalysts.
Chiral Lewis bases, generally multidentate, confer chirality on a catalyst, enabling asymmetric catalysis, which is useful for the production of pharmaceuticals. The industrial synthesis of the anti-hypertension drug mibefradil uses a chiral Lewis base (R-MeOBIPHEP), for example. Hard and soft classification Lewis acids and bases are commonly classified according to their hardness or softness. In this context, hard implies small and nonpolarizable and soft indicates larger atoms that are more polarizable. Typical hard acids: H+, alkali/alkaline earth metal cations, boranes, Zn2+; typical soft acids: Ag+, Mo(0), Ni(0), Pt2+; typical hard bases: ammonia and amines, water, carboxylates, fluoride and chloride; typical soft bases: organophosphines, thioethers, carbon monoxide, iodide. For example, an amine will displace phosphine from the adduct with the acid BF3. In the same way, bases may be classified: bases donating a lone pair from an oxygen atom are harder than bases donating through a nitrogen atom. Although the classification was never quantified, it proved to be very useful in predicting the strength of adduct formation, using the key concepts that hard acid–hard base and soft acid–soft base interactions are stronger than hard acid–soft base or soft acid–hard base interactions. Later investigation of the thermodynamics of the interaction suggested that hard–hard interactions are enthalpy favored, whereas soft–soft are entropy favored. Quantifying Lewis acidity Many methods have been devised to evaluate and predict Lewis acidity. Many are based on spectroscopic signatures such as shifts in NMR signals or IR bands, e.g. the Gutmann–Beckett method and the Childs method. The ECW model is a quantitative model that describes and predicts the strength of Lewis acid–base interactions, −ΔH. The model assigns E and C parameters to many Lewis acids and bases. Each acid is characterized by an EA and a CA. Each base is likewise characterized by its own EB and CB. The E and C parameters refer, respectively, to the electrostatic and covalent contributions to the strength of the bonds that the acid and base will form. The equation is −ΔH = EAEB + CACB + W The W term represents a constant energy contribution for acid–base reactions such as the cleavage of a dimeric acid or base. The equation predicts reversals of acid and base strengths. The graphical presentations of the equation show that there is no single order of Lewis base strengths or Lewis acid strengths, and that single-property scales are limited to a smaller range of acids or bases. History The concept originated with Gilbert N. Lewis, who studied chemical bonding. In 1923, Lewis wrote: "An acid substance is one which can employ an electron lone pair from another molecule in completing the stable group of one of its own atoms." The Brønsted–Lowry acid–base theory was published in the same year. The two theories are distinct but complementary. A Lewis base is also a Brønsted–Lowry base, but a Lewis acid does not need to be a Brønsted–Lowry acid. The classification into hard and soft acids and bases (HSAB theory) followed in 1963. The strength of Lewis acid–base interactions, as measured by the standard enthalpy of formation of an adduct, can be predicted by the Drago–Wayland two-parameter equation. Reformulation of Lewis theory Lewis had suggested in 1916 that two atoms are held together in a chemical bond by sharing a pair of electrons. When each atom contributed one electron to the bond, it was called a covalent bond.
When both electrons come from one of the atoms, it was called a dative covalent bond or coordinate bond. The distinction is not very clear-cut. For example, in the formation of an ammonium ion from ammonia and a hydrogen ion, the ammonia molecule donates a pair of electrons to the proton; the identity of the electrons is lost in the ammonium ion that is formed. Nevertheless, Lewis suggested that an electron-pair donor be classified as a base and an electron-pair acceptor be classified as an acid. A more modern definition of a Lewis acid is an atomic or molecular species with a localized empty atomic or molecular orbital of low energy. This lowest unoccupied molecular orbital (LUMO) can accommodate a pair of electrons. Comparison with Brønsted–Lowry theory A Lewis base is often a Brønsted–Lowry base, as it can donate a pair of electrons to H+; the proton is a Lewis acid, as it can accept a pair of electrons. The conjugate base of a Brønsted–Lowry acid is also a Lewis base, as loss of H+ from the acid leaves those electrons which were used for the A–H bond as a lone pair on the conjugate base. However, a Lewis base can be very difficult to protonate, yet still react with a Lewis acid. For example, carbon monoxide is a very weak Brønsted–Lowry base but it forms a strong adduct with BF3. In another comparison of Lewis and Brønsted–Lowry acidity by Brown and Kanner, 2,6-di-t-butylpyridine reacts to form the hydrochloride salt with HCl but does not react with BF3. This example demonstrates that steric factors, in addition to electron configuration factors, play a role in determining the strength of the interaction between the bulky di-t-butylpyridine and the tiny proton.
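The ECW relation from the Quantifying Lewis acidity section above, −ΔH = EAEB + CACB + W, is straightforward to evaluate numerically. The sketch below uses invented placeholder E and C parameters, not tabulated literature values, and sets W to zero; it is meant only to show how matched hard–hard and soft–soft pairings dominate.

```python
def ecw_enthalpy(e_a: float, c_a: float, e_b: float, c_b: float, w: float = 0.0) -> float:
    """Predicted -dH of Lewis adduct formation: -dH = E_A*E_B + C_A*C_B + W."""
    return e_a * e_b + c_a * c_b + w

# Hypothetical parameters: each "hard" species is dominated by its
# electrostatic (E) term, each "soft" species by its covalent (C) term.
acids = {"hard acid": (2.0, 0.3), "soft acid": (0.4, 1.8)}
bases = {"hard base": (1.5, 0.2), "soft base": (0.3, 1.6)}

for a_name, (e_a, c_a) in acids.items():
    for b_name, (e_b, c_b) in bases.items():
        print(f"{a_name} + {b_name}: -dH = {ecw_enthalpy(e_a, c_a, e_b, c_b):.2f}")
```

With these made-up numbers the matched pairs give −ΔH of about 3.1 and 3.0, while the mismatched pairs give only about 1.1 and 1.0, which is one way to see why no single one-dimensional ordering of acid or base strength can reproduce the model's predictions.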
Physical sciences
Concepts
Chemistry
222881
https://en.wikipedia.org/wiki/Pint
Pint
The pint (symbol pt, sometimes abbreviated as p) is a unit of volume or capacity in both the imperial and United States customary measurement systems. In both of those systems it is traditionally one eighth of a gallon. The British imperial pint is about 20% larger than the American pint because the two systems are defined differently. Almost all other countries have standardized on the metric system, so although some of them still also have traditional units called pints (such as for beverages), the volume varies by regional custom. The imperial pint (≈ 568 mL) is used in the United Kingdom and Ireland and to a limited extent in Commonwealth nations. In the United States, two kinds of pint are used: a liquid pint (≈ 473 mL) and a less common dry pint (≈ 551 mL). Other former British colonies, such as Australia, South Africa and New Zealand, converted to the metric system in the 1960s and 1970s; so while the term may still be in common use in these countries, it may no longer refer to the British imperial pint once used throughout the British Empire. Name Pint comes from the Old French word pinte and perhaps ultimately from Vulgar Latin pincta meaning "painted", for marks painted on the side of a container to show capacity. It is linguistically related, though greatly diverging in meaning, to Pinto – an Italian, Spanish, and Portuguese name for a person with a speckled or dark complexion, often used as a surname in these languages. In France, the French word pinte is now used to describe a half-litre, slightly smaller than an Imperial pint, but in Canadian French pinte is used to describe an Imperial quart and the word chopine is used for an Imperial pint. Definitions Imperial pint The imperial pint is equal to one eighth of an imperial gallon.
1 imperial pint = 1/8 imperial gallon
= 1/2 imperial quart
= 4 imperial gills
= 20 imperial fluid ounces
= 568.26125 millilitres (exactly)
≈ 34.68 cubic inches
≈ 1.20 US liquid pints
≈ 1.03 US dry pints
≈ 19.22 US fluid ounces
≈ the volume of 1¼ lb (20 oz) of water at 62 °F
US liquid pint In the United States, the liquid pint is legally defined as one eighth of a liquid gallon of precisely 231 cubic inches.
1 US liquid pint = 1/8 US liquid gallon
= 1/2 US liquid quart
= 2 US cups
= 4 US fluid gills
= 16 US fluid ounces
= 32 US tablespoons
= 96 US teaspoons
= 128 US fluid drams
= 28.875 cubic inches (exactly)
= 473.176473 millilitres (exactly)
≈ 0.833 imperial pints
≈ 0.859 US dry pints
≈ 16.65 imperial fluid ounces
≈ the volume of about 1.04 lb of water at 62 °F
US dry pint In the United States, the dry pint is one sixty-fourth of a bushel.
1 US dry pint = 1/64 US bushel
= 1/8 US dry gallon
= 1/2 US dry quart
= 33.6003125 cubic inches (exactly)
= 550.6104713575 millilitres (exactly)
≈ 0.969 imperial pints
≈ 1.164 US liquid pints
Other pints The United States dry pint is equal to one eighth of a United States dry gallon. It is used in the United States, but is not as common as the liquid pint. A now-obsolete unit of measurement in Scotland, known as the Scottish pint, or joug, is equal to 1696 mL (2 pints 19.69 imp fl oz).
It remained in use until the 19th century, surviving significantly longer than most of the old Scottish measurements. The word pint is one of numerous false friends between English and French: the French pinte and the English pint are not the same unit, although they have the same linguistic origin. The French royal pint (pinte du roi) was 48 French cubic inches (952.1 mL), but regional pints varied in size depending on locality and on commodity (usually wine or olive oil), ranging from 0.95 L to over 2 L. In Canada, the Weights and Measures Act (R.S. 1985) defines a pint in English as one eighth of a gallon, but defines a pinte in French as one quarter of a gallon. Thus, if "a pint of beer" is ordered in English, servers are legally required to serve an imperial pint (568 mL) of beer, but under the federal Act a pinte of beer legally refers to the larger imperial quart (1136 mL), while an imperial pint is designated as a chopine. However, in practice and according to Quebec's Board of the French Language, pinte commonly refers to the same 568 mL imperial pint as in English. In Flanders, the word pintje, meaning 'little pint', refers only to a 250 mL glass of lager. Some West and East Flemish dialects use it as a word for beaker. An equivalent German word refers to a glass of a third of a litre in Cologne and the Rhineland. In South Australia, ordering "a pint of beer" results in 425 mL (15 fl oz) being served. Customers must specifically request "an Imperial pint of beer" to get 570 mL (20 fl oz). Australians from other states often contest the size of their beers in Adelaide. Equivalence One US liquid pint of water weighs about one pound (16 ounces), which gives rise to a popular saying: "A pint's a pound the world around". However, the statement does not hold around the world, because the British imperial pint, which was also the standard measure in Australia, India, Malaya, New Zealand, South Africa and other former British colonies, weighs a pound and a quarter (20 ounces). This prompted the Society for the Diffusion of Useful Knowledge to coin a saying for use in Commonwealth countries: "a pint of pure water weighs a pound and a quarter". History The pint is traditionally one eighth of a gallon. In the Latin of the apothecaries' system, the symbol O (octarius, plural octarii – reflecting the "eighth" concept in its first syllable) was used for the pint. Because of the variety of definitions of a gallon, there have been equally many versions of the pint. Britain's North American colonies adopted the British wine gallon, defined in 1707 as 231 cubic inches exactly (3 in × 7 in × 11 in), as their basic liquid measure, from which the US wet pint is derived; and the British corn gallon (one eighth of a standard "Winchester" bushel of corn, or 268.8 cubic inches) as their dry measure, from which the US dry pint is derived. In 1824, the British parliament replaced all the various gallons with a new imperial gallon based on ten pounds of distilled water at 62 °F (277.42 cubic inches), from which the current UK pint is derived. The various Canadian provinces continued to use the Queen Anne wine gallon as a basis for their pint until 1873, well after Britain adopted the imperial system in 1824. This made the Canadian pint compatible with the American pint, but after 1824 it was incompatible with the British pint. The traditional French pinte used in Lower Canada (Quebec) was twice the size of the traditional English pint used in Upper Canada (Ontario).
After four of the British provinces united in the Canadian Confederation in 1867, Canada legally adopted the British imperial system of measure in 1873, making Canadian liquid units incompatible with American ones from that year forward. In 1873, the French Canadian pinte was defined as one imperial quart, or two imperial pints, while the imperial pint was legally called a chopine in French Canada. Canadian imperial units of liquid measure remain incompatible with American traditional units to this day, and although the Canadian pint, quart, and gallon are still legal units of measure in Canada, they are still 20% larger than the American ones. Historically, units called a pint (or the equivalent in the local language) were used across much of Europe, with values varying between countries from less than half a litre to over one litre. Within continental Europe, these pints were replaced with liquid measures based on the metric system during the 19th century. The term is still in limited use in parts of France and Central Europe, notably in some areas of Germany and Switzerland, where a cognate term is colloquially used for roughly half a litre. In Spanish holiday resorts frequented by British tourists, "pint" is often taken to mean a beer glass (especially a dimple mug). Half-pint (285 mL) and pint (570 mL) mugs may therefore be referred to by the local terms for "half jar/jug" and "large jar/jug". Effects of metrication In the British and Irish metrication processes, the pint was replaced by metric units as the legally defined primary unit of measure for trading by volume or capacity, except for the sale of draught beer and cider, and milk in returnable containers. As a supplementary unit, the pint can still be used in those countries in all circumstances. UK legislation mandates that draught beer and cider must be sold in a third of a pint, two thirds of a pint or multiples of half a pint, which must be served in stamped, measured glasses or from government-stamped meters. Milk in returnable containers may come in pints without the metric equivalent stated; however, all other goods apart from the aforementioned exceptions must be sold or labelled in metric units. Milk in plastic containers mostly comes in multiples of 1 pint, but the packaging is required to display the metric equivalent. Filtered milk and UHT milk sold in the UK are commonly packaged in 1 litre bottles or containers, or multiples thereof. Recipes published in the UK and Ireland traditionally gave ingredient quantities in imperial units, with the pint used for larger liquid quantities, as well as in metric measures, though recipes written now are more likely to use metric units. In Australia and New Zealand, a subtle change was made to 1 pint milk bottles during the conversion from imperial to metric in the 1970s. The height and diameter of the milk bottle remained unchanged, so that existing equipment for handling and storing the bottles was unaffected, but the shape was adjusted to increase the capacity from 568 mL to 600 mL – a conveniently rounded metric measure. Such milk bottles are no longer officially referred to as pints. However, the "pint glass" in pubs in Australia remains closer to the standard imperial pint, at 570 mL. It holds about 500 mL of beer and about 70 mL of froth, except in South Australia, where a pint is served in a 425 mL glass and a 570 mL glass is called an "imperial pint".
In New Zealand, there is no longer any legal requirement for beer to be served in standard measures: in pubs, the largest size of glass, which is referred to as a pint, varies, but usually contains 425 mL. After metrication in Canada, milk and other liquids in pre-packaged containers came in metric sizes, so conversion issues could no longer arise. Draft beer in Canada, when advertised as a "pint", is legally required to be an imperial pint (568 mL). With the allowed margin of error of 0.5 fluid ounces, serving a "pint" that contains less than 554 mL of beer is an offence, though this regulation is often violated and rarely enforced. To avoid legal issues, many drinking establishments are moving away from using the term "pint" and are selling "glasses" or "sleeves" of beer, neither of which has a legal definition. A 375 mL bottle of liquor in the US and the Canadian maritime provinces is sometimes referred to as a "pint" and a 200 mL bottle is called a "half-pint", harking back to the days when liquor came in US pints, fifths, quarts, and half-gallons. Liquor in the US has been sold in metric-sized bottles since 1980, although beer is still sold in US traditional units. In France, a standard 250 mL measure of beer is known as un demi ("a half"), originally meaning a half-pint.
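Because each pint is pinned to an exact millilitre value, the ratios quoted in the Definitions section and the two sayings in the Equivalence section can be checked with a few lines of code. In the sketch below, the millilitre constants are the exact legal definitions; the water density at 62 °F is an approximation, and the helper name is illustrative.

```python
IMPERIAL_PINT_ML = 568.26125        # exactly 1/8 imperial gallon
US_LIQUID_PINT_ML = 473.176473      # exactly 28.875 cubic inches
US_DRY_PINT_ML = 550.6104713575     # exactly 1/64 US bushel

GRAMS_PER_POUND = 453.59237         # exact avoirdupois definition
WATER_DENSITY_62F = 0.99887         # g/mL at 62 F (approximate)

def water_weight_lb(volume_ml: float) -> float:
    """Approximate weight, in pounds, of a given volume of water at 62 F."""
    return volume_ml * WATER_DENSITY_62F / GRAMS_PER_POUND

print(f"1 imperial pint = {IMPERIAL_PINT_ML / US_LIQUID_PINT_ML:.4f} US liquid pints")
print(f"1 US dry pint   = {US_DRY_PINT_ML / US_LIQUID_PINT_ML:.4f} US liquid pints")
print(f"US pint of water       ~ {water_weight_lb(US_LIQUID_PINT_ML):.3f} lb")
print(f"imperial pint of water ~ {water_weight_lb(IMPERIAL_PINT_ML):.3f} lb")
```

The output reproduces the roughly 1.2 ratio between the imperial and US liquid pints, and shows why "a pint's a pound" works only for the US pint while the imperial pint of water weighs close to a pound and a quarter.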
Physical sciences
Volume
Basics and measurement
223033
https://en.wikipedia.org/wiki/Gannet
Gannet
Gannets are seabirds comprising the genus Morus in the family Sulidae, closely related to boobies. They are known as 'solan' or 'solan goose' in Scotland. A common misconception is that the Scottish name is 'guga', but this is the Gaelic name referring to the chicks only. Gannets are large white birds with yellowish heads, black-tipped wings and long bills. Northern gannets are the largest seabirds in the North Atlantic, with a wingspan of up to 2 m (6.6 ft). The other two species occur in the temperate seas around southern Africa, southern Australia, and New Zealand. Etymology "Gannet" is derived from Old English ganot, meaning "strong or masculine", ultimately from the same Old Germanic root as "gander". Taxonomy Morus is derived from Ancient Greek moros, "stupid" or "foolish", on account of the lack of fear shown by breeding gannets and boobies, which allowed them to be easily killed. Behaviour Hunting Gannets hunt fish by diving into the sea from heights of up to about 30 m (100 ft) and pursuing their prey underwater, and have a number of adaptations: They have no external nostrils; these are located inside the mouth instead. They have air sacs in the face and chest under the skin, which act like bubble wrap, cushioning the impact with the water. The position of their eyes is far enough forward on the face for binocular vision, allowing them to judge distances accurately. Gannets can achieve speeds of 100 km/h (62 mph) as they strike the water, enabling them to catch fish at a much greater depth than most airborne birds. The gannet's supposed capacity for eating large quantities of fish has led to "gannet" becoming a description of somebody with a voracious appetite. Mating and nesting Gannets are colonial breeders on islands and coasts, normally laying one chalky-blue egg. They lack brood patches and use their webbed feet to warm the eggs. They reach maturity around 5 years of age. First-year birds are completely black, and subsequent subadult plumages show increasing amounts of white. Northern gannets The most important nesting ground for northern gannets is the United Kingdom, with about two-thirds of the world's population. These live mainly in Scotland, including the Shetland Isles. The rest of the world's northern-gannet population nests in Canada, Ireland, the Faroe Islands, and Iceland, with small numbers in France (they are present in the Bay of Biscay), the Channel Islands, Norway, and a single colony in Germany on Heligoland. The biggest northern-gannet colony is on Scotland's Bass Rock in the Firth of Forth; in 2014, this colony contained some 75,000 pairs. Sulasgeir off the coast of the Isle of Lewis, St Kilda, Grassholm in Pembrokeshire, Bempton Cliffs in the East Riding of Yorkshire, Sceilig Bheag, Ireland, Cape St Mary's, Newfoundland, and Bonaventure Island, Quebec, are also important northern-gannet breeding sites. Systematics and evolution The three gannet species are now usually placed in the genus Morus, Abbott's booby in Papasula, and the remaining boobies in Sula. However, some authorities believe that all nine sulid species should be considered congeneric, in Sula. At one time, the various gannet species were considered to be a single species. Most fossil gannets are from the Late Miocene or Pliocene, when the diversity of seabirds in general was much higher than today. The cause of the decline in species at the end of the Pleistocene is not clear; increased competition due to the spread of marine mammals may have played a role.
The genus Morus is much better documented in the fossil record than Sula, though the latter is more numerous today. The reasons are not clear; boobies possibly were better adapted or simply "lucky" to occur in the right places for dealing with the challenges of the Late Pliocene ecological change, or many more fossil boobies could still await discovery. Notably, gannets are today restricted to temperate oceans, while boobies are also found in tropical waters, whereas several of the prehistoric gannet species had a more equatorial distribution than their congeners of today. Fossil species of gannets are:
Morus loxostylus (Early Miocene of EC USA) – includes M. atlanticus
Morus olsoni (Middle Miocene of Romania)
Morus lompocanus (Lompoc Late Miocene of Lompoc, USA)
Morus magnus (Late Miocene of California)
Morus peruvianus (Pisco Late Miocene of Peru)
Morus vagabundus (Temblor Late Miocene of California)
Morus willetti (Late Miocene of California) – formerly in Sula
Morus sp. (Temblor Late Miocene of Sharktooth Hill, US: Miller 1961) – possibly M. magnus
Morus sp. 1 (Late Miocene/Early Pliocene of Lee Creek Mine, US)
Morus sp. 2 (Late Miocene/Early Pliocene of Lee Creek Mine, US)
Morus peninsularis (Early Pliocene)
Morus recentior (Middle Pliocene of California, US)
Morus reyanus – Del Rey gannet (Late Pleistocene of W US)
Cultural references In many parts of the United Kingdom, the term "gannet" is used to refer to people who steadily eat vast quantities of food, especially at public functions. Young gannets were historically used as a food source, a tradition still practised in Ness, Scotland, where they are called "guga". Like examples of continued traditional whale harvesting, the modern-day hunting of gannet chicks has generated controversy as to whether it should continue to be given "exemption from the ordinary protection afforded to sea birds in UK and EU law". The Ness hunt is currently limited to 2,000 chicks per year and dates back at least to the Iron Age. The hunt is considered to be sustainable, since between 1902 and 2003 gannet numbers in Scotland increased dramatically from 30,000 to 180,000. In The Bookshop Sketch, originally from At Last the 1948 Show (1967), a customer (Marty Feldman) asks the bookshop proprietor (John Cleese) for "the expurgated version" of Olsen's Standard Book of British Birds, "the one without the gannet", because he does not like gannets owing to their "long nasty beaks". Desperate to satisfy the customer, the proprietor tears the page about the gannet out of the book, only for the customer then to refuse to buy it because it is damaged. The sketch is reprised in Monty Python's Contractual Obligation Album, where the customer (Graham Chapman) says he does not like the gannet because "they wet their nests." In Series 1, Episode 3, of The F Word, Gordon Ramsay travels to the northwestern coast of Scotland and is shown how to prepare, cook and eat gannet.
Biology and health sciences
Pelecanimorphae
Animals
223034
https://en.wikipedia.org/wiki/Booby
Booby
A booby is a seabird in the genus Sula, part of the family Sulidae. Boobies are closely related to the gannets (Morus), which were formerly included in Sula. Systematics and evolution The genus Sula was introduced by the French zoologist Mathurin Jacques Brisson in 1760. The type species is the brown booby. The name is derived from súla, the Old Norse and Icelandic word for the other member of the family Sulidae, the gannet. The English name booby was possibly based on the Spanish slang term bobo, meaning "stupid", as these tame birds had a habit of landing on board sailing ships, where they were easily captured and eaten. Owing to this, boobies are often mentioned as having been caught and eaten by shipwrecked sailors, notably William Bligh of the Bounty and his adherents during their famous voyage after being set adrift by Fletcher Christian and his followers. Six of the ten extant Sulidae species called boobies are in the genus Sula, while the three gannet species are usually placed in the genus Morus. Abbott's booby was formerly included in Sula but is now placed in the monotypic genus Papasula, which represents an ancient lineage perhaps closer to Morus. Some authorities consider that all ten species should be considered congeneric in Sula. However, they are readily distinguished by means of osteology. The distinct lineages of gannets and boobies are known to have existed in such form since at least the Middle Miocene. The fossil record of boobies is not as well documented as that of gannets, either because booby speciation was lower from the late Miocene to the Pliocene (when gannet diversity was at its highest), or because the booby fossil record is as yet incomplete, most localities being equatorial or in the Southern Hemisphere. Behaviour Boobies hunt fish by diving from a height into the sea and pursuing their prey underwater. Facial air sacs under their skin cushion the impact with the water. Boobies are colonial breeders on islands and coasts. They normally lay one or more chalky-blue eggs on the ground or sometimes in a tree nest. Selective pressures, likely through competition for resources, have shaped the ecomorphology and foraging behaviours of the six species of boobies in the Pacific. List of species
Biology and health sciences
Pelecanimorphae
Animals
223204
https://en.wikipedia.org/wiki/Emperor%20penguin
Emperor penguin
The emperor penguin (Aptenodytes forsteri) is the tallest and heaviest of all living penguin species and is endemic to Antarctica. The male and female are similar in plumage and size, reaching in length and weighing from . Feathers of the head and back are black and sharply delineated from the white belly, pale-yellow breast and bright-yellow ear patches. Like all species of penguin, the emperor is flightless, with a streamlined body, and wings stiffened and flattened into flippers for a marine habitat. Its diet consists primarily of fish, but also includes crustaceans, such as krill, and cephalopods, such as squid. While hunting, the species can remain submerged for around 20 minutes, diving to a depth of . It has several adaptations to facilitate this, including an unusually structured haemoglobin that allows it to function at low oxygen levels, solid bones to reduce barotrauma, and the ability to reduce its metabolism and shut down non-essential organ functions. The only penguin species that breeds during the Antarctic winter, emperor penguins trek over the ice to breeding colonies which can contain up to several thousand individuals. The female lays a single egg, which is incubated for just over two months by the male while the female returns to the sea to feed; parents subsequently take turns foraging at sea and caring for their chick in the colony. The lifespan of an emperor penguin is typically 20 years in the wild, although observations suggest that some individuals may live to 50 years of age.
Taxonomy
Emperor penguins were described in 1844 by the English zoologist George Robert Gray, who created the generic name from Ancient Greek word elements, ἀ-πτηνο-δύτης [a-ptēno-dytēs], "without-wings-diver". Its specific name is in honour of the German naturalist Johann Reinhold Forster, who accompanied Captain James Cook on his second voyage and officially named five other penguin species. Forster may have been the first person to see emperor penguins in 1773–74, when he recorded a sighting of what he believed was the similar king penguin (A. patagonicus) but which, given the location, may very well have been the emperor penguin (A. forsteri). Together with the king penguin, the emperor penguin is one of two extant species in the genus Aptenodytes. Evidence of a third species—Ridgen's penguin (A. ridgeni)—has been found in fossil records from the late Pliocene, about three million years ago, in New Zealand. Studies of penguin behaviour and genetics have proposed that the genus Aptenodytes is basal; in other words, that it split off from a branch which led to all other living penguin species. Mitochondrial and nuclear DNA evidence suggests this split occurred around 40 million years ago.
Description
Adult emperor penguins are in length, averaging according to Stonehouse (1975). Because the standard method of bird measurement records length from bill to tail, body length and standing height are sometimes confused, and some reported heights even reach tall. More than a few papers state that they reach a standing height of rather than body length. Although the standing height of the emperor penguin is rarely given in scientific reports, Prévost (1961) recorded 86 wild individuals and measured a maximum height of . Friedman (1945) recorded measurements from 22 wild individuals, with heights ranging . Ksepka et al. (2012) measured standing heights of in 11 complete skins held in the American Museum of Natural History.
The weight ranges from and varies by sex, with males weighing more than females. It is the fifth heaviest living bird species, after only the larger varieties of ratite. The weight also varies by season, as both male and female penguins lose substantial mass while raising hatchlings and incubating their egg. Male emperor penguins must withstand the extreme Antarctic winter cold for more than two months while protecting their eggs; eating nothing during this time, they will lose around while waiting for their eggs to hatch. The mean weight of males at the start of the breeding season is and that of females is . After the breeding season this drops to for both sexes. Like all penguin species, emperor penguins have streamlined bodies to minimize drag while swimming, and wings that are more like stiff, flat flippers. The tongue is equipped with rear-facing barbs to prevent prey from escaping when caught. Males and females are similar in size and colouration. The adult has deep black dorsal feathers, covering the head, chin, throat, back, dorsal part of the flippers, and tail. The black plumage is sharply delineated from the light-coloured plumage elsewhere. The underparts of the wings and belly are white, becoming pale yellow in the upper breast, while the ear patches are bright yellow. The upper mandible of the long bill is black, and the lower mandible can be pink, orange or lilac. In juveniles, the auricular patches, chin and throat are white, while the bill is black. Emperor penguin chicks are typically covered with silver-grey down and have black heads and white masks. A chick with all-white plumage was seen in 2001, but was not considered to be an albino as it did not have pink eyes. Chicks weigh around after hatching, and fledge when they reach about 50% of adult weight. The emperor penguin's dark plumage fades to brown from November until February (the Antarctic summer), before the yearly moult in January and February. Moulting is rapid in this species compared with other birds, taking only around 34 days. Emperor penguin feathers emerge from the skin after they have grown to a third of their total length, and before old feathers are lost, to help reduce heat loss. New feathers then push out the old ones before finishing their growth. The average yearly survival rate of an adult emperor penguin has been measured at 95.1%, with an average life expectancy of 19.9 years. The same researchers estimated that 1% of emperor penguins hatched could feasibly reach an age of 50 years. In contrast, only 19% of chicks survive their first year of life. Therefore, 80% of the emperor penguin population comprises adults five years and older.
Vocalisation
As the species has no fixed nesting sites that individuals can use to locate their own partner or chick, emperor penguins must rely on vocal calls alone for identification. They use a complex set of calls that are critical to individual recognition between mates, parents and offspring, displaying the widest variation in individual calls of all penguin species. Vocalizing emperor penguins use two frequency bands simultaneously. Chicks use a frequency-modulated whistle to beg for food and to contact parents.
Adaptations to cold
The emperor penguin breeds in the coldest environment of any bird species; air temperatures may reach , and wind speeds may reach . Water temperature is a frigid , which is much lower than the emperor penguin's average body temperature of . The species has adapted in several ways to counteract heat loss.
Dense feathers provide 80–90% of its insulation, and it has a layer of sub-dermal fat which may be up to thick before breeding. While the density of contour feathers is approximately 9 per square centimetre (58 per square inch), a combination of dense afterfeathers and down feathers (plumules) likely plays a critical role in insulation. Muscles allow the feathers to be held erect on land, reducing heat loss by trapping a layer of air next to the skin. Conversely, the plumage is flattened in water, thus waterproofing the skin and the downy underlayer. Preening is vital in facilitating insulation and in keeping the plumage oily and water-repellent. The emperor penguin is able to thermoregulate (maintain its core body temperature) without altering its metabolism, over a wide range of temperatures. Known as the thermoneutral range, this extends from . Below this temperature range, its metabolic rate increases significantly, although an individual can maintain its core temperature from down to . Swimming, walking, and shivering are three mechanisms for increasing metabolism; a fourth process involves an increase in the breakdown of fats by enzymes, which is induced by the hormone glucagon. At temperatures above , an emperor penguin may become agitated as its body temperature and metabolic rate rise to increase heat loss. Raising its wings and exposing the undersides increases the exposure of its body surface to the air by 16%, facilitating further heat loss.
Adaptations to pressure and low oxygen
In addition to the cold, the emperor penguin encounters another stressful condition on deep dives—markedly increased pressure, of up to 40 times that at the surface, which in most other terrestrial organisms would cause barotrauma. The bones of the penguin are solid rather than air-filled, which eliminates the risk of mechanical barotrauma. While diving, the emperor penguin's oxygen use is markedly reduced, as its heart rate drops to as low as 15–20 beats per minute and non-essential organs are shut down, thus facilitating longer dives. Its haemoglobin and myoglobin can bind and transport oxygen at low blood concentrations; this allows the bird to function with very low oxygen levels that would otherwise result in loss of consciousness.
Distribution and habitat
The emperor penguin has a circumpolar distribution in the Antarctic, almost exclusively between the 66° and 77° south latitudes. It almost always breeds on stable pack ice near the coast and up to offshore. Breeding colonies are usually in areas where ice cliffs and icebergs provide some protection from the wind. Three land colonies have been reported: one (now disappeared) on a shingle spit at the Dion Islands on the Antarctic Peninsula, one on a headland at Taylor Glacier in Victoria Land, and most recently one at Amundsen Bay. Since 2009, a number of colonies have been reported on shelf ice rather than sea ice, in some cases moving to the shelf in years when sea ice forms late. The northernmost breeding population is on Snow Hill Island, near the northern tip of the Peninsula. Individual vagrants have been seen on Heard Island, South Georgia, and occasionally in New Zealand. The furthest north a vagrant has been recorded is Denmark, Western Australia, in November 2024.
This individual, believed to have originated from eastern Antarctica, was encountered by a group of surfers shortly after it arrived in Denmark, and was taken in by conservationists at the Department of Biodiversity, Conservation and Attractions to have its condition evaluated. In 2009, the total population of emperor penguins was estimated at around 595,000 adult birds, in 46 known colonies spread around the Antarctic and sub-Antarctic; around 35% of the known population lives north of the Antarctic Circle. Major breeding colonies were located at Cape Washington, Coulman Island in Victoria Land, Halley Bay, Cape Colbeck, and Dibble Glacier. Colonies are known to fluctuate over time, often breaking into "suburbs" which move apart from the parent group, and some have been known to disappear entirely. The Cape Crozier colony on the Ross Sea shrank drastically between the first visits by the Discovery Expedition in 1902–03 and the later visits by the Terra Nova Expedition in 1910–11; it was reduced to a few hundred birds, and may have come close to extinction due to changes in the position of the ice shelf. By the 1960s it had rebounded dramatically, but by 2009 it was again reduced to a small population of around 300.
Conservation status
In 2012, the emperor penguin was downgraded from a species of least concern to near threatened by the IUCN. Along with nine other species of penguin, it is currently under consideration for inclusion under the US Endangered Species Act. The primary causes of the increased risk of endangerment are declining food availability, due to the effects of climate change and industrial fisheries on the crustacean and fish populations. Other reasons for the species's proposed listing include disease, habitat destruction, and disturbance at breeding colonies by humans. Of particular concern is the impact of tourism. One study concluded that emperor penguin chicks in a crèche become more apprehensive following a helicopter approach to . Population declines of 50% in the Terre Adélie region have been observed due to an increased death rate among adult birds, especially males, during an abnormally prolonged warm period in the late 1970s, which resulted in reduced sea-ice coverage. On the other hand, egg hatching success rates declined when the sea-ice extent increased, and chick deaths also increased; the species is therefore considered to be highly sensitive to climatic changes. In 2009, the Dion Islands colony, which had been extensively studied since 1948, was reported to have completely disappeared at some point over the previous decade, the fate of the birds unknown; this was the first confirmed loss of an entire colony. Beginning in September 2015, a strong El Niño, strong winds, and record low amounts of sea ice resulted in "almost total breeding failure", with the deaths of thousands of emperor chicks, for three consecutive years at the Halley Bay colony, the second largest emperor penguin colony in the world. Researchers have attributed this loss to movement of breeding penguins to the Dawson-Lambton colony south, where a tenfold population increase was observed between 2016 and 2018. However, this increase is nowhere near the total number of breeding adults formerly at the Halley Bay colony. In January 2009, a study from the Woods Hole Oceanographic Institution concluded that global climate change could push the emperor penguin to the brink of extinction by the year 2100.
The study constructed a mathematical model to predict how the loss of sea ice from climate warming would affect a large colony of emperor penguins at Terre Adélie, Antarctica. The study forecast an 87% decline in the colony's population, from three thousand breeding pairs in 2009 to four hundred breeding pairs in 2100. Another study by the Woods Hole Oceanographic Institution in 2014 again concluded that emperor penguins are at risk from global warming, which is melting the sea ice. This study predicted that by 2100 all 45 colonies of emperor penguins will be declining in numbers, mostly due to loss of habitat. Loss of ice reduces the supply of krill, which is a primary food for emperor penguins. In December 2022, a new colony at Verleger Point in West Antarctica was discovered by satellite imaging, bringing the total known colonies to 66. In 2023, another study found that more than 90% of emperor penguin colonies could face "quasi-extinction" from "catastrophic breeding failure" due to the loss of sea ice caused by climate change.
Behaviour
The emperor penguin is a social animal in its nesting and its foraging behaviour; birds hunting together may coordinate their diving and surfacing. Individuals may be active day or night. A mature adult travels throughout most of the year between the breeding colony and ocean foraging areas; the species disperses into the oceans from January to March. The American physiologist Gerry Kooyman revolutionised the study of penguin foraging behaviour in 1971 when he published his results from attaching automatic dive-recording devices to emperor penguins. He found that the species reaches depths of , with dive periods of up to 18 minutes. Later research revealed that a small female had dived to a depth of near McMurdo Sound. It is possible that emperor penguins can dive even deeper and for even longer periods, as the accuracy of the recording devices is diminished at greater depths. Further study of one bird's diving behaviour revealed regular dives to in water around deep, and shallow dives of less than , interspersed with deep dives of more than in depths of . This was suggestive of feeding near or at the sea bottom. In 1994, a penguin from Auster rookery reached a depth of ; the entire dive lasted 21.8 minutes. Both male and female emperor penguins forage up to from colonies while collecting food to feed chicks, covering per individual per trip. A male returning to the sea after incubation heads directly out to areas of permanent open water, known as polynyas, around from the colony. An efficient swimmer, the emperor penguin exerts pressure with both its upward and downward strokes while swimming. The upward stroke works against buoyancy and helps maintain depth. Its average swimming speed is . On land, the emperor penguin alternates between walking with a wobbling gait and tobogganing—sliding over the ice on its belly, propelled by its feet and wing-like flippers. Like all penguins, it is flightless. The emperor penguin is a very powerful bird. In one case, a crew of six men attempting to capture a single male penguin for a zoo collection were repeatedly tossed around and knocked over before having to tackle the bird collectively, even though it weighs only about half as much as a man. As a defence against the cold, a colony of emperor penguins forms a compact huddle (also known as the turtle formation) ranging in size from ten to several hundred birds, with each bird leaning forward on a neighbour.
As the wind chill is least severe in the centre of the huddle, all the juveniles are usually gathered there. Those on the outside upwind tend to shuffle slowly around the edge of the formation and add themselves to its leeward edge, producing a slow churning action and giving each bird a turn on the inside and on the outside.
Predators
The emperor penguin's predators include birds and aquatic mammals. Southern giant petrels (Macronectes giganteus) are the predominant land predator of chicks, responsible for over one-third of chick deaths in some colonies; they also scavenge dead penguins. The south polar skua (Stercorarius maccormicki) mainly scavenges for dead chicks, as live chicks are usually too large to be attacked by the time of its annual arrival in the colony. Occasionally, a parent may attempt to defend its chick from attack, although it may be more passive if the chick is weak or sickly. The only predators known to attack healthy adults, and which attack emperor penguins in the ocean, are both mammals. The first is the leopard seal (Hydrurga leptonyx), which takes adult birds and fledglings soon after they enter the water. Orcas (Orcinus orca) mostly take adult birds at sea, although they will attack penguins of any age in or near water.
Courtship and breeding
Although emperor penguins can breed at around three years of age, they generally do not begin breeding for another one to three years. The yearly reproductive cycle begins at the start of the Antarctic winter, in March and April, when all mature emperor penguins travel to colonial nesting areas, often walking inland from the edge of the pack ice. The start of travel appears to be triggered by decreasing day lengths; emperor penguins in captivity have been induced successfully into breeding by using artificial lighting systems mimicking seasonal Antarctic day lengths. The British Antarctic Survey (BAS) used satellite imagery to find new emperor penguin breeding sites in Antarctica, a discovery that increased the estimated population by 5 to 10 percent, to around 278,000 breeding pairs. Given the remote locations and harsh weather conditions, penguin populations are found by scanning aerial imagery for enormous patches of ice stained with their guano. The new discoveries increased the number of known breeding sites from 50 to 61. The penguins start courtship in March or April, when the temperature can be as low as . A lone male gives an ecstatic display, where it stands still and places its head on its chest before inhaling and giving a courtship call for 1–2 seconds; it then moves around the colony and repeats the call. A male and female then stand face to face, with one extending its head and neck up and the other mirroring it; they both hold this posture for several minutes. Once in pairs, couples waddle around the colony together, with the female usually following the male. Before copulation, one bird bows deeply to its mate, its bill pointed close to the ground, and its mate then does the same. Contrary to popular belief, emperor penguins do not mate for life; they are serially monogamous, having only one mate each year and remaining faithful to that mate within the season. However, fidelity between years is only around 15%. The narrow window of opportunity available for mating appears to be an influence, as there is a priority to mate and breed which usually precludes waiting for the previous year's partner to arrive at the colony.
The female penguin lays one egg in May or early June; it is vaguely pear-shaped, pale greenish-white, and measures around . It represents just 2.3% of its mother's body weight, making it one of the smallest eggs relative to maternal weight of any bird species. The shell accounts for 15.7% of the weight of an emperor penguin egg; as in other penguin species, the shell is relatively thick, which helps minimize the risk of breakage. After laying, the mother's food reserves are exhausted; she very carefully transfers the egg to the male and then immediately returns to the sea for two months to feed. The transfer of the egg can be awkward and difficult, especially for first-time parents, and many couples drop or crack the egg in the process. When this happens, the chick inside is quickly lost, as the egg cannot withstand the sub-freezing temperatures on the icy ground for more than one to two minutes. When a couple loses an egg in this manner, their bond for the season is ended and both walk back to the sea; they will return to the colony the next year to try mating again. After a successful transfer of the egg, the female departs for the sea and the male spends the dark, stormy winter incubating the egg against his brood patch, a patch of skin without feathers. There he balances it on the tops of his feet, engulfing it with loose skin and feathers, for around 65–75 consecutive days until hatching. The emperor is the only penguin species in which this behaviour is observed; in all other penguin species both parents take shifts incubating. By the time the egg hatches, the male will have fasted for around 120 days since arriving at the colony. To survive the cold and savage winds of up to , the males huddle together, taking turns in the middle of the huddle. They have also been observed standing with their backs to the wind to conserve body heat. In the four months of travel, courtship, and incubation, the male may lose as much as , from a total mass of . Hatching may take as long as two or three days to complete, as the shell of the egg is thick. Newly hatched chicks are semi-altricial, covered with only a thin layer of down and entirely dependent on their parents for food and warmth. The chick usually hatches before the mother's return, and the father feeds it a curd-like substance composed of 59% protein and 28% lipid, which is produced by a gland in his oesophagus. This ability to produce "crop milk" is otherwise found among birds only in pigeons and flamingos. The father can produce this crop milk to sustain the chick temporarily, generally for 3 to 7 days, until the mother returns from fishing at sea with food to feed the chick properly. If the mother is delayed, the chick will die. Research indicates that between 10 and 20% of female emperor penguins do not return to their colony from foraging at sea, most of them victims of the harsh winter weather or of predators. The young chick is brooded in what is called the guard phase, spending time balanced on its parent's feet and kept warm by the brood patch. The female penguin returns at any time from hatching up to ten days afterwards, from mid-July to early August. She finds her mate among the hundreds of fathers by his vocal call and takes over caring for the chick, feeding it by regurgitating the partially digested fish, squid and krill that she has stored in her stomach.
The male is often reluctant to surrender the chick he has been caring for all winter to its mother, but he soon leaves to take his turn at sea, spending 3 to 4 weeks feeding there before returning. The parents then take turns, one brooding while the other forages at sea. If either parent is delayed or fails to return to the colony, the remaining parent will return to the sea to feed, leaving the chick to die. Abandoned eggs do not hatch, and orphaned chicks never survive. Female emperors that have failed to find a mate, or that have lost their own chick, may attempt to adopt a stray chick or steal the chick of another female. The mother of the chick and neighbouring females will fight to protect the chick, or to reclaim it if it has been successfully stolen. These scuffles, which can involve several birds, often result in the chick being smothered or trampled to death. Chicks which have been adopted or stolen are quickly abandoned once again, as it is impossible for the female to feed and care for a chick alone. The orphaned chicks wander around the colony attempting to seek food and protection from other adults. They will even try to shelter in an adult bird's brood patch already occupied by its own chick. These stray chicks are brusquely driven away by the adults and their chicks, and all of them rapidly weaken and die of starvation or freeze to death. About 45–50 days after hatching, the chicks form a crèche, huddling together for warmth and protection. During this time, both parents forage at sea and return periodically to feed their chicks. A crèche may consist of anywhere from around a dozen to several thousand chicks densely packed together, and is essential for surviving the low Antarctic temperatures. From early November, chicks begin moulting into juvenile plumage, which takes up to two months and is usually not completed by the time they leave the colony; adults cease feeding them during this time. All birds make the considerably shorter trek to the sea in December and January and spend the rest of the summer feeding there.
Feeding
The emperor penguin's diet consists mainly of fish, crustaceans and cephalopods, although its composition varies from population to population. Fish are usually the most important food source, and the Antarctic silverfish (Pleuragramma antarcticum) makes up the bulk of the bird's diet. Other prey commonly recorded include other fish of the family Nototheniidae, the glacial squid (Psychroteuthis glacialis), and the hooked squid species Kondakovia longimana, as well as Antarctic krill (Euphausia superba). The emperor penguin searches for prey in the open water of the Southern Ocean, in either ice-free areas of open water or tidal cracks in pack ice. One of its feeding strategies is to dive to around , where it can easily spot sympagic fish like the bald notothen (Pagothenia borchgrevinki) swimming against the bottom surface of the sea-ice; it swims up to the bottom of the ice and catches the fish. It then dives again and repeats the sequence about half a dozen times before surfacing to breathe.
Relationship with humans
In zoos and aquariums
Since the 1930s, there have been several attempts at keeping emperor penguins in captivity. Malcolm Davis of the National Zoological Park made early attempts, capturing several penguins from Antarctica. He successfully transferred penguins to the National Zoological Park on March 5, 1940, where they lived for up to 6 years.
Until the 1960s, keeping attempts were largely unsuccessful, as knowledge of penguin keeping in general was limited and acquired by trial and error. The first to achieve a level of success was Aalborg Zoo, where a chilled house was built especially for this Antarctic species. One individual lived for 20 years at the zoo, and a chick was hatched there but died shortly after. Today, the species is kept at just a few zoos and public aquariums in North America and Asia. Emperor penguins were first successfully bred at SeaWorld San Diego; more than 20 birds have hatched there since 1980. The emperor penguin is considered a flagship species; 55 individuals were counted in captivity in North American zoos and aquaria in 1999. In China, the emperor penguin was first bred at Nanjing Underwater World in 2009, followed by Laohutan Ocean Park in Dalian in 2010. Since then it has been kept and bred at a few other facilities in China, and the only confirmed twin emperor penguins (the species normally lays just one egg) hatched at Sun Asia Ocean World in Dalian in 2017. In Japan, the species is housed at Port of Nagoya Public Aquarium and Wakayama Adventure World, with successful hatching at Adventure World.
Penguin rescue, rehabilitation and release
In June 2011, a juvenile emperor penguin was found on the beach at Peka Peka, north of Wellington in New Zealand. He had consumed of sand, which he had apparently mistaken for snow, as well as sticks and stones, and had to undergo a number of operations to remove these and save his life. Following recovery, on 4 September the juvenile, named "Happy Feet" (after the 2006 film), was fitted with a tracking device and released into the Southern Ocean north of Campbell Island. However, 8 days later scientists lost contact with the bird, suggesting that the transmitter had fallen off (considered likely) or that he had been eaten by a predator (considered less likely).
Cultural references
The species' unique life cycle in such a harsh environment has been described in print and visual media. Apsley Cherry-Garrard, the Antarctic explorer, wrote in 1922: "Take it all in all, I do not believe anybody on Earth has a worse time than an emperor penguin". Widely distributed in cinemas in 2005, the French documentary La Marche de l'empereur, which was also released with the English title March of the Penguins, told the story of the penguins' reproductive cycle. The subject has been covered for the small screen five times by the BBC and presenter David Attenborough: first in episode five of the 1993 series on the Antarctic, Life in the Freezer, again in the 2001 series The Blue Planet, once again in the 2006 series Planet Earth, in Frozen Planet in 2011, and in a one-hour programme dedicated to the species in the 2018 series Dynasties. The animated movie Happy Feet (2006, followed by a sequel, Happy Feet Two, 2011) features emperor penguins as its primary characters, including one in particular that loves to dance; although a comedy, it too depicts their life cycle and promotes an underlying serious environmental message about threats from global warming and depletion of food sources by overfishing. The animated movie Surf's Up (2007) features a surfing emperor penguin named Zeke "Big-Z" Topanga. More than 30 countries have depicted the bird on their stamps – Australia, Great Britain, Chile and France have each issued several. It has also been depicted on a 1962 10-franc stamp as part of an Antarctic expedition series.
Canadian band The Tragically Hip composed the song "Emperor Penguin" for their 1998 album Phantom Power. The Emperor Lays an Egg is a 2004 non-fiction children's picture book by Brenda Z. Guiberson. DC Comics' crime boss character Oswald Chesterfield Cobblepot, aka "The Penguin", styles himself after an emperor penguin, a fact which is often referenced in stories, e.g., in his occasional alias "Forster Aptenodytes."
Biology and health sciences
Sphenisciformes
Animals
223325
https://en.wikipedia.org/wiki/Software%20design
Software design
Software design is the process of conceptualizing how a software system will work before it is implemented or modified. Software design also refers to the direct result of the design process: the concepts of how the software will work, consisting of both design documentation and undocumented concepts. Software design is usually directed by goals for the resulting system and involves problem-solving and planning, including both high-level software architecture and low-level component and algorithm design. In terms of the waterfall development process, software design is the activity that follows requirements specification and precedes coding.
General process
The design process enables a designer to model various aspects of a software system before it exists. Creativity, past experience, a sense of what makes "good" software, and a commitment to quality are success factors for a competent design. However, the design process is not always a straightforward procedure. The software design model can be compared to an architect's plans for a house. High-level plans represent the totality of the house (e.g., a three-dimensional rendering of the house). Lower-level plans provide guidance for constructing each detail (e.g., the plumbing layout). Similarly, the software design model provides a variety of views of the proposed software solution.
Iterative Design for Software Components
Software systems inherently deal with uncertainties, and the size of software components can significantly influence a system's outcomes, both positively and negatively. Neal Ford and Mark Richards propose an iterative approach to address the challenge of identifying and right-sizing components. This method emphasizes continuous refinement as teams develop a more nuanced understanding of system behavior and requirements. The approach typically involves a cycle with several stages:
A high-level partitioning strategy is established, often categorized as technical or domain-based. Guidelines for the smallest meaningful deployable unit, referred to as "quanta", are defined. While these foundational decisions are made early, they may be revisited later in the cycle if necessary.
Initial components are identified based on the established strategy.
Requirements are assigned to the identified components.
The roles and responsibilities of each component are analyzed to ensure clarity and minimize overlap.
Architectural characteristics, such as scalability, fault tolerance, and maintainability, are evaluated.
Components may be restructured based on feedback from development teams.
This cycle serves as a general framework and can be adapted to different domains.
Value
Software design documentation may be reviewed or presented to allow constraints, specifications and even requirements to be adjusted prior to coding. Redesign may occur after a review of a programmed simulation or prototype. It is possible to design software in the process of coding, without a plan or requirements analysis, but for more complex projects this is less feasible. A separate design prior to coding allows multidisciplinary designers and subject-matter experts (SMEs) to collaborate with programmers in order to produce software that is useful and technically sound.
Requirements analysis
One component of software design is software requirements analysis (SRA). SRA is a part of the software development process that lists specifications used in software engineering. The output of the analysis is a set of smaller problems to solve.
In contrast, the design focuses on capabilities, and thus multiple designs for the same problem can exist. Depending on the environment, the design often varies, whether it is created from reliable frameworks or implemented with suitable design patterns.
Artifacts
A design process may include the production of artifacts such as flow charts, use cases, pseudocode, Unified Modeling Language models and other fundamental modeling concepts. For user-centered software, design may involve user experience design, yielding a storyboard to help determine those specifications. Sometimes the output of a design process is design documentation.
Design principles
Basic design principles enable a software engineer to navigate the design process. Davis suggests a set of principles for software design, which have been adapted and extended in the following list:
The design process should not suffer from "tunnel vision". A good designer should consider alternative approaches, judging each based on the requirements of the problem and the resources available to do the job.
The design should be traceable to the analysis model. Because a single element of the design model can often be traced back to multiple requirements, it is necessary to have a means for tracking how requirements have been satisfied by the design model.
The design should not reinvent the wheel. Systems are constructed using a set of design patterns, many of which have likely been encountered before. These patterns should always be chosen as an alternative to reinvention. Time is short and resources are limited; design time should be invested in representing truly new ideas by integrating patterns that already exist (when applicable).
The design should "minimize the intellectual distance" between the software and the problem as it exists in the real world. That is, the structure of the software design should, whenever possible, mimic the structure of the problem domain.
The design should exhibit uniformity and integration. A design is uniform if it appears fully coherent. In order to achieve this outcome, rules of style and format should be defined for a design team before design work begins. A design is integrated if care is taken in defining interfaces between design components.
The design should be structured to accommodate change. The design concepts discussed in the next section enable a design to achieve this principle.
The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered. Well-designed software should never "bomb"; it should be designed to accommodate unusual circumstances, and if it must terminate processing, it should do so in a graceful manner.
Design is not coding, coding is not design. Even when detailed procedural designs are created for program components, the level of abstraction of the design model is higher than the source code. The only design decisions made at the coding level should address the small implementation details that enable the procedural design to be coded.
The design should be assessed for quality as it is being created, not after the fact. A variety of design concepts and design measures are available to assist the designer in assessing quality throughout the development process.
The design should be reviewed to minimize conceptual (semantic) errors. There is sometimes a tendency to focus on minutiae when the design is reviewed, missing the forest for the trees.
A design team should ensure that major conceptual elements of the design (omissions, ambiguity, inconsistency) have been addressed before worrying about the syntax of the design model.
Design concepts
Design concepts provide a designer with a foundation from which more sophisticated methods can be applied. A set of design concepts has evolved, including:
Abstraction - The process or result of generalization by reducing the information content of a concept or an observable phenomenon, typically to retain only information that is relevant for a particular purpose. It is an act of representing essential features without including the background details or explanations.
Refinement - The process of elaboration. A hierarchy is developed by decomposing a macroscopic statement of function in a step-wise fashion until programming language statements are reached. In each step, one or several instructions of a given program are decomposed into more detailed instructions. Abstraction and refinement are complementary concepts.
Modularity - Software architecture is divided into components called modules.
Software architecture - The overall structure of the software and the ways in which that structure provides conceptual integrity for a system. Good software architecture will yield a good return on investment with respect to the desired outcome of the project, e.g. in terms of performance, quality, schedule and cost.
Control hierarchy - A program structure that represents the organization of a program component and implies a hierarchy of control.
Structural partitioning - The program structure can be divided horizontally and vertically. Horizontal partitions define separate branches of the modular hierarchy for each major program function. Vertical partitioning suggests that control and work should be distributed top-down in the program structure.
Data structure - A representation of the logical relationship among individual elements of data.
Software procedure - Focuses on the processing of each module individually.
Information hiding - Modules should be specified and designed so that information contained within a module is inaccessible to other modules that have no need for such information.
In his object model, Grady Booch mentions abstraction, encapsulation, modularisation, and hierarchy as fundamental software design principles. The acronym PHAME (Principles of Hierarchy, Abstraction, Modularisation, and Encapsulation) is sometimes used to refer to these four fundamental principles.
Design considerations
There are many aspects to consider in the design of a piece of software. The importance of each consideration should reflect the goals and expectations that the software is being created to meet. Some of these aspects are:
Compatibility - The software is able to operate with other products that are designed for interoperability with it. For example, a piece of software may be backward-compatible with an older version of itself.
Extensibility - New capabilities can be added to the software without major changes to the underlying architecture.
Modularity - The resulting software comprises well-defined, independent components, which leads to better maintainability. The components can then be implemented and tested in isolation before being integrated to form the desired software system. This allows division of work in a software development project.
Fault-tolerance - The software is resistant to and able to recover from component failure.
Maintainability - A measure of how easily bug fixes or functional modifications can be accomplished. High maintainability can be the product of modularity and extensibility.
Reliability (software durability) - The software is able to perform a required function under stated conditions for a specified period of time.
Reusability - The ability to use some or all of the aspects of the preexisting software in other projects with little to no modification.
Robustness - The software is able to operate under stress or tolerate unpredictable or invalid input. For example, it can be designed with resilience to low-memory conditions.
Security - The software is able to withstand and resist hostile acts and influences.
Usability - The software user interface must be usable for its target user/audience. Default values for parameters must be chosen so that they are a good choice for the majority of users.
Performance - The software performs its tasks within a time-frame that is acceptable for the user, and does not require too much memory.
Portability - The software should be usable across a number of different conditions and environments.
Scalability - The software adapts well to increasing data, added features, or growing numbers of users. According to Marc Brooker: "a system is scalable in the range where marginal cost of additional workload is nearly constant." Serverless technologies fit this definition, although the total cost of ownership, and not just the infrastructure cost, must be considered.
Modeling language
A modeling language can be used to express information, knowledge or systems in a structure that is defined by a consistent set of rules. These rules are used for interpretation of the components within the structure. A modeling language can be graphical or textual. Examples of graphical modeling languages for software design include:
Architecture description language (ADL) - A language used to describe and represent the software architecture of a software system.
Business Process Modeling Notation (BPMN) - An example of a process modeling language.
EXPRESS and EXPRESS-G (ISO 10303-11) - An international standard general-purpose data modeling language.
Extended Enterprise Modeling Language (EEML) - Commonly used for business process modeling across a number of layers.
Flowcharts - Schematic representations of algorithms or other step-wise processes.
Fundamental Modeling Concepts (FMC) - A modeling language for software-intensive systems.
IDEF - A family of modeling languages, the most notable of which include IDEF0 for functional modeling, IDEF1X for information modeling, and IDEF5 for modeling ontologies.
Jackson Structured Programming (JSP) - A method for structured programming based on correspondences between data stream structure and program structure.
LePUS3 - An object-oriented visual design description language and a formal specification language suitable primarily for modeling large object-oriented (Java, C++, C#) programs and design patterns.
Unified Modeling Language (UML) - A general modeling language for describing software both structurally and behaviorally. It has a graphical notation and allows for extension with a profile.
Alloy - A general-purpose specification language for expressing complex structural constraints and behavior in a software system. It provides a concise language based on first-order relational logic.
Systems Modeling Language (SysML) - A general-purpose modeling language for systems engineering.
Service-oriented modeling framework (SOMF)
Design patterns
A software designer may identify a design aspect which has been visited and perhaps even solved by others in the past. A template or pattern describing a solution to a common problem is known as a design pattern. The reuse of such patterns can increase software development velocity.
Code as design
The difficulty of using the term "design" in relation to software is that in some senses, the source code of a program is the design for the program that it produces. To the extent that this is true, "software design" refers to the design of the design. Edsger W. Dijkstra referred to this layering of semantic levels as the "radical novelty" of computer programming, and Donald Knuth used his experience writing TeX to describe the futility of attempting to design a program prior to implementing it.
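To make the design-pattern idea above concrete, the following is a minimal Python sketch of one widely known pattern, the Strategy pattern. The shipping-cost domain and every name in it are invented purely for illustration, not taken from any particular pattern catalogue.

from typing import Callable

# Two interchangeable pricing algorithms (the "strategies"); both
# satisfy the same interface: a callable from weight to cost.
def flat_rate(weight_kg: float) -> float:
    return 5.0

def by_weight(weight_kg: float) -> float:
    return 1.2 * weight_kg

def quote(weight_kg: float, strategy: Callable[[float], float]) -> float:
    # The client depends only on the strategy's interface,
    # not on any concrete implementation.
    return strategy(weight_kg)

print(quote(10.0, flat_rate))   # 5.0
print(quote(10.0, by_weight))   # 12.0

Because the algorithm is passed in rather than hard-coded, new pricing rules can be added without modifying the client, which is the kind of reuse the pattern concept is meant to capture.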
Technology
Software development: General
null
223356
https://en.wikipedia.org/wiki/Power%20inverter
Power inverter
A power inverter, inverter, or invertor is a power electronic device or circuitry that changes direct current (DC) to alternating current (AC). The resulting AC frequency depends on the particular device employed. Inverters do the opposite of rectifiers, which were originally large electromechanical devices converting AC to DC. The input voltage, output voltage and frequency, and overall power handling depend on the design of the specific device or circuitry. The inverter does not produce any power; the power is provided by the DC source. A power inverter can be entirely electronic or may be a combination of mechanical effects (such as a rotary apparatus) and electronic circuitry. Static inverters do not use moving parts in the conversion process. Power inverters are primarily used in electrical power applications where high currents and voltages are present; circuits that perform the same function for electronic signals, which usually have very low currents and voltages, are called oscillators.
Input and output
Input voltage
A typical power inverter device or circuit requires a stable DC power source capable of supplying enough current for the intended power demands of the system. The input voltage depends on the design and purpose of the inverter. Examples include:
12 V DC, for smaller consumer and commercial inverters that typically run from a rechargeable 12 V lead acid battery or automotive electrical outlet.
24, 36, and 48 V DC, which are common standards for home energy systems.
200 to 400 V DC, when power is from photovoltaic solar panels.
300 to 450 V DC, when power is from electric vehicle battery packs in vehicle-to-grid systems.
Hundreds of thousands of volts, where the inverter is part of a high-voltage direct current power transmission system.
Output waveform
An inverter may produce a square wave, sine wave, modified sine wave, pulsed sine wave, or near-sine pulse-width modulated wave (PWM), depending on circuit design. Common types of inverters produce square waves or quasi-square waves. One measure of the purity of a sine wave is the total harmonic distortion (THD). Technical standards for commercial power distribution grids require less than 3% THD in the wave shape at the customer's point of connection. IEEE Standard 519 recommends less than 5% THD for systems connecting to a power grid. There are two basic designs for producing household plug-in voltage from a lower-voltage DC source. The first uses a switching boost converter to produce a higher-voltage DC and then converts it to AC. The second converts DC to AC at battery level and uses a line-frequency transformer to create the output voltage.
Square wave
A 50% duty cycle square wave is one of the simplest waveforms an inverter design can produce, but it carries about 48.3% THD relative to its fundamental sine wave. Thus, a square wave output can produce undesired "humming" noises when connected to audio equipment and is better suited to low-sensitivity applications such as lighting and heating.
Sine wave
A power inverter device that produces a multiple-step sinusoidal AC waveform is referred to as a sine wave inverter. To more clearly distinguish inverters whose outputs have much less distortion than modified sine wave (three-step) designs, manufacturers often use the phrase pure sine wave inverter.
Almost all consumer-grade inverters that are sold as "pure sine wave inverters" do not produce a smooth sine wave output at all, just a less choppy output than square wave (two-step) and modified sine wave (three-step) inverters. However, this is not critical for most electronics, which deal with the output quite well. Where power inverter devices substitute for standard line power, a sine wave output is desirable because many electrical products are engineered to work best with a sine wave AC power source. The standard electric utility provides a sine wave, typically with minor imperfections but sometimes with significant distortion. Sine wave inverters with more than three steps in the wave output are more complex and have significantly higher cost than modified sine wave (three-step) or square wave types of the same power handling. Switched-mode power supply (SMPS) devices, such as personal computers or DVD players, function on modified sine wave power. AC motors directly operated on non-sinusoidal power may produce extra heat, may have different speed-torque characteristics, or may produce more audible noise than when running on sinusoidal power.
Modified sine wave
The modified sine wave is the sum of two square waves, one of which is delayed one-quarter of the period with respect to the other. The result is a repeated voltage step sequence of zero, peak positive, zero, peak negative, and again zero. The resulting waveform approximates the shape of a sinusoidal voltage better than a single square wave does. Most inexpensive consumer power inverters produce a modified sine wave rather than a pure sine wave. If the waveform is chosen to have its peak voltage values for half of the cycle time, the peak-to-RMS voltage ratio is the same as for a sine wave. The DC bus voltage may be actively regulated, or the "on" and "off" times can be modified to maintain the same RMS output value as the DC bus voltage varies. By changing the pulse width, the harmonic spectrum can be changed. The lowest THD for a three-step modified sine wave is about 30%, reached when each pulse spans 130 degrees of its half cycle; this is considerably lower than the roughly 48% of a square wave. The ratio of on-to-off time can be adjusted to vary the RMS voltage while maintaining a constant frequency, a technique called pulse-width modulation (PWM). The generated gate pulses are given to each switch in accordance with the developed pattern to obtain the desired output. The harmonic spectrum in the output depends on the width of the pulses and the modulation frequency. It can be shown that the minimum distortion of a three-level waveform is reached when the pulses extend over 130 degrees of the waveform, but the resulting voltage will still have about 30% THD, higher than commercial standards for grid-connected power sources. When operating induction motors, voltage harmonics are usually not of concern; however, harmonic distortion in the current waveform introduces additional heating and can produce pulsating torques. Numerous items of electric equipment will operate quite well on modified sine wave power inverter devices, especially resistive loads such as traditional incandescent light bulbs. Items with a switched-mode power supply operate almost entirely without problems, but if the item has a mains transformer, this can overheat depending on how marginally it is rated.
However, the load may operate less efficiently owing to the harmonics associated with a modified sine wave, and it may produce a humming noise during operation. This also affects the efficiency of the system as a whole, since the manufacturer's nominal conversion efficiency does not account for harmonics. Therefore, pure sine wave inverters may provide significantly higher efficiency than modified sine wave inverters. Most AC motors will run on MSW inverters with an efficiency reduction of about 20% owing to the harmonic content, although they may be quite noisy; a series LC filter tuned to the fundamental frequency may help. A common modified sine wave inverter topology found in consumer power inverters is as follows: an onboard microcontroller rapidly switches power MOSFETs on and off at high frequency (around 50 kHz). The MOSFETs directly pull from a low-voltage DC source (such as a battery). This signal then goes through step-up transformers (generally many smaller transformers are placed in parallel to reduce the overall size of the inverter) to produce a higher-voltage signal. The output of the step-up transformers is then filtered by capacitors to produce a high-voltage DC supply. Finally, this DC supply is pulsed with additional power MOSFETs by the microcontroller to produce the final modified sine wave signal. More complex inverters use more than two voltages to form a multiple-stepped approximation to a sine wave. These can further reduce voltage and current harmonics and THD compared to an inverter using only alternating positive and negative pulses, but such inverters require additional switching components, increasing cost.
Near sine wave PWM
Some inverters use PWM to create a waveform that can be low-pass filtered to re-create the sine wave. These require only one DC supply, in the manner of the MSW designs, but the switching takes place at a far faster rate, typically many kHz, so that the varying width of the pulses can be smoothed to create the sine wave. If a microprocessor is used to generate the switching timing, the harmonic content and efficiency can be closely controlled.
Output frequency
The AC output frequency of a power inverter device is usually the same as standard power line frequency, 50 or 60 hertz. The exception is in designs for motor driving, where a variable frequency results in variable speed control. Also, if the output of the device or circuit is to be further conditioned (for example stepped up), the frequency may be much higher for good transformer efficiency.
Output voltage
The AC output voltage of a power inverter is often regulated to be the same as the grid line voltage, typically 120 or 240 VAC at the distribution level, even when there are changes in the load that the inverter is driving. This allows the inverter to power numerous devices designed for standard line power. Some inverters also allow selectable or continuously variable output voltages.
Output power
A power inverter will often have an overall power rating expressed in watts or kilowatts. This describes the power that will be available to the device the inverter is driving and, indirectly, the power that will be needed from the DC source. Smaller popular consumer and commercial devices designed to mimic line power typically range from 150 to 3000 watts. Not all inverter applications are solely or primarily concerned with power delivery; in some cases the frequency and/or waveform properties are used by the follow-on circuit or device.
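The distortion figures quoted above for square wave and modified sine wave outputs can be checked with a short calculation. The following Python sketch, offered as a back-of-envelope verification rather than an engineering tool, derives the THD of a unit-amplitude three-level waveform from its total RMS value and the amplitude of its fundamental; a pulse spanning the full 180 degrees of each half cycle is the plain square wave.

import math

def thd_three_level(pulse_width_deg: float) -> float:
    # Three-level waveform: +1 for pulse_width_deg of each 180-degree
    # half cycle (pulse centred in the half cycle), -1 mirrored in the
    # other half cycle, 0 elsewhere.
    total_rms_sq = pulse_width_deg / 180.0  # mean square of the waveform
    b1 = (4.0 / math.pi) * math.sin(math.radians(pulse_width_deg) / 2.0)
    fund_rms_sq = b1 ** 2 / 2.0             # mean square of the fundamental
    return math.sqrt(total_rms_sq / fund_rms_sq - 1.0)

print(f"square wave (180 deg pulse):   {thd_three_level(180):.1%}")  # ~48.3%
print(f"modified sine (130 deg pulse): {thd_three_level(130):.1%}")  # ~29%

The 130-degree case comes out at about 29%, matching the roughly 30% minimum cited above for three-step modified sine wave designs.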
Batteries
The runtime of an inverter powered by batteries depends on the battery capacity and the amount of power being drawn from the inverter at a given time. As the amount of equipment drawing on the inverter increases, the runtime decreases. To prolong the runtime, additional batteries can be added. A common rule-of-thumb formula for inverter battery capacity (a worked sketch appears below) is:
Battery capacity (Ah) = total load (W) × usage time (h) / input voltage (V)
When adding more batteries to an inverter, there are two basic options for installation:
Series configuration: if the goal is to increase the overall input voltage to the inverter, batteries can be daisy-chained in a series configuration. In a series configuration, if a single battery dies, the other batteries will not be able to power the load.
Parallel configuration: if the goal is to increase capacity and prolong the runtime of the inverter, batteries can be connected in parallel. This increases the overall ampere-hour (Ah) rating of the battery set. If a single battery is discharged, however, the other batteries will then discharge through it. This can lead to rapid discharge of the entire pack, or even an overcurrent and possible fire. To avoid this, large paralleled batteries may be connected via diodes, or via intelligent monitoring with automatic switching, to isolate an under-voltage battery from the others.
Applications
DC power source usage
An inverter converts the DC electricity from sources such as batteries or fuel cells to AC electricity. The electricity can be at any required voltage; in particular, it can operate AC equipment designed for mains operation, or be rectified to produce DC at any desired voltage.
Uninterruptible power supplies
An uninterruptible power supply (UPS) uses batteries and an inverter to supply AC power when mains power is not available. When mains power is restored, a rectifier supplies DC power to recharge the batteries.
Electric motor speed control
Inverter circuits designed to produce a variable output voltage range are often used within motor speed controllers. The DC power for the inverter section can be derived from a normal AC wall outlet or some other source. Control and feedback circuitry is used to adjust the final output of the inverter section, which ultimately determines the speed of the motor operating under its mechanical load. Motor speed control needs are numerous and include industrial motor-driven equipment, electric vehicles, rail transport systems, and power tools. (See related: variable-frequency drive.) Switching states are developed for positive, negative, and zero voltages according to a predetermined switching pattern, and the resulting gate pulses are given to each switch to obtain the desired output.
In refrigeration compressors
An inverter can be used to control the speed of the compressor motor to drive variable refrigerant flow in a refrigeration or air conditioning system to regulate system performance. Such installations are known as inverter compressors. Traditional methods of refrigeration regulation use single-speed compressors switched on and off periodically; inverter-equipped systems have a variable-frequency drive that controls the speed of the motor, and thus the compressor and the cooling output.
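Returning to the battery-capacity rule of thumb given under Batteries above, here is a minimal sketch. The function name and example figures are illustrative assumptions, and real sizing would also derate for depth of discharge and inverter efficiency:

```python
def battery_capacity_ah(total_load_watts, usage_hours, input_volts=12.0):
    """Rule-of-thumb amp-hour capacity: load (W) x time (h) / voltage (V)."""
    return total_load_watts * usage_hours / input_volts

# Run a 300 W load for 4 hours from a 12 V battery bank:
print(f"{battery_capacity_ah(300, 4):.0f} Ah")  # 100 Ah
```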
The variable-frequency AC from the inverter drives a brushless or induction motor, the speed of which is proportional to the frequency of the AC it is fed, so the compressor can be run at variable speeds; eliminating compressor stop-start cycles increases efficiency. A microcontroller typically monitors the temperature in the space to be cooled and adjusts the speed of the compressor to maintain the desired temperature. The additional electronics and system hardware add cost to the equipment, but can result in substantial savings in operating costs. The first inverter air conditioners were released by Toshiba in Japan in 1981.
Power grid
Grid-tied inverters are designed to feed into the electric power distribution system. They transfer synchronously with the line and have as little harmonic content as possible. They also need a means of detecting the presence of utility power for safety reasons, so as not to continue to dangerously feed power to the grid during a power outage.
Synchronverters are inverters designed to simulate a rotating generator, and can be used to help stabilize grids. They can be designed to react faster than normal generators to changes in grid frequency, giving conventional generators a chance to respond to very sudden changes in demand or production. Large inverters, rated at several hundred megawatts, are used to deliver power from high-voltage direct current transmission systems to alternating current distribution systems.
Solar
A solar inverter is a balance of system (BOS) component of a photovoltaic system and can be used for both grid-connected and off-grid (standalone) systems. Solar inverters have special functions adapted for use with photovoltaic arrays, including maximum power point tracking and anti-islanding protection. Solar micro-inverters differ from conventional inverters in that an individual micro-inverter is attached to each solar panel; this can improve the overall efficiency of the system. The output from several micro-inverters is then combined and often fed to the electrical grid. In other applications, a conventional inverter can be combined with a battery bank maintained by a solar charge controller; this combination of components is often referred to as a solar generator. Solar inverters are also used in spacecraft photovoltaic systems.
Induction heating
Inverters convert low-frequency mains AC power to a higher frequency for use in induction heating. To do this, AC power is first rectified to provide DC power; the inverter then changes the DC power to high-frequency AC power. In multilevel designs (discussed further under Advanced designs below), reducing the number of separate DC sources employed makes the structure more reliable, and the output voltage has higher resolution, because the increased number of steps lets the reference sinusoidal voltage be approximated more closely. This configuration has recently become popular in AC power supply and adjustable-speed drive applications, and such inverters can avoid extra clamping diodes or voltage-balancing capacitors. There are three kinds of level-shifted modulation techniques, namely:
Phase opposition disposition (POD)
Alternative phase opposition disposition (APOD)
Phase disposition (PD)
HVDC power transmission
With HVDC power transmission, AC power is rectified and high-voltage DC power is transmitted to another location. At the receiving location, an inverter in an HVDC converter station converts the power back into AC. The inverter must be synchronized with grid frequency and phase, and minimize harmonic generation.
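The maximum power point tracking mentioned under Solar above is often implemented as a perturb-and-observe loop. The sketch below is a minimal illustration against a toy panel model; the function names, the parabola-shaped power curve, and all numbers are assumptions for demonstration, not a real inverter API:

```python
def perturb_and_observe(panel_power, v_start=20.0, step=0.5, iters=200):
    """Minimal perturb-and-observe MPPT loop.

    Nudge the operating voltage in one direction; if the measured power
    drops, reverse direction. The voltage then hunts around the maximum
    power point.
    """
    v, previous_power, direction = v_start, 0.0, +1
    for _ in range(iters):
        power = panel_power(v)
        if power < previous_power:
            direction = -direction  # overshot the peak: turn around
        previous_power = power
        v += direction * step
    return v

# Toy PV curve with maximum power near 30 V (purely illustrative):
toy_panel = lambda v: max(0.0, 200.0 - 0.5 * (v - 30.0) ** 2)
print(f"settled near {perturb_and_observe(toy_panel):.1f} V")
```

The loop never settles exactly; it oscillates by one step around the peak, which is why practical trackers use small or adaptive step sizes.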
Electroshock weapons
Electroshock weapons and tasers have a DC/AC inverter to generate several tens of thousands of volts AC out of a small 9 V DC battery. First, the 9 V DC is converted to 400–2000 V AC with a compact high-frequency transformer, then rectified and temporarily stored in a high-voltage capacitor until a pre-set threshold voltage, set by way of an air gap or TRIAC, is reached. At that point the capacitor dumps its entire charge into a pulse transformer, which steps it up to a final output voltage of 20–60 kV. A variant of the principle is also used in electronic flash units and bug zappers, though they rely on a capacitor-based voltage multiplier to achieve their high voltage.
Miscellaneous
Typical applications for power inverters include:
Portable consumer devices that allow the user to connect a battery, or set of batteries, to the device to produce AC power to run various electrical items such as lights, televisions, kitchen appliances, and power tools.
Use in power generation systems, such as electric utility companies or solar generating systems, to convert DC power to AC power.
Use within any larger electronic system where an engineering need exists for deriving an AC source from a DC source. For example, a DC-powered electronic device may contain a small inverter to power an electroluminescent or fluorescent backlight, which requires high-frequency AC.
Utility frequency conversion: if a user in (say) a 50 Hz country needs a 60 Hz supply to power frequency-specific equipment, such as a small motor or some electronics, it is possible to convert the frequency by running an inverter with a 60 Hz output from a DC source, such as a 12 V power supply running from the 50 Hz mains.
Circuit description
Basic design
In one simple inverter circuit, DC power is connected to a transformer through the center tap of the primary winding. A relay switch is rapidly switched back and forth to allow current to flow back to the DC source following two alternate paths, through one end of the primary winding and then the other. The alternation of the direction of current in the primary winding of the transformer produces alternating current (AC) in the secondary circuit.
The electromechanical version of the switching device includes two stationary contacts and a spring-supported moving contact. The spring holds the movable contact against one of the stationary contacts, and an electromagnet pulls the movable contact to the opposite stationary contact. The current in the electromagnet is interrupted by the action of the switch, so that the switch continually alternates rapidly back and forth. This type of electromechanical inverter switch, called a vibrator or buzzer, was once used in vacuum tube automobile radios. A similar mechanism has been used in doorbells, buzzers, and tattoo machines.
As they became available with adequate power ratings, transistors and various other types of semiconductor switches have been incorporated into inverter circuit designs. Certain designs, especially for large systems of many kilowatts, use thyristors (SCRs). SCRs provide large power-handling capability in a semiconductor device and can readily be controlled over a variable firing range.
The switch in the simple inverter described above, when not coupled to an output transformer, produces a square voltage waveform due to its simple off-and-on nature, as opposed to the sinusoidal waveform that is the usual waveform of an AC power supply.
Using Fourier analysis, periodic waveforms are represented as the sum of an infinite series of sine waves. The sine wave that has the same frequency as the original waveform is called the fundamental component. The other sine waves in the series, called harmonics, have frequencies that are integral multiples of the fundamental frequency. Fourier analysis can be used to calculate the total harmonic distortion (THD): the square root of the sum of the squares of the harmonic voltages divided by the fundamental voltage,

\mathrm{THD} = \frac{\sqrt{V_2^2 + V_3^2 + V_4^2 + \cdots + V_n^2}}{V_1}.

Advanced designs
There are many different power circuit topologies and control strategies used in inverter designs. Different design approaches address various issues that may be more or less important depending on the way the inverter is intended to be used. For example, an electric motor in a moving car can become a source of energy and, with the right inverter topology (a full H-bridge), can charge the car battery when decelerating or braking. In a similar manner, the right topology can invert the roles of "source" and "load": if, for example, the voltage is higher on the AC "load" side (say, by adding a solar inverter, similar to a gen-set but solid state), energy can flow back into the DC "source" or battery.
Based on the basic H-bridge topology, there are two fundamental control strategies, called the basic frequency-variable bridge converter and PWM control. In the H-bridge circuit, the top-left switch is labelled "S1", and the others "S2", "S3", and "S4" in counterclockwise order. For the basic frequency-variable bridge converter, the switches can be operated at the same frequency as the AC in the electric grid; it is the rate at which the switches open and close that determines the AC frequency. When S1 and S4 are on and the other two are off, the load is provided with positive voltage, and vice versa. The on-off states of the switches can be controlled to adjust the AC magnitude and phase, and also to eliminate certain harmonics. This includes controlling the switches to create notches, or zero-state regions, in the output waveform, or adding the outputs of two or more converters in parallel that are phase-shifted with respect to one another.
Another method is PWM. Unlike the basic frequency-variable bridge converter, in the PWM control strategy only two switches, S3 and S4, operate at the frequency of the AC side or at any low frequency. The other two switch much faster (typically around 100 kHz) to create square voltages of the same magnitude but of differing durations, which behaves like a voltage of varying magnitude on a longer time scale.
These two strategies create different harmonics. For the first, Fourier analysis shows that the magnitude of the kth harmonic is 4/(πk) times the DC bus voltage (for odd k), so the majority of the harmonic energy is concentrated in the lower-order harmonics. For the PWM strategy, the energy of the harmonics lies at higher frequencies because of the fast switching (a numerical comparison of the two spectra is sketched below). These different harmonic characteristics lead to different THD and harmonics-elimination requirements. Similar to THD, the concept of "waveform quality" represents the level of distortion caused by harmonics. The waveform quality of AC produced directly by the H-bridge described above is relatively poor. The issue of waveform quality can be addressed in many ways.
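The contrast between the two harmonic patterns just described can be made concrete numerically. This is a minimal sketch; the sample rate, carrier frequency, modulation index, and variable names are illustrative assumptions:

```python
import numpy as np

fs, f0, f_carrier = 1_000_000, 50.0, 5_000.0  # sample, fundamental, carrier (Hz)
t = np.arange(0, 0.02, 1 / fs)                # one 50 Hz cycle

# Strategy 1: plain square wave -- harmonic energy at low odd multiples of 50 Hz.
square = np.sign(np.sin(2 * np.pi * f0 * t))

# Strategy 2: sine-triangle PWM -- harmonic energy clusters near the carrier.
reference = 0.8 * np.sin(2 * np.pi * f0 * t)             # modulation index 0.8
carrier = 1 - 2 * np.abs(2 * ((f_carrier * t) % 1) - 1)  # triangle in [-1, 1]
pwm = np.where(reference > carrier, 1.0, -1.0)

for name, wave in (("square", square), ("PWM", pwm)):
    amps = np.abs(np.fft.rfft(wave)) / (len(wave) / 2)
    freqs = np.fft.rfftfreq(len(wave), 1 / fs)
    low = amps[(freqs > 60) & (freqs < 1000)].max()      # low-order harmonics
    print(f"{name}: fundamental {amps[np.argmin(np.abs(freqs - f0))]:.2f}, "
          f"largest harmonic below 1 kHz {low:.3f}")
```

With these assumed parameters, the square wave shows its 3rd harmonic at about a third of the fundamental, while the PWM baseband below 1 kHz is nearly clean; its distortion has been pushed up toward the 5 kHz carrier, where filtering is much easier.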
Capacitors and inductors can be used to filter the waveform. If the design includes a transformer, filtering can be applied to the primary or the secondary side of the transformer, or to both sides. Low-pass filters are applied to allow the fundamental component of the waveform to pass to the output while limiting the passage of the harmonic components. If the inverter is designed to provide power at a fixed frequency, a resonant filter can be used. For an adjustable-frequency inverter, the filter must be tuned to a frequency that is above the maximum fundamental frequency.
Since most loads contain inductance, feedback rectifiers or antiparallel diodes are often connected across each semiconductor switch to provide a path for the peak inductive load current when the switch is turned off. The antiparallel diodes are somewhat similar to the freewheeling diodes used in AC/DC converter circuits.
Fourier analysis reveals that a waveform that is anti-symmetrical about the 180-degree point, like a square wave, contains only odd harmonics: the 3rd, 5th, 7th, and so on. Waveforms that have steps of certain widths and heights can attenuate certain lower harmonics at the expense of amplifying higher harmonics. For example, by inserting a zero-voltage step between the positive and negative sections of the square wave, all of the harmonics that are divisible by three (3rd, 9th, and so on) can be eliminated, leaving only the 5th, 7th, 11th, 13th, and so on (a numerical check of this cancellation is sketched below). The required width is one third of the period for each of the positive and negative steps and one sixth of the period for each of the zero-voltage steps.
Changing the square wave as described above is an example of pulse-width modulation. Modulating, or regulating, the width of a square-wave pulse is often used as a method of regulating or adjusting an inverter's output voltage. When voltage control is not required, a fixed pulse width can be selected to reduce or eliminate selected harmonics. Harmonic elimination techniques are generally applied to the lowest harmonics, because filtering is much more practical at high frequencies, where the filter components can be much smaller and less expensive. Multiple pulse-width or carrier-based PWM control schemes produce waveforms that are composed of many narrow pulses. The frequency represented by the number of narrow pulses per second is called the switching frequency or carrier frequency. These control schemes are often used in variable-frequency motor control inverters because they allow a wide range of output voltage and frequency adjustment while also improving the quality of the waveform.
Multilevel inverters provide another approach to harmonic cancellation. Multilevel inverters provide an output waveform that exhibits multiple steps at several voltage levels. For example, it is possible to produce a more sinusoidal wave by having split-rail direct-current inputs at two voltages, or positive and negative inputs with a central ground. By connecting the inverter output terminals in sequence between the positive rail and ground, the positive rail and the negative rail, the ground rail and the negative rail, then both to the ground rail, a stepped waveform is generated at the inverter output. This is an example of a three-level inverter: the two voltages and ground.
More on achieving a sine wave
Resonant inverters produce sine waves with LC circuits to remove the harmonics from a simple square wave.
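The triplen-harmonic cancellation described above can be verified from the same Fourier coefficients used earlier for the modified sine wave; this is a minimal check, with the function name as an illustrative assumption:

```python
import math

def quasi_square_harmonic(n, pulse_width_deg=120.0):
    """Relative amplitude of the nth harmonic of a quasi-square wave.

    With a zero-voltage step splitting the square wave so that each
    polarity lasts one third of the period (a 120-degree pulse), the
    odd-harmonic amplitude is proportional to cos(n * 30 degrees).
    """
    d = math.radians((180.0 - pulse_width_deg) / 2.0)
    return (4.0 / (n * math.pi)) * math.cos(n * d)

for n in (1, 3, 5, 7, 9, 11, 13):
    print(f"harmonic {n:2d}: {quasi_square_harmonic(n):+.3f}")
# Harmonics divisible by three (3, 9, ...) come out as (numerically) zero.
```

The 5th, 7th, and higher harmonics that survive this cancellation are exactly what the resonant LC circuits described next are tuned to remove.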
Typically there are several series- and parallel-resonant LC circuits, each tuned to a different harmonic of the power line frequency. This simplifies the electronics, but the inductors and capacitors tend to be large and heavy. Its high efficiency makes this approach popular in large uninterruptible power supplies in data centers that run the inverter continuously in an "online" mode to avoid any switchover transient when power is lost. (See related: resonant inverter.)
A closely related approach uses a ferroresonant transformer, also known as a constant-voltage transformer, to remove harmonics and to store enough energy to sustain the load for a few AC cycles. This property makes them useful in standby power supplies, eliminating the switchover transient that otherwise occurs during a power failure while the normally idle inverter starts and the mechanical relays switch over to its output.
Enhanced quantization
A proposal suggested in Power Electronics magazine utilizes two voltages as an improvement over the common commercialized technology, which can only apply the DC bus voltage in either direction or turn it off. The proposal adds intermediate voltages to the common design: each cycle sees the following sequence of delivered voltages: v1, v2, v1, 0, −v1, −v2, −v1, 0.
Three-phase inverters
Three-phase inverters are used for variable-frequency drive applications and for high-power applications such as HVDC power transmission. A basic three-phase inverter consists of three single-phase inverter switches, each connected to one of the three load terminals. For the most basic control scheme, the operation of the three switches is coordinated so that one switch operates at each 60-degree point of the fundamental output waveform. This creates a line-to-line output waveform that has six steps. The six-step waveform has a zero-voltage step between the positive and negative sections of the square wave, so that the harmonics that are multiples of three are eliminated as described above. When carrier-based PWM techniques are applied to six-step waveforms, the basic overall shape, or envelope, of the waveform is retained, so that the 3rd harmonic and its multiples are cancelled.
To construct inverters with higher power ratings, two six-step three-phase inverters can be connected in parallel for a higher current rating, or in series for a higher voltage rating. In either case, the output waveforms are phase-shifted to obtain a 12-step waveform; combining three inverters yields an 18-step waveform, and so on. Although inverters are usually combined for the purpose of achieving increased voltage or current ratings, the quality of the waveform is improved as well.
Size
Compared to other household electric devices, inverters are large in size and volume. In 2014, Google together with IEEE started an open competition called the Little Box Challenge, with prize money of $1,000,000, to build a (much) smaller power inverter.
History
Early inverters
From the late nineteenth century through the middle of the twentieth century, DC-to-AC power conversion was accomplished using rotary converters or motor-generator sets (M-G sets). In the early twentieth century, vacuum tubes and gas-filled tubes began to be used as switches in inverter circuits. The most widely used type of tube was the thyratron. The origins of electromechanical inverters explain the source of the term inverter.
Early AC-to-DC converters used an induction or synchronous AC motor directly connected to a generator (dynamo), so that the generator's commutator reversed its connections at exactly the right moments to produce DC. A later development is the synchronous converter, in which the motor and generator windings are combined into one armature, with slip rings at one end, a commutator at the other, and only one field frame. The result with either is AC in, DC out. With an M-G set, the DC can be considered to be separately generated from the AC; with a synchronous converter, in a certain sense it can be considered to be "mechanically rectified AC". Given the right auxiliary and control equipment, an M-G set or rotary converter can be "run backwards", converting DC to AC. Hence an inverter is an inverted converter.
Controlled rectifier inverters
Since early transistors were not available with sufficient voltage and current ratings for most inverter applications, it was the 1957 introduction of the thyristor, or silicon-controlled rectifier (SCR), that initiated the transition to solid-state inverter circuits.
The commutation requirements of SCRs are a key consideration in SCR circuit designs. SCRs do not turn off or commutate automatically when the gate control signal is shut off; they only turn off when the forward current is reduced, through some external process, below the minimum holding current, which varies with each kind of SCR. For SCRs connected to an AC power source, commutation occurs naturally every time the polarity of the source voltage reverses. SCRs connected to a DC power source usually require a means of forced commutation that forces the current to zero when commutation is required. The least complicated SCR circuits employ natural commutation rather than forced commutation; with the addition of forced commutation circuits, SCRs have been used in the types of inverter circuits described above.
In applications where inverters transfer power from a DC power source to an AC power source, it is possible to use AC-to-DC controlled rectifier circuits operating in the inversion mode. In the inversion mode, a controlled rectifier circuit operates as a line-commutated inverter. This type of operation can be used in HVDC power transmission systems and in regenerative braking operation of motor control systems.
Another type of SCR inverter circuit is the current source input (CSI) inverter. A CSI inverter is the dual of a six-step voltage source inverter: the DC power supply is configured as a current source rather than a voltage source. The inverter SCRs are switched in a six-step sequence to direct the current to a three-phase AC load as a stepped current waveform. CSI inverter commutation methods include load commutation and parallel capacitor commutation; with both methods, the input current regulation assists the commutation. With load commutation, the load is a synchronous motor operated at a leading power factor.
As they have become available in higher voltage and current ratings, semiconductors such as transistors or IGBTs that can be turned off by means of control signals have become the preferred switching components for use in inverter circuits.
Rectifier and inverter pulse numbers
Rectifier circuits are often classified by the number of current pulses that flow to the DC side of the rectifier per cycle of AC input voltage. A single-phase half-wave rectifier is a one-pulse circuit and a single-phase full-wave rectifier is a two-pulse circuit.
A three-phase half-wave rectifier is a three-pulse circuit and a three-phase full-wave rectifier is a six-pulse circuit. With three-phase rectifiers, two or more rectifiers are sometimes connected in series or parallel to obtain higher voltage or current ratings. The rectifier inputs are supplied from special transformers that provide phase-shifted outputs. This has the effect of phase multiplication: six phases are obtained from two transformers, twelve phases from three transformers, and so on. The associated rectifier circuits are 12-pulse rectifiers, 18-pulse rectifiers, and so on. When controlled rectifier circuits are operated in the inversion mode, they are classified by pulse number as well. Rectifier circuits that have a higher pulse number have reduced harmonic content in the AC input current and reduced ripple in the DC output voltage. In the inversion mode, circuits that have a higher pulse number have lower harmonic content in the AC output voltage waveform.
Other notes
The large switching devices for power transmission applications installed until 1970 predominantly used mercury-arc valves. Modern inverters are usually solid state (static inverters). A modern design method features components arranged in an H-bridge configuration. This design is also quite popular with smaller-scale consumer devices.
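The connection between pulse number and harmonic content follows a standard textbook rule: the characteristic AC-side harmonics of a p-pulse converter fall at orders h = kp ± 1. A small sketch (the rule is standard; the function name is an illustrative assumption):

```python
def characteristic_harmonics(pulse_number, k_max=4):
    """AC-side characteristic harmonic orders of a p-pulse converter.

    For a p-pulse rectifier or line-commutated inverter these fall at
    h = k*p +/- 1, so raising the pulse number pushes the lowest
    harmonic to a higher order (where its amplitude, roughly 1/h, is
    smaller and filtering is easier).
    """
    return sorted(k * pulse_number + s
                  for k in range(1, k_max + 1) for s in (-1, +1))

print("6-pulse: ", characteristic_harmonics(6))   # 5, 7, 11, 13, ...
print("12-pulse:", characteristic_harmonics(12))  # 11, 13, 23, 25, ...
```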
Laridae
Laridae is a family of seabirds in the order Charadriiformes that includes the gulls, terns (including white terns), noddies, and skimmers. It includes around 100 species arranged into 22 genera. They are an adaptable group of mostly aerial birds found worldwide.
Taxonomy
The family Laridae was introduced (as Laridia) by the French polymath Constantine Samuel Rafinesque in 1815. Historically, Laridae were restricted to the gulls, while the terns were placed in a separate family, Sternidae, and the skimmers in a third family, Rynchopidae; the noddies were traditionally included in Sternidae. In 1990 Charles Sibley and Jon Ahlquist included the auks and skuas in a broader family Laridae. A molecular phylogenetic study by Baker and colleagues published in 2007 found that the noddies in the genus Anous formed a sister group to a clade containing the gulls, skimmers, and the other terns. To create a monophyletic family group, Laridae was expanded to include the genera that had previously been in Sternidae and Rynchopidae.
Baker and colleagues found that the Laridae lineage diverged from a lineage that gave rise to both the skuas (Stercorariidae) and the auks (Alcidae) before the end of the Cretaceous, in the age of dinosaurs. They also found that the Laridae themselves began expanding in the early Paleocene, around 60 million years ago. The German palaeontologist Gerald Mayr has questioned the validity of these early dates and suggested that inappropriate fossils were used in calibrating the molecular data; the earliest charadriiform fossils date only from the late Eocene, around 35 million years ago.
Anders Ödeen and colleagues investigated the development of ultraviolet vision in shorebirds by looking for the SWS1 opsin gene in various species, as gulls were the only shorebirds known to have developed the trait. They discovered that the gene was present in the gull, skimmer, and noddy lineages but not the tern lineage. They also recovered the noddies as an early lineage, though the evidence was not strong.
Genera
For the complete list of species, see the article List of Laridae species.
Subfamily Anoinae (noddies)
Genus Anous (5 species)
Subfamily Gyginae (white terns)
Genus Gygis (1 or 2 species)
Subfamily Rynchopinae (skimmers)
Genus Rynchops (3 species)
Subfamily Larinae (gulls)
Genus Creagrus (swallow-tailed gull)
Genus Rissa (kittiwakes) (2 species)
Genus Pagophila (ivory gull)
Genus Xema (Sabine's gull)
Genus Chroicocephalus (11 species)
Genus Hydrocoloeus (little gull)
Genus Rhodostethia (Ross's gull)
Genus Leucophaeus (5 species)
Genus Ichthyaetus (6 species)
Genus Larus (25 species)
Subfamily Sterninae (terns)
Genus Gelochelidon (2 species)
Genus Hydroprogne (Caspian tern)
Genus Thalasseus (8 species)
Genus Sternula (7 species)
Genus Onychoprion (4 species)
Genus Sterna (13 species)
Genus Chlidonias (4 species)
Genus Phaetusa (large-billed tern)
Genus Larosterna (Inca tern)
Cladogram
Part of the cladogram of the genera in the order Charadriiformes, based on the analysis by Baker and colleagues published in 2007.
Distribution and habitat
The Laridae have spread around the world, and their adaptability has likely been a factor. Most have become much more aerial than their ancestor, which was likely some form of shorebird.
Larus
Larus is a large genus of gulls with a worldwide distribution (by far the greatest species diversity is in the Northern Hemisphere). Many of its species are abundant and well-known birds in their ranges. Until about 2005–2007, most gulls were placed in this genus, but this arrangement is now known to be polyphyletic, leading to the resurrection of the genera Chroicocephalus, Ichthyaetus, Hydrocoloeus, and Leucophaeus for many of the species formerly included in Larus.
They are in general medium to large birds, typically pale grey to black above and white below and on the head, often with black wingtip markings bearing white spots ("mirrors"), and in a few species also some black on the tail. They have stout, longish bills and webbed feet; in winter, the head is often streaked or smudged dark grey. The young birds are brown and take three to five years to reach adult plumage, with subadult plumages intermediate between the young and adult. The taxonomy of the large gulls in the herring and lesser black-backed complex is complicated, with different authorities recognising anywhere from two species in the past up to eight species more recently.
Taxonomy
The genus Larus was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The genus name is from Ancient Greek laros (λάῥος) or Latin larus, which appears to have referred to a gull or other large seabird. The type species is the great black-backed gull (Larus marinus). The Latin name Larus marinus translates as "sea gull", and the gulls in this genus generally are the species most often known colloquially as "seagulls".
Species
The genus contains 25 extant species.
Fossils
Fossils of Larus gulls are known from the Middle Miocene, about 20–15 million years ago; allocation of earlier fossils to this genus is generally rejected. Biogeography of the fossil record suggests that the genus evolved in the northern Atlantic and spread globally during the Pliocene, when species diversity seems to have been highest, as with most seabirds.
Larus sp. (Middle Miocene of Grund, Austria)
Larus sp. (Middle Miocene of Romania)
Larus sp. (Late? Miocene/Early Pliocene of Lee Creek Mine, U.S.) – several species
Larus elmorei (Middle Pliocene of Bone Valley, southeastern U.S.)
Larus lacus (Late Pliocene of Pinecrest, southeastern U.S.)
Larus perpetuus (Pliocene of southeastern U.S.)
Larus sp. (San Diego Late Pliocene of the southwestern U.S.)
Larus oregonus (Late Pliocene – Late Pleistocene of the west-central U.S.)
Larus robustus (Late Pliocene – Late Pleistocene of the west-central U.S.)
Larus sp. (Late Pleistocene of Lake Manix, western U.S.)
"Larus" raemdonckii (Early Oligocene of Belgium) is now at least tentatively believed to belong in the procellariiform genus Puffinus. "L." elegans (Late Oligocene?/Early Miocene of St-Gérand-le-Puy, France) and "L." totanoides (Late Oligocene?/Early Miocene of southeastern France) are now in Laricola, while "L." dolnicensis (Early Miocene of the Czech Republic) was actually a pratincole; it is now placed in Mioglareola. The Early Miocene "Larus" desnoyersii (southeastern France) and "L." pristinus (John Day Formation, Willow Creek, U.S.) probably do not belong in this genus; the former may be a skua.
Ring species
The circumpolar group of Larus gull species has often been cited as a classic example of a ring species. The range of these gulls forms a ring around the North Pole.
The European herring gull, which lives primarily in Great Britain and Northern Europe, can hybridize with the American herring gull (living in North America), which can also interbreed with the Vega or East Siberian gull, the western subspecies of which, Birula's gull, can hybridize with Heuglin's gull, which, in turn, can interbreed with the Siberian lesser black-backed gull (all four of these live across the north of Siberia). The last is the eastern representative of the lesser black-backed gulls, whose range extends back to northwestern Europe, including Great Britain. However, the lesser black-backed gull and the herring gull are sufficiently different that they rarely interbreed; thus, the group of gulls forms a continuum except in Europe, where the two lineages meet. Nevertheless, a recent genetic study has shown that this example is far more complicated than presented here and probably does not constitute a true ring species.
Coraciidae
Coraciidae is a family of Old World birds, which are known as rollers because of the aerial acrobatics some of these birds perform during courtship or territorial flights. The family contains 13 species and is divided into two genera. Rollers resemble crows in size and build, and share the colourful appearance of kingfishers and bee-eaters, blues and pinkish or cinnamon browns predominating. The two inner front toes are connected, but not the outer one. They are mainly insect eaters, with Eurystomus species taking their prey on the wing, and those of the genus Coracias diving from a perch to catch food items from the ground, like giant shrikes. Although living rollers are birds of warm climates in the Old World, fossil records show that rollers were present in North America during the Eocene. They are monogamous and nest in an unlined hole in a tree or in masonry, and lay 2–4 eggs in the tropics, 3–6 at higher latitudes. The eggs, which are white, hatch after 17–20 days, and the young remain in the nest for approximately another 30 days.
Taxonomy and systematics
The roller family Coraciidae was introduced (as Coracinia) by the French polymath Constantine Samuel Rafinesque in 1815. It is one of six families in the order Coraciiformes, which also includes the motmots, bee-eaters, todies, ground rollers, and kingfishers. The family gets its scientific name from the Latin for "like a raven", and the English name "roller" from the aerial acrobatics some of these birds perform during courtship or territorial flights. The phylogenetic relationship between the six families that make up the order Coraciiformes is shown in the cladogram; the number of species in each family is taken from the list maintained by Frank Gill, Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). A molecular phylogenetic study by Ulf Johansson and collaborators published in 2018 found that the azure dollarbird (Eurystomus azureus) was nested within a clade containing subspecies of the Oriental dollarbird (Eurystomus orientalis).
Genera
The roller family has two extant genera, Coracias and Eurystomus.
Description
Rollers are medium-sized birds with strong, slightly hooked beaks and stocky bodies, often with brightly colored plumage. They resemble crows in size and build, and share the colourful appearance of kingfishers and bee-eaters, blues and pinkish or cinnamon browns predominating. The rollers are similar in general morphology to their relatives in the order Coraciiformes, having large heads on short necks, bright plumage, weak feet, and short legs. The two inner front toes are connected, but not the outer one. The weakness of the feet and legs is reflected in their behaviour: rollers do not hop or move along perches and seldom use their feet other than for occasional lurching leaps along the ground pursuing escaping prey. The bill is robust, and is shorter but broader in the genus Eurystomus, whose members are sometimes known as the broad-billed rollers. The broad-billed rollers have brightly coloured bills, whereas those of the Coracias (or true) rollers are black. Other differences between the two genera are in wing length: the more aerial Eurystomus rollers have longer wings (and still shorter feet) than the Coracias rollers, reflecting differences in their foraging ecology. Their calls are "repeated short, gruff caws".
Distribution and habitat
The rollers are found in warmer parts of the Old World.
Africa has the most species and is believed to be where the family originated; this is supported by the fact that the related ground rollers are found on Madagascar. The European roller is completely migratory, breeding in Europe and wintering in Africa, and the dollarbird also leaves much of its breeding range in winter. Other species are sedentary or short-range migrants. These are birds of open habitats with trees or other elevated perches from which to hunt.
Behavior
Breeding
Rollers are noisy and aggressive when defending their nesting territories, which they patrol while displaying their striking plumage. Intruders are attacked with intimidating rolling dives. Rollers are monogamous and nest in an unlined hole in a tree or in masonry, laying 2–4 eggs in the tropics, 3–6 at higher latitudes. The eggs, which are white, hatch after 17–20 days, and the young remain in the nest for approximately another 30 days. Egg laying is staggered at one-day intervals so that if food is short, only the older, larger nestlings get fed. The chicks are naked, blind, and helpless when they hatch.
Feeding
Coracias rollers are watch-and-wait hunters. They sit in a tree or on a post before descending on their prey and carrying it back to a perch in the beak, where they dismember it. They take a wide range of terrestrial invertebrates, and small vertebrates such as frogs, lizards, rodents, and young birds. They will take items avoided by many other birds, such as hairy caterpillars, insects with warning colouration, and snakes. Eurystomus rollers hunt on the wing, swooping on flying beetles, crickets, and other insects, which are crushed by their broad, deep beaks and eaten on the wing. The azure dollarbird and the Oriental dollarbird will hunt the huge swarms of termites and flying ants that appear after thunderstorms; tens or hundreds of these rollers may be attracted to large swarms.
Beehive
A beehive is an enclosed structure where some honey bee species of the subgenus Apis live and raise their young. Though the word beehive is used to describe the nest of any bee colony, scientific and professional literature distinguishes nest from hive: nest is used to discuss colonies that house themselves in natural or artificial cavities or are hanging and exposed, while hive is used to describe an artificial, man-made structure housing a honey bee nest. Several species of Apis live in colonies, but for honey production the western honey bee (Apis mellifera) and the eastern honey bee (Apis cerana) are the main species kept in hives.
The nest's internal structure is a densely packed group of hexagonal prismatic cells made of beeswax, called a honeycomb. The bees use the cells to store food (honey and pollen) and to house the brood (eggs, larvae, and pupae).
Beehives serve several purposes, including producing honey, pollinating nearby crops, housing bees for apitherapy treatment, and mitigating the effects of colony collapse disorder. In America, hives are commonly transported so that bees can pollinate crops elsewhere. Several patents have been issued for beehive designs.
Honey bee nests
Honey bees use caves, rock cavities, and hollow trees as natural nesting sites. In warmer climates they may build exposed hanging nests; members of other subgenera have exposed aerial combs. Multiple parallel honeycombs form the nest, with a relatively uniform bee space between them; the nest usually has a single entrance. Western honey bees prefer nest cavities of moderate volume, avoiding those that are too small or too large, and show several other nest-site preferences: nests are usually located well above the ground, entrance positions tend to face downward, equatorial-facing entrances are favored, and nest sites at some distance from the parent colony are preferred. Most bees occupy nests for several years. The bees often smooth the bark surrounding the nest entrance and coat the cavity walls with a thin layer of hardened plant resin called propolis. Honeycombs are attached to the walls along the cavity tops and sides, but the bees leave passageways along the comb edges. The standard nest architecture for all honeybees is similar: honey is stored in the upper part of the comb; beneath it are rows of pollen-storage cells, worker-brood cells, and drone-brood cells, in that order. The peanut-shaped queen cells are normally built at the lower edge of the comb.
Ancient hives
In antiquity, Egyptians kept bees in human-made hives. The walls of the Egyptian sun temple of Nyuserre Ini from the 5th Dynasty, dated earlier than 2422 BCE, depict workers blowing smoke into hives as they remove honeycombs. Inscriptions detailing honey production are found on the tomb of Pabasa from the 26th Dynasty, and describe honey stored in jars and cylindrical hives. The archaeologist Amihai Mazar cites 30 intact hives discovered in the ruins of Rehov (a city of 2,000 residents in 900 BCE, Israelites and Canaanites) as evidence that an advanced honey industry existed in Israel approximately 3,000 years ago. The 150 beehives, many broken, were made of straw and unbaked clay and were found in orderly rows. Ezra Marcus of the University of Haifa said the discovery provided a glimpse of ancient beekeeping as seen in texts and ancient art from the Near East. An altar decorated with fertility figurines was found alongside the hives and may indicate religious practices associated with beekeeping.
While beekeeping predates these ruins, this is the oldest apiary yet discovered.
Traditional hives
Traditional beehives simply provided an enclosure for the bee colony. Because no internal structures were provided, the bees created their own honeycomb within the hives. The comb is often cross-attached and cannot be moved without destroying it; this is sometimes called a fixed-frame hive, to differentiate it from the modern movable-frame hives. Harvest often destroyed the hives, though some designs used top baskets that could be removed when the bees filled them with honey. These were gradually supplanted by box hives of varying dimensions, with or without frames, and finally replaced by newer modern equipment. Honey from traditional hives was extracted by pressing: crushing the wax honeycomb to squeeze out the contents. Because of this harvesting method, traditional beehives provided more beeswax, but far less honey, than a modern hive. Four styles of traditional beehive are mud hives, clay or tile hives, skeps, and bee gums.
Mud hives
Mud hives are still used in Egypt and Siberia. These are long cylinders made from a mixture of unbaked mud, straw, and dung.
Clay hives
Clay tiles were the customary homes of kept bees at the eastern end of the Mediterranean. Long cylinders of baked clay were used in ancient Egypt, the Middle East, and to some extent in Greece, Italy, and Malta. They were sometimes used singly, but more often stacked in rows to provide some shade, at least for those not on top. Keepers would smoke one end to drive the bees to the other end while they harvested honey.
Skeps
Skeps, baskets placed open-end down, have been used to house bees for some 2,000 years. The term is derived from Old Norse skeppa, "basket". Believed to have been first used in Ireland, they were initially made from wicker plastered with mud and dung, but after the Middle Ages almost all were made of straw. In northern and western Europe, skeps were made of coils of grass or straw. In its simplest form, a skep has a single entrance at the bottom. Again, no internal structure is provided for the bees, and the colony must produce its own honeycomb, which is attached to the inside of the skep. The size of early modern skeps was about two pecks to a bushel (18 to 36 liters).
Skeps have two disadvantages: beekeepers cannot inspect the comb for diseases and pests, and honey removal is difficult and often results in the destruction of the entire colony. To get the honey, beekeepers either drove the bees out of the skep or, by using a bottom extension called an eke or a top extension called a cap, sought to create a comb containing only honey. Quite often the bees were killed, sometimes with lighted sulfur, to allow the honeycomb to be removed. Skeps could also be squeezed in a vise to extract the honey. As of 1998, most US states prohibited the use of skeps, or of any other hive that cannot be inspected for disease and parasites.
Later skep designs included a smaller woven basket (cap) on top, over a small hole in the main skep. This cap acted as a crude super, allowing some honey to be extracted with less destruction of brood and bees. In England, an extension piece consisting of a ring of about four or five coils of straw placed below a straw beehive to give extra room for brood rearing was called an eke, imp, or nadir; an eke was used to give just a bit of extra room (to "eke" out some more space), while a nadir was a larger extension used when a full story was needed beneath.
A person who made such woven beehives was called a "skepper", a surname that still exists in Western countries. In England, the thickness of the coil of straw was controlled using a ring of leather or a piece of cow's horn called a "girth", and the coils of straw could be sewn together using strips of briar. Likenesses of skeps can be found in paintings, carvings, and old manuscripts, and the skep is often used on signs as an indication of industry ("the busy bee"). In the late 18th century, more complex skeps appeared with wooden tops with holes in them, over which glass jars were placed; the comb would then be built into the glass jars, making the designs commercially attractive.
Bee gums
In the eastern United States, especially in the Southeast, sections of hollow trees were used until the 20th century. These were called "gums" because they often came from black gum (Nyssa sylvatica) trees. Sections of the hollow trees were set upright in "bee yards" or apiaries. Sometimes sticks, or crossed sticks, were placed under a board cover to give an attachment for the honeycomb. As with skeps, the harvest of honey from these destroyed the colony; often the harvester would kill the bees before even opening their nest, by inserting a metal container of burning sulfur into the gum.
Natural tree hollows and artificially hollowed tree trunks were widely used in the past by beekeepers in Central Europe. For example, in Poland such a beehive was called a barć and was protected in various ways from unfavorable weather conditions (rain, frost) and predators (woodpeckers, bears, pine martens, forest dormice). Harvest of honey from these did not destroy the colony, as only a protective piece of wood was removed from the opening and smoke was used to pacify the bees for a short time. Spain still uses cork-bark cylinder hives with cork tops (colmenas de corcho), similar to a gum or barć. Part of the reason why bee gums are still used is that they allow honey producers to distinguish themselves from other producers and to ask a higher price for the honey. An example where bee gums are still used is Mont-Lozère, France, although in Europe they are referred to as log hives. These log hives are shorter than bee gums; they are hollowed out artificially and cut to a specific size.
Modern hives
The earliest recognizably modern designs of beehive arose in the 19th century, though they were perfected from intermediate stages of progress made in the 18th century. Intermediate stages in hive design were recorded, for example, by Thomas Wildman in 1768–1770, who described advances over the destructive old skep-based beekeeping so that the bees no longer had to be killed to harvest the honey. Wildman, for example, fixed a parallel array of wooden bars across the top of a straw hive or skep (with a separate straw top to be fixed on later), "so that there are in all seven bars of deal" [in a hive] "to which the bees fix their combs". He also described using such hives in a multi-story configuration, foreshadowing the modern use of supers: he described adding (at the proper time) successive straw hives below, and eventually removing the ones above when free of brood and filled with honey, so that the bees could be separately preserved at the harvest for the following season. Wildman also described a further development, using hives with "sliding frames" for the bees to build their comb on, foreshadowing more modern uses of movable-comb hives.
Wildman acknowledged the advances in knowledge of bees previously made by Swammerdam, Maraldi, and de Reaumur – he included a lengthy translation of Reaumur's account of the natural history of bees – and he also described the initiatives of others in designing hives for the preservation of bee life when taking the harvest, citing in particular reports from Brittany dating from the 1750s, due to Comte de la Bourdonnaye.
In 1814 Petro Prokopovych, the founder of commercial beekeeping in Ukraine, invented one of the first beehive frames, which allowed an easier honey harvest. The correct distance between combs for easy operations in beehives was described in 1845 by Jan Dzierżon as the distance from the center of one top bar to the center of the next one. In 1848, Dzierżon introduced grooves into the hive's side walls, replacing the strips of wood for moving top bars; the grooves established the spacing later termed bee space. Based on these measurements, August Adolph von Berlepsch (Bienenzeitung, May 1852) in Thuringia and L. L. Langstroth (October 1852) in the United States designed their own movable-frame hives. Langstroth, however, used "about 1/2 inch" above the frames' top bars and "about 3/8 inch" between the frames and the hive body.
Hives can be vertical or horizontal. There are three main types of modern hive in common use worldwide:
the Langstroth hive
the top-bar hive
the Warré hive
Most hives have been optimized for Apis mellifera and Apis cerana. Some other hives have been designed and optimized for some meliponines, such as Melipona beecheii; examples of such hives are the Nogueira-Neto hive and the UTOB hive.
Vertical hives
Langstroth hives
Langstroth hives are named for Rev. Lorenzo Langstroth, who patented his design in the United States on October 5, 1852. It was based on the ideas of Johann Dzierzon and other leaders in apiculture, and combines a top-worked hive with hanging frames and the use of bee spaces between frames and other parts. Variants of his design have become the standard style of hive for many of the world's beekeepers, both professional and amateur. Langstroth hive bodies are rectangular and can be stacked to expand the usable space for the bees. They can be made from a variety of materials, but commonly of timber. The modern Langstroth hive consists of:
Bottom board: this has an entrance for the bees.
Boxes containing frames for brood and honey: the lowest box for the queen to lay eggs, and boxes above where honey is stored.
Inner cover and top cap providing weather protection.
Inside the boxes, frames are hung parallel to each other. Langstroth frames are thin rectangular structures made of wood or plastic and typically have a plastic or wax foundation on which the bees draw out the comb. The frames hold the honeycomb formed by the bees with beeswax. Eight or ten frames side by side (depending on the size of the box) will fill the hive body and leave the right amount of bee space between each frame and between the end frames and the hive body. With appropriate provision of bee space, the bees are not likely to glue parts together with propolis nor fill spaces with burr comb, although the dimensions now usual for top bee space are not the same as those that Langstroth described. Self-spacing beehive frames were introduced by Julius Hoffman, a student of Johann Dzierzon. Langstroth frames can be reinforced with wire, making it possible to spin the honey out of the comb in a centrifuge.
As a result, the empty frames and comb can be returned to the beehive for re-filling by the bees. Creating a honeycomb involves a significant energy investment, conservatively estimated at several kilograms of honey consumed for each kilogram of comb produced in temperate climates; reusing comb can thus increase the productivity of a beekeeping enterprise.
The sizes of hive bodies (rectangular boxes without tops or bottoms placed one on top of another) and of internal frames vary between named styles. A variety of approximations to Langstroth's original box and frame sizes are still used. This class of hives also includes several other styles, mostly used in Europe, which differ mainly in the size and number of frames used. These include:
BS National Beehive: This smaller version of the Langstroth class of hive is designed for the less prolific and more docile Buckfastleigh bee strain, and for standard-dimension parts. It is based on square boxes, with a standard brood box and shallow supers typically used for honey. The construction of the boxes is relatively complicated (eight pieces), but strong and with easy-to-hold handles. The boxes take frames with a relatively long lug.
BS Commercial hive: A variation with the same cross-sectional dimensions as a BS National hive (18 in × 18 in, 460 mm × 460 mm), but with a deeper brood box and supers, intended for more prolific bees. The internal structure of the boxes is also simpler, resulting in wider frames with shorter handles or lugs. Some find these supers too heavy when full of honey and therefore use National supers on top of a Commercial brood box.
Rose Hive: A hive and method of management developed by Tim Rowe, it is a variation on the BS National hive. The Rose hive maintains the same cross-sectional dimensions as the National hive (18 in × 18 in, 460 mm × 460 mm), but opts for a single box depth. The single box and frame size are used for both brood and honey supers. Standardizing on one size reduces complexity and allows the movement of brood or honey frames to any other position in the hive. A queen excluder is avoided, allowing the queen freedom to move where she wants. Boxes are added to the hive above the brood and below the supers. The colony can expand during a large nectar flow and retract to lower portions of the hive as the colony shrinks in the fall. When collecting honey, brood and honey frames can be relocated up or down the hive, as needed.
Smith hive
German Normal: German normal measure (DNM), mainly used in central and northern Germany, of which the Frankenbeute, Segeberger, and Spessartbeute are variants.
Zander: Developed by Enoch Zander, mainly used in southern Germany.
D.E. hive: Designed by David Eyre.
Dadant hive: Developed by Charles Dadant (developed in the US in 1920 from the Dadant-Blatt hive).
B-BOX: Developed by the Italian company Beeing for urban locations; uses Dadant frames.
Hyper Hyve: Designed by Mike James; an insulated hive with integrated monitoring.
Warré hives
The Warré hive was invented by the village priest Émile Warré, and is also called ruche populaire ('the people's hive'). It is a modular and storied design similar to a Langstroth hive. The hive body is made of boxes stacked vertically; however, as a general rule it uses top bars for comb support instead of full frames, similar to a top-bar hive. The popularity of this hive is growing among "sustainable-practice" beekeepers.
The Warré hive differs from other stacked hive systems in one fundamental aspect: when the bees need more space as the colony expands, the new box is "nadired", i.e., positioned underneath the existing box or boxes. This serves the purpose of warmth retention within the brood nest of the hive, considered vital to colony health.
WBC hives
The WBC, invented by and named after William Broughton Carr in 1890, is a double-walled hive with an external housing that splays out towards the bottom of each frame, covering a standard box-shaped hive inside. The WBC is in many respects the "classic" hive as represented in pictures and paintings, but despite the extra level of insulation for the bees offered by its double-walled design, many beekeepers avoid it owing to the inconvenience of having to remove the external layer before the hive can be examined.
CDB hives
In 1890, Charles Nash Abbott (1830–1894), advisor to Ireland's Department of Agriculture and Technical Instruction, designed a new Congested Districts Board (CDB) hive in Dublin, Ireland. It was commissioned by, and named after, the Congested Districts Board for Ireland, which provided support for rural populations until its absorption into the Department of Agriculture.
AZ hives
One of the most famous Slovenian beekeepers was Anton Žnideršič (1874–1947). He developed the AZ hive house and hive box widely used today in Slovenia.
Horizontal hives
These are single long boxes with the bars hanging in parallel. The hive body of a common style of horizontal hive is often shaped like an inverted trapezoid, but it may be rectangular in cross-section and able to accept normal frames. They have movable combs and make use of the concept of bee space. They were developed as a lower-cost alternative to standard Langstroth hives and equipment, and they do not require the beekeeper to lift heavy supers when the hive is inspected or manipulated: all checks and manipulation can be done while lifting only one comb at a time and with minimal bending. They are popular in the US due to their alignment with the organic, treatment-free philosophies of many new beekeeping devotees there. The initial costs and equipment requirements are typically much less than for other hive designs, and scrap wood can often be used to build a good hive. In areas where large terrestrial animals such as honey badgers and bears present a threat to beehives, single-box hives may be suspended out of reach; elsewhere, they are commonly raised to a level that allows the beekeeper to inspect and manipulate them in comfort. Disadvantages include (usually) unsupported combs that cannot be spun in most honey extractors, and it is not usually possible to expand the hive if additional honey storage space is required. Most horizontal hives cannot easily be lifted and carried by one person.
Top-bar hives
Horizontal hives often use top bars instead of frames. Top bars are simple lengths of timber, often made by cutting scrap wood to size; it is not necessary to buy or assemble frames. The top bars form a continuous roof over the hive chamber, unlike conventional frames, which offer a bee-space gap so that the bees can move up and down between hive boxes. The beekeeper does not usually provide foundation wax (or provides only a small starter piece of foundation) for the bees to build from. The bees build the comb so that it hangs down from the top bar, in keeping with the way bees build wax in a natural cavity.
Because the unsupported comb built from a top bar cannot usually be spun in a honey extractor, the honey is typically extracted by crushing and straining rather than centrifuging. Because the bees have to rebuild their comb after the honey is harvested, a top-bar hive yields a beeswax harvest in addition to honey. Queen excluders may or may not be used to keep the brood areas entirely separate from the honey. Even if no queen excluder is used, the bees store most of their honey separately from the areas where they are raising the brood, so honey can still be harvested without killing the bees or brood. Cathedral Hive: A modified top-bar hive in which each top bar is split into three equal parts joined at angles of 120° to form half a hexagon. Long box hive The long box hive is a single-story hive that accepts enclosed frames and is worked horizontally in the manner of Kenyan/Tanzanian top-bar hives. This non-stacked style was more popular a century ago in the southeastern United States, but faded from use due to a lack of portability. With the recent popularity of horizontal top-bar hives, the long box hive is seeing renewed but limited use. Alternative names are "new idea hive", "single story hive", "Poppleton hive", or "long hive". Variations: Long Langstroth Hive: Uses 32 standard Langstroth deep frames without any supers. Dartington long deep (DLD) hive: Derived from fixing two Deep National hives back-to-back, the DLD can take up to 21 frames. It is possible to keep two colonies in the brood box, e.g., "swarm" and "parent", separated by a loose divider board, as there is an entrance at either end. It has half-size honey supers, which take six frames and are correspondingly lighter and easier to lift than full 12-frame National supers. The Dartington was originally developed by Robin Dartington so that he could keep bees on his London rooftop. Beehaus Hive: A proprietary design launched in 2009, based on the Dartington long deep; it is a hybrid of the top-bar hive and a Langstroth hive. Layens Hive: Developed by Georges de Layens in 1864, this hive is a popular standard in Spain and Romania. It was also popular in Russia during the early 1900s, until forced industrialization standardized all apiaries. ZEST Hive: Lazutin Hive: Developed by Fedor Lazutin. Golden Hive: Also known as the Ukrainian hive or Einraumbeute, it uses Dadant-size frames rotated ninety degrees. Symbolism The beehive is a commonly used symbol in various human cultures. In Europe, it was used by the Romans as well as in heraldry; most heraldic representations of beehives take the form of a skep. Certain symbolic meanings are often, though not universally, associated with bees and beehives. In modern times, the beehive is a key symbol in Freemasonry. In masonic lectures, it represents industry and cooperation, and serves as a metaphor cautioning against intellectual laziness, warning that "he that will so demean himself as not to be endeavoring to add to the common stock of knowledge and understanding, may be deemed a drone in the hive of nature, a useless member of society, and unworthy of our protection as Masons." The beehive also appears on the 3rd Degree emblems on the Tracing Board of Royal Cumberland No. 41, Bath, where its meaning is explained. The beehive is used with a similar meaning by the Church of Jesus Christ of Latter-day Saints, informally known as Mormons. From Latter-day Saint usage, it has become one of the state symbols of Utah (see Deseret).
Relocation and destruction Relocation Beekeepers and companies may remove unwanted honey bee nests from structures and relocate them into an artificial hive; this process is called a "cut out". Destruction Animal destruction Black bears destroy hives in their quest for honey and protein-rich larvae. Grizzly bears will also eat beehives, and they are harder to dissuade from taking several beehives. Hives can be protected with an electrified enclosure powered by a livestock fence energizer; available systems are either mains-powered (120 V) or solar (DC) for remote locations. Hives erected as a crop shield against elephants are sometimes destroyed by the elephants themselves. These hives are hung on a single metal wire that encircles the crop field of some farms in African elephant territory. The installation is called a beehive fence and was conceived by Lucy King. Human destruction Humans have historically destroyed the nests and hives of honey-producing bee species to obtain honey, beeswax, and other bee products. Modern frame-and-centrifuge systems, such as the Langstroth, are less harmful to the hive, assuming a harvest will happen, and increase production at the same time. Humans may also determine that a beehive must be destroyed in the interest of public safety or to prevent the spread of bee diseases. The U.S. state of Florida destroyed hives of Africanized honey bees in 1999. The state of Alaska has issued regulations governing the treatment of diseased beehives via burning followed by burial, fumigation using ethylene oxide or other approved gases, sterilization by treatment with lye, or scorching. In New Zealand and the United Kingdom, treating hives infected with American foulbrood with antibiotics is prohibited, and beekeepers are required by law to destroy such colonies and hives with fire.
Biology and health sciences
Shelters and structures
Animals
223515
https://en.wikipedia.org/wiki/Jersey%20cattle
Jersey cattle
The Jersey is a British breed of small dairy cattle from Jersey, in the British Channel Islands. It is one of three Channel Island cattle breeds, the others being the Alderney – now extinct – and the Guernsey. The milk is high in butterfat and has a characteristic yellowish tinge. The Jersey adapts well to various climates and environments, and unlike many breeds originating in temperate climates, tolerates heat well. It has been exported to many countries of the world; in some of them, including Denmark, France, New Zealand and the United States, it has developed into an independent breed. In Nepal, it is used as a draught animal. History of the breed As its name implies, the Jersey was bred on the British Channel Island of Jersey. It apparently descended from cattle stock brought over from the nearby Norman mainland, and was first recorded as a separate breed around 1700. There is evidence the breed is related to the African buffalo, possibly via Bos brachyceros. The breed was isolated from outside influence for over 200 years by an import ban that ran from 1789 to 2008; Farmers Weekly states that the ban began in 1763 and thus lasted 245 years. In December 2022, a dairy, Woodlands Farm, was struck by an illness that killed around half of its dairy cows; the most likely cause of death was stated to be botulism. The number of cows lost, listed as over 100, was variously reported as 112 or 132, more than half the dairy herd. Before 1789, cows would be given as dowry for inter-island marriages between Jersey and Guernsey. This was, however, not widespread. In 1789, imports of foreign cattle into Jersey were forbidden by law to maintain the purity of the breed, although exports of cattle and semen have been important economic resources for the island. The restriction on the import of cattle was initially introduced to prevent a collapse in the export price. The United Kingdom levied no import duty on cattle imported from Jersey. Cattle were being shipped from France to Jersey and then shipped onward to England to circumvent the tariff on French cattle. The increase in the supply of cattle, sometimes of inferior quality, was bringing the price down and damaging the reputation of Jersey cattle. The import ban stabilised the price and enabled a more scientifically controlled programme of breeding to be undertaken. Sir John Le Couteur studied selective breeding and became a Fellow of the Royal Society; his work led to the establishment of the Royal Jersey Agricultural and Horticultural Society in 1833. At that time, the breed displayed greater variation than it does today, with white, dark brown, and mulberry beasts. However, since the honey-brown cows sold best, the breed was developed accordingly. In 1860, cows were exported via England at an average price of £16 per head. By 1910, substantial numbers were being exported annually to the United States alone. In 1866, at the annual general meeting of the Royal Jersey Agricultural and Horticultural Society, H.G. Shepard noted in his history that "it was resolved – on the motion of Col. Le Couteur, that the Hon. Secretary be hereby invited to open and to carry on a "herd book" in which the pedigree of bulls, cows, and heifers shall be entered for reference to all the members of the Society." In 1869, for the first time, prizes were awarded at the society's shows for herd book stock cattle. The States of Jersey took a census of stock in 1866, recording the island's cattle population, of which 611 were bulls.
In July 2008 the States of Jersey took the historic step of ending the ban on imports, allowing the import of bull semen from any breed of cattle, although only genetically pure semen enables the resultant progeny to be entered in the Jersey Herd Book. For many decades, each of the 12 parishes in Jersey held cattle shows in the spring, summer, and autumn, followed in turn by the main shows held by the Royal Jersey Agricultural and Horticultural Society, where the best of the parish shows competed. The colour of the rosette secured by a prize-winning cow was said to determine its export value. Today, the RJAHS holds two shows a year, where usually five or six of the remaining 23 herds compete against each other for the top prizes. A Jersey cattle show is also held in Jersey by the West Show Association. In February 2010, it emerged that semen from an impure-bred Jersey bull had been imported into the island despite strict laws and checks, and 100 cows had been impregnated with it. Their offspring were not recorded in the Jersey Herd Book. Jersey cattle were exported to the United States from about 1850. A breed society, the American Jersey Cattle Club, was formed in 1868. In the USA, a distinction is sometimes made between the "American Jersey", which is comparatively coarse and large and has been selectively bred mainly for milk yield, and the original or "Island" type; the latter may also be called "Miniature Jersey". Characteristics The Jersey is small: cows on the island are markedly lighter and shorter at the withers than mainstream dairy breeds, and bulls are only moderately heavier. Factors contributing to the popularity of the breed have been their greater economy of production, due to: The ability to carry a larger number of effective milking cows per unit area due to lower body weight, hence lower maintenance requirements, and superior grazing ability Calving ease and a relatively lower rate of dystocia, leading to their popularity in crossbreeding with other dairy and even beef breeds to reduce calving-related injuries High fertility High butterfat (4.84%) and protein (3.95%), and the ability to thrive on locally produced feed Jerseys occur in all shades of brown, from light tan to almost black. They are frequently fawn in colour. All purebred Jerseys have a lighter band around their muzzles, a dark switch (long hair on the end of the tail), and black hooves, although in recent years colour regulations have been relaxed to allow a broadening of the gene pool. The cows are calm and docile; bulls may be unpredictable or aggressive. Jersey cattle have a greater tendency towards postparturient hypocalcaemia (or "milk fever") in dams, and tend to have frail calves that require more attentive management in cold weather than other dairy breeds due to their smaller body size (which results in an increased surface area-to-mass ratio, increasing heat loss). Milk After 2008, there was some pressure on Jersey dairymen to increase the milk production per cow, which led to consideration of genetic options from outside the island. From 2020 onward there was a further challenge from COVID-19 while seeking "maximum productivity and business efficiencies". Jersey milk has 20% more calcium, 18% more protein, and 29% more milk fat than Holstein milk.
Biology and health sciences
Cattle
Animals
223980
https://en.wikipedia.org/wiki/Ice%20crystal
Ice crystal
Ice crystals are solid ice in symmetrical shapes including hexagonal columns, hexagonal plates, and dendritic crystals. Ice crystals are responsible for various atmospheric optical displays and cloud formations. Formation At ambient temperature and pressure, water molecules have a V shape. The two hydrogen atoms bond to the oxygen atom at a 105° angle. Ice crystals have a hexagonal crystal lattice, meaning the water molecules arrange themselves into layered hexagons upon freezing. Slower crystal growth from colder and drier atmospheres produces more hexagonal symmetry. Depending on environmental temperature and humidity, ice crystals can develop from the initial hexagonal prism into many symmetric shapes. Possible shapes for ice crystals are columns, needles, plates and dendrites. Mixed patterns are also possible. The symmetric shapes are due to depositional growth, which is when ice forms directly from water vapor in the atmosphere. Small spaces in atmospheric particles can also collect water, freeze, and form ice crystals; this is known as nucleation. Snowflakes form when additional vapor freezes onto an existing ice crystal. Trigonal and cubic crystals Supercooled water refers to water below its freezing point that is still liquid. Ice crystals formed from supercooled water have stacking defects in their layered hexagons. This causes ice crystals to display trigonal or cubic symmetry depending on the temperature. Trigonal or cubic crystals form in the upper atmosphere where supercooling occurs. Square crystals Water can pass through laminated sheets of graphene oxide, unlike smaller molecules such as helium. When squeezed between two layers of graphene, water forms square ice crystals at room temperature. Researchers believe that high pressure and the van der Waals force, an attractive force present between all molecules, drive the formation. The material is a new crystalline phase of ice. Weather phenomena Ice crystals create optical phenomena like diamond dust and halos in the sky, due to light reflecting off the crystals in a process called scattering. Cirrus clouds and ice fog are made of ice crystals. Cirrus clouds are often the sign of an approaching warm front, where warm and moist air rises and freezes into ice crystals. Ice crystals rubbing against each other can also produce lightning. The crystals normally fall horizontally, but electric fields can cause them to clump together and fall in other directions. Detection The aerospace industry is working to design a radar that can detect ice crystal environments to discern hazardous flight conditions. Ice crystals can melt when they touch the surface of a warm aircraft and then refreeze due to environmental conditions; the accumulation of ice around the engine damages the aircraft. Weather forecasting uses differential reflectivity weather radars to identify types of precipitation by comparing a hydrometeor's horizontal and vertical extents. Ice crystals are larger in the horizontal direction and are thus detectable.
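Differential reflectivity is defined from the ratio of the horizontally and vertically polarized radar returns. The following is a minimal illustrative sketch of that calculation in Python; the numerical reflectivity values and the comments on typical targets are assumptions for demonstration, not operational radar thresholds.

import math

def differential_reflectivity(z_h: float, z_v: float) -> float:
    # Z_DR (in dB) from the linear horizontal and vertical
    # reflectivity factors (conventionally in mm^6/m^3).
    return 10.0 * math.log10(z_h / z_v)

# A horizontally oriented ice crystal returns more power in the
# horizontal polarization, giving a positive Z_DR; a near-spherical
# target gives a Z_DR near zero. The input values here are made up.
print(differential_reflectivity(500.0, 300.0))  # about 2.2 dB, oblate target
print(differential_reflectivity(400.0, 395.0))  # about 0.05 dB, near-spherical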
Physical sciences
Water
Chemistry
223986
https://en.wikipedia.org/wiki/Cycloalkane
Cycloalkane
In organic chemistry, the cycloalkanes (also called naphthenes, but distinct from naphthalene) are the monocyclic saturated hydrocarbons. In other words, a cycloalkane consists only of hydrogen and carbon atoms arranged in a structure containing a single ring (possibly with side chains), and all of the carbon-carbon bonds are single. The larger cycloalkanes, with more than 20 carbon atoms, are typically called cycloparaffins. All cycloalkanes are isomers of alkenes. The cycloalkanes without side chains (also known as monocycloalkanes) are classified as small (cyclopropane and cyclobutane), common (cyclopentane, cyclohexane, and cycloheptane), medium (cyclooctane through cyclotridecane), and large (all the rest). Besides this standard definition by the International Union of Pure and Applied Chemistry (IUPAC), in some authors' usage the term cycloalkane also includes those saturated hydrocarbons that are polycyclic. In any case, the general form of the chemical formula for cycloalkanes is CnH2(n+1−r), where n is the number of carbon atoms and r is the number of rings. The simpler form for cycloalkanes with only one ring is CnH2n. Nomenclature Unsubstituted cycloalkanes that contain a single ring in their molecular structure are typically named by adding the prefix "cyclo" to the name of the corresponding linear alkane with the same number of carbon atoms in its chain as the cycloalkane has in its ring. For example, the name of cyclopropane (C3H6), containing a three-membered ring, is derived from propane (C3H8), an alkane having three carbon atoms in the main chain. The naming of polycyclic alkanes such as bicyclic alkanes and spiro alkanes is more complex, with the base name indicating the number of carbons in the ring system, a prefix indicating the number of rings (e.g., "bicyclo-" or "spiro-"), and a numeric prefix before that indicating the number of carbons in each part of each ring, exclusive of junctions. For instance, a bicyclooctane that consists of a six-membered ring and a four-membered ring, which share two adjacent carbon atoms that form a shared edge, is [4.2.0]-bicyclooctane. That part of the six-membered ring, exclusive of the shared edge, has 4 carbons. That part of the four-membered ring, exclusive of the shared edge, has 2 carbons. The edge itself, exclusive of the two vertices that define it, has 0 carbons. There is more than one convention (method or nomenclature) for the naming of compounds, which can be confusing for those who are just learning and inconvenient for those who are well-versed in the older ways. For beginners, it is best to learn IUPAC nomenclature from a source that is up to date, because this system is constantly being revised. In the above example, [4.2.0]-bicyclooctane would be written bicyclo[4.2.0]octane to fit the conventions for IUPAC naming. It then has room for an additional numerical prefix if there is the need to include details of other attachments to the molecule, such as chlorine or a methyl group. Another convention is the common name, which is shorter but gives less information about the compound. An example of a common name is terpineol, which tells us only that the compound is an alcohol (because the suffix "-ol" is in the name) and should therefore have a hydroxyl group (–OH) attached. The IUPAC naming system for organic compounds can be demonstrated using the example provided in the adjacent image.
The base name of the compound, indicating the total number of carbons in both rings (including the shared edge), is listed first. For instance, "heptane" denotes "hepta-", which refers to the seven carbons, and "-ane", indicating single bonding between carbons. Next, the numerical prefix is added in front of the base name, representing the number of carbons in each ring (excluding the shared carbons) and the number of carbons present in the bridge between the rings. In this example, there are two rings with two carbons each and a single bridge with one carbon, excluding the carbons shared by both rings. The prefix consists of three numbers arranged in descending order, separated by dots: [2.2.1]. Before the numerical prefix is another prefix indicating the number of rings (e.g., "bicyclo-"). Thus, the name is bicyclo[2.2.1]heptane. Cycloalkanes as a group are also known as naphthenes, a term mainly used in the petroleum industry. Properties Containing only C–C and C–H bonds, cycloalkanes are similar to alkanes in their general properties. Cycloalkanes with high angle strain, such as cyclopropane, have weaker C–C bonds, promoting ring-opening reactions. Cycloalkanes have higher boiling points, melting points, and densities than alkanes. This is due to stronger London forces, because the ring shape allows for a larger area of contact. Even-numbered cycloalkanes tend to have higher melting points than odd-numbered cycloalkanes. While variations in enthalpy and orientational entropy of the solid-phase crystal structure largely explain the odd-even alternation found in alkane melting points, conformational entropy of the solid and liquid phases has a large impact on cycloalkane melting points. For example, cycloundecane has a large number of accessible conformers near room temperature, giving it a low melting point, whereas cyclododecane adopts a single lowest-energy conformation (up to chirality) in both the liquid phase and solid phase (above 199 K), and has a high melting point. These trends are broken from cyclopentadecane onwards, due to increasing variation in solid-phase conformational mobility, though higher cycloalkanes continue to display large odd-even fluctuations in their plastic crystal transition temperatures. Sharp plastic crystal phase transitions disappear at sufficiently large ring sizes, and cycloalkanes of sufficiently high molecular weight have crystal lattices and melting points similar to those of high-density polyethylene. Table of cycloalkanes Conformations and ring strain In cycloalkanes, the carbon atoms are sp3 hybridized, which would imply an ideal tetrahedral bond angle of 109° 28′ whenever possible. For evident geometrical reasons, rings with 3, 4, and (to a small extent) also 5 atoms can only afford narrower angles; the consequent deviation from the ideal tetrahedral bond angles causes an increase in potential energy and an overall destabilizing effect. Eclipsing of hydrogen atoms is an important destabilizing effect as well. The strain energy of a cycloalkane is the increase in energy caused by the compound's geometry, and is calculated by comparing the experimental standard enthalpy change of combustion of the cycloalkane with the value calculated using average bond energies. Molecular mechanics calculations are well suited to identify the many conformations occurring particularly in medium rings. Ring strain is highest for cyclopropane, in which the carbon atoms form a triangle and therefore have 60° C–C–C bond angles.
There are also three pairs of eclipsed hydrogens. The ring strain is calculated to be around 120 kJ mol−1. Cyclobutane has the carbon atoms in a puckered square with approximately 90° bond angles; "puckering" reduces the eclipsing interactions between hydrogen atoms. Its ring strain is therefore slightly less, at around 110 kJ mol−1. For a theoretical planar cyclopentane the C–C–C bond angles would be 108°, very close to the measure of the tetrahedral angle. Actual cyclopentane molecules are puckered, but this changes only the bond angles slightly, so that angle strain is relatively small. The eclipsing interactions are also reduced, leaving a ring strain of about 25 kJ mol−1. In cyclohexane the ring strain and eclipsing interactions are negligible because the puckering of the ring allows ideal tetrahedral bond angles to be achieved. In the most stable chair form of cyclohexane, axial hydrogens on adjacent carbon atoms are pointed in opposite directions, virtually eliminating eclipsing strain. In medium-sized rings (7 to 13 carbon atoms), conformations in which the angle strain is minimised create transannular strain or Pitzer (torsional) strain. At these ring sizes, one or more of these sources of strain must be present, resulting in an increase in strain energy, which peaks at 9 carbons (around 50 kJ mol−1). After that, strain energy slowly decreases until 12 carbon atoms, where it drops significantly; at 14, another significant drop occurs and the strain is on a level comparable with 10 kJ mol−1. At larger ring sizes there is little or no strain, since there are many accessible conformations corresponding to a diamond lattice. Ring strain can be considerably higher in bicyclic systems. For example, bicyclobutane, C4H6, is noted for being one of the most strained compounds that is isolatable on a large scale; its strain energy is estimated at 267 kJ mol−1. Reactions Cycloalkanes, referred to as naphthenes, are a major substrate for the catalytic reforming process. In the presence of a catalyst and at temperatures of about 495 to 525 °C, naphthenes undergo dehydrogenation to give aromatic derivatives, as in the conversion of cyclohexane to benzene: C6H12 → C6H6 + 3 H2. The process provides a way to produce high-octane gasoline. In another major industrial process, cyclohexanol is produced by the oxidation of cyclohexane in air, typically using cobalt catalysts: 2 C6H12 + O2 → 2 C6H11OH This process coforms cyclohexanone, and this mixture ("KA oil" for ketone-alcohol oil) is the main feedstock for the production of adipic acid, used to make nylon. The small cycloalkanes – in particular, cyclopropane – have a lower stability due to Baeyer strain and ring strain. They react similarly to alkenes, though they do not react in electrophilic addition, but in nucleophilic aliphatic substitution. These reactions are ring-opening reactions or ring-cleavage reactions of alkyl cycloalkanes. Preparation Many simple cycloalkanes are obtained from petroleum. They can be produced by hydrogenation of unsaturated, even aromatic, precursors. Numerous methods exist for preparing cycloalkanes by ring-closing reactions of difunctional precursors; for example, diesters are cyclized in the Dieckmann condensation. The acyloin condensation can be deployed similarly. For larger rings (macrocyclizations) more elaborate methods are required, since intramolecular ring closure competes with intermolecular reactions.
The Diels–Alder reaction, a [4+2] cycloaddition, provides a route to cyclohexenes. The corresponding [2+2] cycloaddition reactions, which usually require photochemical activation, result in cyclobutanes.
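The general formula CnH2(n+1−r) given above can be checked mechanically. Below is a minimal Python sketch (the helper function name is my own, not from any chemistry library) that computes the molecular formula from the carbon and ring counts; bicyclo[2.2.1]heptane from the nomenclature example comes out as C7H12, as expected for a seven-carbon, two-ring saturated hydrocarbon.

def cycloalkane_formula(n: int, r: int = 1) -> str:
    # Molecular formula C_n H_{2(n+1-r)} for a saturated hydrocarbon
    # with n carbon atoms and r rings (r = 0 gives the acyclic alkane).
    h = 2 * (n + 1 - r)
    return f"C{n}H{h}"

print(cycloalkane_formula(6))       # C6H12 (cyclohexane, one ring)
print(cycloalkane_formula(7, r=2))  # C7H12 (bicyclo[2.2.1]heptane)
print(cycloalkane_formula(3, r=0))  # C3H8  (propane, for comparison)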
Physical sciences
Aliphatic hydrocarbons
Chemistry
223992
https://en.wikipedia.org/wiki/Wind%20shear
Wind shear
Wind shear (also written windshear), sometimes referred to as wind gradient, is a difference in wind speed and/or direction over a relatively short distance in the atmosphere. Atmospheric wind shear is normally described as either vertical or horizontal wind shear. Vertical wind shear is a change in wind speed or direction with a change in altitude. Horizontal wind shear is a change in wind speed with a change in lateral position for a given altitude. Wind shear is a microscale meteorological phenomenon occurring over a very small distance, but it can be associated with mesoscale or synoptic scale weather features such as squall lines and cold fronts. It is commonly observed near microbursts and downbursts caused by thunderstorms, fronts, areas of locally higher low-level winds referred to as low-level jets, near mountains, radiation inversions that occur due to clear skies and calm winds, buildings, wind turbines, and sailboats. Wind shear has significant effects on the control of an aircraft, and it has been the sole or a contributing cause of many aircraft accidents. Sound movement through the atmosphere is affected by wind shear, which can bend the wave front, causing sounds to be heard where they normally would not. Strong vertical wind shear within the troposphere also inhibits tropical cyclone development, but helps to organize individual thunderstorms into longer life cycles which can then produce severe weather. The thermal wind concept explains how differences in wind speed at different heights are dependent on horizontal temperature differences, and explains the existence of the jet stream. Definition Wind shear refers to the variation of wind velocity over either horizontal or vertical distances. Airplane pilots generally regard as significant a horizontal change in airspeed above a threshold that is lower for light aircraft than for airliners at flight altitude; sufficiently large vertical speed changes also qualify as significant wind shear for aircraft. Low-level wind shear can affect aircraft airspeed during takeoff and landing in disastrous ways, and airliner pilots are trained to avoid all microburst wind shear (a severe loss of headwind). The rationale for this additional caution includes the facts that microburst intensity can double in a minute or less, that the winds can shift to excessive crosswinds, that the headwind loss can exceed the threshold for survivability at some stages of low-altitude operations, and that several historical wind shear accidents involved microbursts. Wind shear is also a key factor in the formation of severe thunderstorms. The additional hazard of turbulence is often associated with wind shear. Occurrence Weather situations where shear is observed include: Weather fronts. Significant shear is observed when the temperature difference across the front is large and the front is fast-moving. Because fronts are three-dimensional phenomena, frontal shear can be observed at any altitude between surface and tropopause, and can therefore be seen both horizontally and vertically. Vertical wind shear above warm fronts is more of an aviation concern than near and behind cold fronts, due to its greater duration. Upper-level jet streams. Associated with upper-level jet streams is a phenomenon known as clear air turbulence (CAT), caused by vertical and horizontal wind shear connected to the wind gradient at the edge of the jet streams. The CAT is strongest on the anticyclonic shear side of the jet, usually next to or just below the axis of the jet. Low-level jet streams.
When a nocturnal low-level jet forms overnight above Earth's surface ahead of a cold front, significant low-level vertical wind shear can develop near the lower portion of the low-level jet. This is also known as non-convective wind shear, as it is not due to nearby thunderstorms. Mountains. Inversions. When a radiation inversion forms near the ground on a clear and calm night, friction does not affect the wind above the top of the inversion layer. The change in wind across the inversion can be as much as 90 degrees in direction, with large changes in speed. Even a nocturnal (overnight) low-level jet can sometimes be observed; it tends to be strongest towards sunrise. Density differences cause additional problems for aviation. Downbursts. When an outflow boundary forms due to a shallow layer of rain-cooled air spreading out near ground level from the parent thunderstorm, both speed and directional wind shear can result at the leading edge of the three-dimensional boundary. The stronger the outflow boundary is, the stronger the resultant vertical wind shear will become. Horizontal component Weather fronts Weather fronts are boundaries between two masses of air of different densities, or different temperature and moisture properties, which normally are convergence zones in the wind field and are the principal cause of significant weather. Within surface weather analyses, they are depicted using various colored lines and symbols. The air masses usually differ in temperature and may also differ in humidity. Wind shear in the horizontal occurs near these boundaries. Cold fronts feature narrow bands of thunderstorms and severe weather, and may be preceded by squall lines and dry lines. Cold fronts are sharper surface boundaries with more significant horizontal wind shear than warm fronts. When a front becomes stationary, it can degenerate into a line that separates regions of differing wind speed, known as a shear line, though the wind direction across the front normally remains constant. In the tropics, tropical waves move from east to west across the Atlantic and eastern Pacific basins. Directional and speed shear can occur across the axis of stronger tropical waves, as northerly winds precede the wave axis and southeast winds are seen behind the wave axis. Horizontal wind shear can also occur along the local land breeze and sea breeze boundaries. Near coastlines The magnitude of winds offshore is nearly double the wind speed observed onshore. This is attributed to the differences in friction between landmasses and offshore waters. Sometimes, there are even directional differences, particularly if local sea breezes change the wind on shore during daylight hours. Vertical component Thermal wind Thermal wind is a meteorological term referring not to an actual wind, but to a difference in the geostrophic wind between two pressure levels p1 and p0, with p1 < p0; in essence, it is wind shear. It is only present in an atmosphere with horizontal changes in temperature (or, in an ocean, with horizontal gradients of density), i.e., baroclinicity. In a barotropic atmosphere, where temperature is uniform, the geostrophic wind is independent of height. The name stems from the fact that this wind flows around areas of low (and high) temperature in the same manner as the geostrophic wind flows around areas of low (and high) pressure. The thermal wind equation is v_T = (1/f) k × ∇(φ1 − φ0), where the φi are geopotential height fields with φ1 > φ0, f is the Coriolis parameter, and k is the upward-pointing unit vector in the vertical direction. The thermal wind equation does not determine the wind in the tropics.
Since f is small or zero, such as near the equator, the equation reduces to stating that ∇(φ1 − φ0) is small. This equation basically describes the existence of the jet stream, a westerly current of air with maximum wind speeds close to the tropopause, which is (even though other factors are also important) the result of the temperature contrast between equator and pole. Effects on tropical cyclones Tropical cyclones are, in essence, heat engines that are fueled by the temperature gradient between the warm tropical ocean surface and the colder upper atmosphere. Tropical cyclone development requires relatively low values of vertical wind shear so that their warm core can remain above their surface circulation center, thereby promoting intensification. Strongly sheared tropical cyclones weaken as the upper circulation is blown away from the low-level center. Effects on thunderstorms and severe weather Severe thunderstorms, which can spawn tornadoes and hailstorms, require wind shear to organize the storm in such a way as to maintain the thunderstorm for a longer period. This occurs as the storm's inflow becomes separated from its rain-cooled outflow. An increasing nocturnal, or overnight, low-level jet can increase the severe weather potential by increasing the vertical wind shear through the troposphere. Thunderstorms in an atmosphere with virtually no vertical wind shear weaken as soon as they send out an outflow boundary in all directions, which then quickly cuts off the inflow of relatively warm, moist air and causes the thunderstorm to dissipate. Planetary boundary layer The atmospheric effect of surface friction with winds aloft forces surface winds to slow and back counterclockwise near the surface of Earth, blowing inward across isobars (lines of equal pressure), when compared to the winds in frictionless flow well above Earth's surface. This layer where friction slows and changes the wind is known as the planetary boundary layer, sometimes the Ekman layer, and it is thickest during the day and thinnest at night. Daytime heating thickens the boundary layer, as winds at the surface become increasingly mixed with winds aloft due to insolation, or solar heating. Radiative cooling overnight further enhances wind decoupling between the winds at the surface and the winds above the boundary layer by calming the surface wind, which increases wind shear. These wind changes force wind shear between the boundary layer and the wind aloft, and are most pronounced at night. Effects on flight Gliding In gliding, wind gradients just above the surface affect the takeoff and landing phases of the flight of a glider. Wind gradient can have a noticeable effect on ground launches, also known as winch launches or wire launches. If the wind gradient is significant or sudden, or both, and the pilot maintains the same pitch attitude, the indicated airspeed will increase, possibly exceeding the maximum ground launch tow speed. The pilot must adjust the airspeed to deal with the effect of the gradient. When landing, wind shear is also a hazard, particularly when the winds are strong. As the glider descends through the wind gradient on final approach to landing, airspeed decreases while sink rate increases, and there is insufficient time to accelerate prior to ground contact. The pilot must anticipate the wind gradient and use a higher approach speed to compensate for it. Wind shear is also a hazard for aircraft making steep turns near the ground.
It is a particular problem for gliders, which have a relatively long wingspan that exposes them to a greater wind speed difference for a given bank angle. The different airspeed experienced by each wing tip can result in an aerodynamic stall on one wing, causing a loss-of-control accident. Parachuting Wind shear or wind gradients are a threat to parachutists, particularly in BASE jumping and wingsuit flying. Skydivers have been pushed off course by sudden shifts in wind direction and speed, and have collided with bridges, cliffsides, trees, other skydivers, the ground, and other obstacles. Skydivers routinely adjust the position of their open canopies to compensate for changes in direction while making landings, to prevent accidents such as canopy collisions and canopy inversion. Soaring Soaring related to wind shear, also called dynamic soaring, is a technique used by soaring birds like albatrosses, which can maintain flight without flapping their wings. If the wind shear is of sufficient magnitude, a bird can climb into the wind gradient, trading ground speed for height, while maintaining airspeed. By then turning downwind and diving through the wind gradient, it can also gain energy. The technique has also been used by glider pilots on rare occasions. Wind shear can also produce wave lift. This occurs when an atmospheric inversion separates two layers with a marked difference in wind direction. If the wind encounters distortions in the inversion layer caused by thermals coming up from below, it will produce significant shear waves that can be used for soaring. Impact on passenger aircraft Wind shear can be extremely dangerous for aircraft, especially during takeoff and landing. Sudden changes in wind velocity can cause rapid decreases in airspeed, leaving the aircraft unable to maintain altitude. Wind shear has been responsible for several deadly accidents, including Eastern Air Lines Flight 66, Pan Am Flight 759, Delta Air Lines Flight 191, and USAir Flight 1016. Wind shear can be detected using Doppler radar. Airports can be fitted with low-level windshear alert systems or Terminal Doppler Weather Radar, and aircraft can be fitted with airborne wind shear detection and alert systems. Following the 1985 crash of Delta Air Lines Flight 191, the U.S. Federal Aviation Administration mandated in 1988 that all commercial aircraft have airborne wind shear detection and alert systems by 1993. The installation of high-resolution Terminal Doppler Weather Radar stations at many U.S. airports that are commonly affected by wind shear has further aided the ability of pilots and ground controllers to avoid wind shear conditions. Sailing Wind shear affects sailboats in motion by presenting a different wind speed and direction at different heights along the mast. The effect of low-level wind shear can be factored into the selection of sail twist in the sail design, but this can be difficult to predict, since wind shear may vary widely in different weather conditions. Sailors may also adjust the trim of the sail to account for low-level wind shear, for example using a boom vang. Sound propagation Wind shear can have a pronounced effect upon sound propagation in the lower atmosphere, where waves can be "bent" by the phenomenon of refraction. The audibility of sounds from distant sources, such as thunder or gunshots, is very dependent on the amount of shear.
The result of these differing sound levels is key in noise pollution considerations, for example from roadway noise and aircraft noise, and must be considered in the design of noise barriers. This phenomenon was first applied to the field of noise pollution study in the 1960s, contributing to the design of urban highways as well as noise barriers. The speed of sound varies with temperature. Since temperature and sound velocity normally decrease with increasing altitude, sound is refracted upward, away from listeners on the ground, producing an acoustic shadow at some distance from the source. In 1862, during the American Civil War Battle of Iuka, an acoustic shadow, believed to have been enhanced by a northeast wind, kept two divisions of Union soldiers out of the battle, because they could not hear the sounds of battle only six miles downwind. Effects on architecture Wind engineering is a field of engineering devoted to the analysis of wind effects on the natural and built environment. It includes strong winds which may cause discomfort as well as extreme winds such as tornadoes, hurricanes, and storms which may cause widespread destruction. Wind engineering draws upon meteorology, aerodynamics, and several specialist engineering disciplines. The tools used include climate models, atmospheric boundary layer wind tunnels, and numerical models. It involves, among other topics, how wind impacting buildings must be accounted for in engineering. Wind turbines are affected by wind shear. Vertical wind-speed profiles result in different wind speeds at the blades nearest to the ground level compared to those at the top of blade travel, and this, in turn, affects the turbine operation. This low-level wind shear can cause a large bending moment in the shaft of a two-bladed turbine when the blades are vertical. The reduced wind shear over water means shorter and less expensive wind turbine towers can be used in shallow seas.
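The dependence of wind speed on height that drives these turbine loads is often modeled in engineering work with a power-law profile; this model is a common convention, not something specified in the text above. A minimal Python sketch follows, with illustrative numbers for the reference wind, hub height, and blade length.

def wind_at_height(u_ref: float, z_ref: float, z: float, alpha: float = 1.0 / 7.0) -> float:
    # Power-law vertical wind profile: u(z) = u_ref * (z / z_ref)**alpha.
    # alpha of about 1/7 is a common assumption for neutral conditions
    # over open terrain; it varies with surface roughness and stability.
    return u_ref * (z / z_ref) ** alpha

# Shear across the rotor of a hypothetical turbine with its hub at 100 m
# and 40 m blades, given 8 m/s measured at a 10 m anemometer mast.
u_low = wind_at_height(8.0, 10.0, 60.0)    # blade tip at its lowest point
u_high = wind_at_height(8.0, 10.0, 140.0)  # blade tip at its highest point
print(f"{u_low:.1f} m/s at 60 m, {u_high:.1f} m/s at 140 m")  # ~10.3 vs ~11.7

The difference between the speeds at the two tip heights is the source of the once-per-revolution bending moment on a two-bladed turbine described above.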
Physical sciences
Winds
Earth science
224145
https://en.wikipedia.org/wiki/Northern%20storm%20petrel
Northern storm petrel
Northern storm petrels are seabirds in the genus Hydrobates in the family Hydrobatidae, part of the order Procellariiformes. The family was once lumped with the similar austral storm petrels in a combined storm petrel family, but the two have been split, as they are not closely related. These smallest of seabirds feed on planktonic crustaceans and small fish picked from the surface, typically while hovering. Their flight is fluttering and sometimes bat-like. The northern storm petrels are found in the Northern Hemisphere, although some species around the Equator dip into the south. They are strictly pelagic, coming to land only when breeding. In the case of most species, little is known of their behaviour and distribution at sea, where they can be hard to find and harder to identify. They are colonial nesters, displaying strong philopatry to their natal colonies and nesting sites. Most species nest in crevices or burrows, and all but one species attend the breeding colonies nocturnally. Pairs form long-term, monogamous bonds and share incubation and chick-feeding duties. Like many species of seabirds, nesting is highly protracted, with incubation taking up to 50 days and fledging another 70 days after that. Several species of storm petrel are threatened by human activities. One species, the Guadalupe storm petrel, is thought to have gone extinct. The principal threats to storm petrels are introduced species, particularly mammals, in their breeding colonies; many storm petrels habitually nest on isolated mammal-free islands and are unable to cope with predators such as rats and feral cats. Taxonomy The family Hydrobatidae was introduced, with Hydrobates as the type genus, by the Australian-born ornithologist Gregory Mathews in 1912. The background is complicated, as the family Hydrobatidae had originally been introduced in 1849, with Hydrobata as the type genus, by the French zoologist Côme-Damien Degland. Hydrobata had been erected in 1816 for species in the dipper family Cinclidae by the French ornithologist Louis Pierre Vieillot. In 1992 the International Code of Zoological Nomenclature (ICZN) suppressed the genus Hydrobata Vieillot, 1816. Under the rules of the ICZN, the family Hydrobatidae Degland, 1849 thus became unavailable, as its type genus had been suppressed. This cleared the way for the family Hydrobatidae introduced in 1912 by Mathews. The genus Hydrobates was erected in 1822 by the German zoologist Friedrich Boie. He listed two species but did not specify a type. In 1884 Spencer Baird, Thomas Brewer and Robert Ridgway designated the European storm petrel as the type species. The genus name combines the Ancient Greek hudro- meaning "water-" with batēs meaning "walker". In the past, two subfamilies, the Hydrobatinae and Oceanitinae, were recognized within a single large family Hydrobatidae, but this has since been split with the elevation of the Oceanitidae to family status. The Oceanitidae, or austral storm petrels, are mostly found in southern waters (though Wilson's storm petrel regularly migrates into the Northern Hemisphere). The Hydrobatidae, or northern storm petrels, are largely restricted to the Northern Hemisphere, although a few visit or breed a short distance south of the equator. The family Hydrobatidae originally included two genera, Hydrobates and Oceanodroma. Cytochrome b DNA sequence analysis suggested that the family was paraphyletic and more accurately treated as two distinct families. A few fossil species have been found, the earliest being from the Upper Miocene.
In 2021, the IOC merged Hydrobates and Oceanodroma into the single genus Hydrobates, as the family was paraphyletic as previously defined. The following cladogram shows the results of the phylogenetic analysis by Wallace et al. (2017). Species One species, the Guadalupe storm petrel (O. macrodactyla), is possibly extinct. In 2010, the International Ornithological Congress (IOC) added the Cape Verde storm petrel (O. jabejabe) to their list of accepted species splits, following Bolton et al. 2007. This species was split from the band-rumped storm petrel (O. castro). In 2016, the IOC added Townsend's storm petrel (O. socorroensis) and Ainley's storm petrel (O. cheimomnestes) to their list of accepted splits, following Howell 2012. These species were split from Leach's storm petrel (O. leucorhoa). Morphology and flight Northern storm petrels are the smallest of all the seabirds, ranging from 13 to 25 cm in length. The Hydrobatidae have longer wings than the austral storm petrels, forked or wedge-shaped tails, and shorter legs. The legs of all storm petrels are proportionally longer than those of other Procellariiformes, but they are very weak and unable to support the bird's weight for more than a few steps. All but two of the Hydrobatidae are mostly dark in colour with varying amounts of white on the rump. Two species have entirely different plumage: the ringed storm petrel, which has white undersides and facial markings, and the fork-tailed storm petrel, which has pale grey plumage. This is a notoriously difficult group to identify at sea. Onley and Scofield (2007) state that much published information is incorrect, and that photographs in the major seabird books and websites are frequently ascribed to the wrong species. They also consider that several national bird lists include species that have been incorrectly identified or have been accepted on inadequate evidence. Storm petrels use a variety of techniques to aid flight. Most species occasionally feed by surface pattering, moving their feet on the water's surface while holding steady above it. They remain stationary by hovering with rapid fluttering, or by using the wind to anchor themselves in place. This method of feeding flight is more commonly used by the Oceanitidae storm petrels, however. Northern storm petrels also use dynamic soaring, gliding across wave fronts and gaining energy from the vertical wind gradient. Diet The diet of many storm petrel species is poorly known owing to the difficulty of researching them; overall, the family is thought to concentrate on crustaceans. Small fish, oil droplets, and molluscs are also taken by many species. Some species are known to be rather more specialised; the grey-backed storm petrel is known to concentrate on the larvae of goose barnacles. Almost all species forage in the pelagic zone. Although storm petrels are capable of swimming well and often form rafts on the water's surface, they do not feed on the water. Instead, feeding usually takes place on the wing, with birds hovering above or "walking" on the surface (see morphology) and snatching small morsels. Rarely, prey is obtained by making shallow dives under the surface. Like many seabirds, storm petrels associate with other species of seabirds and marine mammal species to help obtain food. They may benefit from the actions of diving predators such as seals and penguins, which push prey up towards the surface while hunting, allowing the surface-feeding storm petrels to reach them.
Distribution and movements The Hydrobatidae are mostly found in the Northern Hemisphere. Several species of northern storm petrel undertake post-breeding migrations of differing lengths: long ones, such as Swinhoe's storm petrel, which breeds in the west Pacific and migrates to the west Indian Ocean, or shorter ones, such as the black storm petrel, which nests in southern California and migrates down the coast of Central America as far south as Colombia. Some species, like Tristram's storm petrel, are thought to be essentially sedentary and do not undertake any migrations away from their breeding islands. Breeding Storm petrels nest colonially, for the most part on islands, although a few species breed on the mainland, particularly Antarctica. Nesting sites are attended at night to avoid predators; the wedge-rumped storm petrels nesting in the Galapagos Islands are the exception to this rule and attend their nesting sites during the day. Storm petrels display high levels of philopatry, returning to their natal colonies to breed. In one instance, a band-rumped storm petrel was caught as an adult 2 m from its natal burrow. Storm petrels nest either in burrows dug into soil or sand, or in small crevices in rocks and scree. Competition for nesting sites is intense in colonies where storm petrels compete with other burrowing petrels, and shearwaters have been recorded killing storm petrels to occupy their burrows. Colonies can be extremely large and dense: densities as high as 8 pairs/m2 have been recorded for band-rumped storm petrels in the Galapagos, and colonies of 3.6 million birds for Leach's storm petrel. Storm petrels are monogamous and form long-term pair bonds that last a number of years. Studies of paternity using DNA fingerprinting have shown that, unlike many other monogamous birds, infidelity (extra-pair mating) is very rare. As with the other Procellariiformes, a single egg is laid by a pair in a breeding season; if the egg fails, then usually no attempt is made to lay again (although it happens rarely). Both sexes incubate in shifts of up to six days. The egg hatches after 40 to 50 days; the young is brooded continuously for another 7 days or so before being left alone in the nest during the day and fed by regurgitation at night. Meals fed to the chick weigh around 10–20% of the parent's body weight and consist of both prey items and stomach oil. Stomach oil is an energy-rich oil (its calorific value is around 9.6 kcal/g) created from partly digested prey in a part of the foregut known as the proventriculus. By partly converting prey items into stomach oil, storm petrels can maximise the amount of energy chicks receive during feeding, an advantage for small seabirds that can only make a single visit to the chick during a 24-hour period (at night). The typical age at which chicks fledge depends on the species, taking between 50 and 70 days. The time taken to hatch and raise the young is long for the bird's size, but is typical of seabirds, which in general are K-selected, living much longer, delaying breeding for longer, and investing more effort into fewer young. The young leave their burrows at around 62 days of age. They are independent almost at once and quickly disperse into the ocean. They return to their natal colony after 2 or 3 years, but will not breed until at least 4 years old. Storm petrels have been recorded living as long as 30 years. Threats and conservation Several species of storm petrel are threatened by human activities.
The Guadalupe storm petrel has not been observed since 1906, and most authorities consider it extinct. One species (the ashy storm petrel) is listed as endangered by the IUCN due to a 42% decline over 20 years. For the ringed storm petrel, even the sites of its breeding colonies remain a mystery. Storm petrels face the same threats as other seabirds; in particular, they are threatened by introduced species. The Guadalupe storm petrel was driven to extinction by feral cats, and introduced predators have also been responsible for declines in other species. Habitat degradation caused by introduced goats and pigs, which limits nesting opportunities, is also a problem, especially if it increases competition from more aggressive burrowing petrels. Cultural representation of the storm petrel "Petrel" is a diminutive form of "Peter", a reference to Saint Peter; it was given to these birds because they sometimes appear to walk across the water's surface. The more specific "storm petrel" or "stormy petrel" is a reference to their habit of hiding in the lee of ships during storms. Early sailors named these birds "Mother Carey's chickens" because they were thought to warn of oncoming storms; this name is based on a corrupted form of Mater Cara, a name for the Blessed Virgin Mary. Breton folklore holds that storm petrels are the spirits of sea-captains who mistreated their crew, doomed to spend eternity flying over the sea; they are also held to be the souls of drowned sailors. A sailing superstition holds that the appearance of a storm petrel foretells bad weather. Sinister names from Britain and France include waterwitch, satanite, satanique, and oiseau du diable. Symbol of revolution The association of the storm petrel with turbulent weather has led to its use as a metaphor for revolutionary views, the epithet "stormy petrel" being applied by various authors to characters as disparate as a Roman tribune, a Presbyterian minister in the early Carolinas, an Afghan governor, or an Arkansas politician. Russian revolutionary writer Maxim Gorky bore the epithet "the Stormy Petrel of the Revolution" (Буревестник Революции), presumably due to his authorship of the famous 1901 poem "Song of the Stormy Petrel". In "Song of the Stormy Petrel", Gorky turned to the imagery of subantarctic avifauna to describe Russian society's attitudes to the coming revolution. The storm petrel was depicted as unafraid of the coming storm, the revolution. This poem was called "the battle anthem of the revolution", and earned Gorky himself the title of the "Storm Petrel of the Revolution". While this English translation of the bird's name may not be a very ornithologically accurate rendering of the Russian burevestnik (буревестник), it is poetically appropriate, as burevestnik literally means "the announcer of the storm". To honour Gorky and his work, the name Burevestnik was bestowed on a variety of institutions, locations, and products in the USSR. The motif of the stormy petrel has a long association with revolutionary anarchism. Various groups adopted the bird's name, either as a group identifier, as in the Spanish Civil War, or for their publications. "Stormy Petrel" was the title of a German anarchist paper of the late 19th century; it was also the name of a Russian exile anarchist communist group operating in Switzerland in the early 20th century.
The Stormy Petrel (Burevestnik) was the title of the magazine of the Anarchist Communist Federation in Russia around the time of the 1905 revolution, and is still an imprint of the London group of the Anarchist Federation of Britain and Ireland. Writing in 1936, Emma Goldman referred to Buenaventura Durruti as "this stormy petrel of the anarchist and revolutionary movement".
Biology and health sciences
Procellariiformes
Animals
224301
https://en.wikipedia.org/wiki/Boundary%20layer
Boundary layer
In physics and fluid mechanics, a boundary layer is the thin layer of fluid in the immediate vicinity of a bounding surface formed by the fluid flowing along the surface. The fluid's interaction with the wall induces a no-slip boundary condition (zero velocity at the wall). The flow velocity then monotonically increases above the surface until it returns to the bulk flow velocity. The thin layer consisting of fluid whose velocity has not yet returned to the bulk flow velocity is called the velocity boundary layer. The air next to a human is heated, resulting in gravity-induced convective airflow, which produces both a velocity and a thermal boundary layer. A breeze disrupts the boundary layer, while hair and clothing protect it, making the human feel cooler or warmer. On an aircraft wing, the velocity boundary layer is the part of the flow close to the wing, where viscous forces distort the surrounding non-viscous flow. In the Earth's atmosphere, the atmospheric boundary layer is the air layer (roughly 1 km deep) near the ground. It is affected by the surface through day-night heat flows caused by the sun heating the ground, and through moisture and momentum transfer to or from the surface. Types of boundary layers Laminar boundary layers can be loosely classified according to their structure and the circumstances under which they are created. The thin shear layer which develops on an oscillating body is an example of a Stokes boundary layer, while the Blasius boundary layer refers to the well-known similarity solution near an attached flat plate held in an oncoming unidirectional flow, and the Falkner–Skan boundary layer is a generalization of the Blasius profile. When a fluid rotates and viscous forces are balanced by the Coriolis effect (rather than convective inertia), an Ekman layer forms. In the theory of heat transfer, a thermal boundary layer occurs. A surface can have multiple types of boundary layer simultaneously. The viscous nature of airflow reduces the local velocities on a surface and is responsible for skin friction. The layer of air over the wing's surface that is slowed down or stopped by viscosity is the boundary layer. There are two different types of boundary layer flow: laminar and turbulent. Laminar boundary layer flow The laminar boundary layer is a very smooth flow, while the turbulent boundary layer contains swirls or "eddies". The laminar flow creates less skin friction drag than the turbulent flow, but is less stable. Boundary layer flow over a wing surface begins as a smooth laminar flow. As the flow continues back from the leading edge, the laminar boundary layer increases in thickness. Turbulent boundary layer flow At some distance back from the leading edge, the smooth laminar flow breaks down and transitions to a turbulent flow. From a drag standpoint, it is advisable to have the transition from laminar to turbulent flow as far aft on the wing as possible, or to have a large amount of the wing surface within the laminar portion of the boundary layer. The low-energy laminar flow, however, tends to break down more suddenly than the turbulent layer. The Prandtl boundary layer concept The aerodynamic boundary layer was first hypothesized by Ludwig Prandtl in a paper presented on August 12, 1904, at the third International Congress of Mathematicians in Heidelberg, Germany.
Prandtl's boundary layer concept simplifies the equations of fluid flow by dividing the flow field into two areas: one inside the boundary layer, dominated by viscosity and creating the majority of the drag experienced by the body; and one outside the boundary layer, where viscosity can be neglected without significant effects on the solution. This allows a closed-form solution for the flow in both areas by making significant simplifications of the full Navier–Stokes equations. The same hypothesis is applicable to other fluids (besides air) with moderate to low viscosity, such as water. For the case where there is a temperature difference between the surface and the bulk fluid, it is found that the majority of the heat transfer to and from a body takes place in the vicinity of the velocity boundary layer. This again allows the equations to be simplified in the flow field outside the boundary layer. The pressure distribution in the direction normal to the surface (such as an airfoil) remains relatively constant across the boundary layer and is the same as on the surface itself. The thickness of the velocity boundary layer is normally defined as the distance from the solid body to the point at which the viscous flow velocity is 99% of the freestream velocity (the surface velocity of an inviscid flow). Displacement thickness is an alternative definition stating that the boundary layer represents a deficit in mass flow compared to an inviscid flow with slip at the wall. It is the distance by which the wall would have to be displaced in the inviscid case to give the same total mass flow as the viscous case. The no-slip condition requires the flow velocity at the surface of a solid object to be zero and the fluid temperature to be equal to the temperature of the surface. The flow velocity will then increase rapidly within the boundary layer, governed by the boundary layer equations below. The thermal boundary layer thickness is similarly the distance from the body at which the temperature is 99% of the freestream temperature. The ratio of the two thicknesses is governed by the Prandtl number. If the Prandtl number is 1, the two boundary layers have the same thickness. If the Prandtl number is greater than 1, the thermal boundary layer is thinner than the velocity boundary layer. If the Prandtl number is less than 1, which is the case for air at standard conditions, the thermal boundary layer is thicker than the velocity boundary layer. In high-performance designs, such as gliders and commercial aircraft, much attention is paid to controlling the behavior of the boundary layer to minimize drag. Two effects have to be considered. First, the boundary layer adds to the effective thickness of the body through the displacement thickness, hence increasing the pressure drag. Second, the shear forces at the surface of the wing create skin friction drag. At high Reynolds numbers, typical of full-sized aircraft, it is desirable to have a laminar boundary layer. This results in lower skin friction due to the characteristic velocity profile of laminar flow. However, the boundary layer inevitably thickens and becomes less stable as the flow develops along the body, and eventually becomes turbulent, a process known as boundary layer transition. One way of dealing with this problem is to suck the boundary layer away through a porous surface (see Boundary layer suction).
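The 99% and displacement-thickness definitions above translate directly into a short computation. The sketch below is a minimal illustration: the sinusoidal profile is a textbook closed-form approximation to a laminar profile, and the layer thickness is an arbitrary assumed value.

```python
import numpy as np

# Approximate laminar velocity profile u/U = sin(pi/2 * y/delta) for y <= delta
# (a classical closed-form approximation, used here only for illustration).
delta = 0.01                       # assumed boundary layer edge, m (arbitrary)
y = np.linspace(0.0, delta, 100001)
u_over_U = np.sin(0.5 * np.pi * y / delta)

# 99% thickness: first height where u reaches 99% of the freestream velocity.
delta_99 = y[np.argmax(u_over_U >= 0.99)]

# Displacement thickness: delta* = integral of (1 - u/U) dy, i.e. the distance
# the wall would move in inviscid flow to match the mass-flow deficit.
dy = y[1] - y[0]
delta_star = np.sum(1.0 - u_over_U) * dy

print(f"delta_99   = {delta_99 * 1000:.2f} mm")
print(f"delta_star = {delta_star * 1000:.2f} mm")   # ~ (1 - 2/pi) * delta
```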
Boundary layer suction can reduce drag, but it is usually impractical due to its mechanical complexity and the power required to move the air and dispose of it. Natural laminar flow (NLF) techniques push the boundary layer transition aft by reshaping the airfoil or fuselage so that its thickest point is farther aft and less thick. This reduces the velocities in the leading part, and the same Reynolds number is achieved with a greater length. At lower Reynolds numbers, such as those seen with model aircraft, it is relatively easy to maintain laminar flow. This gives low skin friction, which is desirable. However, the same velocity profile which gives the laminar boundary layer its low skin friction also causes it to be badly affected by adverse pressure gradients. As the pressure begins to recover over the rear part of the wing chord, a laminar boundary layer will tend to separate from the surface. Such flow separation causes a large increase in the pressure drag, since it greatly increases the effective size of the wing section. In these cases, it can be advantageous to deliberately trip the boundary layer into turbulence at a point prior to the location of laminar separation, using a turbulator. The fuller velocity profile of the turbulent boundary layer allows it to sustain the adverse pressure gradient without separating. Thus, although the skin friction is increased, overall drag is decreased. This is the principle behind the dimpling on golf balls, as well as vortex generators on aircraft. Special wing sections have also been designed which tailor the pressure recovery so that laminar separation is reduced or even eliminated. This represents an optimum compromise between the pressure drag from flow separation and the skin friction from induced turbulence. When using half-models in wind tunnels, a peniche is sometimes used to reduce or eliminate the effect of the boundary layer. Boundary layer equations The deduction of the boundary layer equations was one of the most important advances in fluid dynamics. Using an order-of-magnitude analysis, the well-known governing Navier–Stokes equations of viscous fluid flow can be greatly simplified within the boundary layer. Notably, the character of the partial differential equations (PDEs) becomes parabolic, rather than the elliptic form of the full Navier–Stokes equations. This greatly simplifies the solution of the equations. By making the boundary layer approximation, the flow is divided into an inviscid portion (which is easy to solve by a number of methods) and the boundary layer, which is governed by an easier-to-solve PDE. The continuity and Navier–Stokes equations for a two-dimensional steady incompressible flow in Cartesian coordinates are given by

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,$$

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right),$$

$$u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial y} + \nu\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right),$$

where $u$ and $v$ are the velocity components, $\rho$ is the density, $p$ is the pressure, and $\nu$ is the kinematic viscosity of the fluid at a point. The approximation states that, for a sufficiently high Reynolds number, the flow over a surface can be divided into an outer region of inviscid flow unaffected by viscosity (the majority of the flow), and a region close to the surface where viscosity is important (the boundary layer).
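To give a sense of the Reynolds numbers involved, the following sketch compares a model aircraft wing with an airliner wing. The speeds and chord lengths are order-of-magnitude inputs chosen here for illustration, not values from this article.

```python
# Chord-based Reynolds number Re = U * L / nu for air.
nu = 1.5e-5  # kinematic viscosity of air, m^2/s (approximate, sea level)

cases = {
    "model aircraft (10 m/s, 0.15 m chord)": (10.0, 0.15),
    "airliner cruise (230 m/s, 5 m chord)":  (230.0, 5.0),
}
for name, (U, L) in cases.items():
    print(f"{name}: Re = {U * L / nu:.2e}")
# ~1e5 for the model (laminar flow is easy to maintain) versus
# ~8e7 for the airliner (transition to turbulence is unavoidable).
```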
Let $u$ and $v$ be the streamwise and transverse (wall-normal) velocities, respectively, inside the boundary layer. Using scale analysis, it can be shown that the above equations of motion reduce within the boundary layer to become

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\frac{\partial^2 u}{\partial y^2}, \qquad \frac{\partial p}{\partial y} = 0,$$

and if the fluid is incompressible (as liquids are under standard conditions):

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0.$$

The order-of-magnitude analysis assumes that the streamwise length scale is significantly larger than the transverse length scale inside the boundary layer. It follows that variations in properties in the streamwise direction are generally much lower than those in the wall-normal direction. Applying this to the continuity equation shows that $v$, the wall-normal velocity, is small compared with $u$, the streamwise velocity. Since the static pressure $p$ is independent of $y$, the pressure at the edge of the boundary layer is the pressure throughout the boundary layer at a given streamwise position. The external pressure may be obtained through an application of Bernoulli's equation. Let $U$ be the fluid velocity outside the boundary layer, where $u$ and $U$ are both parallel. Substituting for $p$ gives the following result:

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = U\frac{dU}{dx} + \nu\frac{\partial^2 u}{\partial y^2}.$$

For a flow in which the static pressure also does not change in the direction of the flow, $U$ remains constant. Therefore, the equation of motion simplifies to become

$$u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} = \nu\frac{\partial^2 u}{\partial y^2}.$$

These approximations are used in a variety of practical flow problems of scientific and engineering interest. The above analysis is for any instantaneous laminar or turbulent boundary layer, but it is used mainly in laminar flow studies, since there the mean flow is also the instantaneous flow because no velocity fluctuations are present. This simplified equation is a parabolic PDE and can be solved using a similarity solution, often referred to as the Blasius boundary layer (a numerical sketch of this solution follows at the end of this section). Prandtl's transposition theorem Prandtl observed that from any solution $u(x,y),\ v(x,y)$ which satisfies the boundary layer equations, a further solution $u^*(x,y),\ v^*(x,y)$, which also satisfies the boundary layer equations, can be constructed by writing

$$u^*(x,y) = u(x,\, y + f(x)), \qquad v^*(x,y) = v(x,\, y + f(x)) - f'(x)\,u(x,\, y + f(x)),$$

where $f(x)$ is arbitrary. Since the solution is not unique from a mathematical perspective, any one of an infinite set of eigenfunctions can be added to the solution, as shown by Stewartson and Paul A. Libby. Von Kármán momentum integral Von Kármán derived the integral equation by integrating the boundary layer equation across the boundary layer in 1921. The equation is

$$\frac{\tau_w}{\rho U^2} = \frac{1}{U^2}\frac{\partial}{\partial x}\left(U^2 \delta_2\right) + \frac{\delta_1}{U}\frac{\partial U}{\partial x} + \frac{v_w}{U},$$

where $\tau_w$ is the wall shear stress, $v_w$ is the suction/injection velocity at the wall, $\delta_1$ is the displacement thickness and $\delta_2$ is the momentum thickness. The Kármán–Pohlhausen approximation is derived from this equation. Energy integral The energy integral was derived by Wieghardt:

$$\frac{2\varepsilon}{\rho U^3} = \frac{1}{U^3}\frac{\partial}{\partial x}\left(U^3 \delta_3\right) + \frac{v_w}{U},$$

where $\varepsilon$ is the energy dissipation rate due to viscosity across the boundary layer and $\delta_3$ is the energy thickness. Von Mises transformation For steady two-dimensional boundary layers, von Mises introduced a transformation which takes $x$ and $\psi$ (the stream function) as independent variables instead of $x$ and $y$, and uses a dependent variable $\chi = U^2 - u^2$ instead of $u$. The boundary layer equation then becomes

$$\frac{\partial \chi}{\partial x} = \nu\,\sqrt{U^2 - \chi}\;\frac{\partial^2 \chi}{\partial \psi^2}.$$

The original variables are recovered from

$$u = \sqrt{U^2 - \chi}, \qquad y = \int \frac{d\psi}{\sqrt{U^2 - \chi}}, \qquad v = u\,\frac{\partial y}{\partial x}.$$

This transformation was later extended to the compressible boundary layer by von Kármán and HS Tsien. Crocco's transformation For a steady two-dimensional compressible boundary layer, Luigi Crocco introduced a transformation which takes $x$ and $u$ as independent variables instead of $x$ and $y$, and uses a dependent variable $\tau = \mu\,\partial u/\partial y$ (the shear stress) instead of $u$. The boundary layer equation (here written for zero streamwise pressure gradient) then becomes

$$\mu\rho u\,\frac{\partial}{\partial x}\!\left(\frac{1}{\tau}\right) + \frac{\partial^2 \tau}{\partial u^2} = 0.$$

The original coordinate is recovered from

$$y = \int \frac{\mu}{\tau}\,du.$$
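The Blasius similarity solution referred to above reduces the laminar equation to the ordinary differential equation $f''' + \tfrac{1}{2} f f'' = 0$ with $f(0) = f'(0) = 0$ and $f'(\infty) = 1$, where $u/U = f'(\eta)$ and $\eta = y\sqrt{U/(\nu x)}$. A minimal numerical sketch using a standard shooting method follows; the integration span, tolerances and bracketing interval are arbitrary choices.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Blasius equation f''' + 0.5*f*f'' = 0 as a first-order system
# for (f, f', f''), with f(0) = f'(0) = 0 and f'(inf) = 1.
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def fp_at_infinity(fpp0, eta_max=10.0):
    """f'(eta_max) for a guessed wall curvature f''(0) = fpp0."""
    sol = solve_ivp(rhs, (0.0, eta_max), [0.0, 0.0, fpp0],
                    rtol=1e-9, atol=1e-9)
    return sol.y[1, -1]

# Shoot on f''(0) so that f' -> 1 far from the wall.
fpp0 = brentq(lambda s: fp_at_infinity(s) - 1.0, 0.1, 1.0)
print(f"f''(0) = {fpp0:.4f}")   # ~0.332, the coefficient appearing in tau_w
```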
Turbulent boundary layers The treatment of turbulent boundary layers is far more difficult due to the time-dependent variation of the flow properties. One of the most widely used techniques for tackling turbulent flows is to apply Reynolds decomposition. Here the instantaneous flow properties are decomposed into a mean and a fluctuating component, with the assumption that the mean of the fluctuating component is always zero. Applying this technique to the boundary layer equations gives the full turbulent boundary layer equations, not often given in the literature:

$$\frac{\partial \bar u}{\partial x} + \frac{\partial \bar v}{\partial y} = 0,$$

$$\bar u\frac{\partial \bar u}{\partial x} + \bar v\frac{\partial \bar u}{\partial y} = -\frac{1}{\rho}\frac{\partial \bar p}{\partial x} + \nu\left(\frac{\partial^2 \bar u}{\partial x^2} + \frac{\partial^2 \bar u}{\partial y^2}\right) - \frac{\partial}{\partial x}\overline{u'^2} - \frac{\partial}{\partial y}\overline{u'v'},$$

$$\bar u\frac{\partial \bar v}{\partial x} + \bar v\frac{\partial \bar v}{\partial y} = -\frac{1}{\rho}\frac{\partial \bar p}{\partial y} + \nu\left(\frac{\partial^2 \bar v}{\partial x^2} + \frac{\partial^2 \bar v}{\partial y^2}\right) - \frac{\partial}{\partial x}\overline{u'v'} - \frac{\partial}{\partial y}\overline{v'^2}.$$

Using a similar order-of-magnitude analysis, the above equations can be reduced to leading-order terms. By choosing length scales $\delta$ for changes in the transverse direction and $L$ for changes in the streamwise direction, with $\delta \ll L$, the x-momentum equation simplifies to:

$$\bar u\frac{\partial \bar u}{\partial x} + \bar v\frac{\partial \bar u}{\partial y} = -\frac{1}{\rho}\frac{\partial \bar p}{\partial x} - \frac{\partial}{\partial y}\overline{u'v'}.$$

This equation does not satisfy the no-slip condition at the wall. As Prandtl did for his boundary layer equations, a new, smaller length scale must be used to allow the viscous term to become leading order in the momentum equation. By choosing $\eta \ll \delta$ as the y-scale, the leading-order momentum equation for this "inner boundary layer" is given by:

$$0 = -\frac{1}{\rho}\frac{\partial \bar p}{\partial x} + \nu\frac{\partial^2 \bar u}{\partial y^2} - \frac{\partial}{\partial y}\overline{u'v'}.$$

In the limit of infinite Reynolds number, the pressure gradient term can be shown to have no effect on the inner region of the turbulent boundary layer. The new "inner length scale" $\eta$ is a viscous length scale, of order $\nu/u_*$, with $u_*$ being the velocity scale of the turbulent fluctuations, in this case a friction velocity. Unlike the laminar boundary layer equations, the presence of two regimes governed by different sets of flow scales (i.e. the inner and outer scaling) has made finding a universal similarity solution for the turbulent boundary layer difficult and controversial. To find a similarity solution that spans both regions of the flow, it is necessary to asymptotically match the solutions from both regions. Such analysis will yield either the so-called log-law or power-law (a sketch of the log-law profile follows at the end of this section). Similar approaches to the above analysis have also been applied for thermal boundary layers, using the energy equation in compressible flows. The additional term $\overline{u'v'}$ in the turbulent boundary layer equations is known as the Reynolds shear stress and is unknown a priori. The solution of the turbulent boundary layer equations therefore necessitates the use of a turbulence model, which aims to express the Reynolds shear stress in terms of known flow variables or derivatives. The lack of accuracy and generality of such models is a major obstacle in the successful prediction of turbulent flow properties in modern fluid dynamics. A constant-stress layer exists in the near-wall region. Due to the damping of the vertical velocity fluctuations near the wall, the Reynolds stress term becomes negligible there, and we find that a linear velocity profile exists. This is true only for the very near-wall region.
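The matched inner solution takes the familiar log-law form $u^+ = \kappa^{-1}\ln y^+ + B$ outside the viscous sublayer, where $u^+ = \bar u/u_*$ and $y^+ = y u_*/\nu$. The sketch below evaluates this classical profile; the constants $\kappa \approx 0.41$ and $B \approx 5.0$ are conventional empirical values, and the crossover at $y^+ = 5$ is a crude simplification of the buffer layer.

```python
import numpy as np

KAPPA, B = 0.41, 5.0   # von Karman constant and log-law intercept (empirical)

def u_plus(y_plus):
    """Mean velocity in wall units: linear sublayer, then the log-law.
    (The buffer layer between y+ ~ 5 and 30 is only crudely captured.)"""
    y_plus = np.asarray(y_plus, dtype=float)
    return np.where(y_plus < 5.0,
                    y_plus,                        # viscous sublayer: u+ = y+
                    np.log(y_plus) / KAPPA + B)    # log-law region

for yp in [1, 5, 30, 100, 1000]:
    print(f"y+ = {yp:5d}  ->  u+ = {float(u_plus(yp)):6.2f}")
```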
Heat and mass transfer In 1928, the French engineer André Lévêque observed that convective heat transfer in a flowing fluid is affected only by the velocity values very close to the surface. For flows of large Prandtl number, the temperature/mass transition from surface to freestream temperature takes place across a very thin region close to the surface. Therefore, the most important fluid velocities are those inside this very thin region, in which the change in velocity can be considered linear with normal distance from the surface. In this way, for $y \to 0$,

$$u(y) \approx \theta\, y,$$

where $\theta$ is the tangent of the Poiseuille parabola intersecting the wall. Although Lévêque's solution was specific to heat transfer into a Poiseuille flow, his insight helped lead other scientists to an exact solution of the thermal boundary-layer problem. Schuh observed that in a boundary layer, $u$ is again a linear function of $y$, but that in this case the wall tangent is a function of $x$. He expressed this with a modified version of Lévêque's profile,

$$u(x, y) \approx \theta(x)\, y.$$

This results in a very good approximation, even for low Prandtl numbers, so that only liquid metals, with Prandtl numbers much less than 1, cannot be treated this way. In 1962, Kestin and Persen published a paper describing solutions for heat transfer when the thermal boundary layer is contained entirely within the momentum layer and for various wall temperature distributions. For the problem of a flat plate with a temperature jump at $x = x_0$, they proposed a substitution that reduces the parabolic thermal boundary-layer equation to an ordinary differential equation. The solution to this equation, the temperature at any point in the fluid, can be expressed as an incomplete gamma function. Schlichting proposed an equivalent substitution that reduces the thermal boundary-layer equation to an ordinary differential equation whose solution is the same incomplete gamma function. Analytic solutions can be derived with the time-dependent self-similar ansatz for the incompressible boundary layer equations including heat conduction. As is well known from several textbooks, heat transfer tends to decrease as the boundary layer thickens. Recently it was observed, on a practical and large scale, that wind flowing through a photovoltaic generator tends to "trap" heat in the PV panels under a turbulent regime, due to the decrease in heat transfer. Despite frequently being assumed to be inherently turbulent, this accidental observation demonstrates that natural wind behaves in practice very close to an ideal fluid, at least in an observation resembling the expected behaviour over a flat plate, potentially reducing the difficulty of analysing this kind of phenomenon on a larger scale. Convective transfer constants from boundary layer analysis Paul Richard Heinrich Blasius derived an exact solution to the above laminar boundary layer equations. The thickness of the boundary layer is a function of the Reynolds number for laminar flow:

$$\delta \approx 5.0\,\frac{x}{\sqrt{Re_x}},$$

where $\delta$ is the thickness of the boundary layer (the region of flow where the velocity is less than 99% of the far-field velocity $u_\infty$), $x$ is position along the semi-infinite plate, and $Re_x$ is the Reynolds number given by $\rho u_\infty x / \mu$ ($\rho$ = density, $\mu$ = dynamic viscosity). The Blasius solution uses boundary conditions in a dimensionless form:

$$\frac{u - u_S}{u_\infty - u_S} = 0 \ \text{at}\ y = 0, \qquad \frac{u - u_S}{u_\infty - u_S} \to 1 \ \text{as}\ y \to \infty, \qquad v = 0 \ \text{at}\ y = 0.$$

Note that in many cases, the no-slip boundary condition holds that $u_S$, the fluid velocity at the surface of the plate, equals the velocity of the plate at all locations. If the plate is not moving, then $u_S = 0$. A much more complicated derivation is required if fluid slip is allowed. In fact, the Blasius solution for the laminar velocity profile in the boundary layer above a semi-infinite plate can easily be extended to describe thermal and concentration boundary layers for heat and mass transfer, respectively. Rather than the differential x-momentum balance (equation of motion), this uses a similarly derived energy balance and mass balance:

Energy: $$u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial y} = \alpha\,\frac{\partial^2 T}{\partial y^2}$$

Mass: $$u\frac{\partial c_A}{\partial x} + v\frac{\partial c_A}{\partial y} = D_{AB}\,\frac{\partial^2 c_A}{\partial y^2}$$

For the momentum balance, kinematic viscosity $\nu$ can be considered to be the momentum diffusivity. In the energy balance this is replaced by thermal diffusivity $\alpha = k/(\rho c_P)$, and by mass diffusivity $D_{AB}$ in the mass balance. In the thermal diffusivity of a substance, $k$ is its thermal conductivity, $\rho$ is its density and $c_P$ is its heat capacity. The subscript AB denotes diffusivity of species A diffusing into species B. Under the assumption that $\alpha = D_{AB} = \nu$, these equations become equivalent to the momentum balance.
Thus, for Prandtl number $Pr = \nu/\alpha = 1$ and Schmidt number $Sc = \nu/D_{AB} = 1$, the Blasius solution applies directly. Accordingly, this derivation uses a related form of the boundary conditions, replacing $u$ with $T$ or $c_A$ (absolute temperature or concentration of species A). The subscript S denotes a surface condition:

$$\frac{T - T_S}{T_\infty - T_S} = 0 \ \text{at}\ y = 0, \qquad \frac{T - T_S}{T_\infty - T_S} \to 1 \ \text{as}\ y \to \infty,$$

and likewise for $c_A$. Using the stream function, Blasius obtained the following solution for the shear stress at the surface of the plate:

$$\tau_0 = 0.332\,\frac{\mu u_\infty}{x}\,\sqrt{Re_x}.$$

And via the boundary conditions, it is known that the dimensionless temperature and concentration profiles coincide with the dimensionless velocity profile when $Pr = Sc = 1$. We are given the following relations for heat/mass flux out of the surface of the plate:

$$\frac{q}{A} = 0.332\,\frac{k\,(T_S - T_\infty)}{x}\,\sqrt{Re_x}, \qquad N_A = 0.332\,\frac{D_{AB}\,(c_{A,S} - c_{A,\infty})}{x}\,\sqrt{Re_x}.$$

So for $Pr = Sc = 1$,

$$\delta = \delta_T = \delta_c,$$

where $\delta_T$ and $\delta_c$ are the regions of flow where $T$ and $c_A$ are within 99% of their far-field values. Because the Prandtl number of a particular fluid is not often unity, the German engineer E. Pohlhausen, who worked with Ludwig Prandtl, attempted to empirically extend these equations to apply for $Pr \neq 1$. His results can be applied to $Sc$ as well. He found that for Prandtl number greater than 0.6, the thermal boundary layer thickness is approximately given by

$$\frac{\delta}{\delta_T} = Pr^{1/3} \qquad \text{and therefore} \qquad \delta_T = \delta\,Pr^{-1/3}.$$

From this solution, it is possible to characterize the convective heat/mass transfer constants based on the region of boundary layer flow. Fourier's law of conduction and Newton's law of cooling are combined with the flux term derived above and the boundary layer thickness:

$$h_x = 0.332\,\frac{k}{x}\,Re_x^{1/2}\,Pr^{1/3}.$$

This gives the local convective constant $h_x$ at one point on the semi-infinite plane. Integrating over the length $L$ of the plate gives an average:

$$\bar h = 0.664\,\frac{k}{L}\,Re_L^{1/2}\,Pr^{1/3}.$$

Following the derivation with mass transfer terms ($k_c$ = convective mass transfer constant, $D_{AB}$ = diffusivity of species A into species B, $Sc = \nu/D_{AB}$), the following solutions are obtained:

$$k_{c,x} = 0.332\,\frac{D_{AB}}{x}\,Re_x^{1/2}\,Sc^{1/3}, \qquad \bar k_c = 0.664\,\frac{D_{AB}}{L}\,Re_L^{1/2}\,Sc^{1/3}.$$

These solutions apply for laminar flow with a Prandtl/Schmidt number greater than 0.6 (a worked numerical example follows below). Naval architecture Many of the principles that apply to aircraft also apply to ships, submarines, and offshore platforms, with water as the primary fluid of concern rather than air. As water is not an ideal fluid, ships moving in water experience resistance. The fluid particles cling to the hull of the ship due to the adhesive force between water and the ship, creating a boundary layer in which the flow develops a small but steep velocity gradient, with the fluid in contact with the ship ideally having a relative velocity of 0 and the fluid at the border of the boundary layer moving at the free-stream speed, the relative speed of the fluid around the ship. While the front of the ship faces normal pressure forces due to the fluid surrounding it, the aft portion sees a lower acting component of pressure due to the boundary layer. This leads to higher resistance due to pressure, known as 'viscous pressure drag' or 'form drag'. For ships, unlike aircraft, one deals with incompressible flows, where the change in water density is negligible (a pressure rise close to 1000 kPa leads to a change of only 2–3 kg/m3). This field of fluid dynamics is called hydrodynamics. A ship engineer designs for hydrodynamics first, and for strength only later. The boundary layer development, breakdown, and separation become critical because the high viscosity of water produces high shear stresses. Boundary layer turbine This effect was exploited in the Tesla turbine, patented by Nikola Tesla in 1913. It is referred to as a bladeless turbine because it uses the boundary layer effect rather than a fluid impinging upon blades, as in a conventional turbine. Boundary layer turbines are also known as cohesion-type turbines, bladeless turbines, and Prandtl layer turbines (after Ludwig Prandtl).
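As a worked example of the laminar flat-plate correlations derived above, the following sketch computes the local and average convective heat transfer coefficients. The air properties are approximate room-temperature values, and the freestream velocity and plate length are arbitrary illustrative inputs.

```python
import numpy as np

# Laminar flat-plate correlations (valid for Pr > 0.6, laminar flow):
#   local:   Nu_x = 0.332 * Re_x**0.5 * Pr**(1/3)
#   average: Nu_L = 0.664 * Re_L**0.5 * Pr**(1/3)

# Approximate properties of air near room temperature:
nu = 1.5e-5    # kinematic viscosity, m^2/s
k = 0.026      # thermal conductivity, W/(m K)
Pr = 0.71      # Prandtl number

U = 2.0        # freestream velocity, m/s (arbitrary)
L = 0.5        # plate length, m (arbitrary)

Re_L = U * L / nu
assert Re_L < 5e5, "beyond ~5e5 the layer transitions to turbulence"

h_local_end = 0.332 * (k / L) * np.sqrt(Re_L) * Pr**(1 / 3)   # h at x = L
h_avg = 0.664 * (k / L) * np.sqrt(Re_L) * Pr**(1 / 3)         # plate average

print(f"Re_L = {Re_L:.3g}")
print(f"local h at x=L: {h_local_end:.2f} W/(m^2 K)")
print(f"average h:      {h_avg:.2f} W/(m^2 K)")  # 2x the local value at x = L
```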
Predicting transient boundary layer thickness in a cylinder using dimensional analysis By using the transient and viscous force equations for a cylindrical flow, you can predict the transient boundary layer thickness by finding the Womersley number ($N_w$).

Transient force $= \rho\,\omega\,U$

Viscous force $= \dfrac{\mu U}{\delta^2}$

Setting them equal to each other gives:

$$\rho\,\omega\,U = \frac{\mu U}{\delta^2}.$$

Solving for delta gives:

$$\delta = \sqrt{\frac{\mu}{\rho\,\omega}}.$$

In dimensionless form:

$$\frac{L}{\delta} = L\,\sqrt{\frac{\omega\rho}{\mu}} = N_w,$$

where $N_w$ = Womersley number; $\rho$ = density; $U$ = velocity; $\omega$ = angular frequency of the oscillatory flow; $\delta$ = length of transient boundary layer; $\mu$ = viscosity; $L$ = characteristic length. Predicting convective flow conditions at the boundary layer in a cylinder using dimensional analysis By using the convective and viscous force equations at the boundary layer for a cylindrical flow, you can predict the convective flow conditions at the boundary layer by finding the dimensionless Reynolds number ($Re$).

Convective force: $\dfrac{\rho U^2}{L}$

Viscous force: $\dfrac{\mu U}{\delta^2}$

Setting them equal to each other gives:

$$\frac{\rho U^2}{L} = \frac{\mu U}{\delta^2}.$$

Solving for delta gives:

$$\delta = \sqrt{\frac{\mu L}{\rho U}}.$$

In dimensionless form:

$$\frac{L}{\delta} = \sqrt{\frac{\rho U L}{\mu}} = \sqrt{Re},$$

where $Re$ = Reynolds number; $\rho$ = density; $U$ = velocity; $\delta$ = length of convective boundary layer; $\mu$ = viscosity; $L$ = characteristic length (a numerical sketch of both estimates follows at the end of this article). Boundary layer ingestion Boundary layer ingestion promises an increase in aircraft fuel efficiency: an aft-mounted propulsor ingests the slow fuselage boundary layer and re-energises the wake, reducing drag and improving propulsive efficiency. To operate in distorted airflow, the fan is heavier and its efficiency is reduced, and its integration is challenging. It is used in concepts like the Aurora D8 or the French research agency Onera's Nova, saving 5% in cruise by ingesting 40% of the fuselage boundary layer. Airbus presented the Nautilius concept at the ICAS congress in September 2018: to ingest all the fuselage boundary layer while minimizing the azimuthal flow distortion, the fuselage splits into two spindles with 13–18:1 bypass ratio fans. Propulsive efficiencies of up to 90% are possible, as with counter-rotating open rotors, but with smaller, lighter, less complex and quieter engines. It could lower fuel burn by over 10% compared to a usual underwing 15:1 bypass ratio engine.
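The two dimensional-analysis estimates above translate into one-line computations. The sketch below evaluates both for oscillatory flow in a small pipe; all inputs are arbitrary, blood-flow-like illustrative values, not data from this article.

```python
import numpy as np

# Illustrative inputs for oscillatory pipe flow (arbitrary values):
rho = 1000.0       # density, kg/m^3
mu = 1.0e-3        # dynamic viscosity, Pa s
omega = 2 * np.pi  # oscillation frequency, rad/s (1 Hz)
U = 0.2            # characteristic velocity, m/s
L = 0.01           # characteristic length (pipe radius), m

# Transient balance: rho*omega*U = mu*U/delta^2 -> delta = sqrt(mu/(rho*omega))
delta_transient = np.sqrt(mu / (rho * omega))
womersley = L * np.sqrt(omega * rho / mu)   # N_w = L / delta_transient

# Convective balance: rho*U^2/L = mu*U/delta^2 -> delta = sqrt(mu*L/(rho*U))
delta_convective = np.sqrt(mu * L / (rho * U))
reynolds = rho * U * L / mu                 # Re = (L / delta_convective)^2

print(f"transient  delta = {delta_transient * 1000:.3f} mm, "
      f"Womersley = {womersley:.1f}")
print(f"convective delta = {delta_convective * 1000:.3f} mm, "
      f"Reynolds  = {reynolds:.0f}")
```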
Physical sciences
Fluid mechanics
Physics