Hilbert's axioms
Hilbert's axioms are a set of 20 assumptions proposed by David Hilbert in 1899 in his book Grundlagen der Geometrie (tr. The Foundations of Geometry) as the foundation for a modern treatment of Euclidean geometry. Other well-known modern axiomatizations of Euclidean geometry are those of Alfred Tarski and of George Birkhoff.
The axioms
Hilbert's axiom system is constructed with six primitive notions: three primitive terms:
point;
line;
plane;
and three primitive relations:
Betweenness, a ternary relation linking points;
Lies on (Containment), three binary relations, one linking points and straight lines, one linking points and planes, and one linking straight lines and planes;
Congruence, two binary relations, one linking line segments and one linking angles, each denoted by an infix ≅.
Line segments, angles, and triangles may each be defined in terms of points and straight lines, using the relations of betweenness and containment. All points, straight lines, and planes in the following axioms are distinct unless otherwise stated.
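Read formally, these primitives specify a signature: three sorts together with relations over them, and the axiom groups below are postulates over that signature. The following is a minimal illustrative sketch in Lean 4 of the signature and the first two incidence axioms; the structure and field names are hypothetical, not an existing library's API.

```lean
-- Sketch of Hilbert's primitive notions as an abstract structure.
-- Hypothetical names; illustrative only.
structure HilbertGeometry where
  Point : Type
  Line  : Type
  Plane : Type
  between : Point → Point → Point → Prop            -- ternary betweenness
  onLine  : Point → Line → Prop                     -- incidence: point/line
  onPlane : Point → Plane → Prop                    -- incidence: point/plane
  inPlane : Line → Plane → Prop                     -- incidence: line/plane
  segCong : Point → Point → Point → Point → Prop    -- segment congruence
  -- Axiom I.1: any two distinct points lie on a common line
  I1 : ∀ A B : Point, A ≠ B → ∃ a : Line, onLine A a ∧ onLine B a
  -- Axiom I.2: that line is unique
  I2 : ∀ A B : Point, ∀ a b : Line, A ≠ B →
       onLine A a → onLine B a → onLine A b → onLine B b → a = b
```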
I. Incidence
For every two points A and B there exists a line a that contains them both. We write AB = a or BA = a. Instead of "contains", we may also employ other forms of expression; for example, we may say "A lies upon a", "A is a point of a", "a goes through A and through B", "a joins A to B", etc. If A lies upon a and at the same time upon another line b, we make use also of the expression: "The lines a and b have the point A in common", etc.
For every two points there exists no more than one line that contains them both; consequently, if AB = a and AC = a, where B ≠ C, then also BC = a.
There exist at least two points on a line. There exist at least three points that do not lie on the same line.
For every three points A, B, C not situated on the same line there exists a plane α that contains all of them. For every plane there exists a point which lies on it. We write ABC = α. We employ also the expressions: "A, B, C lie in α"; "A, B, C are points of α", etc.
For every three points A, B, C which do not lie in the same line, there exists no more than one plane that contains them all.
If two points A, B of a line a lie in a plane α, then every point of a lies in α. In this case we say: "The line a lies in the plane α", etc.
If two planes α, β have a point A in common, then they have at least a second point B in common.
There exist at least four points not lying in a plane.
II. Order
If a point B lies between points A and C, B is also between C and A, and there exists a line containing the distinct points A, B, C.
If A and C are two points, then there exists at least one point B on the line AC such that C lies between A and B.
Of any three points situated on a line, there is no more than one which lies between the other two.
Pasch's Axiom: Let A, B, C be three points not lying in the same line and let a be a line lying in the plane ABC and not passing through any of the points A, B, C. Then, if the line a passes through a point of the segment AB, it will also pass through either a point of the segment BC or a point of the segment AC.
III. Congruence
If A, B are two points on a line a, and if A′ is a point upon the same or another line a′, then, upon a given side of A′ on the straight line a′, we can always find a point B′ so that the segment AB is congruent to the segment A′B′. We indicate this relation by writing AB ≅ A′B′. Every segment is congruent to itself; that is, we always have AB ≅ AB. We can state the above axiom briefly by saying that every segment can be laid off upon a given side of a given point of a given straight line in at least one way.
If a segment AB is congruent to the segment A′B′ and also to the segment A″B″, then the segment A′B′ is congruent to the segment A″B″; that is, if AB ≅ A′B′ and AB ≅ A″B″, then A′B′ ≅ A″B″.
Let AB and BC be two segments of a line a which have no points in common aside from the point B, and, furthermore, let A′B′ and B′C′ be two segments of the same or of another line a′ having, likewise, no point other than B′ in common. Then, if AB ≅ A′B′ and BC ≅ B′C′, we have AC ≅ A′C′.
Let an angle ∠(h, k) be given in the plane α and let a line a′ be given in a plane α′. Suppose also that, in the plane α′, a definite side of the straight line a′ be assigned. Denote by h′ a ray of the straight line a′ emanating from a point O′ of this line. Then in the plane α′ there is one and only one ray k′ such that the angle ∠(h, k), or ∠(k, h), is congruent to the angle ∠(h′, k′) and at the same time all interior points of the angle ∠(h′, k′) lie upon the given side of a′. We express this relation by means of the notation ∠(h, k) ≅ ∠(h′, k′).
If the angle ∠(h, k) is congruent to the angle ∠(h′, k′) and to the angle ∠(h″, k″), then the angle ∠(h′, k′) is congruent to the angle ∠(h″, k″); that is to say, if ∠(h, k) ≅ ∠(h′, k′) and ∠(h, k) ≅ ∠(h″, k″), then ∠(h′, k′) ≅ ∠(h″, k″).
If, in the two triangles ABC and A′B′C′, the congruences AB ≅ A′B′, AC ≅ A′C′, ∠BAC ≅ ∠B′A′C′ hold, then the congruence ∠ABC ≅ ∠A′B′C′ holds (and, by a change of notation, it follows that ∠ACB ≅ ∠A′C′B′ also holds).
IV. Parallels
Playfair's axiom: Let a be any line and A a point not on it. Then there is at most one line in the plane, determined by a and A, that passes through A and does not intersect a.
V. Continuity
Axiom of Archimedes: If AB and CD are any segments then there exists a number n such that n segments CD constructed contiguously from A, along the ray from A through B, will pass beyond the point B.
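In modern symbols the Archimedean axiom can be sketched as follows; writing |AB| for the length of segment AB is an assumption of this rendering, since a length function is not one of Hilbert's primitives:

```latex
% Sketch of the axiom of Archimedes in modern notation.
\forall\, AB,\ CD:\qquad \exists\, n \in \mathbb{N}\ \text{such that}\ n \cdot |CD| > |AB|
```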
Axiom of line completeness: An extension of a set of points on a line, with its order and congruence relations, that would preserve the relations existing among the original elements as well as the fundamental properties of line order and congruence that follow from Axioms I-III and from V.1 is impossible.
Hilbert's discarded axiom
Hilbert (1899) included a 21st axiom that read as follows:
II.4. Any four points A, B, C, D of a line can always be labeled so that B shall lie between A and C and also between A and D, and, furthermore, that C shall lie between A and D and also between B and D.
This statement is also known as Pasch's theorem.
E. H. Moore and R. L. Moore independently proved that this axiom is redundant, and the former published this result in an article appearing in the Transactions of the American Mathematical Society in 1902.
Before this, Pasch's axiom, now listed as II.4, was numbered II.5.
Editions and translations of Grundlagen der Geometrie
The original monograph, based on his own lectures, was organized and written by Hilbert for a memorial address given in 1899. This was quickly followed by a French translation, in which Hilbert added V.2, the Completeness Axiom. An English translation, authorized by Hilbert, was made by E.J. Townsend and copyrighted in 1902. This translation incorporated the changes made in the French translation and so is considered to be a translation of the 2nd edition. Hilbert continued to make changes in the text and several editions appeared in German. The 7th edition was the last to appear in Hilbert's lifetime. In the Preface of this edition Hilbert wrote:
"The present Seventh Edition of my book Foundations of Geometry brings considerable improvements and additions to the previous edition, partly from my subsequent lectures on this subject and partly from improvements made in the meantime by other writers. The main text of the book has been revised accordingly."
New editions followed the 7th, but the main text was essentially not revised; the modifications in these editions occur in the appendices and in supplements. The changes in the text were large when compared to the original, and a new English translation was commissioned by Open Court Publishers, who had published the Townsend translation. The 2nd English Edition, translated by Leo Unger from the 10th German edition, appeared in 1971. This translation incorporates several revisions and enlargements of the later German editions by Paul Bernays.
The Unger translation differs from the Townsend translation with respect to the axioms in the following ways:
Old axiom II.4 is renamed as Theorem 5 and moved.
Old axiom II.5 (Pasch's Axiom) is renumbered as II.4.
V.2, the Axiom of Line Completeness, replaced:
Axiom of completeness. To a system of points, straight lines, and planes, it is impossible to add other elements in such a manner that the system thus generalized shall form a new geometry obeying all of the five groups of axioms. In other words, the elements of geometry form a system which is not susceptible of extension, if we regard the five groups of axioms as valid.
The old axiom V.2 is now Theorem 32.
The last two modifications are due to P. Bernays.
Other changes of note are:
The term straight line used by Townsend has been replaced by line throughout.
The Axioms of Incidence were called Axioms of Connection by Townsend.
Application
These axioms axiomatize Euclidean solid geometry. Removing five axioms mentioning "plane" in an essential way, namely I.4–8, and modifying III.4 and IV.1 to omit mention of planes, yields an axiomatization of Euclidean plane geometry.
Hilbert's axioms, unlike Tarski's axioms, do not constitute a first-order theory because the axioms V.1–2 cannot be expressed in first-order logic.
The value of Hilbert's Grundlagen was more methodological than substantive or pedagogical. Other major contributions to the axiomatics of geometry were those of Moritz Pasch, Mario Pieri, Oswald Veblen, Edward Vermilye Huntington, Gilbert Robinson, and Henry George Forder. The value of the Grundlagen is its pioneering approach to metamathematical questions, including the use of models to prove axioms independent; and the need to prove the consistency and completeness of an axiom system.
Mathematics in the twentieth century evolved into a network of axiomatic formal systems. This was, in considerable part, influenced by the example Hilbert set in the Grundlagen. A 2003 effort (Meikle and Fleuriot) to formalize the Grundlagen with a computer, though, found that some of Hilbert's proofs appear to rely on diagrams and geometric intuition, and as such revealed some potential ambiguities and omissions in his definitions.
Spotted skunk
The genus Spilogale includes all skunks commonly known as spotted skunks. Currently, there are four accepted extant species: S. gracilis, S. putorius, S. pygmaea, and S. angustifrons. New research, however, proposes that there may be up to seven.
Extant species
In the past, anywhere between two and fourteen species of Spilogale have been recognized, but today most authorities accept a four-species model (S. gracilis, S. putorius, S. pygmaea, and S. angustifrons). A 2021 DNA analysis of 203 specimens from across their known range suggests that there may be as many as seven distinct species in the genus, some cryptic.
Description
Mammalogists consider S. gracilis and S. putorius different species because of differences in reproductive patterns, reproductive morphology, and chromosomal variation. However, interbreeding has never been disproved. The name Spilogale comes from the Greek word spilo, which means "spotted", and gale, which means "weasel". Putorius is the Latin word for "fetid odor". Gracilis is the Latin word for "slender". Several other names attributed to S. putorius include: civet cat, polecat, hydrophobian skunk, phoby skunk, phoby cat, tree skunk, weasel skunk, black marten, little spotted skunk, four-lined skunk, four-striped skunk, and sachet kitty.
Distribution and habitat
Range
The western spotted skunk (Spilogale gracilis) can be found west of the Continental Divide from southern British Columbia to Central America, as well as in some parts of Montana, North Dakota, Wyoming, Colorado, and western Texas. Eastward, its range borders that of the eastern spotted skunk (Spilogale putorius). Spilogale gracilis generally occupies lowland areas but they are sometimes found at higher elevations (2600 m). Although the western spotted skunk is now recognized as S. gracilis, previously, skunks west of the Cascade Crest in British Columbia, Washington, and Oregon were recognized as a distinct subspecies (S. p. latifrons).
Spilogale putorius is found throughout the central and southeastern United States, as well as northeastern Mexico. In Mississippi, S. putorius is found throughout the whole state, except for the northwestern corner by the Mississippi River. In the Great Plains, there has been an observed increase in the geographical range of these skunks, and the cause of this is thought to be a result of an increase in agriculture. This would lead to an increase in mice, which happen to be one of the primary prey for S. putorius.
Habitat
Spilogale usually reside in covered thickets, woods, riparian hardwood, shrubbery, and areas near streams. S. putorius, in particular, favors rocky and wooded habitats with copious amounts of vegetation. These skunks prefer to dwell in a den or in natural cavities such as stumps or hollow logs. Spotted skunks have been found to adjust well to a wide array of dry prairie ecosystems in shallow dens, and their occurrence decreases with elevation, particularly in regions such as the Northern and Southern Appalachians of the United States. Although they have very effective digging claws, they prefer to occupy dens made by gophers, wood rats, pocket gophers, striped skunks, or armadillos, and they choose dens that are completely dark inside. Spilogale are very social creatures and frequently share dens with up to seven other skunks, although maternal dens are not open to non-maternal skunks.
Biology
Reproduction
Around March, the males' testes begin to enlarge, reaching their greatest mass by late September; the increase in size is accompanied by greater testosterone production. Similarly, females begin to experience an increase in ovarian activity in March, and Spilogale begin to mate during March as well. Implantation occurs approximately 14-16 days after mating. For the western spotted skunk, most copulations occur in late September and the beginning of October; after copulation, the zygotes undergo normal cleavage but arrest at the blastocyst stage, where they can remain in the uterus for roughly 6.5 months. After implantation, gestation lasts 30 days, and the offspring are born between April and June. Although litter sizes vary considerably, the average litter size is about 5.5, with a sex ratio of about 65% males to 35% females.
Growth
The newborn skunks are covered with fine hair that shows the adult color pattern. The eyes open between 30 and 32 days. The kits start solid food at about 42 days and are weaned at about two months. They are full grown and reach adult size at about four months. The males do not help in raising the young.
Defenses
Spotted skunks protect themselves by spraying a strong and unpleasant scent. Two glands on the sides of the anus release the odorous oil through nipples. When threatened, the skunk turns its body into a U-shape with the head and anus facing the attacker. Muscles around the nipples of the scent gland aim them, giving the skunk great accuracy on targets up to 15 feet away. As a warning before spraying, the skunk stamps its front feet, raises its tail, and hisses. They may warn with a unique "hand stand"—the back vertical and the tail waving.
The liquid is secreted via paired anal subcutaneous glands that are connected to the body through striated muscles. The odorous solution is emitted as an atomized spray that is nearly invisible or as streams of larger droplets.
Skunks store about 1 tablespoon (15 g) of the odorous oil and can quickly spray five times in a row. It takes about one week to replenish the oil.
The secretion of the spotted skunks differs from that of the striped skunks. The two major thiols of the striped skunks, (E)-2-butene-1-thiol and 3-methyl-1-butanethiol are the major components in the secretion of the spotted skunks along with a third thiol, 2-phenylethanethiol.
Thioacetate derivatives of the three thiols are present in the spray of the striped skunks but not the spotted skunks. They are not as odoriferous as the thiols. Water hydrolysis converts them to the more potent thiols. This chemical conversion may be why pets that have been sprayed by skunks will have a faint "skunky" odor on damp evenings.
Deodorizing
Changing the thiols into compounds that have little or no odor can be done by oxidizing the thiols to sulfonic acids. Hydrogen peroxide and baking soda (sodium bicarbonate) are mild enough to be used on people and animals, but may change hair color.
Stronger oxidizing agents, like sodium hypochlorite solutions—liquid laundry bleach—are cheap and effective for deodorizing other materials.
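As a sketch of the underlying chemistry, the overall conversion can be written as below; this is a simplified assumed stoichiometry for peroxide oxidation, not a reaction mechanism, with R standing for the organic remainder of the thiol:

```latex
% Simplified overall stoichiometry (assumed; not a mechanism).
\mathrm{R{-}SH} \;+\; 3\,\mathrm{H_2O_2} \;\longrightarrow\; \mathrm{R{-}SO_3H} \;+\; 3\,\mathrm{H_2O}
```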
Diet
Skunks are omnivorous and will eat small rodents, fruits, berries, birds, eggs, insects and larvae, lizards, snakes, and carrion. Their diet may vary with the seasons as food availability fluctuates. They have a keen sense of smell that helps them find grubs and other food. Their hearing is acute but they have poor vision.
Life expectancy
Spotted skunks can live 10 years in captivity, but in the wild, about half the skunks die after 1 or 2 years.
Conservation
The eastern spotted skunk, S. putorius, is a conservation concern. Management is hampered by an overall lack of information from surveying. During the 1940s, Spilogale populations seemingly crashed and the species is currently listed by various state agencies as endangered, threatened, or ‘of concern’ across much of its range.
The species S. pygmaea is endemic to the Mexican Pacific coast and is currently threatened. The tropical dry forest of western Mexico, where these skunks live, is a highly threatened ecosystem that has been placed on conservation priority. S. pygmaea is also the smallest carnivore native to Mexico as well as one of the smallest worldwide.
Bone fracture
A bone fracture (abbreviated FRX, Fx, or #) is a medical condition in which there is a partial or complete break in the continuity of any bone in the body. In more severe cases, the bone may be broken into several fragments, known as a comminuted fracture. An open fracture (or compound fracture) is a bone fracture where the broken bone breaks through the skin. A bone fracture may be the result of high force impact or stress, or a minimal trauma injury as a result of certain medical conditions that weaken the bones, such as osteoporosis, osteopenia, bone cancer, or osteogenesis imperfecta, where the fracture is then properly termed a pathologic fracture. Most bone fractures require urgent medical attention to prevent further injury.
Signs and symptoms
Although bone tissue contains no pain receptors, a bone fracture is painful for several reasons:
Breaking in the continuity of the periosteum, with or without similar discontinuity in endosteum, as both contain multiple pain receptors.
Edema and hematoma of nearby soft tissues caused by ruptured bone marrow evoke pressure pain.
Involuntary muscle spasms trying to hold bone fragments in place.
Damage to adjacent structures such as nerves, muscles or blood vessels, spinal cord, and nerve roots (for spine fractures), or cranial contents (for skull fractures) may cause other specific signs and symptoms.
Complications
Some fractures may lead to serious complications including a condition known as compartment syndrome. If not treated, eventually, compartment syndrome may require amputation of the affected limb. Other complications may include non-union, where the fractured bone fails to heal, or malunion, where the fractured bone heals in a deformed manner. One form of malunion is the malrotation of a bone, which is especially common after femoral and tibial fractures.
Complications of fractures may be classified into three broad groups, depending upon their time of occurrence. These are as follows:
Immediate complications – occurring at the time of the fracture.
Early complications – occurring in the initial few days after the fracture.
Late complications – occurring a long time after the fracture.
Pathophysiology
The natural process of healing a fracture starts when the injured bone and surrounding tissues bleed, forming a fracture hematoma. The blood coagulates to form a blood clot situated between the broken fragments. Within a few days, blood vessels grow into the jelly-like matrix of the blood clot. The new blood vessels bring phagocytes to the area, which gradually removes the non-viable material. The blood vessels also bring fibroblasts in the walls of the vessels and these multiply and produce collagen fibres. In this way, the blood clot is replaced by a matrix of collagen. Collagen's rubbery consistency allows bone fragments to move only a small amount unless severe or persistent force is applied.
At this stage, some of the fibroblasts begin to lay down bone matrix in the form of collagen monomers. These monomers spontaneously assemble to form the bone matrix, within which insoluble bone crystals of calcium hydroxyapatite are deposited. This mineralization of the collagen matrix stiffens it and transforms it into bone. In fact, bone is a mineralized collagen matrix; if the mineral is dissolved out of bone, it becomes rubbery. Healing bone callus on average is sufficiently mineralized to show up on X-ray within 6 weeks in adults and less in children. This initial "woven" bone does not have the strong mechanical properties of mature bone. By a process of remodelling, the woven bone is replaced by mature "lamellar" bone. The whole process may take up to 18 months, but in adults, the strength of the healing bone is usually 80% of normal by 3 months after the injury.
Several factors may help or hinder the bone healing process. For example, tobacco smoking hinders the process of bone healing, and adequate nutrition (including calcium intake) will help the bone healing process. Weight-bearing stress on bone, after the bone has healed sufficiently to bear the weight, also builds bone strength.
Although there are theoretical concerns about NSAIDs slowing the rate of healing, there is not enough evidence to warrant withholding this type of analgesic in simple fractures.
Effects of smoking
Smokers generally have lower bone density than non-smokers, so they have a much higher risk of fractures. There is also evidence that smoking delays bone healing.
Diagnosis
A bone fracture may be diagnosed based on the history given and the physical examination performed. Radiographic imaging often is performed to confirm the diagnosis. Under certain circumstances, radiographic examination of the nearby joints is indicated in order to exclude dislocations and fracture-dislocations. In situations where projectional radiography alone is insufficient, Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) may be indicated.
Classification
In orthopedic medicine, fractures are classified in various ways. Historically they are named after the physician who first described the fracture condition; however, more systematic classifications exist as well.
They may be divided into stable versus unstable depending on the likelihood that they may shift further.
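Because the axes described in the following subsections (mechanism, soft-tissue involvement, displacement, pattern, location) are largely independent, a single injury is described by choosing a value along each of them. The sketch below illustrates this combination in Python; all names are hypothetical and do not correspond to a clinical coding standard such as OTA/AO.

```python
# Illustrative sketch: orthogonal classification axes combined into
# one record. Hypothetical names, not a clinical coding system.
from dataclasses import dataclass
from enum import Enum

class Mechanism(Enum):
    TRAUMATIC = "traumatic"
    PATHOLOGIC = "pathologic"
    PERIPROSTHETIC = "periprosthetic"

class SoftTissue(Enum):
    CLOSED = "closed/simple"
    OPEN = "open/compound"

@dataclass
class FractureDescription:
    mechanism: Mechanism
    soft_tissue: SoftTissue
    displaced: bool
    pattern: str   # e.g. "transverse", "oblique", "spiral"
    location: str  # e.g. "distal radius"

# Example: a displaced, closed fracture of the distal radius
example = FractureDescription(
    mechanism=Mechanism.TRAUMATIC,
    soft_tissue=SoftTissue.CLOSED,
    displaced=True,
    pattern="transverse",
    location="distal radius",
)
```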
Mechanism
Traumatic fracture – a fracture due to sustained trauma. e.g., fractures caused by a fall, road traffic accident, fight, etc.
Pathologic fracture – a fracture through a bone that has been made weak by some underlying disease is called pathological fracture. e.g., a fracture through a bone weakened by metastasis. Osteoporosis is the most common cause of pathological fracture.
Periprosthetic fracture – a fracture at the point of mechanical weakness at the end of an implant.
Soft-tissue involvement
Closed/simple fractures are those in which the overlying skin is intact
Open/compound fractures involve wounds that communicate with the fracture, or where fracture hematoma is exposed, and may thus expose bone to contamination. Open injuries carry a higher risk of infection. Reports indicate an incidence of infection after internal fixation of closed fracture of 1-2%, rising to 30% in open fractures.
Clean fracture
Contaminated fracture
Displacement
Non-displaced
Displaced
Translated, or ad latus, with sideways displacement.
Angulated
Rotated
Shortened, a reduction in overall bone length when displaced fracture fragments overlap
Fracture pattern
Linear fracture – a fracture that is parallel to the bone's long axis
Transverse fracture – a fracture that is at a right angle to the bone's long axis
Oblique fracture – a fracture that is diagonal to a bone's long axis (more than 30°)
Spiral fracture – a fracture where at least one part of the bone has been twisted
Compression fracture/wedge fracture – usually occurs in the vertebrae, for example when the front portion of a vertebra in the spine collapses due to osteoporosis (a medical condition which causes bones to become brittle and susceptible to fracture, with or without trauma)
Impacted fracture – a fracture caused when bone fragments are driven into each other
Avulsion fracture – a fracture where a fragment of bone is separated from the main mass
Fragments
Incomplete fracture – a fracture in which the bone fragments are still partially joined; in such cases, there is a crack in the osseous tissue that does not completely traverse the width of the bone.
Complete fracture – a fracture in which bone fragments separate completely.
Comminuted fracture – a fracture in which the bone has broken into several pieces.
Anatomical location
An anatomical classification may begin with specifying the involved body part, such as the head or arm, followed by more specific localization. Fractures that have additional definition criteria than merely localization often may be classified as subtypes of fractures, such as a Holstein-Lewis fracture being a subtype of a humerus fracture. Most typical examples in an orthopaedic classification given in the previous section cannot be classified appropriately into any specific part of an anatomical classification, however, as they may apply to multiple anatomical fracture sites.
Skull fracture
Basilar skull fracture
Blowout fracture – a fracture of the walls or floor of the orbit
Mandibular fracture
Nasal fracture
Le Fort fracture of skull – facial fractures involving the maxillary bone and surrounding structures in a usually bilateral and either horizontal, pyramidal, or transverse way.
Spinal fracture
Cervical fracture
Fracture of C1, including Jefferson fracture
Fracture of C2, including Hangman's fracture
Flexion teardrop fracture – a fracture of the anteroinferior aspect of a cervical vertebral body
Clay-shoveler fracture – fracture through the spinous process of a vertebra occurring at any of the lower cervical or upper thoracic vertebrae
Burst fracture – in which a vertebra breaks from a high-energy axial load
Compression fracture – a collapse of a vertebra, often in the form of wedge fractures due to larger compression anteriorly
Chance fracture – compression injury to the anterior portion of a vertebral body with concomitant distraction injury to posterior elements
Holdsworth fracture – an unstable fracture dislocation of the thoraco lumbar junction of the spine
Rib fracture
Sternal fracture
Shoulder fracture
Clavicle fracture
Scapular fracture
Arm fracture
Humerus fracture (fracture of upper arm)
Supracondylar fracture
Holstein-Lewis fracture – a fracture of the distal third of the humerus resulting in entrapment of the radial nerve
Forearm fracture
Ulnar fracture
Monteggia fracture – a fracture of the proximal third of the ulna with the dislocation of the head of the radius
Hume fracture – a fracture of the olecranon with an associated anterior dislocation of the radial head
Radius fracture
Essex-Lopresti fracture – a fracture of the radial head with concomitant dislocation of the distal radio-ulnar joint with disruption of the interosseous membrane
Distal radius fracture
Galeazzi fracture – a fracture of the radius with dislocation of the distal radioulnar joint
Colles' fracture – a distal fracture of the radius with dorsal (posterior) displacement of the wrist and hand
Smith's fracture – a distal fracture of the radius with volar (ventral) displacement of the wrist and hand
Barton's fracture – an intra-articular fracture of the distal radius with dislocation of the radiocarpal joint
Hand fracture
Scaphoid fracture
Rolando fracture – a comminuted intra-articular fracture through the base of the first metacarpal bone
Bennett's fracture – a fracture of the base of the first metacarpal bone which extends into the carpometacarpal (CMC) joint
Boxer's fracture – a fracture at the neck of a metacarpal
Broken finger – a fracture of the phalanges of the hand
Pelvic fracture
Fracture of the hip bone
Duverney fracture – an isolated pelvic fracture involving only the iliac wing
Femoral fracture
Hip fracture (anatomically a fracture of the femur bone and not the hip bone)
Patella fracture
Crus fracture
Tibia fracture
Pilon fracture
Tibial plateau fracture
Bumper fracture – a fracture of the lateral tibial plateau caused by a forced valgus applied to the knee
Segond fracture – an avulsion fracture of the lateral tibial condyle
Gosselin fracture – a fracture of the tibial plafond into anterior and posterior fragments
Toddler's fracture – an undisplaced and spiral fracture of the distal third to distal half of the tibia
Fibular fracture
Maisonneuve fracture – a spiral fracture of the proximal third of the fibula associated with a tear of the distal tibiofibular syndesmosis and the interosseous membrane
Le Fort fracture of ankle – a vertical fracture of the antero-medial part of the distal fibula with avulsion of the anterior tibiofibular ligament
Bosworth fracture – a fracture with an associated fixed posterior dislocation of the distal fibular fragment that becomes trapped behind the posterior tibial tubercle; the injury is caused by severe external rotation of the ankle
Combined tibia and fibula fracture
Trimalleolar fracture – involving the lateral malleolus, medial malleolus, and the distal posterior aspect of the tibia
Bimalleolar fracture – involving the lateral malleolus and the medial malleolus
Pott's fracture
Foot fracture
Lisfranc fracture – in which one or all of the metatarsals are displaced from the tarsus
Jones fracture – a fracture of the proximal end of the fifth metatarsal
March fracture – a fracture of the distal third of one of the metatarsals occurring because of recurrent stress
Cuneiform fracture – a fracture of one of the three cuneiform bones typically due to direct blow, axial load, or avulsion
Calcaneal fracture – a fracture of the calcaneus (heel bone)
Broken toe – a fracture of the pedal phalanges
OTA/AO classification
The Orthopaedic Trauma Association Committee for Coding and Classification published its classification system in 1996, adopting a similar system to the 1987 AO Foundation system. In 2007, they extended their system, unifying the two systems regarding wrist, hand, foot, and ankle fractures.
Classifications named after people
A number of classifications are named after the person (eponymous) who developed it.
"Denis classification" for spinal fractures
"Frykman classification" for forearm fractures (fractures of radius and ulna)
"Gustilo open fracture classification"
"Letournel and Judet Classification" for Acetabular fractures
"Neer classification" for humerus fractures
Seinsheimer classification, Evans-Jensen classification, Pipkin classification, and Garden classification for hip fractures
Prevention
Both high- and low-force trauma can cause bone fracture injuries. Preventive efforts to reduce motor vehicle crashes, the most common cause of high-force trauma, include reducing distractions and impairment while driving. Driving under the influence and texting or calling while driving each lead to an approximate 6-fold increase in crashes. Wearing a seatbelt can also reduce the likelihood of injury in a collision. 30 km/h or 20 mph speed limits (as opposed to the more common intracity 50 km/h / 30 mph) also drastically reduce the risk of accident, serious injury and even death in crashes between motor vehicles and humans. Vision Zero aims to reduce traffic deaths to zero through better traffic design and other measures, and to drastically reduce traffic injuries, which would prevent many bone fractures.
A common cause of low-force trauma is an at-home fall. When considering preventive efforts, the National Institutes of Health (NIH) examines ways to reduce the likelihood of falling, the force of the fall, and bone fragility. To prevent at-home falls, it suggests keeping cords out of high-traffic areas where someone could trip, installing handrails and keeping stairways well-lit, and installing an assistive bar near the bathtub for support. To reduce the impact of a fall, the NIH recommends trying to fall straight down on the buttocks or onto the hands.
Some sports have a relatively high risk of bone fractures as a common sports injury. Preventive measures depend to some extent on the specific sport, but learning proper technique, wearing protective gear and having a realistic estimation of one's own capabilities and limitations can all help reduce the risk of bone fracture. In contact sports rules have been put in place to protect athlete health, such as the prohibition of unnecessary roughness in American football.
Taking calcium and vitamin D supplements can help strengthen bones. Vitamin D supplementation combined with additional calcium marginally reduces the risk of hip fractures and other types of fracture in older adults; however, vitamin D supplementation alone does not reduce the risk of fractures. Vibration therapy may also help strengthen bones and reduce the risk of fracture.
Treatment
Treatment of bone fractures are broadly classified as surgical or conservative, the latter basically referring to any non-surgical procedure, such as pain management, immobilization or other non-surgical stabilization. A similar classification is open versus closed treatment, in which open treatment refers to any treatment in which the fracture site is opened surgically, regardless of whether the fracture is an open or closed fracture.
Pain management
In arm fractures in children, ibuprofen has been found to be as effective as a combination of paracetamol and codeine. In the emergency medical services (EMS) setting, administering 1 mg/kg of intravenous ketamine may be appropriate to achieve a dissociated state.
Immobilization
Since bone healing is a natural process that will most often occur on its own, fracture treatment aims to ensure the best possible function of the injured part after healing. Bone fractures typically are treated by restoring the fractured pieces of bone to their natural positions (if necessary), and maintaining those positions while the bone heals. Often, aligning the bone, called reduction, in a good position and verifying the improved alignment with an X-ray is all that is needed. This process is extremely painful without anaesthesia, about as painful as breaking the bone itself. To this end, a fractured limb usually is immobilized with a plaster or fibreglass cast or splint that holds the bones in position and immobilizes the joints above and below the fracture. When the initial post-fracture oedema or swelling goes down, the fracture may be placed in a removable brace or orthosis. If being treated with surgery, surgical nails, screws, plates, and wires are used to hold the fractured bone together more directly. Alternatively, fractured bones may be treated by the Ilizarov method, which is a form of external fixation.
Occasionally smaller bones, such as phalanges of the toes and fingers, may be treated without the cast, by buddy wrapping them, which serves a similar function to making a cast. A device called a Suzuki frame may be used in cases of deep, complex intra-articular digit fractures. By allowing only limited movement, immobilization helps preserve anatomical alignment while enabling callus formation, toward the target of achieving union.
Splinting results in the same outcome as casting in children who have a distal radius fracture with little shifting.
Surgery
Surgical methods of treating fractures have their own risks and benefits, but usually, surgery is performed only if conservative treatment has failed, is very likely to fail, or is likely to result in a poor functional outcome. With some fractures such as hip fractures (usually caused by osteoporosis), surgery is offered routinely because non-operative treatment results in prolonged immobilisation, which commonly results in complications including chest infections, pressure sores, deconditioning, deep vein thrombosis (DVT), and pulmonary embolism, which are more dangerous than surgery. When a joint surface is damaged by a fracture, surgery is also commonly recommended to make an accurate anatomical reduction and restore the smoothness of the joint.
Infection is especially dangerous in bones, due to the recrudescent nature of bone infections. Bone tissue is predominantly extracellular matrix, rather than living cells, and the few blood vessels needed to support this low metabolism are only able to bring a limited number of immune cells to an injury to fight infection. For this reason, open fractures and osteotomies call for very careful antiseptic procedures and prophylactic use of antibiotics.
Occasionally, bone grafting is used to treat a fracture.
Sometimes bones are reinforced with metal. These implants must be designed and installed with care. Stress shielding occurs when plates or screws carry too large a portion of the bone's load, causing atrophy. This problem is reduced, but not eliminated, by the use of low-modulus materials, including titanium and its alloys. The heat generated by the friction of installing hardware can accumulate easily and damage bone tissue, reducing the strength of the connections. If dissimilar metals are installed in contact with one another (e.g., a titanium plate with cobalt-chromium alloy or stainless steel screws), galvanic corrosion will result. The metal ions produced can damage the bone locally and may cause systemic effects as well.
Bone stimulation
Bone stimulation with either electromagnetic or ultrasound waves may be suggested as an alternative to surgery to reduce the healing time for non-union fractures. The proposed mechanism of action is by stimulating osteoblasts and other proteins that form bones using these modalities. The evidence supporting the use of ultrasound and shockwave therapy for improving unions is very weak and it is likely that these approaches do not make a clinically significant difference for a delayed union or non-union.
Physical therapy
Physical therapy exercises (either home-based or physiotherapist-led) to improve functional mobility and strength, gait training for hip fractures, and other physical exercise are also often suggested to help recover physical capacities after a fracture has healed.
Children
In children, whose bones are still developing, there are risks of either a growth plate injury or a greenstick fracture.
A greenstick fracture occurs due to mechanical failure on the tension side. That is, since the bone is not as brittle as it would be in an adult, it does not completely fracture, but rather exhibits bowing without complete disruption of the bone's cortex on the surface opposite the applied force.
Growth plate injuries, as in Salter-Harris fractures, require careful treatment and accurate reduction to make sure that the bone continues to grow normally.
Plastic deformation of the bone, in which the bone permanently bends, but does not break, also is possible in children. These injuries may require an osteotomy (bone cut) to realign the bone if it is fixed and cannot be realigned by closed methods.
Certain fractures mainly occur in children, including fracture of the clavicle and supracondylar fracture of the humerus.
Sagittarius A*
Sagittarius A*, abbreviated as Sgr A*, is the supermassive black hole at the Galactic Center of the Milky Way. Viewed from Earth, it is located near the border of the constellations Sagittarius and Scorpius, about 5.6° south of the ecliptic, visually close to the Butterfly Cluster (M6) and Lambda Scorpii.
The object is a bright and very compact astronomical radio source. The name Sagittarius A* distinguishes the compact source from the larger (and much brighter) Sagittarius A (Sgr A) region in which it is embedded. Sgr A* was discovered in 1974 by Bruce Balick and Robert L. Brown, and the asterisk was assigned in 1982 by Brown, who understood that the strongest radio emission from the center of the galaxy appeared to be due to a compact non-thermal radio object.
The observations of several stars orbiting Sagittarius A*, particularly star S2, have been used to determine the mass and upper limits on the radius of the object. Based on mass and increasingly precise radius limits, astronomers have concluded that Sagittarius A* must be the central supermassive black hole of the Milky Way galaxy. The current best estimate of its mass is 4.297 million solar masses.
Reinhard Genzel and Andrea Ghez were awarded the 2020 Nobel Prize in Physics for their discovery that Sagittarius A* is a supermassive compact object, for which a black hole was the only plausible explanation at the time.
In May 2022, astronomers released the first image of the accretion disk around the horizon of Sagittarius A*, confirming it to be a black hole, using the Event Horizon Telescope, a world-wide network of radio observatories. This is the second confirmed image of a black hole, after Messier 87's supermassive black hole in 2019. The black hole itself is not seen; as light is incapable of escaping the immense gravitational force of a black hole, only nearby objects whose behavior is influenced by the black hole can be observed. The observed radio and infrared energy emanates from gas and dust heated to millions of degrees while falling into the black hole.
Observation and description
On May 12, 2022, the first image of Sagittarius A* was released by the Event Horizon Telescope Collaboration. The image, which is based on radio interferometer data taken in 2017, confirms that the object contains a black hole. This is the second image of a black hole. This image took five years of calculations to process. The data was collected by eight radio observatories at six geographical sites. Radio images are produced from data by aperture synthesis, usually from night-long observations of stable sources. The radio emission from Sgr A* varies on the order of minutes, complicating the analysis.
Their result gives an overall angular size for the source of . At a distance of , this yields a diameter of . For comparison, Earth is about 150 million kilometres (1 AU) from the Sun, and Mercury is about 46 million kilometres from the Sun at perihelion. The proper motion of Sgr A* is approximately −2.70 mas per year for the right ascension and −5.6 mas per year for the declination.
The telescope's measurements of these black holes tested Einstein's theory of general relativity more rigorously than had previously been done, and the results matched the theory's predictions.
In 2019, measurements made with the High-resolution Airborne Wideband Camera-Plus (HAWC+) mounted in the SOFIA aircraft revealed that magnetic fields cause the surrounding ring of gas and dust, temperatures of which range from , to flow into an orbit around Sagittarius A*, keeping black hole emissions low.
Astronomers have been unable to observe Sgr A* in the optical spectrum because of the effect of 25 magnitudes of extinction (absorption and scattering) by dust and gas between the source and Earth.
History
In April 1933, Karl Jansky, considered one of the fathers of radio astronomy, discovered that a radio signal was coming from a location in the direction of the constellation of Sagittarius, towards the center of the Milky Way. The radio source later became known as Sagittarius A. His observations did not extend quite as far south as the point we now know to be the Galactic Center. Observations by Jack Piddington and Harry Minnett using the CSIRO radio telescope at Potts Hill Reservoir in Sydney discovered a discrete and bright "Sagittarius-Scorpius" radio source, which after further observation with the CSIRO radio telescope at Dover Heights was identified in a letter to Nature as the probable Galactic Center.
Later observations showed that Sagittarius A actually consists of several overlapping sub-components; a bright and very compact component, Sgr A*, was discovered on February 13 and 15, 1974, by Balick and Robert L. Brown using the baseline interferometer of the National Radio Astronomy Observatory. The name Sgr A* was coined by Brown in a 1982 paper because the radio source was "exciting", and excited states of atoms are denoted with asterisks.
Since the 1980s, it has been evident that the central component of Sgr A* is likely a black hole. In 1994, infrared and sub-millimetre spectroscopy studies by a Berkeley team involving Nobel Laureate Charles H. Townes and future Nobel Prize Winner Reinhard Genzel showed that the mass of Sgr A* was tightly concentrated and on the order of 3 million Suns.
On October 16, 2002, an international team led by Reinhard Genzel at the Max Planck Institute for Extraterrestrial Physics reported the observation of the motion of the star S2 near Sagittarius A* throughout a period of ten years. According to the team's analysis, the data ruled out the possibility that Sgr A* contains a cluster of dark stellar objects or a mass of degenerate fermions, strengthening the evidence for a massive black hole. The observations of S2 used near-infrared (NIR) interferometry (in the Ks-band, i.e. 2.1 μm) because of reduced interstellar extinction in this band. SiO masers were used to align NIR images with radio observations, as they can be observed in both NIR and radio bands. The rapid motion of S2 (and other nearby stars) easily stood out against slower-moving stars along the line-of-sight so these could be subtracted from the images.
The VLBI radio observations of Sagittarius A* could also be aligned centrally with the NIR images, so the focus of S2's elliptical orbit was found to coincide with the position of Sagittarius A*. From examining the Keplerian orbit of S2, they determined the mass of Sagittarius A* to be solar masses, confined in a volume with a radius no more than 17 light-hours (about 120 AU). Later observations of the star S14 showed the mass of the object to be about 4.1 million solar masses within a volume with radius no larger than 6.25 light-hours (about 45 AU). S175 passed within a similar distance. For comparison, the Schwarzschild radius of a black hole of 4.1 million solar masses is about 0.08 AU (12 million km). They also determined the distance from Earth to the Galactic Center (the rotational center of the Milky Way), which is important in calibrating astronomical distance scales, as . In November 2004, a team of astronomers reported the discovery of a potential intermediate-mass black hole, referred to as GCIRS 13E, orbiting 3 light-years from Sagittarius A*. This black hole of 1,300 solar masses is within a cluster of seven stars. This observation may add support to the idea that supermassive black holes grow by absorbing nearby smaller black holes and stars.
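The logic of this mass determination can be sketched numerically: Kepler's third law applied to S2's orbit gives the enclosed mass directly. The sketch below uses rounded approximate values for S2's semi-major axis and period as assumptions; it is illustrative only, not the teams' actual orbit-fitting procedure.

```python
import math

# Constants (SI), standard values.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11      # metres per astronomical unit
YEAR = 3.156e7     # seconds per year
M_SUN = 1.989e30   # solar mass, kg
c = 2.998e8        # speed of light, m/s

# Assumed (rounded) orbital elements for the star S2.
a = 1030 * AU      # semi-major axis
P = 16.0 * YEAR    # orbital period

# Kepler's third law solved for the enclosed mass: M = 4 pi^2 a^3 / (G P^2)
M = 4 * math.pi**2 * a**3 / (G * P**2)
print(f"enclosed mass ~ {M / M_SUN:.2e} solar masses")   # ~4e6

# Schwarzschild radius of that mass: r_s = 2GM/c^2
r_s = 2 * G * M / c**2
print(f"Schwarzschild radius ~ {r_s / 1e9:.1f} million km")  # ~13 million km
```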
After monitoring stellar orbits around Sagittarius A* for 16 years, Gillessen et al. estimated the object's mass at 4.31 ± 0.38 million solar masses. The result was announced in 2008 and published in The Astrophysical Journal in 2009. Reinhard Genzel, team leader of the research, said the study has delivered "what is now considered to be the best empirical evidence that supermassive black holes do really exist. The stellar orbits in the Galactic Center show that the central mass concentration of four million solar masses must be a black hole, beyond any reasonable doubt."
On January 5, 2015, NASA reported observing an X-ray flare 400 times brighter than usual, a record-breaker, from Sgr A*. The unusual event may have been caused by the breaking apart of an asteroid falling into the black hole or by the entanglement of magnetic field lines within gas flowing into Sgr A*, according to astronomers.
On 13 May 2019, astronomers using the Keck Observatory witnessed a sudden brightening of Sgr A*, which became 75 times brighter than usual, suggesting that the supermassive black hole may have encountered another object.
In June 2023, unexplained filaments of radio energy were found associated with Sagittarius A*.
Central black hole
In a paper published on October 31, 2018, the discovery of conclusive evidence that Sagittarius A* is a black hole was announced. Using the GRAVITY interferometer and the four telescopes of the Very Large Telescope (VLT) to create a virtual telescope 130 metres in diameter, astronomers detected clumps of gas moving at about 30% of the speed of light. Emission from highly energetic electrons very close to the black hole was visible as three prominent bright flares. These exactly match theoretical predictions for hot spots orbiting close to a black hole of four million solar masses. The flares are thought to originate from magnetic interactions in the very hot gas orbiting very close to Sagittarius A*.
In July 2018, it was reported that S2 orbiting Sgr A* had been recorded at 7,650 km/s, or 2.55% the speed of light, leading up to the pericenter approach, in May 2018, at about 120 AU (approximately 1,400 Schwarzschild radii) from Sgr A*. At that close distance to the black hole, Einstein's theory of general relativity predicts that S2 would show a discernible gravitational redshift in addition to the usual velocity redshift. The gravitational redshift was detected, in agreement with the general relativity prediction within the 10 percent measurement precision.
The Sagittarius A* radio emissions are not centered on the black hole, but arise from a bright spot in the region around the black hole, close to the event horizon, possibly in the accretion disc, or a relativistic jet of material ejected from the disc. If the apparent position of Sagittarius A* were exactly centered on the black hole, it would be possible to see it magnified beyond its size, because of gravitational lensing of the black hole. According to general relativity, this would result in a ring-like structure, which has a diameter about 5.2 times the black hole's Schwarzschild radius (10 μas). For a black hole of around 4 million solar masses, this corresponds to a size of approximately 52 μas, which is consistent with the observed overall size of about 50 μas, the size (apparent diameter) of the black hole Sgr A* itself being 20 μas.
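The quoted angular sizes follow from simple geometry: the lensed ring diameter in Schwarzschild radii, viewed from the distance of the Galactic Center. The sketch below assumes round values of 4.3 million solar masses for the mass and 8.2 kpc for the distance; it is illustrative, not the published calculation.

```python
import math

G, c = 6.674e-11, 2.998e8       # SI units
M = 4.3e6 * 1.989e30            # assumed black-hole mass, kg
D = 8.2e3 * 3.086e16            # assumed distance: 8.2 kpc in metres

r_s = 2 * G * M / c**2          # Schwarzschild radius, m
theta = 5.2 * r_s / D           # ring angular diameter, radians
uas = theta * (180 / math.pi) * 3600 * 1e6   # radians -> microarcseconds
print(f"predicted ring diameter ~ {uas:.0f} uas")  # ~52 uas, as quoted above
```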
Lower resolution observations revealed that the radio source of Sagittarius A* is symmetrical. Simulations of alternative theories of gravity depict results that may be difficult to distinguish from GR. A 2018 paper predicted an image of Sagittarius A* that is in agreement with observations. In particular, it explains the small angular size and the symmetrical morphology of the source.
The mass of Sagittarius A* has been estimated in two different ways:
Two groups, in Germany and the U.S., monitored the orbits of individual stars very near to the black hole and used Kepler's laws to infer the enclosed mass. The German group found a mass of 4.31 ± 0.38 million solar masses, whereas the American group found 4.1 ± 0.6 million solar masses. Given that this mass is confined inside a 44-million-kilometre-diameter sphere, this yields a density ten times higher than previous estimates.
More recently, measurement of the proper motions of a sample of several thousand stars within approximately one parsec of the black hole, combined with a statistical technique, has yielded both an estimate of the black hole's mass at , and a distributed mass in the central parsec amounting to . The latter is thought to be composed of stars and stellar remnants.
The comparatively small mass of this supermassive black hole, along with the low luminosity of the radio and infrared emission lines, imply that the Milky Way is not a Seyfert galaxy.
Ultimately, what is seen is not the black hole itself, but observations that are consistent only if there is a black hole present near Sgr A*. In the case of such a black hole, the observed radio and infrared energy emanates from gas and dust heated to millions of degrees while falling into the black hole. The black hole itself is thought to emit only Hawking radiation at a negligible temperature, on the order of 10−14 kelvin.
The European Space Agency's gamma-ray observatory INTEGRAL observed gamma rays interacting with the nearby giant molecular cloud Sagittarius B2, causing X-ray emission from the cloud. The total luminosity of this outburst (≈1.5×10^39 erg/s) is estimated to be a million times stronger than the current output from Sgr A* and is comparable to that of a typical active galactic nucleus. In 2011 this conclusion was supported by Japanese astronomers observing the Milky Way's center with the Suzaku satellite.
In July 2019, astronomers reported finding a star, S5-HVS1, traveling at about 1,755 km/s, or 0.006 c. The star is in the Grus (or Crane) constellation in the southern sky, about 29,000 light-years from Earth, and may have been propelled out of the Milky Way galaxy after interacting with Sagittarius A*.
Several values have been given for its spin parameter; some examples are Fragione & Loeb (2020), Belanger et al. (2006), Meyer et al. (2006), Genzel et al. (2003), Daly (2019), and Daly et al. (2023). Daly et al. (2023) also found that the ratio of the black hole rotational mass component to the irreducible mass component of Sgr A* is , which indicates that the black hole is rotating with an angular velocity that is of the maximum possible value, set by the speed of light.
Orbiting stars
There are a number of stars in close orbit around Sagittarius A*, which are collectively known as "S stars". These stars are observed primarily in K band infrared wavelengths, as interstellar dust drastically limits visibility in visible wavelengths. This is a rapidly changing field: in 2011, the orbits of the most prominent stars then known were plotted against various orbits in the solar system for comparison. Since then, S62 has been found to approach even more closely than those stars.
The high velocities and close approaches to the supermassive black hole makes these stars useful to establish limits on the physical dimensions of Sagittarius A*, as well as to observe general-relativity associated effects like periapse shift of their orbits. An active watch is maintained for the possibility of stars approaching the event horizon close enough to be disrupted, but none of these stars are expected to suffer that fate.
As of 2020, S4714 is the record holder for the closest approach to Sagittarius A*, at about 12.6 AU, almost as close as Saturn gets to the Sun, traveling at about 8% of the speed of light. These figures are approximate, the formal uncertainties being and . Its orbital period is 12 years, but an extreme eccentricity of 0.985 gives it the close approach and high velocity.
An excerpt from a table of this cluster (see Sagittarius A* cluster) features its most prominent members. In that table, id1 is the star's name in the Gillessen catalog and id2 its name in the catalog of the University of California, Los Angeles; a, e, i, Ω and ω are standard orbital elements, with a measured in arcseconds; Tp is the epoch of pericenter passage; P is the orbital period in years; Kmag is the infrared K-band apparent magnitude of the star; and q and v are the pericenter distance in AU and the pericenter speed in percent of the speed of light.
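The quoted close approach of S4714 follows from these orbital elements: Kepler's third law gives the semi-major axis from the period, and the pericenter distance is q = a(1 − e). A rough numeric sketch, assuming a round central mass of 4.3 million solar masses:

```python
import math

G, M_SUN, AU, YEAR = 6.674e-11, 1.989e30, 1.496e11, 3.156e7
M = 4.3e6 * M_SUN   # assumed central mass, kg
P = 12 * YEAR       # S4714 orbital period (from the text)
e = 0.985           # S4714 eccentricity (from the text)

# Kepler's third law solved for the semi-major axis.
a = (G * M * P**2 / (4 * math.pi**2)) ** (1 / 3)
q = a * (1 - e)     # pericenter distance
print(f"a ~ {a / AU:.0f} AU, q ~ {q / AU:.1f} AU")   # q ~ 12-13 AU
```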
Discovery of G2 gas cloud on an accretion course
First noticed as something unusual in images of the center of the Milky Way in 2002, the gas cloud G2, which has a mass about three times that of Earth, was confirmed to be likely on a course taking it into the accretion zone of Sgr A* in a paper published in Nature in 2012. Predictions of its orbit suggested it would make its closest approach to the black hole (a perinigricon) in early 2014, when the cloud was at a distance of just over 3,000 times the radius of the event horizon (or ≈260 AU, 36 light-hours) from the black hole. G2 has been observed to be disrupting since 2009, and was predicted by some to be completely destroyed by the encounter, which could have led to a significant brightening of X-ray and other emission from the black hole. Other astronomers suggested the gas cloud could be hiding a dim star, or a binary star merger product, which would hold it together against the tidal forces of Sgr A*, allowing the ensemble to pass by without any effect. In addition to the tidal effects on the cloud itself, it was proposed in May 2013 that, prior to its perinigricon, G2 might experience multiple close encounters with members of the black-hole and neutron-star populations thought to orbit near the Galactic Center, offering some insight to the region surrounding the supermassive black hole at the center of the Milky Way.
The average rate of accretion onto Sgr A* is unusually small for a black hole of its mass and is only detectable because it is so close to Earth. It was thought that the passage of G2 in 2013 might offer astronomers the chance to learn much more about how material accretes onto supermassive black holes. Several astronomical facilities observed this closest approach, with observations confirmed with Chandra, XMM, VLA, INTEGRAL, Swift, Fermi and requested at VLT and Keck.
Simulations of the passage were made before it happened by groups at ESO and Lawrence Livermore National Laboratory (LLNL).
As the cloud approached the black hole, Daryl Haggard said, "It's exciting to have something that feels more like an experiment", and hoped that the interaction would produce effects that would provide new information and insights.
Nothing was observed during and after the closest approach of the cloud to the black hole, which was described as a lack of "fireworks" and a "flop". Astronomers from the UCLA Galactic Center Group published observations obtained on March 19 and 20, 2014, concluding that G2 was still intact (in contrast to predictions for a simple gas cloud hypothesis) and that the cloud was likely to have a central star.
An analysis published on July 21, 2014, based on observations by the ESO's Very Large Telescope in Chile, concluded alternatively that the cloud, rather than being isolated, might be a dense clump within a continuous but thinner stream of matter, and would act as a constant breeze on the disk of matter orbiting the black hole, rather than as sudden gusts that would have caused high brightness as they hit, as originally expected. Supporting this hypothesis, G1, a cloud that passed near the black hole 13 years earlier, had an orbit almost identical to G2, consistent with both clouds, and a gas tail thought to be trailing G2, all being denser clumps within a single large gas stream.
Andrea Ghez et al. suggested in 2014 that G2 is not a gas cloud but rather a pair of binary stars that had been orbiting the black hole in tandem and merged into an extremely large star.
| Physical sciences | Notable galaxies | null |
841429 | https://en.wikipedia.org/wiki/Synthetic%20biology | Synthetic biology | Synthetic biology (SynBio) is a multidisciplinary field of science that focuses on living systems and organisms, and it applies engineering principles to develop new biological parts, devices, and systems or to redesign existing systems found in nature.
It is a branch of science that encompasses a broad range of methodologies from various disciplines, such as biochemistry, biotechnology, biomaterials, material science/engineering, genetic engineering, molecular biology, molecular engineering, systems biology, membrane science, biophysics, chemical and biological engineering, electrical and computer engineering, control engineering and evolutionary biology.
It includes designing and constructing biological modules, biological systems, and biological machines, or re-designing existing biological systems for useful purposes.
Additionally, it is the branch of science that focuses on engineering new abilities into existing organisms in order to redesign them for useful purposes.
In order to produce predictable and robust systems with novel functionalities that do not already exist in nature, it is also necessary to apply the engineering paradigm of systems design to biological systems. According to the European Commission, this possibly involves a molecular assembler based on biomolecular systems such as the ribosome.
History
1910: First identifiable use of the term synthetic biology in Stéphane Leduc's publication Théorie physico-chimique de la vie et générations spontanées. He used the term again in his 1912 publication La Biologie Synthétique.
1944: Canadian-American scientist Oswald Avery shows that DNA is the material of which genes and chromosomes are made. This becomes the bedrock on which all subsequent genetic research is built.
1953: Francis Crick and James Watson publish the structure of DNA in Nature.
1961: Jacob and Monod postulate cellular regulation by molecular networks from their study of the lac operon in E. coli and envisioned the ability to assemble new systems from molecular components.
1973: First molecular cloning and amplification of DNA in a plasmid is published in P.N.A.S. by Cohen, Boyer et al. constituting the dawn of synthetic biology.
1978: Arber, Nathans and Smith win the Nobel Prize in Physiology or Medicine for the discovery of restriction enzymes, leading Szybalski to offer an editorial comment in the journal Gene.
1988: First DNA amplification by the polymerase chain reaction (PCR) using a thermostable DNA polymerase is published in Science by Mullis et al. This obviated adding new DNA polymerase after each PCR cycle, thus greatly simplifying DNA mutagenesis and assembly.
2000: Two papers in Nature report synthetic biological circuits, a genetic toggle switch and a biological clock, by combining genes within E. coli cells.
2003: The most widely used standardized DNA parts, BioBrick plasmids, are invented by Tom Knight. These parts will become central to the International Genetically Engineered Machine (iGEM) competition founded at MIT in the following year.
2003: Researchers engineer an artemisinin precursor pathway in E. coli.
2004: First international conference for synthetic biology, Synthetic Biology 1.0 (SB1.0) is held at MIT.
2005: Researchers develop a light-sensing circuit in E. coli. Another group designs circuits capable of multicellular pattern formation.
2006: Researchers engineer a synthetic circuit that promotes bacterial invasion of tumour cells.
2010: Researchers publish in Science the first synthetic bacterial genome, called M. mycoides JCVI-syn1.0. The genome is made from chemically-synthesized DNA using yeast recombination.
2011: Functional synthetic chromosome arms are engineered in yeast.
2012: Charpentier and Doudna labs publish in Science the programming of CRISPR-Cas9 bacterial immunity for targeting DNA cleavage. This technology greatly simplified and expanded eukaryotic gene editing.
2019: Scientists at ETH Zurich report the creation of the first bacterial genome, named Caulobacter ethensis-2.0, made entirely by a computer, although a related viable form of C. ethensis-2.0 does not yet exist.
2019: Researchers report the production of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids.
2020: Scientists created the first xenobot, a programmable synthetic organism derived from frog cells and designed by AI.
2021: Scientists reported that xenobots are able to self-replicate by gathering loose cells in the environment and then forming new xenobots.
Perspectives
It is a field whose scope is expanding in terms of systems integration, engineered organisms, and practical findings.
Engineers view biology as technology (in other words, a given system includes biotechnology or its biological engineering). Synthetic biology includes the broad redefinition and expansion of biotechnology, with the ultimate goal of being able to design and build engineered live biological systems that process information, manipulate chemicals, fabricate materials and structures, produce energy, provide food, and maintain and enhance human health, as well as advance fundamental knowledge of biological systems and our environment.
Researchers and companies working in synthetic biology are using nature's power to solve issues in agriculture, manufacturing, and medicine.
Due to more powerful genetic engineering capabilities and decreased DNA synthesis and sequencing costs, the field of synthetic biology is rapidly growing. In 2016, more than 350 companies across 40 countries were actively engaged in synthetic biology applications; together these companies had an estimated worth of $3.9 billion in the global market. Synthetic biology currently has no generally accepted definition. Here are a few examples:
It is the emerging science of genetic and physical engineering to produce new (and, therefore, synthetic) life forms. To develop organisms with novel or enhanced characteristics, this emerging field of study combines knowledge and techniques from biology, engineering, and related disciplines to design chemically synthesised DNA.
Biomolecular engineering includes approaches that aim to create a toolkit of functional units that can be introduced to present new technological functions in living cells. Genetic engineering includes approaches to construct synthetic chromosomes or minimal organisms like Mycoplasma laboratorium.
Biomolecular design refers to the general idea of de novo design and additive combination of biomolecular components. Each of these approaches shares a similar task: to develop a more synthetic entity at a higher level of complexity by inventively manipulating a simpler part at the preceding level. Optimizing these exogenous pathways in unnatural systems takes iterative fine-tuning of the individual biomolecular components to select the highest concentrations of the desired product.
On the other hand, "re-writers" are synthetic biologists interested in testing the irreducibility of biological systems. Due to the complexity of natural biological systems, it would be simpler to rebuild the natural systems of interest from the ground up; to provide engineered surrogates that are easier to comprehend, control and manipulate. Re-writers draw inspiration from refactoring, a process sometimes used to improve computer software.
Categories
Bioengineering, synthetic genomics, protocell synthetic biology, unconventional molecular biology, and in silico techniques are the five categories of synthetic biology.
It is necessary to review the distinctions and analogies between the categories of synthetic biology for its social and ethical assessment, to distinguish between issues affecting the whole field and those particular to a specific one.
Bioengineering
The subfield of bioengineering concentrates on creating novel metabolic and regulatory pathways, and is currently the one that draws the most researchers and funding. It is primarily motivated by the desire to establish biotechnology as a legitimate engineering discipline. When referring to this area of synthetic biology, the word "bioengineering" should not be confused with "traditional genetic engineering", which involves introducing a single transgene into the intended organism. Bioengineers adapted synthetic biology to provide a substantially more integrated perspective on how to alter organisms or metabolic systems.
A typical example of single-gene genetic engineering is the insertion of the human insulin gene into bacteria to create transgenic proteins. The creation of whole new signalling pathways, containing numerous genes and regulatory components (such as an oscillator circuit to initiate the periodic production of green fluorescent protein (GFP) in mammalian cells), is known as bioengineering as part of synthetic biology.
By utilising simplified and abstracted metabolic and regulatory modules as well as other standardized parts that may be freely combined to create new pathways or creatures, bioengineering aims to create innovative biological systems. In addition to creating infinite opportunities for novel applications, this strategy is anticipated to make bioengineering more predictable and controllable than traditional biotechnology.
Synthetic genomics
The formation of organisms with a chemically manufactured (minimal) genome is another facet of synthetic biology that is highlighted by synthetic genomics. This area of synthetic biology has been made possible by ongoing advancements in DNA synthesis technology, which now makes it feasible to produce DNA molecules with thousands of base pairs at a reasonable cost. The goal is to combine these molecules into complete genomes and transplant them into living cells, replacing the host cell's genome and reprogramming its metabolism to perform different functions.
Scientists have previously demonstrated the potential of this approach by creating infectious viruses by synthesising the genomes of multiple viruses. These significant advances in science and technology triggered the initial public concerns concerning the risks associated with this technology.
A simple genome might also serve as a "chassis genome" that could be enlarged quickly by adding genes created for particular tasks. Such "chassis organisms" would be better suited than wild organisms for the insertion of new functions, since they would have specific insertion sites and fewer biological pathways that could potentially conflict with the new functionalities. Like the bioengineering approach, synthetic genomics strives to create organisms with novel "architectures" and adopts an integrative or holistic perspective of the organism. In this case, the objective is the creation of chassis genomes based on essential genes and other required DNA sequences, rather than the design of metabolic or regulatory pathways based on abstract criteria.
Protocell synthetic biology
The in vitro generation of synthetic cells is the protocell branch of synthetic biology. Lipid vesicles, which have all the necessary components to function as a complete system, can be used to create these artificial cells. In the end, these synthetic cells should meet the requirements for being deemed alive, namely the capacity for self-replication, self-maintenance, and evolution. The protocell approach has this as its end goal; however, there are intermediate steps that fall short of meeting all the criteria for a living cell. In order to carry out a specific function, these lipid vesicles contain cell extracts or more specific sets of biological macromolecules and complex structures, such as enzymes, nucleic acids, or ribosomes. For instance, liposomes may carry out particular polymerase chain reactions or synthesise a particular protein.
Protocell synthetic biology takes artificial life one step closer to reality by eventually synthesizing not only the genome but also every component of the cell in vitro, as opposed to the synthetic genomics approach, which relies on coercing a natural cell to carry out the instructions encoded by the introduced synthetic genome. Synthetic biologists in this field view their work, more than in any of the other approaches, as basic research into the conditions necessary for life to exist and into its origin. The protocell technique, however, also lends itself well to applications; similar to other synthetic biology byproducts, protocells could be employed for the manufacture of biopolymers and medicines.
Unconventional molecular biology
The objective of the "unnatural molecular biology" strategy is to create new varieties of life that are based on a different kind of molecular biology, such as new types of nucleic acids or a new genetic code. The creation of new types of nucleotides that can be built into unique nucleic acids could be accomplished by changing certain DNA or RNA constituents, such as the bases or the backbone sugars.
The normal genetic code is being altered by inserting quadruplet codons or by changing some codons to encode new amino acids, which would subsequently permit the use of non-natural amino acids with unique features during protein production. For both approaches, adjusting the enzymatic machinery of the cell is a scientific and technological challenge.
A new sort of life would be formed by organisms with a genome built on synthetic nucleic acids or on a totally new coding system for synthetic amino acids. This new style of life would have some benefits but also some new dangers. On release into the environment, there would be no horizontal gene transfer or outcrossing of genes with natural species. Furthermore, these kinds of synthetic organisms might be created to require non-natural materials for protein or nucleic acid synthesis, rendering them unable to thrive in the wild if they accidentally escaped.
On the other hand, if these organisms were ultimately able to survive outside controlled spaces, they might have a particular advantage over natural organisms because they would be resistant to predatory organisms and natural viruses, which could lead to an unmanaged spread of the synthetic organisms.
In silico technique
Synthetic biology in silico and the various strategies are interconnected. The development of complex designs, whether they are metabolic pathways, fundamental cellular processes, or chassis genomes, is one of the major difficulties faced by the four synthetic-biology methods outlined above. Because of this, synthetic biology has a robust in silico branch, similar to systems biology, that aims to create computational models for the design of common biological components or synthetic circuits, which are essentially simulations of synthetic organisms.
The practical application of simulations and models through bioengineering or other fields of synthetic biology is the long-term goal of in silico synthetic biology. Many of the computational simulations of synthetic organisms to date have little to no direct analogy to living things. For this reason, in silico synthetic biology is regarded here as a separate category.
It is sensible to integrate the five areas under the umbrella of synthetic biology as one unified field of study. Even though they focus on various facets of life, such as metabolic regulation, essential elements, or biochemical makeup, these five strategies all work toward the same end: creating new types of living organisms. Additionally, the strategies start from different methodological approaches, which accounts for the diversity of synthetic biology.
Synthetic biology is an interdisciplinary field that draws from and is inspired by many different scientific disciplines, not one single field or technique. Synthetic biologists all have the same underlying objective of designing and producing new forms of life, despite the fact that they may employ various methodologies, techniques, and research instruments. Any evaluation of synthetic biology, whether it examines ethical, legal, or safety considerations, must take into account the fact that while some questions, risks, and issues are unique to each technique, in other circumstances, synthetic biology as a whole must be taken into consideration.
Four engineering approaches
Synthetic biology has traditionally been divided into four different engineering approaches: top down, parallel, orthogonal and bottom up.
One approach uses unnatural chemicals to replicate emergent behaviours of natural biology, with the goal of building artificial life. The other looks for interchangeable components from biological systems to assemble into systems that do not occur naturally. In either case, a synthetic objective compels researchers to venture into new territory in order to engage and resolve issues that cannot be readily resolved by analysis, driving new paradigms to arise in ways that analysis cannot easily do. In addition to devices that oscillate, creep, and play tic-tac-toe, synthetic biology has produced diagnostic instruments that improve the treatment of patients with infectious diseases.
Top-down approach
It involves using metabolic and genetic engineering techniques to impart new functions to living cells. By comparing universal genes and eliminating non-essential ones to create a basic genome, this method seeks to lessen the complexity of existing cells. These initiatives are founded on the hypothesis of a single origin for cellular life, the so-called Last Universal Common Ancestor, which supports the existence of a universal minimal genome from which all living things arose. Recent studies, however, raise the possibility that the eukaryotic and prokaryotic cells that make up the tree of life may have evolved from a group of primordial cells rather than from a single cell. As a result, the Holy Grail-like pursuit of the "minimal genome" has grown elusive: cutting out too many apparently non-essential functions impairs an organism's fitness and leads to "fragile" genomes.
Bottom-up approach
This approach involves creating new biological systems in vitro by bringing together 'non-living' biomolecular components, often with the aim of constructing an artificial cell.
Reproduction, replication, and assembly are three crucial self-organizational principles taken into account in order to accomplish this. In the definition of reproduction, cells, which are made up of a container and a metabolism, are considered "hardware", whereas replication occurs when a system duplicates a perfect copy of itself, as in the case of DNA, which is considered "software". Assembly occurs when vesicles or containers aggregate, such as Oparin's coacervates (tiny droplets of organic molecules like lipids) or liposomes (membrane-like structures comprising phospholipids).
The study of protocells exists alongside other in vitro synthetic biology initiatives that seek to produce minimal cells, metabolic pathways, or "never-born proteins", and to mimic physiological functions such as cell division and growth. Recently, a self-sustaining cell-free system that fixes CO2 was engineered by integrating metabolism with gene expression from the bottom up. Although primarily basic research, this work deserves recognition as synthetic biology research.
Parallel approach
Parallel engineering is also known as bioengineering. The basic genetic code is the foundation for parallel engineering research, which uses conventional biomolecules like nucleic acids and the 20 amino acids to construct biological systems. For a variety of applications in biocomputing, bioenergy, biofuels, bioremediation, optogenetics, and medicine, it involves the standardisation of DNA components and the engineering of switches, biosensors, genetic circuits, logic gates, and cellular communication operators. For directing the expression of two or more genes and/or proteins, the majority of these applications rely on the use of one or more vectors (or plasmids). Plasmids are small, circular, double-stranded DNA units, primarily found in prokaryotic cells but occasionally detected in eukaryotic cells, that can replicate independently of chromosomal DNA.
Orthogonal approach
It is also known as perpendicular engineering. This strategy, also referred to as "chemical synthetic biology", principally seeks to alter or enlarge the genetic codes of living systems using artificial DNA bases and/or amino acids. This subfield is also connected to xenobiology, a newly developed field that combines systems chemistry, synthetic biology, exobiology, and research into the origins of life. In recent decades, researchers have created compounds structurally similar to the canonical DNA bases to see whether such "alien" or xeno nucleic acid (XNA) molecules can be employed as genetic information carriers. Similarly, noncanonical moieties have taken the place of the DNA sugar (deoxyribose). The genetic code can also be altered or enlarged to express information beyond the 20 conventional amino acids of proteins. One method incorporates a specified unnatural, noncanonical, or xeno amino acid (XAA) into one or more proteins at one or more precise places, using orthogonal enzymes and a transfer RNA adaptor from another organism. Orthogonal enzymes are produced by "directed evolution", which entails repeated cycles of gene mutagenesis (generating genotypic diversity), screening or selection (of a specific phenotypic trait), and amplification of a better variant for the following iterative round. Numerous XAAs have been effectively incorporated into proteins in bacteria, yeast, and human cell lines, as well as in more complex creatures such as worms and flies. Through changes to canonical DNA sequences, directed evolution also enables the development of orthogonal ribosomes, which make it easier to incorporate XAAs into proteins, and of "mirror life": biological systems containing biomolecules made up of enantiomers with opposite chiral orientations.
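The directed-evolution cycle described above can be sketched computationally. The following toy Python loop, with an invented target motif and fitness function standing in for a screened phenotype, illustrates the mutate-screen-amplify iteration; it is a conceptual illustration, not any published protocol.

```python
# A toy directed-evolution loop mirroring the cycle described above:
# mutagenesis (genotypic diversity), screening/selection of the best
# variant, and amplification into the next round. The target motif and
# fitness function are invented stand-ins for a screened phenotype.
import random

TARGET = "ACTGGTCA"  # hypothetical optimal sequence

def fitness(seq: str) -> int:
    """Screened trait: number of positions matching the target motif."""
    return sum(a == b for a, b in zip(seq, TARGET))

def mutate(seq: str, rate: float = 0.1) -> str:
    """Random point mutagenesis at the given per-base rate."""
    return "".join(random.choice("ACGT") if random.random() < rate else base
                   for base in seq)

parent = "AAAAAAAA"
for generation in range(30):
    library = [mutate(parent) for _ in range(50)]   # diversify
    parent = max(library + [parent], key=fitness)   # screen and select
print(parent, "fitness:", fitness(parent))
```

After a few dozen rounds the population converges on the target motif, mirroring how each cycle amplifies the best variant found by the screen.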
Enabling technologies
Several novel enabling technologies were critical to the success of synthetic biology. Concepts include standardization of biological parts and hierarchical abstraction to permit using those parts in synthetic systems. DNA serves as the guide for how biological processes should function, like the score to a complex symphony of life. Our ability to comprehend and design biological systems has undergone significant modifications as a result of developments in the previous few decades in both reading (sequencing) and writing (synthesis) DNA sequences. These developments have produced ground-breaking techniques for designing, assembling, and modifying DNA-encoded genes, materials, circuits, and metabolic pathways, enabling an ever-increasing amount of control over biological systems and even entire organisms.
Basic technologies include reading and writing DNA (sequencing and fabrication). Measurements under multiple conditions are needed for accurate modeling and computer-aided design (CAD).
DNA and gene synthesis
Driven by dramatic decreases in the cost of oligonucleotide ("oligo") synthesis and the advent of PCR, the sizes of DNA constructions from oligos have increased to the genomic level. In 2000, researchers reported synthesis of the 9.6 kbp (kilo bp) Hepatitis C virus genome from chemically synthesized 60 to 80-mers. In 2002, researchers at Stony Brook University succeeded in synthesizing the 7741 bp poliovirus genome from its published sequence, producing the second synthetic genome; the project spanned two years. In 2003, the 5386 bp genome of the bacteriophage Phi X 174 was assembled in about two weeks. In 2006, the same team, at the J. Craig Venter Institute, constructed and patented a synthetic genome of a novel minimal bacterium, Mycoplasma laboratorium, and was working on getting it functioning in a living cell.
In 2007, it was reported that several companies were offering synthesis of genetic sequences up to 2000 base pairs (bp) long, for a price of about $1 per bp and a turnaround time of less than two weeks. Oligonucleotides harvested from a photolithographic- or inkjet-manufactured DNA chip, combined with PCR and DNA mismatch error-correction, allow inexpensive large-scale changes of codons in genetic systems to improve gene expression or incorporate novel amino acids (see George M. Church's and Anthony Forster's synthetic cell projects). This favors a synthesis-from-scratch approach.
Additionally, the CRISPR/Cas system has emerged as a promising technique for gene editing. It was described as "the most important innovation in the synthetic biology space in nearly 30 years". While other methods take months or years to edit gene sequences, CRISPR speeds that time up to weeks. Due to its ease of use and accessibility, however, it has raised ethical concerns, especially surrounding its use in biohacking.
Sequencing
DNA sequencing determines the order of nucleotide bases in a DNA molecule. Synthetic biologists use DNA sequencing in their work in several ways. First, large-scale genome sequencing efforts continue to provide information on naturally occurring organisms. This information provides a rich substrate from which synthetic biologists can construct parts and devices. Second, sequencing can verify that the fabricated system is as intended. Third, fast, cheap, and reliable sequencing can facilitate rapid detection and identification of synthetic systems and organisms.
Modularity
This is the ability of a system or component to operate without reference to its context.
The most used standardized DNA parts are BioBrick plasmids, invented by Tom Knight in 2003. BioBricks are stored at the Registry of Standard Biological Parts in Cambridge, Massachusetts. The BioBrick standard has been used by tens of thousands of students worldwide in the international Genetically Engineered Machine (iGEM) competition. BioBrick Assembly Standard 10 promotes modularity by allowing BioBrick coding sequences to be spliced out and exchanged using the restriction enzymes EcoRI and XbaI (BioBrick prefix) and SpeI and PstI (BioBrick suffix).
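One practical consequence of the standard is that a part must not contain internal copies of the four reserved restriction sites. The following minimal sketch, assuming the Biopython library is installed and using a hypothetical part sequence, checks this requirement.

```python
# A minimal sketch (assuming the Biopython library) of an RFC 10
# compatibility check: a BioBrick part must contain no internal
# EcoRI/XbaI/SpeI/PstI recognition sites, since those enzymes are
# reserved for the prefix/suffix assembly reactions.
from Bio.Seq import Seq
from Bio.Restriction import EcoRI, XbaI, SpeI, PstI

def is_biobrick_compatible(part_sequence: str) -> bool:
    """Return True if the part has no internal BioBrick restriction sites."""
    seq = Seq(part_sequence.upper())
    return not any(enzyme.search(seq) for enzyme in (EcoRI, XbaI, SpeI, PstI))

# Hypothetical part sequences, for illustration only.
print(is_biobrick_compatible("ATGGCTAAAGGAGAAGAACTTTTCACTGGAGTT"))  # True
print(is_biobrick_compatible("ATGGAATTCAAAGGAGAAGAA"))              # False (EcoRI site)
```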
Sequence overlap between two genetic elements (genes or coding sequences), called overlapping genes, can prevent their individual manipulation. To increase genome modularity, the practice of genome refactoring, or improving "the internal structure of an existing system for future use, while simultaneously maintaining external system function", has been adopted across synthetic biology disciplines. Some notable examples of refactoring include the nitrogen fixation cluster and the type III secretion system, along with bacteriophages T7 and ΦX174.
While DNA is most important for information storage, a large fraction of the cell's activities are carried out by proteins. Tools can send proteins to specific regions of the cell and link different proteins together. The interaction strength between protein partners should be tunable from a lifetime of seconds (desirable for dynamic signaling events) up to an irreversible interaction (desirable for device stability or resilience to harsh conditions). Interactions such as coiled coils, SH3 domain-peptide binding, or SpyTag/SpyCatcher offer such control. In addition, it is desirable to regulate protein-protein interactions in cells, for example with light (using light-oxygen-voltage-sensing domains) or with cell-permeable small molecules by chemically induced dimerization.
In a living cell, molecular motifs are embedded in a bigger network with upstream and downstream components. These components may alter the signaling capability of the module. In the case of ultrasensitive modules, a module's sensitivity contribution in context can differ from the sensitivity it sustains in isolation.
Modeling
Models inform the design of engineered biological systems by better predicting system behavior prior to fabrication. Synthetic biology benefits from better models of how biological molecules bind substrates and catalyze reactions, how DNA encodes the information needed to specify the cell and how multi-component integrated systems behave. Multiscale models of gene regulatory networks focus on synthetic biology applications. Simulations can model all biomolecular interactions in transcription, translation, regulation and induction of gene regulatory networks.
Only extensive modelling can enable the exploration of dynamic gene expression in a form suitable for research and design, given the numerous species involved and the intricacy of their interactions. Dynamic simulations of the entire set of biomolecular interactions involved in regulation, transport, transcription, induction, and translation enable molecular-level detailing of designs, in contrast to modelling artificial networks a posteriori.
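As a minimal illustration of such dynamic modelling, the sketch below integrates a two-species ordinary-differential-equation model of transcription and translation with SciPy; all rate constants are invented for illustration and describe no particular system.

```python
# A toy dynamic model of constitutive gene expression, in the spirit of the
# simulations described above: transcription, translation, and first-order
# degradation of mRNA and protein. All rate constants are invented.
from scipy.integrate import solve_ivp

k_tx, k_tl = 2.0, 5.0   # transcription (mRNA/min) and translation (protein/mRNA/min)
d_m, d_p = 0.2, 0.02    # degradation rates (1/min)

def expression(t, y):
    m, p = y
    return [k_tx - d_m * m,          # dm/dt
            k_tl * m - d_p * p]      # dp/dt

sol = solve_ivp(expression, (0.0, 300.0), [0.0, 0.0])
p_ss = k_tx * k_tl / (d_m * d_p)     # analytic steady state, for comparison
print(f"protein at t=300 min: {sol.y[1, -1]:.0f} (steady state: {p_ss:.0f})")
```

Even this toy model shows the typical workflow: simulate the circuit, compare against an analytic expectation, then adjust parameters before committing to fabrication.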
Microfluidics
Microfluidics, in particular droplet microfluidics, is an emerging tool used to construct new components, and to analyze and characterize them. It is widely employed in screening assays.
Synthetic transcription factors
Studies have considered the components of the DNA transcription mechanism. One desire of scientists creating synthetic biological circuits is to be able to control the transcription of synthetic DNA in unicellular organisms (prokaryotes) and in multicellular organisms (eukaryotes). One study tested the adjustability of synthetic transcription factors (sTFs) in areas of transcription output and cooperative ability among multiple transcription factor complexes. Researchers were able to mutate functional regions called zinc fingers, the DNA-specific component of sTFs, to decrease their affinity for specific operator DNA sequence sites, and thus decrease the associated site-specific activity of the sTF (usually transcriptional regulation). They further used the zinc fingers as components of complex-forming sTFs, paralleling the cooperative transcription mechanisms of eukaryotes.
Applications
Synthetic biology initiatives frequently aim to redesign organisms so that they can create a material, such as a drug or fuel, or acquire a new function, such as the ability to sense something in the environment. Examples of what researchers are creating using synthetic biology include:
Utilizing microorganisms for bioremediation to remove contaminants from our water, soil, and air.
Production of complex natural products that are usually extracted from plants but cannot be obtained in sufficient amounts, e.g. drugs of natural origin, such as artemisinin and paclitaxel.
Beta-carotene, a substance typically associated with carrots that prevents vitamin A deficiency, is produced by modified rice. Every year, between 250,000 and 500,000 children lose their vision due to vitamin A deficiency, which also significantly raises their risk of dying from infectious diseases.
As a sustainable and environmentally benign alternative to the fresh roses that perfumers use to create expensive scents, yeast has been engineered to produce rose oil.
Biosensors
A biosensor refers to an engineered organism, usually a bacterium, that is capable of reporting some ambient phenomenon such as the presence of heavy metals or toxins. One such system is the lux operon of Aliivibrio fischeri, which codes for the enzyme that is the source of bacterial bioluminescence and can be placed downstream of a responsive promoter to express the luminescence genes in response to a specific environmental stimulus. One such sensor consisted of a bioluminescent bacterial coating on a photosensitive computer chip to detect certain petroleum pollutants: when the bacteria sense the pollutant, they luminesce. Another example of a similar mechanism is the detection of landmines by an engineered E. coli reporter strain capable of detecting TNT and its main degradation product DNT, and consequently producing a green fluorescent protein (GFP).
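The input-output behaviour of such a sensor is often summarized with a Hill-type transfer function relating analyte concentration to reporter signal. The sketch below is a toy model only; the parameter values are invented and describe no published lux construct.

```python
# A toy transfer function for a whole-cell biosensor: a responsive promoter
# modelled as a Hill function driving a luminescence reporter. Parameter
# values are invented and describe no published lux construct.
def reporter_output(analyte_uM: float, v_max: float = 1000.0,
                    K: float = 5.0, n: float = 2.0, leak: float = 10.0) -> float:
    """Steady-state reporter signal (arbitrary units) vs analyte concentration."""
    return leak + v_max * analyte_uM**n / (K**n + analyte_uM**n)

for conc in (0.0, 1.0, 5.0, 25.0):   # hypothetical pollutant levels, in uM
    print(f"{conc:5.1f} uM -> {reporter_output(conc):7.1f} (arb. units)")
```

The leak term captures basal promoter activity, while K and n set the detection threshold and switch-like steepness, the two properties a sensor designer typically tunes.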
Modified organisms can sense environmental signals and send output signals that can be detected and serve diagnostic purposes. Microbe cohorts have been used.
Biosensors could also be used to detect pathogenic signatures—such as of SARS-CoV-2—and can be wearable.
For the purpose of detecting and reacting to various and transient environmental factors, cells have developed a wide range of regulatory circuits, from transcriptional to post-translational. These circuits are made up of transducer modules that filter the signals and activate a biological response, as well as carefully designed sensitive sections that bind analytes and regulate signal-detection thresholds. Modularity and selectivity are programmed into biosensor circuits at the transcriptional, translational, and post-translational levels to achieve the delicate balancing of the two basic sensing modules.
Food and drink
Not all synthetic nutrition products are animal food products; for instance, as of 2021, there are also synthetic coffee products reported to be close to commercialization. Related fields of research and production based on synthetic biology that can be used for the production of food and drink include:
Genetically engineered microbial food cultures (e.g. for solar-energy-based protein powder)
Cell-free artificial synthesis (e.g. synthetic starch)
Materials
Photosynthetic microbial cells have been used as a step to synthetic production of spider silk.
Biological computers
A biological computer refers to an engineered biological system that can perform computer-like operations, a dominant paradigm in synthetic biology. Researchers built and characterized a variety of logic gates in a number of organisms and demonstrated both analog and digital computation in living cells, showing that bacteria can be engineered to perform analog and/or digital computation. In 2007, researchers demonstrated a universal logic evaluator that operates in mammalian cells. Subsequently, researchers utilized this paradigm to demonstrate a proof-of-concept therapy that uses biological digital computation to detect and kill human cancer cells in 2011. In 2016, another group of researchers demonstrated that principles of computer engineering can be used to automate digital circuit design in bacterial cells. In 2017, researchers demonstrated the 'Boolean logic and arithmetic through DNA excision' (BLADE) system to engineer digital computation in human cells. In 2019, researchers implemented a perceptron in biological systems, opening the way for machine learning in these systems.
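As a toy illustration of the digital abstraction behind such circuits, the sketch below maps two hypothetical chemical inducers through threshold promoter models into an AND gate; all levels and thresholds are invented for illustration.

```python
# A toy illustration of the digital abstraction used in cellular logic
# design: two inducible promoters feed an AND gate driving a reporter.
# Inducer names, levels, and thresholds are invented for illustration.
def promoter(inducer: float, on_level: float = 100.0,
             off_level: float = 5.0, threshold: float = 1.0) -> float:
    """Two-state toy model mapping inducer concentration to promoter activity."""
    return on_level if inducer >= threshold else off_level

def and_gate(p1: float, p2: float, gate_threshold: float = 50.0) -> bool:
    """Output is ON only when both input activities exceed the gate threshold."""
    return min(p1, p2) >= gate_threshold

for iptg, atc in [(0, 0), (0, 2), (2, 0), (2, 2)]:   # hypothetical inducers
    state = "ON" if and_gate(promoter(iptg), promoter(atc)) else "OFF"
    print(f"IPTG={iptg} aTc={atc} -> reporter {state}")
```

Automated design tools work at this abstraction level first, composing truth tables from characterized parts, before mapping the gates onto actual promoter and repressor sequences.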
Cell transformation
Cells use interacting genes and proteins, which are called gene circuits, to implement diverse functions, such as responding to environmental signals, decision making and communication. Three key components are involved: DNA, RNA, and synthetic-biologist-designed gene circuits that can control gene expression at several levels, including the transcriptional, post-transcriptional and translational levels.
Traditional metabolic engineering has been bolstered by the introduction of combinations of foreign genes and optimization by directed evolution. This includes engineering E. coli and yeast for commercial production of a precursor of the antimalarial drug, Artemisinin.
Entire organisms have yet to be created from scratch, although living cells can be transformed with new DNA. Several methods allow constructing synthetic DNA components and even entire synthetic genomes, but once the desired genetic code is obtained, it is integrated into a living cell that is expected to manifest the desired new capabilities or phenotypes while growing and thriving. Cell transformation is used to create biological circuits, which can be manipulated to yield desired outputs.
By integrating synthetic biology with materials science, it would be possible to use cells as microscopic molecular foundries to produce materials whose properties were genetically encoded. Re-engineering has produced Curli fibers, the amyloid component of extracellular material of biofilms, as a platform for programmable nanomaterial. These nanofibers were genetically constructed for specific functions, including adhesion to substrates, nanoparticle templating and protein immobilization.
Designed proteins
Natural proteins can be engineered, for example by directed evolution, and novel protein structures that match or improve on the functionality of existing proteins can be produced. One group generated a helix bundle that was capable of binding oxygen with similar properties as hemoglobin, yet did not bind carbon monoxide. A similar protein structure was generated to support a variety of oxidoreductase activities while another formed a structurally and sequentially novel ATPase. Another group generated a family of G-protein coupled receptors that could be activated by the inert small molecule clozapine N-oxide but remained insensitive to the native ligand, acetylcholine; these receptors are known as DREADDs. Novel functionalities or protein specificity can also be engineered using computational approaches. One study was able to use two different computational methods: a bioinformatics and molecular modeling method to mine sequence databases, and a computational enzyme design method to reprogram enzyme specificity. Both methods resulted in designed enzymes with greater than 100-fold specificity for production of longer chain alcohols from sugar.
Another common investigation is expansion of the natural set of 20 amino acids. Excluding stop codons, 61 codons have been identified, but only 20 amino acids are coded generally in all organisms. Certain codons are engineered to code for alternative amino acids including: nonstandard amino acids such as O-methyl tyrosine; or exogenous amino acids such as 4-fluorophenylalanine. Typically, these projects make use of re-coded nonsense suppressor tRNA-Aminoacyl tRNA synthetase pairs from other organisms, though in most cases substantial engineering is required.
Other researchers investigated protein structure and function by reducing the normal set of 20 amino acids. Limited protein sequence libraries are made by generating proteins where groups of amino acids may be replaced by a single amino acid. For instance, several non-polar amino acids within a protein can all be replaced with a single non-polar amino acid. One project demonstrated that an engineered version of Chorismate mutase still had catalytic activity when only nine amino acids were used.
Researchers and companies practice synthetic biology to synthesize industrial enzymes with high activity, optimal yields and effectiveness. These synthesized enzymes aim to improve products such as detergents and lactose-free dairy products, as well as make them more cost effective. The improvement of metabolic engineering by synthetic biology is an example of a biotechnological technique utilized in industry to discover pharmaceuticals and fermentative chemicals. Synthetic biology may investigate modular pathway systems in biochemical production and increase yields of metabolic production. Artificial enzymatic activity and subsequent effects on metabolic reaction rates and yields may develop "efficient new strategies for improving cellular properties ... for industrially important biochemical production".
Designed nucleic acid systems
Scientists can encode digital information onto a single strand of synthetic DNA. In 2012, George M. Church encoded one of his books about synthetic biology in DNA. The 5.3 Mb of data was more than 1000 times greater than the previous largest amount of information to be stored in synthesized DNA. A similar project encoded the complete sonnets of William Shakespeare in DNA. More generally, algorithms such as NUPACK, ViennaRNA, Ribosome Binding Site Calculator, Cello, and Non-Repetitive Parts Calculator enable the design of new genetic systems.
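A minimal illustration of the underlying encoding idea, two bits per nucleotide, is sketched below; real schemes such as the one used for Church's book add addressing blocks and error tolerance, which this toy encoding omits.

```python
# A minimal sketch of the core idea behind DNA data storage: two bits per
# nucleotide. Real schemes (such as the one used for Church's book) add
# addressing blocks and error tolerance, which this toy encoding omits.
BASES = "ACGT"   # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def encode(data: bytes) -> str:
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def decode(dna: str) -> bytes:
    bits = [BASES.index(base) for base in dna]
    return bytes((bits[i] << 6) | (bits[i + 1] << 4) | (bits[i + 2] << 2) | bits[i + 3]
                 for i in range(0, len(bits), 4))

message = b"synthetic biology"
dna = encode(message)
assert decode(dna) == message
print(dna[:24] + "...")   # first bases of the encoded message
```

At four bases per byte, the 5.3 Mb book corresponds to millions of nucleotides, which is why such projects synthesize the data as many short, addressed oligonucleotides rather than one long strand.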
Many technologies have been developed for incorporating unnatural nucleotides and amino acids into nucleic acids and proteins, both in vitro and in vivo. For example, in May 2014, researchers announced that they had successfully introduced two new artificial nucleotides into bacterial DNA. By including individual artificial nucleotides in the culture media, they were able to exchange the bacteria 24 times; they did not generate mRNA or proteins able to use the artificial nucleotides.
Space exploration
Synthetic biology raised NASA's interest as it could help to produce resources for astronauts from a restricted portfolio of compounds sent from Earth. On Mars, in particular, synthetic biology could lead to production processes based on local resources, making it a powerful tool in the development of occupied outposts with less dependence on Earth. Work has gone into developing plant strains that are able to cope with the harsh Martian environment, using similar techniques to those employed to increase resilience to certain environmental factors in agricultural crops.
Synthetic life
One important topic in synthetic biology is synthetic life, which is concerned with hypothetical organisms created in vitro from biomolecules and/or chemical analogues thereof. Synthetic life experiments attempt to probe the origins of life, study some of the properties of life, or, more ambitiously, recreate life from non-living (abiotic) components. Synthetic-life biology attempts to create living organisms capable of carrying out important functions, from manufacturing pharmaceuticals to detoxifying polluted land and water. In medicine, it offers prospects of using designer biological parts as a starting point for new classes of therapies and diagnostic tools.
A living "artificial cell" has been defined as a completely synthetic cell that can capture energy, maintain ion gradients, contain macromolecules as well as store information and have the ability to mutate. Nobody has been able to create such a cell.
A completely synthetic bacterial chromosome was produced in 2010 by Craig Venter, and his team introduced it into genomically emptied bacterial host cells. The host cells were able to grow and replicate. Mycoplasma laboratorium is the only living organism with a completely engineered genome.
The first living organism with 'artificial' expanded DNA code was presented in 2014; the team used E. coli that had its genome extracted and replaced with a chromosome with an expanded genetic code. The nucleosides added are d5SICS and dNaM.
In May 2019, in a milestone effort, researchers reported the creation of a new synthetic (possibly artificial) form of viable life, a variant of the bacterium Escherichia coli, by reducing the natural number of 64 codons in the bacterial genome to 59 codons, in order to encode 20 amino acids.
In 2017, the international Build-a-Cell large-scale open-source research collaboration for the construction of synthetic living cells was started, followed by national synthetic cell organizations in several countries, including FabriCell, MaxSynBio and BaSyC. The European synthetic cell efforts were unified in 2019 as SynCellEU initiative.
In 2023, researchers were able to create the first synthetically made human embryos derived from stem cells.
Drug delivery platforms
In therapeutics, synthetic biology has achieved significant advancements in altering and simplifying the scope of therapeutics in a relatively short period of time. New therapeutic platforms, from the discovery of disease mechanisms and drug targets to the manufacture and transport of small molecules, are made possible by the rational, model-guided design and construction of biological components.
Synthetic biology devices have been designed to act as therapies. Completely engineered viruses and organisms can be controlled to target particular pathogens and diseased pathways. Thus, in two independent studies, researchers utilised genetically modified bacteriophages to fight antibiotic-resistant bacteria by giving the phages genetic features that specifically target and hinder bacterial defences against antibiotic activity.
In cancer therapy, since conventional medicines frequently target tumours and normal tissues indiscriminately, artificially created viruses and organisms that can identify pathological signals and couple their therapeutic action to them may be helpful. For example, sensitivity to p53 pathway activity in human cells was engineered into adenoviruses to control how they replicate.
Engineered bacteria-based platform
Bacteria have long been used in cancer treatment. Bifidobacterium and Clostridium selectively colonize tumors and reduce their size. Recently, synthetic biologists have reprogrammed bacteria to sense and respond to a particular cancer state. Most often, bacteria are used to deliver a therapeutic molecule directly to the tumor to minimize off-target effects. To target the tumor cells, peptides that specifically recognize a tumor were expressed on the surfaces of bacteria. Peptides used include an affibody molecule that specifically targets human epidermal growth factor receptor 2 and a synthetic adhesin. Alternatively, bacteria can be allowed to sense the tumor microenvironment, for example hypoxia, by building an AND logic gate into the bacteria. The bacteria then release the therapeutic molecules at the tumor only, through either lysis or the bacterial secretion system. Lysis has the advantage that it can stimulate the immune system and control growth. Multiple types of secretion systems, as well as other strategies, can be used. The system can be made inducible by external signals; inducers include chemicals and electromagnetic waves such as light.
Multiple species and strains are applied in these therapeutics. The most commonly used bacteria are Salmonella typhimurium, Escherichia coli, Bifidobacteria, Streptococcus, Lactobacillus, Listeria and Bacillus subtilis. Each of these species has its own properties and suits cancer therapy differently in terms of tissue colonization, interaction with the immune system, and ease of application.
Engineered yeast-based platform
Synthetic biologists are developing genetically modified live yeast that can deliver therapeutic biologic medicines. When delivered orally, these live yeast act like micro-factories and make therapeutic molecules directly in the gastrointestinal tract. Because yeast are eukaryotic, a key benefit is that they can be administered together with antibiotics. Probiotic yeast expressing the human P2Y2 purinergic receptor suppressed intestinal inflammation in mouse models of inflammatory bowel disease. A live S. boulardii yeast delivering a tetra-specific anti-toxin that potently neutralizes Toxin A and Toxin B of Clostridioides difficile has been developed. This therapeutic anti-toxin is a fusion of four single-domain antibodies (nanobodies) that potently and broadly neutralize the two major virulence factors of C. difficile at the site of infection in preclinical models. The first-in-human clinical trial of engineered live yeast for the treatment of Clostridioides difficile infection is anticipated in 2024 and will be sponsored by the developer Fzata, Inc.
Cell-based platform
The immune system plays an important role in cancer and can be harnessed to attack cancer cells. Cell-based therapies focus on immunotherapies, mostly by engineering T cells.
T cell receptors were engineered and 'trained' to detect cancer epitopes. Chimeric antigen receptors (CARs) are composed of a fragment of an antibody fused to intracellular T cell signaling domains that can activate and trigger proliferation of the cell. Multiple second-generation CAR-based therapies have been approved by the FDA.
Gene switches were designed to enhance the safety of the treatment. Kill switches were developed to terminate the therapy should the patient show severe side effects. Other mechanisms can control the system more finely, stopping and reactivating it. Since the number of T cells is important for therapy persistence and severity, the growth of T cells is also controlled to tune the effectiveness and safety of the therapeutics.
Although several mechanisms can improve safety and control, limitations include the difficulty of introducing large DNA circuits into cells and the risks associated with introducing foreign components, especially proteins, into cells.
Biofuels, pharmaceuticals and biomaterials
The most popular biofuel is ethanol produced from corn or sugar cane, but this method of producing biofuels is troublesome and constrained by the high agricultural cost and the inadequate fuel characteristics of ethanol. A substitute and potential source of renewable energy are microbes whose metabolic pathways have been altered to convert biomass into biofuels more efficiently. These techniques can be expected to succeed only if their production costs can be made to match or even beat those of present fuel production. Relatedly, there are several medicines whose expensive manufacturing procedures prevent them from having a larger therapeutic range. The creation of new materials and the microbial manufacturing of biomaterials would both benefit substantially from novel synthetic biology tools.
CRISPR/Cas9
The clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated (Cas) system is a powerful method of genome engineering in a range of organisms because of its simplicity, modularity, and scalability. In this technique, a guide RNA (gRNA) recruits the CRISPR nuclease Cas9 to a particular spot in the genome, causing a double-strand break. Several DNA repair processes, including homology-directed repair and non-homologous end joining, can then be exploited to accomplish the desired genome change (i.e., gene deletion or insertion). Additionally, dCas9 (dead Cas9, or nuclease-deficient Cas9), a Cas9 double mutant (H840A, D10A), has been used to control gene expression in bacteria or, when fused to activation or repression domains, in yeast.
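The first step of a typical SpCas9 design workflow, scanning for NGG PAM sites and extracting the adjacent 20-nt protospacers, can be sketched in a few lines; real guide-design tools additionally check the reverse strand and score off-targets and sequence composition. The example sequence below is arbitrary.

```python
# A sketch of the first step in SpCas9 guide design: scan the forward strand
# for NGG PAM sites and report each adjacent 20-nt protospacer. Real design
# tools also check the reverse strand and score off-targets and composition.
import re

def find_guides(dna: str):
    """Return (position, protospacer) pairs for every NGG PAM on the + strand."""
    dna = dna.upper()
    # Lookahead so that overlapping candidate sites are all reported.
    return [(m.start(1), m.group(1))
            for m in re.finditer(r"(?=([ACGT]{20})[ACGT]GG)", dna)]

example = "TTGACGGCTAGCTCAGTCCTAGGTACAGTGCTAGCTACTAGAGAAAGAGGAGAAATACTAG"  # arbitrary
for pos, guide in find_guides(example)[:3]:
    print(pos, guide)
```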
Regulatory elements
To build and develop biological systems, regulatory components, including regulators, ribosome-binding sites (RBSs), and terminators, are crucial. Despite years of study, many varieties of promoters and terminators are available for Escherichia coli, but for the well-researched model organism Saccharomyces cerevisiae, as well as for other organisms of interest, these tools remain quite scarce. To overcome this constraint, numerous techniques have been invented for discovering and identifying promoters and terminators, including genome mining, random mutagenesis, hybrid engineering, biophysical modelling, combinatorial design, and rational design.
Organoids
Synthetic biology has been used for organoids, which are lab-grown organs with applications in medical research and transplantation.
Bioprinted organs
Other transplants and induced regeneration
There is ongoing research and development into synthetic-biology-based methods for inducing regeneration in humans, as well as the creation of transplantable artificial organs.
Nanoparticles, artificial cells and micro-droplets
Synthetic biology can be used for creating nanoparticles for drug delivery, as well as for other purposes. Complementary research and development seeks to create, and has created, synthetic cells that mimic functions of biological cells. Applications include medicine, such as designer nanoparticles that make blood cells eat away, from the inside out, portions of the atherosclerotic plaque that causes heart attacks. Synthetic micro-droplets for algal cells, or synergistic algal-bacterial multicellular spheroid microbial reactors, could for example be used to produce hydrogen for a hydrogen economy.
Electrogenetics
Mammalian designer cells are engineered to behave in a specific way, such as an immune cell that expresses a synthetic receptor designed to combat a specific disease. Electrogenetics is an application of synthetic biology that involves using electrical fields to stimulate a response in engineered cells. The designer cells can be controlled with relative ease through common electronic devices, such as smartphones. Additionally, electrogenetics allows for the possibility of creating devices that are much smaller and more compact than devices using other stimuli, through the use of microscopic electrodes. One example of how electrogenetics can benefit public health is stimulating designer cells to produce and deliver therapeutics. This was implemented in ElectroHEK cells, which contain electrosensitive voltage-gated calcium channels, meaning that the ion channel can be controlled by electrical conduction between electrodes and the ElectroHEK cells. The expression level of the artificial gene these ElectroHEK cells contained was shown to be controllable by changing the voltage or the electrical pulse length. Further studies have expanded on this system, one of which is a beta cell line designed to control the release of insulin based on electric signals.
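A toy dose-response model captures the kind of controllability reported: expression rises with pulse number and pulse length and then saturates. All constants below are invented for illustration and do not describe the ElectroHEK system quantitatively.

```python
# A toy saturating dose-response for an electrogenetic circuit: reporter
# expression as a function of pulse number and pulse length. All constants
# are invented for illustration; they do not describe ElectroHEK data.
def expression_level(n_pulses: int, pulse_ms: float,
                     half_dose: float = 5.0, max_level: float = 100.0) -> float:
    """Expression (arbitrary units) vs total stimulation dose (pulse-seconds)."""
    dose = n_pulses * pulse_ms / 1000.0           # total stimulation, in seconds
    return max_level * dose / (half_dose + dose)  # saturating response

for pulses in (1, 10, 50, 200):
    print(f"{pulses:3d} pulses of 50 ms -> {expression_level(pulses, 50.0):5.1f} a.u.")
```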
Ethics
The creation of new life and the tampering with existing life have raised ethical concerns in the field of synthetic biology and are actively being discussed.
Common ethical questions include:
Is it morally right to tamper with nature?
Is one playing God when creating new life?
What happens if a synthetic organism accidentally escapes?
What if an individual misuses synthetic biology and creates a harmful entity (e.g., a biological weapon)?
Who will have control of and access to the products of synthetic biology?
Who will gain from these innovations? Investors? Medical patients? Industrial farmers?
Does the patent system allow patents on living organisms? What about parts of organisms, like HIV resistance genes in humans?
What if a new creation is deserving of moral or legal status?
The ethical aspects of synthetic biology have three main features: biosafety, biosecurity, and the creation of new life forms. Other ethical issues mentioned include the regulation of new creations, patent management of new creations, benefit distribution, and research integrity.
Ethical issues have surfaced for recombinant DNA and genetically modified organism (GMO) technologies, and extensive regulations of genetic engineering and pathogen research were already in place in many jurisdictions. Amy Gutmann, former head of the Presidential Bioethics Commission, argued that we should avoid the temptation to over-regulate synthetic biology in general, and genetic engineering in particular. According to Gutmann, "Regulatory parsimony is especially important in emerging technologies...where the temptation to stifle innovation on the basis of uncertainty and fear of the unknown is particularly great. The blunt instruments of statutory and regulatory restraint may not only inhibit the distribution of new benefits, but can be counterproductive to security and safety by preventing researchers from developing effective safeguards."
The "creation" of life
One ethical question is whether or not it is acceptable to create new life forms, sometimes described as "playing God". Currently, the creation of new life forms not present in nature is at a small scale, the potential benefits and dangers remain unknown, and careful consideration and oversight are ensured for most studies. Many advocates point to the great potential value of creating artificial life forms for agriculture, medicine, and academic knowledge, among other fields. Creation of new entities could expand scientific knowledge well beyond what can be learned from studying natural phenomena. Yet there is concern that artificial life forms may reduce nature's "purity" (i.e., that nature could somehow be corrupted by human intervention and manipulation) and promote the adoption of engineering-like principles at the expense of biodiversity- and nature-focused ideals. Some are also concerned that if an artificial life form were released into nature, it could hamper biodiversity by out-competing natural species for resources (similar to how algal blooms kill marine species). Another concern involves the ethical treatment of newly created entities if they happen to experience pain, sentience, or self-perception. There is an ongoing debate over whether such life forms should be granted moral or legal rights, though no consensus exists as to how such rights would be administered or enforced.
Ethical support for synthetic biology
Ethical and moral rationales supporting certain applications of synthetic biology include their potential to mitigate substantial global problems: the detrimental environmental impacts of conventional agriculture (including meat production), animal welfare, food security, and human health, as well as the potential to reduce human labor needs and, through therapies for diseases, to reduce human suffering and prolong life.
Biosafety and biocontainment
What is most ethically appropriate when considering biosafety measures? How can the accidental introduction of synthetic life into the natural environment be avoided? Much ethical consideration and critical thought have been given to these questions. Biosafety refers not only to biological containment but also to the steps taken to protect the public from potentially hazardous biological agents. Even though such concerns are important and remain unanswered, not all products of synthetic biology present concerns for biological safety or negative consequences for the environment. It is argued that most synthetic technologies are benign and incapable of flourishing in the outside world due to their "unnatural" characteristics; there is as yet no example of a transgenic microbe that has been conferred a fitness advantage in the wild.
In general, existing hazard controls, risk assessment methodologies, and regulations developed for traditional genetically modified organisms (GMOs) are considered to be sufficient for synthetic organisms. "Extrinsic" biocontainment methods in a laboratory context include physical containment through biosafety cabinets and gloveboxes, as well as personal protective equipment. In an agricultural context, they include isolation distances and pollen barriers, similar to methods for biocontainment of GMOs. Synthetic organisms may offer increased hazard control because they can be engineered with "intrinsic" biocontainment methods that limit their growth in an uncontained environment, or prevent horizontal gene transfer to natural organisms. Examples of intrinsic biocontainment include auxotrophy, biological kill switches, inability of the organism to replicate or to pass modified or synthetic genes to offspring, and the use of xenobiological organisms using alternative biochemistry, for example using artificial xeno nucleic acids (XNA) instead of DNA.
Biosecurity and bioterrorism
Some ethical issues relate to biosecurity, where biosynthetic technologies could be deliberately used to cause harm to society and/or the environment. Since synthetic biology raises both ethical and biosecurity issues, humanity must consider and plan how to deal with potentially harmful creations, and what ethical measures could be employed to deter nefarious biosynthetic technologies. With the exception of regulating synthetic biology and biotechnology companies, however, the issues are not seen as new: they were raised during the earlier recombinant DNA and genetically modified organism (GMO) debates, and extensive regulations of genetic engineering and pathogen research are already in place in many jurisdictions.
Additionally, the development of synthetic biology tools has made it easier for individuals with less education, training, and access to equipment to modify and use pathogenic organisms as bioweapons. This increases the threat of bioterrorism, especially as terrorist groups become aware of the significant social, economic, and political disruption caused by pandemics like COVID-19. As new techniques are developed in the field of synthetic biology, the risk of bioterrorism is likely to continue to grow. Juan Zarate, who served as Deputy National Security Advisor for Combating Terrorism from 2005 to 2009, noted that "the severity and extreme disruption of a novel coronavirus will likely spur the imagination of the most creative and dangerous groups and individuals to reconsider bioterrorist attacks."
European Union
The European Union-funded project SYNBIOSAFE has issued reports on how to manage synthetic biology. A 2007 paper identified key issues in safety, security, ethics, and the science-society interface, which the project defined as public education and ongoing dialogue among scientists, businesses, government and ethicists. The key security issues that SYNBIOSAFE identified involved engaging companies that sell synthetic DNA and the biohacking community of amateur biologists. Key ethical issues concerned the creation of new life forms.
A subsequent report focused on biosecurity, especially the so-called dual-use challenge. For example, while synthetic biology may lead to more efficient production of medical treatments, it may also lead to synthesis or modification of harmful pathogens (e.g., smallpox). The biohacking community remains a source of special concern, as the distributed and diffuse nature of open-source biotechnology makes it difficult to track, regulate or mitigate potential concerns over biosafety and biosecurity.
COSY, another European initiative, focuses on public perception and communication. To better communicate synthetic biology and its societal ramifications to a broader public, COSY and SYNBIOSAFE published SYNBIOSAFE, a 38-minute documentary film, in October 2009.
The International Association Synthetic Biology has proposed a scheme of self-regulation, with specific measures that the synthetic biology industry, especially DNA synthesis companies, should implement. In 2007, a group led by scientists from leading DNA-synthesis companies published a "practical plan for developing an effective oversight framework for the DNA-synthesis industry".
United States
In January 2009, the Alfred P. Sloan Foundation funded the Woodrow Wilson Center, the Hastings Center, and the J. Craig Venter Institute to examine the public perception, ethics and policy implications of synthetic biology.
On July 9–10, 2009, the National Academies' Committee of Science, Technology & Law convened a symposium on "Opportunities and Challenges in the Emerging Field of Synthetic Biology".
After the publication of the first synthetic genome and the accompanying media coverage about "life" being created, President Barack Obama established the Presidential Commission for the Study of Bioethical Issues to study synthetic biology. The commission convened a series of meetings and issued a report in December 2010 titled "New Directions: The Ethics of Synthetic Biology and Emerging Technologies". The commission stated that "while Venter's achievement marked a significant technical advance in demonstrating that a relatively large genome could be accurately synthesized and substituted for another, it did not amount to the 'creation of life'". It noted that synthetic biology is an emerging field that creates potential risks and rewards. The commission did not recommend policy or oversight changes; it called for continued funding of the research and new funding for monitoring, the study of emerging ethical issues, and public education.
Synthetic biology, as a major tool for biological advances, carries the "potential for developing biological weapons, possible unforeseen negative impacts on human health ... and any potential environmental impact". The proliferation of such technology could also make the production of biological and chemical weapons available to a wider array of state and non-state actors. These security issues may be addressed by regulating industry uses of biotechnology through policy legislation. The President's Bioethics Commission, "in response to the announced creation of a self-replicating cell from a chemically synthesized genome, put forward 18 recommendations not only for regulating the science ... [but also] for educating the public".
Opposition
On March 13, 2012, over 100 environmental and civil society groups, including Friends of the Earth, the International Center for Technology Assessment, and the ETC Group, issued the manifesto The Principles for the Oversight of Synthetic Biology. This manifesto calls for a worldwide moratorium on the release and commercial use of synthetic organisms until more robust regulations and rigorous biosafety measures are established. The groups specifically call for an outright ban on the use of synthetic biology on the human genome or human microbiome. Richard Lewontin wrote that some of the safety tenets for oversight discussed in The Principles for the Oversight of Synthetic Biology are reasonable, but that the main problem with the recommendations in the manifesto is that "the public at large lacks the ability to enforce any meaningful realization of those recommendations".
Health and safety
The hazards of synthetic biology include biosafety hazards to workers and the public, biosecurity hazards stemming from deliberate engineering of organisms to cause harm, and environmental hazards. The biosafety hazards are similar to those for existing fields of biotechnology, mainly exposure to pathogens and toxic chemicals, although novel synthetic organisms may have novel risks. For biosecurity, there is concern that synthetic or redesigned organisms could theoretically be used for bioterrorism. Potential risks include recreating known pathogens from scratch, engineering existing pathogens to be more dangerous, and engineering microbes to produce harmful biochemicals. Lastly, environmental hazards include adverse effects on biodiversity and ecosystem services, including potential changes to land use resulting from agricultural use of synthetic organisms. Synthetic biology is an example of a dual-use technology with the potential to be used in ways that could intentionally or unintentionally harm humans and/or damage the environment. Often "scientists, their host institutions and funding bodies" consider whether the planned research could be misused and sometimes implement measures to reduce the likelihood of misuse.
Existing risk analysis systems for GMOs are generally considered sufficient for synthetic organisms, although there may be difficulties for an organism built "bottom-up" from individual genetic sequences. Synthetic biology generally falls under existing regulations for GMOs and biotechnology in general, and any regulations that exist for downstream commercial products, although there are generally no regulations in any jurisdiction that are specific to synthetic biology.
| Biology and health sciences | Biology basics | Biology |
282377 | https://en.wikipedia.org/wiki/Anvil | Anvil | An anvil is a metalworking tool consisting of a large block of metal (usually forged or cast steel), with a flattened top surface, upon which another object is struck (or "worked").
Anvils are massive because the higher their inertia, the more efficiently they cause the energy of striking tools to be transferred to the work piece. In most cases the anvil is used as a forging tool. Before the advent of modern welding technology, it was the primary tool of metal workers.
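This relationship can be illustrated with a simple perfectly inelastic collision model (an illustrative assumption, not a model given in the source): a hammer of mass m striking work backed by a freely resting anvil of mass M makes a fraction M/(m+M) of the blow's energy available for deforming the work, so a heavier anvil wastes less energy on recoil. A minimal sketch in Python, with hypothetical example masses:

# Fraction of a hammer blow's kinetic energy available for deforming the work,
# using a perfectly inelastic collision model (illustrative assumption only).
def useful_energy_fraction(hammer_kg, anvil_kg):
    return anvil_kg / (hammer_kg + anvil_kg)

print(useful_energy_fraction(1.5, 150))  # heavy anvil: ~0.99 of the blow is usable
print(useful_energy_fraction(1.5, 15))   # light anvil: ~0.91, more energy lost to recoil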
The great majority of modern anvils are made of cast steel that has been heat treated by either flame or electric induction. Inexpensive anvils have been made of cast iron and low-quality steel, but are considered unsuitable for serious use, as they deform and lack rebound when struck.
The largest single-piece, heat-treated tool steel anvil weighs 1,600 pounds; it was made in 2023 by Oak Lawn Blacksmith. Larger anvils made from multiple pieces exist, such as the "mile long anvil" made by Napier, which weighs 6,500 pounds but is neither heat treated nor made from tool steel.
Structure
The primary work surface of the anvil is known as the face. It is generally made of hardened steel and should be flat and smooth with rounded edges for most work. Any marks on the face will be transferred to the work. Also, sharp edges tend to cut into the metal being worked and may cause cracks to form in the workpiece. The face is hardened and tempered to resist the blows of the smith's hammer, so the anvil face does not deform under repeated use. A hard anvil face also reduces the amount of force lost in each hammer blow. Hammers, tools, and work pieces of hardened steel should never directly strike the anvil face with full force, as they may damage it; this can result in chipping or deforming of the anvil face.
The horn of the anvil is a conical projection used to form various round shapes and is generally unhardened steel or iron. The horn is used mostly in bending operations. It also is used by some smiths as an aid in "drawing down" stock (making it longer and thinner). Some anvils, mainly European, are made with two horns, one square and one round. Also, some anvils are made with side horns or clips for specialized work.
The step is the area of the anvil between the "horn" and the "face". It is soft and is used for cutting; performing such operations there protects the hardened steel face of the anvil and spares the cutting edge of the chisel, though many smiths shun the practice because it damages the anvil over time.
There have also been other additions to the anvil, such as the upsetting block, which is placed between the feet of the anvil and used to upset steel, generally long strips or bars. Upsetting is a technique often used by blacksmiths to make a steel workpiece shorter and thicker, typically starting from long, thin stock.
The hardy hole is a square hole into which specialized forming and cutting tools, called hardy tools, are placed. It is also used in punching and bending operations. These are not to be confused with swage blocks, although their purpose is similar.
The pritchel hole is a small round hole that is present on most modern anvils. Some anvils have more than one. It is used mostly for punching. At times, smiths will fit a second tool to this hole to allow the smith more flexibility when using more than one anvil tool.
Placement
The anvil is placed as near to the forge as is convenient, generally no more than one step from the forge to prevent heat loss in the work piece.
An anvil needs to be placed upon a sturdy base made from an impact and fire resistant material. Common methods of attaching an anvil are spikes, chains, steel or iron straps, clips, bolts where there are holes provided, and cables.
The most common base traditionally was a hard wood log or large timber buried several feet into the floor of the forge shop. In the industrial era, cast iron bases became available. They had the advantage of adding additional weight to the anvil, making it more stable. These bases are highly sought after by collectors today. When concrete became widely available, there was a trend to make steel reinforced anvil bases by some smiths, though this practice has largely been abandoned. In more modern times, anvils have been placed upon bases fabricated from steel, often a short thick section of a large I-beam. In addition, bases have been made from dimensional lumber bolted together to form a large block or steel drums full of oil-saturated sand to provide a damping effect. In recent times, tripod bases of fabricated steel have become popular.
Types
There are many designs for anvils, which are often tailored for a specific purpose or to meet the needs of a particular smith. For example, there were anvils made specifically for farriers, general smiths, cutlers, chain makers, armorers, saw tuners, coach makers, coopers, and many other types of metal workers. Most of these anvil types look similar, but some are radically different. Saw maker anvils, for instance, are generally a large rectangular block of steel with a harder surface than most other anvils, since they are used for hammering the harder steel of saws. Bladesmith anvils tend to be rectangular with a hardy hole and pritchel hole, but no horn. Such designs have originated in diverse geographic locations; styles include the Bavarian, the French pig anvil, the Austrian, and the Chinese turtle anvil.
The Bavarian style is known for its sloped brow. According to depictions on medieval church windows, the brow was used as early as medieval times for making armor. Common manufacturers include Söding Halbach and Holthaus. Thanks to the extra mass of the brow, this style of anvil is known for a face that does not sway.
The common blacksmith's anvil is made of either forged or cast steel, forged wrought iron with a hard steel face or cast iron with a hard steel face. Cast iron anvils are not used for forging as they are incapable of standing up to the impact and will crack and dent. Also, cast iron anvils without a hard steel face do not have the rebound that a harder anvil would and will tire out the smith. Historically, some anvils have been made with a smooth top working face of hardened steel welded to a cast iron or wrought iron body, though this manufacturing method is no longer in use. At one end, the common smith's anvil has a projecting conical bick (beak, horn) used for hammering curved work pieces. The other end is typically called the heel. Occasionally, the other end is also provided with a bick, partly rectangular in section. Most anvils made since the late 18th century also have a hardy hole and a pritchel hole where various tools, such as the anvil-cutter or hot chisel, can be inserted and held by the anvil. Some anvils have several hardy and pritchel holes, to accommodate a wider variety of hardy tools and pritchels. An anvil may also have a softer pad for chisel work.
An anvil for a power hammer is usually supported on a massive anvil block, sometimes weighing over 800 tons for a 12-ton hammer; this again rests on a strong foundation of timber and masonry or concrete.
An anvil may have a marking indicating its weight, manufacturer, or place of origin. American-made anvils were often marked in pounds. European anvils are sometimes marked in kilograms. English anvils were often marked in hundredweight, the marking consisting of three numbers indicating hundredweight, quarter-hundredweight, and pounds. For example, a 3-1-5 marking, if such an anvil existed, would mean 3×112 lb + 1×28 lb + 5 lb = 369 lb ≈ 167 kg.
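The decoding is simple arithmetic. A minimal sketch in Python (the function name is a hypothetical helper, not from any anvil-making source), assuming the traditional long hundredweight of 112 lb with 28 lb quarters:

# Convert an English anvil's three-number hundredweight marking
# to total pounds and kilograms (1 cwt = 112 lb, 1 quarter-cwt = 28 lb).
def anvil_weight(cwt, quarters, pounds):
    total_lb = cwt * 112 + quarters * 28 + pounds
    total_kg = total_lb * 0.45359237  # exact international pound-to-kilogram factor
    return total_lb, total_kg

lb, kg = anvil_weight(3, 1, 5)
print(f"{lb} lb is about {kg:.0f} kg")  # 369 lb is about 167 kg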
Cheap anvils made from inferior steel or cast iron and often sold at retail hardware stores, are considered unsuitable for serious use, and are often derisively referred to as "ASOs", or "anvil shaped objects". Amateur smiths have used lengths of railroad rail, forklift tines, or even simple blocks of steel as makeshift anvils.
A metalworking vise may have a small anvil integrated into its design.
History
Anvils were first made of stone, then bronze, and later wrought iron. As steel became more readily available, anvils were faced with it. This was done to give the anvil a hard face and to stop the anvil from deforming from impact. Many regional styles of anvils evolved through time from the simple block that was first used by smiths. The majority of anvils found today in the US are based on the London pattern anvil of the mid-19th century.
The wrought iron, steel-faced anvil was produced up until the early 20th century. Through the 19th and very early 20th centuries, this method of construction evolved to produce extremely high quality anvils. The basic process involved forge-welding billets of wrought iron together to produce the desired shape. The sequence and location of the forge-welds varied between different anvil makers and the kind of anvil being made. At the same time, cast iron anvils with steel faces were being made in the United States. At the dawn of the 20th century, solid cast steel anvils began to be produced, as well as two-piece forged anvils made from closed-die forgings. Modern anvils are generally made entirely from steel.
There are many references to anvils in ancient Greek and Egyptian writings, including Homer's works. They have been found at the Calico Early Man Site in North America.
Anvils have since lost their former commonness, along with the smiths who used them. Mechanized production has made cheap and abundant manufactured goods readily available. The one-off handmade products of the blacksmith are less economically viable in the modern world, while in the past they were an absolute necessity. However, anvils are still used by blacksmiths and metal workers of all kinds in producing custom work. They are also essential to the work done by farriers.
In popular culture
Firing
Anvil firing is the practice of firing an anvil into the air using gunpowder. It has been popular in California, the eastern United States and the southern United States, much like how fireworks are used today. There is a growing interest in re-enacting this "ancient tradition" in the US, which has now spread to England.
Television and film
A typical metalworker's anvil, with horn at one end and flat face at the other, is a standard prop for cartoon gags, as the epitome of a heavy and clumsy object that is perfect for dropping onto an antagonist. This visual metaphor is common, for example, in Warner Brothers' Looney Tunes and Merrie Melodies shorts, such as those with Wile E. Coyote and the Road Runner. Anvils in cartoons were also referenced in an episode of Gilmore Girls, where one of the main characters tries to have a conversation about "Where did all the anvils go?", a reference to their falling out of use on a general scale. Animaniacs made frequent gags on the topic throughout its run, even having a kingdom named Anvilania, whose sole national product is anvils.
Books
Dwarves were blacksmiths who used anvils for metalworking in C. S. Lewis's The Chronicles of Narnia, most iconically in The Magician's Nephew and Prince Caspian, as well as in J. R. R. Tolkien's The Hobbit.
Music
Anvils have been used as percussion instruments in several famous musical compositions, including:
Louis Andriessen: De Materie (Part I), which features an extended solo for two anvils
Kurt Atterberg: Symphony No. 5
Daniel Auber: opera Le Maçon
Arnold Bax: Symphony No. 3
The Beatles: "Maxwell's Silver Hammer" makes prominent use of the anvil. Their roadie Mal Evans played the anvil.
Benjamin Britten: The Burning Fiery Furnace
Aaron Copland: Symphony No. 3
Don Davis: The Matrix trilogy
Brad Fiedel: The Terminator
Neil Finn: "Song of the Lonely Mountain," written for the end credits of The Hobbit: An Unexpected Journey
Gustav Holst: Second Suite in F for Military Band, which includes a movement titled "Song of the Blacksmith"
Nicholas Hooper: Harry Potter and the Half-Blood Prince
James Horner: used the anvil extensively in Aliens and in other films such as Flightplan, The Forgotten, and Titanic
Metallica: "For Whom the Bell Tolls"
Randy Newman: Toy Story 3
Carl Orff: Antigone
Howard Shore: The Lord of the Rings film trilogy. Used predominantly for the theme of Isengard.
Alan Silvestri: The Mummy Returns
Juan María Solare: Veinticinco de agosto, 1983 and Un ángel de hielo y fuego
John Philip Sousa: Dwellers of the Western World, in which the second movement, The White Man, calls for two pairs of anvils, the one small, the other large
Johann Strauss II: Der Zigeunerbaron (The Gipsy Baron; 1885): Ja, da wird das Eisen gefüge
Josef Strauss: Feuerfest!, op. 269 (1869). The title means "fireproof". This was the slogan of the Wertheim fireproof safe company, which commissioned the work.
Edgard Varèse: Ionisation
Giuseppe Verdi: Il Trovatore, featuring the famous Anvil Chorus
Richard Wagner: Der Ring des Nibelungen in Das Rheingold in scene 3, using 18 anvils tuned in F in three octaves, and Siegfried in act I, notably Siegfried's "Forging Song" (Nothung! Nothung! Neidliches Schwert!)
William Walton: Belshazzar's Feast
John Williams: Jaws, Star Wars: Episode III – Revenge of the Sith
Carl Michael Ziehrer: Der Traum eines österreichischen Reservisten (1890)
Wagner's Ring des Nibelungen is notable for using the anvil as pitched percussion; the vast majority of extant works use the anvil as unpitched. However, tuned anvils are available as musical instruments, albeit unusual ones. These are not to be confused with the "sawyers' anvils" used to "tune" big circular saw blades. Steel anvils are used when tuning for musical purposes, because anvils based partly on cast iron and similar materials give a duller sound; that dullness is actually valued in industry, as pure steel anvils are troublesomely noisy, though energetically more efficient. The hammer and anvil have enjoyed varying popularity in orchestral roles. Robert Donington pointed out that Sebastian Virdung notes them in his book of 1510, and that Martin Agricola includes the instrument in his list (Musica instrumentalis deudsch, 1529), largely as a compliment to Pythagoras. In pre-modern and modern times, anvils have occasionally appeared in operatic works by Berlioz, Bizet, Gounod, Verdi, and Wagner, for example. Commonly, pairs of anvils tuned a third apart are used.
In practice modern orchestras commonly substitute a brake drum or other suitable steel structure that is easier to tune than an actual anvil, although a visibly convincing anvil-shaped prop may be shown as desired. In Das Rheingold Wagner scored for nine little, six mid-sized, and three large anvils, but orchestras seldom can afford instrumentation on such a scale.
| Technology | Metallurgy | null |
282457 | https://en.wikipedia.org/wiki/Drongo | Drongo | A drongo is a member of the family Dicruridae of passerine birds of the Old World tropics. The 28 species in the family are placed in a single genus, Dicrurus.
Drongos are mostly black or dark grey, short-legged birds, with an upright stance when perched. They have forked tails and some have elaborate tail decorations. They feed on insects and small birds, which they catch in flight or on the ground. Some species are accomplished mimics and have a variety of alarm calls, to which other birds and animals often respond. They are known to utter fake alarm calls that scare other animals off food, which the drongo then claims.
Taxonomy
The genus Dicrurus was introduced by French ornithologist Louis Pierre Vieillot for the drongos in 1816. The type species was subsequently designated as the balicassiao (Dicrurus balicassius) by English zoologist George Robert Gray in 1841. The name of the genus combines the Ancient Greek words dikros "forked" and oura "tail". "Drongo" is originally from the indigenous language of Madagascar, where it refers to the crested drongo; it is now used for all members of the family.
This family now includes only the genus Dicrurus, although Christidis and Boles (2007) expanded the family to include the subfamilies Rhipidurinae (Australasian fantails), Monarchinae (monarch and paradise flycatchers), and Grallininae (magpie larks).
The family was formerly treated as having two genera, Chaetorhynchus and Dicrurus. The genus Chaetorhynchus contains a single species, the New Guinea-endemic C. papuensis. On the basis of both morphological and genetic differences, it is now placed with the fantails (Rhipiduridae) and renamed from the pygmy drongo to the drongo fantail.
The genus Dicrurus contains 28 species.
The family Dicruridae is most likely of Indo-Malayan origin, with a colonization of Africa about 15 million years ago (Mya). Dispersal across the Wallace Line into Australasia is estimated to have been more recent, around 6 Mya.
Characteristics
These insectivorous birds are usually found in open forests or bush. Most are black or dark grey in colour, sometimes with metallic tints. They have long, forked tails; some Asian species have elaborate tail decorations. They have short legs and sit very upright whilst perched, like a shrike. They flycatch or take prey from the ground. Some drongos, especially the greater racket-tailed drongo, are noted for their ability to mimic other birds and even mammals.
Two to four eggs are laid in a nest high in a tree. Despite their small size, they are aggressive and fearless, and will attack much larger species if their nests or young are threatened.
Several species of animals and birds respond to drongos' alarm calls, which often warn of the presence of a predator. Fork-tailed drongos in the Kalahari Desert use alarm calls in the absence of a predator to cause animals to flee and abandon food, which they eat, getting up to 23% of their food this way. They not only use their own alarm calls, but also imitate those of many species, either their victim's or that of another species to which the victim responds. If the call of one species is not effective, perhaps because of habituation, the drongo may try another; 51 different calls are known to be imitated. In one test on pied babblers, the babbler ignored an alarm call repeated three times when no danger was present, but continued to respond to different calls. Researchers have considered the possibility that these drongos possess theory of mind, not fully shown in any animal other than humans.
Insult
The word "drongo" is used in Australian English as a mild form of insult meaning "idiot" or "stupid fellow". This usage derives from an Australian racehorse of the same name (apparently after the spangled drongo, D. bracteatus) in the 1920s that never won despite many places. The word also has been frequently used among friends and can be used in a casual or serious tone.
| Biology and health sciences | Passerida | Animals |
282473 | https://en.wikipedia.org/wiki/Manganese%20nodule | Manganese nodule | Polymetallic nodules, also called manganese nodules, are mineral concretions on the sea bottom formed of concentric layers of iron and manganese hydroxides around a core. As nodules can be found in vast quantities and contain valuable metals, deposits have been identified as being of potential economic interest. Depending on their composition and on the author's preference, they may also be called ferromanganese nodules. Ferromanganese nodules are mineral concretions composed of silicates and insoluble iron and manganese oxides that form on the ocean seafloor and in terrestrial soils. The formation mechanism involves a series of redox oscillations driven by both abiotic and biotic processes. As a byproduct of pedogenesis, the specific composition of a terrestrial ferromanganese nodule depends on the composition of the surrounding soil. The formation mechanisms and composition of the nodules allow for couplings with biogeochemical cycles beyond those of iron and manganese. The high relative abundance of nickel, copper, manganese, and other rare metals in nodules has increased interest in their use as a mining resource.
Nodules vary in size from tiny particles visible only under a microscope to large pellets more than 20 centimetres across. However, most nodules are between 5 and 10 centimetres in diameter, about the size of hen's eggs. Their surface textures vary from smooth to rough. They frequently have a botryoidal (mammillated or knobby) texture, and vary from spherical to typically oblate, sometimes prolate, or otherwise irregular in shape. The bottom surface, buried in sediment, is generally rougher than the top due to a different type of growth.
Occurrence
Nodules lie on the seabed sediment, often partly or completely buried. They vary greatly in abundance, in some cases touching one another and covering more than 70% of the sea floor surface. The total amount of polymetallic nodules on the sea floor was estimated at 500 billion tons by Alan A. Archer of the London Geological Museum in 1981.
Polymetallic nodules are found in both shallow (e.g. the Baltic Sea) and deeper waters (e.g. the central Pacific), even in lakes, and are thought to have been a feature of the seas and oceans at least since the deep oceans were oxygenated in the Ediacaran period over 540 million years ago.
Polymetallic nodules were discovered in 1868 in the Kara Sea, in the Arctic Ocean of Siberia. During the scientific expeditions of HMS Challenger (1872–1876), they were found to occur in most oceans of the world.
Their composition varies by location, and sizeable deposits have been found in the following areas:
Penrhyn Basin within the Cook Islands.
North central Pacific Ocean in a region called the Clarion–Clipperton zone (CCZ) roughly midway between Hawaii and Clipperton Islands.
Peru Basin in the southeast Pacific, and
Southern tropical Indian Ocean in a region termed the Indian Ocean Nodule Field (IONF) roughly 500 km SE of Diego Garcia Island.
In the Eastern Pacific, including the area around Juan Fernández Islands and the abyssal plain offshore Loa River.
The largest of these deposits in terms of nodule abundance and metal concentration occurs in the Clarion–Clipperton zone, on vast abyssal plains of the deep ocean. The International Seabed Authority estimates that the total amount of nodules in the Clarion–Clipperton zone exceeds 21 billion tons (Bt), containing about 5.95 Bt of manganese, 0.27 Bt of nickel, 0.23 Bt of copper and 0.05 Bt of cobalt.
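As a rough plausibility check, dividing the ISA's estimated metal tonnages by the estimated nodule tonnage yields average grades consistent with the nodule compositions discussed under "Growth and composition" below. A minimal sketch in Python, using only the figures quoted above:

# Implied average metal grades of Clarion-Clipperton zone nodules,
# computed from the ISA tonnage estimates above (in billions of tons).
TOTAL_NODULES_BT = 21.0
metal_bt = {"manganese": 5.95, "nickel": 0.27, "copper": 0.23, "cobalt": 0.05}
for metal, bt in metal_bt.items():
    print(f"{metal}: {100 * bt / TOTAL_NODULES_BT:.2f} wt. %")
# manganese: 28.33, nickel: 1.29, copper: 1.10, cobalt: 0.24
# -- in line with the grades of economic interest cited later in this article.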
All of these deposits are in international waters apart from the Penrhyn Basin, which lies within the exclusive economic zone of the Cook Islands.
Growth and composition
In both marine and terrestrial environments, ferromanganese nodules are composed primarily of iron and manganese oxide concretions supported by an aluminosilicate matrix and surrounding a nucleus. Typically terrestrial nodules are more enriched in iron, while marine nodules tend to have higher manganese to iron ratios, depending on the formation mechanism and surrounding sedimentary composition. Regardless of where they form, the nodules are characterized by enrichment in iron, manganese, heavy metals, and rare earth element content when compared to the Earth's crust and surrounding sediment. However, organically-bound elements in the surrounding environment are not readily incorporated into nodules.
Marine nodules
On the seabed the abundance of nodules varies and is likely controlled by the thickness and stability of a geochemically active layer that forms at the seabed. Pelagic sediment type and seabed bathymetry (or geomorphology) likely influence the characteristics of the geochemically active layer.
Nodule growth is one of the slowest of all known geological phenomena, on the order of a centimeter over several million years. Several processes are hypothesized to be involved in the formation of nodules, including the precipitation of metals from seawater, the remobilization of manganese in the water column (diagenetic), the derivation of metals from hot springs associated with volcanic activity (hydrothermal), the decomposition of basaltic debris by seawater and the precipitation of metal hydroxides through the activity of microorganisms (biogenic). The sorption of divalent cations such as Mn2+, Fe2+, Co2+, Ni2+, and Cu2+ at the surface of Mn- and Fe-oxyhydroxides, known to be strong sorbents, also plays a main role in the accumulation of these transition metals in the manganese nodules. These processes (precipitation, sorption, surface complexation, surface precipitation, incorporation by formation of solid solutions...) may operate concurrently or they may follow one another during the formation of a nodule.
Manganese nodules are essentially composed of hydrated phyllomanganates: layered Mn-oxide minerals with interlayers containing variable quantities of water molecules. They strongly interact with trace metals (Co2+, Ni2+) because of the octahedral vacancies present in their layers. The particular properties of phyllomanganates explain the role they play in many geochemical concentration processes. They incorporate traces of transition metals mainly via cation exchange in their interlayers, as clay minerals do, and via surface complexation through the formation of inner-sphere complexes at the oxide surface, as is also the case with hydrous ferric oxides (HFO). Slight variations in their crystallographic structure and mineralogical composition may result in considerable changes in their chemical reactivity.
The mineral composition of manganese-bearing minerals is dependent on how the nodules are formed; sedimentary nodules, which have a lower Mn2+ content than diagenetic, are dominated by Fe-vernadite, Mn-feroxyhyte, and asbolane-buserite while diagenetic nodules are dominated by buserite I, birnessite, todorokite, and asbolane-buserite. The growth types termed diagenetic and hydrogenetic reflect suboxic and oxic growth, which in turn could relate to periods of interglacial and glacial climate. It has been estimated that suboxic-diagenetic type 2 layers make up about 50–60% of the chemical inventory of the nodules from the Clarion–Clipperton zone (CCZ) whereas oxic-hydrogenetic type 1 layers comprise about 35–40%. The remaining part (5–10%) of the nodules consists of incorporated sediment particles occurring along cracks and pores.
The chemical composition of nodules varies according to the type of manganese minerals and the size and characteristics of their core. Those of greatest economic interest contain manganese (27–30 wt. %), nickel (1.25–1.5 wt. %), copper (1–1.4 wt. %) and cobalt (0.2–0.25 wt. %). Other constituents include iron (6 wt. %), silicon (5 wt. %) and aluminium (3 wt. %), with lesser amounts of calcium, sodium, magnesium, potassium, titanium and barium, along with hydrogen and oxygen as well as water of crystallization and free water. In a given manganese nodule, there is one part of iron oxide for every two parts of manganese dioxide.
A wide range of trace elements and trace minerals are found in nodules with many of these incorporated from the seabed sediment, which itself includes particles carried as dust from all over the planet before settling to the seabed.
The size of marine ferromanganese nodules ranges from 1 to 15 cm in diameter, surrounding a nucleus. The nucleus itself can be made from a variety of small objects in the surrounding environment, including fragments of previously broken-down nodules, rock fragments, or sunken biogenic matter. Total nodule composition varies with the formation mechanism, broadly broken down into two major categories: hydrogenetic and diagenetic. Hydrogenetic nodules show higher iron and cobalt enrichment, with manganese-to-iron ratios less than 2.5, while diagenetic nodules are more enriched in manganese, nickel, and copper, with manganese-to-iron ratios typically between 2.5 and 5 but reaching 30 or more under suboxic conditions. The parent mineral is vernadite for hydrogenetic nodules and buserite for diagenetic nodules. The majority of observed nodules are a mixture of hydrogenetic and diagenetic regions of growth, preserving the changes in formation mechanism over time. Generally, diagenetic layers are found on the bottom, where the nodule is buried in or touching the seafloor sediment, and hydrogenetic layers are found towards the top, where it is exposed to the overlying water column. Nodule layers are discontinuous and vary in thickness on the micro- to nanometer scale; layers with higher manganese content are typically brighter, while those with higher iron content are dark and dull.
Terrestrial nodules
Terrestrial ferromanganese nodules form in a variety of soil types, including but not limited to ultisols, vertisols, inceptisols, alfisols, and mollisols. Similar to the marine nodules, concretion layers are defined based on iron and manganese content as well as their combination. High iron content nodules appear a red or brown color, while high manganese content appears black or grey. The dominant metal oxide is related to the elements enriched in the nodule. In manganese-dominated nodules, enriched elements include barium, strontium, nickel, cobalt, copper, cadmium, lead, and zinc. In contrast, iron-dominated nodules are enriched in vanadium, phosphorus, arsenic, and chromium.
Formation
Marine origin
Marine ferromanganese nodules form from the precipitation of primarily iron, manganese, nickel, copper, cobalt, and zinc around a nucleus. The mechanism is defined by the source of the precipitating metals: precipitation sourced from the overlying water column is referred to as hydrogenetic, while precipitation from the sediment pore water is diagenetic. Nodule growth occurs more readily in oxygenated environments with relatively low sedimentation rates that still provide adequate levels of labile organic matter to fuel precipitation. When sedimentation rates are too high, nodules can be completely covered in sediments, lowering the local oxygen levels and preventing precipitation. Growth rates for nodules are a current research topic, complicated by the irregular and discontinuous nature of their formation, but average rates have been calculated using radiometric dating. In general, hydrogenetic nodules grow more slowly than diagenetic ones, at approximately 2–5 mm per million years versus 10 mm per million years. The formation of polynodules from multiple nodules growing together is possible and hypothesized to be facilitated by deposited encrusting organisms.
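These growth rates imply ages of millions of years for even modest nodules. A minimal sketch of the arithmetic in Python (illustrative only, since real growth is irregular and discontinuous as noted above, and the example radius is a hypothetical value):

# Rough nodule age from radius and mean radial growth rate.
def nodule_age_myr(radius_mm, growth_mm_per_myr):
    return radius_mm / growth_mm_per_myr

# A nodule of 5 cm diameter (25 mm radius):
print(nodule_age_myr(25, 10.0))  # diagenetic rate:   2.5 million years
print(nodule_age_myr(25, 2.5))   # hydrogenetic rate: 10.0 million years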
Terrestrial origin
Formation of terrestrial ferromanganese nodules involves the accumulation of iron and manganese oxides followed by repeated redox cycles of reductive dissolution and oxidative precipitation. The oscillating redox cycle is controlled by pH, microbial activity, organic matter concentration, groundwater level, soil saturation, and redox potential. Anthropogenic activity could influence these cycles through increased nutrient loading via fertilizers. Assessment of the changing paleoclimate conditions during soil evolution can be explored by analyzing the nodule's concretion structure when combined with dating techniques. Manganese layers typically form at higher redox potentials compared to iron layers, but a period of rapid increase in redox potential can form a mixed layer. As the nodules form, trace elements including but not limited to nickel, cobalt, copper, and zinc are incorporated. Trace metal composition is a product of three processes: uptake of parent material from the surrounding soil, accumulation of the products of microbial iron- or manganese-reducing bacteria, and complexation on the nodule's surface.
Proposed mining – history of mining activities
Interest in the potential exploitation of polymetallic nodules generated a great deal of activity among prospective mining consortia in the 1960s and 1970s. Almost half a billion dollars was invested in identifying potential deposits and in research and development of technology for mining and processing nodules. These studies were carried out by four multinational consortia composed of companies from the United States, Canada, the United Kingdom, West Germany, Belgium, the Netherlands, Italy, Japan, and two groups of private companies and agencies from France and Japan. There were also three publicly sponsored entities from the Soviet Union, India and China.
In the late 1970s, two of the international joint ventures collected several hundred-ton quantities of manganese nodules from the abyssal plains of the eastern equatorial Pacific Ocean. Significant quantities of nickel (the primary target) as well as copper and cobalt were subsequently extracted from this "ore" using both pyrometallurgical and hydrometallurgical methods. In the course of these projects, a number of ancillary developments evolved, including the use of a near-bottom towed side-scan sonar array to assay the nodule population density on the abyssal silt while simultaneously performing a sub-bottom profile with a derived, vertically oriented, low-frequency acoustic beam. Since then, deep sea technology has improved significantly, including widespread and low-cost use of navigation technology such as the Global Positioning System (GPS) and ultra-short baseline (USBL) positioning; survey technology such as multibeam echosounders (MBES) and autonomous underwater vehicles (AUV); and intervention technology including remotely operated underwater vehicles (ROV) and high-power umbilical cables. There is also improved technology that could be used in mining, including pumps, tracked and screw-drive rovers, rigid and flexible drilling risers, and ultra-high-molecular-weight polyethylene rope. Mining is envisioned as similar to a potato harvest on land, in which a field is worked in long, narrow strips: the mining support vessel follows the route of the seafloor mining tools, which pick up the roughly potato-sized nodules from the seafloor.
In the 2010s, increased demand for nickel and other metals prompted commercial interest in seabed nodules. The International Seabed Authority has granted new exploration contracts and is progressing development of a mining code for the area, with most interest being in the Clarion–Clipperton zone.
Since 2011, a number of commercial companies have received exploration contracts. These include subsidiaries of larger companies including Lockheed Martin, DEME (Global Sea Mineral Resources, GSR), Keppel Corporation, The Metals Company, and China Minmetals, and smaller companies like Nauru Ocean Resources, Tonga Offshore Mining and Marawa Research and Exploration.
In July 2021, Nauru announced a plan to exploit nodules in this area, which requires the International Seabed Authority, which regulates mining in international waters, to finalize mining regulations by July 2023. Environmentalists have criticized this move on the grounds that too little is known about seabed ecosystems to understand the potential impacts of deep-sea mining, and some of the major tech companies, including Samsung and BMW, have committed to avoid using metals derived from nodules.
Proposed mining areas of manganese nodules
The Clarion–Clipperton zone is the largest and most discussed area for mining manganese nodules. Extending from approximately 120°W to 160°W, it lies in the Pacific Ocean between Hawaii and Mexico. According to the ISA, it covers an area of about four million square kilometers, almost the size of the European Union. The huge potential of the Clarion–Clipperton zone rests on an estimated 21 billion tons of nodules. Around 44 million tons of cobalt are stored in that area alone, roughly three times the amount that land reserves could provide. Manganese nodule fields are not equally distributed on the seafloor within the Clarion–Clipperton zone but occur in patches. Economically interesting patches with a high density of manganese nodules can cover areas of several thousand square kilometers. The rather irregular nodule distribution in the South Pacific may result from its greater topographic and sedimentological diversity.
The economic interest of mining manganese nodules
The high natural abundance of nickel, copper, cobalt, zinc, iron, and manganese in ferromanganese nodules has promoted research into their use as a rare metal resource. The Clarion–Clipperton zone in the northeastern Pacific Ocean contains the highest observed concentration of resource-grade nodules. To be considered resource-grade, nodules must have a combined nickel, copper, and cobalt content greater than 3% by weight. Nodule formation in oxic waters at or below the carbonate compensation depth produces the most desirable rare metal ratios in hydrogenetic nodules. As the grade of ores from terrestrial mines has decreased over time, ferromanganese nodules may offer a way to meet the growing global demand for rare metals. However, the low estimated growth rate of hydrogenetic nodules, about 2–5 mm per million years, makes them a non-renewable resource.
Technologies such as electric car batteries, wind turbines, and solar panels require rare resources that can be found on the seabed. Manganese nodules provide various sources of these metals, especially cobalt. Ongoing digitalization and the transport and energy transitions are causing a rising demand for metals such as copper, nickel, cobalt, and many other metals used in technology. Manganese nodules are therefore sought for batteries, laptops, and smartphones, for e-bikes and e-cars, for solar and wind turbines, and for the storage of green electricity. This enormous demand, for cobalt in particular, casts the ocean in a new light, and many countries have already staked their claims. Yet mining the nodules might cause even greater damage to the deep-sea ecosystem. Some scientists question the primarily economic interest in manganese nodules; as far as they are concerned, the associated biological resources could be an untapped source of value for biotechnologies and medicines and should therefore be protected at all costs.
Ecology
Ferromanganese nodules are highly redox active, allowing interaction with biogeochemical cycles primarily as an electron acceptor. Notably, terrestrial nodules take up and trap nitrogen, phosphorus, and organic carbon. The high rate of organic carbon uptake allows nodules to enhance a soil's ability to sequester carbon, creating a net sink. Phosphorus concentrations in the nodules range from 2.5 to 7 times the value of the surrounding soil matrix. Microbes in the soil can exploit the nutrient enrichment on the surface of nodules, coupled with their redox potential, to fuel metabolic pathways and release the once-immobile phosphorus. Along with nutrients, ferromanganese nodules can sequester toxic heavy metals (lead, copper, zinc, cobalt, nickel, and cadmium) from the soil, improving its quality. However, as with the microbial release of phosphorus, reductive dissolution of the nodules would release these heavy metals back into the soil.
Abiogenesis theory
A recent study hypothesizes that the nodules are a source of "dark oxygen", oxygen produced without light, which supplies the deep-ocean seafloor with oxygen. However, this study contrasts with many other studies conducted over decades in the deep sea that did not detect oxygen production and in fact showed only oxygen consumption. If the nodules can produce both electrical energy and oxygen, they may challenge conventional theories of abiogenesis, because previously only living things such as plants and algae were known to be capable of producing oxygen, via photosynthesis, which requires sunlight.
Environmental impacts of mining manganese nodules
Very little is known about deep sea ecosystems or the potential impacts of deep-sea mining. Polymetallic nodule fields are hotspots of abundance and diversity for a highly vulnerable abyssal fauna, much of which lives attached to nodules or in the sediment immediately beneath them. Nodule mining could affect tens of thousands of square kilometers of these deep sea ecosystems, which take millions of years to recover. Mining causes habitat alteration, direct mortality of benthic creatures, and smothering of filter feeders by sediment. Due to the complexity and remoteness of the deep sea, environmental scientists work in a knowledge-poor situation with many gaps and high uncertainty. Nevertheless, there are several sources of cumulative impact within a mining operation that must be considered. These impacts can be caused directly by the mining activities themselves or indirectly, for example through sedimentation plumes and deposition. Multiple impacts can arise from the same mining activity yet affect the deep-sea environment in different ways.
These could include:
discharge plumes that clog the feeding mechanisms of plankton
noise and light pollution that reduce visibility for predators
ecotoxicological, chemical, or temperature changes in water quality
destruction of seabed and habitat
The dump-truck-sized collection vehicles that scour the seafloor for nodule-bearing sediment necessarily destroy the top of the seabed, often more than three kilometers below the surface. Scientists have found that collection vehicles can have long-lasting physical and biological effects on the seafloor and can alter various deep-sea ecosystems in ways that researchers are still working to understand. This mining method leads to an inevitable loss of life among animals, and plow tracks have remained visible decades later. Recent growth estimates suggest that "microbially mediated biogeochemical functions" need over 50 years to return to their undisturbed initial state. The DISCOL impact study aimed to reveal the potential long-term impacts of deep-sea mining-related disturbances on seafloor integrity by revisiting 26-year-old plough tracks. While nodules outside the tracks appeared dusted with sediment, the plough tracks themselves were clearly devoid of nodules.
The contracts to explore for manganese nodules typically cover large areas, but the total area affected by the extraction is much greater. Within a single mining contract area, the extent of physically disturbed seabed each year can be assumed to equal the size of a large town.
Sediment laden plumes
The mining robots operating on the seabed emit plumes of sediment, which can cover fauna in the area around the mining site and therefore have a great impact on the seabed ecosystem. The plumes contain a mixture of dissolved material and suspended particles of a range of sizes. Dissolved material is transported inextricably by the water that contains it, whereas suspended particles tend to sink. The affected area can be estimated to be much bigger than the actual mined area, since finer particles and dissolved material are transported greater distances from it. Seabed accumulations of plume material will therefore be thicker, and contain larger particles, close to the source of the plume.
In addition to the plumes created by mining activities on the seabed, discharge plumes, created by the return of excess water, should also be considered. Excess water arises during the dewatering process on board the surface vessel as well as when ore slurries are transferred from the mothership to transport barges. Predictions of the net impact of plumes should therefore consider a range of scenarios. Many unknowns remain, and scientists warn that there might be toxic impacts.
Noise pollution
Human-generated sound can cause direct damage to marine animals, as many of them use sound as their primary mode of communication. The extreme background noise caused by mining machines can interfere with communication between animals and limit their ability to detect prey. Furthermore, noise and vibration can affect the auditory senses and systems of marine animals. Noise can be caused during different processes of deep-sea mining:
noise and vibration from seabed production tools
midwater plumes used to transfer ores to the surface ship
the surface ship itself
The surface vessel produces several high-intensity sounds, for example from the propellers, engines, generators, and hydraulic pumps. It is also important to consider that the ship will operate almost continuously for many years during a mining contract, which usually lasts 20–30 years.
Light pollution
Mining activities could impair the feeding and reproduction of deep-sea species through the creation of intense noise and light pollution in a naturally dark and silent environment. The light used to make undersea mining possible could attract or repel some animal species, and bright lights can blind certain marine animals. Strong lights used on the vessels can also influence birds and near-surface animals.
Reduced oxygen content
If polymetallic nodules are shown to produce a significant quantity of oxygen, removal of this oxygen source may impact the communities that depend on it.
Mitigation of environmental impacts
There is still a gap in research on how to reduce these environmental impacts, partly because the deep-ocean ecosystem itself remains largely unexplored. Some scientists suggest that one possibility would be to reduce the weight of mining vehicles, which could reduce compaction and lessen the amount of sediment disturbed at the rear of the vehicle. Since many deep-sea species depend heavily on the hard substrate of manganese nodules in their food chain, another option would be to leave at least some tracks of nodules unharvested. Due to their extremely slow growth rate, mined manganese nodules will not return for millions of years; to counter this, distributing manufactured replacement nodules could be an option, but this possibility also needs further study. The greatest mitigation benefit would come from reducing the sediment plumes and their spreading, as these affect not only the immediate surroundings but also ecosystems at considerable distances from the nodule harvesting sites. Experimental studies in the 1990s concluded in part that trial mining at a reasonable scale would likely help best constrain the real impacts of any commercial mining.
Recovery potential of seabed ecosystems
The slow recovery potential of ecosystems is one of the major concerns of nodule mining. Seabed areas that contain nodules will be massively disturbed, and the recovery of epifauna within the mined areas is exceptionally slow. A significant proportion of the animals depend on the nodules, which provide a hard substrate for them. These substrates will not return for millions of years, until new nodules have formed. Nodules grow from a few to a few tens of millimeters per million years. This extremely slow growth is neither continuous nor regular, and it differs with environment and surface. Nodules may also not grow at all, or be completely buried, for periods of time. Altogether, manganese nodules grow at an average of 10–20 mm per million years and are usually several million years old – if they are not mined. Because many deep-sea species are rare, long-lived and slow to reproduce, and because polymetallic nodules (which may take millions of years to reach a harvestable size) are an important habitat for deep-sea species, scientists cannot rule out that some species would face extinction from habitat removal due to mining. The affected ecosystems would require extremely long time periods to recover, if they recover at all. Nodule mining could affect tens of thousands of square kilometers of deep-sea ecosystems, and ecosystems take millions of years to recover.
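As a toy illustration of these timescales, the quoted growth rates can be applied to an assumed harvestable nodule diameter of 40 mm (a hypothetical figure), treating the rate as constant even though the text notes it is not:

```python
# Both quoted growth rates, applied to an assumed 40 mm harvestable nodule:
for rate_mm_per_myr in (10, 20):
    t_myr = 40 / rate_mm_per_myr
    print(f"{rate_mm_per_myr} mm/Myr -> ~{t_myr:.0f} million years to regrow")
```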
| Physical sciences | Sedimentary rocks | Earth science |
282956 | https://en.wikipedia.org/wiki/Ironstone | Ironstone | Ironstone is a sedimentary rock, either deposited directly as a ferruginous sediment or created by chemical replacement, that contains a substantial proportion of an iron ore compound from which iron (Fe) can be smelted commercially.
Not to be confused with native or telluric iron, which is very rare and found in metallic form, the term ironstone is customarily restricted to hard, coarsely banded, non-banded, and non-cherty sedimentary rocks of post-Precambrian age. The Precambrian deposits, which have a different origin, are generally known as banded iron formations. The iron minerals comprising ironstones can consist either of oxides, i.e. limonite, hematite, and magnetite; carbonates, i.e. siderite; silicates, i.e. chamosite; or some combination of these minerals.
Description
Freshly cleaved ironstone is usually grey. The brown external appearance is due to oxidation of its surface.
Ironstone, being a sedimentary rock, is not always homogeneous, and can be found in a red-and-black banded form called tiger iron, which is sometimes used for jewelry.
Sometimes ironstone hosts concretions or opal gems.
Occurrence
Ironstone occurs in a variety of forms, including siderite nodules; deeply weathered saprolite, i.e. laterite; and ooidal ironstone.
Uses
Ironstone as a source of iron
Ironstone, although widespread, is a limited source of iron. Historically, most British iron originated from ironstone, but it is now rarely used for this purpose because it is far too limited in quantity to be an economic source of iron ore.
Ceramics
Ironstone's oxide impurities render it useless as a component in ceramics: the "ironstone china" of Staffordshire and American manufacture, a fine, white, high-fired vitreous semi-porcelain, commonly used for heavy-duty dinner services in the 19th century, includes no ironstone in its production. Its "iron" quality is in its resistance to chipping.
In construction
The stone can be used as a building material. Examples include the parish churches at Kirby Bellars and South Croxton in Leicestershire, and Eydon Hall in Northamptonshire.
In art
Sculptures carved out of ironstone are rare. One example is Henry Moore's Head created in 1930.
| Physical sciences | Sedimentary rocks | Earth science |
282998 | https://en.wikipedia.org/wiki/Prism%20%28optics%29 | Prism (optics) | An optical prism is a transparent optical element with flat, polished surfaces that are designed to refract light. At least one surface must be angled — elements with two parallel surfaces are not prisms. The most familiar type of optical prism is the triangular prism, which has a triangular base and rectangular sides. Not all optical prisms are geometric prisms, and not all geometric prisms would count as an optical prism. Prisms can be made from any material that is transparent to the wavelengths for which they are designed. Typical materials include glass, acrylic and fluorite.
A dispersive prism can be used to break white light up into its constituent spectral colors (the colors of the rainbow) to form a spectrum as described in the following section. Other types of prisms noted below can be used to reflect light, or to split light into components with different polarizations.
Types
Dispersive
Dispersive prisms are used to break up light into its constituent spectral colors because the refractive index depends on wavelength; the white light entering the prism is a mixture of different wavelengths, each of which gets bent slightly differently. Blue light is slowed more than red light and will therefore be bent more than red light.
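The following minimal sketch (not part of the article) traces a ray through a triangular prism via Snell's law, using an assumed Cauchy model of the refractive index to show that blue light is deviated more than red. The Cauchy coefficients only roughly approximate a common crown glass.

```python
import math

def cauchy_n(wavelength_um, B=1.5046, C=0.00420):
    # Cauchy approximation n(lambda) = B + C/lambda^2; coefficients assumed.
    return B + C / wavelength_um ** 2

def deviation(n, apex_deg=60.0, incidence_deg=50.0):
    """Total angular deviation (deg) of a ray through a triangular prism."""
    A = math.radians(apex_deg)
    t1 = math.radians(incidence_deg)
    r1 = math.asin(math.sin(t1) / n)   # refraction at the entry face
    r2 = A - r1                        # internal angle at the exit face
    t2 = math.asin(n * math.sin(r2))   # refraction at the exit face
    return math.degrees(t1 + t2) - apex_deg

for name, lam in (("red", 0.656), ("blue", 0.486)):
    n = cauchy_n(lam)
    print(f"{name}: n = {n:.4f}, deviation = {deviation(n):.2f} deg")
```

With these assumed values the blue ray exits roughly 0.7° more deviated than the red one, which is exactly the wavelength separation the section describes.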
Triangular prism
Amici prism and other types of compound prisms
Littrow prism with mirror on its rear facet
Pellin–Broca prism
Abbe prism
Grism, a dispersive prism with a diffraction grating on its surface
Féry prism
Spectral dispersion is the best known property of optical prisms, although not the most frequent purpose of using optical prisms in practice.
Reflective
Reflective prisms are used to reflect light, in order to flip, invert, rotate, deviate or displace the light beam. They are typically used to erect the image in binoculars or single-lens reflex cameras – without the prisms the image would be upside down for the user.
Reflective prisms use total internal reflection to achieve near-perfect reflection of light that strikes the facets at a sufficiently oblique angle. Prisms are usually made of optical glass which, combined with anti-reflective coating of input and output facets, leads to significantly lower light loss than metallic mirrors.
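As a small illustration of the total internal reflection condition, the critical angle for an assumed glass index n = 1.5 can be computed directly; a 45° facet of a right-angle prism then exceeds it:

```python
import math

n = 1.5                                    # typical optical glass (assumed)
critical = math.degrees(math.asin(1 / n))  # ~41.8 degrees
print(f"critical angle: {critical:.1f} deg")
# A 45-90-45 prism reflects light off its hypotenuse at 45 deg > 41.8 deg,
# so the reflection is total and no metallic coating is needed.
```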
Odd number of reflections, image projects as flipped (mirrored)
triangular prism reflector, projects image sideways (chromatic dispersion is zero in case of perpendicular input and output incidence)
Roof pentaprism projects image sideways flipped along the other axis
Dove prism projects image forward
Corner-cube retroreflector projects image backwards
Even number of reflections, image projects upright (without change in handedness; may or may not be rotated)
Porro prism projects image backwards and displaced
Porro–Abbe prism projects image forward, rotated by 180° and displaced
Perger prism a development based on the Porro–Abbe prism, projects image forward, rotated by 180° and displaced
Abbe–Koenig prism projects image forward, rotated by 180° and collinear (4 internal reflections [2 reflections are on roof planes])
Bauernfeind prism projects image sideways (inclined by 45°)
Amici roof prism projects image sideways
Pentaprism projects image sideways
Schmidt–Pechan prism projects image forward, rotated by 180° (6 reflections [2 reflections are on roof planes]; composed of Bauernfeind part and Schmidt part)
Uppendahl prism projects image forward, rotated by 180° and collinear (6 reflections [2 reflections are on roof planes]; composed of 3 prisms cemented together)
Beam-splitting
Various thin-film optical layers can be deposited on the hypotenuse of one right-angled prism, and cemented to another prism to form a beam-splitter cube.
Overall optical performance of such a cube is determined by the thin layer.
In comparison with a usual glass substrate, the glass cube provides protection of the thin-film layer from both sides and better mechanical stability. The cube can also eliminate etalon effects, back-side reflection and slight beam deflection.
dichroic color filters form a dichroic prism
Polarizing cube beamsplitters have a lower extinction ratio than birefringent ones, but are less expensive
Partially-metallized mirrors provide non-polarizing beamsplitters
Air gap − When the hypotenuses of two triangular prisms are stacked very close to each other with an air gap, frustrated total internal reflection in one prism makes it possible to couple part of the radiation into a propagating wave in the second prism. The transmitted power drops exponentially with the gap width, so it can be tuned over many orders of magnitude by a micrometric screw (see the sketch after this list).
Biprism (or Fresnel biprism): two prisms joined at their bases, forming a wide vertex angle (~ 180°); used in common-path interferometry.
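The sketch promised in the air-gap item above: the coupled power across the gap falls roughly as exp(−2κd), where κ is the decay constant of the evanescent field beyond the totally reflecting facet. Prefactors are omitted and all values are illustrative assumptions.

```python
import math

n, theta_deg, lam = 1.5, 45.0, 633e-9   # glass index, incidence, HeNe line
theta = math.radians(theta_deg)
# Evanescent decay constant beyond a totally reflecting interface:
kappa = (2 * math.pi / lam) * math.sqrt(n**2 * math.sin(theta)**2 - 1)

for d in (100e-9, 300e-9, 1000e-9):     # gap widths in metres
    print(f"gap {d * 1e9:4.0f} nm: relative power ~ {math.exp(-2 * kappa * d):.2e}")
```

Going from a 100 nm to a 1000 nm gap changes the coupled power by roughly three orders of magnitude, which is why a micrometric screw suffices as a wide-range attenuator.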
Polarizing
Another class is formed by polarizing prisms, which use birefringence to split a beam of light into components of differing polarization. In the visible and UV regions, they have very low losses and their extinction ratio is typically superior to that of other types of polarizers. They may or may not employ total internal reflection;
One polarization is separated by total internal reflection:
Nicol prism
Glan–Foucault prism
Glan–Taylor prism, a high-power variant of which is also denoted as Glan–laser prism
Glan–Thompson prism
One polarization is deviated by different refraction only:
Rochon prism
Sénarmont prism
Both polarizations are deviated by refraction:
Wollaston prism
Nomarski prism – a variant of the Wollaston prism where p- and s-components emerge displaced and converging towards each other; important for differential interference contrast microscopy
Both polarizations stay parallel, but are spatially separated:
polarisation beam displacers, typically made of a thick anisotropic crystal with plane-parallel facets
These are typically made of a birefringent crystalline material like calcite, but other materials like quartz and α-BBO may be necessary for UV applications, and still other crystals extend transmission farther into the infrared spectral range.
Prisms made of isotropic materials like glass will also alter the polarization of light, as partial reflection under oblique angles does not maintain the amplitude ratio (nor phase) of the s- and p-polarized components of the light, leading in general to elliptical polarization. This is generally an unwanted effect of dispersive prisms. In some cases it can be avoided by choosing a prism geometry in which light enters and exits at perpendicular incidence, by compensation through a non-planar light trajectory, or by use of p-polarized light.
Total internal reflection alters only the mutual phase between s- and p-polarized light. At a well-chosen angle of incidence, this phase difference is close to π/4.
A Fresnel rhomb uses this effect to achieve conversion between circular and linear polarisation. The phase difference is not explicitly dependent on wavelength, but only on the refractive index, so Fresnel rhombs made of low-dispersion glasses achieve a much broader spectral range than quarter-wave plates. They displace the beam, however.
A doubled Fresnel rhomb, with quadruple reflection and zero beam displacement, can substitute for a half-wave plate.
A similar effect can also be used to make polarization-maintaining optics.
Depolarizers
Birefringent crystals can be assembled in a way that leads to apparent depolarization of the light.
Cornu depolarizer
Lyot depolarizer
Depolarization would not be observed for an ideal monochromatic plane wave; in fact both devices convert the beam's reduced temporal or spatial coherence, respectively, into decoherence of its polarization components.
Other uses
Total internal reflection in prisms finds numerous uses through optics, plasmonics and microscopy. In particular:
Prisms are used to couple propagating light to surface plasmons. Either the hypotenuse of a triangular prism is metallized (Kretschmann configuration), or evanescent wave is coupled to very close metallic surface (Otto configuration).
Some laser active media can be formed as a prism where the low-quality pump beam enters the front facet, while the amplified beam undergoes total internal reflection under grazing incidence from it. Such a design suffers less from thermal stress and is easy to pump with high-power laser diodes.
Other uses of prisms are based on their beam-deviating refraction:
Wedge prisms are used to deflect a beam of monochromatic light by a fixed angle. A pair of such prisms can be used for beam steering; by rotating the prisms the beam can be deflected into any desired angle within a conical "field of regard". The most commonly found implementation is a Risley prism pair.
Transparent windows of, e.g., vacuum chambers or cuvettes can also be slightly wedged (10' − 1°). While this does not reduce reflection, it suppresses Fabry-Pérot interferences that would otherwise modulate their transmission spectrum.
Anamorphic pair of similar, but asymmetrically placed prisms can also change the profile of a beam. This is often used to make a round beam from the elliptical output of a laser diode. With its monochromatic light, slight chromatic dispersion arising from different wedge inclination is not a problem.
Deck prisms were used on sailing ships to bring daylight below deck, since candles and kerosene lamps are a fire hazard on wooden ships.
In optometry
By shifting corrective lenses off axis, images seen through them can be displaced in the same way that a prism displaces images. Eye care professionals use prisms, as well as lenses off axis, to treat various orthoptics problems:
Diplopia (double vision)
Positive and negative fusion problems
Prism spectacles with a single prism perform a relative displacement of the two eyes, thereby correcting eso-, exo-, hyper- or hypotropia.
In contrast, spectacles with prisms of equal power for both eyes, called yoked prisms (also: conjugate prisms, ambient lenses or performance glasses) shift the visual field of both eyes to the same extent.
| Technology | Optics | null |
283007 | https://en.wikipedia.org/wiki/Prism%20%28geometry%29 | Prism (geometry) | In geometry, a prism is a polyhedron comprising an n-sided polygon base, a second base which is a translated copy (rigidly moved without rotation) of the first, and n other faces, necessarily all parallelograms, joining corresponding sides of the two bases. All cross-sections parallel to the bases are translations of the bases. Prisms are named after their bases, e.g. a prism with a pentagonal base is called a pentagonal prism. Prisms are a subclass of prismatoids.
Like many basic geometric terms, the word prism () was first used in Euclid's Elements. Euclid defined the term in Book XI as “a solid figure contained by two opposite, equal and parallel planes, while the rest are parallelograms”. However, this definition has been criticized for not being specific enough in regard to the nature of the bases (a cause of some confusion amongst generations of later geometry writers).
Oblique vs right
An oblique prism is a prism in which the joining edges and faces are not perpendicular to the base faces.
Example: a parallelepiped is an oblique prism whose base is a parallelogram, or equivalently a polyhedron with six parallelogram faces.
A right prism is a prism in which the joining edges and faces are perpendicular to the base faces. This applies if and only if all the joining faces are rectangular.
The dual of a right n-prism is a right n-bipyramid.
A right prism (with rectangular sides) with regular n-gon bases has Schläfli symbol { }×{n}. It approaches a cylinder as n approaches infinity.
Special cases
A right rectangular prism (with a rectangular base) is also called a cuboid, or informally a rectangular box. A right rectangular prism has Schläfli symbol { }×{ }×{ }.
A right square prism (with a square base) is also called a square cuboid, or informally a square box.
Note: some texts may apply the term rectangular prism or square prism to both a right rectangular-based prism and a right square-based prism.
Types
Regular prism
A regular prism is a prism with regular bases.
Uniform prism
A uniform prism or semiregular prism is a right prism with regular bases and all edges of the same length.
Thus all the side faces of a uniform prism are squares.
Thus all the faces of a uniform prism are regular polygons. Also, such prisms are isogonal; thus they are uniform polyhedra. They form one of the two infinite series of semiregular polyhedra, the other series being formed by the antiprisms.
A uniform n-gonal prism has Schläfli symbol t{2,n}.
Properties
Volume
The volume of a prism is the product of the area of the base by the height, i.e. the distance between the two base faces (in the case of a non-right prism, note that this means the perpendicular distance).
The volume is therefore V = Bh, where B is the base area and h is the height.
The volume of a prism whose base is an n-sided regular polygon with side length s is therefore V = (n/4)hs²cot(π/n).
Surface area
The surface area of a right prism is A = 2B + Ph, where B is the area of the base, h the height, and P the base perimeter.
The surface area of a right prism whose base is a regular n-sided polygon with side length s, and with height h, is therefore A = (n/2)s²cot(π/n) + nsh.
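A short sketch implementing the two formulas above for a right prism with a regular n-gon base of side s and height h:

```python
import math

def regular_prism_volume(n, s, h):
    # V = (n/4) * h * s^2 * cot(pi/n)
    return (n / 4) * h * s**2 / math.tan(math.pi / n)

def regular_prism_surface_area(n, s, h):
    base = (n / 4) * s**2 / math.tan(math.pi / n)
    return 2 * base + n * s * h          # A = 2B + P*h with perimeter P = n*s

# Example: hexagonal prism with side 1 and height 2.
print(regular_prism_volume(6, 1, 2))         # ~5.196
print(regular_prism_surface_area(6, 1, 2))   # ~17.196
```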
Symmetry
The symmetry group of a right n-sided prism with regular base is Dnh of order 4n, except in the case of a cube, which has the larger symmetry group Oh of order 48, which has three versions of D4h as subgroups. The rotation group is Dn of order 2n, except in the case of a cube, which has the larger rotation group O of order 24, which has three versions of D4 as subgroups.
The symmetry group Dnh contains inversion iff n is even.
The hosohedra and dihedra also possess dihedral symmetry, and an n-gonal prism can be constructed via the geometrical truncation of an n-gonal hosohedron, as well as through the cantellation or expansion of an n-gonal dihedron.
Schlegel diagrams
Similar polytopes
Truncated prism
A truncated prism is formed when a prism is sliced by a plane that is not parallel to its bases. A truncated prism's bases are not congruent, and its sides are not parallelograms.
Twisted prism
A twisted prism is a nonconvex polyhedron constructed from a uniform n-prism with each side face bisected on the square diagonal, by twisting the top, usually by π/n radians (180/n degrees), in the same direction, causing the sides to be concave.
A twisted prism cannot be dissected into tetrahedra without adding new vertices. The simplest twisted prism has triangle bases and is called a Schönhardt polyhedron.
An n-gonal twisted prism is topologically identical to the n-gonal uniform antiprism, but has half the symmetry group: Dn, order 2n. It can be seen as a nonconvex antiprism, with the tetrahedra removed between pairs of triangles.
Frustum
A frustum is a similar construction to a prism, with trapezoid lateral faces and differently sized top and bottom polygons.
Star prism
A star prism is a nonconvex polyhedron constructed by two identical star polygon faces on the top and bottom, being parallel and offset by a distance and connected by rectangular faces. A uniform star prism will have Schläfli symbol {p/q}×{ }, with p rectangles and 2 {p/q} faces. It is topologically identical to a p-gonal prism.
Crossed prism
A crossed prism is a nonconvex polyhedron constructed from a prism, where the vertices of one base are inverted around the center of this base (or rotated by 180°). This transforms the side rectangular faces into crossed rectangles. For a regular polygon base, the appearance is an n-gonal hourglass. All oblique edges pass through a single body center, although no vertex lies at this body centre. A crossed prism is topologically identical to an n-gonal prism.
Toroidal prism
A toroidal prism is a nonconvex polyhedron like a crossed prism, but without bottom and top base faces, and with simple rectangular side faces closing the polyhedron. This can only be done for even-sided base polygons. These are topological tori, with Euler characteristic of zero. The topological polyhedral net can be cut from two rows of a square tiling (with vertex configuration 4.4.4.4): a band of n squares, each attached to a crossed rectangle. An n-gonal toroidal prism has 2n vertices, 2n faces (n squares and n crossed rectangles), and 4n edges. It is topologically self-dual.
Prismatic polytope
A prismatic polytope is a higher-dimensional generalization of a prism. An n-dimensional prismatic polytope is constructed from two (n − 1)-dimensional polytopes, translated into the next dimension.
The prismatic n-polytope elements are doubled from the (n − 1)-polytope elements, and then new elements are created from the next lower elements.
Take an n-polytope with fi i-face elements (i = 0, ..., n). Its (n + 1)-polytope prism will have 2fi + fi−1 i-face elements. (With f−1 = 0, fn = 1.)
By dimension:
Take a polygon with n vertices and n edges. Its prism has 2n vertices, 3n edges, and n + 2 faces.
Take a polyhedron with v vertices, e edges, and f faces. Its prism has 2v vertices, 2e + v edges, 2f + e faces, and f + 2 cells.
Take a polychoron with v vertices, e edges, f faces, and c cells. Its prism has 2v vertices, 2e + v edges, 2f + e faces, 2c + f cells, and c + 2 hypercells.
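The element counts in this list all follow the single rule 2fi + fi−1; a small sketch (illustrative, not from the article) applies it mechanically:

```python
def prism_f_vector(f):
    """f = (f0, ..., f_{n-1}): proper face counts of an n-polytope.
    Returns the proper face counts of its (n+1)-dimensional prism."""
    full = list(f) + [1]            # count the polytope body as its one n-face
    out, prev = [], 0
    for fi in full:
        out.append(2 * fi + prev)   # 2*fi translated copies + f(i-1) extrusions
        prev = fi
    return out

print(prism_f_vector([5, 5]))       # pentagon -> [10, 15, 7]
print(prism_f_vector([8, 12, 6]))   # cube     -> [16, 32, 24, 8] (tesseract)
```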
Uniform prismatic polytope
A regular n-polytope represented by Schläfli symbol {p, q, ..., t} can form a uniform prismatic (n + 1)-polytope represented by a Cartesian product of two Schläfli symbols: {p, q, ..., t}×{ }.
By dimension:
A 0-polytopic prism is a line segment, represented by an empty Schläfli symbol { }.
A 1-polytopic prism is a rectangle, made from 2 translated line segments. It is represented as the product Schläfli symbol { }×{ }. If it is square, symmetry can be reduced: { }×{ } = {4}.
Example: { }×{ }, Square, two parallel line segments, connected by two line segment sides.
A polygonal prism is a 3-dimensional prism made from two translated polygons connected by rectangles. A regular polygon {n} can construct a uniform n-gonal prism represented by the product {n}×{ }. If n = 4, with square sides symmetry, it becomes a cube: {4}×{ } = {4,3}.
Example: {5}×{ }, Pentagonal prism, two parallel pentagons connected by 5 rectangular sides.
A polyhedral prism is a 4-dimensional prism made from two translated polyhedra connected by 3-dimensional prism cells. A regular polyhedron {p, q} can construct the uniform polychoric prism, represented by the product {p, q}×{ }. If the polyhedron and the sides are cubes, it becomes a tesseract: {4, 3}×{ } = {4, 3, 3}.
Example: {5, 3}×{ }, Dodecahedral prism, two parallel dodecahedra connected by 12 pentagonal prism sides.
...
Higher order prismatic polytopes also exist as cartesian products of any two or more polytopes. The dimension of a product polytope is the sum of the dimensions of its elements. The first examples of these exist in 4-dimensional space; they are called duoprisms as the product of two polygons in 4-dimensions.
Regular duoprisms are represented as {p}×{q}, with pq vertices, 2pq edges, pq square faces, p q-gon faces, and q p-gon faces, and bounded by p q-gonal prisms and q p-gonal prisms.
For example, the 4-4 duoprism {4}×{4} is a lower symmetry form of a tesseract, as is the cubic prism {4,3}×{ }. In turn, {4}×{4}×{ } (4-4 duoprism prism), {4,3}×{4} (cube-4 duoprism) and {4,3,3}×{ } (tesseractic prism) are lower symmetry forms of a 5-cube.
| Mathematics | Three-dimensional space | null |
283610 | https://en.wikipedia.org/wiki/Juicer | Juicer | A juicer, also known as a juice extractor, is a tool used to extract juice from fruits, herbs, leafy greens and other types of vegetables in a process called juicing. It crushes, grinds, and/or squeezes the juice out of the pulp. A juicer clarifies the juice through a screening mesh to remove the pulp unlike a blender where the output contains both the liquids and solids of the processed fruit(s) or vegetable(s).
Some types of juicers can also function as a food processor. Most of the twin gear and horizontal masticating juicers have attachments for crushing herbs and spices, extruding pasta, noodles or bread sticks, making baby food and nut butter, grinding coffee, making nut milk, etc.
Types
Reamers
Reamers, or citrus squeezers, are used for squeezing juice from citrus such as grapefruits, lemons, limes, and oranges. Juice is extracted by pressing or grinding a halved citrus fruit along the reamer's ridged conical center and discarding the rind. Some reamers are stationary and require the user to press and turn the fruit, while others are electric, automatically turning the ridged center when fruit is pressed upon it.
Centrifugal juicers
A centrifugal juicer cuts up the fruit or vegetable with a flat cutting blade. It then spins the produce at a high speed to separate the juice from the pulp.
Masticating juicers
A masticating juicer, also known as a cold press juicer or slow juicer, uses a single auger to compact and crush produce into smaller sections before squeezing out its juice along a static screen, while the pulp is expelled through a separate outlet.
Triturating juicers
Triturating juicers (twin gear juicers) have twin augers to crush and press produce.
Juicing press
A juicing press, such as a fruit press or wine press, is a larger-scale press used in agricultural production. These presses can be stationary or mobile; a mobile press has the advantage that it can be moved from one orchard to another. The process is primarily used for apples and involves a stack of apple mash, wrapped in fine mesh cloth, which is then pressed under approximately 40 tonnes. These machines are popular in Europe and have now been introduced to North America.
Steam juice extractor
A stovetop steam juice extractor is typically a stack of pots: a bottom pot generates steam, which heats a batch of berries (or other fruit) held in a perforated pot stacked on top of a juice-collecting container above the steam pot. The juice is extracted without mechanical means, so it is remarkably clear, and because of the steam heating it is also pasteurized for long-term storage.
| Technology | Household appliances | null |
283810 | https://en.wikipedia.org/wiki/Mass%20spectrometry | Mass spectrometry | Mass spectrometry (MS) is an analytical technique that is used to measure the mass-to-charge ratio of ions. The results are presented as a mass spectrum, a plot of intensity as a function of the mass-to-charge ratio. Mass spectrometry is used in many different fields and is applied to pure samples as well as complex mixtures.
A mass spectrum is a type of plot of the ion signal as a function of the mass-to-charge ratio. These spectra are used to determine the elemental or isotopic signature of a sample, the masses of particles and of molecules, and to elucidate the chemical identity or structure of molecules and other chemical compounds.
In a typical MS procedure, a sample, which may be solid, liquid, or gaseous, is ionized, for example by bombarding it with a beam of electrons. This may cause some of the sample's molecules to break up into positively charged fragments or simply become positively charged without fragmenting. These ions (fragments) are then separated according to their mass-to-charge ratio, for example by accelerating them and subjecting them to an electric or magnetic field: ions of the same mass-to-charge ratio will undergo the same amount of deflection. The ions are detected by a mechanism capable of detecting charged particles, such as an electron multiplier. Results are displayed as spectra of the signal intensity of detected ions as a function of the mass-to-charge ratio. The atoms or molecules in the sample can be identified by correlating known masses (e.g. an entire molecule) to the identified masses or through a characteristic fragmentation pattern.
History of the mass spectrometer
In 1886, Eugen Goldstein observed rays in gas discharges under low pressure that traveled away from the anode and through channels in a perforated cathode, opposite to the direction of negatively charged cathode rays (which travel from cathode to anode). Goldstein called these positively charged anode rays "Kanalstrahlen"; the standard translation of this term into English is "canal rays". Wilhelm Wien found that strong electric or magnetic fields deflected the canal rays and, in 1899, constructed a device with perpendicular electric and magnetic fields that separated the positive rays according to their charge-to-mass ratio (Q/m). Wien found that the charge-to-mass ratio depended on the nature of the gas in the discharge tube. English scientist J. J. Thomson later improved on the work of Wien by reducing the pressure to create the mass spectrograph.
The word spectrograph had become part of the international scientific vocabulary by 1884. Early spectrometry devices that measured the mass-to-charge ratio of ions were called mass spectrographs which consisted of instruments that recorded a spectrum of mass values on a photographic plate. A mass spectroscope is similar to a mass spectrograph except that the beam of ions is directed onto a phosphor screen. A mass spectroscope configuration was used in early instruments when it was desired that the effects of adjustments be quickly observed. Once the instrument was properly adjusted, a photographic plate was inserted and exposed. The term mass spectroscope continued to be used even though the direct illumination of a phosphor screen was replaced by indirect measurements with an oscilloscope. The use of the term mass spectroscopy is now discouraged due to the possibility of confusion with light spectroscopy. Mass spectrometry is often abbreviated as mass-spec or simply as MS.
Modern techniques of mass spectrometry were devised by Arthur Jeffrey Dempster and F.W. Aston in 1918 and 1919 respectively.
Sector mass spectrometers known as calutrons were developed by Ernest O. Lawrence and used for separating the isotopes of uranium during the Manhattan Project. Calutron mass spectrometers were used for uranium enrichment at the Oak Ridge, Tennessee Y-12 plant established during World War II.
In 1989, half of the Nobel Prize in Physics was awarded to Hans Dehmelt and Wolfgang Paul for the development of the ion trap technique in the 1950s and 1960s.
In 2002, the Nobel Prize in Chemistry was awarded to John Bennett Fenn for the development of electrospray ionization (ESI) and Koichi Tanaka for the development of soft laser desorption (SLD) and their application to the ionization of biological macromolecules, especially proteins.
Parts of a mass spectrometer
A mass spectrometer consists of three components: an ion source, a mass analyzer, and a detector. The ionizer converts a portion of the sample into ions. There is a wide variety of ionization techniques, depending on the phase (solid, liquid, gas) of the sample and the efficiency of various ionization mechanisms for the unknown species. An extraction system removes ions from the sample, which are then targeted through the mass analyzer and into the detector. The differences in masses of the fragments allows the mass analyzer to sort the ions by their mass-to-charge ratio. The detector measures the value of an indicator quantity and thus provides data for calculating the abundances of each ion present. Some detectors also give spatial information, e.g., a multichannel plate.
Theoretical example
The following describes the operation of a spectrometer mass analyzer, which is of the sector type. (Other analyzer types are treated below.) Consider a sample of sodium chloride (table salt). In the ion source, the sample is vaporized (turned into gas) and ionized (transformed into electrically charged particles) into sodium (Na+) and chloride (Cl−) ions. Sodium atoms and ions are monoisotopic, with a mass of about 23 daltons (symbol: Da or older symbol: u). Chloride atoms and ions come in two stable isotopes with masses of approximately 35 u (at a natural abundance of about 75 percent) and approximately 37 u (at a natural abundance of about 25 percent). The analyzer part of the spectrometer contains electric and magnetic fields, which exert forces on ions traveling through these fields. The speed of a charged particle may be increased or decreased while passing through the electric field, and its direction may be altered by the magnetic field. The magnitude of the deflection of the moving ion's trajectory depends on its mass-to-charge ratio. Lighter ions are deflected by the magnetic force to a greater degree than heavier ions (based on Newton's second law of motion, F = ma). The streams of magnetically sorted ions pass from the analyzer to the detector, which records the relative abundance of each ion type. This information is used to determine the chemical element composition of the original sample (i.e. that both sodium and chlorine are present in the sample) and the isotopic composition of its constituents (the ratio of 35Cl to 37Cl).
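As a rough numerical illustration of this example (with assumed, simplified values for the accelerating potential and magnetic field), the radius of curvature r = mv/(QB) differs for the three ion species; the sign of the charge only mirrors the direction of deflection.

```python
import math

E = 1.602176634e-19     # elementary charge (C)
U = 1.66053906660e-27   # atomic mass unit (kg)

def radius(mass_u, V=2000.0, B=0.25, q=E):
    """Radius (m) of a singly charged ion's circular path after
    acceleration through potential V (volts) in field B (tesla);
    V and B are assumed instrument values, not from the article."""
    m = mass_u * U
    v = math.sqrt(2 * q * V / m)       # speed gained in the accelerating field
    return m * v / (q * B)             # r = m*v / (q*B)

for label, m in (("Na+ (23 u)", 23), ("35Cl (35 u)", 35), ("37Cl (37 u)", 37)):
    print(f"{label}: r = {radius(m) * 100:.1f} cm")
```

Under these assumptions the three species land roughly 12, 15 and 16 cm from the field axis, which is why the detector records them as separate, isotope-resolved streams.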
Creating ions
The ion source is the part of the mass spectrometer that ionizes the material under analysis (the analyte). The ions are then transported by magnetic or electric fields to the mass analyzer.
Techniques for ionization have been key to determining what types of samples can be analyzed by mass spectrometry.
Electron ionization and chemical ionization are used for gases and vapors. In chemical ionization sources, the analyte is ionized by chemical ion-molecule reactions during collisions in the source. Two techniques often used with liquid and solid biological samples include electrospray ionization (invented by John Fenn) and matrix-assisted laser desorption/ionization (MALDI, initially developed as a similar technique "Soft Laser Desorption (SLD)" by K. Tanaka for which a Nobel Prize was awarded and as MALDI by M. Karas and F. Hillenkamp).
Hard ionization and soft ionization
In mass spectrometry, ionization refers to the production of gas phase ions suitable for resolution in the mass analyser or mass filter. Ionization occurs in the ion source. There are several ion sources available; each has advantages and disadvantages for particular applications. For example, electron ionization (EI) gives a high degree of fragmentation, yielding highly detailed mass spectra which when skilfully analysed can provide important information for structural elucidation/characterisation and facilitate identification of unknown compounds by comparison to mass spectral libraries obtained under identical operating conditions. However, EI is not suitable for coupling to HPLC, i.e. LC-MS, since at atmospheric pressure, the filaments used to generate electrons burn out rapidly. Thus EI is coupled predominantly with GC, i.e. GC-MS, where the entire system is under high vacuum.
Hard ionization techniques are processes which impart high quantities of residual energy in the subject molecule invoking large degrees of fragmentation (i.e. the systematic rupturing of bonds acts to remove the excess energy, restoring stability to the resulting ion). Resultant ions tend to have m/z lower than the molecular ion (other than in the case of proton transfer and not including isotope peaks). The most common example of hard ionization is electron ionization (EI).
Soft ionization refers to the processes which impart little residual energy onto the subject molecule and as such result in little fragmentation. Examples include fast atom bombardment (FAB), chemical ionization (CI), atmospheric-pressure chemical ionization (APCI), atmospheric-pressure photoionization (APPI), electrospray ionization (ESI), desorption electrospray ionization (DESI), and matrix-assisted laser desorption/ionization (MALDI).
Inductively coupled plasma
Inductively coupled plasma (ICP) sources are used primarily for cation analysis of a wide array of sample types. In this source, a plasma that is electrically neutral overall, but that has had a substantial fraction of its atoms ionized by high temperature, is used to atomize introduced sample molecules and to further strip the outer electrons from those atoms. The plasma is usually generated from argon gas, since the first ionization energy of argon atoms is higher than the first ionization energy of any other element except He, F and Ne, but lower than the second ionization energy of all elements except the most electropositive metals. The heating is achieved by a radio-frequency current passed through a coil surrounding the plasma.
Photoionization mass spectrometry
Photoionization can be used in experiments which seek to use mass spectrometry as a means of resolving chemical kinetics mechanisms and isomeric product branching. In such instances a high-energy photon, either X-ray or UV, is used to dissociate stable gaseous molecules in a carrier gas of He or Ar. In instances where a synchrotron light source is utilized, a tuneable photon energy can be used to acquire a photoionization efficiency curve, which can be used in conjunction with the mass-to-charge ratio m/z to fingerprint molecular and ionic species. More recently, atmospheric pressure photoionization (APPI) has been developed to ionize molecules, mostly as effluents of LC-MS systems.
Ambient ionization
Some applications for ambient ionization include environmental as well as clinical applications. In these techniques, ions form in an ion source outside the mass spectrometer. Sampling becomes easy, as the samples need neither previous separation nor preparation. Some examples of ambient ionization techniques are Direct Analysis in Real Time (DART), DESI, SESI, LAESI, desorption atmospheric-pressure chemical ionization (DAPCI), Soft Ionization by Chemical Reaction in Transfer (SICRT) and desorption atmospheric pressure photoionization (DAPPI), among others.
Other ionization techniques
Others include glow discharge, field desorption (FD), fast atom bombardment (FAB), thermospray, desorption/ionization on silicon (DIOS), atmospheric pressure chemical ionization (APCI), secondary ion mass spectrometry (SIMS), spark ionization and thermal ionization (TIMS).
Mass selection
Mass analyzers separate the ions according to their mass-to-charge ratio. The following two laws govern the dynamics of charged particles in electric and magnetic fields in vacuum:
F = Q(E + v × B) (Lorentz force law);
F = ma (Newton's second law of motion in the non-relativistic case, i.e. valid only at ion velocities much lower than the speed of light).
Here F is the force applied to the ion, m is the mass of the ion, a is the acceleration, Q is the ion charge, E is the electric field, and v × B is the vector cross product of the ion velocity and the magnetic field.
Equating the above expressions for the force applied to the ion yields: (m/Q)a = E + v × B.
This differential equation is the classic equation of motion for charged particles. Together with the particle's initial conditions, it completely determines the particle's motion in space and time in terms of m/Q. Thus mass spectrometers could be thought of as "mass-to-charge spectrometers". When presenting data, it is common to use the (officially) dimensionless m/z, where z is the number of elementary charges (e) on the ion (z=Q/e). This quantity, although it is informally called the mass-to-charge ratio, more accurately speaking represents the ratio of the mass number and the charge number, z.
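A minimal numerical sketch of this equation of motion, integrated with a naive explicit Euler scheme (adequate only for illustration; production simulations use symplectic or Boris-type integrators). Field values and the time step are arbitrary assumptions; the point is that the trajectory depends only on m/Q.

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def trajectory(m_over_q, E, B, v0, dt=1e-9, steps=1000):
    """Integrate (m/Q) dv/dt = E + v x B with explicit Euler steps."""
    r, v = (0.0, 0.0, 0.0), v0
    for _ in range(steps):
        vxB = cross(v, B)
        a = tuple((E[i] + vxB[i]) / m_over_q for i in range(3))
        v = tuple(v[i] + a[i] * dt for i in range(3))
        r = tuple(r[i] + v[i] * dt for i in range(3))
    return r

# Two ions with different m/Q in the same fields end at different positions.
E_field = (0.0, 0.0, 0.0)
B_field = (0.0, 0.0, 0.25)             # tesla, assumed
for m_over_q in (2.4e-7, 3.6e-7):      # kg/C, roughly 23 u/e and 35 u/e
    print(trajectory(m_over_q, E_field, B_field, (1e5, 0.0, 0.0)))
```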
There are many types of mass analyzers, using either static or dynamic fields, and magnetic or electric fields, but all operate according to the above differential equation. Each analyzer type has its strengths and weaknesses. Many mass spectrometers use two or more mass analyzers for tandem mass spectrometry (MS/MS). In addition to the more common mass analyzers listed below, there are others designed for special situations.
There are several important analyzer characteristics. The mass resolving power is the measure of the ability to distinguish two peaks of slightly different m/z. The mass accuracy is the ratio of the m/z measurement error to the true m/z. Mass accuracy is usually measured in ppm or milli mass units. The mass range is the range of m/z amenable to analysis by a given analyzer. The linear dynamic range is the range over which ion signal is linear with analyte concentration. Speed refers to the time frame of the experiment and ultimately is used to determine the number of spectra per unit time that can be generated.
Sector instruments
A sector field mass analyzer uses a static electric and/or magnetic field to affect the path and/or velocity of the charged particles in some way.
As shown above, sector instruments bend the trajectories of the ions as they pass through the mass analyzer, according to their mass-to-charge ratios, deflecting the more charged and faster-moving, lighter ions more. The analyzer can be used to select a narrow range of m/z or to scan through a range of m/z to catalog the ions present.
Time-of-flight
The time-of-flight (TOF) analyzer uses an electric field to accelerate the ions through the same potential, and then measures the time they take to reach the detector. If the particles all have the same charge, their kinetic energies will be identical, and their velocities will depend only on their masses. For example, ions with a lower mass will travel faster, reaching the detector first. Ions usually are already moving before being accelerated by the electric field, which causes particles with the same m/z to arrive at the detector at different times. This spread in initial velocities is often independent of the ion's mass and carries through as a spread in the final velocity. It broadens the peaks shown on the count vs m/z plot, but generally does not shift the central location of the peaks, since the initial velocities are centered on zero. To fix this problem, time-lag focusing/delayed extraction has been coupled with TOF-MS.
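The basic TOF relation implied here: after acceleration through potential V, an ion of mass m and charge q crosses a field-free drift tube of length L in time t = L√(m/(2qV)). The instrument numbers below are assumptions for illustration.

```python
import math

E = 1.602176634e-19     # elementary charge (C)
U = 1.66053906660e-27   # atomic mass unit (kg)

def flight_time_us(mass_u, charge=1, V=20000.0, L=1.0):
    """Drift time in microseconds; V (volts) and L (metres) are assumed."""
    m = mass_u * U
    return L * math.sqrt(m / (2 * charge * E * V)) * 1e6

for m in (100, 1000, 10000):    # small molecule to protein-sized ions, in u
    print(f"{m:>6} u: {flight_time_us(m):6.2f} us")
```

Because t scales with √m, a tenfold mass difference stretches the arrival time by only a factor of about 3.2, which is why TOF instruments need nanosecond-scale timing electronics.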
Quadrupole mass filter
Quadrupole mass analyzers use oscillating electrical fields to selectively stabilize or destabilize the paths of ions passing through a radio frequency (RF) quadrupole field created between four parallel rods. Only the ions in a certain range of mass/charge ratio are passed through the system at any time, but changes to the potentials on the rods allow a wide range of m/z values to be swept rapidly, either continuously or in a succession of discrete hops. A quadrupole mass analyzer acts as a mass-selective filter and is closely related to the quadrupole ion trap, particularly the linear quadrupole ion trap except that it is designed to pass the untrapped ions rather than collect the trapped ones, and is for that reason referred to as a transmission quadrupole.
A magnetically enhanced quadrupole mass analyzer includes the addition of a magnetic field, either applied axially or transversely. This novel type of instrument leads to an additional performance enhancement in terms of resolution and/or sensitivity depending upon the magnitude and orientation of the applied magnetic field.
A common variation of the transmission quadrupole is the triple quadrupole mass spectrometer. The "triple quad" has three consecutive quadrupole stages, the first acting as a mass filter to transmit a particular incoming ion to the second quadrupole, a collision chamber, wherein that ion can be broken into fragments. The third quadrupole also acts as a mass filter, to transmit a particular fragment ion to the detector. If a quadrupole is made to rapidly and repetitively cycle through a range of mass filter settings, full spectra can be reported. Likewise, a triple quad can be made to perform various scan types characteristic of tandem mass spectrometry.
Ion traps
Three-dimensional quadrupole ion trap
The quadrupole ion trap works on the same physical principles as the quadrupole mass analyzer, but the ions are trapped and sequentially ejected. Ions are trapped in a mainly quadrupole RF field, in a space defined by a ring electrode (usually connected to the main RF potential) between two endcap electrodes (typically connected to DC or auxiliary AC potentials). The sample is ionized either internally (e.g. with an electron or laser beam), or externally, in which case the ions are often introduced through an aperture in an endcap electrode.
There are many mass/charge separation and isolation methods, but the most commonly used is the mass instability mode, in which the RF potential is ramped so that the orbits of ions with mass a > b are stable while ions with mass b become unstable and are ejected on the z-axis onto a detector. There are also non-destructive analysis methods.
Ions may also be ejected by the resonance excitation method, whereby a supplemental oscillatory excitation voltage is applied to the endcap electrodes, and the trapping voltage amplitude and/or excitation voltage frequency is varied to bring ions into a resonance condition in order of their mass/charge ratio.
Cylindrical ion trap
The cylindrical ion trap mass spectrometer (CIT) is a derivative of the quadrupole ion trap where the electrodes are formed from flat rings rather than hyperbolic shaped electrodes. The architecture lends itself well to miniaturization because as the size of a trap is reduced, the shape of the electric field near the center of the trap, the region where the ions are trapped, forms a shape similar to that of a hyperbolic trap.
Linear quadrupole ion trap
A linear quadrupole ion trap is similar to a quadrupole ion trap, but it traps ions in a two dimensional quadrupole field, instead of a three-dimensional quadrupole field as in a 3D quadrupole ion trap. Thermo Fisher's LTQ ("linear trap quadrupole") is an example of the linear ion trap.
A toroidal ion trap can be visualized as a linear quadrupole curved around and connected at the ends or as a cross-section of a 3D ion trap rotated on edge to form the toroid, donut-shaped trap. The trap can store large volumes of ions by distributing them throughout the ring-like trap structure. This toroidal shaped trap is a configuration that allows the increased miniaturization of an ion trap mass analyzer. Additionally, all ions are stored in the same trapping field and ejected together simplifying detection that can be complicated with array configurations due to variations in detector alignment and machining of the arrays.
As with the toroidal trap, linear traps and 3D quadrupole ion traps are the most commonly miniaturized mass analyzers due to their high sensitivity, tolerance for mTorr pressure, and capabilities for single analyzer tandem mass spectrometry (e.g. product ion scans).
Orbitrap
Orbitrap instruments are similar to Fourier-transform ion cyclotron resonance mass spectrometers (see text below). Ions are electrostatically trapped in an orbit around a central, spindle shaped electrode. The electrode confines the ions so that they both orbit around the central electrode and oscillate back and forth along the central electrode's long axis. This oscillation generates an image current in the detector plates which is recorded by the instrument. The frequencies of these image currents depend on the mass-to-charge ratios of the ions. Mass spectra are obtained by Fourier transformation of the recorded image currents.
Orbitraps have a high mass accuracy, high sensitivity and a good dynamic range.
Fourier-transform ion cyclotron resonance
Fourier-transform mass spectrometry (FTMS), or more precisely Fourier-transform ion cyclotron resonance MS, measures mass by detecting the image current produced by ions cyclotroning in the presence of a magnetic field. Instead of measuring the deflection of ions with a detector such as an electron multiplier, the ions are injected into a Penning trap (a static electric/magnetic ion trap) where they effectively form part of a circuit. Detectors at fixed positions in space measure the electrical signal of ions which pass near them over time, producing a periodic signal. Since the frequency of an ion's cycling is determined by its mass-to-charge ratio, this can be deconvoluted by performing a Fourier transform on the signal. FTMS has the advantage of high sensitivity (since each ion is "counted" more than once) and much higher resolution and thus precision.
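A hedged sketch of this detection scheme (illustrative values throughout): synthesize an image-current transient as a sum of cosines at the cyclotron frequencies f = QB/(2πm) of two ion species one mass unit apart, then recover them with a Fourier transform.

```python
import numpy as np

E, U = 1.602176634e-19, 1.66053906660e-27   # elementary charge, atomic mass unit
B = 7.0                                     # magnetic field in tesla (assumed)

def cyclotron_freq(mass_u, z=1):
    return z * E * B / (2 * np.pi * mass_u * U)

dt = 1e-7                                   # 10 MHz sampling (assumed)
t = np.arange(0, 0.1, dt)                   # 0.1 s transient
signal = sum(np.cos(2 * np.pi * cyclotron_freq(m) * t) for m in (500, 501))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=dt)

# Pick the two strongest well-separated bins, then invert f -> m/z.
picked = []
for i in spectrum.argsort()[::-1]:
    if all(abs(i - j) > 10 for j in picked):
        picked.append(i)
    if len(picked) == 2:
        break
print(sorted(E * B / (2 * np.pi * U * freqs[i]) for i in picked))  # ~[500, 501]
```

The 429 Hz difference between the two species' frequencies is easily resolved over a 0.1 s transient, which is the source of FTMS's high resolving power.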
Ion cyclotron resonance (ICR) is an older mass analysis technique similar to FTMS except that ions are detected with a traditional detector. Ions trapped in a Penning trap are excited by an RF electric field until they impact the wall of the trap, where the detector is located. Ions of different mass are resolved according to impact time.
Detectors
The final element of the mass spectrometer is the detector. The detector records either the charge induced or the current produced when an ion passes by or hits a surface. In a scanning instrument, the signal produced in the detector during the course of the scan versus where the instrument is in the scan (at what m/Q) will produce a mass spectrum, a record of ions as a function of m/Q.
Typically, some type of electron multiplier is used, though other detectors including Faraday cups and ion-to-photon detectors are also used. Because the number of ions leaving the mass analyzer at a particular instant is typically quite small, considerable amplification is often necessary to get a signal. Microchannel plate detectors are commonly used in modern commercial instruments. In FTMS and Orbitraps, the detector consists of a pair of metal surfaces within the mass analyzer/ion trap region which the ions only pass near as they oscillate. No direct current is produced; only a weak AC image current is produced in a circuit between the electrodes. Other inductive detectors have also been used.
Tandem mass spectrometry
A tandem mass spectrometer is one capable of multiple rounds of mass spectrometry, usually separated by some form of molecule fragmentation. For example, one mass analyzer can isolate one peptide from many entering a mass spectrometer. A collision cell then stabilizes the peptide ions while they collide with a gas, causing them to fragment by collision-induced dissociation (CID). A further mass analyzer then sorts the fragments produced from the peptides. Tandem MS can also be done in a single mass analyzer over time, as in a quadrupole ion trap. There are various methods for fragmenting molecules for tandem MS, including collision-induced dissociation (CID), electron capture dissociation (ECD), electron transfer dissociation (ETD), infrared multiphoton dissociation (IRMPD), blackbody infrared radiative dissociation (BIRD), electron-detachment dissociation (EDD) and surface-induced dissociation (SID). An important application using tandem mass spectrometry is in protein identification.
Tandem mass spectrometry enables a variety of experimental sequences. Many commercial mass spectrometers are designed to expedite the execution of such routine sequences as selected reaction monitoring (SRM), precursor ion scanning, product ion scanning, and neutral loss scanning.
In SRM, the first analyzer allows only a single mass through and the second analyzer monitors for multiple user-defined fragment ions over longer dwell-times than could be achieved in a full scan. This increases sensitivity.
In product ion scans, the first mass analyzer is fixed to select a particular precursor ion ("parent"), while the second is scanned to find all the fragments ("products", or "daughter ions") to which it can be fragmented in the collision cell.
In precursor ion scans, the second mass analyzer is fixed to select a particular fragment ion ("daughter"), while the first is scanned to find all possible precursor ions that could give rise to this fragment.
In neutral loss scans, the two mass analyzers are scanned in parallel, but separated by the mass of a molecular subunit of interest to the analyst. Ions are detected if they lose that fixed mass during fragmentation. This can be used to look for any chemical that is capable of losing a particular neutral group, for example a sugar residue. Together, neutral loss and precursor ion scans can be used to hunt for chemicals with particular motifs.
Another type of tandem mass spectrometry used for radiocarbon dating is accelerator mass spectrometry (AMS), which uses very high voltages, usually in the mega-volt range, to accelerate negative ions into a type of tandem mass spectrometer.
The METLIN Metabolite and Chemical Entity Database is the largest repository of experimental tandem mass spectrometry data acquired from standards. The tandem mass spectrometry data on over 930,000 molecular standards (as of January 2024) is provided to facilitate the identification of chemical entities from tandem mass spectrometry experiments. In addition to the identification of known molecules it is also useful for identifying unknowns using its similarity searching/analysis. All tandem mass spectrometry data comes from the experimental analysis of standards at multiple collision energies and in both positive and negative ionization modes.
Common mass spectrometer configurations and techniques
When a specific combination of source, analyzer, and detector becomes conventional in practice, a compound acronym may arise to designate it succinctly. One example is MALDI-TOF, which refers to a combination of a matrix-assisted laser desorption/ionization source with a time-of-flight mass analyzer. Other examples include inductively coupled plasma-mass spectrometry (ICP-MS), accelerator mass spectrometry (AMS), thermal ionization-mass spectrometry (TIMS) and spark source mass spectrometry (SSMS).
Certain applications of mass spectrometry have developed monikers that although strictly speaking would seem to refer to a broad application, in practice have come instead to connote a specific or a limited number of instrument configurations. An example of this is isotope-ratio mass spectrometry (IRMS), which refers in practice to the use of a limited number of sector based mass analyzers; this name is used to refer to both the application and the instrument used for the application.
Separation techniques combined with mass spectrometry
An important enhancement to the mass resolving and mass determining capabilities of mass spectrometry is using it in tandem with chromatographic and other separation techniques.
Gas chromatography
A common combination is gas chromatography-mass spectrometry (GC/MS or GC-MS). In this technique, a gas chromatograph is used to separate different compounds. This stream of separated compounds is fed online into the ion source, a metallic filament to which voltage is applied. This filament emits electrons which ionize the compounds. The ions can then further fragment, yielding predictable patterns. Intact ions and fragments pass into the mass spectrometer's analyzer and are eventually detected. However, the high temperatures (300 °C) used in the GC-MS injection port (and oven) can result in thermal degradation of injected molecules, thus resulting in the measurement of degradation products instead of the actual molecule(s) of interest.
Liquid chromatography
Similarly to gas chromatography MS (GC-MS), liquid chromatography-mass spectrometry (LC/MS or LC-MS) separates compounds chromatographically before they are introduced to the ion source and mass spectrometer. It differs from GC-MS in that the mobile phase is liquid, usually a mixture of water and organic solvents, instead of gas. Most commonly, an electrospray ionization source is used in LC-MS. Other popular and commercially available LC-MS ion sources are atmospheric pressure chemical ionization and atmospheric pressure photoionization. There are also some newly developed ionization techniques like laser spray.
Capillary electrophoresis–mass spectrometry
Capillary electrophoresis–mass spectrometry (CE-MS) is a technique that combines the liquid separation process of capillary electrophoresis with mass spectrometry. CE-MS is typically coupled to electrospray ionization.
Ion mobility
Ion mobility spectrometry-mass spectrometry (IMS/MS or IMMS) is a technique where ions are first separated by drift time through some neutral gas under an applied electrical potential gradient before being introduced into a mass spectrometer. Drift time is a measure of the collisional cross section relative to the charge of the ion. The duty cycle of IMS (the time over which the experiment takes place) is longer than most mass spectrometric techniques, such that the mass spectrometer can sample along the course of the IMS separation. This produces data about the IMS separation and the mass-to-charge ratio of the ions in a manner similar to LC-MS.
The duty cycle of IMS is short relative to liquid chromatography or gas chromatography separations and can thus be coupled to such techniques, producing triple modalities such as LC/IMS/MS.
Data and analysis
Data representations
Mass spectrometry produces various types of data. The most common data representation is the mass spectrum.
Certain types of mass spectrometry data are best represented as a mass chromatogram. Types of chromatograms include selected ion monitoring (SIM), total ion current (TIC), and selected reaction monitoring (SRM), among many others.
Other types of mass spectrometry data are well represented as a three-dimensional contour map. In this form, the mass-to-charge ratio (m/z) is on the x-axis, intensity on the y-axis, and an additional experimental parameter, such as time, on the z-axis.
Data analysis
Mass spectrometry data analysis is specific to the type of experiment producing the data. General characteristics of the data, such as ion polarity, the ion source used, and the sample's origin and preparation, are fundamental to understanding any dataset; these are discussed in turn below.
Many mass spectrometers work in either negative ion mode or positive ion mode. Knowing whether the observed ions are negatively or positively charged is essential: it often matters for determining the neutral mass, and it also indicates something about the nature of the molecules.
Different types of ion source result in different arrays of fragments produced from the original molecules. An electron ionization source produces many fragments, mostly singly charged radicals (odd number of electrons), whereas an electrospray source usually produces non-radical quasimolecular ions that are frequently multiply charged. Tandem mass spectrometry purposely produces fragment ions post-source and can drastically change the sort of data obtained in an experiment.
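For example, the neutral mass M of an electrospray-produced ion carrying z protons, [M + zH]z+, follows from the observed mass-to-charge value by a standard relation (given here for illustration; m_H is the proton mass, about 1.00728 u):

$$M = z \left(\frac{m}{z}\right)_{\mathrm{obs}} - z\, m_{\mathrm{H}}$$

So a peak at m/z 1001.0 with z = 10 would imply a neutral mass of about 10 × 1001.0 − 10 × 1.00728 ≈ 9999.9 u.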
Knowledge of the origin of a sample can provide insight into the component molecules of the sample and their fragmentations. A sample from a synthesis/manufacturing process will probably contain impurities chemically related to the target component. A crudely prepared biological sample will probably contain a certain amount of salt, which may form adducts with the analyte molecules in certain analyses.
Results can also depend heavily on sample preparation and how it was run/introduced. An important example is the issue of which matrix is used for MALDI spotting, since much of the energetics of the desorption/ionization event is controlled by the matrix rather than the laser power. Sometimes samples are spiked with sodium or another ion-carrying species to produce adducts rather than a protonated species.
Mass spectrometry can measure molar mass, molecular structure, and sample purity. Each of these questions requires a different experimental procedure; therefore, adequate definition of the experimental goal is a prerequisite for collecting the proper data and successfully interpreting it.
Interpretation of mass spectra
Since the precise structure or peptide sequence of a molecule is deciphered through the set of fragment masses, interpreting mass spectra requires the combined use of various techniques. Usually the first strategy for identifying an unknown compound is to compare its experimental mass spectrum against a library of mass spectra. If the search yields no matches, manual or software-assisted interpretation of the spectra must be performed. Computer simulation of the ionization and fragmentation processes occurring in the mass spectrometer is the primary tool for assigning a structure or peptide sequence to a molecule. A candidate structure, known a priori, is fragmented in silico and the resulting pattern is compared with the observed spectrum. Such simulation is often supported by a fragmentation library that contains published patterns of known decomposition reactions. Software taking advantage of this idea has been developed for both small molecules and proteins.
Mass spectra can also be analyzed using accurate mass measurements. A mass-to-charge ratio (m/z) value determined only to integer precision can represent an immense number of theoretically possible ion structures; more precise mass figures significantly reduce the number of candidate molecular formulas. A computer algorithm called a formula generator calculates all molecular formulas that theoretically fit a given mass within a specified tolerance.
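A minimal sketch of such a formula generator is shown below, assuming for illustration a small CHNO element set, invented count bounds, and an invented target mass; real tools additionally enforce chemical plausibility rules and handle charge and isotopes.

```python
# Naive formula generator: enumerate CHNO compositions whose monoisotopic
# mass falls within a tolerance of a target mass (charge ignored).
# Element masses are standard monoisotopic values; the bounds and the
# target below are made-up example inputs.
MONOISOTOPIC = {"C": 12.0, "H": 1.007825, "N": 14.003074, "O": 15.994915}

def formula_generator(target, tol, max_counts):
    """Yield {element: count} dicts with |mass - target| <= tol."""
    elements = list(max_counts)

    def recurse(i, counts, mass):
        if mass > target + tol:  # prune: adding atoms only increases mass
            return
        if i == len(elements):
            if abs(mass - target) <= tol:
                yield dict(zip(elements, counts))
            return
        el = elements[i]
        for n in range(max_counts[el] + 1):
            yield from recurse(i + 1, counts + [n], mass + n * MONOISOTOPIC[el])

    yield from recurse(0, [], 0.0)

# Example: candidate formulas near 180.0634 Da (the mass of C6H12O6)
for formula in formula_generator(180.0634, 0.005, {"C": 15, "H": 30, "N": 5, "O": 10}):
    print(formula)
```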
A recent technique for structure elucidation in mass spectrometry, called precursor ion fingerprinting, identifies individual pieces of structural information by conducting a search of the tandem spectra of the molecule under investigation against a library of the product-ion spectra of structurally characterized precursor ions.
Applications
Mass spectrometry has both qualitative and quantitative uses. These include identifying unknown compounds, determining the isotopic composition of elements in a molecule, and determining the structure of a compound by observing its fragmentation. Other uses include quantifying the amount of a compound in a sample or studying the fundamentals of gas phase ion chemistry (the chemistry of ions and neutrals in a vacuum). MS is now commonly used in analytical laboratories that study physical, chemical, or biological properties of a great variety of compounds. Quantification can be relative (analyzed relative to a reference sample) or absolute (analyzed using a standard curve method).
As an analytical technique it possesses distinct advantages, such as:
Increased sensitivity over most other analytical techniques, because the analyzer, acting as a mass-charge filter, reduces background interference
Excellent specificity from characteristic fragmentation patterns, which identify unknowns or confirm the presence of suspected compounds
Information about molecular weight
Information about the isotopic abundance of elements
Temporally resolved chemical data
A few disadvantages of the method are that it often fails to distinguish between optical and geometrical isomers, or between substituent positions (ortho, meta, para) on an aromatic ring. Its scope is also limited in identifying hydrocarbons, which produce similar fragment ions.
Isotope ratio MS: isotope dating and tracing
Mass spectrometry is also used to determine the isotopic composition of elements within a sample. Differences in mass among isotopes of an element are very small, and the less abundant isotopes of an element are typically very rare, so a very sensitive instrument is required. These instruments, sometimes referred to as isotope ratio mass spectrometers (IR-MS), usually use a single magnet to bend a beam of ionized particles towards a series of Faraday cups, which convert particle impacts to electric current. A fast online analysis of the deuterium content of water can be done using flowing afterglow mass spectrometry (FA-MS). Probably the most sensitive and accurate mass spectrometer for this purpose is the accelerator mass spectrometer (AMS), which provides ultimate sensitivity, capable of measuring individual atoms and of measuring nuclides with a dynamic range of ~10^15 relative to the major stable isotope. Isotope ratios are important markers of a variety of processes. Some isotope ratios are used to determine the age of materials, for example in carbon dating. Labeling with stable isotopes is also used for protein quantification (see protein characterization below).
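As a sketch of the arithmetic behind radiocarbon dating (the standard decay relation, not spelled out in the text above): with half-life t_{1/2} ≈ 5730 years, a sample whose measured 14C/12C ratio R_t is a fraction of the initial ratio R_0 has age

$$t = \frac{t_{1/2}}{\ln 2}\,\ln\frac{R_0}{R_t}, \qquad \text{e.g. } \frac{R_t}{R_0} = 0.25 \;\Rightarrow\; t = \frac{5730}{\ln 2}\,\ln 4 \approx 11{,}460 \text{ years (two half-lives)}.$$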
Membrane-introduction mass spectrometry: measuring gases in solution
Membrane-introduction mass spectrometry combines isotope ratio MS with a reaction chamber or cell separated by a gas-permeable membrane. This method allows the study of gases as they evolve in solution. It has been extensively used for the study of the production of oxygen by Photosystem II.
Trace gas analysis
Several techniques use ions created in a dedicated ion source and injected into a flow tube or a drift tube: selected ion flow tube (SIFT-MS) and proton transfer reaction (PTR-MS) are variants of chemical ionization dedicated to trace gas analysis of air, breath, or liquid headspace. A well-defined reaction time allows analyte concentrations to be calculated from the known reaction kinetics without the need for an internal standard or calibration.
Another technique with applications in the trace gas analysis field is secondary electrospray ionization (SESI-MS), a variant of electrospray ionization. SESI consists of an electrospray plume of pure acidified solvent that interacts with neutral vapors. Vapor molecules are ionized at atmospheric pressure when charge is transferred from the ions formed in the electrospray to the molecules. One advantage of this approach is that it is compatible with most ESI-MS systems.
Residual gas analysis
Atom probe
An atom probe is an instrument that combines time-of-flight mass spectrometry and field-evaporation microscopy to map the location of individual atoms.
Pharmacokinetics
Pharmacokinetics is often studied using mass spectrometry because of the complex nature of the matrix (often blood or urine) and the need for high sensitivity to observe low-dose and long-time-point data. The most common instrumentation used in this application is LC-MS with a triple quadrupole mass spectrometer. Tandem mass spectrometry is usually employed for added specificity. Standard curves and internal standards are used for quantitation of (usually) a single pharmaceutical in the samples. The samples represent different time points as a pharmaceutical is administered and then metabolized or cleared from the body. Blank or t=0 samples taken before administration are important in determining background and ensuring data integrity with such complex sample matrices. Much attention is paid to the linearity of the standard curve; however, it is not uncommon to use curve fitting with more complex functions, such as quadratics, since the response of most mass spectrometers is less than linear across large concentration ranges.
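A minimal sketch of such a quadratic calibration fit is shown below; the concentrations and instrument responses are invented for illustration, and real workflows add weighting, internal-standard normalization, and acceptance criteria.

```python
import numpy as np

# Hypothetical calibration points: known concentrations (ng/mL) vs. detector
# response; the numbers are invented and deliberately sub-linear at the top.
conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 500.0])
resp = np.array([1.0e3, 5.0e3, 9.9e3, 4.88e4, 9.5e4, 3.75e5])

a, b, c = np.polyfit(conc, resp, 2)  # quadratic: resp = a*conc**2 + b*conc + c

def concentration(response):
    """Invert the fitted quadratic, keeping the root on the rising branch."""
    roots = np.roots([a, b, c - response])
    real = roots[np.isreal(roots)].real
    return real[real >= 0].min()

print(concentration(2.0e4))  # estimated concentration of an unknown sample
```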
There is currently considerable interest in the use of very high sensitivity mass spectrometry for microdosing studies, which are seen as a promising alternative to animal experimentation.
Recent studies show that secondary electrospray ionization (SESI) is a powerful technique for monitoring drug kinetics via breath analysis. Because breath is produced continuously, the number of collected data points can be greatly increased. In animal studies, this approach can reduce the number of animals sacrificed. In humans, non-invasive SESI-MS analysis of breath can help study the kinetics of drugs at a personalized level.
Protein characterization
Mass spectrometry is an important method for the characterization and sequencing of proteins. The two primary methods for ionization of whole proteins are electrospray ionization (ESI) and matrix-assisted laser desorption/ionization (MALDI). In keeping with the performance and mass range of available mass spectrometers, two approaches are used for characterizing proteins. In the first, intact proteins are ionized by either of the two techniques described above and then introduced to a mass analyzer. This approach is referred to as the "top-down" strategy of protein analysis; it is, however, largely limited to low-throughput single-protein studies. In the second, proteins are enzymatically digested into smaller peptides using proteases such as trypsin or pepsin, either in solution or in gel after electrophoretic separation. Other proteolytic agents are also used. The collection of peptide products is often separated by chromatography prior to introduction to the mass analyzer. When the characteristic pattern of peptides is used for the identification of the protein, the method is called peptide mass fingerprinting (PMF); if the identification is performed using sequence data determined in tandem MS analysis, it is called de novo peptide sequencing. These procedures of protein analysis are also referred to as the "bottom-up" approach, and have also been used to analyze the distribution and position of post-translational modifications such as phosphorylation on proteins. A third, intermediate "middle-down" approach is also beginning to be used; it involves analyzing proteolytic peptides that are larger than the typical tryptic peptide.
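As a rough illustration of the in silico side of peptide mass fingerprinting, the sketch below digests an invented sequence using the common trypsin rule (cleave after K or R, but not before P) and computes neutral monoisotopic peptide masses; real PMF engines also handle missed cleavages, modifications, and adducts.

```python
import re

# Standard monoisotopic residue masses in daltons; a free peptide adds one water.
RESIDUE = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
    'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
    'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
    'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
    'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER = 18.010565

def tryptic_peptides(sequence):
    """In silico trypsin digest: cleave after K or R unless followed by P.
    (Zero-width re.split requires Python 3.7+.)"""
    return [p for p in re.split(r'(?<=[KR])(?!P)', sequence) if p]

def peptide_mass(peptide):
    """Neutral monoisotopic mass of a peptide."""
    return sum(RESIDUE[aa] for aa in peptide) + WATER

# Invented example sequence
for pep in tryptic_peptides("MKWVTFISLLLLFSSAYSRGVFRR"):
    print(pep, round(peptide_mass(pep), 4))
```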
Space exploration
As a standard method for analysis, mass spectrometers have reached other planets and moons. Two were taken to Mars by the Viking program. In early 2005 the Cassini–Huygens mission delivered a specialized GC-MS instrument aboard the Huygens probe through the atmosphere of Titan, the largest moon of Saturn. This instrument analyzed atmospheric samples along its descent trajectory and was able to vaporize and analyze samples of Titan's frozen, hydrocarbon-covered surface once the probe had landed. These measurements compared the isotopic abundances in each sample with Earth's natural abundances. Also on board the Cassini–Huygens spacecraft was an ion and neutral mass spectrometer, which took measurements of Titan's atmospheric composition as well as the composition of Enceladus' plumes. A Thermal and Evolved Gas Analyzer mass spectrometer was carried by the Mars Phoenix Lander launched in 2007.
Mass spectrometers are also widely used in space missions to measure the composition of plasmas. For example, the Cassini spacecraft carried the Cassini Plasma Spectrometer (CAPS), which measured the mass of ions in Saturn's magnetosphere.
Respired gas monitor
Mass spectrometers were used in hospitals for respiratory gas analysis beginning around 1975 through the end of the century. Some are probably still in use but none are currently being manufactured.
Found mostly in the operating room, they were a part of a complex system, in which respired gas samples from patients undergoing anesthesia were drawn into the instrument through a valve mechanism designed to sequentially connect up to 32 rooms to the mass spectrometer. A computer directed all operations of the system. The data collected from the mass spectrometer was delivered to the individual rooms for the anesthesiologist to use.
What made this magnetic sector mass spectrometer unique may have been its plane of detectors, each purposely positioned to collect one of the ion species expected to be in the samples, which allowed the instrument to simultaneously report all of the gases respired by the patient. Although the mass range was limited to slightly over 120 u, fragmentation of some of the heavier molecules negated the need for a higher detection limit.
Preparative mass spectrometry
The primary function of mass spectrometry is as a tool for chemical analyses based on detection and quantification of ions according to their mass-to-charge ratio. However, mass spectrometry also shows promise for material synthesis. Ion soft landing is characterized by deposition of intact species on surfaces at low kinetic energies, which precludes the fragmentation of the incident species. The soft landing technique was first reported in 1977 for the reaction of low-energy sulfur-containing ions on a lead surface.
| Physical sciences | Analytical chemistry | null |
283974 | https://en.wikipedia.org/wiki/Coot | Coot | Coots are medium-sized water birds that are members of the rail family, Rallidae. They constitute the genus Fulica, the name being the Latin term for "coot". Coots have predominantly black plumage, and—unlike many rails—they are usually easy to see, often swimming in open water.
Taxonomy and systematics
The genus Fulica was introduced in 1758 by the Swedish naturalist Carl Linnaeus in the tenth edition of his Systema Naturae. The genus name is the Latin word for a Eurasian coot. The name was used by the Swiss naturalist Conrad Gessner in 1555. The type species is the Eurasian coot.
A group of coots is referred to as a covert or cover.
Species
The genus contains 10 extant species and one which is now extinct.
Extinct species
Recently extinct species
Fulica newtonii Milne-Edwards, 1867 – Mascarene coot (extinct, c. 1700)
Late Quaternary species
Fulica chathamensis Forbes, 1892 – Chatham Island coot (early Holocene of the Chatham Islands)
Fulica montanei Alarcón-Muñoz, Labarca & Soto-Acuña, 2020 (late Pleistocene to early Holocene of Chile)
Fulica prisca Hamilton, 1893 – New Zealand coot (early Holocene of New Zealand)
Fulica shufeldti – (late Pleistocene of Florida) possibly a paleosubspecies of Fulica americana; formerly F. minor
Fossil species
Fulica infelix Brodkorb, 1961 – (early Pliocene of Juntura, Malheur County, Oregon, USA)
Description
Coots have prominent frontal shields or other decoration on the forehead, with red to dark red eyes and coloured bills. Many have white on the under tail. The featherless shield gave rise to the expression "as bald as a coot", which the Oxford English Dictionary cites in use as early as 1430. Like other rails, they have long, lobed toes that are well adapted to soft, uneven surfaces. Coots have strong legs and can walk and run vigorously. They tend to have short, rounded wings and are weak fliers, though northern species nevertheless can cover long distances. They typically congregate in large rafts in open water. They are socially gregarious and messy aquatic feeders.
Distribution and habitat
The greatest species variety occurs in South America, and the genus likely originated there. They are common in Europe and North America. Coot species that migrate do so at night. The American coot has been observed rarely in Britain and Ireland, while the Eurasian coot is found across Asia, Australia and parts of Africa. In southern Louisiana, the coot is referred to by the French name "poule d'eau", which translates into English as "water hen".
Behaviour and ecology
Coots are omnivorous, eating mainly plant material, but also small animals, fish and eggs. They are aggressively territorial during the breeding season, but are otherwise often found in sizeable flocks on the shallow vegetated lakes they prefer.
Chick mortality occurs mainly through starvation rather than predation, as coots have difficulty feeding a large family of hatchlings on the tiny shrimp and insects that they collect. Many chicks die in the first 10 days after hatching, when they are most dependent on adults for food. Coots can be very brutal to their own young under pressure such as a lack of food: after about three days they start attacking their own chicks when they beg for food. After a short while, these attacks concentrate on the weaker chicks, which eventually give up begging and die. The coot may eventually raise only two or three out of nine hatchlings. In this attacking behaviour, the parents are said to "tousle" their young, which can result in the chick's death.
| Biology and health sciences | Gruiformes | Animals |
284027 | https://en.wikipedia.org/wiki/Kidney%20failure | Kidney failure | Kidney failure, also known as renal failure or end-stage renal disease (ESRD), is a medical condition in which the kidneys can no longer adequately filter waste products from the blood, functioning at less than 15% of normal levels. Kidney failure is classified as either acute kidney failure, which develops rapidly and may resolve, or chronic kidney failure, which develops slowly and can often be irreversible. Symptoms may include leg swelling, feeling tired, vomiting, loss of appetite, and confusion. Complications of acute and chronic failure include uremia, hyperkalemia, and volume overload. Complications of chronic failure also include heart disease, high blood pressure, and anaemia.
Causes of acute kidney failure include low blood pressure, blockage of the urinary tract, certain medications, muscle breakdown, and hemolytic uremic syndrome. Causes of chronic kidney failure include diabetes, high blood pressure, nephrotic syndrome, and polycystic kidney disease. Diagnosis of acute failure is often based on a combination of factors such as decreased urine production or increased serum creatinine. Diagnosis of chronic failure is based on a glomerular filtration rate (GFR) of less than 15 or the need for renal replacement therapy. It is also equivalent to stage 5 chronic kidney disease.
Treatment of acute failure depends on the underlying cause. Treatment of chronic failure may include hemodialysis, peritoneal dialysis, or a kidney transplant. Hemodialysis uses a machine to filter the blood outside the body. In peritoneal dialysis, specific fluid is placed into the abdominal cavity and then drained, with this process being repeated multiple times per day. Kidney transplantation involves surgically placing a kidney from someone else and then taking immunosuppressant medication to prevent rejection. Other recommended measures for chronic disease include staying active and specific dietary changes. Depression is also common among patients with kidney failure, and is associated with poor outcomes including higher risk of kidney function decline, hospitalization, and death. A recent PCORI-funded study of patients with kidney failure receiving outpatient hemodialysis found similar effectiveness between nonpharmacological and pharmacological treatments for depression.
In the United States, acute failure affects about 3 per 1,000 people a year. Chronic failure affects about 1 in 1,000 people with 3 per 10,000 people newly developing the condition each year. In Canada, the lifetime risk of kidney failure or end-stage renal disease (ESRD) was estimated to be 2.66% for men and 1.76% for women. Acute failure is often reversible while chronic failure often is not. With appropriate treatment many with chronic disease can continue working.
Classification
Kidney failure can be divided into two categories: acute kidney failure or chronic kidney failure. The type of renal failure is differentiated by the trend in the serum creatinine; other factors that may help differentiate acute kidney failure from chronic kidney failure include anemia and the kidney size on sonography as chronic kidney disease generally leads to anemia and small kidney size.
Acute kidney failure
Acute kidney injury (AKI), previously called acute renal failure (ARF), is a rapidly progressive loss of renal function, generally characterized by oliguria (decreased urine production, quantified as less than 400 mL per day in adults, less than 0.5 mL/kg/h in children or less than 1 mL/kg/h in infants); and fluid and electrolyte imbalance. AKI can result from a variety of causes, generally classified as prerenal, intrinsic, and postrenal. Many people diagnosed with paraquat intoxication experience AKI, sometimes requiring hemodialysis. The underlying cause must be identified and treated to arrest the progress, and dialysis may be necessary to bridge the time gap required for treating these fundamental causes.
Chronic kidney failure
Chronic kidney disease (CKD) can also develop slowly and, initially, show few symptoms. CKD can be the long term consequence of irreversible acute disease or part of a disease progression. CKD is divided into 5 different stages (1–5) according to the estimated glomerular filtration rate (eGFR). In CKD1 eGFR is normal and in CKD5 eGFR has decreased to less than 15 ml/min.
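A minimal sketch of this staging rule, assuming the standard stage boundaries of 90, 60, 30 and 15 mL/min (the text above states only the stage 1 and stage 5 endpoints):

```python
def ckd_stage(egfr):
    """Map an eGFR value (mL/min/1.73 m^2) to a CKD stage from 1 to 5."""
    if egfr >= 90:
        return 1  # normal or high eGFR
    if egfr >= 60:
        return 2
    if egfr >= 30:
        return 3
    if egfr >= 15:
        return 4
    return 5      # kidney failure range (eGFR < 15)

print(ckd_stage(12))  # -> 5
```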
Acute-on-chronic kidney failure
Acute kidney injuries can be present on top of chronic kidney disease, a condition called acute-on-chronic kidney failure (AoCRF). The acute part of AoCRF may be reversible, and the goal of treatment, as with AKI, is to return the person to baseline kidney function, typically measured by serum creatinine. Like AKI, AoCRF can be difficult to distinguish from chronic kidney disease if the person has not been monitored by a physician and no baseline (i.e., past) blood work is available for comparison.
Signs and symptoms
Symptoms can vary from person to person. Someone in early stage kidney disease may not feel sick or notice symptoms as they occur. When the kidneys fail to filter properly, waste accumulates in the blood and the body, a condition called azotemia. Very low levels of azotemia may produce few, if any, symptoms. If the disease progresses, symptoms become noticeable (if the failure is of sufficient degree to cause symptoms). Kidney failure accompanied by noticeable symptoms is termed uraemia.
Symptoms of kidney failure include the following:
High levels of urea in the blood, which can result in:
Vomiting or diarrhea (or both) that may lead to dehydration
Nausea
Weight loss
Nocturnal urination (nocturia)
More frequent urination, or in greater amounts than usual, with pale urine
Less frequent urination, or in smaller amounts than usual, with dark coloured urine
Blood in the urine
Pressure, or difficulty urinating
Unusual amounts of urination, usually in large quantities
A buildup of phosphates in the blood that diseased kidneys cannot filter out may cause:
Itching
Bone damage
Nonunion in broken bones
Muscle cramps (caused by low levels of calcium which can be associated with hyperphosphatemia)
A buildup of potassium in the blood that diseased kidneys cannot filter out (called hyperkalemia) may cause:
Abnormal heart rhythms
Muscle paralysis
Failure of kidneys to remove excess fluid may cause:
Swelling of the hands, legs, ankles, feet, or face
Shortness of breath due to extra fluid on the lungs (may also be caused by anemia)
Polycystic kidney disease, which causes large, fluid-filled cysts on the kidneys and sometimes the liver, can cause:
Pain in the back or side
Healthy kidneys produce the hormone erythropoietin that stimulates the bone marrow to make oxygen-carrying red blood cells. As the kidneys fail, they produce less erythropoietin, resulting in decreased production of red blood cells to replace the natural breakdown of old red blood cells. As a result, the blood carries less hemoglobin, a condition known as anemia. This can result in:
Feeling tired or weak
Memory problems
Difficulty concentrating
Dizziness
Low blood pressure
Normally proteins are too large to pass through the kidneys. However, they can pass through when the glomeruli are damaged. This does not cause symptoms until extensive kidney damage has occurred, after which symptoms include:
Foamy or bubbly urine
Swelling in the hands, feet, abdomen, and face
Other symptoms include:
Appetite loss, which may include a bad taste in the mouth
Difficulty sleeping
Darkening of the skin
Excess protein in the blood
With high doses of penicillin, people with kidney failure may experience seizures
Causes
Acute kidney injury
Acute kidney injury (previously known as acute renal failure) – or AKI – usually occurs when the blood supply to the kidneys is suddenly interrupted or when the kidneys become overloaded with toxins. Causes of acute kidney injury include accidents, injuries, or complications from surgeries in which the kidneys are deprived of normal blood flow for extended periods of time. Heart-bypass surgery is an example of one such procedure.
Drug overdoses, whether accidental or from chemical overloads of drugs such as antibiotics or chemotherapy agents, along with bee stings, may also cause the onset of acute kidney injury. Unlike chronic kidney disease, however, the kidneys can often recover from acute kidney injury, allowing the person with AKI to resume a normal life. People with acute kidney injury require supportive treatment until their kidneys recover function, and they often remain at increased risk of developing future kidney failure.
Among the accidental causes of renal failure is crush syndrome, when large amounts of toxins are suddenly released into the blood circulation after a limb that has been compressed for a long time is suddenly relieved of the pressure obstructing blood flow through its tissues, causing ischemia. The resulting overload can lead to the clogging and destruction of the kidneys. It is a reperfusion injury that appears after the release of the crushing pressure. The mechanism is believed to be the release into the bloodstream of muscle breakdown products – notably myoglobin, potassium, and phosphorus – that are the products of rhabdomyolysis (the breakdown of skeletal muscle damaged by ischemic conditions). The specific action on the kidneys is not fully understood, but may be due in part to nephrotoxic metabolites of myoglobin.
Chronic kidney failure
Chronic kidney failure has numerous causes. The most common causes of chronic failure are diabetes mellitus and long-term, uncontrolled hypertension. Polycystic kidney disease is another well-known cause of chronic failure. The majority of people affected with polycystic kidney disease have a family history of the disease. Systemic lupus erythematosus (SLE) is also a known cause of chronic kidney failure. Other genetic illnesses cause kidney failure, as well.
Overuse of common drugs such as ibuprofen and acetaminophen (paracetamol) can also cause chronic kidney failure.
Some infectious disease agents, such as hantavirus, can attack the kidneys, causing kidney failure.
Genetic predisposition
The APOL1 gene has been proposed as a major genetic risk locus for a spectrum of nondiabetic renal failure in individuals of African origin; these include HIV-associated nephropathy (HIVAN), primary nonmonogenic forms of focal segmental glomerulosclerosis, and hypertension-affiliated chronic kidney disease not attributed to other etiologies. Two western African variants in APOL1 have been shown to be associated with end-stage kidney disease in African Americans and Hispanic Americans.
Diagnostic approach
Measurement for CKD
Stages of kidney failure
Chronic kidney failure is measured in five stages, which are calculated using the person's GFR, or glomerular filtration rate. Stage 1 CKD is mildly diminished renal function, with few overt symptoms. People in stages 2 and 3 need increasing levels of supportive care from their medical providers to slow and treat their renal dysfunction. People with stage 4 and 5 kidney failure usually require preparation for active treatment in order to survive. Stage 5 CKD is considered a severe illness and requires some form of renal replacement therapy (dialysis) or a kidney transplant whenever feasible.
Glomerular filtration rate
A normal GFR varies according to many factors, including sex, age, body size and ethnic background. Renal professionals consider the glomerular filtration rate (GFR) to be the best overall index of kidney function. The National Kidney Foundation offers an easy-to-use online GFR calculator for anyone who is interested in knowing their glomerular filtration rate. (A serum creatinine level, a simple blood test, is needed to use the calculator.)
Use of the term uremia
Before the advancement of modern medicine, renal failure was often referred to as uremic poisoning. Uremia was the term for the contamination of the blood with urea. It is the presence of an excessive amount of urea in blood. Starting around 1847, this included reduced urine output, which was thought to be caused by the urine mixing with the blood instead of being voided through the urethra. The term uremia is now used for the illness accompanying kidney failure.
Renal failure index
Two other urinary indices are the fractional sodium excretion (FENa) and the renal failure index (RFI). The renal failure index is equal to urine sodium times plasma creatinine divided by urine creatinine. A FENa greater than 3% or a renal failure index greater than 3 is helpful in confirming acute renal failure.
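Written out explicitly (with U and P denoting urine and plasma concentrations, a notation chosen here for illustration), the two indices are:

$$\mathrm{RFI} = \frac{U_{\mathrm{Na}} \times P_{\mathrm{Cr}}}{U_{\mathrm{Cr}}}, \qquad \mathrm{FE_{Na}} = \frac{U_{\mathrm{Na}} \times P_{\mathrm{Cr}}}{P_{\mathrm{Na}} \times U_{\mathrm{Cr}}} \times 100\%$$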
Complications
Those with end-stage renal failure who undergo haemodialysis have a higher risk than the general population of spontaneous intra-abdominal bleeding (21.2%) and non-occlusive mesenteric ischemia (18.1%). Meanwhile, those undergoing peritoneal dialysis have a higher chance of developing peritonitis and gastrointestinal perforation. However, the rate of acute pancreatitis does not differ from that of the general population.
Treatment
The treatment of acute kidney injury depends on the cause. The treatment of chronic kidney failure may include renal replacement therapy: hemodialysis, peritoneal dialysis, or kidney transplant.
Diet
In non-diabetics and people with type 1 diabetes, a low protein diet is found to have a preventive effect on progression of chronic kidney disease. However, this effect does not apply to people with type 2 diabetes. A whole food, plant-based diet may help some people with kidney disease. A high protein diet from either animal or plant sources appears to have negative effects on kidney function at least in the short term.
Slowing progression
People who receive earlier referrals to a nephrology specialist, meaning a longer time before they must start dialysis, have a shorter initial hospitalization and reduced risk of death after the start of dialysis. Other methods of reducing disease progression include minimizing exposure to nephrotoxins such as NSAIDs and intravenous contrast.
| Biology and health sciences | Specific diseases | Health |
284029 | https://en.wikipedia.org/wiki/Immunization | Immunization | Immunization, or immunisation, is the process by which an individual's immune system becomes fortified against an infectious agent (known as the immunogen). When this system is exposed to molecules that are foreign to the body, called non-self, it will orchestrate an immune response, and it will also develop the ability to quickly respond to a subsequent encounter because of immunological memory. This is a function of the adaptive immune system. Therefore, by exposing a human, or an animal, to an immunogen in a controlled way, its body can learn to protect itself: this is called active immunization. The most important elements of the immune system that are improved by immunization are the T cells, B cells, and the antibodies B cells produce. Memory B cells and memory T cells are responsible for a swift response to a second encounter with a foreign molecule. Passive immunization is direct introduction of these elements into the body, instead of production of these elements by the body itself.
Immunization happens in various ways, both in the wild and as done by human efforts in health care. Natural immunity is gained by those organisms whose immune systems succeed in fighting off a previous infection, if the relevant pathogen is one for which immunization is even possible. Natural immunity can have degrees of effectiveness (partial rather than absolute) and may fade over time (within months, years, or decades, depending on the pathogen). In health care, the main technique of artificial induction of immunity is vaccination, which is a major form of prevention of disease, whether by prevention of infection (pathogen fails to mount sufficient reproduction in the host), prevention of severe disease (infection still happens but is not severe), or both. Vaccination against vaccine-preventable diseases is a major relief of disease burden even though it usually cannot eradicate a disease. Vaccines against microorganisms that cause diseases can prepare the body's immune system, thus helping to fight or prevent an infection. The fact that mutations can cause cancer cells to produce proteins or other molecules that are known to the body forms the theoretical basis for therapeutic cancer vaccines. Other molecules can be used for immunization as well, for example in experimental vaccines against nicotine (NicVAX) or the hormone ghrelin in experiments to create an obesity vaccine.
Immunizations are widely stated to be less risky and an easier way to become immune to a particular disease than risking a milder form of the disease itself. They are important for both adults and children in that they can protect against many diseases. Immunization not only protects children against deadly diseases but also helps in developing their immune systems. Through the use of immunizations, some infections and diseases have been almost completely eradicated throughout the world. One example is polio. Thanks to dedicated health care professionals and the parents of children who vaccinated on schedule, polio has been eliminated in the U.S. since 1979. Polio is still found in other parts of the world, so certain people could still be at risk of getting it. These include people who have never had the vaccine, those who did not receive all doses of the vaccine, and those traveling to areas of the world where polio is still prevalent. Active immunization/vaccination has been named one of the "Ten Great Public Health Achievements in the 20th Century".
History
Before the introduction of vaccines, people could only become immune to an infectious disease by contracting the disease and surviving it. Smallpox (variola) was prevented in this way by inoculation, which produced a milder effect than the natural disease. The first clear reference to smallpox inoculation was made by the Chinese author Wan Quan (1499–1582) in his Douzhen xinfa (痘疹心法) published in 1549. In China, powdered smallpox scabs were blown up the noses of the healthy. The patients would then develop a mild case of the disease and from then on were immune to it. The technique did have a 0.5–2.0% mortality rate, but that was considerably less than the 20–30% mortality rate of the disease itself. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700; one by Dr. Martin Lister who received a report by an employee of the East India Company stationed in China and another by Clopton Havers. According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia. Voltaire does not speculate on where the Circassians derived their technique from, though he reports that the Chinese have practiced it "these hundred years". It was introduced into England from Turkey by Lady Mary Wortley Montagu in 1721 and used by Zabdiel Boylston in Boston the same year. In 1798 Edward Jenner introduced inoculation with cowpox (smallpox vaccine), a much safer procedure. This procedure, referred to as vaccination, gradually replaced smallpox inoculation, now called variolation to distinguish it from vaccination. Until the 1880s vaccine/vaccination referred only to smallpox, but Louis Pasteur developed immunization methods for chicken cholera and anthrax in animals and for human rabies, and suggested that the terms vaccine/vaccination should be extended to cover the new procedures. This can cause confusion if care is not taken to specify which vaccine is used e.g. measles vaccine or influenza vaccine.
Passive and active immunization
Immunization can be achieved in an active or passive manner: vaccination is an active form of immunization.
Active immunization
Active immunization can occur naturally when a person comes in contact with, for example, a microbe. The immune system will eventually create antibodies and other defenses against the microbe. The next time, the immune response against this microbe can be very efficient; this is the case in many of the childhood infections that a person only contracts once, but then is immune.
Artificial active immunization involves injecting the microbe, or parts of it, into the person before they encounter it naturally. If whole microbes are used, they are pre-treated (attenuated or inactivated).
The importance of immunization is so great that the American Centers for Disease Control and Prevention has named it one of the "Ten Great Public Health Achievements in the 20th Century".
Live attenuated vaccines have decreased pathogenicity. Their effectiveness depends on the attenuated microbe's ability to replicate, which elicits a response similar to natural infection. They are usually effective with a single dose. Examples of live, attenuated vaccines include measles, mumps, and rubella (MMR), yellow fever, varicella, rotavirus, and influenza (LAIV).
Passive immunization
Passive immunization is where pre-synthesized elements of the immune system are transferred to a person so that the body does not need to produce these elements itself. Currently, antibodies can be used for passive immunization. This method of immunization begins to work very quickly, but it is short-lasting because the antibodies are naturally broken down, and if there are no B cells to produce more antibodies, they will disappear.
Passive immunization occurs physiologically, when antibodies are transferred from mother to fetus during pregnancy, to protect the fetus before and shortly after birth.
Artificial passive immunization is normally administered by injection and is used if there has been a recent outbreak of a particular disease or as an emergency treatment for toxicity, as with tetanus. The antibodies can be produced in animals, an approach called "serum therapy", although there is a high chance of anaphylactic shock because of immunity against the animal serum itself. Thus, humanized antibodies produced in vitro by cell culture are used instead if available.
Economics of immunizations
Positive externality
Immunizations impose what is known as a positive consumer externality on society: in addition to protecting the individual against certain antigens, they add protection to all other individuals in society through herd immunity. Because this extra protection is not accounted for in the market transactions for immunizations, the marginal benefit of each immunization is undervalued. This market failure arises because individuals make decisions based on their private marginal benefit rather than the social marginal benefit. Society's undervaluing of immunizations means that normal market transactions end up at a quantity lower than what is socially optimal.
For example, if individual A values their own immunity to an antigen at $100 but the immunization costs $150, individual A will decide against receiving immunization. However, if the added benefit of herd immunity means person B values person A's immunity at $70 then the total social marginal benefit of their immunization is $170. Individual A's private marginal benefit being lower than the social marginal benefit leads to an under-consumption of immunizations.
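The sketch below simply restates the arithmetic of this example in code; the minimum subsidy that would internalize the externality ($50 here) follows directly from the numbers above.

```python
# Individual A buys only if private benefit covers the cost, but society
# values the immunization more because of the herd-immunity externality.
cost = 150
private_benefit = 100            # A's own valuation of immunity
external_benefit = 70            # B's valuation of A's immunity
social_benefit = private_benefit + external_benefit   # 170

buys_unaided = private_benefit >= cost        # False: under-consumption
socially_worthwhile = social_benefit >= cost  # True
min_subsidy = cost - private_benefit          # 50 makes A willing to buy

print(buys_unaided, socially_worthwhile, min_subsidy)
```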
Socially optimal outcome
Having private marginal benefits lower than social marginal benefits will always lead to under-consumption of a good. The size of the disparity is determined by the value that society places on each different immunization. Often, immunizations do not reach a socially optimal quantity high enough to eradicate the antigen. Instead, they reach a social quantity that allows for an "optimal" number of sick individuals. Most of the commonly immunized diseases in the United States still see a small presence with occasional larger outbreaks. Measles is a good example of a disease whose social optimum leaves enough room for outbreaks in the United States, which often lead to the deaths of a handful of individuals.
There are also examples of illnesses so dangerous that the social optimum ended with the eradication of the virus, such as smallpox. In these cases, the social marginal benefit is so large that society is willing to pay the cost to reach a level of immunization that makes the spread and survival of the disease impossible.
Despite the severity of certain illnesses, the cost of immunization versus the social marginal benefit means that total eradication is not always the end goal of immunization. Though it is hard to tell exactly where the socially optimal outcome is, we know that it is not the eradication of all disease for which an immunization exists.
Internalizing the externality
In order to internalize the positive externality imposed by immunizations, payments equal to the external marginal benefit must be made. In countries like the United States these payments usually come in the form of government subsidies. Before 1962 immunization programs in the United States were run at the local and state levels of government. The inconsistency in subsidies led to some regions of the United States reaching the socially optimal quantity while other regions were left without subsidies and remained at the private marginal benefit level of immunizations. Since 1962 and the Vaccination Assistance Act, the United States as a whole has been moving towards the socially optimal outcome on a larger scale. Despite government subsidies, it is difficult to tell when the social optimum has been achieved. In addition to the difficulty of determining the true social marginal benefit of immunizations, cultural movements shift private marginal benefit curves. Vaccine controversies have changed the way some private citizens view the marginal benefit of being immunized. If individual A believes that there is a large health risk, possibly larger than the antigen itself, associated with immunization, they will not be willing to pay for or receive immunization. With fewer willing participants and a widening gap between private and social marginal benefit, reaching a social optimum becomes more difficult for governments to achieve through subsidies.
Outside of government intervention through subsidies, nonprofit organizations can also move a society towards the socially optimal outcome by providing free immunizations to developing regions. Without the ability to afford the immunizations in the first place, developing societies will not be able to reach even the quantity determined by private marginal benefits. By running immunization programs, organizations are able to move privately under-immunized communities towards the social optimum.
Race, ethnicity and immunization
In the United States, race and ethnicity are strong determinants of utilization of preventive and therapeutic health services as well as health outcomes. Rates of infant mortality and most of the leading causes of overall mortality have been higher in African Americans than in European Americans. A recent analysis of mortality from influenza and pneumonia revealed that African Americans died of these causes at higher rates than European Americans in 1999–2018. Contributing to these racial disparities are lower rates of immunization against influenza and pneumococcal pneumonia. During the COVID-19 pandemic, death rates have been higher in African Americans than European Americans and vaccination rates have lagged in African Americans during the roll-out. Among Hispanics immunization rates are lower than those in non-Hispanic whites.
| Biology and health sciences | Concepts | Health |
284266 | https://en.wikipedia.org/wiki/Bowerbird | Bowerbird | Bowerbirds () make up the bird family Ptilonorhynchidae. They are renowned for their unique courtship behaviour, where males build a structure and decorate it with sticks and brightly coloured objects in an attempt to attract a mate.
The family has 27 species in eight genera. These are medium to large-sized passerines, ranging from the golden bowerbird, the smallest, to the great bowerbird, the largest. Their diet consists mainly of fruit but may also include insects (especially for nestlings), flowers, nectar and leaves in some species. The satin and spotted bowerbirds are sometimes considered agricultural pests owing to their habit of feeding on introduced fruit and vegetable crops, and have occasionally been killed by affected orchardists.
The bowerbirds have an Australo-Papuan distribution, with ten species endemic to New Guinea, eight endemic to Australia, and two found in both. Although their distribution is centered on the tropical regions of New Guinea and northern Australia, some species extend into central, western, and southeastern Australia. They occupy a range of different habitats, including rainforest, eucalyptus and acacia forest, and shrublands. While the females are unequivocally drab, in some species the males have bright golden-yellow and sometimes black markings.
One group with particularly inconspicuous plumage in males as well as females, but loud meowing calls, is known as "catbirds". Note that the ptilonorhynchid catbirds, the grey catbird (Dumetella carolinensis) and black catbird (Melanoptila glabrirostris) from the Americas, and the Abyssinian catbird (Parophasma galinieri, formerly placed in Sylvia) from Africa are related only by their common name; they belong to different families.
Behaviour and ecology
The Ailuroedus catbirds are monogamous, with males raising chicks with their partners, but all other bowerbirds are polygynous, with the female building the nest and raising the young alone. These latter species are commonly dimorphic, with the female being drabber in color. Female bowerbirds build a nest by laying soft materials, such as leaves, ferns, and vine tendrils, on top of a loose foundation of sticks.
All Papuan bowerbirds lay one egg, while Australian species lay one to three, with laying intervals of two days. Bowerbird eggs are around twice the weight of those of most passerines of similar size; for instance, eggs of the satin bowerbird weigh around twice what would be calculated for a passerine of its body weight. Eggs hatch after 19 to 24 days, depending on the species, and are a plain cream color for catbirds and the tooth-billed bowerbird, but in other species possess brownish wavy lines similar to eggs of Australo-Papuan babblers. In accordance with their lengthy incubation periods, bowerbirds that lay more than one egg have asynchronous hatching, but siblicide has never been observed.
Bowerbirds as a group have the longest life expectancy of any passerine family with significant banding studies. The two most studied species, the green catbird and satin bowerbird, have life expectancies of around eight to ten years and one satin bowerbird has been known to live for twenty-six years. For comparison, the common raven, the heaviest passerine species with significant banding records, has not been known to live longer than 21 years.
Courtship and mating
The most notable characteristic of bowerbirds is their extraordinarily complex courtship and mating behaviour, where males build a bower to attract mates. There are two main types of bowers. Prionodura, Amblyornis, Scenopoeetes and Archboldia bowerbirds build so-called maypole bowers, which are constructed by placing sticks around a sapling; in the former two genera these bowers have a hut-like roof. Chlamydera, Sericulus and Ptilonorhynchus bowerbirds build an avenue-type bower made of two walls of vertically placed sticks. Ailuroedus catbirds are the only species which do not construct either bowers or display courts. In and around the bower, the male places a variety of brightly colored objects he has collected. These objects (usually different among each species) may include hundreds of shells, leaves, flowers, feathers, stones, berries, and even discarded plastic items, coins, nails, rifle shells, or pieces of glass. The males spend hours arranging this collection. Bowers within a species share a general form but do show significant variation, and the collection of objects reflects the biases of males of each species and its ability to procure items from the habitat, often stealing them from neighboring bowers. Several studies of different species have shown that colors of decorations males use on their bowers match the preferences of females.
In addition to the bower construction and ornamentation, male birds perform involved courtship displays to attract the female. Research suggests the male adjusts his performance based on success and female response.
Mate-searching females commonly visit multiple bowers, often returning to preferred bowers several times, and watching males' elaborate courtship displays and inspecting the quality of the bower. Through this process the female reduces the set of potential mates. Many females end up selecting the same male, and many under-performing males are left without copulations. Females mated with top-mating males tend to return to the male the next year and search less.
It has been suggested that there is an inverse relationship between bower complexity and the brightness of plumage. There may be an evolutionary "transfer" of ornamentation in some species, from their plumage to their bowers, in order to reduce the visibility of the male and thereby its vulnerability to predation. This hypothesis is not well supported because species with vastly different bower types have similar plumage. Others have suggested that the bower functioned initially as a device that benefits females by protecting them from forced copulations and thus giving them enhanced opportunity to choose males and benefits males by enhancing female willingness to visit the bower. Evidence supporting this hypothesis comes from observations of Archbold's bowerbirds that have no true bower and have greatly modified their courtship so that the male is limited in his ability to mount the female without her cooperation. In tooth-billed bowerbirds that have no bowers, males may capture females out of the air and forcibly copulate with them. Once this initial function was established, bowers were then co-opted by females for other functions such as use in assessing males based on the quality of bower construction. Recent studies with robot female bowerbirds have shown that males react to female signals of discomfort during courtship by reducing the intensity of their potentially threatening courtship. Young females tend to be more easily threatened by intense male courtship, and these females tend to choose males based on traits not dependent on male courtship intensity.
The high degree of effort directed at mate choice by females and the large skews in mating success directed at males with quality displays suggests that females gain important benefits from mate choice. Since males have no role in parental care and give nothing to females except sperm, it is suggested that females gain genetic benefits from their mate choice. However this has not been established, in part because of the difficulty of following offspring performance since males take seven years to reach sexual maturity. One hypothesis for the evolutionary causation of the bower building display is Hamilton and Zuk's "bright bird" hypothesis, which states that sexual ornaments are indicators of general health and heritable disease resistance. Doucet and Montgomerie determined that the male bowerbird's plumage reflectance indicates internal parasitic infection, whereas the bower quality is a measure of external parasitic infection. This would suggest that the bowerbird mating display evolved due to parasite-mediated sexual selection, although there is some controversy surrounding this conclusion.
This complex mating behaviour, with its highly valued types and colors of decorations, has led some researchers to regard the bowerbirds as among the most behaviorally complex species of bird. It also provides some of the most compelling evidence that the extended phenotype of a species can play a role in sexual selection and indeed, act as a powerful mechanism to shape its evolution, as seems to be the case for humans. Inspired by their seemingly extreme courtship rituals, Charles Darwin discussed both bowerbirds and birds-of-paradise in his writings.
In addition, many species of bowerbird are superb vocal mimics. MacGregor's bowerbird, for example, has been observed imitating pigs, waterfalls, and human chatter. Satin bowerbirds commonly mimic other local species as part of their courtship display.
Bowerbirds have also been observed creating optical illusions in their bowers to appeal to mates. They arrange objects in the bower's court area from smallest to largest, creating a forced perspective which holds the attention of the female for longer. Males with objects arranged in a way that have a strong optical illusion are likely to have higher mating success.
Taxonomy and systematics
Though bowerbirds have traditionally been regarded as closely related to the birds of paradise, recent molecular studies suggest that while both families are part of the great corvid radiation that took place in or near Sahul (Australia-New Guinea), the bowerbirds are more distant from the birds of paradise than was once thought. DNA–DNA hybridization studies placed them close to the lyrebirds; however, anatomical evidence appears to contradict this placement, and the true relationship remained unresolved for a long time. Cladistic analyses in the mid-2010s usually allied bowerbirds with the Australasian treecreepers (Climacteridae), another Sahul endemic family which is highly adapted to a woodpecker-like lifestyle (woodpeckers being absent from Sahul). This putative superfamily forms part of a large basal radiation of ancient songbirds, with the lyrebirds being part of a more ancestral branch than the bowerbirds and their DNA-DNA hybridization similarities being due to the phenetic methodology which (unlike cladistic analysis) merely assesses overall similarity without accounting for convergent evolution.
Many bowerbirds (in particular the New Guinean species) are little known and even less studied. But the hypothesized relationships of three roughly equally distinct groups plus one peculiar species, inferred from courtship behaviour and external appearance, are by and large confirmed by molecular phylogenetics. Some insights from the more recent studies, however, were less expected. The tooth-billed catbird, with its unique "stagemaker" courtship, was long suspected not to be a true catbird (genus Ailuroedus). This turned out to be correct: the mtDNA data robustly resolve it as more closely related to the "maypole"-type bower builders than to Ailuroedus, and it certainly warrants separation in the genus Scenopoeetes. Also, the enigmatic "maypole-builder" genus Archboldia seems to be merely an Amblyornis with unusually heavy melanin pigmentation, as is often found in tropical rainforest birds. On the other hand, the "avenue-builders" also have a hypermelanic lineage, the satin bowerbird, but this seems well separable as the monotypic genus Ptilonorhynchus, as does the "maypole-building" golden bowerbird (as Prionodura).

Interestingly, the widely divergent "avenue-builders" may represent the oldest living lineage, with the monogamous true catbirds, which build no bower and were traditionally held to be "primitive", as the most derived group among living bowerbirds. The last common ancestor of the living bowerbirds is hypothesized to have been polygynous, with sexually dimorphic plumage: cryptic greenish in the females, and probably dark with a yellow belly in the males. Overall relationships between the true catbirds, the "maypole-builders" and the "avenue-builders" were not definitely resolvable, however, since only a small outgroup was used and outgroup effects on intra-family relationships were not tested. Even so, it is precisely this uncertainty about inter-group relationships that strongly suggests the "maypole" and "avenue" bowers are not one ancestral and one derived type, but evolved independently of one another, perhaps from a "clean stage"-type courtship arena, which all bower-building species establish at the start of bower construction and which persists in little-altered form (with just some remarkable leaves strewn about as decoration) in Scenopoeetes, almost certainly the most ancient living lineage of the "maypole-builders". Among the catbirds, the white-cheeked group (A. buccoides/geislerorum/stonii) is very likely the most ancient, which is also in line with the hypothesis that bowerbirds have become ever more drab and inconspicuous as their evolution progressed.
Genera and species
True catbirds
Genus Ailuroedus
Ochre-breasted catbird, Ailuroedus stonii
White-eared catbird, Ailuroedus buccoides
Tan-capped catbird, Ailuroedus geislerorum
Green catbird, Ailuroedus crassirostris
Spotted catbird, Ailuroedus maculosus
Huon catbird, Ailuroedus astigmaticus
Black-capped catbird, Ailuroedus melanocephalus
Northern catbird, Ailuroedus jobiensis
Arfak catbird, Ailuroedus arfakianus
Black-eared catbird, Ailuroedus melanotis
Maypole-builders (including Tooth-billed catbird)
Genus Scenopoeetes
Tooth-billed catbird, Scenopoeetes dentirostris
Genus Archboldia
Archbold's bowerbird, Archboldia papuensis
Sanford's bowerbird, Archboldia (papuensis) sanfordi (species status disputed)
Genus Amblyornis
Vogelkop bowerbird, Amblyornis inornata
MacGregor's bowerbird, Amblyornis macgregoriae
Huon bowerbird, Amblyornis germanus
Streaked bowerbird, Amblyornis subalaris
Golden-fronted bowerbird, Amblyornis flavifrons
Genus Prionodura
Golden bowerbird, Prionodura newtoniana
Avenue-builders
Genus Sericulus
Flame bowerbird, Sericulus ardens
Masked bowerbird, Sericulus aureus
Fire-maned bowerbird, Sericulus bakeri
Regent bowerbird, Sericulus chrysocephalus
Genus Ptilonorhynchus
Satin bowerbird, Ptilonorhynchus violaceus
Genus Chlamydera
Western bowerbird, Chlamydera guttata
Spotted bowerbird, Chlamydera maculata
Great bowerbird, Chlamydera nuchalis
Eastern great bowerbird, Chlamydera (nuchalis) orientalis (possibly a distinct species)
Yellow-breasted bowerbird, Chlamydera lauterbachi
Fawn-breasted bowerbird, Chlamydera cerviniventris
Fossil record
Bowerbirds have a scant fossil record that nonetheless extends to the Chattian (latest Oligocene), with the fossil species Sericuloides marynguyenae dated to 26 to 23 million years ago. It was found in Faunal Zone A deposits of the White Hunter Site at D-site Plateau of the Riversleigh World Heritage Area. S. marynguyenae was a tiny member of its family, about the same size as the golden bowerbird. It is known from the proximal end of a right carpometacarpus and the proximal end of a left tarsometatarsus. The material, though fragmentary, preserves much detail, and is overall more similar to the "avenue-builders" – in particular Chlamydera – than to the other two main groups. However, the splits between the three main groups of living bowerbirds are presumed to have occurred only in the Miocene, some time after Sericuloides lived. Thus, the fossil species may have belonged to a more basal and now entirely extinct lineage, and/or it may be considered to support the hypothesis that the "avenue-builders" are the most ancient group of bowerbirds and retain many "primitive" features in their anatomy.
Other than S. marynguyenae, as of 2023 only one other prehistoric bowerbird species is known. This has not been named, as it is only known from the distal left ulna piece QM F57970 (AR19857), also found on the D-site Plateau of Riversleigh WHA, but in interval 3 of Faunal Zone B at the Ross Scott-Orr site, in late early Miocene (Burdigalian) sediments dated to 16.55 mya. Even though this piece of fossil bone is merely some 16 mm long, it is excellently preserved, and its features are characteristic of a smallish bowerbird the size of a black-eared catbird. Bowerbird ulnae – to the extent they have been studied – differ little between genera and species, but the Miocene fossil is unlike all living members of the family in one detail or another. If anything, it resembles the presumably more advanced groups ("maypole-builders" and true catbirds) more than the "avenue-builders" and given its age it may well have been one of the earliest members of either of the former two groups.
| Biology and health sciences | Corvoidea | Animals |
284788 | https://en.wikipedia.org/wiki/Cornus | Cornus | Cornus is a genus of about 30–60 species of woody plants in the family Cornaceae, commonly known as dogwoods or cornels, which can generally be distinguished by their blossoms, berries, and distinctive bark. Most are deciduous trees or shrubs, but a few species are nearly herbaceous perennial subshrubs, and some species are evergreen. Several species have small heads of inconspicuous flowers surrounded by an involucre of large, typically white petal-like bracts, while others have more open clusters of petal-bearing flowers. The various species of dogwood are native throughout much of temperate and boreal Eurasia and North America, with China, Japan, and the southeastern United States being particularly rich in native species.
Species include the common dogwood Cornus sanguinea of Eurasia, the widely cultivated flowering dogwood (Cornus florida) of eastern North America, the Pacific dogwood Cornus nuttallii of western North America, the Kousa dogwood Cornus kousa of eastern Asia, and two low-growing boreal species, the Canadian and Eurasian dwarf cornels (or bunchberries), Cornus canadensis and Cornus suecica respectively.
Depending on botanical interpretation, the dogwoods are variously divided into one to nine genera or subgenera; a broadly inclusive genus Cornus is accepted here.
Terminology
Cornus is the Latin word for the cornel tree, Cornus mas.
The name cornel dates to the 1550s, via German from Middle Latin cornolium, ultimately from the diminutive cornuculum, of cornum, the Latin word for the cornel cherry. Cornus means "horn", presumably applied to the cherry after the example of κερασός, the Greek word for "cherry", which itself is of pre-Greek origin but reminiscent of κέρας, the Greek word for "horn".
The name "dog-tree" entered the English vocabulary before 1548, becoming "dogwood" by 1614. Once the name dogwood was affixed to this kind of tree, it soon acquired a secondary name as the hound's tree, while the fruits came to be known as "dogberries" or "houndberries" (the latter a name also for the berries of black nightshade, alluding to Hecate's hounds).
The name was explained, from as early as the 16th century itself, as derived from dag "skewer", as the wood of the tree was said to have been used to make butcher's skewers. This is uncertain, as the form *dagwood was never attested. It is also possible that the tree was named for its berry, called dogberry from at least the 1550s, where the implication could be that the quality of the berry is inferior, as it were "fit for a dog".
An older name of the dogwood in English is whipple-tree, occurring in a list of trees (as whipultre) in Geoffrey Chaucer's Canterbury Tales.
This name is cognate with the Middle Low German wipel-bom "cornel" and Dutch wepe, weype "cornel" (the wh- in Chaucer is unetymological; the word would have been Middle English wipel). The tree was so named for waving its branches, cf. Middle Dutch wepelen "totter, waver", Frisian wepeln, German wippen.
The name whippletree, also whiffle-tree, now refers to an element of the traction of a horse-drawn cart, linking the draw pole of the cart to the harnesses of the horses in file. In this sense it is first recorded in 1733. The mechanism was usually made from oak or ash (not from dogwood), and it is unlikely that there is any connection to whipple-tree as a name for Cornus.
Description
Dogwoods have simple, untoothed leaves with the veins curving distinctively as they approach the leaf margins. Most dogwood species have opposite leaves, while a few, such as Cornus alternifolia and C. controversa, have their leaves alternate. Dogwood flowers have four parts. In many species, the flowers are borne separately in open (but often dense) clusters, while in various other species (such as the flowering dogwood), the flowers themselves are tightly clustered, lacking showy petals, but surrounded by four to six large, typically white petal-like bracts.
The fruits of all dogwood species are drupes with one or two seeds, often brightly colored. The drupes of species in the subgenus Cornus are edible, though many are without much flavor. Cornus kousa and Cornus mas are sold commercially as edible fruit trees. The fruits of Cornus kousa have a sweet, tropical-pudding-like flavor, though with hard pits. The fruits of Cornus mas are both tart and sweet when completely ripe. They have been eaten in Eastern Europe for centuries, both as food and as medicine to fight colds and flu, and they are very high in vitamin C. By contrast, the fruits of species in subgenus Swida are mildly toxic to people, though readily eaten by birds.
Dogwoods are used as food plants by the larvae of some species of butterflies and moths, including the emperor moth, the engrailed, the small angle shades, and the following case-bearers of the genus Coleophora: C. ahenella, C. salicivorella (recorded on Cornus canadensis), C. albiantennaella, C. cornella and C. cornivorella, with the latter three all feeding exclusively on Cornus.
Uses
Dogwoods are widely planted horticulturally, and the dense wood of the larger-stemmed species is valued for certain specialized purposes; cutting boards and fine turnings can be made from this fine-grained and attractive wood. More than 32 varieties of game birds, including quail, feed on the red seeds.
Horticulture
Various species of Cornus, particularly the flowering dogwood (Cornus florida), are ubiquitous in American gardens and landscaping; horticulturist Donald Wyman stated, "There is a dogwood for almost every part of the U.S. except the hottest and driest areas". In contrast, in Northwest Europe the lack of sharp winters and hot summers makes Cornus florida very shy of flowering.
Other Cornus species are stoloniferous shrubs that grow naturally in wet habitats and along waterways. Several of these are used along highways and in naturalizing landscape plantings, especially those species with bright red or bright yellow stems, particularly conspicuous in winter, such as Cornus stolonifera.
The following cultivars, of mixed or uncertain origin, have gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017):
'Eddie's White Wonder'
'Norman Hadden'
'Ormonde'
'Porlock'
Fruits
The species Cornus mas is commonly cultivated in southeastern Europe for its showy, edible berries, which have the color of the carnelian gemstone. Cornelian cherries have one seed each and are used in syrups and preserves.
Wood
Dense and fine-grained, dogwood timber has a specific gravity of about 0.79 and is highly prized for making loom shuttles, tool handles, roller skates, and other small items that require a very hard and strong wood. Though difficult to work, dogwood is favored by some artisans for small projects such as walking canes, arrow making, mountain dulcimers, and fine inlays. It is an excellent substitute for persimmon wood in the heads of certain golf clubs ("woods"). Dogwood lumber is rarely available commercially; anyone wanting to use it must generally cut it themselves.
Larger items have also been occasionally made of dogwood, such as the screw-in basket-style wine or fruit presses. The first kinds of laminated tennis rackets were also made from this wood, cut into thin strips.
Dogwood twigs were used by U.S. pioneers to brush their teeth. They would peel off the bark, bite the twig and then scrub their teeth.
Traditional medicine
The bark of Cornus species is rich in tannins and has been used in traditional medicine as a substitute for quinine. During the American Civil War, Confederate soldiers made a tea from the bark to treat pain and fevers, and used dogwood leaves in a poultice to cover wounds.
The Japanese cornel, C. officinalis, is used in traditional Chinese medicine as shān zhū yú for several minor ailments.
Classification
The following classification recognizes a single, inclusive genus Cornus, with four subgroups and ten subgenera supported by molecular phylogeny. Geographical ranges as native plants are given below. In addition, cultivated species occasionally persist or spread from plantings beyond their native ranges, but are rarely if ever locally invasive.
Blue- or white-fruited dogwoods
Paniculate or corymbose cymes; bracts minute, nonmodified; fruits globose or subglobose, white, blue, or black:
Subgenus Yinquania. Leaves opposite to subopposite; fall blooming.
Cornus oblonga. East Asia from Pakistan through the Himalayas and China.
Cornus peruviana. Costa Rica and Venezuela to Bolivia.
Subgenus Kraniopsis. Leaves opposite; summer blooming.
Cornus alba (Siberian dogwood). Siberia and northern China.
Cornus amomum (silky dogwood). Eastern U.S. east of the Great Plains except for the Deep South.
Cornus asperifolia (toughleaf dogwood). Southeastern U.S.
Cornus austrosinensis (South China dogwood). East Asia.
Cornus bretschneideri (Bretschneider's dogwood). Northern China.
Cornus coreana (Korean dogwood). Northeast Asia.
Cornus drummondii (roughleaf dogwood). U.S. between Appalachia and the Great Plains, and southern Ontario, Canada.
Cornus excelsa. Mexico to Honduras.
Cornus foemina (stiff dogwood) Southeastern and southern United States.
Cornus glabrata (brown dogwood or smooth dogwood). Western North America.
Cornus hemsleyi (Hemsley's dogwood). Southwest China.
Cornus koehneana (Koehne's dogwood). Southwest China.
Cornus macrophylla (large-leafed dogwood). East Asia.
Cornus obliqua (pale dogwood). Northeastern and central U.S., and southeastern Canada.
Cornus paucinervis. China.
Cornus racemosa (northern swamp dogwood or gray dogwood). Northeastern and central U.S., and extreme southeastern Canada.
Cornus rugosa (round-leaf dogwood). Northeastern and north-central U.S., and southeastern Canada.
Cornus sanguinea (common dogwood). Europe.
Cornus sericea (red osier dogwood). Northern and western North America, except Arctic regions.
Cornus walteri (Walter's dogwood). Central China.
Cornus wilsoniana (ghost dogwood). China.
Cornus × arnoldiana (Hybrid: C. obliqua × C. racemosa). Eastern North America.
Subgenus Mesomora. Leaves alternate; summer blooming.
Cornus alternifolia (pagoda dogwood or alternate-leaf dogwood). Eastern U.S. and southeastern Canada.
Cornus controversa (table dogwood). East Asia.
Cornelian cherries
Umbellate cymes; bracts modified, non-petaloid; fruits oblong, red; stone walls filled with cavities:
Subgenus Afrocrania. Dioecious, bracts 4.
Cornus volkensii. Afromontane eastern Africa.
Subgenus Cornus. Plants hermaphroditic, bracts 4 or 6
Cornus eydeana. Yunnan in China
Cornus mas (European cornel or Cornelian-cherry). Mediterranean.
Cornus officinalis (Japanese cornel). China, Japan, Korea.
Cornus piggae (Late Paleocene, North Dakota)
Cornus sessilis (blackfruit cornel). California.
Subgenus Sinocornus. Plants hermaphroditic, bracts 4 or 6
Cornus chinensis (Chinese cornel). China.
Big-bracted dogwoods
Capitular cymes:
Subgenus Discocrania. Bracts 4, modified, non-petaloid; fruits oblong, red.
Cornus disciflora. Mexico and Central America
Subgenus Cynoxylon. Bracts 4 or 6, large and petaloid, fruits oblong, red.
Cornus florida (flowering dogwood). U.S. east of the Great Plains, north to southern Ontario.
Cornus nuttallii (Pacific dogwood). Western North America, from British Columbia to California.
Subgenus Syncarpea. Bracts 4, large and petaloid, fruits red, fused into a compound multi-stoned berry.
Cornus capitata (Himalayan flowering dogwood). Himalaya.
Cornus hongkongensis (Hong Kong dogwood). Southern China, Laos, Vietnam.
Cornus kousa (Kousa dogwood). Japan and (as subsp. chinensis) central and northern China.
Cornus multinervosa. Yunnan and Sichuan provinces of China
Dwarf dogwoods
Minute corymbose cymes; bracts 4, petaloid; fruit globose, red; rhizomatous herb:
Subgenus Arctocrania.
Cornus canadensis (Canadian dwarf cornel or bunchberry). Northern North America, southward in the Appalachian and Rocky Mountains.
Cornus suecica (Eurasian dwarf cornel or bunchberry). Northern Eurasia, locally in extreme northeast and northwest North America.
Cornus × unalaschkensis (Hybrid: C. canadensis × C. suecica). Aleutian Islands (Alaska), Greenland, and Labrador and Newfoundland in Canada.
Cornus wardiana (Evergreen dwarf cornel or bunchberry). Northern Myanmar.
Incertae sedis (unplaced)
Cornus clarnensis (Middle Eocene, Central Oregon)
Horticultural hybrids
Cornus × rutgersensis (Hybrid: C. florida × C. kousa). Horticulturally developed.
Cultural references
The inflorescence of the Pacific dogwood (Cornus nuttallii) is the official flower of the province of British Columbia. The flowering dogwood (Cornus florida) and its inflorescence are the state tree and the state flower respectively for the U.S. Commonwealth of Virginia. It is also the state tree of Missouri and the state flower of North Carolina, and the state memorial tree of New Jersey. The term "dogwood winter", in colloquial use in the American Southeast, especially Appalachia, is sometimes used to describe a cold snap in spring, presumably because farmers believed it was not safe to plant their crops until after the dogwoods blossomed.
Anne Morrow Lindbergh gives a vivid description of the dogwood tree in her poem "Dogwood".
Nicole Dollanganger released a song titled "Dogwood" on her 2023 album Married in Mount Airy.
| Biology and health sciences | Others | null |
284851 | https://en.wikipedia.org/wiki/Draco%20%28constellation%29 | Draco (constellation) | Draco is a constellation in the far northern sky. Its name is Latin for dragon. It was one of the 48 constellations listed by the 2nd century Greek astronomer Ptolemy, and remains one of the 88 modern constellations today. The north pole of the ecliptic is in Draco. Draco is circumpolar from northern latitudes, meaning that it never sets and can be seen at any time of year.
Features
Stars
Thuban (α Draconis) was the northern pole star from 3942 BC, when it moved farther north than Theta Boötis, until 1793 BC. The Egyptian Pyramids were designed to have one side facing north, with an entrance passage geometrically aligned so that Thuban would be visible at night. Due to the effects of precession, it will again be the pole star around the year AD 21000. It is a blue-white giant star of magnitude 3.7, 309 light-years from Earth. The traditional name of Alpha Draconis, Thuban, means "head of the serpent".
There are three stars above magnitude 3 in Draco. The brightest of the three, and the brightest star in Draco, is Gamma Draconis, traditionally called Etamin or Eltanin. It is an orange giant star of magnitude 2.2, 148 light-years from Earth. The aberration of starlight was discovered in 1728 when James Bradley observed Gamma Draconis. Nearby Beta Draconis, traditionally called Rastaban, is a yellow giant star of magnitude 2.8, 362 light-years from Earth. Its name shares a meaning with Thuban, "head of the serpent".
Draco is home to several double stars and binary stars. Eta Draconis (traditionally called Athebyne) is a double star with a yellow-hued primary of magnitude 2.8 and a white-hued secondary of magnitude 8.2 located south of the primary; the two are separated by 4.8 arcseconds. Mu Draconis, traditionally called Alrakis, is a binary star with two white components of magnitudes 5.6 and 5.7 that orbit each other every 670 years; the system is 88 light-years from Earth. Nu Draconis is a similar binary star with two white components, 100 light-years from Earth; both components are of magnitude 4.9 and can be distinguished in a small amateur telescope or a pair of binoculars. Omicron Draconis is a double star divisible in small telescopes; the primary is an orange giant of magnitude 4.6, 322 light-years from Earth, and the secondary is of magnitude 7.8. Psi Draconis (traditionally called Dziban) is a binary star divisible in binoculars and small amateur telescopes, 72 light-years from Earth; the primary is a yellow-white star of magnitude 4.6 and the secondary a yellow star of magnitude 5.8. 16 Draconis and 17 Draconis are part of a triple star system 400 light-years from Earth, visible in medium-sized amateur telescopes; the primary, a blue-white star of magnitude 5.1, is itself a binary with components of magnitudes 5.4 and 6.5, and the secondary is of magnitude 5.5. 20 Draconis is a binary star with a white-hued primary of magnitude 7.1 and a yellow-hued secondary of magnitude 7.3 located east-northeast of the primary; the two are separated by 1.2 arcseconds at their maximum and have an orbital period of 420 years. As of 2012, the two components were approaching their maximum separation. 39 Draconis is a triple star 188 light-years from Earth, divisible in small amateur telescopes; the primary is a blue star of magnitude 5.0, the secondary a yellow star of magnitude 7.4, and the tertiary, which appears to be a close companion to the primary, a star of magnitude 8.0. 40 Draconis and 41 Draconis are a binary pair divisible in small telescopes; the two orange dwarf stars are 170 light-years from Earth and of magnitudes 5.7 and 6.1.
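For a visual binary such as Mu Draconis, the quoted orbital period constrains the mean separation of the pair through Kepler's third law, a³ = (M₁ + M₂)P², with a in astronomical units, P in years, and the masses in solar masses. A minimal sketch, assuming a combined mass of about two solar masses (an illustrative figure, not one from the text):

    def mean_separation_au(period_years, total_mass_msun):
        # Kepler's third law for a binary: a^3 = (M1 + M2) * P^2,
        # with a in AU, P in years, masses in solar masses.
        return (total_mass_msun * period_years ** 2) ** (1 / 3)

    # Mu Draconis: 670-year period from the text; a combined mass of
    # ~2 solar masses is an assumption, not a figure from the article.
    print(round(mean_separation_au(670, 2.0)))  # roughly 96 AU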
R Draconis is a red Mira-type variable star with a period of about 8 months. Its average minimum magnitude is approximately 12.4, and its average maximum magnitude is approximately 7.6. It was discovered to be a variable star by Hans Geelmuyden in 1876.
The constellation contains the star recently named Kepler-10, which has been confirmed to be orbited by Kepler-10b.
Deep-sky objects
One of the deep-sky objects in Draco is the Cat's Eye Nebula (NGC 6543), a planetary nebula approximately 3,000 light-years away that was discovered by English astronomer William Herschel in 1786. It is of 9th magnitude and was named for its appearance in Hubble Space Telescope images, though it appears as a fuzzy blue-green disk in an amateur telescope. NGC 6543 has a very complex shape due to gravitational interactions between the components of the multiple star at its center, the progenitor of the nebula, approximately 1,000 years ago. It is located 9.6 arcminutes from the north ecliptic pole, to the west-northwest. It is also related to IC 4677, a nebula that appears as a bar 1.8 arcminutes to the west of the Cat's Eye Nebula; in long-exposure images, IC 4677 appears as a portion of a ring surrounding the planetary nebula.
There are several faint galaxies in Draco, one of which is the lenticular galaxy NGC 5866 (sometimes considered to be Messier Object 102), which lends its name to a small group that also includes the spiral galaxies NGC 5879 and NGC 5907. Another is the Draco Dwarf Galaxy, one of the least luminous galaxies known, with an absolute magnitude of −8.6 and a diameter of only about 3,500 light-years, discovered by Albert G. Wilson of Lowell Observatory in 1954. Another dwarf galaxy found in this constellation is PGC 39058.
Draco also features several interacting galaxies and galaxy clusters. One such massive cluster is Abell 2218, located at a distance of 3 billion light-years (redshift 0.171). It acts as a gravitational lens for even more distant background galaxies, allowing astronomers to study those galaxies as well as Abell 2218 itself; more specifically, the lensing effect allows astronomers to confirm the cluster's mass as determined by x-ray emissions. One of the most well-known interacting galaxies is Arp 188, also called the "Tadpole Galaxy". Named for its appearance, which features a "tail" of stars 280,000 light-years long, the Tadpole Galaxy is at a distance of 420 million light-years (redshift 0.0314). The tail of stars drawn off the Tadpole Galaxy appears blue because the gravitational interaction disturbed clouds of gas and sparked star formation.
Q1634+706 is a quasar that holds the distinction of being the most distant object usually visible in an amateur telescope. At magnitude 14.4, it appears star-like, though it is at a distance of 12.9 billion light-years. The light of Q1634+706 has taken 8.6 billion years to reach Earth, a discrepancy attributable to the expansion of the universe.
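The apparent mismatch is the difference between light-travel distance and present-day (comoving) distance in an expanding universe. A minimal flat Lambda-CDM sketch in Python, using illustrative parameters (H0 = 70 km/s/Mpc, Ωm = 0.3) and an assumed redshift of about 1.3, since the text does not quote one; the outputs land near the 8.6-billion-year and 12.9-billion-light-year figures above:

    import math

    H0 = 70.0                       # Hubble constant, km/s/Mpc (illustrative)
    OM, OL = 0.3, 0.7               # flat Lambda-CDM densities (illustrative)
    T_H = 978.0 / H0                # Hubble time, Gyr (978 converts the units)
    D_H = 299792.458 / H0           # Hubble distance, Mpc

    def E(z):
        # Dimensionless expansion rate H(z)/H0 in a flat Lambda-CDM universe.
        return math.sqrt(OM * (1.0 + z) ** 3 + OL)

    def integrate(f, z_max, n=10000):
        # Midpoint-rule integration of f from 0 to z_max.
        dz = z_max / n
        return sum(f((i + 0.5) * dz) for i in range(n)) * dz

    z = 1.3  # assumed redshift; the article does not quote one
    lookback_gyr = T_H * integrate(lambda x: 1.0 / ((1.0 + x) * E(x)), z)
    comoving_gly = D_H * integrate(lambda x: 1.0 / E(x), z) * 3.2616e-3

    print(f"light-travel time  ~ {lookback_gyr:.1f} Gyr")   # ~8.7 Gyr
    print(f"comoving distance  ~ {comoving_gly:.1f} Gly")   # ~13.0 Gly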
The Hercules–Corona Borealis Great Wall, possibly the largest known structure in the universe, covers a part of the southern region of Draco.
Mythology
Draco is one of the 48 constellations listed in Ptolemy's Almagest (2nd century), adopted from the list of Eudoxus of Cnidus (4th century BC).
Draco was identified with several different dragons in Greek mythology. Gaius Julius Hyginus, in De Astronomica, reports that it was one of the Gigantes, who battled the Olympian gods for ten years in the Gigantomachy before the goddess Athena killed it and tossed it into the sky upon its defeat. As Athena threw the dragon, it became twisted on itself and froze at the cold north celestial pole before it could right itself. Aelius Aristides names it Aster or Asterius ('star' or 'starry') and says that Athens' Great Panathenaea festival celebrated Athena's victory over it. The festival coincided with the culmination of the constellation's head as seen from the Athenian Acropolis.
The Catasterismi, attributed to Eratosthenes, identifies Draco as Ladon, the dragon who guarded the golden apples of the Hesperides. When Heracles was tasked with stealing the golden apples during his twelve labors, he killed Ladon, and Hera transformed the dragon into a constellation. In the sky, Hercules is depicted with one foot on the head of Draco. Sometimes, Draco is represented as Typhon, the monstrous son of Gaia.
Traditional Arabic astronomy does not depict a dragon in modern-day Draco, which it instead calls the Mother Camels. Two hyenas, represented by Eta Draconis and Zeta Draconis, are seen attacking a baby camel (a dim star near Beta Draconis), which is protected by four female camels, represented by Beta Draconis, Gamma Draconis, Nu Draconis, and Xi Draconis. The nomads who own the camels are camped nearby, represented by a cooking tripod composed of Upsilon, Tau, and Sigma Draconis. However, Arabic astronomers also knew of the Greek interpretation of the constellation, referring to it in Arabic as At-Tinnin ('the dragon'), which is the source of the formal name of Gamma Draconis, Eltanin, from raʾs al-tinnīn ('the head of the dragon').
Meteor showers
The October Draconids, also called the Giacobinids, is a meteor shower associated with the periodic comet 21P/Giacobini-Zinner. The shower peaks on 8 October and has produced storms in 1933 and 1946, when the zenithal hourly rate (ZHR) reached up to 10,000 meteors per hour. Further outbursts were observed in 1985, 1998, and 2011. During the 2011 outburst, the ZHR reached 400 meteors per hour; however, the display went largely unnoticed visually due to interference from the bright Moon.
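A ZHR is not a raw count but a rate corrected to ideal conditions. A commonly used correction (sketched below with assumed observation values) scales an observer's hourly count to a dark sky of limiting magnitude 6.5 and a radiant at the zenith: ZHR = HR · r^(6.5−lm) / sin(h), where r is the shower's population index, lm the sky's limiting magnitude, and h the radiant's altitude.

    import math

    def zhr(hourly_rate, pop_index, limiting_mag, radiant_alt_deg):
        # Correct a raw hourly count to standard conditions: limiting
        # magnitude 6.5 and the radiant at the zenith.
        return (hourly_rate * pop_index ** (6.5 - limiting_mag)
                / math.sin(math.radians(radiant_alt_deg)))

    # Illustrative observation (all values assumed): 30 meteors/hour under
    # a magnitude-5.5 sky with the radiant 40 degrees up, population index 2.6.
    print(round(zhr(30, 2.6, 5.5, 40)))  # ~121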
The February Eta Draconids is a meteor shower discovered on February 4, 2011, when observers noted six meteors with a common radiant in a short period of time. The shower's parent body is a previously unknown long-period comet.
Namesakes
Draco was a United States Navy Crater class cargo ship named after the constellation.
The main character in the 1996 film Dragonheart gets his name from this constellation. The film also reveals that Draco is actually a dragon heaven, where dragons go when their time in this world is complete, if they have upheld the oath of an ancient dragon to guard mankind, with dragons otherwise fading into nothing upon their deaths. At the conclusion of the film, Draco, the last dragon, ascends into the constellation after he sacrifices himself to destroy an evil king.
The Dragon Variation of the Sicilian Defense chess opening was also named after the constellation by Russian chess master Fyodor Dus-Chotimirsky.
Draco Malfoy, an antagonist in the Harry Potter series, is named after the constellation as well.
| Physical sciences | Other | Astronomy |
284899 | https://en.wikipedia.org/wiki/Whale%20shark | Whale shark | The whale shark (Rhincodon typus) is a slow-moving, filter-feeding carpet shark and the largest known extant fish species. The largest confirmed individual had a length of . The whale shark holds many records for size in the animal kingdom, most notably being by far the most massive living non-cetacean animal. It is the sole member of the genus Rhincodon and the only extant member of the family Rhincodontidae, which belongs to the subclass Elasmobranchii in the class Chondrichthyes. Before 1984 it was classified as Rhiniodon in the family Rhinodontidae.
Whale sharks inhabit the open waters of all tropical oceans. They are rarely found in water below . Whale sharks' lifespans are estimated to be between 80 and 130 years, based on studies of their vertebral growth bands and the growth rates of free-swimming sharks. Whale sharks have very large mouths and are filter feeders, which is a feeding mode that occurs in only two other sharks, the megamouth shark and the basking shark. They feed almost exclusively on plankton and small fishes, and do not pose any threat to humans.
The species was distinguished in April 1828 after the harpooning of a specimen in Table Bay, South Africa. Andrew Smith, a military doctor associated with British troops stationed in Cape Town, described it the following year. The name "whale shark" refers to the animal's appearance and large size; it is a fish, not a mammal, and (like all sharks) is not closely related to whales.
Description
Whale sharks possess a broad, flattened head with a large mouth and two small eyes located at the front corners. Unlike in many other sharks, the mouth is located at the front of the head rather than on the underside. A whale shark was reported to have a mouth across. Whale shark mouths can contain over 300 rows of tiny teeth and 20 filter pads, which the animal uses to filter feed. The spiracles are located just behind the eyes. Whale sharks have five large pairs of gills. Their skin is dark grey with a white belly, marked with an arrangement of pale grey or white spots and stripes that is unique to each individual. The skin can be up to thick and is very hard and rough to the touch. The whale shark has three prominent ridges along its sides, which start above and behind the head and end at the caudal peduncle. The shark has two dorsal fins set relatively far back on the body, a pair of pectoral fins, a pair of pelvic fins, and a single medial anal fin. The caudal fin has a larger upper lobe than lower lobe (heterocercal).
Whale sharks have been found to possess dermal denticles on the surface of their eyeballs that are structured differently from their body denticles. The dermal denticles, as well as the whale shark's ability to retract its eyes deep into their sockets, serve to protect the eyes from damage.
Evidence suggests that whale sharks can recover from major injuries and may be able to regenerate small sections of their fins. Their spot markings have also been shown to reform over a previously wounded area.
The complete and annotated genome of the whale shark was published in 2017.
Rhodopsin, the light-sensing pigment in the rod cells of the retina, is normally sensitive to green and used to see in dim light, but in the whale shark (and the bottom-dwelling cloudy catshark) two amino acid substitutions make the pigment more sensitive to blue light instead, the light that dominates the deep ocean. One of these mutations also makes rhodopsin vulnerable to higher temperatures. In humans, a similar mutation leads to congenital stationary night blindness, as the human body temperature makes the pigment decay. This pigment becomes unstable in shallow water, where the temperature is higher and the full spectrum of light is present. To protect from this instability, the whale shark deactivates the pigment when in shallow water (as otherwise the pigment would hinder full color vision). In the colder environment at 2,000 meters below the surface where the shark dives, it is activated again. The mutations thus allow the shark to see well at both ends of its great vertical range. The eyes have also lost all cone opsins except LWS.
Size
The whale shark is the largest non-cetacean animal in the world. However, the maximum size and growth patterns of the species are not well understood.
Limited evidence, mostly from males, suggests that sexual maturity occurs around in length, with the possibility of females sexually maturing at a similar size or larger.
Various studies have aimed to estimate the growth and longevity of whale sharks, either by analysing evidence from vertebral growth rings or measurements taken from re-sighted sharks over several years. This information is used to model growth curves, which can predict asymptotic length. The growth curves produced from these studies have estimated asymptotic lengths ranging from .
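Growth modelling of sharks commonly uses the von Bertalanffy growth function, L(t) = L∞(1 − e^(−k(t − t₀))), in which predicted length approaches the asymptotic length L∞ as age increases. A minimal sketch with invented parameters (the text gives no values for k or t₀):

    import math

    def von_bertalanffy(age_years, l_inf_m, k, t0=0.0):
        # Predicted length at a given age: L(t) = L_inf * (1 - exp(-k*(t - t0))).
        return l_inf_m * (1.0 - math.exp(-k * (age_years - t0)))

    # Hypothetical parameters for illustration only: asymptotic length 14 m,
    # growth coefficient 0.03 per year. Length approaches but never reaches L_inf.
    for age in (10, 25, 50, 100):
        print(age, "yr:", round(von_bertalanffy(age, 14.0, 0.03), 1), "m")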
A 2020 study looked at the growth of whale shark individuals over a 10-year period around Ningaloo Reef and concluded that the species exhibits sexual dimorphism in size, with females growing larger than males. The study found that males on average reach in length. The same study had less data on females but estimated an average length of around . However, this value dropped to if data from aquarium whale sharks were included. The authors noted that these estimates represent average asymptotic size, not the maximum sizes possible, and acknowledged the potential for regional size variation.
Most previous growth studies have had data predominantly from males, and none have data from sharks over ~. Not all previous studies created separate growth curves for males and females, instead combining data from both sexes. Those studies that produced sex-specific growth curves estimated large asymptotic lengths for males, of or more. However, mostly immature males were available in these studies, with few adults to constrain the upper portion of the growth curves.
The largest total length for the species is uncertain due to a lack of detailed documentation of the largest reported individuals. Whale sharks as large as in length have been reported in scientific literature. However, most whale sharks observed are smaller.
Large whale sharks are difficult to measure accurately, both on the land and in the water. When on land, the total length measurement can be affected by how the tail is positioned, either angled as it would be in life or stretched as far as possible. Historically, techniques such as comparisons to objects of known size and knotted ropes have been used for in-water measurements, but these techniques may be inaccurate. Various forms of photogrammetry have been used to improve the accuracy of in-water measurements, including underwater and aerial techniques.
Reports of large whale sharks
Since the 1800s, there have been accounts of very large whale sharks. Some of these are as follows:
In 1868, the Irish natural scientist Edward Perceval Wright obtained several small whale shark specimens in the Seychelles. Wright was informed of one whale shark that was measured as exceeding . Wright claimed to have observed specimens over and was told of specimens upwards of .
Hugh M. Smith described a huge animal caught in a bamboo fish trap in Thailand in 1919. The shark was too heavy to pull ashore, and no measurements were taken. Smith learned through independent sources that it was at least 10 wa (a Thai unit of length measuring between a person's outstretched arms). Smith noted that one wa could be interpreted as either or the approximate average of , based on the local fishermen. Later sources have stated this whale shark as approximately , but the accuracy of the estimate has been questioned.
In 1934, a ship named the Maunganui came across a whale shark in the southern Pacific Ocean and rammed it. The shark became stuck on the prow of the ship, supposedly with on one side and on the other, suggesting a total length of about .
Scott A. Eckert & Brent S. Stewart reported on satellite tracking of whale sharks from 1994 to 1996. Out of the 15 individuals tracked, two females were reported as measuring and . A long whale shark was reported as being stranded along the Ratnagiri coast in 1995. A female individual with a standard length of and an estimated total length at was reported from the Arabian Sea in 2001. In a 2015 study reviewing the size of marine megafauna, McClain and colleagues considered this female as being the most reliable and accurately measured.
On 7 February 2012, a large whale shark was found floating off the coast of Karachi, Pakistan. The length of the specimen was said to be between , with a weight of around .
Distribution and habitat
The whale shark inhabits all tropical and warm-temperate seas. The fish is primarily pelagic, and can be found in both coastal and oceanic habitats. Tracking devices have shown that the whale shark displays dynamic patterns of habitat utilization, likely in response to availability of prey. Whale sharks observed off the northeast Yucatan Peninsula tend to engage in inshore surface swimming between sunrise and mid-afternoon, followed by regular vertical oscillations in oceanic waters during the afternoon and overnight. About 95% of the oscillating period was spent in epipelagic depths (<), but whale sharks also took regular deep dives (>), often descending in brief "stutter steps", perhaps for foraging. The deepest recorded dive was . Whale sharks were also observed to remain continuously at depths of greater than for three days or more.
The whale shark is migratory and has two distinct subpopulations: an Atlantic subpopulation, from Maine and the Azores to Cape Agulhas, South Africa, and an Indo-Pacific subpopulation that holds 75% of the entire whale shark population. It usually roams between 30°N and 35°S, where water temperatures are higher than , but has been spotted as far north as the Bay of Fundy, Canada, and the Sea of Okhotsk north of Japan, and as far south as Victoria, Australia.
Seasonal feeding aggregations occur at several coastal sites such as the Persian Gulf and Gulf of Oman, Ningaloo Reef in Western Australia, Darwin Island in the Galápagos, Quintana Roo in Mexico, Mafia Island of Pwani Region in Tanzania, Inhambane province in Mozambique, the Philippines, around Mahe in the Seychelles, the Gujarat and Kerala coasts of India, Taiwan, southern China and Qatar.
In 2011, more than 400 whale sharks gathered off the Yucatan Coast. It was one of the largest gatherings of whale sharks recorded. Aggregations in that area are among the most reliable seasonal gatherings known for whale sharks, with large numbers occurring in most years between May and September. Associated ecotourism has grown rapidly to unsustainable levels.
Growth and reproduction
Growth, longevity, and reproduction of the whale shark are poorly understood.
Vertebral growth bands have been used to estimate the age, growth, and longevity of whale sharks. However, there was uncertainty as to whether the growth bands are formed annually or biannually. A 2020 study compared the ratio of carbon-14 isotopes found in growth bands of whale shark vertebrae to the record of nuclear testing events in the 1950s–60s, finding that growth bands are laid down annually. The study found an age of 50 years for one female and 35 years for one male. Various studies examining vertebral growth bands and measuring whale sharks in the wild have estimated lifespans from ~80 years up to ~130 years.
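The logic of that validation can be sketched as follows (every number below is an invented placeholder, not data from the study): atmospheric nuclear tests left a dated spike in carbon-14, so matching a band's carbon-14 value to the atmospheric record assigns it a calendar year, and the number of bands counted between two dated bands then gives the deposition rate.

    # All numbers below are invented placeholders, not the study's data.
    atmospheric_d14c = {1955: 20.0, 1960: 250.0, 1965: 700.0, 1970: 550.0}

    def match_year(band_d14c):
        # Calendar year whose atmospheric carbon-14 value best matches a band.
        return min(atmospheric_d14c, key=lambda y: abs(atmospheric_d14c[y] - band_d14c))

    inner_band_year = match_year(245.0)   # matches 1960
    outer_band_year = match_year(695.0)   # matches 1965
    bands_between = 5                     # rings counted between the two bands
    print((outer_band_year - inner_band_year) / bands_between)  # 1.0 band per year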
Evidence suggests that males grow faster than females in the earlier stages of life but ultimately reach a smaller maximum size. Whale sharks exhibit late sexual maturity. One study looking at free-swimming whale sharks estimated the age at maturity in males at ~25 years.
Pupping of whale sharks has not been observed, but mating has been witnessed twice in St Helena. Mating in this species was filmed for the first time in whale sharks off Ningaloo Reef via airplane in Australia in 2019, when a larger male unsuccessfully attempted to mate with a smaller, immature female.
The capture of a ~ female in July 1996 that was pregnant with ~300 pups indicated that whale sharks are ovoviviparous. The eggs remain in the body and the females give birth to live young which are long. Evidence indicates the pups are not all born at once, but rather the female retains sperm from one mating and produces a steady stream of pups over a prolonged period.
On 7 March 2009, marine scientists in the Philippines discovered what is believed to be the smallest living specimen of the whale shark. The young shark, measuring only , was found with its tail tied to a stake at a beach in Pilar, Sorsogon, Philippines, and was released into the wild. Based on this discovery, some scientists no longer believe this area is just a feeding ground; this site may be a birthing ground, as well. Both young whale sharks and pregnant females have been seen in the waters of St Helena in the South Atlantic Ocean, where numerous whale sharks can be spotted during the summer.
In an August 2019 report from Rappler, whale sharks were sighted during WWF Philippines' photo-identification activities in the first half of the year. There were a total of 168 sightings – 64 of them "re-sightings", or reappearances of previously recorded whale sharks. WWF noted that "very young whale shark juveniles" were identified among the 168 individuals spotted in the first half of 2019. Their presence suggests that the Ticao Pass may be a pupping ground for whale sharks, further increasing the ecological significance of the area.
Large adult females, often pregnant, are a seasonal presence around the Galapagos Islands, which may have reproductive significance. One study between 2011 and 2013 found that 91.5% of the whale sharks observed around Darwin Island were adult females.
Diet
The whale shark is a filter feeder – one of only three known filter-feeding shark species (along with the basking shark and the megamouth shark). It feeds on plankton including copepods, krill, fish eggs, Christmas Island red crab larvae and small nektonic life, such as small squid or fish. It also feeds on clouds of eggs during mass spawning of fish and corals. The many rows of vestigial teeth play no role in feeding. Feeding occurs either by ram filtration, in which the animal opens its mouth and swims forward, pushing water and food into the mouth, or by active suction feeding, in which the animal opens and closes its mouth, sucking in volumes of water that are then expelled through the gills. In both cases, the filter pads serve to separate food from water. These unique, black sieve-like structures are presumed to be modified gill rakers. Food separation in whale sharks is by cross-flow filtration, in which the water travels nearly parallel to the filter pad surface, not perpendicularly through it, before passing to the outside, while denser food particles continue to the back of the throat. This is an extremely efficient filtration method that minimizes fouling of the filter pad surface. Whale sharks have been observed "coughing", presumably to clear a build-up of particles from the filter pads. Whale sharks migrate to feed and possibly to breed.
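A back-of-the-envelope mass balance (all values hypothetical) illustrates how cross-flow filtration concentrates food: if most of the ingested water leaves through the gills while the particles continue along the pad toward the throat, the suspension reaching the throat is far denser than the water taken in.

    # Hedged mass balance for cross-flow filtration; the flow rates and the
    # particle load are hypothetical, and the pads are assumed to retain
    # essentially all particles.
    inflow_l_per_s = 20.0        # water entering the mouth
    filtrate_l_per_s = 19.5      # water escaping through the gill slits
    food_mg_per_l = 2.0          # particle load of the incoming water

    to_throat_l_per_s = inflow_l_per_s - filtrate_l_per_s    # 0.5 L/s
    throat_mg_per_l = food_mg_per_l * inflow_l_per_s / to_throat_l_per_s
    print(throat_mg_per_l)  # 80.0 mg/L: a 40-fold concentration of the food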
The whale shark is an active feeder, targeting concentrations of plankton or fish. It is able to ram filter feed or can gulp in a stationary position. This is in contrast to the passive-feeding basking shark, which does not pump water; instead, it swims to force water across its gills.
A juvenile whale shark is estimated to eat 21 kg (46 pounds) of plankton per day.
The BBC program Planet Earth filmed a whale shark feeding on a school of small fish. The same documentary showed footage of a whale shark timing its arrival to coincide with the mass spawning of fish shoals and feeding on the resultant clouds of eggs and sperm.
Whale sharks are known to prey on a range of planktonic and small nektonic organisms that are spatiotemporally patchy. These include krill, crab larvae, jellyfish, sardines, anchovies, mackerels, small tunas, and squid. In ram filter feeding, the fish swims forward at constant speed with its mouth fully open, straining prey particles from the water by forward propulsion. This is also called "passive feeding", which usually occurs when prey is present at low density.
Due to their mode of feeding, whale sharks are susceptible to the ingestion of microplastics; the presence of microplastics in whale shark scat has recently been confirmed.
Relationship with humans
Behavior toward divers
Despite its size, the whale shark does not pose any danger to humans. Younger whale sharks are gentle and can play with divers. Underwater photographers such as Fiona Ayerst have photographed them swimming close to humans without any danger. Although whale sharks are docile fish, touching or riding the sharks is strictly forbidden and fineable in most countries, as it can cause serious harm to the animal.
The shark is seen by divers in many places, including the Bay Islands in Honduras, Thailand, Indonesia (Bone Bolango, Cendrawasih Bay), the Philippines, the Maldives close to Maamigili (South Ari Atoll), the Red Sea, Western Australia (Ningaloo Reef, Christmas Island), Taiwan, Panama (Coiba Island), Belize, Tofo Beach in Mozambique, Sodwana Bay (Greater St. Lucia Wetland Park) in South Africa, the Galapagos Islands, Saint Helena, Isla Mujeres (Caribbean Sea), La Paz, Baja California Sur and Bahía de los Ángeles in Mexico, the Seychelles, West Malaysia, islands off eastern peninsular Malaysia, India, Sri Lanka, Oman, Fujairah, Puerto Rico, and other parts of the Caribbean. Juveniles can be found near the shore in the Gulf of Tadjoura, near Djibouti, in the Horn of Africa.
Conservation status
There is currently no robust estimate of the global whale shark population. The species is considered endangered by the IUCN due to the impacts of fisheries, by-catch losses, and vessel strikes, combined with its long lifespan and late maturation. In June 2018 the New Zealand Department of Conservation classified the whale shark as "Migrant" with the qualifier "Secure Overseas" under the New Zealand Threat Classification System.
It is listed, along with six other species of sharks, under the CMS Memorandum of Understanding on the Conservation of Migratory Sharks. In 1998, the Philippines banned all fishing, selling, importing, and exporting of whale sharks for commercial purposes, followed by India in May 2001 and Taiwan in May 2007.
In 2010, the Gulf of Mexico oil spill resulted in of oil flowing into an area south of the Mississippi River Delta, where one-third of all whale shark sightings in the northern part of the gulf have occurred in recent years. Sightings confirmed that the whale sharks were unable to avoid the oil slick, which was situated on the surface of the sea where the whale sharks feed for several hours at a time. No dead whale sharks were found.
This species was also added to Appendix II of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) in 2003 to regulate the international trade of live specimens and its parts.
It was reported in 2014 that hundreds of whale sharks were illegally killed every year in China for their fins, skins, and oil.
In captivity
The whale shark is popular in the few public aquariums that keep it, but its large size means that a very large tank is required, and it has specialized feeding requirements. The species' large size and iconic status have also fueled opposition to keeping it in captivity, especially after the early deaths of some captive whale sharks and the practice at certain Chinese aquariums of keeping the species in relatively small tanks.
The first attempt at keeping whale sharks in captivity was in 1934, when an individual was kept for about four months in a netted-off natural bay in Izu, Japan. The first attempt at keeping whale sharks in an aquarium was initiated in 1980 by the Okinawa Churaumi Aquarium (then known as the Okinawa Ocean Expo Aquarium) in Japan. Since 1980, several have been kept at Okinawa, mostly obtained from incidental catches in coastal nets set by fishers (none after 2009), but two were from strandings. Several of these were already weak from the capture or stranding, and some were released, but initial captive survival rates were low. After the initial difficulties in maintaining the species had been resolved, some have survived long-term in captivity. The record for a whale shark in captivity is an individual that, as of 2021, had lived for more than 26 years at the aquarium. Following Okinawa, Osaka Aquarium started keeping whale sharks, and most of the basic research on keeping the species was carried out at these two institutions.
Since the mid-1990s, several other aquariums have kept the species in Japan (Kagoshima Aquarium, Kinosaki Marine World, Notojima Aquarium, Oita Marine Palace Aquarium, and Yokohama Hakkeijima Sea Paradise), South Korea (Aqua Planet Jeju), China (Chimelong Ocean Kingdom, Dalian Aquarium, Guangzhou Aquarium in Guangzhou Zoo, Qingdao Polar Ocean World and Yantai Aquarium), Taiwan (National Museum of Marine Biology and Aquarium), India (Thiruvananthapuram Aquarium) and Dubai (Atlantis, The Palm), with some maintaining whale sharks for years and others only for a very short period. The whale shark kept at Dubai's Atlantis, The Palm was rescued from shallow waters in 2008 with extensive abrasions to the fins and after rehabilitation it was released in 2010, having lived 19 months in captivity. Marine Life Park in Singapore had planned on keeping whale sharks but scrapped this idea in 2009.
Outside Asia, the first and so far only place to keep whale sharks is the Georgia Aquarium in Atlanta, United States. This is unusual because of the comparatively long transport times and complex logistics required to bring the sharks to the aquarium, between 28 and 36 hours. Georgia Aquarium keeps two whale sharks, the males Taroko and Yushan, who both arrived in 2007. Two earlier males at the aquarium, Ralph and Norton, both died in 2007; Trixie died in 2020 and Alice in 2021. Georgia's whale sharks were all imported from Taiwan, taken from the commercial fishing quota for the species, usually used locally for food. Taiwan closed this fishery entirely in 2008.
Human culture
In Madagascar, whale sharks are called in Malagasy, meaning "many stars", after the appearance of the markings on the shark's back.
In the Philippines, it is called and . The whale shark is featured on the reverse of the Philippine 100-peso bill. By law snorkelers must maintain a distance of from the sharks and there is a fine and possible prison sentence for anyone who touches the animals.
Whale sharks are also known as in Japan (because the markings resemble patterns typically seen on ); (roughly "star from the East") in Indonesia; and (literally "sir fish") in Vietnam.
The whale shark is also featured on the latest 2015–2017 edition of the Maldivian 1000 rufiyaa banknote, along with the green turtle.
| Biology and health sciences | Sharks | null |
285152 | https://en.wikipedia.org/wiki/Pineal%20gland | Pineal gland | The pineal gland (also known as the pineal body or epiphysis cerebri) is a small endocrine gland in the brain of most vertebrates. It produces melatonin, a serotonin-derived hormone, which modulates sleep patterns following the diurnal cycles. The shape of the gland resembles a pine cone, which gives it its name. The pineal gland is located in the epithalamus, near the center of the brain, between the two hemispheres, tucked in a groove where the two halves of the thalamus join. It is one of the neuroendocrine secretory circumventricular organs in which capillaries are mostly permeable to solutes in the blood.
The pineal gland is present in almost all vertebrates, but is absent in protochordates, which have only a simple pineal homologue. Hagfish, which are archaic vertebrates, lack a pineal gland. In some species of amphibians and reptiles, the gland is linked to a light-sensing organ, variously called the parietal eye, the pineal eye or the third eye. Reconstruction of the gland's evolutionary history suggests that it was originally a kind of atrophied photoreceptor that developed into a neuroendocrine organ.
Ancient Greeks were the first to notice the pineal gland, believing it to be a valve, a guardian for the flow of pneuma. Galen, in the 2nd century CE, could not find any functional role and regarded the gland as a structural support for the brain tissue. He gave it the name konario, meaning cone or pinecone, which during the Renaissance was translated into Latin as pinealis. The 17th-century philosopher René Descartes regarded the gland as having a mystical purpose, describing it as the "principal seat of the soul". In the mid-20th century, its biological role as a neuroendocrine organ was established.
Etymology
The word pineal, from Latin pinea (pine-cone) in reference to the gland's similar shape, was first used in the late 17th century.
Structure
The pineal gland is a pine cone-shaped (hence the name), unpaired midline brain structure. It is reddish-gray in colour and about the size of a grain of rice (5–8 mm) in humans. It forms part of the epithalamus. It is attached to the rest of the brain by a pineal stalk. The ventral lamina of the pineal stalk is continuous with the posterior commissure, and its dorsal lamina with the habenular commissure.
Location
It normally lies in a depression between the two superior colliculi. It is situated between the laterally positioned thalamic bodies, and posterior to the habenular commissure. It is located in the quadrigeminal cistern. It is located posterior to the third ventricle and encloses the small, cerebrospinal fluid-filled pineal recess of the third ventricle which projects into the stalk of the gland.
Blood supply
Unlike most of the mammalian brain, the pineal gland is isolated from the body by the blood–brain barrier system; it has profuse blood flow, second only to the kidney, supplied from the choroidal branches of the posterior cerebral artery.
Afferents
The afferent nerve supply of the pineal gland is the nervus conarii, which receives postganglionic sympathetic afferents from the superior cervical ganglion, and parasympathetic afferents from the pterygopalatine and otic ganglia. According to research on animals, neurons of the trigeminal ganglion involved in pituitary adenylate cyclase-activating peptide neuropeptide signaling also project to the gland.
Neural pathway for melatonin production
The canonical neural pathway regulating pineal melatonin production begins in the eye with the intrinsically photosensitive ganglion cells of the retina, which project to the suprachiasmatic nucleus of the hypothalamus via the retinohypothalamic tract. The suprachiasmatic nucleus in turn sends inhibitory GABAergic efferents to the paraventricular nucleus of the hypothalamus, which projects to the superior cervical ganglia; these finally project to the pineal gland. Darkness thus leads to disinhibition of the paraventricular nucleus, leading it to activate pineal gland melatonin production by way of the superior cervical ganglia.
Microanatomy
The pineal body in humans consists of a lobular parenchyma of pinealocytes surrounded by connective tissue spaces. The gland's surface is covered by a pial capsule.
The pineal gland consists mainly of pinealocytes, but four other cell types have been identified. As it is quite cellular (in relation to the cortex and white matter), it may be mistaken for a neoplasm.
Development
The human pineal gland grows in size until about 1–2 years of age, remaining stable thereafter, although its weight increases gradually from puberty onwards. The abundant melatonin levels in children are believed to inhibit sexual development, and pineal tumors have been linked with precocious puberty. When puberty arrives, melatonin production is reduced.
Symmetry
In the zebrafish the pineal gland does not straddle the midline, but shows a left-sided bias. In humans, functional cerebral dominance is accompanied by subtle anatomical asymmetry.
Function
One function of the pineal gland is to produce melatonin. Melatonin has various functions in the central nervous system, the most important of which is to help modulate sleep patterns. Melatonin production is stimulated by darkness and inhibited by light. Light-sensitive nerve cells in the retina detect light and send this signal to the suprachiasmatic nucleus (SCN), synchronizing the SCN to the day-night cycle. Nerve fibers then relay the daylight information from the SCN to the paraventricular nuclei (PVN), then to the spinal cord and via the sympathetic system to the superior cervical ganglia (SCG), and from there into the pineal gland.
The compound pinoline is also claimed to be produced in the pineal gland; it is one of the beta-carbolines. This claim is subject to some controversy.
Regulation of the pituitary gland
Studies on rodents suggest that the pineal gland influences the pituitary gland's secretion of the sex hormones, follicle-stimulating hormone (FSH), and luteinizing hormone (LH). Pinealectomy performed on rodents produced no change in pituitary weight, but caused an increase in the concentration of FSH and LH within the gland. Administration of melatonin did not return the concentrations of FSH to normal levels, suggesting that the pineal gland influences pituitary gland secretion of FSH and LH through an undescribed transmitting molecule.
The pineal gland contains receptors for the regulatory neuropeptide, endothelin-1, which, when injected in picomolar quantities into the lateral cerebral ventricle, causes a calcium-mediated increase in pineal glucose metabolism.
Regulation of bone metabolism
Studies in mice suggest that the pineal-derived melatonin regulates new bone deposition. Pineal-derived melatonin mediates its action on the bone cells through MT2 receptors. This pathway could be a potential new target for osteoporosis treatment as the study shows the curative effect of oral melatonin treatment in a postmenopausal osteoporosis mouse model.
Clinical significance
Calcification
Calcification of the pineal gland is typical in young adults, and has been observed in children as young as two years of age. The internal secretions of the pineal gland are known to inhibit the development of the reproductive glands: when the gland is severely damaged in children, development of the sexual organs and the skeleton is accelerated. Pineal gland calcification is detrimental to its ability to synthesize melatonin, and the scientific literature presents inconclusive findings on whether it causes sleep problems.
The calcified gland is often seen in skull X-rays. Calcification rates vary widely by country and increase with age, with calcification occurring in an estimated 40% of Americans by age seventeen. Calcification of the pineal gland is associated with corpora arenacea, also known as "brain sand".
Tumors
Tumors of the pineal gland are called pinealomas. These tumors are rare and 50% to 70% are germinomas that arise from sequestered embryonic germ cells. Histologically they are similar to testicular seminomas and ovarian dysgerminomas.
A pineal tumor can compress the superior colliculi and pretectal area of the dorsal midbrain, producing Parinaud's syndrome. Pineal tumors also can cause compression of the cerebral aqueduct, resulting in a noncommunicating hydrocephalus. Other manifestations are the consequence of their pressure effects and consist of visual disturbances, headache, mental deterioration, and sometimes dementia-like behaviour.
These neoplasms are divided into three categories: pineoblastomas, pineocytomas, and mixed tumors, based on their level of differentiation, which, in turn, correlates with their neoplastic aggressiveness. The clinical course of patients with pineocytomas is prolonged, averaging up to several years. The position of these tumors makes them difficult to remove surgically.
Other conditions
The morphology of the pineal gland differs markedly in different pathological conditions. For instance, it is known that its volume is reduced in both obese patients and those with primary insomnia.
Other animals
Nearly all vertebrate species possess a pineal gland. The most important exception is a primitive vertebrate, the hagfish. Even in the hagfish, however, there may be a "pineal equivalent" structure in the dorsal diencephalon. A few more complex vertebrates have lost pineal glands over the course of their evolution. The lamprey (another primitive vertebrate), however, does possess one. The lancelet Branchiostoma lanceolatum, an early chordate and close relative of the vertebrates, also lacks a recognizable pineal gland. Protochordates in general do not have a distinct pineal organ, but they have a mass of photoreceptor cells called the lamellar body, which is regarded as a pineal homologue.
The results of various scientific research in evolutionary biology, comparative neuroanatomy and neurophysiology have explained the evolutionary history (phylogeny) of the pineal gland in different vertebrate species. From the point of view of biological evolution, the pineal gland is a kind of atrophied photoreceptor. In the epithalamus of some species of amphibians and reptiles, it is linked to a light-sensing organ, known as the parietal eye, which is also called the pineal eye or third eye. It is likely that the common ancestor of all vertebrates had a pair of photosensory organs on the top of its head, similar to the arrangement in modern lampreys. In many lower vertebrates (such as some fish, amphibians and lizards), the pineal gland is associated with a parietal or pineal eye. In these animals, the parietal eye acts as a photoreceptor, and hence is also known as the third eye; it is visible on top of the head in some species. Some extinct Devonian fishes have two parietal foramina in their skulls, suggesting an ancestral bilaterality of parietal eyes. The parietal eye and the pineal gland of living tetrapods are probably the descendants of the left and right parts of this organ, respectively.
During embryonic development, the parietal eye and the pineal organ of modern lizards and tuataras form together from a pocket formed in the brain ectoderm. The loss of parietal eyes in many living tetrapods is supported by developmental formation of a paired structure that subsequently fuses into a single pineal gland in developing embryos of turtles, snakes, birds, and mammals.
The pineal organs of mammals fall into one of three categories based on shape. Rodents have more structurally complex pineal glands than other mammals.
Crocodilians and some tropical lineages of mammals (some xenarthrans (sloths), pangolins, sirenians (manatees and dugongs), and some marsupials (sugar gliders)) have lost both their parietal eye and their pineal organ. Polar mammals, such as walruses and some seals, possess unusually large pineal glands.
All amphibians have a pineal organ, but some frogs and toads also have what is called a "frontal organ", which is essentially a parietal eye.
Pinealocytes in many non-mammalian vertebrates have a strong resemblance to the photoreceptor cells of the eye. Evidence from morphology and developmental biology suggests that pineal cells possess a common evolutionary ancestor with retinal cells.
Pineal cytostructure seems to have evolutionary similarities to the retinal cells of the lateral eyes. Modern birds and reptiles express the phototransducing pigment melanopsin in the pineal gland. Avian pineal glands are thought to act like the suprachiasmatic nucleus in mammals. The structure of the pineal eye in modern lizards and tuatara is analogous to the cornea, lens, and retina of the lateral eyes of vertebrates.
In most vertebrates, exposure to light sets off a chain reaction of enzymatic events within the pineal gland that regulates circadian rhythms. In humans and other mammals, the light signals necessary to set circadian rhythms are sent from the eye through the retinohypothalamic system to the suprachiasmatic nuclei (SCN) and the pineal gland.
The fossilized skulls of many extinct vertebrates have a pineal foramen (opening), which in some cases is larger than that of any living vertebrate. Although fossils seldom preserve deep-brain soft anatomy, the brain of the Russian fossil bird Cerebavis cenomanica from Melovatka, about 90 million years old, shows a relatively large parietal eye and pineal gland.
History
The secretory activity of the pineal gland is only partially understood. Its location deep in the brain suggested to philosophers throughout history that it possessed particular importance, and it came to be regarded as a "mystery" gland with mystical, metaphysical, and occult theories surrounding its perceived functions. The earliest recorded description of the pineal gland is from the Greek physician Galen in the 2nd century C.E. According to Galen, Herophilus (325–280 B.C.E.) had already considered the structure as a kind of valve that partitioned the brain chambers, particularly for the flow of vital spirits (pneuma). Specifically, the ancient Greeks believed that the structure maintained the movement of vital spirits from the middle (now identified as the third) ventricle to the one in the parencephalis (fourth ventricle).
Galen described the pineal gland in De usu partium corporis humani, libri VII (On the Usefulness of Parts of the Body, Part 8) and De anatomicis administrationibus, libri IX (On Anatomical Procedures, Part 9). He introduced the name κωνάριο (konario, often Latinised as conarium) that means cone, as in pinecone, in De usu partium corporis humani. He correctly located the gland as directly lying behind the third ventricle. He argued against the prevailing concept of it as a valve for two basic reasons: it is located outside of the brain tissue and it does not move on its own.
Galen instead identified the valve as a worm-like structure in the cerebellum (later called vermiform epiphysis, known today as the vermis cerebelli or cerebellar vermis). From his study on the blood vessels surrounding the pineal gland he discovered the great vein of the cerebellum, later called the vein of Galen. He could not establish any functional role of the pineal gland and regarded it as a structural support for the cerebral veins.
Seventeenth-century philosopher and scientist René Descartes discussed the pineal gland both in his first book, the Treatise of Man (written before 1637, but only published posthumously in 1662/1664), and in his last book, The Passions of the Soul (1649), and he regarded it as "the principal seat of the soul and the place in which all our thoughts are formed". In the Treatise of Man, he described conceptual models of man, namely creatures created by God, which consist of two ingredients, a body and a soul. In the Passions, he split man up into a body and a soul and emphasized that the soul is joined to the whole body by "a certain very small gland situated in the middle of the brain's substance and suspended above the passage through which the spirits in the brain's anterior cavities communicate with those in its posterior cavities". Descartes gave importance to the structure because it was the only unpaired component of the brain.
The Latin name pinealis became popular in the 17th century. For example, English physician Thomas Willis described a glandula pinealis in his book, Cerebri anatome cui accessit nervorum descriptio et usus (1664). Willis criticised Descartes's concept, remarking: "we can scarce[ly] believe this to be the seat of the Soul, or its chief Faculties to arise from it; because Animals, which seem to be almost quite destitute of Imagination, Memory, and other superior Powers of the Soul, have this Glandula or Kernel large and fair enough."
Walter Baldwin Spencer at the University of Oxford gave the first formal description of the pineal eye in lizards: in 1886, he described an eye-like structure, which he called the pineal eye or parietal eye, associated with the parietal foramen and the pineal stalk. The presence of such a structure had, however, already been discovered by German zoologist Franz Leydig in 1872, in European lizards; Leydig called them "frontal organs" (German Stirnorgan). In 1918, Swedish zoologist Nils Holmgren described the "parietal eye" in frogs and dogfish. He discovered that the parietal eyes were made up of sensory cells similar to the cone cells of the retina, and suggested that they were a primitive light-sensing organ (photoreceptor).
The pineal gland was originally believed to be a "vestigial remnant" of a larger organ. Epiphysan – an extract derived from the pineal glands of cattle – was historically used by veterinarians for rut suppression in mares and cows. In the 1930s it was tested on humans, and resulted in a temporary reduction in their masturbation impulse. In 1917, it was known that extract of cow pineals lightened frog skin. Dermatology professor Aaron B. Lerner and colleagues at Yale University, hoping that a hormone from the pineal gland might be useful in treating skin diseases, isolated it and named it melatonin in 1958. The substance did not prove to be helpful as intended, but its discovery helped solve several mysteries, such as why removing a rat's pineal gland accelerated its ovary growth, why keeping rats in constant light decreased the weight of their pineals, and why pinealectomy and constant light affect ovary growth to an equal extent; this knowledge gave a boost to the then-new field of chronobiology. Of the endocrine organs, the function of the pineal gland was the last to be discovered.
Society and culture
The notion of a "pineal-eye" is central to the philosophy of the French writer Georges Bataille, which is analyzed at length by literary scholar Denis Hollier in his study Against Architecture. In this work Hollier discusses how Bataille uses the concept of a "pineal-eye" as a reference to a blind-spot in Western rationality, and an organ of excess and delirium. This conceptual device is explicit in his surrealist texts, The Jesuve and The Pineal Eye.
In the late 19th century Madame Blavatsky, founder of theosophy, identified the pineal gland with the Hindu concept of the third eye, or the Ajna chakra. This association is still popular today. The pineal gland has also featured in other religious contexts, such as in the Principia Discordia, which claims it can be used to contact the goddess of discord Eris.
In the short story "From Beyond" by H. P. Lovecraft, a scientist creates an electronic device that emits a resonance wave, which stimulates an affected person's pineal gland, thereby allowing them to perceive planes of existence outside the scope of accepted reality, a translucent, alien environment that overlaps our own recognized reality. It was adapted as a film of the same name in 1986. The 2013 horror film Banshee Chapter is heavily influenced by this short story.
In season 16, episode 6 of "American Dad", entitled "The Wondercabinet", Steve tries to "astral project" using his pineal gland to help him understand the meaning of life.
| Biology and health sciences | Endocrine system | Biology |
285157 | https://en.wikipedia.org/wiki/Melatonin | Melatonin | Melatonin, an indoleamine, is a natural compound produced by various organisms, including bacteria and eukaryotes. Its discovery in 1958 by Aaron B. Lerner and colleagues stemmed from the isolation of a substance from the pineal gland of cows that could induce skin lightening in common frogs. This compound was later identified as a hormone secreted in the brain during the night, playing a crucial role in regulating the sleep-wake cycle, also known as the circadian rhythm, in vertebrates.
In vertebrates, melatonin's functions extend to synchronizing sleep-wake cycles, encompassing sleep-wake timing and blood pressure regulation, as well as controlling seasonal rhythmicity (circannual cycle), which includes reproduction, fattening, molting, and hibernation. Its effects are mediated through the activation of melatonin receptors and its role as an antioxidant. In plants and bacteria, melatonin primarily serves as a defense mechanism against oxidative stress, indicating its evolutionary significance. The mitochondria, key organelles within cells, are the main producers of antioxidant melatonin, underscoring the molecule's "ancient origins" and its fundamental role in protecting the earliest cells from reactive oxygen species.
In addition to its endogenous functions as a hormone and antioxidant, melatonin is also administered exogenously as a dietary supplement and medication. It is utilized in the treatment of sleep disorders, including insomnia and various circadian rhythm sleep disorders.
Biological activity
In humans, melatonin primarily acts as a potent full agonist of two types of melatonin receptors: melatonin receptor 1, with picomolar binding affinity, and melatonin receptor 2, with nanomolar binding affinity. Both receptors are part of the G-protein coupled receptors (GPCRs) family, specifically the Gi/o alpha subunit GPCRs, although melatonin receptor 1 also exhibits coupling with Gq alpha subunit.
Furthermore, melatonin functions as a high-capacity antioxidant, or free radical scavenger, within mitochondria, playing a dual role in combating cellular oxidative stress. First, it directly neutralizes free radicals, and second, it promotes the gene expression of essential antioxidant enzymes, such as superoxide dismutase, glutathione peroxidase, glutathione reductase, and catalase. This increase in antioxidant enzyme expression is mediated through signal transduction pathways activated by the binding of melatonin to its receptors. Through these mechanisms, melatonin protects the cell against oxidative stress in two ways, and plays other roles in human health than only regulating the sleep-wake cycle.
Biological functions
Circadian rhythm
In mammals, melatonin is critical for the regulation of sleep–wake cycles, or circadian rhythms. The establishment of regular melatonin levels in human infants occurs around the third month after birth, with peak concentrations observed between midnight and 8:00 am. It has been documented that melatonin production diminishes as a person ages. Additionally, a shift in the timing of melatonin secretion is observed during adolescence, resulting in delayed sleep and wake times, increasing their risk for delayed sleep phase disorder during this period.
Antioxidant
The antioxidant properties of melatonin were first recognized in 1993. In vitro studies reveal that melatonin directly neutralizes various reactive oxygen species, including hydroxyl (OH•), superoxide (O2−•), and reactive nitrogen species such as nitric oxide (NO•). In plants, melatonin works synergistically with other antioxidants, enhancing the overall effectiveness of each antioxidant. This compound has been found to be twice as efficacious as vitamin E, a known potent lipophilic antioxidant, at scavenging peroxyl radicals. The promotion of antioxidant enzyme expression, such as superoxide dismutase, glutathione peroxidase, glutathione reductase, and catalase, is mediated through melatonin receptor-triggered signal transduction pathways.
Melatonin's concentration in the mitochondrial matrix is significantly higher than that found in the blood plasma, emphasizing its role not only in direct free radical scavenging but also in modulating the expression of antioxidant enzymes and maintaining mitochondrial integrity. This multifaceted role shows the physiological significance of melatonin as a mitochondrial antioxidant, a notion supported by numerous scholars.
Furthermore, the interaction of melatonin with reactive oxygen and nitrogen species results in the formation of metabolites capable of reducing free radicals. These metabolites, including cyclic 3-hydroxymelatonin, N1-acetyl-N2-formyl-5-methoxykynuramine (AFMK), and N1-acetyl-5-methoxykynuramine (AMK), contribute to the broader antioxidative effects of melatonin through further redox reactions with free radicals.
Immune system
Melatonin's interaction with the immune system is recognized, yet the specifics of these interactions remain inadequately defined. An anti-inflammatory effect appears to be the most significant. The efficacy of melatonin in disease treatment has been the subject of limited trials, with most available data deriving from small-scale, preliminary studies. It is posited that any beneficial immunological impact is attributable to melatonin's action on high-affinity receptors (MT1 and MT2), which are present on immunocompetent cells. Preclinical investigations suggest that melatonin may augment cytokine production and promote the expansion of T cells, thereby potentially mitigating acquired immunodeficiencies.
Weight regulation
Melatonin's potential to regulate weight gain is posited to involve its inhibitory effect on leptin, a hormone that serves as a long-term indicator of the body's energy status. Leptin is important for regulating energy balance and body weight by signaling satiety and reducing food intake. Melatonin, by modulating leptin's actions outside of waking hours, may contribute to the restoration of leptin sensitivity during daytime, thereby counteracting leptin resistance.
Biochemistry
Biosynthesis
The biosynthesis of melatonin in animals involves a sequence of enzymatic reactions starting with L-tryptophan, which can be synthesized through the shikimate pathway from chorismate, found in plants, or obtained from protein catabolism. The initial step in the melatonin biosynthesis pathway is the hydroxylation of L-tryptophan's indole ring by the enzyme tryptophan hydroxylase, resulting in the formation of 5-hydroxytryptophan (5-HTP). Subsequently, 5-HTP undergoes decarboxylation, facilitated by pyridoxal phosphate and the enzyme 5-hydroxytryptophan decarboxylase, yielding serotonin.
Serotonin, an essential neurotransmitter, is further converted into N-acetylserotonin by the action of serotonin N-acetyltransferase, utilizing acetyl-CoA. The final step in the pathway involves the methylation of N-acetylserotonin's hydroxyl group by hydroxyindole O-methyltransferase, with S-adenosyl methionine as the methyl donor, to produce melatonin.
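The four enzymatic steps just described can be summarized compactly. The following Python sketch is purely a reading aid that restates the pathway as data; the substrate, enzyme, and product names are taken from the text above, and the code models nothing about the underlying chemistry:

# Animal melatonin biosynthesis, as described in the text:
# substrate --[enzyme]--> product, in order.
PATHWAY = [
    ("L-tryptophan", "tryptophan hydroxylase", "5-hydroxytryptophan (5-HTP)"),
    ("5-hydroxytryptophan (5-HTP)", "5-hydroxytryptophan decarboxylase", "serotonin"),
    ("serotonin", "serotonin N-acetyltransferase", "N-acetylserotonin"),
    ("N-acetylserotonin", "hydroxyindole O-methyltransferase", "melatonin"),
]

for substrate, enzyme, product in PATHWAY:
    print(f"{substrate} --[{enzyme}]--> {product}")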
In bacteria, protists, fungi, and plants, the synthesis of melatonin also involves tryptophan as an intermediate but originates indirectly from the shikimate pathway. The pathway commences with D-erythrose 4-phosphate and phosphoenolpyruvate, and in photosynthetic cells, additionally involves carbon dioxide. While the subsequent biosynthetic reactions share similarities with those in animals, there are slight variations in the enzymes involved in the final stages.
The hypothesis that melatonin synthesis occurs within mitochondria and chloroplasts suggests an evolutionary and functional significance of melatonin in cellular energy metabolism and defense mechanisms against oxidative stress, reflecting the molecule's ancient origins and its multifaceted roles across different domains of life.
Mechanism
The mechanism of melatonin biosynthesis begins with the hydroxylation of L-tryptophan, a process that requires the cofactor tetrahydrobiopterin (THB) to react with oxygen and the active site iron of tryptophan hydroxylase. Although the complete mechanism is not entirely understood, two main mechanisms have been proposed:
The first mechanism involves a slow transfer of one electron from THB to molecular oxygen (O2), potentially producing a superoxide (O2−•). This superoxide could then recombine with the THB radical to form 4a-peroxypterin. 4a-peroxypterin may either react with the active site iron (II) to create an iron-peroxypterin intermediate or directly transfer an oxygen atom to the iron, facilitating the hydroxylation of L-tryptophan.
Alternatively, the second mechanism proposes that oxygen interacts with the active site iron (II) first, forming iron (III) superoxide. This molecule could then react with THB to form an iron-peroxypterin intermediate.
Following the formation of iron (IV) oxide from the iron-peroxypterin intermediate, this oxide selectively attacks a double bond to yield a carbocation at the C5 position of the indole ring. A subsequent 1,2-shift of the hydrogen and the loss of one of the two hydrogen atoms on C5 would restore aromaticity, producing 5-hydroxy-L-tryptophan.
The decarboxylation of 5-hydroxy-L-tryptophan to produce 5-hydroxytryptamine is then facilitated by a decarboxylase enzyme with pyridoxal phosphate (PLP) as a cofactor. PLP forms an imine with the amino acid derivative, facilitating the breaking of the carbon–carbon bond and release of carbon dioxide. The protonation of the amine derived from tryptophan restores the aromaticity of the pyridine ring, leading to the production of 5-hydroxytryptamine and PLP.
Serotonin N-acetyltransferase, with histidine residue His122, is hypothesized to deprotonate the primary amine of 5-hydroxytryptamine. This deprotonation allows the lone pair on the amine to attack acetyl-CoA, forming a tetrahedral intermediate. The thiol from coenzyme A then acts as a leaving group when attacked by a general base, producing N-acetylserotonin.
The final step in the biosynthesis of melatonin involves the methylation of N-acetylserotonin at the hydroxyl position by S-adenosyl methionine (SAM), resulting in the production of S-adenosyl homocysteine (SAH) and melatonin.
Regulation
In vertebrates, the secretion of melatonin is regulated through the activation of the beta-1 adrenergic receptor by the hormone norepinephrine. Norepinephrine increases the concentration of intracellular cAMP via beta-adrenergic receptors, which in turn activates the cAMP-dependent protein kinase A (PKA). PKA then phosphorylates arylalkylamine N-acetyltransferase (AANAT), the penultimate enzyme in the melatonin synthesis pathway. When exposed to daylight, noradrenergic stimulation ceases, leading to the immediate degradation of the protein by proteasomal proteolysis. The production of melatonin recommences in the evening, a phase known as the dim-light melatonin onset.
Blue light inhibits the biosynthesis of melatonin, with the degree of suppression being directly proportional to the intensity and duration of light exposure. Historically, humans in temperate climates experienced limited exposure to blue daylight during winter months, primarily receiving light from sources that emitted predominantly yellow light, such as fires. The incandescent light bulbs used extensively throughout the 20th century emitted relatively low levels of blue light. It has been found that light containing only wavelengths greater than 530 nm does not suppress melatonin under bright-light conditions. The use of glasses that block blue light in the hours preceding bedtime can mitigate melatonin suppression. Additionally, wearing blue-blocking goggles during the last hours before bedtime is recommended for individuals needing to adjust to an earlier bedtime, since melatonin facilitates the onset of sleep.
Metabolism
Melatonin is metabolized with an elimination half-life ranging from 20 to 50 minutes. The primary metabolic pathway transforms melatonin into 6-hydroxymelatonin, which is then conjugated with sulfate and excreted in urine as a waste product. It is primarily metabolized by the liver enzyme CYP1A2 and to a lesser extent by CYP1A1, CYP2C19, and CYP1B1.
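Because elimination is first-order, the quoted half-life fixes the decay constant. As an illustrative calculation only (assuming a representative half-life of 30 minutes, a value chosen from within the stated 20–50 minute range):

\[ k = \frac{\ln 2}{t_{1/2}} = \frac{\ln 2}{30\ \text{min}} \approx 0.023\ \text{min}^{-1}, \qquad \frac{C(t)}{C_0} = e^{-kt}, \]

so two hours after secretion or dosing stops, four half-lives have elapsed and only \((1/2)^4 = 1/16 \approx 6\%\) of the initial plasma concentration remains.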
Measurement
For both research and clinical purposes, melatonin levels in humans can be determined through saliva or blood plasma analysis.
Use as a medication and supplement
Melatonin is used both as a prescription medication and an over-the-counter dietary supplement for the management of sleep disorders, including insomnia and various circadian rhythm sleep disorders such as delayed sleep phase disorder, jet lag disorder, and shift work sleep disorder. In addition to melatonin, a range of synthetic melatonin receptor agonists, namely ramelteon, tasimelteon, and agomelatine, are used in medicine.
A study published by the Journal of the American Medical Association (JAMA) in April 2023 found that only 12% of the 30 melatonin gummy product preparations analyzed had melatonin quantities within ±10% of the amounts specified on their labels. Some gummy supplements were found to contain up to 347% of the declared melatonin content. In Europe, melatonin is classified as an active pharmaceutical ingredient, highlighting the regulatory oversight of its use and distribution. Conversely, at that time the United States was considering the inclusion of melatonin in pharmacy compounding practices. A preceding study from 2022 concluded that consuming unregulated melatonin products can expose individuals, including children, to melatonin quantities ranging from 40 to 130 times higher than the recommended levels when products are used 'as directed'.
Anecdotal reports and formal research studies over the past few decades have established a link between melatonin supplementation and more vivid dreams.
History
Discovery
Melatonin's discovery is linked to the study of skin color changes in some amphibians and reptiles, a phenomenon initially observed through the administration of pineal gland extracts. In 1917, Carey Pratt McCord and Floyd P. Allen found that feeding extracts from the pineal glands of cows caused the skin of tadpoles to lighten by contracting the dark epidermal melanophores.
The hormone melatonin was isolated in 1958 by Aaron B. Lerner, a dermatology professor, and his team at Yale University. Motivated by the possibility that a substance from the pineal gland could be beneficial in treating skin diseases, they extracted and identified melatonin from bovine pineal gland extracts. Subsequent research in the mid-1970s by Lynch and others demonstrated that melatonin production follows a circadian rhythm in human pineal glands.
The first patent for the therapeutic use of melatonin as a low-dose sleep aid was awarded to Richard Wurtman at the Massachusetts Institute of Technology in 1995.
Etymology
The etymology of melatonin stems from its skin-lightening properties. As detailed in their publication in the Journal of the American Chemical Society, Lerner and his colleagues proposed the name melatonin, derived from the Greek words melas, meaning 'black' or 'dark', and tonos, meaning 'labour', 'colour' or 'suppress'. This naming convention follows that of serotonin, another agent affecting skin color, discovered in 1948 as a modulator of vascular tone, which influenced its name based on its serum vasoconstrictor effect. Melatonin was thus aptly named to reflect its role in preventing the darkening of the skin, highlighting the intersection of biochemistry and linguistics in scientific discovery.
Occurrence
Animals and humans
In vertebrates, melatonin is produced in darkness, thus usually at night, by the pineal gland, a small endocrine gland located in the center of the brain but outside the blood–brain barrier. Light/dark information reaches the suprachiasmatic nuclei from retinal photosensitive ganglion cells of the eyes, rather than via the melatonin signal (as was once postulated). Known as "the hormone of darkness", the onset of melatonin at dusk promotes activity in nocturnal (night-active) animals and sleep in diurnal ones, including humans.
In humans, roughly 30 μg of melatonin is produced daily, about 80% of it at night. The maximum plasma concentration of melatonin at night is 80–120 pg/mL, while daytime concentrations are between 10–20 pg/mL.
Many animals and humans use the variation in duration of melatonin production each day as a seasonal clock. In animals including humans, the profile of melatonin synthesis and secretion is affected by the variable duration of night in summer as compared to winter. The change in duration of secretion thus serves as a biological signal for the organization of daylength-dependent (photoperiodic) seasonal functions such as reproduction, behavior, coat growth, and camouflage coloring in seasonal animals. In seasonal breeders that do not have long gestation periods and that mate during longer daylight hours, the melatonin signal controls the seasonal variation in their sexual physiology, and similar physiological effects can be induced by exogenous melatonin in animals including mynah birds and hamsters. Melatonin can suppress libido by inhibiting secretion of luteinizing hormone and follicle-stimulating hormone from the anterior pituitary gland, especially in mammals that have a breeding season when daylight hours are long. The reproduction of long-day breeders is repressed by melatonin and the reproduction of short-day breeders is stimulated by melatonin. In sheep, melatonin administration has also shown antioxidant and immune-modulatory effects in prenatally stressed offspring, helping them survive the crucial first days of their lives.
During the night, melatonin regulates leptin, lowering its levels.
Cetaceans have lost all the genes for melatonin synthesis as well as those for melatonin receptors. This is thought to be related to their unihemispheric sleep pattern (one brain hemisphere at a time). Similar trends have been found in sirenians.
Plants
Until its identification in plants in 1987, melatonin was for decades thought to be primarily an animal neurohormone. When melatonin was identified in coffee extracts in the 1970s, it was believed to be a byproduct of the extraction process. Subsequently, however, melatonin has been found in all plants that have been investigated. It is present in all the different parts of plants, including leaves, stems, roots, fruits, and seeds, in varying proportions. Melatonin concentrations differ not only among plant species, but also between varieties of the same species depending on the agronomic growing conditions, varying from picograms to several micrograms per gram. Notably high melatonin concentrations have been measured in popular beverages such as coffee, tea, wine, and beer, and crops including corn, rice, wheat, barley, and oats. In some common foods and beverages, including coffee and walnuts, the concentration of melatonin has been estimated or measured to be sufficiently high to raise the blood level of melatonin above daytime baseline values.
Although a role for melatonin as a plant hormone has not been clearly established, its involvement in processes such as growth and photosynthesis is well established. Only limited evidence of endogenous circadian rhythms in melatonin levels has been demonstrated in some plant species and no membrane-bound receptors analogous to those known in animals have been described. Rather, melatonin performs important roles in plants as a growth regulator, as well as environmental stress protector. It is synthesized in plants when they are exposed to both biological stresses, for example, fungal infection, and nonbiological stresses such as extremes of temperature, toxins, increased soil salinity, drought, etc.
Herbicide-induced oxidative stress has been experimentally mitigated in vivo in a high-melatonin transgenic rice. Studies conducted on lettuce grown in saline soil conditions have shown that the application of melatonin significantly mitigates the harmful effects of salinity. Foliar application increases the number of leaves, their surface area, and their fresh weight, as well as the content of chlorophyll a, chlorophyll b, and carotenoids, compared with plants not treated with melatonin.
Fungal disease resistance is another role. Added melatonin increases resistance in Malus prunifolia against Diplocarpon mali. Melatonin also acts as a growth inhibitor on fungal pathogens, including Alternaria, Botrytis, and Fusarium spp., and decreases the speed of infection. As a seed treatment, it protects Lupinus albus from fungi, and it dramatically slows infection of Arabidopsis thaliana and Nicotiana benthamiana by Pseudomonas syringae pv. tomato DC3000.
Fungi
Melatonin has been observed to reduce stress tolerance in Phytophthora infestans in plant-pathogen systems. Danish pharmaceutical company Novo Nordisk have used genetically modified yeast (Saccharomyces cerevisiae) to produce melatonin.
Bacteria
Melatonin is produced by α-proteobacteria and photosynthetic cyanobacteria. There is no report of its occurrence in archaea, which suggests that melatonin originated in bacteria, most likely to protect the first cells from the damaging effects of oxygen in the primitive Earth's atmosphere.
Novo Nordisk have used genetically modified Escherichia coli to produce melatonin.
Archaea
In 2022, the discovery of serotonin N-acetyltransferase (SNAT)—the penultimate, rate-limiting enzyme in the melatonin biosynthetic pathway—in the archaeon Thermoplasma volcanium firmly places melatonin biosynthesis in all three major domains of life, dating back to ~4 Gya.
Food products
Naturally-occurring melatonin has been reported in foods, at concentrations of about 0.17–13.46 ng/g in tart cherries, and in bananas, plums, grapes, rice, cereals, herbs, olive oil, wine, and beer. The consumption of milk and sour cherries may improve sleep quality. When birds ingest melatonin-rich plant feed, such as rice, the melatonin binds to melatonin receptors in their brains. When humans consume foods rich in melatonin, such as banana, pineapple, and orange, their blood levels of melatonin increase significantly.
| Biology and health sciences | Animal hormones | Biology |
285235 | https://en.wikipedia.org/wiki/Dacite | Dacite | Dacite () is a volcanic rock formed by rapid solidification of lava that is high in silica and low in alkali metal oxides. It has a fine-grained (aphanitic) to porphyritic texture and is intermediate in composition between andesite and rhyolite. It is composed predominantly of plagioclase feldspar and quartz.
Dacite is relatively common, occurring in many tectonic settings. It is associated with andesite and rhyolite as part of the subalkaline tholeiitic and calc-alkaline magma series.
Etymology
The word dacite comes from Dacia, a province of the Roman Empire which lay between the Danube River and Carpathian Mountains (now modern Romania and Moldova) where the rock was first described.
The term dacite was used for the first time in the scientific literature in the book Geologie Siebenbürgens (The Geology of Transylvania) by Austrian geologists Franz Ritter von Hauer and Guido Stache. Dacite was originally defined as a new rock type to separate calc-alkaline rocks with oligoclase phenocrysts (dacites) from rocks with orthoclase phenocrysts (rhyolites).
Composition
Dacite consists mostly of plagioclase feldspar and quartz with biotite, hornblende, and pyroxene (augite or enstatite). The quartz appears as rounded, corroded phenocrysts, or as an element of the ground-mass. The plagioclase in dacite ranges from oligoclase to andesine and labradorite. Sanidine occurs, although in small proportions, in some dacites, and when abundant gives rise to rocks that form transitions to the rhyolites.
The relative proportions of feldspars and quartz in dacite, and in many other volcanic rocks, are illustrated in the QAPF diagram. This defines dacite as having a content of 20% to 60% quartz, with plagioclase making up 65% or more of its feldspar content. However, while the IUGS recommends classifying volcanic rocks on the basis of their mineral composition whenever possible, dacites are often so fine-grained that mineral identification is impractical. The rock must then be classified chemically based on its content of silica and alkali metal oxides (K2O plus Na2O). The TAS classification puts dacite in the O3 sector.
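The QAPF criteria quoted above amount to two numeric checks. The Python sketch below illustrates them under simplifying assumptions: the function name and the flat percentage inputs are invented for illustration, and the actual QAPF classification works with modal mineral proportions renormalized to the quartz–alkali feldspar–plagioclase–feldspathoid system.

def is_dacite_qapf(quartz_pct, plagioclase_pct, alkali_feldspar_pct):
    # Criteria as stated in the text: 20% to 60% quartz, with
    # plagioclase making up 65% or more of the feldspar content.
    total_feldspar = plagioclase_pct + alkali_feldspar_pct
    if total_feldspar == 0:
        return False
    plagioclase_fraction = plagioclase_pct / total_feldspar
    return 20 <= quartz_pct <= 60 and plagioclase_fraction >= 0.65

print(is_dacite_qapf(30, 50, 10))  # True: 30% quartz, plagioclase is ~83% of feldspar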
Texture
In hand specimen, many of the hornblende and biotite dacites are grey or pale brown and yellow rocks with white feldspars, and black crystals of biotite and hornblende. Other dacites, especially pyroxene-bearing dacites, are darker colored.
In thin section, dacites may have an aphanitic to porphyritic texture. Porphyritic dacites contain blocky highly zoned plagioclase phenocrysts and/or rounded corroded quartz phenocrysts. Subhedral hornblende and elongated biotite grains are present. Sanidine phenocrysts and augite (or enstatite) are found in some samples. The groundmass of these rocks is often aphanitic microcrystalline, with a web of minute feldspars mixed with interstitial grains of quartz or tridymite; but in many dacites it is largely vitreous, while in others it is felsitic or cryptocrystalline.
Geological context and formation
Dacite usually forms as an intrusive rock such as a dike or sill. Examples of this type of dacite outcrop are found in northwestern Montana and northeastern Bulgaria. Nevertheless, because of its moderately high silica content, dacitic magma is quite viscous and therefore prone to explosive eruption. A notorious example of this is Mount St. Helens, where dacite domes formed from previous eruptions. Pyroclastic flows may also be of dacitic composition, as is the case with the Fish Canyon Tuff of La Garita Caldera.
Dacitic magma is formed by the subduction of young oceanic crust under a thick felsic continental plate. Oceanic crust is hydrothermally altered, causing the addition of quartz and sodium. As the young, hot oceanic plate is subducted under continental crust, the subducted slab partially melts and interacts with the upper mantle through convection and dehydration reactions. The process of subduction creates metamorphism in the subducting slab. When this slab reaches the mantle and initiates the dehydration reactions, minerals such as talc, serpentine, mica and amphiboles break down, generating a more sodic melt. The magma then continues to migrate upwards, causing differentiation, and becomes even more sodic and silicic as it rises. Once at the cold surface, the sodium-rich magma crystallizes plagioclase, quartz and hornblende. Accessory minerals like pyroxenes provide insight into the history of the magma.
The formation of dacite provides a great deal of information about the connection between oceanic crust and continental crust. It provides a model for the generation of felsic, buoyant, perennial rock from a mafic, dense, short-lived one.
Dacite's role in the formation of Archean continental crust
The process by which dacite forms has been used to explain the generation of continental crust during the Archean eon. At that time, the production of dacitic magma was more ubiquitous, due to the availability of young, hot oceanic crust. Today, the colder oceanic crust that subducts under most plates is not able to melt prior to the dehydration reactions, thus inhibiting the process.
Molten dacite magma at Kīlauea
Dacitic magma was encountered in a drillhole during geothermal exploration on Kīlauea in 2005. At a depth of 2488 m, the magma flowed up the wellbore. This produced several kilograms of clear, colorless vitric (glassy, non-crystalline) cuttings at the surface. The dacite magma is a residual melt of the typical basalt magma of Kīlauea.
Distribution
Dacite is relatively common and occurs in various tectonic and magmatic contexts:
In oceanic volcanic series. Examples: Iceland (Heiðarsporður ridge), Juan de Fuca Ridge.
In calc-alkaline and tholeiitic volcanic series of the subduction zones of island arcs and active continental margins. Examples of dacitic magmatism in island arcs are Japan, the Philippines, the Aleutians, the Antilles, the Sunda Arc (Mount Batur), Tonga and the South Sandwich Islands. Examples of dacitic magmatism in active continental margins are the Cascade Range, Guatemala and the Andes (Ecuador, Peru and Chile).
In continental volcanic series, often in association with tholeiitic basalts and intermediary rocks.
The type locality of dacite is Gizella quarry near Poieni, Cluj in Romania. Other occurrences of dacite in Europe are Germany (Weiselberg), Greece (Nisyros and Thera), Italy (in Bozen quartz porphyry, and Sardinia), Austria (Styrian Volcano Arc), Scotland (Argyll), Slovakia, Spain (El Hoyazo near Almería), France (Massif de l'Esterel) and Hungary (Csódi Hill).
Sites outside Europe include Iran, Morocco, New Zealand (volcanic region of Taupo), Turkey, USA and Zambia.
Dacite is found extraterrestrially at Nili Patera caldera of Syrtis Major Planum on Mars.
| Physical sciences | Igneous rocks | Earth science |
285239 | https://en.wikipedia.org/wiki/Andesite | Andesite | Andesite () is a volcanic rock of intermediate composition. In a general sense, it is the intermediate type between silica-poor basalt and silica-rich rhyolite. It is fine-grained (aphanitic) to porphyritic in texture, and is composed predominantly of sodium-rich plagioclase plus pyroxene or hornblende.
Andesite is the extrusive equivalent of plutonic diorite. Characteristic of subduction zones, andesite represents the dominant rock type in island arcs. The average composition of the continental crust is andesitic. Along with basalts, andesites are a component of the Martian crust.
The name andesite is derived from the Andes mountain range, where this rock type is found in abundance. It was first applied by Christian Leopold von Buch in 1826.
Description
Andesite is an aphanitic (fine-grained) to porphyritic (containing larger crystals set in a fine-grained matrix) igneous rock that is intermediate in its content of silica and low in alkali metals. It has less than 20% quartz and 10% feldspathoid by volume, with at least 65% of the feldspar in the rock consisting of plagioclase. This places andesite in the basalt/andesite field of the QAPF diagram. Andesite is further distinguished from basalt by its silica content of over 52%. However, it is often not possible to determine the mineral composition of volcanic rocks, due to their very fine grain size, and andesite is then defined chemically as volcanic rock with a content of 57% to 63% silica and not more than about 6% alkali metal oxides. This places the andesite in the O2 field of the TAS classification. Basaltic andesite, with a content of 52% to 57% silica, is represented by the O1 field of the TAS classification but is not a distinct rock type in the QAPF classification.
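The chemical thresholds quoted above can be read as a simple decision rule. The following sketch uses only those numbers and is illustrative; the real TAS diagram has polygonal field boundaries, so this is not a substitute for the published classification:

def tas_field(silica_pct, alkali_pct):
    # silica_pct is SiO2 in weight percent; alkali_pct is Na2O + K2O.
    # Thresholds as stated in the text.
    if alkali_pct > 6:
        return "outside the subalkaline andesite fields"
    if 52 <= silica_pct < 57:
        return "basaltic andesite (TAS field O1)"
    if 57 <= silica_pct <= 63:
        return "andesite (TAS field O2)"
    return "outside the O1/O2 silica range"

print(tas_field(59, 4))  # andesite (TAS field O2)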
Andesite is usually light to dark grey in colour, due to its content of hornblende or pyroxene minerals, but can exhibit a wide range of shading. Darker andesite can be challenging to distinguish from basalt, but a common rule of thumb, used away from the laboratory, is that andesite has a color index less than 35.
The plagioclase in andesite varies widely in sodium content, from anorthite to oligoclase, but is typically andesine, in which anorthite makes up about 40 mol% of the plagioclase. The pyroxene minerals that may be present include augite, pigeonite, or orthopyroxene. Magnetite, zircon, apatite, ilmenite, biotite, and garnet are common accessory minerals. Alkali feldspar may be present in minor amounts.
Andesite is usually porphyritic, containing larger crystals (phenocrysts) of plagioclase formed prior to the extrusion that brought the magma to the surface, embedded in a finer-grained matrix. Phenocrysts of pyroxene or hornblende are also common. These minerals have the highest melting temperatures of the typical minerals that can crystallize from the melt and are therefore the first to form solid crystals. Classification of andesites may be refined according to the most abundant phenocryst. For example, if hornblende is the principal phenocryst mineral, the andesite will be described as a hornblende andesite.
Andesitic volcanism
Andesite lava typically has a viscosity of 3.5 × 10⁶ cP (3.5 × 10³ Pa⋅s) at its eruption temperature. This is slightly greater than the viscosity of smooth peanut butter. As a result, andesitic volcanism is often explosive, forming tuffs and agglomerates. Andesite vents tend to build up composite volcanoes rather than the shield volcanoes characteristic of basalt, with its much lower viscosity resulting from its lower silica content and higher eruption temperature.
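The two quoted viscosity figures are the same value expressed in different units, since 1 cP = 10⁻³ Pa⋅s:

\[ 3.5 \times 10^{6}\ \text{cP} \times 10^{-3}\ \frac{\text{Pa·s}}{\text{cP}} = 3.5 \times 10^{3}\ \text{Pa·s}. \]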
Block lava flows are typical of andesitic lavas from composite volcanoes. They behave in a similar manner to ʻaʻā flows but their more viscous nature causes the surface to be covered in smooth-sided angular fragments (blocks) of solidified lava instead of clinkers. As with ʻaʻā flows, the molten interior of the flow, which is kept insulated by the solidified blocky surface, advances over the rubble that falls off the flow front. They also move much more slowly downhill and are thicker in depth than ʻaʻā flows.
Generation of melts in island arcs
Though andesite is common in other tectonic settings, it is particularly characteristic of convergent plate margins. Even before the Plate Tectonics Revolution, geologists had defined an andesite line in the western Pacific that separated basalt of the central Pacific from andesite further west. This coincides with the subduction zones at the western boundary of the Pacific Plate. Magmatism in island arc regions comes from the interplay of the subducting plate and the mantle wedge, the wedge-shaped region between the subducting and overriding plates. The presence of convergent margins dominated by andesite is so characteristic of the Earth's unique plate tectonics that the Earth has been described as an "andesite planet".
During subduction, the subducted oceanic crust is subjected to increasing pressure and temperature, leading to metamorphism. Hydrous minerals such as amphibole, zeolites, or chlorite (which are present in the oceanic lithosphere) dehydrate as they change to more stable, anhydrous forms, releasing water and soluble elements into the overlying wedge of mantle. Fluxing water into the wedge lowers the solidus of the mantle material and causes partial melting. Due to the lower density of the partially molten material, it rises through the wedge until it reaches the lower boundary of the overriding plate. Melts generated in the mantle wedge are of basaltic composition, but they have a distinctive enrichment of soluble elements (e.g. potassium (K), barium (Ba), and lead (Pb)) which are contributed from sediment that lies at the top of the subducting plate. Although there is evidence to suggest that the subducting oceanic crust may also melt during this process, the relative contribution of the three components (crust, sediment, and wedge) to the generated basalts is still a matter of debate.
Basalt thus formed can contribute to the formation of andesite through fractional crystallization, partial melting of crust, or magma mixing, all of which are discussed next.
Genesis
Intermediate volcanic rocks are created via several processes:
Fractional crystallization of a mafic parent magma.
Partial melting of crustal material.
Magma mixing between felsic rhyolitic and mafic basaltic magmas in a magma reservoir.
Partial melting of metasomatized mantle.
Fractional crystallization
To achieve andesitic composition via fractional crystallization, a basaltic magma must crystallize specific minerals that are then removed from the melt. This removal can take place in a variety of ways, but most commonly this occurs by crystal settling. The first minerals to crystallize and be removed from a basaltic parent are olivines and amphiboles. These mafic minerals settle out of the magma, forming mafic cumulates. There is geophysical evidence from several arcs that large layers of mafic cumulates lie at the base of the crust. Once these mafic minerals have been removed, the melt no longer has a basaltic composition. The silica content of the residual melt is enriched relative to the starting composition. The iron and magnesium contents are depleted. As this process continues, the melt becomes more and more evolved eventually becoming andesitic. Without continued addition of mafic material, however, the melt will eventually reach a rhyolitic composition. This produces the characteristic basalt-andesite-rhyolite association of island arcs, with andesite the most distinctive rock type.
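A worked mass balance illustrates why removing mafic crystals enriches the residual melt in silica. The numbers here are illustrative assumptions, not measured compositions: starting from 100 g of basaltic melt at 50 wt% SiO2 and removing 30 g of olivine-dominated cumulate at roughly 40 wt% SiO2,

\[ \frac{100(0.50) - 30(0.40)}{100 - 30} = \frac{50 - 12}{70} \approx 0.54, \]

so the residual 70 g of melt has been enriched to about 54 wt% SiO2, while iron and magnesium are carried away in the cumulate.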
Partial melting of the crust
Partially molten basalt in the mantle wedge moves upwards until it reaches the base of the overriding crust. Once there, the basaltic melt can either underplate the crust, creating a layer of molten material at its base, or it can move into the overriding plate in the form of dykes. If it underplates the crust, the basalt can (in theory) cause partial melting of the lower crust due to the transfer of heat and volatiles. Models of heat transfer, however, show that arc basalts emplaced at temperatures of 1100–1240 °C cannot provide enough heat to melt lower crustal amphibolite. Basalt can, however, melt pelitic upper crustal material.
Magma mixing
In continental arcs, such as the Andes, magma often pools in the shallow crust creating magma chambers. Magmas in these reservoirs become evolved in composition (dacitic to rhyolitic) through both the process of fractional crystallization and partial melting of the surrounding country rock. Over time as crystallization continues and the system loses heat, these reservoirs cool. In order to remain active, magma chambers must have continued recharge of hot basaltic melt into the system. When this basaltic material mixes with the evolved rhyolitic magma, the composition is returned to andesite, its intermediate phase. Evidence of magma mixing is provided by the presence of phenocrysts in some andesites that are not in chemical equilibrium with the melt in which they are found.
Partial melting of metasomatized mantle
High-magnesium andesites (boninites) in island arcs may be primitive andesites, generated from metasomatized mantle.
Experimental evidence shows that depleted mantle rock exposed to alkali fluids such as might be given off by a subducting slab generates magma resembling high-magnesium andesites.
Notable andesite structures
Notable stonemasonry structures built with andesite include:
Borobudur in Java, Indonesia.
Sacsayhuamán citadel in Peru.
Gate of the Sun in Bolivia.
Templo Mayor ruins and other historic buildings in Mexico City, built from andesite and Tezontle basaltic andesite.
Extraterrestrial samples
In 2009, researchers revealed that andesite was found in two meteorites (numbered GRA 06128 and GRA 06129) that were discovered in the Graves Nunataks icefield during the US Antarctic Search for Meteorites 2006/2007 field season. This possibly points to a new mechanism to generate andesite crust.
Along with basalts, andesites are a component of the Martian crust. The presence of distinctive steep-sided domes on Venus suggests that andesite may have been erupted from large magma chambers where crystal settling could take place.
| Physical sciences | Igneous rocks | Earth science |
285247 | https://en.wikipedia.org/wiki/Migmatite | Migmatite | Migmatite is a composite rock found in medium and high-grade metamorphic environments, commonly within Precambrian cratonic blocks. It consists of two or more constituents often layered repetitively: one layer is an older metamorphic rock that was reconstituted subsequently by partial melting ("paleosome"), while the alternate layer has a pegmatitic, aplitic, granitic or generally plutonic appearance ("neosome"). Commonly, migmatites occur below deformed metamorphic rocks that represent the base of eroded mountain chains.
Migmatites form under extreme temperature and pressure conditions during prograde metamorphism, when partial melting occurs in metamorphic paleosome. Components exsolved by partial melting are called neosome (meaning ‘new body’), which may or may not be heterogeneous at the microscopic to macroscopic scale. Migmatites often appear as tightly, incoherently folded veins (ptygmatic folds). These form segregations of leucosome, light-colored granitic components exsolved within melanosome, a dark colored amphibole- and biotite-rich setting. If present, a mesosome, intermediate in color between a leucosome and melanosome, forms a more or less unmodified remnant of the metamorphic parent rock paleosome. The light-colored components often give the appearance of having been molten and mobilized.
The diagenesis–metamorphism sequence
Migmatite is the penultimate member of a sequence of lithology transformations first identified by Lyell (1837). Lyell had a clear perception of the regional diagenesis sequence in sedimentary rocks that remains valid today. It begins ('A') with the deposition of unconsolidated sediment (the protolith for future metamorphic rocks). As temperature and pressure increase with depth, the protolith passes through a diagenetic sequence from porous sedimentary rock through indurated rocks and phyllites ('A2') to metamorphic schists ('C1') in which the initial sedimentary components can still be discerned. Deeper still, the schists are reconstituted as gneiss ('C2'), in which folia of residual minerals alternate with quartzo-feldspathic layers; partial melting continues as small batches of leucosome coalesce to form distinct layers in the neosome, and the rock becomes recognizable migmatite ('D1'). The resulting leucosome layers in stromatic migmatites still retain water and gas in a discontinuous reaction series from the paleosome. This supercritical H2O and CO2 content renders the leucosome extremely mobile.
Bowen (1922, p. 184) described the process as being ‘In part due to … reactions between already crystallized mineral components of the rock and the remaining still-molten magma, and in part to reactions due to adjustments of equilibrium between the extreme end-stage, highly concentrated, "mother-liquor", which, by selective freezing, has been enriched with the more volatile gases usually termed "mineralizers," among which water figures prominently’. J.J. Sederholm (1926) described rocks of this type, demonstrably of mixed origin, as migmatites. He described the granitising 'ichors' as having properties intermediate between an aqueous solution and a very much diluted magma, with much of it in the gaseous state.
Partial melting, anatexis and the role of water
The role of partial melting is demanded by experimental and field evidence. Rocks begin to partially melt when they reach a combination of sufficiently high temperatures (>650 °C) and pressures (>34 MPa). Some rocks have compositions that produce more melt than others at a given temperature, a rock property called fertility. Some minerals in a sequence will make more melt than others; some do not melt until a higher temperature is reached. If the temperature attained only just surpasses the solidus, the migmatite will contain a few small patches of melt scattered about in the most fertile rock. Holmquist (1916) called the process whereby metamorphic rocks are transformed into granulite ‘anatexis’.
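The threshold quoted above can be encoded as a simple conjunction of conditions. This is a deliberate simplification for illustration: real solidus curves vary continuously with rock composition and water content, and the function name and single cutoff are assumptions, not a petrological model:

```python
# Trivial encoding of the melting threshold quoted above. Real solidus
# curves vary continuously with rock composition and water content; a
# single temperature/pressure cutoff is a deliberate simplification.

def may_begin_melting(temperature_c: float, pressure_mpa: float) -> bool:
    """Both conditions from the text must hold for partial melting to begin."""
    return temperature_c > 650 and pressure_mpa > 34

print(may_begin_melting(700, 50))  # True: both thresholds exceeded
print(may_begin_melting(700, 20))  # False: pressure too low
```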
The segregation of melt during the prograde part of the metamorphic history (temperature > solidus) involves separating the melt fraction from the residuum, whose higher specific gravity causes it to accumulate at a lower level. The anatectic melt then migrates down local pressure gradients with little or no crystallization. The network of channels through which the melt moved at this stage may be lost by compression of the melanosome, leaving isolated lenses of leucosome. The melt product gathers in an underlying channel, where it becomes subject to differentiation.
Conduction is the principal mechanism of heat transfer in the continental crust; where shallow layers have been exhumed or buried rapidly there is a corresponding inflection in the geothermal gradient. Cooling due to surface exposure is conducted very slowly to deeper rocks, so the deeper crust is slow to heat up and slow to cool. Numerical models of crustal heating confirm slow cooling in the deep crust. Therefore, once formed, anatectic melt can exist in the middle and lower crust for a very long period of time. It is squeezed laterally to form sills and laccolithic and lopolithic structures of mobile granulite at depths of c. 10–20 km. In outcrop today, only stages of this process arrested during its initial rapid uplift are visible. Wherever the resulting fractionated granulite rises steeply in the crust, water exits its supercritical phase and the granulite starts to crystallize, becoming first fractionated melt plus crystals and then solid rock, while still at the temperatures and pressures existing below about 8 km. Water, carbon dioxide, sulphur dioxide and other components are exsolved under great pressure from the melt as it exits supercritical conditions. These components rise rapidly towards the surface and contribute to the formation of mineral deposits, volcanoes, mud volcanoes, geysers and hot springs.
Color-banded migmatites
A leucosome is the lightest-colored part of migmatite. The melanosome is the darker part, and occurs between two leucosomes or, if remnants of the more or less unmodified parent rock (mesosome) are still present, it is arranged in rims around these remnants. When present, the mesosome is intermediate in color between leucosome and melanosome.
The melanosome is a dark, mafic mineral band formed in migmatite which is melting into a eutaxitic texture; often, this leads to the formation of granite. The melanosomes form bands with leucosomes, and in that context may be described as schlieren (color banding) or migmatitic.
Migmatite textures
Migmatite textures are the product of thermal softening of the metamorphic rocks. Schlieren textures are a particularly common example of granite formation in migmatites, and are often seen in restite xenoliths and around the margins of S-type granites.
Ptygmatic folds are formed by highly plastic ductile deformation of the gneissic banding, and thus have little or no relationship to a defined foliation, unlike most regular folds. Ptygmatic folds can be restricted to particular compositional zones of the migmatite, occurring for instance in fine-grained shale protoliths but not in coarse granoblastic sandy protoliths.
When a rock undergoes partial melting some minerals will melt (neosome, i.e. newly formed), while others remain solid (paleosome, i.e. older formation). The neosome is composed of lightly colored areas (leucosome) and dark areas (melanosome). The leucosome lies in the center of the layers and is mainly composed of quartz and feldspar. The melanosome is composed of cordierite, hornblende and biotite and forms the wall zones of the neosome.
Early history of migmatite investigations
In 1795 James Hutton made some of the earliest comments on the relationship between gneiss and granite: “If granite be truly stratified, and those strata connected with the other strata of the earth, it can have no claim to originality; and the idea of primitive mountains, of late so much employed by natural philosophers, must vanish, in a more extensive view of the operations of the globe; but it is certain that granite, or a species of the same kind of stone, is thus found stratified. It is the granit feuilletée of M. de Saussure, and, if I mistake not, what is called gneis by the Germans.” The minute penetration of gneiss, schists and sedimentary deposits altered by contact-metamorphism, alternating with granitic materials along the planes of schistosity, was described by Michel-Lévy in his 1887 paper 'Sur l'Origine des Terrains Cristallins Primitifs'. He makes the following observations: “I first drew attention to the phenomenon of intimate penetration, ‘lit par lit’, of eruptive granitic and granulitic rocks that follow the schistosity planes of gneisses and schists ... But in between, in the contact zones immediately above the eruptive rock, quartz and feldspars insert themselves, bed by bed, between the leaves of the micaceous shales; it started as a detrital shale, and now we find it definitively transformed into a recent gneiss, very difficult to distinguish from ancient gneiss”.
The coincidence of schistosity with bedding gave rise to proposals of static or load metamorphism, advanced in 1889 by John Judd and others. In 1894 L. Milch recognized vertical pressure due to the weight of the overlying load as the controlling factor. In 1896 Home and Greenly agreed that granitic intrusions are closely associated with metamorphic processes: "the cause which brought about the introduction of the granite also resulted in these high and peculiar types of crystallization". A later paper by Edward Greenly in 1903 described the formation of granitic gneisses by solid diffusion, and ascribed the mechanism of lit-par-lit occurrence to the same process. Greenly drew attention to thin and regular seams of injected material, which indicated that these operations took place in hot rocks, and to undisturbed septa of country rocks, which suggested that the expression of the magma occurred by quiet diffusion rather than by forcible injection. In 1907 Sederholm called the migmatite-forming process palingenesis, and (although it specifically included partial melting and dissolution) he considered magma injection and its associated veined and brecciated rocks as fundamental to the process. The upward succession of gneiss, schist and phyllite in the Central European Urgebirge influenced Ulrich Grubenmann in 1910 in his formulation of three depth-zones of metamorphism.
Holmquist found high-grade gneisses that contained many small patches and veins of granitic material. Granites were absent nearby, so he interpreted the patches and veins to be collection sites for partial melt exuded from the mica-rich parts of the host gneiss. Holmquist gave these migmatites the name ‘venite’ to emphasize their internal origin and to distinguish them from Sederholm's ‘arterites’, which also contained veins of injected material. Sederholm later placed more emphasis on the roles of assimilation and the actions of fluids in the formation of migmatites, and used the term ‘ichor’ to describe these fluids.
Persuaded by the close connection between migmatization and granites in outcrop, Sederholm considered migmatites to be an intermediary between igneous and metamorphic rocks. He thought that the granitic partings in banded gneisses originated through the agency of either melt or a nebulous fluid, the ichor, both derived from nearby granites. An opposing view, proposed by Holmquist, was that the granitic material came from the adjacent country rock, not the granites, and that it was segregated by fluid transport. Holmquist believed that such replacive migmatites were produced during metamorphism at a relatively low metamorphic grade, with partial melting only intervening at high grade. Thus, the modern view of migmatites corresponds closely to Holmquist's concept of ultrametamorphism, and to Sederholm's concept of anatexis, but is far from the concept of palingenesis, or the various metasomatic and subsolidus processes proposed during the granitization debate. Read considered that regionally metamorphosed rocks resulted from the passage of waves or fronts of metasomatizing solutions out from the central granitization core, above which arise the zones of metamorphism.
Agmatite
Sederholm (1923) originally defined agmatite as a rock with "fragments of older rock cemented by granite", and regarded it as a type of migmatite. There is a close connection between migmatites and the occurrence of ‘explosion breccias’ in schists and phyllites adjacent to diorite and granite intrusions. Rocks matching this description can also be found around igneous intrusive bodies in low-grade or unmetamorphosed country rocks. Brown (1973) argued that agmatites are not migmatites and should be called ‘intrusion breccias’ or ‘vent agglomerates’. Reynolds (1951) thought the term ‘agmatite’ ought to be abandoned.
Migmatite melts provide buoyancy for sedimentary isostasy
Recent geochronological studies from granulite-facies metamorphic terranes (e.g. Willigers et al. 2001) show that metamorphic temperatures remained above the granite solidus for between 30 and 50 My. This suggests that once formed, anatectic melt can exist in the middle and lower crust for a very long period of time. The resulting granulite is free to move laterally and up along weaknesses in the overburden in directions determined by the pressure gradient.
In areas where it lies beneath a deepening sedimentary basin, a portion of the granulite melt will tend to move laterally beneath the base of previously metamorphosed rocks that have not yet reached the migmatitic stage of anatexis. It will congregate in areas where pressure is lower. The melt loses its volatile content when it reaches a level where temperature and pressure fall below the supercritical water phase boundary. The melt then crystallizes at that level, preventing later melt from reaching it until persistent magma pressure pushes the overburden upwards.
Other migmatite hypotheses
For migmatised argillaceous rocks, partial or fractional melting would first produce a volatile- and incompatible-element-enriched partial melt of granitic composition. Such granites, derived from sedimentary rock protoliths, are termed S-type granites; they are typically potassic, sometimes contain leucite, and form adamellite, granite and syenite. Volcanic equivalents would be rhyolite and rhyodacite.
Migmatised igneous or lower-crustal rocks that melt do so to form a similar granitic I-type melt, but with distinct geochemical signatures and a typically plagioclase-dominant mineralogy, forming monzonite, tonalite and granodiorite compositions. Volcanic equivalents would be dacite and trachyte.
It is difficult to melt mafic metamorphic rocks except in the lower mantle, so it is rare to see migmatitic textures in such rocks. However, eclogite and granulite are roughly equivalent mafic rocks.
Etymology
The Finnish petrologist Jakob Sederholm first used the term in 1907 for rocks within the Scandinavian craton in southern Finland. The term was derived from the Greek word μιγμα, meaning a mixture.
| Physical sciences | Metamorphic rocks | Earth science |
285510 | https://en.wikipedia.org/wiki/New%20York%20City%20Subway | New York City Subway | The New York City Subway is a rapid transit system in New York City serving the boroughs of Manhattan, Brooklyn, Queens, and the Bronx. It is owned by the government of New York City and leased to the New York City Transit Authority, an affiliate agency of the state-run Metropolitan Transportation Authority (MTA). Opened on October 27, 1904, the New York City Subway is one of the world's oldest public transit systems, one of the most-used, and the one with the most stations, with 472 stations in operation (423, if stations connected by transfers are counted as single stations).
The system has operated 24/7 service every day of the year throughout most of its history, barring emergencies and disasters. By annual ridership, the New York City Subway is the busiest rapid transit system in both the Western Hemisphere and the Western world, as well as the eleventh-busiest rapid transit rail system in the world. The subway carried unlinked, non-unique riders in . Daily ridership has been calculated since 1985; the record, over 6.2 million, was set on October 29, 2015.
The system is also one of the world's longest. Overall, the system consists of of routes, comprising a total of of revenue track and a total of including non-revenue trackage. Of the system's routes or "services" (which usually share track or "lines" with other services), pass through Manhattan, the exceptions being the train, the Franklin Avenue Shuttle, and the Rockaway Park Shuttle. Large portions of the subway outside Manhattan are elevated, on embankments, or in open cuts, and a few stretches of track run at ground level; 40% of track is above ground. Many lines and stations have both express and local services. These lines have three or four tracks. Normally, the outer two are used by local trains, while the inner one or two are used by express trains.
, the New York City Subway's budgetary burden for expenditures was $8.7 billion, supported by collection of fares, bridge tolls, and earmarked regional taxes and fees, as well as direct funding from state and local governments.
History
Alfred Ely Beach built the first demonstration of an underground transit system in New York City in 1869 and opened it in February 1870. His Beach Pneumatic Transit extended only under Broadway in Lower Manhattan, operating from Warren Street to Murray Street, and exhibited his idea for an atmospheric railway as a subway. The tunnel was never extended for political and financial reasons. Today, no part of this line remains, as the tunnel was completely within the limits of the present-day City Hall station under Broadway. The Great Blizzard of 1888 helped demonstrate the benefits of an underground transportation system. A plan for the construction of the subway was approved in 1894, and construction began in 1900. Even though the underground portions of the subway had yet to be built, several above-ground segments of the modern-day New York City Subway system were already in service by then. The oldest structure still in use opened in 1885 as part of the BMT Lexington Avenue Line in Brooklyn and is now part of the BMT Jamaica Line. The oldest right-of-way, part of the BMT West End Line near Coney Island Creek, was in use in 1864 as a steam railroad called the Brooklyn, Bath and Coney Island Rail Road.
The first underground line of the subway opened on October 27, 1904, almost 36 years after the opening of the first elevated line in New York City (which became the IRT Ninth Avenue Line). The subway line, then called the "Manhattan Main Line", ran from City Hall station northward under Lafayette Street (then named Elm Street) and Park Avenue (then named Fourth Avenue) before turning westward at 42nd Street. It then curved northward again at Times Square, continuing under Broadway before terminating at 145th Street station in Harlem. Its operation was leased to the Interborough Rapid Transit Company (IRT), and over 150,000 passengers paid the 5-cent fare to ride it on the first day of operation.
By the late 1900s and early 1910s, the lines had been consolidated into two privately owned systems, the IRT and the Brooklyn Rapid Transit Company (BRT, later Brooklyn–Manhattan Transit Corporation, BMT). The city built most of the lines and leased them to the companies. The first line of the city-owned and operated Independent Subway System (IND) opened in 1932. This system was intended to compete with the private systems and allow some of the elevated railways to be torn down, but stayed within the core of the city due to its small startup capital. This required it to be run 'at cost', necessitating fares up to double the five-cent fare of the time, or 10¢.
In 1940, the city bought the two private systems. Some elevated lines ceased service immediately, while others closed soon after. Integration was slow, but several connections were built between the IND and BMT, which now operate as one division, the B Division. Since the former IRT tunnels are narrower and have sharper curves and shorter station platforms, they cannot accommodate B Division cars, and the former IRT remains its own division, the A Division. Many passenger transfers between stations of all three former companies have been created, allowing the entire network to be treated as a single unit.
During the late 1940s, the system recorded high ridership, and on December 23, 1946, the system-wide record of 8,872,249 fares was set.
The New York City Transit Authority (NYCTA), a public authority overseen by New York City, was created in 1953 to take over subway, bus, and streetcar operations from the city, and was placed under control of the state-level Metropolitan Transportation Authority in 1968.
Organized in 1934 by transit workers of the BRT, IRT, and IND, the Transport Workers Union of America Local 100 remains the largest and most influential local of the labor unions. Since the union's founding, there have been three union strikes over contract disputes with the MTA: 12 days in 1966, 11 days in 1980, and three days in 2005.
By the 1970s and 1980s, the New York City Subway was at an all-time low. Ridership had dropped to 1910s levels, and graffiti and crime were rampant. Maintenance was poor, and delays and track problems were common. Still, the NYCTA managed to open six new subway stations in the 1980s, make the current fleet of subway cars graffiti-free, as well as order 1,775 new subway cars. By the early 1990s, conditions had improved significantly, although maintenance backlogs accumulated during those 20 years are still being fixed today.
Entering the 21st century, progress continued despite several disasters. The September 11 attacks resulted in service disruptions on lines running through Lower Manhattan, particularly the IRT Broadway–Seventh Avenue Line, which ran directly underneath the World Trade Center. Sections of the tunnel, as well as the Cortlandt Street station, which was directly underneath the Twin Towers, were severely damaged. Rebuilding required the suspension of service on that line south of Chambers Street. Ten other nearby stations were closed for cleanup. By March 2002, seven of those stations had reopened. Except for Cortlandt Street, the rest reopened in September 2002, along with service south of Chambers Street. Cortlandt Street reopened in September 2018.
In October 2012, Hurricane Sandy flooded several underwater tunnels and other facilities near New York Harbor, as well as trackage over Jamaica Bay. The immediate damage was fixed within six months, but long-term resiliency and rehabilitation projects continued for several years. The recovery projects after the hurricane included the restoration of the new South Ferry station from 2012 to 2017; the full closure of the Montague Street Tunnel from 2013 to 2014; and the partial 14th Street Tunnel shutdown from 2019 to 2020. Annual ridership on the New York City Subway system, which totaled nearly 1.7 billion in 2019, declined dramatically during the COVID-19 pandemic and did not surpass one billion again until 2022.
Construction methods
When the IRT subway debuted in 1904, the typical tunnel construction method was cut-and-cover: the street was torn up to dig the tunnel below and then rebuilt from above. Surface traffic was interrupted while the street was dug up, with temporary steel and wooden bridges carrying it over the construction.
Contractors in this type of construction faced many obstacles, both natural and human-made. They had to deal with rock formations and groundwater, which required pumps. Twelve miles of sewers, as well as water and gas mains, electric conduits, and steam pipes, had to be rerouted. Street railways had to be torn up to allow the work. The foundations of tall buildings often ran near the subway construction and in some cases needed underpinning to ensure stability.
This method worked well for digging soft dirt and gravel near the street surface. Tunnelling shields were required for deeper sections, such as the Harlem and East River tunnels, which used cast-iron tubes. Rock or concrete-lined tunnels were used on segments from 33rd to 42nd Streets under Park Avenue; 116th to 120th Streets under Broadway; 145th to Dyckman Streets (Fort George) under Broadway and St. Nicholas Avenue; and 96th Street and Broadway to Central Park North and Lenox Avenue.
About 40% of the subway system runs on surface or elevated tracks, including steel or cast-iron elevated structures, concrete viaducts, embankments, open cuts and surface routes. , there are of elevated tracks. All of these construction methods are completely grade-separated from road and pedestrian crossings, and most crossings of two subway tracks are grade-separated with flying junctions. The only exceptions, at-grade junctions of two lines in regular service, are the 142nd Street and Myrtle Avenue junctions, whose tracks intersect at the same level, and the same-direction pairs of tracks on the IRT Eastern Parkway Line at Rogers Junction.
The 7,700 workers who built the original subway lines were mostly immigrants living in Manhattan.
More recent projects use tunnel boring machines, which increase the cost but minimize disruption at street level and avoid existing utilities. Examples of such projects include the extension of the IRT Flushing Line and the IND Second Avenue Line.
Expansion
Since the opening of the original New York City Subway line in 1904, multiple official and planning agencies have proposed numerous extensions to the subway system. One of the more expansive proposals was the "IND Second System", part of a plan to construct new subway lines in addition to taking over existing subway lines and railroad rights-of-way. The most grandiose IND Second Subway plan, conceived in 1929, was to be part of the city-operated IND, and was to comprise almost of the current subway system. By 1939, with unification planned, all three systems were included within the plan, which was ultimately never carried out. Many different plans were proposed over the years of the subway's existence, but expansion of the subway system mostly stopped during World War II.
Though most of the routes proposed over the decades have never been built, discussion of developing some of these lines remains strong, to alleviate existing subway capacity constraints and overcrowding; the most notable are the proposals for the Second Avenue Subway. Plans for new lines date back to the early 1910s, and expansion plans have been proposed during many years of the system's existence.
After the IND Sixth Avenue Line was completed in 1940, the city went into great debt, and only 33 new stations have been added to the system since, nineteen of which were part of defunct railways that already existed. Five stations were on the abandoned New York, Westchester and Boston Railway, which was incorporated into the system in 1941 as the IRT Dyre Avenue Line. Fourteen more stations were on the abandoned LIRR Rockaway Beach Branch (now the IND Rockaway Line), which opened in 1955. Two stations (57th Street and Grand Street) were part of the Chrystie Street Connection, and opened in 1968; the Harlem–148th Street terminal opened that same year in an unrelated project.
Six were built as part of a 1968 plan: three on the Archer Avenue Lines, opened in 1988, and three on the 63rd Street Lines, opened in 1989. The new South Ferry station was built and connected to the existing Whitehall Street–South Ferry station in 2009. The one-stop 7 Subway Extension to the west side of Manhattan, consisting of the 34th Street–Hudson Yards station, was opened in 2015, and three stations on the Second Avenue Subway in the Upper East Side were opened as part of Phase 1 of the line at the beginning of 2017.
Lines and routes
Many rapid transit systems run relatively static routings, so that a train "line" is more or less synonymous with a train "route". In New York City, routings change often, for various reasons. Within the nomenclature of the subway, the "line" describes the physical railroad track or series of tracks that a train "route" uses on its way from one terminal to another. "Routes" (also called "services") are distinguished by a letter or a number and "lines" have names. Trains display their route designation.
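The line/route distinction can be made concrete with a small data-model sketch. This is an illustrative simplification, not an MTA data model; the class names and the exact track assignments are assumptions for demonstration, though the Q does run over the Brighton and Broadway lines:

```python
# Illustrative model of the "line" (physical track) vs. "route" (service)
# distinction described above. Not an MTA data model; names are assumptions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Line:
    """A named physical track, e.g. the BMT Brighton Line."""
    name: str

@dataclass
class Route:
    """A lettered or numbered service that runs over one or more lines."""
    designation: str            # e.g. "Q"
    lines: list[Line] = field(default_factory=list)

brighton = Line("BMT Brighton Line")
broadway = Line("BMT Broadway Line")

# One route traverses several lines between its terminals...
q_service = Route("Q", [brighton, broadway])

# ...and one line typically carries several routes.
services = [Route("N", [broadway]), Route("R", [broadway]), q_service]
print([s.designation for s in services if broadway in s.lines])  # ['N', 'R', 'Q']
```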
There are train services in the subway system, including three short shuttles. Each route has a color and a local or express designation representing the Manhattan trunk line of the service. New York City residents seldom refer to services by color (e.g., "blue line" or "green line") but out-of-towners and tourists often do.
The , , , , , , and trains are fully local and make all stops. The , , , , , , , , , , and trains have portions of express and local service. , , , and trains vary by direction, day, or time of day. The letter is used for three shuttle services: Franklin Avenue Shuttle, Rockaway Park Shuttle, and 42nd Street Shuttle.
Though the subway system operates on a 24-hour basis, during late night hours some of the designated routes do not run, run as a shorter route (often referred to as the "shuttle train" version of its full-length counterpart) or run with a different stopping pattern. These are usually indicated by smaller, secondary route signage on station platforms. Because there is no nightly system shutdown for maintenance, tracks and stations must be maintained while the system is operating. This work sometimes necessitates service changes during midday, overnight hours, and weekends.
When parts of lines are temporarily shut down for construction purposes, the transit authority can substitute free shuttle buses (using MTA Regional Bus Operations bus fleet) to replace the routes that would normally run on these lines. The Transit Authority announces planned service changes through its website, via placards that are posted on station and interior subway-car walls, and through its Twitter page.
Nomenclature
Subway map
Current official transit maps of the New York City Subway are based on a 1979 design by Michael Hertz Associates. The maps are not geographically accurate due to the complexity of the system (Manhattan being the smallest borough, but having the most services), but they do show major city streets as an aid to navigation. The newest edition took effect on June 27, 2010, and makes Manhattan bigger and Staten Island smaller, with minor tweaks happening to the map when more permanent changes occur.
Earlier diagrams of the subway, the first produced in 1958, were perceived as more geographically inaccurate than the diagrams of today. The design of the subway map by Massimo Vignelli, published by the MTA between 1972 and 1979, has become a modern classic, but the MTA deemed the map flawed due to its placement of geographical elements.
A late night-only version of the map was introduced on January 30, 2012. On September 16, 2011, the MTA introduced a Vignelli-style interactive subway map, "The Weekender", an online map that provides information about any planned work, from late Friday night to early Monday morning. In October 2020, the MTA launched a digital version of the map showing real-time service patterns and service changes, designed by Work & Co.
Several privately produced schematics are available online or in printed form, such as those by Hagstrom Map.
Stations
Out of the 472 stations, 470 are served 24 hours a day. Underground stations in the New York City Subway are typically accessed by staircases going down from street level. Many of these staircases are painted in a common shade of green, with slight or significant variations in design. Other stations have unique entrances reflective of their location or date of construction. Several station entrance stairs, for example, are built into adjacent buildings. Nearly all station entrances feature color-coded globe or square lamps signifying their status as an entrance. The current number of stations is smaller than the peak of the system. In addition to the demolition of former elevated lines, which collectively have resulted in the demolition of over a hundred stations, other closed stations and unused portions of existing stations remain in parts of the system.
Concourse
Many stations in the subway system have mezzanines. Mezzanines allow for passengers to enter from multiple locations at an intersection and proceed to the correct platform without having to cross the street before entering. Inside mezzanines are fare control areas, where passengers physically pay their fare to enter the subway system. In many older stations, the fare control area is at platform level with no mezzanine crossovers. Many elevated stations also have platform-level fare control with no common station house between directions of service.
Upon entering a station, passengers may use station booths (formerly known as token booths) or vending machines to buy their fare, which is currently stored in a MetroCard or OMNY card. Each station has at least one booth, typically located at the busiest entrance. After swiping the card at a turnstile, customers enter the fare-controlled area of the station and continue to the platforms. Inside fare control are "Off-Hours Waiting Areas", which consist of benches and are identified by a yellow sign.
Platforms
A typical subway station has waiting platforms ranging from long. Some are longer. Platforms of former commuter rail stations, such as those on the IND Rockaway Line, are even longer. With the many different lines in the system, one platform often serves more than one service. Passengers need to look at the overhead signs to see which trains stop there and when, and at the arriving train to identify it.
There are several common platform configurations. On a double-track line, a station may have one center island platform used for trains in both directions, or two side platforms, one for each direction. On lines with three or four tracks and express service, local stops have side platforms, and express trains on the middle one or two tracks pass the station without stopping. On these lines, express stations typically have two island platforms, one for each direction; each island platform provides a cross-platform interchange between local and express services. Some four-track lines with express service have two tracks each on two levels and use both island and side platforms.
Accessibility
Since the majority of the system was built before 1990, the year the Americans with Disabilities Act (ADA) went into effect, many New York City Subway stations were not designed to be accessible to all. Since then, elevators have been built in newly constructed stations to comply with the ADA. (Most grade-level stations required little modification to meet ADA standards.) Many accessible stations have AutoGate access. In addition, the MTA identified "key stations", high-traffic and/or geographically important stations, which must conform to the ADA when they are extensively renovated. Under plans from the MTA in 2016, the number of ADA accessible stations would go up to 144 by 2020. , there were ADA-accessible stations.
Over the years, the MTA has been involved in a number of lawsuits over the lack of accessibility in its stations. The Eastern Paralyzed Veterans Association filed what may have been the first of these suits in 1979, based on state law. The lawsuits have relied on a number of different legal bases, but most have centered around the MTA's failure to include accessibility as a part of its plans for remodeling various stations. , ADA-accessibility projects are expected to be started or completed at 51 stations as part of the 2020–2024 Capital Program. This would allow one of every two to four stations on every line to be accessible, so that all non-accessible stops would be a maximum of two stops from an accessible station.
In 2022, the MTA agreed in a settlement to make 95 percent of subway and Staten Island Railway stations accessible by 2055. By comparison, all but one of Boston's MBTA subway stations are accessible, the Chicago "L" plans all stations to be accessible in the 2030s, the Toronto subway will be fully accessible by 2025, and Montreal Metro plans all stations to be accessible by 2038. Both the Boston and Chicago systems are as old or older than the New York City Subway, though each of these systems has fewer stations than the New York City Subway. Newer systems, such as the Washington Metro and Bay Area Rapid Transit, have been fully accessible from their opening in the 1970s.
Rolling stock
In November 2016, the New York City Subway had cars on the roster.
A typical New York City Subway train consists of 8 to 11 cars, although shuttles can have as few as two, and the train can range from in length.
The system maintains two separate fleets of cars, one for the A Division routes and another for the B Division routes. A Division equipment is approximately wide and long, whereas B Division equipment is about wide and either or long. 75-foot cars cannot be used on the BMT Eastern Division in regular service due to tight turning radii on Eastern Division lines; the turning radii for curves on Eastern Division lines are as tight as .
Cars purchased by the City of New York since the inception of the IND, and for the other divisions beginning in 1948, are identified by the letter "R" followed by a number, e.g. R32. This number is the contract number under which the cars were purchased. Cars with nearby contract numbers (e.g. R1 through R9, R26 through R29, or R143 through R179) may be nearly identical, despite being purchased under different contracts and possibly built by different manufacturers.
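A small parsing sketch of this naming scheme follows. The regular expression is an assumption inferred from the examples in the text (including suffixed variants such as R142A), not an official MTA format specification:

```python
# Parse "R" contract designations like R32 or R142A into their contract
# numbers. The pattern is inferred from the examples above, not an official
# specification.
import re

def contract_number(designation: str) -> int | None:
    match = re.fullmatch(r"R(\d+)[A-Z]?", designation)
    return int(match.group(1)) if match else None

print(contract_number("R32"))    # 32
print(contract_number("R142A"))  # 142
print(contract_number("32"))     # None: missing the leading "R"
```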
From 1999 to 2019, the R142, R142A, R143, R160, R179 and R188 were placed into service. These cars are collectively known as New Technology Trains (NTTs) due to modern innovations such as LED and LCD route signs and information screens, as well as recorded train announcements and the ability to facilitate Communication-Based Train Control (CBTC).
As part of the 2017–2020 MTA Financial Plan, 600 subway cars will have electronic display signs installed to improve customer experience.
Fares
Riders pay a single fare to enter the subway system and may transfer between trains at no extra cost until they exit via station turnstiles; the fare is a flat rate regardless of how far or how long the rider travels. Thus, riders must swipe their MetroCard or tap a contactless payment card or smartphone on an OMNY reader upon entering the subway system, but not a second time upon leaving.
, nearly all fares are paid by MetroCard or OMNY. As of August 2023, the base fare is $2.90. Fares can be paid with most credit or debit cards using the OMNY readers, with a reusable MetroCard, or with single-use tickets. The MTA offers 7-day and 30-day unlimited ride programs that can lower the effective per-ride fare significantly. Reduced fares are available for the elderly and people with disabilities.
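As a back-of-envelope illustration of when an unlimited pass pays off, the sketch below uses the $2.90 base fare from the text; the 30-day pass price is an assumed figure for illustration, not an official MTA fare:

```python
# Break-even calculation for an unlimited-ride pass versus per-ride fares.
# BASE_FARE comes from the text; PASS_PRICE is an assumed illustrative value.
import math

BASE_FARE = 2.90      # per-ride fare (as of August 2023, per the text)
PASS_PRICE = 132.00   # assumed 30-day unlimited price (illustrative only)

rides_to_break_even = math.ceil(PASS_PRICE / BASE_FARE)
print(rides_to_break_even)        # 46 rides
print(rides_to_break_even / 30)   # ~1.5 rides per day over 30 days
```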
Fares were stored in a money room at 370 Jay Street in Downtown Brooklyn starting in 1951, when the building opened as a headquarters for the New York City Board of Transportation. The building is close to the lines of all three subway divisions (the IRT, BMT, and IND) and thus was a convenient location to collect fares, including tokens and cash, via money trains. Passageways from the subway stations, including a visible door in the Jay Street IND station, lead to a money sorting room in the basement of the building. The money trains were replaced by armored trucks in 2006.
MetroCard
In June 1993, a fare system called the MetroCard was introduced, allowing riders to use magnetic stripe cards that store the value equal to the amount paid to a subway station booth clerk or vending machine. The MetroCard was enhanced in 1997 to allow passengers to make free transfers between subways and buses within two hours, and several MetroCard-only transfers between subway stations were added in 2001. With the addition of unlimited-ride MetroCards in 1998, the New York City Transit system became, with the exception of BART in San Francisco, the last major transit system in the United States to introduce passes for unlimited bus and rapid transit travel. , MetroCard is to be retired at an undetermined date.
OMNY
On October 23, 2017, it was announced that the MetroCard would be phased out and replaced by OMNY, a contactless fare payment system by San Diego-based Cubic Transportation Systems, with fare payment being made using Apple Pay, Google Pay, debit/credit cards with near-field communication technology, or radio-frequency identification cards. As of December 31, 2020, OMNY is available on all MTA buses and at all subway stations.
Modernization
Since the late 20th century, the MTA has started several projects to maintain and improve the subway. In the 1990s, it started converting the BMT Canarsie Line to use communications-based train control, a moving-block signal system that allows more trains to use the tracks, thus increasing passenger capacity. After the Canarsie Line tests were successful, the MTA expanded the automation program in the 2000s and 2010s to include other lines. As part of another program called FASTRACK, the MTA started closing sections of lines during weekday nights in 2012 in order to allow workers to clean these lines without being hindered by train movements. It expanded the program beyond Manhattan the next year after noticing how efficient the FASTRACK program was compared to previous service diversions. In 2015, the MTA announced a wide-ranging improvement program as part of the 2015–2019 Capital Program. Thirty stations would be extensively rebuilt under the Enhanced Station Initiative, and new R211 subway cars would be able to fit more passengers.
The MTA has also started some projects to improve passenger amenities. It added train arrival "countdown clocks" to most A Division stations (except on the IRT Flushing Line, serving the ) and the BMT Canarsie Line () by late 2011, allowing passengers on these routes to see train arrival times using real-time data. A similar countdown-clock project for the B Division and the Flushing Line was deferred until 2016, when a new Bluetooth-based clock system was tested successfully. Beginning in 2011, the MTA also began installing "Help Point" intercoms to aid with emergency calls or station agent assistance. The Help Point project was deemed successful, and the MTA subsequently installed Help Points in all stations. Interactive touchscreen "On The Go! Travel Station" kiosks, which give station advisories, itineraries, and timetables, were installed starting in 2011, with the program also being expanded after a successful pilot. Cellular phone and wireless data service in stations, first installed in 2011 as part of yet another pilot program, was also expanded systemwide due to positive passenger feedback. Finally, credit-card trials at several subway stations in 2006 and 2010 led to proposals for contactless payment to replace the aging MetroCard.
Safety and security
Signaling
Signaling has evolved during a century of operation, and the MTA uses a mixture of old and new systems. Most routes use block signaling, but some routes are being retrofitted with communications-based train control (CBTC), which would allow trains to run without train operator input.
Wayside block signaling
The system currently uses automatic block signaling with fixed wayside signals and automatic train stops to provide safe train operation across the whole system. The New York City Subway system has, for the most part, used block signaling since its first line opened, and many portions of the current signaling system were installed between the 1930s and 1960s. These signals work by preventing trains from entering a "block" occupied by another train. Typically, the blocks are long. Red and green lights show whether a block is occupied or vacant. The train's maximum speed will depend on how many blocks are open in front of it. The signals do not register a train's speed, nor where in the block the train is located.
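The fixed-block principle can be illustrated with a toy model. This is a sketch of the general concept only, not MTA signaling logic; the three-aspect speed mapping below is an assumption for illustration:

```python
# Toy model of fixed-block signaling: the track is divided into blocks that
# are either occupied or vacant, and a following train's permitted speed
# depends only on how many vacant blocks lie ahead -- not on the leading
# train's exact position within its block.

occupancy = [False, False, True, False]  # True = block holds a train

def clear_blocks_ahead(block_index: int) -> int:
    """Count vacant blocks between this block and the next train."""
    count = 0
    for occupied in occupancy[block_index + 1:]:
        if occupied:
            break
        count += 1
    return count

def permitted_speed(clear: int) -> str:
    if clear == 0:
        return "stop"        # next block occupied: signal at danger
    if clear == 1:
        return "restricted"  # be ready to stop at the next signal
    return "full"            # two or more clear blocks ahead

print(permitted_speed(clear_blocks_ahead(0)))  # "restricted": train in block 2
print(permitted_speed(clear_blocks_ahead(1)))  # "stop": block 2 is occupied
```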
Subway trains are stopped mechanically at all signals showing "stop". To make train stops safe and effective, wayside trippers must not be moved to trip ("stop") position until the train has fully passed.
Communications-based train control
In the late 1990s and early 2000s, the MTA began automating the subway by installing CBTC, which supplements rather than replaces the fixed-block signal system; it allows trains to operate more closely together with lower headways. The BMT Canarsie Line, on which the runs, was chosen for pilot testing because it is a self-contained line that does not operate in conjunction with other lines. CBTC became operational in February 2009. Due to an unexpected ridership increase, the MTA ordered additional cars, and increased service from 15 trains to 26 trains per hour, an achievement beyond the capability of the block system. The total cost of the project was $340 million.
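The service increase can be sanity-checked with simple headway arithmetic: trains per hour and the average gap between trains are reciprocals. This is a back-of-envelope check assuming evenly spaced trains, not an MTA scheduling formula:

```python
# Trains per hour and average headway (gap between trains) are reciprocals,
# assuming evenly spaced trains.
def headway_seconds(trains_per_hour: int) -> float:
    return 3600 / trains_per_hour

print(round(headway_seconds(15)))  # 240 s before CBTC on the Canarsie Line
print(round(headway_seconds(26)))  # 138 s at the increased 26-train service
```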
After the success of the BMT Canarsie Line automation, the IRT Flushing Line, carrying the , was next chosen to get CBTC. Estimated to cost US$1.4 billion, the project was completed in November 2018. By 2018, CBTC was in the process of being installed on several other routes as well, particularly the IND Queens Boulevard Line () and IND Culver Line (). The total cost for the entire Queens Boulevard Line is estimated at over $900 million, and the Queens Boulevard CBTC project was completed in 2022. Funding for CBTC on the IND Eighth Avenue Line is also provided in the 2015–2019 capital plan, and the IND Crosstown Line and IND Fulton Street Line were also being equipped with CBTC. The widespread installation of CBTC includes retrofitting many newer subway cars and the replacement of older cars.
Eventually, the MTA plans to automate a much larger portion of the system, using One Person Train Operation (OPTO) in conjunction with CBTC. At the current pace of installation, it would take 175 years for CBTC to be installed systemwide, at a cost of $20 billion. The Flushing Line operated at almost 30 trains an hour using the signal system installed when the line was built, but after the CBTC installation it became possible to operate an additional two trains per hour. In March 2018, New York City Transit Authority president Andy Byford announced a new plan for resignaling the subway with CBTC, which would take only 10 to 15 years, compared to the previous estimate of 40 years, at a cost of $8 to $15 billion.
The New York City Subway uses a system known as Automatic Train Supervision (ATS) for dispatching and train routing on the A Division (the Flushing Line and the trains used on the do not have ATS). ATS allows dispatchers in the Operations Control Center (OCC) to see where trains are in real time, and whether each individual train is running early or late. Dispatchers can hold trains for connections, re-route trains, or short-turn trains to provide better service when a disruption causes delays.
Train accidents
Despite the signal system, there have been at least 64 major train accidents since 1918, when a train bound for South Ferry collided with two trains halted near Jackson Avenue on the IRT White Plains Road Line in the Bronx. Several accidents resulted when a train operator ran through red signals and rear-ended the train in front; this stemmed from the signaling practice of "keying by", which allowed train operators to bypass red signals. The deadliest accident, the Malbone Street Wreck, occurred on November 1, 1918, beneath the intersection of Flatbush Avenue, Ocean Avenue, and Malbone Street (the latter now Empire Boulevard) near the Prospect Park station of the then-BRT Brighton Line in Brooklyn, killing 93 people. As a result of accidents, especially more recent ones such as the 1995 Williamsburg Bridge crash, timer signals were installed; these have reduced speeds across the system. Derailments have also been caused by broken equipment, such as defective rails or faults on the trains themselves.
Passenger safety
Track safety and suicides
A portion of subway-related deaths in New York consists of suicides committed by jumping in front of an oncoming train. Between 1990 and 2003, 343 subway-related suicides were registered out of a citywide total of 7,394 (4.6%), and subway-related suicides increased by 30% over that period despite a decline in overall suicide numbers.
Due to an increase in people hit by trains in 2013, the MTA started a test program in late 2013 and early 2014, installing four detection systems and instituting strategies to reduce the number of people hit by trains. Closed-circuit television cameras, a web of laser beams stretched across the tracks, radio frequencies transmitted across the tracks, and thermal imaging cameras focused on the station's tracks were installed. The tests were successful enough that the 2015–2019 capital program included similar installations system-wide.
The MTA also expressed interest in starting a pilot program to install platform edge doors. Several planned stations, such as future Second Avenue Subway stations, may feature platform screen doors. In October 2017, it was announced that as part of a pilot program, the Third Avenue station would be refitted with platform screen doors during the 14th Street Tunnel shutdown in 2019–2020. The $30 million for the platform edge door pilot program was diverted to another project in 2018. Following a series of incidents, the MTA announced another platform screen door pilot program at three stations in February 2022: the platform at Times Square; the platform at Sutphin Boulevard–Archer Avenue–JFK Airport; and the Third Avenue station. Numerous challenges come with platform doors: some subway lines operate multiple subway car models whose doors do not align, and many platforms are not strong enough to hold the additional weight of a platform barrier, thus requiring extensive renovations if doors were to be installed.
Crime
Crime rates have varied, but there was a downward trend from the 1990s to 2014. To fight crime, various approaches have been used over the years, including an "If You See Something, Say Something" campaign and, starting in 2016, banning people who commit a crime in the subway system from entering the system for a certain length of time.
In July 1985, the Citizens Crime Commission of New York City published a study showing riders abandoning the subway, fearing the frequent robberies and generally bad circumstances. Crime rates in the subway and the city dropped in 1993, part of a larger citywide decrease in crime. Michael Bloomberg stated in a November 2004 press release: "Today, the subway system is safer than it has been at any time since we started tabulating subway crime statistics nearly 40 years ago." Although ridership decreased by 40 percent from 2019 to 2022, the number of crimes in the system remained roughly the same, prompting riders to express concerns over increased crime. The subway recorded eight murders in 2021, the highest annual total in 25 years; by October 2022, nine people had been murdered that year alone.
The subway system has been the target of some mass attacks, though such attacks are relatively rare. On December 11, 2017, there was an attempted bombing at the Times Square–42nd Street station, injuring four people including the attacker. On April 12, 2022, a shooting attack occurred on the N train, injuring 29 people including 10 who were shot.
Photography
After the September 11, 2001, attacks, the MTA exercised extreme caution regarding anyone taking photographs or recording video inside the system and proposed banning all photography and recording in a meeting around June 2004. Due to strong response from both the public and from civil rights groups, the rule of conduct was dropped. In November 2004, the MTA again put this rule up for approval, but was again denied, though many police officers and transit workers still confront or harass people taking photographs or videos. On April 3, 2009, the NYPD issued a directive to officers stating that it is legal to take pictures within the subway system so long as it is not accompanied with suspicious activity.
, the MTA Rules of Conduct, Restricted Areas and Activities section states that anyone may take pictures or record videos, provided that they do not use any of three tools: lights, reflectors, or tripods. These three tools are permitted only by members of the press who have identification issued by the NYPD.
Terrorism prevention
On July 22, 2005, in response to bombings in London, the New York City Police Department introduced a new policy of randomly searching passengers' bags as they approached turnstiles. The NYPD claimed that no form of racial profiling would be conducted when these searches actually took place. The NYPD has come under fire from some groups that claim purely random searches without any form of threat assessment would be ineffectual. Donna Lieberman, executive director of the NYCLU, stated, "This NYPD bag search policy is unprecedented, unlawful and ineffective. It is essential that police be aggressive in maintaining security in public transportation. But our very real concerns about terrorism do not justify the NYPD subjecting millions of innocent people to suspicionless searches in a way that does not identify any person seeking to engage in terrorist activity and is unlikely to have any meaningful deterrent effect on terrorist activity." The searches were upheld by the United States Court of Appeals for the Second Circuit in MacWade v. Kelly.
On April 11, 2008, the MTA received a Ferrara Fire Apparatus hazardous materials response truck, which went into service three days later. It is to be used in the event of a chemical or bioterrorist attack.
Najibullah Zazi and others were arrested in September 2009 and pleaded guilty in 2010 to being part of an al-Qaeda plan to undertake suicide bombings on the New York City subway system.
Challenges
2009–2010 budget cuts
The MTA faced a budget deficit of US$1.2 billion in 2009. This resulted in fare increases (three times from 2008 to 2010) and service reductions (including the elimination of two part-time subway services, the and ). Several other routes were modified due to the deficit. The was made a full-time local in Manhattan (in contrast to being a weekend local/weekday express before 2010), while the was extended nine stations north to Astoria–Ditmars Boulevard on weekdays, both to cover the discontinued . The was combined with the , routing it over the Chrystie Street Connection, IND Sixth Avenue Line and IND Queens Boulevard Line to Forest Hills–71st Avenue on weekdays instead of via the BMT Fourth Avenue Line and BMT West End Line to Bay Parkway. The was truncated to Court Square full-time. Construction headways on eleven routes were lengthened, and off-peak service was lengthened on seven routes.
2017–2021 state of emergency
In June 2017, Governor Andrew Cuomo signed an executive order declaring a state of emergency for the New York City Subway after a series of derailments, track fires, and overcrowding incidents. On June 27, 2017, thirty-nine people were injured when an A train derailed at 125th Street, damaging tracks and signals and then catching fire. On July 21, 2017, the second set of wheels on a southbound Q train jumped the track near Brighton Beach, injuring nine people; the derailment was attributed to improper maintenance of the car in question. To solve the system's problems, the MTA officially announced the Genius Transit Challenge on June 28, in which contestants could submit ideas to improve signals, communications infrastructure, or rolling stock.
On July 25, 2017, Chairman Joe Lhota announced a two-phase, $9 billion New York City Subway Action Plan to stabilize the subway system and to prevent the continuing decline of the system. The first phase, costing $836 million, consisted of five categories of improvements in Signal and Track Maintenance, Car Reliability, System Safety and Cleanliness, Customer Communication, and Critical Management Group. The $8 billion second phase would implement the winning proposals from the Genius Transit Challenge and fix more widespread problems. Six winning submissions for the Genius Transit Challenge were announced in March 2018.
In October 2017, city comptroller Scott Stringer released an analysis estimating that subway delays could cost $170.2 million, $243.1 million, or up to $389 million per year, depending on the length of the delays. In November 2017, The New York Times published its investigation into the crisis. It found that the crisis had arisen as a result of financially unsound decisions by local and state politicians from both the Democratic and Republican parties. According to the Times, these decisions included overspending; overpaying unions and interest groups; advertising superficial improvement projects while ignoring more important infrastructure; and agreeing to high-interest loans that would have been unnecessary without these politicians' other interventions. By this time, the subway's 65% average on-time performance was the lowest among all major cities' transit systems, and every non-shuttle subway route's on-time performance had declined in the previous ten years. The state of emergency ended on June 30, 2021, after previously being renewed 49 times. , on-time performance across all routes is at 80.6 percent. Worsening subway reliability and service cuts in the early 2020s have been attributed to chronic mismanagement at the agency and a botched restructuring plan implemented under former Governor Andrew Cuomo.
Capacity constraints
Several subway lines have reached their operational limits in terms of train frequency and passengers, according to data released by the Transit Authority. By 2007, the E, L, and all A Division services except the 42nd Street Shuttle were beyond capacity, as well as portions of the train. In April 2013, New York magazine reported that the system was more crowded than it had been in the previous 66 years. The subway reached a daily ridership of 6 million for 29 days in 2014, and was expected to record a similar ridership level for 55 days in 2015; by comparison, in 2013, daily ridership never reached 6 million. In particular, the express tracks of the IRT Lexington Avenue Line and IND Queens Boulevard Line are noted for operating at full capacity during peak hours. The Long Island Rail Road East Side Access project, which opened in January 2023, was expected to bring many more commuters to the Lexington Avenue Line. The Second Avenue Subway was built to relieve pressure on the Lexington Avenue Line () by shifting an estimated 225,000 passengers. Following the onset of the COVID-19 pandemic in New York City in 2020, there was enough of a ridership decrease that these routes were no longer crammed to capacity during rush hours, although they still experienced some crowding.
By early 2016, delays attributed to overcrowding had risen to more than 20,000 every month, four times the number in 2012. The overcrowded trains have also resulted in an increase in assaults. With less platform space, more passengers are forced toward the platform edge, increasing the possibility of passengers falling onto the track. The MTA is considering platform screen doors, which already exist on the AirTrain JFK, to prevent passengers from falling onto the tracks. Platform screen doors were later planned for installation at three stations, following an increase in people being pushed onto the tracks.
Expanding service frequency via CBTC
The MTA has sought to relieve overcrowding by upgrading signaling systems on some lines to use communications-based train control (CBTC). CBTC installation on the Flushing Line is expected to increase the number of trains per hour on the 7, but little relief will come to other crowded lines until later. The L, which is overcrowded during rush hours, already operates with CBTC; the installation has reduced the L's running time by 3%. Even with CBTC, there are limits on the potential service increase. For L service to be increased further, a power upgrade, as well as additional space for trains to turn around at the line's Manhattan terminus, Eighth Avenue, would be needed.
Service frequency and car capacity
Due to increased ridership, the MTA has tried to increase capacity wherever possible by adding more frequent service, particularly during the evening hours. This increase is unlikely to keep up with the growth of subway ridership. Some lines have capacity for additional trains during peak times, but there are too few subway cars for this additional service to be operated.
As part of the R211 subway car order, the MTA is planning to test a train of 10 open-gangway experimental prototype cars, which could increase capacity by up to 10% by utilizing space between cars. The order could be expanded to include up to 750 open-gangway cars.
Platform crowd control
The MTA is also testing smaller ideas on some services. Starting in late 2015, 100 "station platform controllers" were deployed on the F, 6, and 7 routes to manage the flow of passengers on and off crowded trains during morning rush hours. There were a total of 129 such employees, who also answered passengers' questions about subway directions, rather than having conductors answer them and thus delay the trains. In early 2017, the test was expanded to the afternoon peak period with 35 additional platform conductors. In November of the same year, 140 platform controllers and 90 conductors were issued iPhone 6S devices so they could receive notifications of, and tell riders about, subway disruptions. Subway guards, the predecessors of the platform controllers, were first used during the Great Depression and World War II.
Shortened "next stop" announcements on trains were being tested on the 2 and 5 trains. "Step aside" signs on the platforms, reminding boarding passengers to let departing passengers off the train first, were tested at Grand Central–42nd Street, 51st Street, and 86th Street on the Lexington Avenue Line. Cameras would also be installed so the MTA could observe passenger overcrowding.
In systems like the London Underground, stations are simply closed off when they are overcrowded; that type of restriction is not necessary yet on the New York City Subway, according to MTA spokesman Kevin Ortiz.
Subway flooding
Service on the subway system is occasionally disrupted by flooding from rainstorms, even minor ones. Rainwater can disrupt signals underground and require the electrified third rail to be shut off. Every day, the MTA moves 13 million gallons of water when it is not raining. The pumps and drainage system can handle a rainfall rate of per hour. Since 1992, $357 million has been used to improve 269 pump rooms. By August 2007, $115 million was earmarked to upgrade the remaining 18 pump rooms.
Despite these improvements, the transit system continues to experience flooding problems. On August 8, 2007, after more than of rain fell within an hour, the subway system flooded, causing almost every subway service to either be disabled or seriously disrupted, effectively halting the morning rush. On September 1, 2021, when of rain per hour fell during Hurricane Ida, service on the entire subway system was suspended.
As part of a $130 million, estimated 18-month project, the MTA began installing new subway grates in September 2008 in an attempt to prevent rain from overflowing into the subway system. The metallic structures, designed with the help of architectural firms and meant as pieces of public art, are placed atop existing grates but with a sleeve to prevent debris and rain from flooding the subway. The new grates were first installed in the three most flood-prone areas as determined by hydrologists: Jamaica, Tribeca, and the Upper West Side. Each neighborhood has its own distinct design, some featuring a wave-like deck that increases in height and includes seating (as in Jamaica), others a flatter deck with seating and a bike rack.
In October 2012, Hurricane Sandy caused significant damage to New York City, and many subway tunnels were inundated with floodwater. The subway opened with limited service two days after the storm and was running at 80 percent capacity within five days; some infrastructure needed years to repair. A year after the storm, MTA spokesperson Kevin Ortiz said, "This was unprecedented in terms of the amount of damage that we were seeing throughout the system." The storm flooded nine of the system's 14 underwater tunnels, many subway lines, and several subway yards, as well as completely destroying a portion of the IND Rockaway Line and much of the South Ferry terminal station. Reconstruction required many partial or total closures on several lines and tunnels. Heavy flooding also occurred in September 2021 during Hurricane Ida and in September 2023 during the aftermath of Tropical Storm Ophelia.
Full and partial subway closures
Before 2011, full subway closures had occurred only because of transit strikes (January 1–13, 1966; April 1–11, 1980; December 20–22, 2005) and blackouts (November 9–10, 1965; July 13–14, 1977; August 14–16, 2003).
On August 27, 2011, due to the approach of Hurricane Irene, the MTA suspended subway service at noon in anticipation of heavy flooding on tracks and in tunnels. It was the first weather-caused shutdown in the history of the system. Service was restored by August 29.
On October 29, 2012, a full closure was ordered before the arrival of Hurricane Sandy. All services on the subway, the Long Island Rail Road and Metro-North were gradually shut down by 7:00 P.M. to protect passengers, employees, and equipment from the coming storm. The storm caused serious damage to the system, especially the IND Rockaway Line, upon which many sections between Howard Beach–JFK Airport and Hammels Wye on the Rockaway Peninsula were heavily damaged, leaving it essentially isolated from the rest of the system. This required the NYCTA to truck in 20 R32 subway cars to the line to provide some interim service (temporarily designated the ). Also, several of the system's tunnels under the East River were flooded by the storm surge. South Ferry suffered serious water damage and did not reopen until April 4, 2013, by restoring service to the older loop-configured station that had been replaced in 2009; the stub-end terminal tracks remained out of service until June 2017.
Since 2015, there have been several blizzard-related subway shutdowns. On January 26, 2015, another full closure was ordered by New York Governor Andrew Cuomo due to the January 2015 nor'easter, originally projected to leave New York City with of snow; this was the first shutdown in the system's history to be ordered because of snow. The next day, the subway system was partially reopened. Several residents criticized the decision, as the nor'easter dropped much less snow in the city than originally expected, totaling only in Central Park. For subsequent snowstorms, the MTA published a winter underground-only subway service plan. When this plan is in effect, all above-ground stations are closed and all above-ground service is suspended, except at 125th Street and Broadway, where trains run above ground but skip the station. Underground service remains operational, except at a small number of stations that are closed because of their proximity to above-ground portions of the system. This plan was first used on January 23, 2016, during the January 2016 United States blizzard; it was also used on March 14, 2017, due to the March 2017 nor'easter. On August 4, 2020, service at above-ground stations was suspended due to the high wind gusts brought by Tropical Storm Isaias.
Starting on May 6, 2020, as a result of the COVID-19 pandemic in New York City, stations were closed between 1:00 a.m. and 5:00 a.m. for cleaning and disinfecting. Nevertheless, over 500 trains continued running every 20 minutes during those hours, carrying only transit workers and emergency personnel. The trains kept running because there was not enough space in the system to store all trains simultaneously, and so that they could easily resume service at the start of rush hour at 5 a.m. In February 2021, the overnight closures were shortened to between 2 and 4 a.m., and in May 2021, Cuomo announced that 24-hour service would resume on May 17. The year-long overnight closure was the longest shutdown in the subway's history.
Litter and rodents
Litter accumulation in the subway system is a perennial problem. In the 1970s and 1980s, dirty trains and platforms, as well as graffiti, were a serious problem. The situation has improved since then, but the 2010 budget crisis, which caused over 100 cleaning staff to lose their jobs, threatened to curtail trash removal. Every day, the MTA removes 40 tons of trash from 3,500 trash receptacles.
The New York City Subway system is infested with rats. Rats are sometimes seen on platforms, and are commonly seen foraging through garbage thrown onto the tracks. They are believed to pose a health hazard, and on rare instances have been known to bite humans. Subway stations notorious for rat infestation include Chambers Street, Jay Street–MetroTech, West Fourth Street, Spring Street and 145th Street.
Decades of efforts to eradicate or even just thin the rat population in the system have failed. In March 2009, the Transit Authority announced a series of changes to its vermin control strategy, including new poison formulas and experimental trap designs. In October 2011, it announced a new initiative to clean 25 subway stations, along with their garbage rooms, of rat infestations. That same month, the MTA announced a pilot program aimed at reducing levels of garbage in the subways by removing all garbage bins from subway platforms. The initiative was tested at the Eighth Street–New York University and Flushing–Main Street stations. As of March 2016, stations along the BMT Jamaica Line, BMT Myrtle Avenue Line, and various other stations had had their garbage cans removed due to the program's apparent success. In March 2017, however, the program was ended as a failure.
The old vacuum trains designed to remove trash from the tracks are ineffective and often broken. A 2016 study by Travel Math ranked the New York City Subway as the dirtiest subway system in the country, based on the number of viable bacteria cells found. In August 2016, the MTA announced that it had initiated Operation Track Sweep, an aggressive plan to dramatically reduce the amount of trash on the tracks and in the subway environment, which was expected to reduce track fires and train delays. As part of the plan, the frequency of station track cleaning would be increased, with 94 stations cleaned per two-week period, up from the previous rate of 34 stations every two weeks. The MTA launched an intensive two-week, system-wide cleaning on September 12, 2016. Several vacuum trains were delivered in 2018 and 2019, and the operation was also planned to include 27 new refuse cars.
Noise
Rolling stock on the New York City Subway produces high levels of noise that exceed guidelines set by the World Health Organization and the U.S. Environmental Protection Agency. In 2006, Columbia University's Mailman School of Public Health found that noise levels averaged 95 decibels (dB) inside subway cars and 94 dB on platforms. Daily exposure to noise at such levels for as little as 30 minutes can lead to hearing loss. Noise on one in 10 platforms exceeded 100 dB. Under WHO and EPA guidelines, exposure to noise at that level is limited to 1.5 minutes. A subsequent study by Columbia and the University of Washington found higher average noise levels in the subway (80.4 dB) than on commuter trains, including Port Authority Trans-Hudson (PATH) (79.4 dB), Metro-North (75.1 dB) and the Long Island Rail Road (LIRR) (74.9 dB). Since the decibel scale is logarithmic, sound at 95 dB is 10 times more intense than at 85 dB and 100 times more intense than at 75 dB. In the second study, peak subway noise registered at 102.1 dB.
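To make the comparison concrete, the intensity ratio corresponding to a difference in decibel levels follows from the standard definition of the decibel (a routine conversion, shown here for illustration):

\[ \frac{I_1}{I_2} = 10^{(L_1 - L_2)/10}, \qquad \text{so} \qquad \frac{I_{95\,\mathrm{dB}}}{I_{85\,\mathrm{dB}}} = 10^{(95 - 85)/10} = 10. \]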
For the construction of the Second Avenue Subway, the MTA worked with the engineering firm Arup to reduce noise levels in stations. To reduce noise in all future stations, starting with the Second Avenue Subway, the MTA is investing in low-vibration track using ties encased in concrete-covered rubber and neoprene pads. Continuously welded rail, which is also being installed, reduces the noise made by the wheels of trains. The biggest change is in station design. Older stations were built with tile and stone, which scatter sound in every direction, while newer stations have ceilings lined with sound-absorbent fiberglass or mineral wool that directs sound toward the train rather than the platform. With less noise from the trains, platform announcements can be heard more clearly, aided by speakers spaced periodically along the platform and angled so that announcements reach the riders. The Second Avenue Subway has the first stations to test this technology.
Public relations and cultural impact
Entertainment
The subway is a popular venue for busking. A permit is not required to perform, but performers must follow certain codes of conduct. Some buskers are affiliated with Music Under New York (MUNY), part of the MTA's Arts & Design program. Since 1987, the MTA has sponsored the MUNY program, in which street musicians enter a competitive contest to be assigned to preferred high-traffic locations. Each year, applications are reviewed and approximately 70 eligible performers are selected and contacted to participate in live auditions held for one day.
Miss Subways
From 1941 to 1976, the Board of Transportation/New York City Transit Authority sponsored the "Miss Subways" publicity campaign. In the musical On the Town, the character Miss Turnstiles is based on the Miss Subways campaign. The monthly campaign, which included the winners' photos and biographical blurbs on placards in subway cars, featured such winners as Mona Freeman and prominent New York City restaurateur Ellen Goodman. The campaign was resurrected for one year in 2004, as "Ms. Subways", as part of the system's 100th anniversary celebrations; the winner of that contest was Caroline Sanchez-Bernat, an actress from Morningside Heights.
Subway Series
Subway Series is a term applied to any series of baseball games between New York City teams, as opposing teams can travel to compete merely by using the subway system. Subway Series is a term long used in New York, going back to series between the Brooklyn Dodgers or New York Giants and the New York Yankees in the 1940s and 1950s. Today, the term is used to describe the rivalry between the Yankees and the New York Mets. During the 2000 World Series, cars on the 4 train (which stopped at Yankee Stadium) were painted with Yankee colors, while cars on the 7 train (which stopped at Shea Stadium) had Mets colors.
Holiday Nostalgia Train
Since 2003, the MTA has operated a Holiday Nostalgia Train on Sundays in November and December, from the first Sunday after Thanksgiving to the Sunday before Christmas Day, except in 2011 and 2023, when the train operated on Saturdays instead. The train is made up of vintage cars from the R1–9 fleet, which have been preserved by the Railway Preservation Corp. and the New York Transit Museum. Until 2017, the train made all stops between Second Avenue in Manhattan and Queens Plaza in Queens via the IND Sixth Avenue Line and the IND Queens Boulevard Line. In 2017, the train ran between Second Avenue and 96th Street via the newly opened Second Avenue Subway. Since 2018, the northern terminal has been 145th Street, except in 2024, when the northern terminal was 96th Street–Second Avenue.
The consist, listed by contract, car number, and year built, generally comprises R1 100 (1930); R1 103 (1930); R1 381 (1931); R4 401 (1932); R4 484 (1932), which received bull's-eye lighting and a test P.A. system in 1946; R6-3 1000 (1935); R6-1 1300 (1937); R7A 1575 (1938), rebuilt in 1947 as a prototype for the R10 subway car; and R9 1802 (1940).
Full train wraps
Since 2008, the MTA has tested full train wraps on 42nd Street Shuttle rolling stock. In full train wraps, advertising entirely covers the interiors and exteriors of the train, as opposed to other routes, whose stock generally only displays advertising on placards inside the train. While most advertisements are well received, a few advertisements have been controversial. Among the more contentious wraps that were withdrawn are a 2015 ad for the TV show The Man in the High Castle, which featured a Nazi flag, and an ad for Fox Sports 1, in which a shuttle train and half of its seats were plastered with negative quotes about the New York Knicks, one of the city's NBA teams.
Other routes have seen limited implementation of full train wraps. For instance, in 2010, one R142A train set on the 6 route was wrapped with a Target advertisement. In 2014, the Jaguar F-Type was advertised on train sets running on the F route. Some of these wraps have also been controversial, such as a Lane Bryant wrap in 2015 that displayed lingerie models on the exteriors of train cars.
LGBT Pride-themed trains and MetroCards
The New York City Subway system commemorates Pride Month in June with Pride-themed posters. The MTA celebrated Stonewall 50 - WorldPride NYC 2019 in June 2019 with rainbow-themed Pride logos on the subway trains as well as Pride-themed MetroCards.
Guerrilla art
The New York City Subway system has been a target for unauthorized or "guerrilla" art since the 1970s, beginning with graffiti and tagging. Originally dismissed as vandalism, the art form gained critical recognition in the 1980s, especially with the release of the 1983 documentary Style Wars. Prominent pop artist Keith Haring got his start tagging blank billboards on subway platforms with chalk art. In 2019–2020, the Bronx Museum mounted an exhibition of graffiti-tagged subway cars.
More contemporary installations have taken place as well. In 2014, artist London Kaye yarn-bombed the L train, wrapping metal hand poles in knit fabric. In 2019, artist Ian Callender used projectors to show accurate views of the cityscape above moving 6 trains on the ceilings of entire cars. In 2021, illustrator Devon Rodriguez went viral for his drawings of fellow commuters.
No Pants Ride
In 2002, the New York City Subway began hosting an event called the No Pants Subway Ride, where people ride the subway without their pants. The event is typically held each January but has not been held since 2020 due to the COVID-19 pandemic.
| Technology | Trains | null |
285522 | https://en.wikipedia.org/wiki/Superheating | Superheating | In thermodynamics, superheating (sometimes referred to as boiling retardation, or boiling delay) is the phenomenon in which a liquid is heated to a temperature higher than its boiling point, without boiling. This is a so-called metastable state or metastate, where boiling might occur at any time, induced by external or internal effects. Superheating is achieved by heating a homogeneous substance in a clean container, free of nucleation sites, while taking care not to disturb the liquid.
This can occur when water is microwaved in a very smooth container. Disturbing the water may then cause an unsafe eruption of hot water and result in burns.
Cause
Water is said to "boil" when bubbles of water vapor grow without bound, bursting at the surface. For a vapor bubble to expand, the temperature must be high enough that the vapor pressure exceeds the ambient pressure (the atmospheric pressure, primarily). Below that temperature, a water vapor bubble will shrink and vanish.
Superheating is an exception to this simple rule; a liquid is sometimes observed not to boil even though its vapor pressure does exceed the ambient pressure. The cause is an additional force, the surface tension, which suppresses the growth of bubbles.
Surface tension makes the bubble act like an elastic balloon. The pressure inside is raised slightly by the "skin" attempting to contract. For the bubble to expand, the temperature must be raised slightly above the boiling point to generate enough vapor pressure to overcome both surface tension and ambient pressure.
What makes superheating so explosive is that a larger bubble is easier to inflate than a small one; just as when blowing up a balloon, the hardest part is getting started. It turns out the excess pressure due to surface tension is inversely proportional to the diameter of the bubble. That is, \( \Delta p = \frac{4\gamma}{d} \), where \( \gamma \) is the surface tension and \( d \) is the bubble's diameter.
This can be derived by imagining a plane cutting a bubble into two halves. Each half is pulled towards the middle with a surface tension force \( F_\gamma = \pi d \gamma \) (the surface tension acting along the circumference of the cut), which must be balanced by the force from the excess pressure acting on the cross-sectional area, \( F_p = \Delta p \cdot \pi d^2 / 4 \). So we obtain \( \pi d \gamma = \Delta p \cdot \pi d^2 / 4 \), which simplifies to \( \Delta p = \frac{4\gamma}{d} \).
This means that if the largest bubbles in a container are only a few micrometres in diameter, overcoming the surface tension may require a large excess pressure \( \Delta p \), which may in turn require exceeding the boiling point by several degrees Celsius. Once a bubble does begin to grow, the surface-tension pressure decreases, so it expands explosively in a positive feedback loop. In practice, most containers have scratches or other imperfections that trap pockets of air which serve as starting bubbles, and impure water containing small particles can also trap air pockets. Only a smooth container of purified liquid can reliably superheat.
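As a rough numerical illustration (the specific values here are assumed for concreteness, not taken from the original text): with the surface tension of water near its boiling point, \( \gamma \approx 0.059\ \mathrm{N/m} \), and a bubble diameter of \( d = 2\ \mu\mathrm{m} \),

\[ \Delta p = \frac{4\gamma}{d} \approx \frac{4 \times 0.059}{2 \times 10^{-6}}\ \mathrm{Pa} \approx 1.2 \times 10^{5}\ \mathrm{Pa}, \]

roughly one extra atmosphere of vapor pressure that must be generated before such a bubble can begin to grow.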
Occurrence via microwave oven
Superheating can occur when an undisturbed container of water is heated in a microwave oven. At the time the container is removed, the lack of nucleation sites prevents boiling, leaving the surface calm. However, once the water is disturbed, some of it violently flashes to steam, potentially spraying boiling water out of the container. The boiling can be triggered by jostling the cup, inserting a stirring device, or adding a substance like instant coffee or sugar. The chance of superheating is greater with smooth containers, because scratches or chips can house small pockets of air, which serve as nucleation points. Superheating is more likely after repeated heating and cooling cycles of an undisturbed container, as when a forgotten coffee cup is re-heated without being removed from a microwave oven. This is due to heating cycles releasing dissolved gases such as oxygen and nitrogen from the solvent. There are ways to prevent superheating in a microwave oven, such as putting a spoon or stir stick into the container beforehand or using a scratched container. To avoid a dangerous sudden boiling, it is recommended not to microwave water for an excessive amount of time.
Applications
Superheated liquid hydrogen is used in bubble chambers.
| Physical sciences | Phase transitions | Physics |
285759 | https://en.wikipedia.org/wiki/Limit%20of%20a%20function | Limit of a function | Although the function \( \frac{\sin x}{x} \) is not defined at zero, as \( x \) becomes closer and closer to zero, \( \frac{\sin x}{x} \) becomes arbitrarily close to 1. In other words, the limit of \( \frac{\sin x}{x} \), as \( x \) approaches zero, equals 1.
In mathematics, the limit of a function is a fundamental concept in calculus and analysis concerning the behavior of that function near a particular input which may or may not be in the domain of the function.
Formal definitions, first devised in the early 19th century, are given below. Informally, a function \( f \) assigns an output \( f(x) \) to every input \( x \). We say that the function has a limit \( L \) at an input \( p \), if \( f(x) \) gets closer and closer to \( L \) as \( x \) moves closer and closer to \( p \). More specifically, the output value can be made arbitrarily close to \( L \) if the input to \( f \) is taken sufficiently close to \( p \). On the other hand, if some inputs very close to \( p \) are taken to outputs that stay a fixed distance apart, then we say the limit does not exist.
The notion of a limit has many applications in modern calculus. In particular, the many definitions of continuity employ the concept of limit: roughly, a function is continuous if all of its limits agree with the values of the function. The concept of limit also appears in the definition of the derivative: in the calculus of one variable, this is the limiting value of the slope of secant lines to the graph of a function.
History
Although implicit in the development of calculus of the 17th and 18th centuries, the modern idea of the limit of a function goes back to Bolzano who, in 1817, introduced the basics of the epsilon-delta technique (see (ε, δ)-definition of limit below) to define continuous functions. However, his work was not known during his lifetime.
In his 1821 book Cours d'analyse, Augustin-Louis Cauchy discussed variable quantities, infinitesimals and limits, and defined continuity of \( y = f(x) \) by saying that an infinitesimal change in \( x \) necessarily produces an infinitesimal change in \( y \), while Grabiner claims that he used a rigorous epsilon-delta definition in proofs. In 1861, Weierstrass first introduced the epsilon-delta definition of limit in the form it is usually written today. He also introduced the notations \( \lim \) and \( \lim_{x \to x_0} \).
The modern notation of placing the arrow below the limit symbol is due to Hardy, who introduced it in his book A Course of Pure Mathematics in 1908.
Motivation
Imagine a person walking on a landscape represented by the graph \( y = f(x) \). Their horizontal position is given by \( x \), much like the position given by a map of the land or by a global positioning system. Their altitude is given by the coordinate \( y \). Suppose they walk towards a position \( x = p \); as they get closer and closer to this point, they will notice that their altitude approaches a specific value \( L \). If asked about the altitude corresponding to \( x = p \), they would reply by saying \( y = L \).
What, then, does it mean to say that their altitude is approaching \( L \)? It means that their altitude gets nearer and nearer to \( L \)—except for a possible small error in accuracy. For example, suppose we set a particular accuracy goal for our traveler: they must get within ten meters of \( L \). They report back that indeed, they can get within ten vertical meters of \( L \), arguing that as long as they are within fifty horizontal meters of \( p \), their altitude is always within ten meters of \( L \).
The accuracy goal is then changed: can they get within one vertical meter? Yes, supposing that they are able to move within five horizontal meters of \( p \), their altitude will always remain within one meter of the target altitude \( L \). Summarizing this, we can say that the traveler's altitude approaches \( L \) as their horizontal position approaches \( p \): for every target accuracy goal, however small it may be, there is some neighbourhood of \( p \) in which all (not just some) of the altitudes—except possibly the altitude at the horizontal position \( p \) itself—fulfill that accuracy goal.
The initial informal statement can now be explicated:
In fact, this explicit statement is quite close to the formal definition of the limit of a function, with values in a topological space.
More specifically, to say that

\[ \lim_{x \to p} f(x) = L \]

is to say that \( f(x) \) can be made as close to \( L \) as desired, by making \( x \) close enough, but not equal, to \( p \).
The following definitions, known as (ε, δ)-definitions, are the generally accepted definitions for the limit of a function in various contexts.
Functions of a single variable
(ε, δ)-definition of limit
Suppose \( f : \mathbb{R} \to \mathbb{R} \) is a function defined on the real line, and there are two real numbers \( p \) and \( L \). One would say that the limit of \( f \), as \( x \) approaches \( p \), exists, and it equals \( L \), and write

\[ \lim_{x \to p} f(x) = L, \]

or alternatively, say \( f(x) \) tends to \( L \) as \( x \) tends to \( p \), and write

\[ f(x) \to L \ \text{ as } \ x \to p, \]

if the following property holds: for every real \( \varepsilon > 0 \), there exists a real \( \delta > 0 \) such that for all real \( x \), \( 0 < |x - p| < \delta \) implies \( |f(x) - L| < \varepsilon \). Symbolically,

\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in \mathbb{R})\, (0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon). \]
For example, we may say

\[ \lim_{x \to 2} (4x + 1) = 9 \]

because for every real \( \varepsilon > 0 \), we can take \( \delta = \varepsilon / 4 \), so that for all real \( x \), if \( 0 < |x - 2| < \delta \), then \( |(4x + 1) - 9| = 4|x - 2| < \varepsilon \).
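In general, \( \delta \) may depend on the limit point as well as on \( \varepsilon \). A short illustrative computation (the function and numbers here are chosen for concreteness) for \( \lim_{x \to 3} x^2 = 9 \):

\[ |x^2 - 9| = |x - 3|\,|x + 3| < 7\,|x - 3| \quad \text{whenever } |x - 3| < 1, \]

so taking \( \delta = \min(1, \varepsilon/7) \) guarantees \( |x^2 - 9| < \varepsilon \) whenever \( 0 < |x - 3| < \delta \).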
A more general definition applies for functions defined on subsets of the real line. Let \( S \) be a subset of \( \mathbb{R} \). Let \( f : S \to \mathbb{R} \) be a real-valued function. Let \( p \) be a point such that there exists some open interval \( (a, b) \) containing \( p \) with \( (a, b) \setminus \{p\} \subseteq S \). It is then said that the limit of \( f \) as \( x \) approaches \( p \) is \( L \), if:

For every real \( \varepsilon > 0 \), there exists a real \( \delta > 0 \) such that for all \( x \in (a, b) \), \( 0 < |x - p| < \delta \) implies \( |f(x) - L| < \varepsilon \).

Or, symbolically:

\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in (a, b))\, (0 < |x - p| < \delta \implies |f(x) - L| < \varepsilon). \]
For example, we may say

\[ \lim_{x \to 1} \sqrt{x} = 1 \]

because for every real \( \varepsilon > 0 \), we can take \( \delta = \varepsilon \), so that for all real \( x \ge 0 \), if \( 0 < |x - 1| < \delta \), then \( |\sqrt{x} - 1| \le |x - 1| < \varepsilon \). In this example, \( S = [0, \infty) \) contains open intervals around the point 1 (for example, the interval (0, 2)).
Here, note that the value of the limit does not depend on \( f \) being defined at \( p \), nor on the value \( f(p) \)—if it is defined. For example, let

\[ f(x) = \frac{x^2 - 1}{x - 1}. \]

Then \( \lim_{x \to 1} f(x) = 2 \), because for every \( \varepsilon > 0 \), we can take \( \delta = \varepsilon \), so that for all real \( x \ne 1 \), if \( 0 < |x - 1| < \delta \), then \( |f(x) - 2| = |x - 1| < \varepsilon \). Note that here \( f(1) \) is undefined.
In fact, a limit can exist at every point of \( \operatorname{int} S \cup \operatorname{iso} S^c \), where \( \operatorname{int} S \) is the interior of \( S \), and \( \operatorname{iso} S^c \) are the isolated points of the complement of \( S \). For example, where \( S = [0, 1) \cup (1, 2] \), we see, specifically, that this definition of limit allows a limit to exist at 1, but not at 0 or 2.
The letters \( \varepsilon \) and \( \delta \) can be understood as "error" and "distance". In fact, Cauchy used \( \varepsilon \) as an abbreviation for "error" in some of his work, though in his definition of continuity, he used an infinitesimal \( \alpha \) rather than either \( \varepsilon \) or \( \delta \) (see Cours d'Analyse). In these terms, the error (\( \varepsilon \)) in the measurement of the value at the limit can be made as small as desired, by reducing the distance (\( \delta \)) to the limit point. As discussed below, this definition also works for functions in a more general context. The idea that \( \varepsilon \) and \( \delta \) represent distances helps suggest these generalizations.
Existence and one-sided limits
Alternatively, \( x \) may approach \( p \) from above (right) or below (left), in which case the limits may be written as

\[ \lim_{x \to p^+} f(x) = L \]

or

\[ \lim_{x \to p^-} f(x) = L \]

respectively. If these limits exist at p and are equal there, then this can be referred to as the limit of \( f \) at \( p \). If the one-sided limits exist at \( p \), but are unequal, then there is no limit at \( p \) (i.e., the limit at \( p \) does not exist). If either one-sided limit does not exist at \( p \), then the limit at \( p \) also does not exist.
A formal definition is as follows. The limit of \( f \) as \( x \) approaches \( p \) from above is \( L \) if:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that whenever \( 0 < x - p < \delta \), we have \( |f(x) - L| < \varepsilon \).

The limit of \( f \) as \( x \) approaches \( p \) from below is \( L \) if:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that whenever \( 0 < p - x < \delta \), we have \( |f(x) - L| < \varepsilon \).
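For instance, a standard illustration (chosen here for concreteness) is the sign function \( x / |x| \), which is constant on each side of 0:

\[ \lim_{x \to 0^-} \frac{x}{|x|} = -1, \qquad \lim_{x \to 0^+} \frac{x}{|x|} = 1; \]

for any \( \varepsilon > 0 \), every \( \delta > 0 \) works on each side, yet because the two one-sided limits differ, the two-sided limit \( \lim_{x \to 0} x/|x| \) does not exist.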
If the limit does not exist, then the oscillation of \( f \) at \( p \) is non-zero.
More general definition using limit points and subsets
Limits can also be defined by approaching from subsets of the domain.
In general: Let \( f : S \to \mathbb{R} \) be a real-valued function defined on some \( S \subseteq \mathbb{R} \). Let \( p \) be a limit point of some \( T \subseteq S \)—that is, \( p \) is the limit of some sequence of elements of \( T \) distinct from \( p \). Then we say the limit of \( f \), as \( x \) approaches \( p \) from values in \( T \), is \( L \), written

\[ \lim_{\substack{x \to p \\ x \in T}} f(x) = L \]

if the following holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in T \), \( 0 < |x - p| < \delta \) implies \( |f(x) - L| < \varepsilon \).
Note, \( T \) can be any subset of \( S \), the domain of \( f \). And the limit might depend on the selection of \( T \). This generalization includes as special cases limits on an interval, as well as left-handed limits of real-valued functions (e.g., by taking \( T \) to be an open interval of the form \( (-\infty, a) \)), and right-handed limits (e.g., by taking \( T \) to be an open interval of the form \( (a, \infty) \)). It also extends the notion of one-sided limits to the included endpoints of (half-)closed intervals, so the square root function \( f(x) = \sqrt{x} \) can have limit 0 as \( x \) approaches 0 from above:

\[ \lim_{\substack{x \to 0 \\ x \in [0, \infty)}} \sqrt{x} = 0 \]

since for every \( \varepsilon > 0 \), we may take \( \delta = \varepsilon^2 \) such that for all \( x \ge 0 \), if \( 0 < |x - 0| < \delta \), then \( |\sqrt{x} - 0| < \varepsilon \).
This definition allows a limit to be defined at limit points of the domain \( S \), if a suitable subset \( T \) which has the same limit point is chosen.
Notably, the previous two-sided definition works on \( \operatorname{int} S \cup \operatorname{iso} S^c \), which is a subset of the limit points of \( S \).
For example, let \( S = [0, 1) \cup (1, 2] \). The previous two-sided definition would work at 1, but it wouldn't work at 0 or 2, which are limit points of \( S \).
Deleted versus non-deleted limits
The definition of limit given here does not depend on how (or whether) \( f \) is defined at \( p \). Bartle refers to this as a deleted limit, because it excludes the value of \( f \) at \( p \). The corresponding non-deleted limit does depend on the value of \( f \) at \( p \), if \( p \) is in the domain of \( f \). Let \( f : S \to \mathbb{R} \) be a real-valued function. The non-deleted limit of \( f \), as \( x \) approaches \( p \), is \( L \) if

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in S \), \( |x - p| < \delta \) implies \( |f(x) - L| < \varepsilon \).
The definition is the same, except that the neighborhood \( |x - p| < \delta \) now includes the point \( p \), in contrast to the deleted neighborhood \( 0 < |x - p| < \delta \). This makes the definition of a non-deleted limit less general. One of the advantages of working with non-deleted limits is that they make it possible to state the theorem about limits of compositions without any constraints on the functions (other than the existence of their non-deleted limits).
Bartle notes that although by "limit" some authors do mean this non-deleted limit, deleted limits are the most popular.
Examples
Non-existence of one-sided limit(s)
The function

\[ f(x) = \begin{cases} \sin\dfrac{1}{x} & \text{for } x < 0 \\[6pt] \dfrac{1}{x} & \text{for } x > 0 \end{cases} \]

has no limit at \( x = 0 \) (the left-hand limit does not exist due to the oscillatory nature of the sine function, and the right-hand limit does not exist due to the asymptotic behaviour of the reciprocal function, see picture), but has a limit at every other \( x \)-coordinate.
The function

\[ f(x) = \begin{cases} 1 & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases} \]

(a.k.a., the Dirichlet function) has no limit at any \( x \)-coordinate.
Non-equality of one-sided limits
The function

\[ f(x) = \begin{cases} 1 & \text{for } x < 0 \\ 2 & \text{for } x \ge 0 \end{cases} \]

has a limit at every non-zero \( x \)-coordinate (the limit equals 1 for negative \( x \) and equals 2 for positive \( x \)). The limit at \( x = 0 \) does not exist (the left-hand limit equals 1, whereas the right-hand limit equals 2).
Limits at only one point
The functions

\[ f(x) = \begin{cases} x & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases} \]

and

\[ f(x) = \begin{cases} |x| & x \text{ rational} \\ 0 & x \text{ irrational} \end{cases} \]

both have a limit at \( x = 0 \) and it equals 0.
Limits at countably many points
The function

\[ f(x) = \begin{cases} \sin x & x \text{ irrational} \\ 1 & x \text{ rational} \end{cases} \]

has a limit at any \( x \)-coordinate of the form \( \frac{\pi}{2} + 2n\pi \), where \( n \) is any integer.
Limits involving infinity
Limits at infinity
Let \( f : S \to \mathbb{R} \) be a function defined on \( S \subseteq \mathbb{R} \). The limit of \( f \) as \( x \) approaches infinity is \( L \), denoted

\[ \lim_{x \to \infty} f(x) = L, \]

means that:

For every \( \varepsilon > 0 \), there exists a \( c > 0 \) such that whenever \( x > c \), we have \( |f(x) - L| < \varepsilon \).

Similarly, the limit of \( f \) as \( x \) approaches minus infinity is \( L \), denoted

\[ \lim_{x \to -\infty} f(x) = L, \]

means that:

For every \( \varepsilon > 0 \), there exists a \( c > 0 \) such that whenever \( x < -c \), we have \( |f(x) - L| < \varepsilon \).
For example,

\[ \lim_{x \to \infty} \frac{1}{x} = 0 \]

because for every \( \varepsilon > 0 \), we can take \( c = 1/\varepsilon \) such that for all real \( x \), if \( x > c \), then \( |1/x - 0| < \varepsilon \).
Another example is that

\[ \lim_{x \to -\infty} e^x = 0 \]

because for every \( \varepsilon > 0 \), we can take \( c = \max\{0, -\ln \varepsilon\} \) such that for all real \( x \), if \( x < -c \), then \( |e^x - 0| < \varepsilon \).
Infinite limits
For a function whose values grow without bound, the function diverges and the usual limit does not exist. However, in this case one may introduce limits with infinite values.
Let \( f : S \to \mathbb{R} \) be a function defined on \( S \subseteq \mathbb{R} \). The statement the limit of \( f \) as \( x \) approaches \( p \) is infinity, denoted

\[ \lim_{x \to p} f(x) = \infty, \]

means that:

For every \( N > 0 \), there exists a \( \delta > 0 \) such that whenever \( 0 < |x - p| < \delta \), we have \( f(x) > N \).

The statement the limit of \( f \) as \( x \) approaches \( p \) is minus infinity, denoted

\[ \lim_{x \to p} f(x) = -\infty, \]

means that:

For every \( N > 0 \), there exists a \( \delta > 0 \) such that whenever \( 0 < |x - p| < \delta \), we have \( f(x) < -N \).
For example,

\[ \lim_{x \to 0} \frac{1}{x^2} = \infty \]

because for every \( N > 0 \), we can take \( \delta = 1/\sqrt{N} \) such that for all real \( x \ne 0 \), if \( 0 < |x - 0| < \delta \), then \( 1/x^2 > N \).
These ideas can be used together to produce definitions for different combinations, such as

\[ \lim_{x \to \infty} f(x) = \infty \]

or

\[ \lim_{x \to p^+} f(x) = -\infty. \]
For example,

\[ \lim_{x \to 0^+} \ln x = -\infty \]

because for every \( N > 0 \), we can take \( \delta = e^{-N} \) such that for all real \( x > 0 \), if \( 0 < x - 0 < \delta \), then \( \ln x < -N \).
Limits involving infinity are connected with the concept of asymptotes.
These notions of a limit attempt to provide a metric space interpretation to limits at infinity. In fact, they are consistent with the topological space definition of limit if
a neighborhood of −∞ is defined to contain an interval \( [-\infty, c) \) for some \( c \in \mathbb{R} \);
a neighborhood of ∞ is defined to contain an interval \( (c, \infty] \) where \( c \in \mathbb{R} \); and
a neighborhood of \( a \in \mathbb{R} \) is defined in the normal way, as in the metric space \( \mathbb{R} \).
In this case, \( \overline{\mathbb{R}} = \mathbb{R} \cup \{-\infty, \infty\} \) is a topological space and any function of the form \( f : S \to \overline{\mathbb{R}} \) with \( S \subseteq \overline{\mathbb{R}} \) is subject to the topological definition of a limit. Note that with this topological definition, it is easy to define infinite limits at finite points, which have not been defined above in the metric sense.
Alternative notation
Many authors allow for the projectively extended real line to be used as a way to include infinite values as well as the extended real line. With this notation, the extended real line is given as \( \mathbb{R} \cup \{-\infty, +\infty\} \) and the projectively extended real line is \( \mathbb{R} \cup \{\infty\} \), where a neighborhood of ∞ is a set of the form \( \{x : |x| > c\} \cup \{\infty\} \). The advantage is that one only needs three definitions for limits (left, right, and central) to cover all the cases.
As presented above, for a completely rigorous account, we would need to consider 15 separate cases for each combination of infinities (five directions: −∞, left, central, right, and +∞; three bounds: −∞, finite, or +∞). There are also noteworthy pitfalls. For example, when working with the extended real line, \( \frac{1}{x} \) does not possess a central limit at 0 (which is normal):

\[ \lim_{x \to 0^+} \frac{1}{x} = +\infty, \qquad \lim_{x \to 0^-} \frac{1}{x} = -\infty. \]

In contrast, when working with the projective real line, infinities (much like 0) are unsigned, so, the central limit does exist in that context:

\[ \lim_{x \to 0} \frac{1}{x} = \infty. \]
In fact there are a plethora of conflicting formal systems in use.
In certain applications of numerical differentiation and integration, it is, for example, convenient to have signed zeroes.
A simple reason has to do with the converse of \( \lim_{x \to -\infty} \frac{1}{x} = -0 \): namely, it is convenient for \( \lim_{x \to -0} \frac{1}{x} = -\infty \) to be considered true.
Such zeroes can be seen as an approximation to infinitesimals.
Limits at infinity for rational functions
There are three basic rules for evaluating limits at infinity for a rational function \( f(x) = \frac{p(x)}{q(x)} \) (where \( p \) and \( q \) are polynomials):
If the degree of \( p \) is greater than the degree of \( q \), then the limit is positive or negative infinity depending on the signs of the leading coefficients;
If the degrees of \( p \) and \( q \) are equal, the limit is the leading coefficient of \( p \) divided by the leading coefficient of \( q \);
If the degree of \( p \) is less than the degree of \( q \), the limit is 0.
If the limit at infinity exists, it represents a horizontal asymptote at \( y = L \). Polynomials do not have horizontal asymptotes; such asymptotes may however occur with rational functions.
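One representative example of each rule (the particular quotients here are illustrative choices):

\[ \lim_{x \to \infty} \frac{x^3 + 1}{x^2 + 1} = \infty, \qquad \lim_{x \to \infty} \frac{2x^2 + x}{3x^2 - 5} = \frac{2}{3}, \qquad \lim_{x \to \infty} \frac{x + 2}{x^2 + 1} = 0. \]

In the middle case, the line \( y = 2/3 \) is a horizontal asymptote of the graph.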
Functions of more than one variable
Ordinary limits
By noting that \( |x - p| \) represents a distance, the definition of a limit can be extended to functions of more than one variable. In the case of a function \( f : S \times T \to \mathbb{R} \) defined on \( S \times T \subseteq \mathbb{R}^2 \), we define the limit as follows: the limit of \( f \) as \( (x, y) \) approaches \( (p, q) \) is \( L \), written

\[ \lim_{(x, y) \to (p, q)} f(x, y) = L \]

if the following condition holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \) in \( S \) and \( y \) in \( T \), whenever \( 0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \), we have \( |f(x, y) - L| < \varepsilon \),

or formally:

\[ (\forall \varepsilon > 0)\, (\exists \delta > 0)\, (\forall x \in S)\, (\forall y \in T)\, \left( 0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \implies |f(x, y) - L| < \varepsilon \right). \]
Here \( \sqrt{(x - p)^2 + (y - q)^2} \) is the Euclidean distance between \( (x, y) \) and \( (p, q) \). (This can in fact be replaced by any norm \( \|(x, y) - (p, q)\| \), and be extended to any number of variables.)
For example, we may say

\[ \lim_{(x, y) \to (0, 0)} \frac{x^2 y}{x^2 + y^2} = 0 \]

because for every \( \varepsilon > 0 \), we can take \( \delta = \varepsilon \) such that for all real \( x \) and real \( y \), if \( 0 < \sqrt{x^2 + y^2} < \delta \), then \( \left| \frac{x^2 y}{x^2 + y^2} - 0 \right| \le |y| \le \sqrt{x^2 + y^2} < \varepsilon \).
Similar to the case in single variable, the value of \( f \) at \( (p, q) \) does not matter in this definition of limit.
For such a multivariable limit to exist, this definition requires the value of \( f \) to approach \( L \) along every possible path approaching \( (p, q) \). In the above example, the function

\[ f(x, y) = \frac{x^2 y}{x^2 + y^2} \]

satisfies this condition. This can be seen by considering the polar coordinates

\[ (x, y) = (r \cos \theta(r),\, r \sin \theta(r)) \to (0, 0), \]

which gives

\[ f(x, y) = r \cos^2 \theta(r)\, \sin \theta(r). \]

Here \( \theta = \theta(r) \) is a function of r which controls the shape of the path along which \( (x, y) \) is approaching \( (0, 0) \). Since \( \cos^2 \theta(r)\, \sin \theta(r) \) is bounded between [−1, 1], by the sandwich theorem, this limit tends to 0.
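Written out, the sandwich argument runs as follows (a short check under the reconstruction above):

\[ -r \le r \cos^2 \theta(r)\, \sin \theta(r) \le r, \]

and since both bounding functions tend to 0 as \( r \to 0^+ \), the value of \( f \) tends to 0 along every path.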
In contrast, the function

\[ f(x, y) = \frac{xy}{x^2 + y^2} \]

does not have a limit at \( (0, 0) \). Taking the path \( (x, y) = (t, 0) \to (0, 0) \), we obtain

\[ \lim_{t \to 0} f(t, 0) = \lim_{t \to 0} 0 = 0, \]

while taking the path \( (x, y) = (t, t) \to (0, 0) \), we obtain

\[ \lim_{t \to 0} f(t, t) = \lim_{t \to 0} \frac{t^2}{2t^2} = \frac{1}{2}. \]

Since the two values do not agree, \( f \) does not tend to a single value as \( (x, y) \) approaches \( (0, 0) \).
Multiple limits
Although less commonly used, there is another type of limit for a multivariable function, known as the multiple limit. For a two-variable function, this is the double limit. Let \( f : S \times T \to \mathbb{R} \) be defined on \( S \times T \subseteq \mathbb{R}^2 \); we say the double limit of \( f \) as \( x \) approaches \( p \) and \( y \) approaches \( q \) is \( L \), written

\[ \lim_{\substack{x \to p \\ y \to q}} f(x, y) = L \]

if the following condition holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in S \) and \( y \in T \), whenever \( 0 < |x - p| < \delta \) and \( 0 < |y - q| < \delta \), we have \( |f(x, y) - L| < \varepsilon \).
For such a double limit to exist, this definition requires the value of \( f \) to approach \( L \) along every possible path approaching \( (p, q) \), excluding the two lines \( x = p \) and \( y = q \). As a result, the multiple limit is a weaker notion than the ordinary limit: if the ordinary limit exists and equals \( L \), then the multiple limit exists and also equals \( L \). The converse is not true: the existence of the multiple limit does not imply the existence of the ordinary limit. Consider the example

\[ f(x, y) = \begin{cases} 1 & \text{for } xy \ne 0 \\ 0 & \text{for } xy = 0 \end{cases} \]

where

\[ \lim_{\substack{x \to 0 \\ y \to 0}} f(x, y) = 1 \]

but

\[ \lim_{(x, y) \to (0, 0)} f(x, y) \]

does not exist.
If the domain of \( f \) is restricted to \( (S \setminus \{p\}) \times (T \setminus \{q\}) \), then the two definitions of limits coincide.
Multiple limits at infinity
The concept of multiple limit can extend to the limit at infinity, in a way similar to that of a single variable function. For \( f : S \times T \to \mathbb{R} \), we say the double limit of \( f \) as \( x \) and \( y \) approach infinity is \( L \), written

\[ \lim_{\substack{x \to \infty \\ y \to \infty}} f(x, y) = L \]

if the following condition holds:

For every \( \varepsilon > 0 \), there exists a \( c > 0 \) such that for all \( x \in S \) and \( y \in T \), whenever \( x > c \) and \( y > c \), we have \( |f(x, y) - L| < \varepsilon \).

We say the double limit of \( f \) as \( x \) and \( y \) approach minus infinity is \( L \), written

\[ \lim_{\substack{x \to -\infty \\ y \to -\infty}} f(x, y) = L \]

if the following condition holds:

For every \( \varepsilon > 0 \), there exists a \( c > 0 \) such that for all \( x \in S \) and \( y \in T \), whenever \( x < -c \) and \( y < -c \), we have \( |f(x, y) - L| < \varepsilon \).
Pointwise limits and uniform limits
Let \( f : S \times T \to \mathbb{R} \). Instead of taking the limit as \( (x, y) \to (p, q) \), we may consider taking the limit of just one variable, say, \( x \to p \), to obtain a single-variable function of \( y \), namely \( g : T \to \mathbb{R} \). In fact, this limiting process can be done in two distinct ways. The first one is called the pointwise limit. We say the pointwise limit of \( f \) as \( x \) approaches \( p \) is \( g \), denoted

\[ \lim_{x \to p} f(x, y) = g(y) \ \text{ pointwise}. \]

Alternatively, we may say \( f \) tends to \( g \) pointwise as \( x \) approaches \( p \), denoted

\[ f(x, y) \to g(y) \ \text{ pointwise as } \ x \to p. \]

This limit exists if the following holds:

For every \( \varepsilon > 0 \) and every fixed \( y \in T \), there exists a \( \delta(\varepsilon, y) > 0 \) such that for all \( x \in S \), \( 0 < |x - p| < \delta \) implies \( |f(x, y) - g(y)| < \varepsilon \).
Here, \( \delta = \delta(\varepsilon, y) \) is a function of both \( \varepsilon \) and \( y \). Each \( \delta \) is chosen for a specific point of \( y \). Hence we say the limit is pointwise in \( y \). For example,

\[ f(x, y) = \frac{x}{y} \]

has a pointwise limit of the constant zero function

\[ f(x, y) \to g(y) = 0 \ \text{ pointwise as } \ x \to 0, \]

because for every fixed \( y \ne 0 \), the limit is clearly 0. This argument fails if \( y \) is not fixed: if \( y \) is very close to \( x \), the value of the fraction may deviate from 0.
This leads to another definition of limit, namely the uniform limit. We say the uniform limit of \( f \) on \( T \) as \( x \) approaches \( p \) is \( g \), denoted

\[ \lim_{x \to p} f(x, y) = g(y) \ \text{ uniformly on } T. \]

Alternatively, we may say \( f \) tends to \( g \) uniformly on \( T \) as \( x \) approaches \( p \), denoted

\[ f(x, y) \rightrightarrows g(y) \ \text{ as } \ x \to p. \]

This limit exists if the following holds:

For every \( \varepsilon > 0 \), there exists a \( \delta(\varepsilon) > 0 \) such that for all \( x \in S \) and every \( y \in T \), \( 0 < |x - p| < \delta \) implies \( |f(x, y) - g(y)| < \varepsilon \).
Here, \( \delta = \delta(\varepsilon) \) is a function of \( \varepsilon \) only, and not of \( y \). In other words, δ is uniformly applicable to all \( y \) in \( T \). Hence we say the limit is uniform in \( y \). For example,

\[ f(x, y) = x \cos y \]

has a uniform limit of the constant zero function

\[ f(x, y) \rightrightarrows g(y) = 0 \ \text{ as } \ x \to 0, \]

because for all real \( y \), \( \cos y \) is bounded between \( [-1, 1] \). Hence no matter how \( y \) behaves, we may use the sandwich theorem to show that the limit is 0.
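Equivalently, uniformity can be checked by bounding the worst case over \( y \) (a one-line computation under the illustrative choice of \( f \) above):

\[ \sup_{y \in \mathbb{R}} |x \cos y - 0| = |x| \to 0 \quad \text{as } x \to 0, \]

so \( \delta = \varepsilon \) works simultaneously for every \( y \).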
Iterated limits
Let \( f : S \times T \to \mathbb{R} \). We may consider taking the limit of just one variable, say, \( x \to p \), to obtain a single-variable function of \( y \), namely \( g(y) = \lim_{x \to p} f(x, y) \), and then take the limit in the other variable, namely \( y \to q \), to get a number \( L \). Symbolically,

\[ \lim_{y \to q} \lim_{x \to p} f(x, y) = \lim_{y \to q} g(y) = L. \]

This limit is known as the iterated limit of the multivariable function. The order of taking limits may affect the result, i.e.,

\[ \lim_{y \to q} \lim_{x \to p} f(x, y) \ne \lim_{x \to p} \lim_{y \to q} f(x, y) \]

in general.
A sufficient condition of equality is given by the Moore–Osgood theorem, which requires the limit \( \lim_{x \to p} f(x, y) = g(y) \) to be uniform on \( T \).
Functions on metric spaces
Suppose \( M \) and \( N \) are subsets of metric spaces \( (A, d_A) \) and \( (B, d_B) \), respectively, and \( f : M \to N \) is defined between \( M \) and \( N \), with \( x \in M \), \( p \) a limit point of \( M \) and \( L \in N \). It is said that the limit of \( f \) as \( x \) approaches \( p \) is \( L \) and write

\[ \lim_{x \to p} f(x) = L \]

if the following property holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in M \), \( 0 < d_A(x, p) < \delta \) implies \( d_B(f(x), L) < \varepsilon \).
Again, note that \( p \) need not be in the domain of \( f \), nor does \( L \) need to be in the range of \( f \), and even if \( f(p) \) is defined it need not be equal to \( L \).
Euclidean metric
The limit in Euclidean space is a direct generalization of limits to vector-valued functions. For example, we may consider a function \( f : S \times T \to \mathbb{R}^2 \) such that

\[ f(x, y) = (f_1(x, y),\, f_2(x, y)). \]

Then, under the usual Euclidean metric,

\[ \lim_{(x, y) \to (p, q)} f(x, y) = (L_1, L_2) \]

if the following holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in S \) and \( y \in T \), \( 0 < \sqrt{(x - p)^2 + (y - q)^2} < \delta \) implies \( \sqrt{(f_1 - L_1)^2 + (f_2 - L_2)^2} < \varepsilon \).
In this example, the function concerned is a finite-dimensional vector-valued function. In this case, the limit theorem for vector-valued functions states that if the limit of each component exists, then the limit of the vector-valued function equals the vector with each component taken the limit:

\[ \lim_{(x, y) \to (p, q)} f(x, y) = \left( \lim_{(x, y) \to (p, q)} f_1(x, y),\ \lim_{(x, y) \to (p, q)} f_2(x, y) \right). \]
Manhattan metric
One might also want to consider spaces other than Euclidean space. An example would be the Manhattan space. Consider \( f : S \to \mathbb{R}^2 \) such that

\[ f(x) = (f_1(x),\, f_2(x)). \]

Then, under the Manhattan metric,

\[ \lim_{x \to p} f(x) = (L_1, L_2) \]

if the following holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in S \), \( 0 < |x - p| < \delta \) implies \( |f_1(x) - L_1| + |f_2(x) - L_2| < \varepsilon \).
Since this is also a finite-dimensional vector-valued function, the limit theorem stated above also applies.
Uniform metric
Finally, we will discuss the limit in function space, which has infinite dimensions. Consider a function \( f(x, y) \) in the function space \( S \times T \to \mathbb{R} \). We want to find out, as \( x \) approaches \( p \), how \( f(x, y) \) will tend to another function \( g(y) \), which is in the function space \( T \to \mathbb{R} \). The "closeness" in this function space may be measured under the uniform metric. Then, we will say the uniform limit of \( f \) on \( T \) as \( x \) approaches \( p \) is \( g \) and write

\[ \lim_{x \to p} f(x, y) = g(y) \ \text{ uniformly on } T, \]

or

\[ f(x, y) \rightrightarrows g(y) \ \text{ as } \ x \to p, \]

if the following holds:

For every \( \varepsilon > 0 \), there exists a \( \delta > 0 \) such that for all \( x \in S \), \( 0 < |x - p| < \delta \) implies \( \sup_{y \in T} |f(x, y) - g(y)| < \varepsilon \).
In fact, one can see that this definition is equivalent to that of the uniform limit of a multivariable function introduced in the previous section.
Functions on topological spaces
Suppose \( X \) and \( Y \) are topological spaces with \( Y \) a Hausdorff space. Let \( p \) be a limit point of \( \Omega \subseteq X \), and \( L \in Y \). For a function \( f : \Omega \to Y \), it is said that the limit of \( f \) as \( x \) approaches \( p \) is \( L \), written

\[ \lim_{x \to p} f(x) = L, \]

if the following property holds:

For every open neighborhood \( V \) of \( L \), there exists an open neighborhood \( U \) of \( p \) such that \( f(U \cap \Omega \setminus \{p\}) \subseteq V \).

This last part of the definition can also be phrased "there exists an open punctured neighbourhood \( U \) of \( p \) such that \( f(U \cap \Omega) \subseteq V \)".
The domain of \( f \) does not need to contain \( p \). If it does, then the value of \( f \) at \( p \) is irrelevant to the definition of the limit. In particular, if the domain of \( f \) is \( X \setminus \{p\} \) (or all of \( X \)), then the limit of \( f \) as \( x \to p \) exists and is equal to \( L \) if, for all subsets \( \Omega \) of \( X \) with limit point \( p \), the limit of the restriction of \( f \) to \( \Omega \) exists and is equal to \( L \). Sometimes this criterion is used to establish the non-existence of the two-sided limit of a function on \( \mathbb{R} \) by showing that the one-sided limits either fail to exist or do not agree. Such a view is fundamental in the field of general topology, where limits and continuity at a point are defined in terms of special families of subsets, called filters, or generalized sequences known as nets.
Alternatively, the requirement that \( Y \) be a Hausdorff space can be relaxed to the assumption that \( Y \) be a general topological space, but then the limit of a function may not be unique. In particular, one can no longer talk about the limit of a function at a point, but rather a limit or the set of limits at a point.
A function \( f : X \to Y \) is continuous at a limit point \( p \) of \( X \) that is in its domain if and only if \( f(p) \) is the (or, in the general case, a) limit of \( f(x) \) as \( x \) tends to \( p \).
There is another type of limit of a function, namely the sequential limit. Let \( f : X \to Y \) be a mapping from a topological space \( X \) into a Hausdorff space \( Y \), \( p \in X \) a limit point of \( X \) and \( L \in Y \). The sequential limit of \( f \) as \( x \) tends to \( p \) is \( L \) if

For every sequence \( (x_n) \) in \( X \setminus \{p\} \) that converges to \( p \), the sequence \( (f(x_n)) \) converges to \( L \).
If \( L \) is the limit (in the sense above) of \( f \) as \( x \) approaches \( p \), then it is a sequential limit as well; however, the converse need not hold in general. If in addition \( X \) is metrizable, then \( L \) is the sequential limit of \( f \) as \( x \) approaches \( p \) if and only if it is the limit (in the sense above) of \( f \) as \( x \) approaches \( p \).
Other characterizations
In terms of sequences
For functions on the real line, one way to define the limit of a function is in terms of the limit of sequences. (This definition is usually attributed to Eduard Heine.) In this setting:
\[ \lim_{x \to a} f(x) = L \]

if, and only if, for all sequences \( (x_n) \) (with \( x_n \) not equal to \( a \) for all \( n \)) converging to \( a \), the sequence \( (f(x_n)) \) converges to \( L \). It was shown by Sierpiński in 1916 that proving the equivalence of this definition and the definition above requires, and is equivalent to, a weak form of the axiom of choice. Note that defining what it means for a sequence \( (x_n) \) to converge to \( a \) requires the epsilon, delta method.
Similarly as in the case of Weierstrass's definition, a more general Heine definition applies to functions defined on subsets of the real line. Let \( f \) be a real-valued function with the domain \( D \). Let \( a \) be the limit of a sequence of elements of \( D \setminus \{a\} \). Then the limit (in this sense) of \( f \) is \( L \) as \( x \) approaches \( a \) if

for every sequence \( (x_n) \) in \( D \setminus \{a\} \) (so that for all \( n \), \( x_n \) is not equal to \( a \)) that converges to \( a \), the sequence \( (f(x_n)) \) converges to \( L \). This is the same as the definition of a sequential limit in the preceding section obtained by regarding the subset \( D \) of \( \mathbb{R} \) as a metric space with the induced metric.
In non-standard calculus
In non-standard calculus the limit of a function is defined by:

\[ \lim_{x \to a} f(x) = L \]

if and only if for all \( x \in \mathbb{R}^* \), \( f^*(x) - L \) is infinitesimal whenever \( x - a \) is non-zero and infinitesimal. Here \( \mathbb{R}^* \) are the hyperreal numbers and \( f^* \) is the natural extension of \( f \) to the non-standard real numbers. Keisler proved that such a hyperreal definition of limit reduces the quantifier complexity by two quantifiers. On the other hand, Hrbacek writes that for the definitions to be valid for all hyperreal numbers they must implicitly be grounded in the ε-δ method, and claims that, from the pedagogical point of view, the hope that non-standard calculus could be done without ε-δ methods cannot be realized in full.
Błaszczyk et al. detail the usefulness of microcontinuity in developing a transparent definition of uniform continuity, and characterize Hrbacek's criticism as a "dubious lament".
In terms of nearness
At the 1908 international congress of mathematics, F. Riesz introduced an alternate way of defining limits and continuity using a concept called "nearness". A point \( x \) is defined to be near a set \( A \subseteq \mathbb{R} \) if for every \( r > 0 \) there is a point \( a \in A \) so that \( |x - a| < r \). In this setting the

\[ \lim_{x \to a} f(x) = L \]

if and only if for all \( A \subseteq \mathbb{R} \), \( L \) is near \( f(A) \) whenever \( a \) is near \( A \). Here \( f(A) \) is the set \( \{ f(x) : x \in A \} \). This definition can also be extended to metric and topological spaces.
Relationship to continuity
The notion of the limit of a function is very closely related to the concept of continuity. A function \( f \) is said to be continuous at \( p \) if it is both defined at \( p \) and its value at \( p \) equals the limit of \( f \) as \( x \) approaches \( p \):

\[ \lim_{x \to p} f(x) = f(p). \]

We have here assumed that \( p \) is a limit point of the domain of \( f \).
Properties
If a function \( f \) is real-valued, then the limit of \( f \) at \( p \) is \( L \) if and only if both the right-handed limit and left-handed limit of \( f \) at \( p \) exist and are equal to \( L \).
The function \( f \) is continuous at \( p \) if and only if the limit of \( f(x) \) as \( x \) approaches \( p \) exists and is equal to \( f(p) \). If \( f : M \to N \) is a function between metric spaces \( M \) and \( N \), then it is equivalent that \( f \) transforms every sequence in \( M \) which converges towards \( p \) into a sequence in \( N \) which converges towards \( f(p) \).
If \( N \) is a normed vector space, then the limit operation is linear in the following sense: if the limit of \( f(x) \) as \( x \) approaches \( p \) is \( L \) and the limit of \( g(x) \) as \( x \) approaches \( p \) is \( P \), then the limit of \( f(x) + g(x) \) as \( x \) approaches \( p \) is \( L + P \). If \( a \) is a scalar from the base field, then the limit of \( a f(x) \) as \( x \) approaches \( p \) is \( a L \).
If \( f \) and \( g \) are real-valued (or complex-valued) functions, then taking the limit of an operation on \( f(x) \) and \( g(x) \) (e.g., \( f + g \), \( f - g \), \( f \cdot g \), \( f / g \), \( f^g \)) under certain conditions is compatible with the operation of limits of \( f(x) \) and \( g(x) \). This fact is often called the algebraic limit theorem. The main condition needed to apply the following rules is that the limits on the right-hand sides of the equations exist (in other words, these limits are finite values including 0). Additionally, the identity for division requires that the denominator on the right-hand side is non-zero (division by 0 is not defined), and the identity for exponentiation requires that the base is positive, or zero while the exponent is positive (finite).
These rules are also valid for one-sided limits, including when is ∞ or −∞. In each rule above, when one of the limits on the right is ∞ or −∞, the limit on the left may sometimes still be determined by the following rules.
(see also Extended real number line).
In other cases the limit on the left may still exist, although the right-hand side, called an indeterminate form, does not allow one to determine the result. This depends on the functions \( f \) and \( g \). These indeterminate forms are:

\[ \frac{0}{0}, \quad \frac{\pm\infty}{\pm\infty}, \quad 0 \cdot (\pm\infty), \quad \infty - \infty, \quad 0^0, \quad \infty^0, \quad 1^\infty. \]
See further L'Hôpital's rule below and Indeterminate form.
Limits of compositions of functions
In general, from knowing that

\[ \lim_{y \to b} f(y) = c \quad \text{and} \quad \lim_{x \to a} g(x) = b, \]

it does not follow that \( \lim_{x \to a} f(g(x)) = c \). However, this "chain rule" does hold if one of the following additional conditions holds:
\( f(b) = c \) (that is, \( f \) is continuous at \( b \)), or
\( g \) does not take the value \( b \) near \( a \) (that is, there exists a \( \delta > 0 \) such that if \( 0 < |x - a| < \delta \) then \( |g(x) - b| > 0 \)).
As an example of this phenomenon, consider the following function that violates both additional restrictions:

\[ f(x) = g(x) = \begin{cases} 1 & \text{if } x = 0 \\ 0 & \text{if } x \ne 0. \end{cases} \]

Since the value at \( x = 0 \) is a removable discontinuity,

\[ \lim_{x \to a} f(x) = 0 \]

for all \( a \). Thus, the naïve chain rule would suggest that the limit of \( f(f(x)) \) is 0. However, it is the case that

\[ f(f(x)) = \begin{cases} 1 & \text{if } x \ne 0 \\ 0 & \text{if } x = 0 \end{cases} \]

and so

\[ \lim_{x \to a} f(f(x)) = 1 \]

for all \( a \).
Limits of special interest
Rational functions
For a nonnegative integer \( n \) and constants \( a_0, a_1, \ldots, a_n \) and \( b_0, b_1, \ldots, b_n \) with \( b_n \ne 0 \),

\[ \lim_{x \to \infty} \frac{a_n x^n + a_{n-1} x^{n-1} + \cdots + a_0}{b_n x^n + b_{n-1} x^{n-1} + \cdots + b_0} = \frac{a_n}{b_n}. \]

This can be proven by dividing both the numerator and denominator by \( x^n \). If the numerator is a polynomial of higher degree, the limit does not exist. If the denominator is of higher degree, the limit is 0.
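For instance (an illustrative quotient), dividing the numerator and denominator by \( x^2 \) gives

\[ \lim_{x \to \infty} \frac{3x^2 + 2x}{2x^2 - 1} = \lim_{x \to \infty} \frac{3 + 2/x}{2 - 1/x^2} = \frac{3 + 0}{2 - 0} = \frac{3}{2}. \]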
Trigonometric functions
Exponential functions
Logarithmic functions
L'Hôpital's rule
This rule uses derivatives to find limits of the indeterminate forms \( \frac{0}{0} \) or \( \frac{\pm\infty}{\pm\infty} \), and only applies to such cases. Other indeterminate forms may be manipulated into this form. Given two functions \( f(x) \) and \( g(x) \), defined over an open interval \( I \) containing the desired limit point \( c \), then if:

\( \lim_{x \to c} f(x) = \lim_{x \to c} g(x) = 0 \), or \( \lim_{x \to c} f(x) \) and \( \lim_{x \to c} g(x) \) are both \( \pm\infty \), and
\( f \) and \( g \) are differentiable over \( I \setminus \{c\} \), and
\( g'(x) \ne 0 \) for all \( x \in I \setminus \{c\} \), and
\( \lim_{x \to c} \frac{f'(x)}{g'(x)} \) exists,

then:

\[ \lim_{x \to c} \frac{f(x)}{g(x)} = \lim_{x \to c} \frac{f'(x)}{g'(x)}. \]
Normally, the first condition is the most important one.
For example:

\[ \lim_{x \to 0} \frac{\sin x}{x} = \lim_{x \to 0} \frac{\cos x}{1} = 1. \]
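Another routine application (an illustrative example) resolves an \( \frac{\infty}{\infty} \) form:

\[ \lim_{x \to \infty} \frac{\ln x}{x} = \lim_{x \to \infty} \frac{1/x}{1} = 0. \]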
Summations and integrals
Specifying an infinite bound on a summation or integral is a common shorthand for specifying a limit.
A short way to write the limit

\[ \lim_{n \to \infty} \sum_{i=s}^{n} f(i) \]

is \( \sum_{i=s}^{\infty} f(i) \). An important example of limits of sums such as these are series.
A short way to write the limit

\[ \lim_{x \to \infty} \int_{a}^{x} f(t)\, dt \]

is \( \int_{a}^{\infty} f(t)\, dt \).
A short way to write the limit

\[ \lim_{x \to -\infty} \int_{x}^{b} f(t)\, dt \]

is \( \int_{-\infty}^{b} f(t)\, dt \).
| Mathematics | Calculus and analysis | null |
285773 | https://en.wikipedia.org/wiki/Limit%20of%20a%20sequence | Limit of a sequence | As the positive integer \( n \) becomes larger and larger, the value \( n \sin\frac{1}{n} \) becomes arbitrarily close to 1. We say that "the limit of the sequence \( n \sin\frac{1}{n} \) equals 1."
In mathematics, the limit of a sequence is the value that the terms of a sequence "tend to", and is often denoted using the \( \lim \) symbol (e.g., \( \lim_{n \to \infty} a_n \)). If such a limit exists and is finite, the sequence is called convergent. A sequence that does not converge is said to be divergent. The limit of a sequence is said to be the fundamental notion on which the whole of mathematical analysis ultimately rests.
Limits can be defined in any metric or topological space, but are usually first encountered in the real numbers.
History
The Greek philosopher Zeno of Elea is famous for formulating paradoxes that involve limiting processes.
Leucippus, Democritus, Antiphon, Eudoxus, and Archimedes developed the method of exhaustion, which uses an infinite sequence of approximations to determine an area or a volume. Archimedes succeeded in summing what is now called a geometric series.
Grégoire de Saint-Vincent gave the first definition of limit (terminus) of a geometric series in his work Opus Geometricum (1647): "The terminus of a progression is the end of the series, which none progression can reach, even not if she is continued in infinity, but which she can approach nearer than a given segment."
Pietro Mengoli anticipated the modern idea of limit of a sequence with his study of quasi-proportions in Geometriae speciosae elementa (1659). He used the term quasi-infinite for unbounded and quasi-null for vanishing.
Newton dealt with series in his works on Analysis with infinite series (written in 1669, circulated in manuscript, published in 1711), Method of fluxions and infinite series (written in 1671, published in English translation in 1736, Latin original published much later) and Tractatus de Quadratura Curvarum (written in 1693, published in 1704 as an appendix to his Opticks). In the latter work, Newton considers the binomial expansion of \( (x + o)^n \), which he then linearizes by taking the limit as \( o \) tends to 0.
In the 18th century, mathematicians such as Euler succeeded in summing some divergent series by stopping at the right moment; they did not much care whether a limit existed, as long as it could be calculated. At the end of the century, Lagrange in his Théorie des fonctions analytiques (1797) opined that the lack of rigour precluded further development in calculus. Gauss in his study of hypergeometric series (1813) for the first time rigorously investigated the conditions under which a series converged to a limit.
The modern definition of a limit (for any \( \varepsilon > 0 \) there exists an index \( N \) so that ...) was given by Bernard Bolzano (Der binomische Lehrsatz, Prague 1816, which was little noticed at the time), and by Karl Weierstrass in the 1870s.
Real numbers
In the real numbers, a number \( L \) is the limit of the sequence \( (x_n) \), if the numbers in the sequence become closer and closer to \( L \), and not to any other number.
Examples
If \( x_n = c \) for some constant \( c \), then \( x_n \to c \).
If \( x_n = \frac{1}{n} \), then \( x_n \to 0 \).
If \( x_n = \frac{1}{n} \) when \( n \) is even, and \( x_n = \frac{1}{n^2} \) when \( n \) is odd, then \( x_n \to 0 \). (The fact that the sequence is not monotonic is irrelevant.)
Given any real number, one may easily construct a sequence that converges to that number by taking decimal approximations. For example, the sequence \( 0.3, 0.33, 0.333, 0.3333, \ldots \) converges to \( \frac{1}{3} \). The decimal representation \( 0.3333\ldots \) is the limit of the previous sequence, defined by

\[ 0.3333\ldots \triangleq \lim_{n \to \infty} \sum_{i=1}^{n} \frac{3}{10^i}. \]
Finding the limit of a sequence is not always obvious. Two examples are \( \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n \) (the limit of which is the number e) and the arithmetic–geometric mean. The squeeze theorem is often useful in the establishment of such limits.
Definition
We call \( x \) the limit of the sequence \( (x_n) \), which is written

\[ x_n \to x, \]

or

\[ \lim_{n \to \infty} x_n = x, \]

if the following condition holds:

For each real number \( \varepsilon > 0 \), there exists a natural number \( N \) such that, for every natural number \( n \ge N \), we have \( |x_n - x| < \varepsilon \).

In other words, for every measure of closeness \( \varepsilon \), the sequence's terms are eventually that close to the limit. The sequence \( (x_n) \) is said to converge to or tend to the limit \( x \).

Symbolically, this is:

\[ \forall \varepsilon > 0 \left( \exists N \in \mathbb{N} \left( \forall n \in \mathbb{N} \left( n \ge N \implies |x_n - x| < \varepsilon \right) \right) \right). \]
If a sequence \( (x_n) \) converges to some limit \( x \), then it is convergent and \( x \) is the only limit; otherwise \( (x_n) \) is divergent. A sequence that has zero as its limit is sometimes called a null sequence.
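As a concrete check of the definition (a standard example), the sequence \( x_n = \frac{1}{n} \) is a null sequence: given \( \varepsilon > 0 \), choose any natural number \( N > 1/\varepsilon \); then for every \( n \ge N \),

\[ |x_n - 0| = \frac{1}{n} \le \frac{1}{N} < \varepsilon. \]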
Illustration
Properties
Some other important properties of limits of real sequences include the following:
When it exists, the limit of a sequence is unique.
Limits of sequences behave well with respect to the usual arithmetic operations. If $\lim_{n\to\infty} a_n = a$ and $\lim_{n\to\infty} b_n = b$ exist, then
$\lim_{n\to\infty} (a_n \pm b_n) = a \pm b$, $\lim_{n\to\infty} (a_n b_n) = ab$, and $\lim_{n\to\infty} \frac{a_n}{b_n} = \frac{a}{b}$, provided $b \neq 0$.
For any continuous function $f$, if $\lim_{n\to\infty} x_n$ exists, then $\lim_{n\to\infty} f(x_n)$ exists too. In fact, any real-valued function $f$ is continuous if and only if it preserves the limits of sequences (though this is not necessarily true when using more general notions of continuity).
If $a_n \leq b_n$ for all $n$ greater than some $N$, then $\lim_{n\to\infty} a_n \leq \lim_{n\to\infty} b_n$ (provided both limits exist).
(Squeeze theorem) If $a_n \leq c_n \leq b_n$ for all $n$ greater than some $N$, and $\lim_{n\to\infty} a_n = \lim_{n\to\infty} b_n = L$, then $\lim_{n\to\infty} c_n = L$.
(Monotone convergence theorem) If $(a_n)$ is bounded and monotonic for all $n$ greater than some $N$, then it is convergent.
A sequence is convergent if and only if every subsequence is convergent.
If every subsequence of a sequence has its own subsequence which converges to the same point, then the original sequence converges to that point.
These properties are extensively used to prove limits, without the need to directly use the cumbersome formal definition. For example, once it is proven that $\frac{1}{n} \to 0$, it becomes easy to show—using the properties above—that $\frac{a}{b + \frac{c}{n}} \to \frac{a}{b}$ (assuming that $b \neq 0$).
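For routine sequences these properties are mechanized in computer algebra systems. A minimal sketch using SymPy's limit_seq (assuming a recent SymPy install; the concrete sequences are made up for illustration):

    from sympy import limit_seq, symbols

    n = symbols('n', positive=True, integer=True)

    # The limit laws above are what make such evaluations routine.
    print(limit_seq((2*n + 1) / (3*n - 5), n))  # -> 2/3
    print(limit_seq(3 / (2 + 5/n), n))          # -> 3/2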
Infinite limits
A sequence $(x_n)$ is said to tend to infinity, written
$x_n \to \infty$, or
$\lim_{n\to\infty} x_n = \infty$,
if the following holds:
For every real number $K$, there is a natural number $N$ such that for every natural number $n \geq N$, we have $x_n > K$; that is, the sequence terms are eventually larger than any fixed $K$.
Symbolically, this is:
$\forall K \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n \in \mathbb{N} : \bigl(n \geq N \implies x_n > K\bigr)$.
Similarly, we say a sequence tends to minus infinity, written
$x_n \to -\infty$, or
$\lim_{n\to\infty} x_n = -\infty$,
if the following holds:
For every real number $K$, there is a natural number $N$ such that for every natural number $n \geq N$, we have $x_n < K$; that is, the sequence terms are eventually smaller than any fixed $K$.
Symbolically, this is:
$\forall K \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n \in \mathbb{N} : \bigl(n \geq N \implies x_n < K\bigr)$.
If a sequence tends to infinity or minus infinity, then it is divergent. However, a divergent sequence need not tend to plus or minus infinity, and the sequence $x_n = (-1)^n$ provides one such example.
Metric spaces
Definition
A point $x$ of the metric space $(X, d)$ is the limit of the sequence $(x_n)$ if:
For each real number $\varepsilon > 0$, there is a natural number $N$ such that, for every natural number $n \geq N$, we have $d(x_n, x) < \varepsilon$.
Symbolically, this is:
$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n \in \mathbb{N} : \bigl(n \geq N \implies d(x_n, x) < \varepsilon\bigr)$.
This coincides with the definition given for real numbers when $X = \mathbb{R}$ and $d(x, y) = |x - y|$.
Properties
When it exists, the limit of a sequence is unique, as distinct points are separated by some positive distance, so for $\varepsilon$ less than half this distance, sequence terms cannot be within a distance $\varepsilon$ of both points.
For any continuous function $f$, if $\lim_{n\to\infty} x_n = x$ exists, then $\lim_{n\to\infty} f(x_n) = f(x)$. In fact, a function $f$ is continuous if and only if it preserves the limits of sequences.
Cauchy sequences
A Cauchy sequence is a sequence whose terms ultimately become arbitrarily close together, after sufficiently many initial terms have been discarded. The notion of a Cauchy sequence is important in the study of sequences in metric spaces, and, in particular, in real analysis. One particularly important result in real analysis is the Cauchy criterion for convergence of sequences: a sequence of real numbers is convergent if and only if it is a Cauchy sequence. This remains true in other complete metric spaces.
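Numerically, the Cauchy property shows up as tails whose terms bunch together. The sketch below is illustrative only (it inspects a finite window, using the partial sums of the Leibniz series for pi as an example sequence):

    def leibniz_partials(count):
        # First `count` partial sums of 4*sum((-1)^k / (2k+1)), which tend to pi.
        sums, s = [], 0.0
        for k in range(count):
            s += 4 * (-1)**k / (2*k + 1)
            sums.append(s)
        return sums

    xs = leibniz_partials(5000)

    # A Cauchy sequence has tails of arbitrarily small spread; here the
    # spread max(tail) - min(tail) visibly shrinks as N grows.
    for N in [10, 100, 1000]:
        tail = xs[N:]
        print(f"N={N:>5}  tail spread = {max(tail) - min(tail):.4f}")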
Topological spaces
Definition
A point $x$ of the topological space $(X, \tau)$ is a limit or limit point of the sequence $(x_n)$ if:
For every neighbourhood $U$ of $x$, there exists some $N$ such that for every $n \geq N$, we have $x_n \in U$.
This coincides with the definition given for metric spaces, if $(X, d)$ is a metric space and $\tau$ is the topology generated by $d$.
A limit of a sequence of points $(x_n)$ in a topological space $T$ is a special case of a limit of a function: the domain is $\mathbb{N}$ in the space $\mathbb{N} \cup \{+\infty\}$, with the induced topology of the affinely extended real number system, the range is $T$, and the function argument $n$ tends to $+\infty$, which in this space is a limit point of $\mathbb{N}$.
Properties
In a Hausdorff space, limits of sequences are unique whenever they exist. This need not be the case in non-Hausdorff spaces; in particular, if two points $x$ and $y$ are topologically indistinguishable, then any sequence that converges to $x$ must converge to $y$ and vice versa.
Hyperreal numbers
The definition of the limit using the hyperreal numbers formalizes the intuition that for a "very large" value of the index, the corresponding term is "very close" to the limit. More precisely, a real sequence $(x_n)$ tends to $L$ if for every infinite hypernatural $H$, the term $x_H$ is infinitely close to $L$ (i.e., the difference $x_H - L$ is infinitesimal). Equivalently, $L$ is the standard part of $x_H$:
$L = \operatorname{st}(x_H)$.
Thus, the limit can be defined by the formula
$\lim_{n\to\infty} x_n = \operatorname{st}(x_H)$,
where the limit exists if and only if the right-hand side is independent of the choice of an infinite $H$.
Sequence of more than one index
Sometimes one may also consider a sequence with more than one index, for example, a double sequence $(a_{n,m})$. This sequence has a limit $L$ if it becomes closer and closer to $L$ when both $n$ and $m$ become very large.
Examples
If $a_{n,m} = c$ for some constant $c$, then $a_{n,m} \to c$.
If $a_{n,m} = \frac{1}{n+m}$, then $a_{n,m} \to 0$.
If $a_{n,m} = \frac{n}{n+m}$, then the limit does not exist. Depending on the relative "growing speed" of $n$ and $m$, this sequence can get closer to any value between $0$ and $1$.
Definition
We call $x$ the double limit of the sequence $(a_{n,m})$, written
$a_{n,m} \to x$, or
$\lim_{n,m\to\infty} a_{n,m} = x$,
if the following condition holds:
For each real number $\varepsilon > 0$, there exists a natural number $N$ such that, for every pair of natural numbers $n, m \geq N$, we have $|a_{n,m} - x| < \varepsilon$.
In other words, for every measure of closeness $\varepsilon$, the sequence's terms are eventually that close to the limit. The sequence $(a_{n,m})$ is said to converge to or tend to the limit $x$.
Symbolically, this is:
$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall n, m \in \mathbb{N} : \bigl(n, m \geq N \implies |a_{n,m} - x| < \varepsilon\bigr)$.
The double limit is different from taking the limit in $n$ first, and then in $m$. The latter is known as an iterated limit. Given that both the double limit and the iterated limit exist, they have the same value. However, it is possible that one of them exists but the other does not.
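The failure mode is easy to reproduce numerically. An illustrative Python experiment with the example $a_{n,m} = \frac{n}{n+m}$ from above (the index values are arbitrary choices):

    a = lambda n, m: n / (n + m)
    BIG = 10**9  # stands in for "letting an index tend to infinity"

    # The two iterated limits disagree:
    print(a(BIG, 100))   # limit in n first: close to 1
    print(a(100, BIG))   # limit in m first: close to 0

    # Along different growth paths the values also disagree, so the
    # double limit does not exist:
    for n, m in [(10**3, 10**3), (10**6, 10**6), (10**6, 2 * 10**6)]:
        print(n, m, a(n, m))  # 0.5, 0.5, then 1/3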
Infinite limits
A sequence $(a_{n,m})$ is said to tend to infinity, written
$a_{n,m} \to \infty$, or
$\lim_{n,m\to\infty} a_{n,m} = \infty$,
if the following holds:
For every real number $K$, there is a natural number $N$ such that for every pair of natural numbers $n, m \geq N$, we have $a_{n,m} > K$; that is, the sequence terms are eventually larger than any fixed $K$.
Symbolically, this is:
$\forall K \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n, m \in \mathbb{N} : \bigl(n, m \geq N \implies a_{n,m} > K\bigr)$.
Similarly, a sequence $(a_{n,m})$ tends to minus infinity, written
$a_{n,m} \to -\infty$, or
$\lim_{n,m\to\infty} a_{n,m} = -\infty$,
if the following holds:
For every real number $K$, there is a natural number $N$ such that for every pair of natural numbers $n, m \geq N$, we have $a_{n,m} < K$; that is, the sequence terms are eventually smaller than any fixed $K$.
Symbolically, this is:
$\forall K \in \mathbb{R} \;\; \exists N \in \mathbb{N} \;\; \forall n, m \in \mathbb{N} : \bigl(n, m \geq N \implies a_{n,m} < K\bigr)$.
If a sequence tends to infinity or minus infinity, then it is divergent. However, a divergent sequence need not tend to plus or minus infinity, and the sequence $a_{n,m} = (-1)^{n+m}$ provides one such example.
Pointwise limits and uniform limits
For a double sequence $(a_{n,m})$, we may take the limit in one of the indices, say, $n \to \infty$, to obtain a single sequence $(b_m)$. In fact, there are two possible meanings when taking this limit. The first one is called the pointwise limit, denoted
$\lim_{n\to\infty} a_{n,m} = b_m$ pointwise, or
$a_{n,m} \to b_m$ pointwise,
which means:
For each real number $\varepsilon > 0$ and each fixed natural number $m$, there exists a natural number $N(\varepsilon, m)$ such that, for every natural number $n \geq N$, we have $|a_{n,m} - b_m| < \varepsilon$.
Symbolically, this is:
$\forall \varepsilon > 0 \;\; \forall m \in \mathbb{N} \;\; \exists N \in \mathbb{N} \;\; \forall n \in \mathbb{N} : \bigl(n \geq N \implies |a_{n,m} - b_m| < \varepsilon\bigr)$.
When such a limit exists, we say the sequence $(a_{n,m})$ converges pointwise to $(b_m)$.
The second one is called the uniform limit, denoted
$\lim_{n\to\infty} a_{n,m} = b_m$ uniformly,
$a_{n,m} \to b_m$ uniformly,
$\lim_{n\to\infty} a_{n,m} = b_m$ uniformly in $m$, or
$a_{n,m} \rightrightarrows b_m$,
which means:
For each real number $\varepsilon > 0$, there exists a natural number $N(\varepsilon)$ such that, for every natural number $m$ and for every natural number $n \geq N$, we have $|a_{n,m} - b_m| < \varepsilon$.
Symbolically, this is:
$\forall \varepsilon > 0 \;\; \exists N \in \mathbb{N} \;\; \forall m \in \mathbb{N} \;\; \forall n \in \mathbb{N} : \bigl(n \geq N \implies |a_{n,m} - b_m| < \varepsilon\bigr)$.
In this definition, the choice of $N$ is independent of $m$. In other words, the choice of $N$ is uniformly applicable to all natural numbers $m$. Hence, one can easily see that uniform convergence is a stronger property than pointwise convergence: the existence of the uniform limit implies the existence and equality of the pointwise limit:
If $a_{n,m} \to b_m$ uniformly, then $a_{n,m} \to b_m$ pointwise.
When such a limit exists, we say the sequence $(a_{n,m})$ converges uniformly to $(b_m)$.
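Numerically, non-uniformity shows up as a supremum over $m$ that refuses to shrink. An illustrative sketch, reusing $a_{n,m} = \frac{n}{n+m}$, whose pointwise limit in $n$ is $b_m = 1$ for every fixed $m$ (the sampling grid over m is an arbitrary choice):

    a = lambda n, m: n / (n + m)
    b = lambda m: 1.0   # pointwise limit of a(n, m) as n -> infinity

    # For each n, approximate sup over m of |a(n, m) - b(m)|.
    # Each fixed m is eventually close to 1, yet the sup stays near 1,
    # so the convergence is pointwise but NOT uniform in m.
    for n in [10, 1_000, 100_000]:
        sup = max(abs(a(n, m) - b(m)) for m in range(1, 10**6, 997))
        print(f"n={n:>6}  sup_m |a - b| = {sup:.4f}")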
Iterated limit
For a double sequence $(a_{n,m})$, we may take the limit in one of the indices, say, $n \to \infty$, to obtain a single sequence $(b_m)$, and then take the limit in the other index, namely $m \to \infty$, to get a number $y$. Symbolically,
$\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} = \lim_{m\to\infty} b_m = y$.
This limit is known as the iterated limit of the double sequence. The order of taking limits may affect the result, i.e.,
$\lim_{m\to\infty} \lim_{n\to\infty} a_{n,m} \neq \lim_{n\to\infty} \lim_{m\to\infty} a_{n,m}$
in general.
A sufficient condition for equality is given by the Moore-Osgood theorem, which requires the limit $\lim_{n\to\infty} a_{n,m} = b_m$ to be uniform in $m$.
| Mathematics | Calculus and analysis | null |
285802 | https://en.wikipedia.org/wiki/Marabou%20stork | Marabou stork | The marabou stork (Leptoptilos crumenifer) is a large wading bird in the stork family Ciconiidae native to sub-Saharan Africa. It breeds in both wet and arid habitats, often near human habitation, especially landfill sites. It is sometimes called the "undertaker bird" due to its shape from behind: cloak-like wings and back, skinny white legs, and sometimes a large white mass of "hair". It has often been credited with the largest wingspan of any land bird, with an average of and some recorded examples of up to .
Taxonomy
The marabou stork was formally described in 1831 by the French naturalist René Lesson. He placed it in the stork genus Ciconia and coined the binomial name Ciconia crumenifera. He specified the type locality as Senegal. The species epithet means "carrying a purse around the neck". The species is now placed with the lesser adjutant and the greater adjutant in the genus Leptoptilos that Lesson had introduced at the same time he described the marabou stork. The species is monotypic: no subspecies are recognised.
The common name marabou is thought to be derived from the Arabic word murābit meaning quiet or hermit-like. The species was originally described as Ciconia crumenifera. When the species was moved into the genus Leptoptilos, the ending was modified to crumeniferus and this was used by many authors until it was noted that the correct masculine ending to match the genus is crumenifer.
Description
The marabou stork is a massive bird: large specimens are thought to reach a height of and a weight of . A wingspan of was accepted by Fisher and Peterson, who ranked the species as having the largest wing-spread of any living bird. Even higher measurements of up to have been reported, although no measurement over has been verified. It is often credited with the largest spread of any landbird, to rival the Andean condor; more typically, however, these storks measure across the wings, which is about a foot less than the average Andean condor wingspan and nearly two feet less than the average of the largest albatrosses and pelicans. Typical weight is , unusually as low as , and length (from bill to tail) is . Females are smaller than males. Bill length can range from . Unlike most storks, the three Leptoptilos species fly with the neck retracted like a heron.
The marabou is unmistakable due to its size, bare head and neck, black back, and white underparts. It has a huge bill, a pink gular sac at its throat (crumenifer(us) means "carrier of a pouch for money"), a neck ruff, and white legs and black wings. The sexes are alike, but the young bird is browner and has a smaller bill. Full maturity is not reached for up to four years.
Behavior and ecology
Like most storks, the marabou is gregarious and a colonial breeder. In the African dry season (when food is more readily available as the pools shrink), it builds a tree nest in which two or three eggs are laid. It is known to be quite ill-tempered.
It also resembles other storks in that it is not very vocal, but indulges in bill-rattling courtship displays. The throat sac is also used to make various noises at that time.
Breeding
The marabou stork breeds in Africa south of the Sahara. In East Africa, the birds interact with humans and breed in urban areas. In southern African countries, the birds breed mainly in less populated areas. The marabou stork breeds in colonies, starting during the dry season. The female lays two to three eggs in a small nest made of sticks; eggs hatch after an incubation period of 30 days. Their young reach sexual maturity at 4 years of age. Lifespan is 43 years in captivity and 25 years in the wild.
Feeding
The marabou stork is a frequent scavenger, and the naked head and long neck are adaptations to this livelihood, as it is with the vultures with which the stork often feeds. In both cases, a feathered head would become rapidly clotted with blood and other substances when the bird's head was inside a large corpse, and the bare head is easier to keep clean.
This large and powerful bird eats mainly carrion, scraps, and faeces but will opportunistically eat almost any animal matter it can swallow. It occasionally eats other birds including Quelea nestlings, pigeons, doves, pelican and cormorant chicks, and even flamingos. During the breeding season, adults scale back on carrion and take mostly small, live prey since nestlings need this kind of food to survive. Common prey at this time may consist of fish, frogs, insects, eggs, small mammals and reptiles such as crocodile hatchlings and eggs, lizards and snakes. Though known to eat putrid and seemingly inedible foods, these storks may sometimes wash food in water to remove soil.
When feeding on carrion, marabou frequently follow vultures, which are better equipped with hooked bills for tearing through carrion meat and may wait for the vultures to cast aside a piece, steal a piece of meat directly from the vulture or wait until the vultures are done. As with vultures, marabou storks perform an important natural function by cleaning areas via their ingestion of carrion and waste.
Increasingly, marabous have become dependent on human garbage and hundreds of the huge birds can be found around African dumps or waiting for a hand out in urban areas. Marabous eating human garbage have been seen to devour virtually anything that they can swallow, including shoes and pieces of metal. Marabous conditioned to eating from human sources have been known to lash out when refused food.
Threats
Fully grown marabou storks have few natural enemies, and have high annual survival rate, though lions have reportedly preyed on some individuals in ambush. A number of endoparasites have been identified in wild marabous including Cheilospirura, Echinura and Acuaria nematodes, Amoebotaenia sphenoides (Cestoda) and Dicrocoelium hospes (Trematoda).
Human uses
Marabou down is frequently used in the trimming of various items of clothing and hats, as well as fishing lures. Turkey down and similar feathers have been used as a substitute for making 'marabou' trimming.
| Biology and health sciences | Pelecanimorphae | Animals |
286069 | https://en.wikipedia.org/wiki/Mixture | Mixture | In chemistry, a mixture is a material made up of two or more different chemical substances which can be separated by physical methods. It is an impure substance made up of two or more elements or compounds mechanically mixed together in any proportion. A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions, suspensions or colloids.
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds, without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point, may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them).
Characteristics of mixtures
All mixtures can be characterized as being separable by mechanical means (e.g. purification, distillation, electrolysis, chromatography, heat, filtration, gravitational sorting, centrifugation). Mixtures differ from chemical compounds in the following ways:
The substances in a mixture can be separated using physical methods such as filtration, freezing, and distillation.
There is little or no energy change when a mixture forms (see Enthalpy of mixing).
The substances in a mixture keep their separate properties.
In the example of sand and water, neither of the two substances changes in any way when they are mixed. Although the sand is in the water, it still keeps the same properties that it had when it was outside the water.
Mixtures have variable compositions, while compounds have a fixed, definite formula.
When mixed, individual substances keep their properties in a mixture, while if they form a compound their properties can change.
The following table shows the main properties and examples for all possible phase combinations of the three "families" of mixtures:
Homogeneous and heterogeneous mixtures
Mixtures can be either homogeneous or heterogeneous: a mixture of uniform composition, in which all components are in the same phase, such as salt in water, is called homogeneous, whereas a mixture of non-uniform composition, in which the components can be easily identified, such as sand in water, is called heterogeneous.
In addition, "uniform mixture" is another term for homogeneous mixture and "non-uniform mixture" is another term for heterogeneous mixture. These terms are derived from the idea that a homogeneous mixture has a uniform appearance, or only one phase, because the particles are evenly distributed. However, a heterogeneous mixture has constituent substances that are in different phases and easily distinguishable from one another. In addition, a heterogeneous mixture may have a uniform (e.g. a colloid) or non-uniform (e.g. a pencil) composition.
Several solid substances, such as salt and sugar, dissolve in water to form homogeneous mixtures or "solutions", in which there are both a solute (dissolved substance) and a solvent (dissolving medium) present. Air is an example of a solution as well: a homogeneous mixture of gaseous nitrogen solvent, in which oxygen and smaller amounts of other gaseous solutes are dissolved. Mixtures are not limited in either their number of substances or the amounts of those substances, though in most solutions, the solute-to-solvent proportion can only reach a certain point before the mixture separates and becomes heterogeneous.
A homogeneous mixture is characterized by uniform dispersion of its constituent substances throughout; the substances exist in equal proportion everywhere within the mixture. Differently put, a homogeneous mixture will be the same no matter from where in the mixture it is sampled. For example, if a solid-liquid solution is divided into two halves of equal volume, the halves will contain equal amounts of both the liquid medium and dissolved solid (solvent and solute).
Homogeneous mixtures
Solutions
A solution is equivalent to a "homogeneous mixture". In solutions, solutes will not settle out after any period of time and they cannot be removed by physical methods, such as a filter or centrifuge. As a homogeneous mixture, a solution has one phase (solid, liquid, or gas), although the phase of the solute and solvent may initially have been different (e.g., salt water).
Gases
Gases exhibit by far the greatest space (and, consequently, the weakest intermolecular forces) between their atoms or molecules; since intermolecular interactions are minuscule in comparison to those in liquids and solids, dilute gases very easily form solutions with one another. Air is one such example: it can be more specifically described as a gaseous solution of oxygen and other gases dissolved in nitrogen (its major component).
Heterogeneous mixtures
Examples of heterogeneous mixtures are emulsions and foams. In most cases, the mixture consists of two main constituents. For an emulsion, these are immiscible fluids such as water and oil. For a foam, these are a solid and a fluid, or a liquid and a gas. On larger scales both constituents are present in any region of the mixture, and in a well-mixed mixture in the same or only slightly varying concentrations. On a microscopic scale, however, one of the constituents is absent in almost any sufficiently small region. (If such absence is common on macroscopic scales, the combination of the constituents is a dispersed medium, not a mixture.) One can distinguish different characteristics of heterogeneous mixtures by the presence or absence of continuum percolation of their constituents. For a foam, a distinction is made between reticulated foam in which one constituent forms a connected network through which the other can freely percolate, or a closed-cell foam in which one constituent is present as trapped in small cells whose walls are formed by the other constituents. A similar distinction is possible for emulsions. In many emulsions, one constituent is present in the form of isolated regions of typically a globular shape, dispersed throughout the other constituent. However, it is also possible each constituent forms a large, connected network. Such a mixture is then called bicontinuous.
Distinguishing between mixture types
Making a distinction between homogeneous and heterogeneous mixtures is a matter of the scale of sampling. On a coarse enough scale, any mixture can be said to be homogeneous, if the entire article is allowed to count as a "sample" of it. On a fine enough scale, any mixture can be said to be heterogeneous, because a sample could be as small as a single molecule. In practical terms, if the property of interest of the mixture is the same regardless of which sample of it is taken for the examination used, the mixture is homogeneous.
Gy's sampling theory quantitatively defines the heterogeneity of a particle as:
$h_i = \frac{(a_i - a_{\text{batch}})\, m_i}{a_{\text{batch}}\, \bar{m}}$,
where $h_i$, $a_i$, $a_{\text{batch}}$, $m_i$, and $\bar{m}$ are respectively: the heterogeneity of the $i$th particle of the population, the mass concentration of the property of interest in the $i$th particle of the population, the mass concentration of the property of interest in the population, the mass of the $i$th particle in the population, and the average mass of a particle in the population.
During sampling of heterogeneous mixtures of particles, the variance of the sampling error is generally non-zero.
Pierre Gy derived, from the Poisson sampling model, the following formula for the variance of the sampling error in the mass concentration in a sample:
$V = \frac{1}{(M_{\text{batch}}\, a_{\text{batch}})^2} \sum_{i=1}^{N} \frac{1 - q_i}{q_i}\, m_i^2\, (a_i - a_{\text{batch}})^2$,
in which $V$ is the variance of the sampling error, $N$ is the number of particles in the population (before the sample was taken), $q_i$ is the probability of including the $i$th particle of the population in the sample (i.e. the first-order inclusion probability of the $i$th particle), $m_i$ is the mass of the $i$th particle of the population and $a_i$ is the mass concentration of the property of interest in the $i$th particle of the population.
The above equation for the variance of the sampling error is an approximation based on a linearization of the mass concentration in a sample.
In the theory of Gy, correct sampling is defined as a sampling scenario in which all particles have the same probability of being included in the sample. This implies that $q_i$ no longer depends on $i$, and can therefore be replaced by the symbol $q$. Gy's equation for the variance of the sampling error becomes:
$V = \frac{1 - q}{q\, (M_{\text{batch}}\, a_{\text{batch}})^2} \sum_{i=1}^{N} m_i^2\, (a_i - a_{\text{batch}})^2$,
where $a_{\text{batch}}$ is the concentration of the property of interest in the population from which the sample is to be drawn and $M_{\text{batch}}$ is the mass of the population from which the sample is to be drawn.
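As a toy numerical illustration of the correct-sampling formula above (all masses, concentrations, and inclusion probabilities below are invented for the example):

    # Toy population: particle masses (kg) and mass concentrations of the
    # property of interest (kg/kg).
    m = [0.2, 0.5, 0.1, 0.8, 0.4]
    a = [0.01, 0.03, 0.00, 0.05, 0.02]

    M_batch = sum(m)
    a_batch = sum(mi * ai for mi, ai in zip(m, a)) / M_batch

    def gy_variance(q):
        # Variance of the sampling error under correct sampling,
        # i.e. every particle included with the same probability q.
        s = sum(mi**2 * (ai - a_batch)**2 for mi, ai in zip(m, a))
        return (1 - q) / q * s / (M_batch * a_batch)**2

    for q in [0.1, 0.5, 0.9]:
        print(f"q={q}:  V={gy_variance(q):.4f}")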
Health effects
Air pollution research shows that the biological and health effects of exposure to mixtures are more potent than the effects of exposure to their individual components.
| Physical sciences | Chemical mixtures: General | null |
286217 | https://en.wikipedia.org/wiki/Flash%20evaporation | Flash evaporation | Flash evaporation (or partial evaporation) is the partial vaporization that occurs when a saturated liquid stream undergoes a reduction in pressure by passing through a throttling valve or other throttling device. This process is one of the simplest unit operations. If the throttling valve or device is located at the entry into a pressure vessel so that the flash evaporation occurs within the vessel, then the vessel is often referred to as a flash drum.
If the saturated liquid is a single-component liquid (for example, propane or liquid ammonia), a part of the liquid immediately "flashes" into vapor. Both the vapor and the residual liquid are cooled to the saturation temperature of the liquid at the reduced pressure. This is often referred to as "auto-refrigeration" and is the basis of most conventional vapor compression refrigeration systems.
If the saturated liquid is a multi-component liquid (for example, a mixture of propane, isobutane and normal butane), the flashed vapor is richer in the more volatile components than is the remaining liquid.
Uncontrolled flash evaporation can result in a boiling liquid expanding vapor explosion (BLEVE).
Flash evaporation of a single-component liquid
The flash evaporation of a single-component liquid is an isenthalpic process and is often referred to as an adiabatic flash. The following equation, derived from a simple heat balance around the throttling valve or device, is used to predict how much of a single-component liquid is vaporized:
$X = \dfrac{H_u^L - H_d^L}{H_d^V - H_d^L}$
where:
$X$ = weight ratio of vaporized liquid to total mass
$H_u^L$ = upstream liquid enthalpy at upstream temperature and pressure, J/kg
$H_d^V$ = flashed vapor enthalpy at downstream pressure and corresponding saturation temperature, J/kg
$H_d^L$ = residual liquid enthalpy at downstream pressure and corresponding saturation temperature, J/kg
If the enthalpy data required for the above equation is unavailable, then the following equation may be used:
$X = \dfrac{c_p\, (T_u - T_d)}{H_v}$
where:
$X$ = weight fraction vaporized
$c_p$ = liquid specific heat at upstream temperature and pressure, J/(kg °C)
$T_u$ = upstream liquid temperature, °C
$T_d$ = liquid saturation temperature corresponding to the downstream pressure, °C
$H_v$ = liquid heat of vaporization at downstream pressure and corresponding saturation temperature, J/kg
Here, the words "upstream" and "downstream" refer to before and after the liquid passes through the throttling valve or device.
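As a worked example of the sensible-heat form of the equation, the sketch below flashes saturated liquid water from 120 °C down to atmospheric pressure (the property values are rounded textbook figures assumed for illustration, not data from this article):

    # Fraction of saturated liquid water vaporized when throttled from
    # 120 °C saturation conditions down to 1 atm (saturation ~100 °C).
    cp = 4_250.0    # liquid specific heat, J/(kg.°C), approximate
    Tu = 120.0      # upstream liquid temperature, °C
    Td = 100.0      # downstream saturation temperature, °C
    Hv = 2.26e6     # heat of vaporization at 1 atm, J/kg, approximate

    X = cp * (Tu - Td) / Hv
    print(f"fraction vaporized X = {X:.3f}")  # about 0.038, i.e. ~4 %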
This type of flash evaporation is used in the desalination of brackish water or ocean water by "Multi-Stage Flash Distillation." The water is heated and then routed into a reduced-pressure flash evaporation "stage" where some of the water flashes into steam. This steam is subsequently condensed into salt-free water. The residual salty liquid from that first stage is introduced into a second flash evaporation stage at a pressure lower than the first stage pressure. More water is flashed into steam which is also subsequently condensed into more salt-free water. This sequential use of multiple flash evaporation stages is continued until the design objectives of the system are met. A large part of the world's installed desalination capacity uses multi-stage flash distillation. Typically such plants have 24 or more sequential stages of flash evaporation.
Equilibrium flash of a multi-component liquid
The equilibrium flash of a multi-component liquid may be visualized as a simple distillation process using a single equilibrium stage. It is very different and more complex than the flash evaporation of a single-component liquid. For a multi-component liquid, calculating the amounts of flashed vapor and residual liquid in equilibrium with each other at a given temperature and pressure requires a trial-and-error iterative solution. Such a calculation is commonly referred to as an equilibrium flash calculation. It involves solving the Rachford-Rice equation:
$\sum_i \dfrac{z_i\, (K_i - 1)}{1 + \beta\, (K_i - 1)} = 0$
where:
zi is the mole fraction of component i in the feed liquid (assumed to be known);
β is the fraction of feed that is vaporised;
Ki is the equilibrium constant of component i.
The equilibrium constants Ki are in general functions of many parameters, though the most important is arguably temperature; they are defined as:
$K_i = \dfrac{y_i}{x_i}$
where:
xi is the mole fraction of component i in liquid phase;
yi is the mole fraction of component i in gas phase.
Once the Rachford-Rice equation has been solved for β, the compositions $x_i$ and $y_i$ can be immediately calculated as:
$x_i = \dfrac{z_i}{1 + \beta\, (K_i - 1)}, \qquad y_i = K_i\, x_i$.
The Rachford-Rice equation can have multiple solutions for β, at most one of which guarantees that all $x_i$ and $y_i$ will be positive. In particular, if there is only one β for which:
$\dfrac{1}{1 - K_{\max}} < \beta < \dfrac{1}{1 - K_{\min}}$
then that β is the solution; if there are multiple such β's, it means that either $K_{\max} < 1$ or $K_{\min} > 1$, indicating respectively that no gas phase can be sustained (and therefore β = 0) or conversely that no liquid phase can exist (and therefore β = 1).
It is possible to use Newton's method for solving the above equation, but there is a risk of converging to the wrong value of β; it is important to initialise the solver to a sensible initial value, such as $(\beta_{\max} + \beta_{\min})/2$ (which is, however, not sufficient: Newton's method makes no guarantees on stability), or, alternatively, to use a bracketing solver such as the bisection method or the Brent method, which are guaranteed to converge but can be slower.
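A minimal bracketing-solver sketch in Python (illustrative only: the feed composition and K-values are invented, and SciPy's brentq is used as the bracketing root finder):

    from scipy.optimize import brentq

    z = [0.5, 0.3, 0.2]   # feed mole fractions (assumed)
    K = [3.0, 1.2, 0.3]   # equilibrium constants (assumed)

    def rachford_rice(beta):
        return sum(zi * (Ki - 1) / (1 + beta * (Ki - 1)) for zi, Ki in zip(z, K))

    # The physically meaningful root lies between the poles of the function:
    eps = 1e-10
    lo = 1 / (1 - max(K)) + eps   # negative when K_max > 1
    hi = 1 / (1 - min(K)) - eps   # positive when K_min < 1
    beta = brentq(rachford_rice, lo, hi)

    x = [zi / (1 + beta * (Ki - 1)) for zi, Ki in zip(z, K)]
    y = [Ki * xi for Ki, xi in zip(K, x)]
    print(f"beta = {beta:.4f}")
    print("x =", x)
    print("y =", y)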
The equilibrium flash of multi-component liquids is very widely utilized in petroleum refineries, petrochemical and chemical plants and natural gas processing plants.
Contrast with spray drying
Spray drying is sometimes seen as a form of flash evaporation. However, although it is a form of liquid evaporation, it is quite different from flash evaporation.
In spray drying, a slurry of very small solids is rapidly dried by suspension in a hot gas. The slurry is first atomized into very small liquid droplets which are then sprayed into a stream of hot dry air. The liquid rapidly evaporates leaving behind dry powder or dry solid granules. The dry powder or solid granules are recovered from the exhaust air by using cyclones, bag filters or electrostatic precipitators.
Natural flash evaporation
Natural flash vaporization or flash deposition may occur during earthquakes resulting in deposition of minerals held in supersaturated solutions, sometimes even valuable ore in the case of auriferous, gold-bearing, waters. This results when blocks of rock are rapidly pulled and pushed away from each other by jog faults.
| Physical sciences | Phase separations | Chemistry |
286245 | https://en.wikipedia.org/wiki/Sodium%20silicate | Sodium silicate | Sodium silicate is a generic name for chemical compounds with the formula Na2xSiyO(2y+x) or (Na2O)x·(SiO2)y, such as sodium metasilicate (Na2SiO3), sodium orthosilicate (Na4SiO4), and sodium pyrosilicate (Na6Si2O7). The anions are often polymeric. These compounds are generally colorless transparent solids or white powders, and soluble in water in various amounts.
Sodium silicate is also the technical and common name for a mixture of such compounds, chiefly the metasilicate, also called waterglass, water glass, or liquid glass. The product has a wide variety of uses, including the formulation of cements, coatings, passive fire protection, textile and lumber processing, manufacture of refractory ceramics, as adhesives, and in the production of silica gel. The commercial product, available in water solution or in solid form, is often greenish or blue owing to the presence of iron-containing impurities.
In industry, the various grades of sodium silicate are characterized by their SiO2:Na2O weight ratio (which can be converted to molar ratio by multiplication with 1.032). The ratio can vary between 1:2 and 3.75:1. Grades with ratio below 2.85:1 are termed alkaline. Those with a higher SiO2:Na2O ratio are described as neutral.
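The weight-to-molar conversion factor quoted above is just the molar-mass ratio of Na2O to SiO2, which a short Python check makes explicit (the 3.22 weight ratio is an arbitrary illustrative grade):

    # Convert a SiO2:Na2O weight ratio to a molar ratio.
    M_SiO2 = 28.086 + 2 * 15.999   # ~60.08 g/mol
    M_Na2O = 2 * 22.990 + 15.999   # ~61.98 g/mol

    factor = M_Na2O / M_SiO2       # ~1.032, the factor quoted in the text
    print(f"conversion factor = {factor:.3f}")

    weight_ratio = 3.22
    print(f"molar ratio = {weight_ratio * factor:.2f}")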
History
Soluble silicates of alkali metals (sodium or potassium) were observed by European alchemists in the 16th century. Giambattista della Porta observed in 1567 that tartari salis (cream of tartar, potassium bitartrate) caused powdered crystallum (quartz) to melt at a lower temperature. Other possible early references to alkali silicates were made by Basil Valentine in 1520, and by Agricola in 1550. Around 1640, Jan Baptist van Helmont reported the formation of alkali silicates as a soluble substance made by melting sand with excess alkali, and observed that the silica could be precipitated quantitatively by adding acid to the solution.
In 1646, Glauber made potassium silicate, which he called liquor silicum, by melting potassium carbonate (obtained by calcining cream of tartar) and sand in a crucible, and keeping it molten until it ceased to bubble (due to the release of carbon dioxide). The mixture was allowed to cool and then was ground to a fine powder. When the powder was exposed to moist air, it gradually formed a viscous liquid, which Glauber called "Oleum oder Liquor Silicum, Arenæ, vel Crystallorum" (i.e., oil or solution of silica, sand or quartz crystal).
However, it was later claimed that the substances prepared by those alchemists were not waterglass as it is understood today. That would have been prepared in 1818 by Johann Nepomuk von Fuchs, by treating silicic acid with an alkali; the result being soluble in water, "but not affected by atmospheric changes".
The terms "water glass" and "soluble glass" were used by Leopold Wolff in 1846, by Émile Kopp in 1857, and by Hermann Krätzer in 1887.
In 1892, Rudolf Von Wagner distinguished soda, potash, double (soda and potash), and fixing (i.e., stabilizing) as types of water glass. The fixing type was "a mixture of silica well saturated with potash water glass and a sodium silicate" used to stabilize inorganic water color pigments on cement work for outdoor signs and murals.
Properties
Sodium silicates are colorless glassy or crystalline solids, or white powders. Except for the most silicon-rich ones, they are readily soluble in water, producing alkaline solutions. When dried, the material can still be rehydrated in water.
Sodium silicates are stable in neutral and alkaline solutions. In acidic solutions, the silicate ions react with hydrogen ions to form silicic acids, which tend to decompose into hydrated silicon dioxide gel. When the water is then driven off by heating, the result is a hard translucent substance called silica gel, widely used as a desiccant. It can withstand temperatures up to 1100 °C.
Production
Solutions of sodium silicates can be produced by treating a mixture of silica (usually as quartz sand), caustic soda, and water, with hot steam in a reactor. The overall reaction is
2x NaOH + y SiO2 → (Na2O)x·(SiO2)y + x H2O
Sodium silicates can also be obtained by dissolving silica (whose melting point is 1713 °C) in molten sodium carbonate (that melts with decomposition at 851 °C):
x SiO2 + Na2CO3 → (Na2O)·(SiO2)x + CO2
The material can also be obtained from sodium sulfate (melting point 884 °C) with carbon as a reducing agent:
2x SiO2 + C + 2 Na2SO4 → 2 (Na2O)·(SiO2)x + 2 SO2 + CO2
In 1990, 4 million tons of alkali metal silicates were produced.
Ferrosilicon
Sodium silicate may be produced as a part of hydrogen production by dissolving ferrosilicon in an aqueous sodium hydroxide (NaOH) solution:
2 NaOH + Si + H2O → Na2SiO3 + 2 H2
Bayer process
Though unprofitable, Na2SiO3 is a byproduct of the Bayer process and is often converted to calcium silicate (Ca2SiO4).
Uses
The main applications of sodium silicates are in detergents, paper industry (as a deinking agent), water treatment, and construction materials.
Adhesives
The adhesive properties of sodium silicate were noted as early as the 1850s and have been widely used at least since the First World War. The largest application of sodium silicate solutions is a cement for producing cardboard. When used as a paper cement, the sodium silicate joint tends to crack within a few years, at which point it no longer holds the paper surfaces cemented together.
Sodium silicate solutions can also be used as a spin-on adhesive layer to bond glass to glass or a silicon dioxide–covered silicon wafer to one another. Sodium silicate glass-to-glass bonding has the advantage that it is a low-temperature bonding technique, as opposed to fusion bonding. It also requires less processing than glass-to-glass anodic bonding, which requires an intermediate layer such as silicon nitride (SiN) to act as a diffusion barrier for sodium ions. The deposition of such a layer requires a low-pressure chemical vapor deposition step. A disadvantage of sodium silicate bonding, however, is that it is very difficult to eliminate air bubbles. This is in part because the technique does not require a vacuum and also does not use field assistance as in anodic bonding. This lack of field assistance can sometimes be beneficial, because field assistance can provide such high attraction between wafers as to bend a thinner wafer and collapse onto nanofluidic cavity or MEMS elements.
Coatings
Sodium silicate may be used for various paints and coatings, such as those used on welding rods. Such coatings can be cured in two ways. One method is to heat a thin layer of sodium silicate into a gel and then into a hard film. To make the coating water-resistant, high temperatures of are needed. The temperature is slowly raised to to dehydrate the film and avoid steaming and blistering. The process must be relatively slow, but infrared lamps may be used at first. In the other method, when high temperatures are not practical, the water resistance may be achieved by chemicals (or esters), such as boric acid, phosphoric acid, sodium fluorosilicate, and aluminium phosphate. Before application, an aqueous solution of sodium silicate is mixed with a curing agent.
It is used in detergent auxiliaries such as complex sodium disilicate and modified sodium disilicate. The detergent granules gain their ruggedness from a coating of silicates.
Water treatment
Sodium silicate is used as an alum coagulant and an iron flocculant in wastewater treatment plants. Sodium silicate binds to colloidal molecules, creating larger aggregates that sink to the bottom of the water column. The microscopic negatively charged particles suspended in water interact with sodium silicate. Their electrical double layer collapses due to the increase of ionic strength caused by the addition of sodium silicate (doubly negatively charged anion accompanied by two sodium cations) and they subsequently aggregate. This process is called coagulation.
Foundries, refractories and pottery
It is used as a binder of the sand when doing sand casting of all common metals. It allows for the rapid production of a strong mold or core by three main methods.
Method 1 requires passing carbon dioxide gas through the mixture of sand and sodium silicate in the sand molding box or core box. The carbon dioxide reacts with the sodium silicate to form solid silica gel and sodium carbonate. This provides adequate strength to remove the now hardened sand shape from the forming tool. Additional strength occurs as any unreacted sodium silicate in the sand shape dehydrates.
Method 2 requires adding an ester (reaction product of an acid and an alcohol) to the mixture of sand and sodium silicate before it is placed into the molding box or core box. As the ester hydrolyzes from the water in the liquid sodium silicate, an acid is released which causes the liquid sodium silicate to gel. Once the gel has formed, it will dehydrate to a glassy phase as a result of syneresis. Commonly used esters include acetate esters of glycerol and ethylene glycol as well as carbonate esters of propylene and ethylene glycol. The higher the water solubility of the ester, the faster the hardening of the sand.
Method 3 requires microwave energy to heat and dehydrate the mixture of sand and sodium silicate in the sand molding box or core box. The forming tools must pass through microwaves for this to work well. Because sodium silicate has a high dielectric constant, it absorbs microwave energy very rapidly. Fully dehydrated sand shapes can be produced within a minute of microwave exposure. This method produces the highest strength of sand shapes bonded with sodium silicate.
Since the sodium silicate does not burn during casting (it can actually melt at pouring temperatures above 1800 °F), it is common to add organic materials to provide for enhanced sand breakdown after casting. The additives include sugar, starch, carbons, wood flour and phenolic resins.
Water glass is a useful binder for solids, such as vermiculite and perlite. When blended with the latter lightweight fraction, water glass can be used to make hard, high-temperature insulation boards used for refractories, passive fire protection, and high-temperature insulations, such as in moulded pipe insulation applications. When mixed with finely divided mineral powders, such as vermiculite dust (which is common scrap from the exfoliation process), one can produce high temperature adhesives. The intumescence disappears in the presence of finely divided mineral dust, whereby the waterglass becomes a mere matrix. Waterglass is inexpensive and abundantly available, which makes its use popular in many refractory applications.
Sodium silicate is used as a deflocculant in casting slips helping reduce viscosity and the need for large amounts of water to liquidize the clay body. It is also used to create a crackle effect in pottery, usually wheel-thrown. A vase or bottle is thrown on the wheel, fairly narrow and with thick walls. Sodium silicate is brushed on a section of the piece. After five minutes, the wall of the piece is stretched outward with a rib or hand. The result is a wrinkled or cracked look.
It is also the main agent in "magic water", which is used when joining clay pieces, especially if the moisture level of the two differs.
Dyes
Sodium silicate solution is used as a fixative for hand dyeing with reactive dyes that require a high pH to react with the textile fiber. After the dye is applied to a cellulose-based fabric, such as cotton or rayon, or onto silk, it is allowed to dry, after which the sodium silicate is painted on to the dyed fabric, covered with plastic to retain moisture, and left to react for an hour at room temperature.
Repair work
Sodium silicate is used, along with magnesium silicate, in muffler repair and fitting paste. Magnesium silicate can be mixed with a solution of sodium silicate to form a thick paste that is easy to apply. When the exhaust system of an internal combustion engine heats up to its operating temperature, the heat drives out all of the excess water from the paste. The silicate compounds that are left over have glass-like properties, making a temporary, brittle repair that can be reinforced with glass fibre.
Sodium silicate can be used to fill gaps in the head gasket of an engine. This is especially useful for aluminium alloy cylinder heads, which are sensitive to thermally induced surface deflection. Sodium silicate is added to the cooling system through the radiator and allowed to circulate. When the sodium silicate reaches its "conversion" temperature of , it loses water molecules and forms a glass seal with a re-melting temperature above . This repair can last two years or longer, and symptoms disappear instantly. However, this repair works only when the sodium silicate reaches its "conversion" temperature. Also, sodium silicate (glass particulate) contamination of lubricants is detrimental to their function, and contamination of engine oil is a serious possibility in situations in which a coolant-to-oil leak is present.
Sodium silicate solution is used to inexpensively, quickly, and permanently disable automobile engines. Running an engine with half a U.S. gallon (or about two liters) of a sodium silicate solution instead of motor oil causes the solution to precipitate, catastrophically damaging the engine's bearings and pistons within a few minutes. In the United States, this procedure was used to comply with requirements of the Car Allowance Rebate System (CARS) program.
Construction
A mixture of sodium silicate and sawdust has been used in between the double skin of certain safes. This not only makes them more fire resistant, but also makes cutting them open with an oxyacetylene torch extremely difficult due to the smoke emitted.
Sodium silicate is frequently used in drilling fluids to stabilize and avoid the collapse of borehole walls. It is particularly useful when drill holes pass through argillaceous formations containing swelling clay minerals such as smectite or montmorillonite.
Concrete treated with a sodium silicate solution helps to reduce porosity in most masonry products such as concrete, stucco, and plasters. This effect aids in reducing water penetration, but has no known effect on reducing water vapor transmission and emission. A chemical reaction occurs with the excess Ca(OH)2 (portlandite) present in the concrete that permanently binds the silicates with the surface, making them far more durable and water repellent. This treatment generally is applied only after the initial cure has taken place (approximately seven days depending on conditions). These coatings are known as silicate mineral paint. An example of the reaction of sodium silicate with the calcium hydroxide found in concrete to form calcium silicate hydrate (CSH) gel, the main product in hydrated Portland cement, follows.
Na2SiO3 + Ca(OH)2 + H2O → CaSiO3·H2O + 2 NaOH
Crystal gardens
When crystals of a number of metallic salts are dropped into a solution of water glass, simple or branching stalagmites of colored metal silicates are formed. This phenomenon has been used by manufacturers of toys and chemistry sets to provide instructive enjoyment to many generations of children from the early 20th century until the present. An early mention of crystals of metallic salts forming a "chemical garden" in sodium silicate is found in the 1946 Modern Mechanix magazine. Metal salts used included the sulfates and/or chlorides of copper, cobalt, iron, nickel, and manganese.
Sealants
Sodium silicate with additives was injected into the ground to harden it and thereby to prevent further leakage of highly radioactive water from the Fukushima Daiichi nuclear power plant in Japan in April, 2011. The residual heat carried by the water used for cooling the damaged reactors accelerated the setting of the injected mixture.
On June 3, 1958, the USS Nautilus, the world's first nuclear submarine, visited Everett and Seattle. In Seattle, crewmen dressed in civilian clothing were sent in to secretly buy 140 quarts (160 liters) of an automotive product containing sodium silicate (originally identified as Stop Leak) to repair a leaking condenser system. The Nautilus was en route to the North Pole on a top secret mission to cross the North Pole submerged.
Firearms
A historical use of the adhesive properties of sodium silicates is the production of paper cartridges for black powder revolvers produced by Colt's Manufacturing Company between 1851 and 1873, especially during the American Civil War. Sodium silicate was used to seal combustible nitrated paper together to form a conical paper cartridge to hold the black powder, as well as to cement the lead ball or conical bullet into the open end of the paper cartridge. Such sodium silicate cemented paper cartridges were inserted into the cylinders of revolvers, thereby speeding the reloading of cap-and-ball black powder revolvers. This use largely ended with the introduction of Colt revolvers employing brass-cased cartridges starting in 1873. Similarly, sodium silicate was also used to cement the top wad into brass shotgun shells, thereby eliminating any need for a crimp at the top of the brass shotgun shell to hold a shotgun shell together. Reloading brass shotgun shells was widely practiced by self-reliant American farmers during the 1870s, using the same waterglass material that was also used to preserve eggs. The cementing of the top wad on a shotgun shell consisted of applying from three to five drops of waterglass on the top wad to secure it to the brass hull. Brass hulls for shotgun shells were superseded by paper hulls starting around 1877. The newer paper-hulled shotgun shells used a roll crimp in place of a waterglass-cemented joint to hold the top wad in the shell. However, whereas brass shotshells with top wads cemented with waterglass could be reloaded nearly indefinitely (given powder, wad, and shot, of course), the paper hulls that replaced the brass hulls could be reloaded only a few times.
Food and medicine
Sodium silicate and other silicates are the primary components in "instant" wrinkle remover creams, which temporarily tighten the skin to minimize the appearance of wrinkles and under-eye bags. These creams, when applied as a thin film and allowed to dry for a few minutes, can present dramatic results. The effect is not permanent, lasting from a few minutes up to a couple of hours. It works like water cement: once the muscles start to move, the film cracks and leaves white residue on the skin.
Waterglass has been used as an egg preservative with large success, primarily when refrigeration is not available. Fresh-laid eggs are immersed in a solution of sodium silicate (waterglass). After being immersed in the solution, they are removed and allowed to dry. A permanent airtight coating remains on the eggs. If they are then stored in an appropriate environment, the majority of bacteria which would otherwise cause them to spoil are kept out and their moisture is kept in. According to the cited source, treated eggs can be kept fresh using this method for up to five months. When boiling eggs preserved this way, the shell is no longer permeable to air, and the egg will tend to crack unless a hole in the shell is made (e.g., with a pin) to allow steam to escape.
Sodium silicate's flocculant properties are also used to clarify wine and beer by precipitating colloidal particles. As a clearing agent, though, sodium silicate is sometimes confused with isinglass which is prepared from collagen extracted from the dried swim bladders of sturgeon and other fishes. Eggs can be preserved in a bucket of waterglass gel, and their shells are sometimes also used (baked and crushed) to clear wine.
Sodium silicate gel is also used as a substrate for algal growth in aquaculture hatcheries.
| Physical sciences | Salts | null |
286260 | https://en.wikipedia.org/wiki/Precipitation | Precipitation | In meteorology, precipitation is any product of the condensation of atmospheric water vapor that falls from clouds due to gravitational pull. The main forms of precipitation include drizzle, rain, sleet, snow, ice pellets, graupel and hail. Precipitation occurs when a portion of the atmosphere becomes saturated with water vapor (reaching 100% relative humidity), so that the water condenses and "precipitates" or falls. Thus, fog and mist are not precipitation; their water vapor does not condense sufficiently to precipitate, so fog and mist do not fall. (Such a non-precipitating combination is a colloid.) Two processes, possibly acting together, can lead to air becoming saturated with water vapor: cooling the air or adding water vapor to the air. Precipitation forms as smaller droplets coalesce via collision with other rain drops or ice crystals within a cloud. Short, intense periods of rain in scattered locations are called showers.
Moisture that is lifted or otherwise forced to rise over a layer of sub-freezing air at the surface may be condensed by the low temperature into clouds and rain. This process is typically active when freezing rain occurs. A stationary front is often present near the area of freezing rain and serves as the focus for forcing moist air to rise. Provided there is necessary and sufficient atmospheric moisture content, the moisture within the rising air will condense into clouds, namely nimbostratus and cumulonimbus if significant precipitation is involved. Eventually, the cloud droplets will grow large enough to form raindrops and descend toward the Earth where they will freeze on contact with exposed objects. Where relatively warm water bodies are present, for example due to water evaporation from lakes, lake-effect snowfall becomes a concern downwind of the warm lakes within the cold cyclonic flow around the backside of extratropical cyclones. Lake-effect snowfall can be locally heavy. Thundersnow is possible within a cyclone's comma head and within lake effect precipitation bands. In mountainous areas, heavy precipitation is possible where upslope flow is maximized within windward sides of the terrain at elevation. On the leeward side of mountains, desert climates can exist due to the dry air caused by compressional heating. Most precipitation occurs within the tropics and is caused by convection.
Precipitation is a major component of the water cycle, and is responsible for depositing most of the fresh water on the planet. Approximately of water falls as precipitation each year: over oceans and over land. Given the Earth's surface area, that means the globally averaged annual precipitation is , but over land it is only . Climate classification systems such as the Köppen climate classification system use average annual rainfall to help differentiate between differing climate regimes. Global warming is already causing changes to weather, increasing precipitation in some geographies, and reducing it in others, resulting in additional extreme weather.
Precipitation may occur on other celestial bodies. Saturn's largest satellite, Titan, hosts methane precipitation as a slow-falling drizzle, which has been observed as rain puddles at its equator and polar regions.
Types
Mechanisms of producing precipitation include convective, stratiform, and orographic rainfall. Convective processes involve strong vertical motions that can cause the overturning of the atmosphere in that location within an hour and cause heavy precipitation, while stratiform processes involve weaker upward motions and less intense precipitation. Precipitation can be divided into three categories, based on whether it falls as liquid water, liquid water that freezes on contact with the surface, or ice. Mixtures of different types of precipitation, including types in different categories, can fall simultaneously. Liquid forms of precipitation include rain and drizzle. Rain or drizzle that freezes on contact within a subfreezing air mass is called "freezing rain" or "freezing drizzle". Frozen forms of precipitation include snow, ice needles, ice pellets, hail, and graupel.
Measurement
Liquid precipitation
Rainfall (including drizzle and rain) is usually measured using a rain gauge and expressed in units of millimeters (mm) of height or depth. Equivalently, it can be expressed as a physical quantity with dimension of volume of water per collection area, in units of liters per square meter (L/m2); as 1 L = 1 dm3 = 1 mm·m2, the units of area (m2) cancel out, resulting in simply "mm". This also corresponds to an area density expressed in kg/m2, if assuming that 1 liter of water has a mass of 1 kg (water density), which is acceptable for most practical purposes. The corresponding English unit used is usually inches. In Australia before metrication, rainfall was also measured in "points", each of which was defined as one-hundredth of an inch.
Solid precipitation
A snow gauge is usually used to measure the amount of solid precipitation. Snowfall is usually measured in centimeters by letting snow fall into a container and then measuring the height. The snow can then optionally be melted to obtain a water equivalent measurement in millimeters like for liquid precipitation. The relationship between snow height and water equivalent depends on the water content of the snow; the water equivalent can thus only provide a rough estimate of snow depth. Other forms of solid precipitation, such as snow pellets and hail or even sleet (rain and snow mixed), can also be melted and measured as their respective water equivalents, usually expressed in millimeters as for liquid precipitation.
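Because 1 mm of rainfall equals 1 L of water per square meter, depth readings convert directly to collected volume, and snow depth converts to water equivalent through the snow's density. A short illustrative Python sketch (the rain depth, roof area, and snow density are assumed values; fresh-snow density varies widely):

    # 1 mm of rain = 1 L of water per m2 of collection area.
    rain_depth_mm = 12.5
    roof_area_m2 = 50.0
    print(f"collected volume = {rain_depth_mm * roof_area_m2:.0f} L")

    # Snow water equivalent: depth times (snow density / water density).
    snow_depth_cm = 30.0
    snow_density = 100.0   # kg/m3, assumed; liquid water is 1000 kg/m3
    swe_mm = snow_depth_cm * 10 * snow_density / 1000.0
    print(f"water equivalent = {swe_mm:.0f} mm")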
Air becomes saturated
Cooling air to its dew point
The dew point is the temperature to which a parcel of air must be cooled in order to become saturated, and (unless super-saturation occurs) condenses to water. Water vapor normally begins to condense on condensation nuclei such as dust, ice, and salt in order to form clouds. The cloud condensation nuclei concentration will determine the cloud microphysics. An elevated portion of a frontal zone forces broad areas of lift, which form cloud decks such as altostratus or cirrostratus. Stratus is a stable cloud deck which tends to form when a cool, stable air mass is trapped underneath a warm air mass. It can also form due to the lifting of advection fog during breezy conditions.
There are four main mechanisms for cooling the air to its dew point: adiabatic cooling, conductive cooling, radiational cooling, and evaporative cooling. Adiabatic cooling occurs when air rises and expands. The air can rise due to convection, large-scale atmospheric motions, or a physical barrier such as a mountain (orographic lift). Conductive cooling occurs when the air comes into contact with a colder surface, usually by being blown from one surface to another, for example from a liquid water surface to colder land. Radiational cooling occurs due to the emission of infrared radiation, either by the air or by the surface underneath. Evaporative cooling occurs when moisture is added to the air through evaporation, which forces the air temperature to cool to its wet-bulb temperature, or until it reaches saturation.
Adding moisture to the air
The main ways water vapor is added to the air are: wind convergence into areas of upward motion, precipitation or virga falling from above, daytime heating evaporating water from the surface of oceans, water bodies or wet land, transpiration from plants, cool or dry air moving over warmer water, and lifting air over mountains.
Forms of precipitation
Raindrops
Coalescence occurs when water droplets fuse to create larger water droplets, or when water droplets freeze onto an ice crystal, which is known as the Bergeron process. The fall rate of very small droplets is negligible, hence clouds do not fall out of the sky; precipitation will only occur when these coalesce into larger drops. Droplets of different sizes have different terminal velocities, which causes droplets to collide and produce larger droplets; turbulence enhances this collision process. As these larger water droplets descend, coalescence continues, so that drops become heavy enough to overcome air resistance and fall as rain.
Raindrops have sizes up to a maximum mean diameter, above which they tend to break up. Smaller drops are called cloud droplets, and their shape is spherical. As a raindrop increases in size, its shape becomes more oblate, with its largest cross-section facing the oncoming airflow. Contrary to the cartoon pictures of raindrops, their shape does not resemble a teardrop. Intensity and duration of rainfall are usually inversely related, i.e., high-intensity storms are likely to be of short duration and low-intensity storms can have a long duration. Raindrops associated with melting hail tend to be larger than other raindrops. The METAR code for rain is RA, while the coding for rain showers is SHRA.
Ice pellets
Ice pellets or sleet are a form of precipitation consisting of small, translucent balls of ice. Ice pellets are usually (but not always) smaller than hailstones. They often bounce when they hit the ground, and generally do not freeze into a solid mass unless mixed with freezing rain. The METAR code for ice pellets is PL.
Ice pellets form when a layer of above-freezing air exists with sub-freezing air both above and below. This causes the partial or complete melting of any snowflakes falling through the warm layer. As they fall back into the sub-freezing layer closer to the surface, they re-freeze into ice pellets. However, if the sub-freezing layer beneath the warm layer is too small, the precipitation will not have time to re-freeze, and freezing rain will be the result at the surface. A temperature profile showing a warm layer above the ground is most likely to be found in advance of a warm front during the cold season.
Hail
Like other precipitation, hail forms in storm clouds when supercooled water droplets freeze on contact with condensation nuclei, such as dust or dirt. The storm's updraft blows the hailstones to the upper part of the cloud. The updraft dissipates and the hailstones fall down, back into the updraft, and are lifted again. Hail has a diameter of or more. Within METAR code, GR is used to indicate larger hail, of a diameter of at least . GR is derived from the French word grêle. Smaller-sized hail, as well as snow pellets, use the coding of GS, which is short for the French word grésil. Stones just larger than golf ball-sized are one of the most frequently reported hail sizes. Hailstones can grow to and weigh more than . In large hailstones, latent heat released by further freezing may melt the outer shell of the hailstone. The hailstone then may undergo 'wet growth', where the liquid outer shell collects other smaller hailstones. The hailstone gains an ice layer and grows increasingly larger with each ascent. Once a hailstone becomes too heavy to be supported by the storm's updraft, it falls from the cloud.
Snowflakes
Snow crystals form when tiny supercooled cloud droplets (about 10 μm in diameter) freeze. Once a droplet has frozen, it grows in the supersaturated environment. Because water droplets are more numerous than the ice crystals, the crystals are able to grow to hundreds of micrometers in size at the expense of the water droplets, whose vapor is depleted and which therefore evaporate; this is known as the Wegener–Bergeron–Findeisen process. These large crystals are an efficient source of precipitation, since they fall through the atmosphere due to their mass, and may collide and stick together in clusters, or aggregates. These aggregates are snowflakes, and are usually the type of ice particle that falls to the ground. Guinness World Records lists the world's largest snowflakes as those of January 1887 at Fort Keogh, Montana; allegedly one measured wide.
Although the ice is clear, scattering of light by the crystal facets and hollows/imperfections means that the crystals often appear white in color due to diffuse reflection of the whole spectrum of light by the small ice particles. The shape of the snowflake is determined broadly by the temperature and humidity at which it is formed. Rarely, at a temperature of around , snowflakes can form in threefold symmetry—triangular snowflakes. The most common snow particles are visibly irregular, although near-perfect snowflakes may be more common in pictures because they are more visually appealing. No two snowflakes are alike, as they grow at different rates and in different patterns depending on the changing temperature and humidity within the atmosphere through which they fall on their way to the ground. The METAR code for snow is SN, while snow showers are coded SHSN.
Diamond dust
Diamond dust, also known as ice needles or ice crystals, forms at temperatures approaching due to air with slightly higher moisture from aloft mixing with colder, surface-based air. They are made of simple ice crystals, hexagonal in shape. The METAR identifier for diamond dust within international hourly weather reports is IC.
Occult deposition
Occult deposition occurs when mist or air that is highly saturated with water vapour interacts with the leaves of trees or shrubs it passes over.
Causes
Frontal activity
Stratiform or dynamic precipitation occurs as a consequence of slow ascent of air in synoptic systems (on the order of cm/s), such as over surface cold fronts, and over and ahead of warm fronts. Similar ascent is seen around tropical cyclones outside of the eyewall, and in comma-head precipitation patterns around mid-latitude cyclones. A wide variety of weather can be found along an occluded front, with thunderstorms possible, but usually their passage is associated with a drying of the air mass. Occluded fronts usually form around mature low-pressure areas. Precipitation may occur on celestial bodies other than Earth. When it gets cold, Mars has precipitation that most likely takes the form of ice needles, rather than rain or snow.
Convection
Convective rain, or showery precipitation, occurs from convective clouds, e.g. cumulonimbus or cumulus congestus. It falls as showers with rapidly changing intensity. Convective precipitation falls over a certain area for a relatively short time, as convective clouds have limited horizontal extent. Most precipitation in the tropics appears to be convective; however, it has been suggested that stratiform precipitation also occurs. Graupel and hail indicate convection. In mid-latitudes, convective precipitation is intermittent and often associated with baroclinic boundaries such as cold fronts, squall lines, and warm fronts.
Orographic effects
Orographic precipitation occurs on the windward (upwind) side of mountains and is caused by the rising air motion of a large-scale flow of moist air across the mountain ridge, resulting in adiabatic cooling and condensation. In mountainous parts of the world subjected to relatively consistent winds (for example, the trade winds), a more moist climate usually prevails on the windward side of a mountain than on the leeward or downwind side. Moisture is removed by orographic lift, leaving drier air (see katabatic wind) on the descending and generally warming, leeward side where a rain shadow is observed.
In Hawaii, Mount Waiʻaleʻale, on the island of Kauai, is notable for its extreme rainfall, as it has the second-highest average annual rainfall on Earth, with . Storm systems affect the state with heavy rains between October and March. Local climates vary considerably on each island due to their topography, divisible into windward (Koolau) and leeward (Kona) regions based upon location relative to the higher mountains. Windward sides face the east to northeast trade winds and receive much more rainfall; leeward sides are drier and sunnier, with less rain and less cloud cover.
In South America, the Andes mountain range blocks Pacific moisture that arrives in that continent, resulting in a desertlike climate just downwind across western Argentina. The Sierra Nevada range creates the same effect in North America forming the Great Basin and Mojave Deserts. Similarly, in Asia, the Himalaya mountains create an obstacle to monsoons which leads to extremely high precipitation on the southern side and lower precipitation levels on the northern side.
Snow
Extratropical cyclones can bring cold and dangerous conditions with heavy rain and snow with winds exceeding , (sometimes referred to as windstorms in Europe). The band of precipitation that is associated with their warm front is often extensive, forced by weak upward vertical motion of air over the frontal boundary which condenses as it cools and produces precipitation within an elongated band, which is wide and stratiform, meaning falling out of nimbostratus clouds.
Southwest of extratropical cyclones, curved cyclonic flow bringing cold air across the relatively warm water bodies can lead to narrow lake-effect snow bands. Those bands bring strong localized snowfall which can be understood as follows: Large water bodies such as lakes efficiently store heat that results in significant temperature differences (larger than 13 °C or 23 °F) between the water surface and the air above. Because of this temperature difference, warmth and moisture are transported upward, condensing into vertically oriented clouds which produce snow showers. The temperature decrease with height and cloud depth are directly affected by both the water temperature and the large-scale environment. The stronger the temperature decrease with height, the deeper the clouds get, and the greater the precipitation rate becomes.
In mountainous areas, heavy snowfall accumulates when air is forced to ascend the mountains and squeeze out precipitation along their windward slopes, which in cold conditions, falls in the form of snow. Because of the ruggedness of terrain, forecasting the location of heavy snowfall remains a significant challenge.
Within the tropics
The wet, or rainy, season is the time of year, covering one or more months, when most of the average annual rainfall in a region falls. The term green season is also sometimes used as a euphemism by tourist authorities. Areas with wet seasons are dispersed across portions of the tropics and subtropics. Savanna climates and areas with monsoon regimes have wet summers and dry winters. Tropical rainforests technically do not have dry or wet seasons, since their rainfall is equally distributed through the year. Some areas with pronounced rainy seasons will see a break in rainfall mid-season when the Intertropical Convergence Zone or monsoon trough move poleward of their location during the middle of the warm season. When the wet season occurs during the warm season, or summer, rain falls mainly during the late afternoon and early evening hours. The wet season is a time when air quality improves, freshwater quality improves, and vegetation grows significantly. Soil nutrients diminish and erosion increases. Animals have adaptation and survival strategies for the wetter regime. The previous dry season leads to food shortages into the wet season, as the crops have yet to mature. Developing countries have noted that their populations show seasonal weight fluctuations due to food shortages seen before the first harvest, which occurs late in the wet season.
Tropical cyclones, a source of very heavy rainfall, consist of large air masses several hundred miles across with low pressure at the centre and with winds blowing inward towards the centre in either a clockwise direction (southern hemisphere) or counterclockwise (northern hemisphere). Although cyclones can take an enormous toll in lives and personal property, they may be important factors in the precipitation regimes of places they impact, as they may bring much-needed precipitation to otherwise dry regions. Areas in their path can receive a year's worth of rainfall from a tropical cyclone passage.
Large-scale geographical distribution
On the large scale, the highest precipitation amounts outside topography fall in the tropics, closely tied to the Intertropical Convergence Zone, itself the ascending branch of the Hadley cell. Mountainous locales near the equator in Colombia are amongst the wettest places on Earth. North and south of this are regions of descending air that form subtropical ridges where precipitation is low; the land surface underneath these ridges is usually arid, and these regions make up most of the Earth's deserts. An exception to this rule is in Hawaii, where upslope flow due to the trade winds leads to one of the wettest locations on Earth.
Measurement
The standard way of measuring rainfall or snowfall is the standard rain gauge, which can be found in plastic and metal varieties. The inner cylinder is filled by of rain, with overflow flowing into the outer cylinder. Plastic gauges have markings on the inner cylinder down to resolution, while metal gauges require use of a stick designed with the appropriate markings. After the inner cylinder is filled, the amount inside is recorded and discarded; the cylinder is then refilled from the fluid remaining in the outer cylinder, with each amount added to the overall total, until the outer cylinder is empty. These gauges are used in the winter by removing the funnel and inner cylinder and allowing snow and freezing rain to collect inside the outer cylinder. Some add anti-freeze to their gauge so they do not have to melt the snow or ice that falls into the gauge. Once the snowfall/ice is finished accumulating, or as is approached, one can either bring it inside to melt, or use lukewarm water to fill the inner cylinder with in order to melt the frozen precipitation in the outer cylinder, keeping track of the warm fluid added, which is subsequently subtracted from the overall total once all the ice/snow is melted.
Once a precipitation measurement is made, it can be submitted through the Internet to one of the various networks that exist across the United States and elsewhere, such as CoCoRAHS or GLOBE. If a network is not available in the area where one lives, the nearest local weather office will likely be interested in the measurement.
Hydrometeor definition
A concept used in precipitation measurement is the hydrometeor. Any particulates of liquid or solid water in the atmosphere are known as hydrometeors. Formations due to condensation, such as clouds, haze, fog, and mist, are composed of hydrometeors. All precipitation types are made up of hydrometeors by definition, including virga, which is precipitation which evaporates before reaching the ground. Particles blown from the Earth's surface by wind, such as blowing snow and blowing sea spray, are also hydrometeors, as are hail and snow.
Satellite estimates
Although surface precipitation gauges are considered the standard for measuring precipitation, there are many areas in which their use is not feasible. This includes the vast expanses of ocean and remote land areas. In other cases, social, technical or administrative issues prevent the dissemination of gauge observations. As a result, the modern global record of precipitation largely depends on satellite observations.
Satellite sensors now in practical use for precipitation fall into two categories. Thermal infrared (IR) sensors record a channel around 11 micron wavelength and primarily give information about cloud tops. Due to the typical structure of the atmosphere, cloud-top temperatures are approximately inversely related to cloud-top heights, meaning colder clouds almost always occur at higher altitudes. Further, cloud tops with a lot of small-scale variation are likely to be more vigorous than smooth-topped clouds. Various mathematical schemes, or algorithms, use these and other properties to estimate precipitation from the IR data.
Additional sensor channels and products have been demonstrated to provide additional useful information including visible channels, additional IR channels, water vapor channels and atmospheric sounding retrievals. However, most precipitation data sets in current use do not employ these data sources.
Return period
The likelihood or probability of an event with a specified intensity and duration is called the return period or frequency. The intensity of a storm can be predicted for any return period and storm duration, from charts based on historical data for the location. The term 1 in 10 year storm describes a rainfall event which is rare and is only likely to occur once every 10 years, so it has a 10 percent likelihood in any given year. The rainfall will be greater and the flooding will be worse than the worst storm expected in any single year. The term 1 in 100 year storm describes a rainfall event which is extremely rare and which will occur with a likelihood of only once in a century, so it has a 1 percent likelihood in any given year. The rainfall will be extreme and the flooding worse than that of a 1 in 10 year event. As with all probability events, it is possible, though unlikely, to have two "1 in 100 year storms" in a single year.
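Under the usual simplifying assumption that years are independent and a T-year event has probability 1/T of occurring in any given year, these statements can be checked directly; the Poisson model in the last line is one common way to estimate the chance of two such storms in the same year:

import math

def prob_at_least_one(T_years, horizon_years):
    # P(at least one event) = 1 - P(no event in each of n independent years)
    p = 1.0 / T_years
    return 1.0 - (1.0 - p) ** horizon_years

print(round(prob_at_least_one(100, 1), 2))   # 0.01: 1 percent in any given year
print(round(prob_at_least_one(100, 30), 3))  # ~0.26 over a 30-year span

lam = 1.0 / 100.0                            # expected 100-year events per year
print(1.0 - math.exp(-lam) * (1.0 + lam))    # ~5e-05: two or more in one year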
Uneven pattern of precipitation
A significant portion of the annual precipitation in any particular place falls on only a few days, typically about 50% during the 12 days with the most precipitation (the underlying analysis considered no weather stations in Africa or South America).
Role in Köppen climate classification
Rain forests are characterized by high rainfall, with definitions setting minimum normal annual rainfall between . A tropical savanna is a grassland biome located in semi-arid to semi-humid climate regions of subtropical and tropical latitudes, with rainfall between a year. Savannas are widespread in Africa, and are also found in India, the northern parts of South America, Malaysia, and Australia. The humid subtropical climate zone is where winter rainfall (and sometimes snowfall) is associated with large storms that the westerlies steer from west to east. Most summer rainfall occurs during thunderstorms and from occasional tropical cyclones. Humid subtropical climates lie on the east sides of continents, roughly between latitudes 20° and 40° from the equator.
An oceanic (or maritime) climate is typically found along the west coasts at the middle latitudes of all the world's continents, bordering cool oceans, as well as southeastern Australia, and is accompanied by plentiful precipitation year-round. The Mediterranean climate regime resembles the climate of the lands in the Mediterranean Basin, parts of western North America, parts of western and southern Australia, in southwestern South Africa and in parts of central Chile. The climate is characterized by hot, dry summers and cool, wet winters. A steppe is a dry grassland. Subarctic climates are cold with continuous permafrost and little precipitation.
Effect on agriculture
Precipitation, especially rain, has a dramatic effect on agriculture. All plants need at least some water to survive; therefore rain, being the most effective means of watering, is important to agriculture. While a regular rain pattern is usually vital to healthy plants, too much or too little rainfall can be harmful, even devastating, to crops. Drought can kill crops and increase erosion, while overly wet weather can cause harmful fungus growth. Plants need varying amounts of rainfall to survive. For example, certain cacti require small amounts of water, while tropical plants may need up to hundreds of inches of rain per year to survive.
In areas with wet and dry seasons, soil nutrients diminish and erosion increases during the wet season, with the consequences for animals, food supply, and seasonal weight fluctuations described above.
Climate change
Increasing temperatures tend to increase evaporation, which leads to more precipitation. Precipitation has generally increased over land north of 30°N from 1900 to 2005 but has declined over the tropics since the 1970s. Globally there has been no statistically significant overall trend in precipitation over the past century, although trends have varied widely by region and over time. In 2018, a study assessing changes in precipitation across spatial scales using a high-resolution global precipitation dataset spanning more than 33 years concluded that "While there are regional trends, there is no evidence of increase in precipitation at the global scale in response to the observed global warming."
Each region of the world is going to have changes in precipitation due to their unique conditions. Eastern portions of North and South America, northern Europe, and northern and central Asia have become wetter. The Sahel, the Mediterranean, southern Africa and parts of southern Asia have become drier. There has been an increase in the number of heavy precipitation events over many areas during the past century, as well as an increase since the 1970s in the prevalence of droughts—especially in the tropics and subtropics. Changes in precipitation and evaporation over the oceans are suggested by the decreased salinity of mid- and high-latitude waters (implying more precipitation), along with increased salinity in lower latitudes (implying less precipitation, more evaporation, or both). Over the contiguous United States, total annual precipitation increased at an average rate of 6.1% per century since 1900, with the greatest increases within the East North Central climate region (11.6% per century) and the South (11.1%). Hawaii was the only region to show a decrease (−9.25%).
Changes due to urban heat island
The urban heat island warms cities above surrounding suburbs and rural areas. This extra heat leads to greater upward motion, which can induce additional shower and thunderstorm activity. Rainfall rates downwind of cities are increased between 48% and 116%. Partly as a result of this warming, monthly rainfall is about 28% greater downwind of cities than upwind. Some cities induce a total precipitation increase of 51%.
Forecasting
The Quantitative Precipitation Forecast (abbreviated QPF) is the expected amount of liquid precipitation accumulated over a specified time over a specified area. A QPF will be specified when a measurable precipitation type reaching a minimum threshold is forecast for any hour during a QPF valid period. Precipitation forecasts tend to be bound by synoptic hours such as 0000, 0600, 1200, and 1800 GMT. Terrain is considered in QPFs by use of topography or based upon climatological precipitation patterns from observations with fine detail. Starting in the mid to late 1990s, QPFs were used within hydrologic forecast models to simulate impact to rivers throughout the United States. Forecast models show significant sensitivity to humidity levels within the planetary boundary layer, or in the lowest levels of the atmosphere, which decreases with height. QPFs can be generated on a quantitative basis (forecasting amounts) or a qualitative basis (forecasting the probability of a specific amount). Radar imagery forecasting techniques show higher skill than model forecasts within six to seven hours of the time of the radar image. The forecasts can be verified through the use of rain gauge measurements, weather radar estimates, or a combination of both. Various skill scores can be determined to measure the value of the rainfall forecast.
| Physical sciences | Precipitation | null |
286262 | https://en.wikipedia.org/wiki/Precipitation%20%28chemistry%29 | Precipitation (chemistry) | In an aqueous solution, precipitation is the "sedimentation of a solid material (a precipitate) from a liquid solution". The solid formed is called the precipitate. In case of an inorganic chemical reaction leading to precipitation, the chemical reagent causing the solid to form is called the precipitant.
The clear liquid remaining above the precipitated or the centrifuged solid phase is also called the supernate or supernatant.
The notion of precipitation can also be extended to other domains of chemistry (organic chemistry and biochemistry) and even be applied to the solid phases (e.g. metallurgy and alloys) when solid impurities segregate from a solid phase.
Supersaturation
The precipitation of a compound may occur when its concentration exceeds its solubility. This can be due to temperature changes, solvent evaporation, or the mixing of solvents. Precipitation occurs more rapidly from a strongly supersaturated solution.
The formation of a precipitate can be caused by a chemical reaction. When a barium chloride solution reacts with sulphuric acid, a white precipitate of barium sulphate is formed. When a potassium iodide solution reacts with a lead(II) nitrate solution, a yellow precipitate of lead(II) iodide is formed.
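Written as balanced equations (a standard rendering of the two examples above), these reactions are:

BaCl2(aq) + H2SO4(aq) → BaSO4(s) + 2 HCl(aq)

2 KI(aq) + Pb(NO3)2(aq) → PbI2(s) + 2 KNO3(aq)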
Inorganic chemistry
Precipitate formation is useful in the detection of the type of cation in a salt. To do this, an alkali first reacts with the unknown salt to produce a precipitate that is the hydroxide of the unknown salt. To identify the cation, the color of the precipitate and its solubility in excess are noted. Similar processes are often used in sequence – for example, a barium nitrate solution will react with sulfate ions to form a solid barium sulfate precipitate, indicating that it is likely that sulfate ions are present.
A common example of precipitation from aqueous solution is that of silver chloride. When silver nitrate (AgNO3) is added to a solution of potassium chloride (KCl), the precipitation of a white solid (AgCl) is observed.
The ionic equation expresses this reaction in terms of the dissociated ions present in aqueous solution.
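In full and net ionic form (standard chemistry, supplied here since the equations themselves are not shown in the text):

AgNO3(aq) + KCl(aq) → AgCl(s) + KNO3(aq)

Ag+(aq) + Cl−(aq) → AgCl(s)

The K+ and NO3− spectator ions are omitted from the net ionic equation.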
Reductive precipitation
The Walden reductor is an illustration of a reduction reaction directly accompanied by the precipitation of a less soluble compound because of its lower chemical valence:
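One illustrative equation (an assumption here, since the original equation is not shown) is the displacement reaction by which the reductor's silver is itself deposited from solution:

Cu(s) + 2 AgNO3(aq) → Cu(NO3)2(aq) + 2 Ag(s)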
The Walden reductor made of tiny silver crystals obtained by the immersion of a copper wire into a solution of silver nitrate is used to reduce to their lower valence any metallic ion located above the silver couple in the redox potential scale.
Colloidal suspensions
Without sufficient attraction forces (e.g., Van der Waals force) to aggregate the solid particles together and to remove them from solution by gravity (settling), they remain in suspension and form colloids. Sedimentation can be accelerated by high speed centrifugation. The compact mass thus obtained is sometimes referred to as a 'pellet'.
Digestion and precipitates ageing
Digestion, or precipitate ageing, happens when a freshly formed precipitate is left, usually at a higher temperature, in the solution from which it precipitates. It results in purer and larger recrystallized particles. The physico-chemical process underlying digestion is called Ostwald ripening.
Organic chemistry
While precipitation reactions can be used for making pigments, removing ions from solution in water treatment, and in classical qualitative inorganic analysis, precipitation is also commonly used to isolate the products of an organic reaction during workup and purification operations. Ideally, the product of the reaction is insoluble in the solvent used for the reaction. Thus, it precipitates as it is formed, preferably forming pure crystals. An example of this would be the synthesis of porphyrins in refluxing propionic acid. By cooling the reaction mixture to room temperature, crystals of the porphyrin precipitate and are collected by filtration on a Büchner funnel.
Precipitation may also occur when an antisolvent (a solvent in which the product is insoluble) is added, drastically reducing the solubility of the desired product. Thereafter, the precipitate may be easily separated by decanting, filtration, or by centrifugation. An example would be the synthesis of Cr3+ tetraphenylporphyrin chloride: water is added to the dimethylformamide (DMF) solution in which the reaction occurred, and the product precipitates. Precipitation is useful in purifying many other products: e.g., crude bmim-Cl is taken up in acetonitrile, and dropped into ethyl acetate, where it precipitates.
Biochemistry
Protein purification and separation can be performed by precipitation, by changing the nature of the solvent or the value of its relative permittivity (e.g., by replacing water with ethanol), or by increasing the ionic strength of the solution. As proteins have complex tertiary and quaternary structures due to their specific folding and various weak intermolecular interactions (e.g., hydrogen bridges), these superstructures can be modified, and proteins can be denatured and precipitated. Another important application of an antisolvent is in ethanol precipitation of DNA.
Metallurgy and alloys
In solid phases, precipitation occurs if the concentration of one solid is above the solubility limit in the host solid, due to e.g. rapid quenching or
ion implantation, and the temperature is high enough that diffusion can lead to segregation into precipitates. Precipitation in solids is routinely used to synthesize nanoclusters.
In metallurgy, precipitation from a solid solution is also a way to strengthen alloys.
Precipitation of ceramic phases in metallic alloys such as zirconium hydrides in zircaloy cladding of nuclear fuel pins can also render metallic alloys brittle and lead to their mechanical failure. Correctly mastering the precise temperature and pressure conditions when cooling down spent nuclear fuels is therefore essential to avoid damaging their cladding and to preserve the integrity of the spent fuel elements on the long term in dry storage casks and in geological disposal conditions.
Industrial processes
Hydroxide precipitation is probably the most widely used industrial precipitation process in which metal hydroxides are formed by adding calcium hydroxide (slaked lime) or sodium hydroxide (caustic soda) as precipitant.
History
Powders derived from different precipitation processes have also historically been known as 'flowers'.
| Physical sciences | Other reactions | Chemistry |
286454 | https://en.wikipedia.org/wiki/Settling | Settling | Settling is the process by which particulates move towards the bottom of a liquid and form a sediment. Particles that experience a force, either due to gravity or due to centrifugal motion will tend to move in a uniform manner in the direction exerted by that force. For gravity settling, this means that the particles will tend to fall to the bottom of the vessel, forming sludge or slurry at the vessel base.
Settling is an important operation in many applications, such as mining, wastewater and drinking water treatment, biological science, space propellant reignition,
and scooping.
Physics
For settling particles that are considered individually, i.e. in dilute particle solutions, there are two main forces acting upon any particle: an applied force, such as gravity, and a drag force due to the motion of the particle through the fluid. The applied force is usually not affected by the particle's velocity, whereas the drag force is a function of the particle velocity.
For a particle at rest, no drag force is exhibited, so the particle accelerates due to the applied force. As the particle accelerates, the drag force acts in the direction opposite to the particle's motion, retarding further acceleration; in the absence of other forces, drag directly opposes the applied force. As the particle's velocity increases, the drag force eventually approximately equals the applied force, causing no further change in the particle's velocity. This velocity is known as the terminal velocity, settling velocity or fall velocity of the particle. It is readily measurable by examining the rate of fall of individual particles.
The terminal velocity of the particle is affected by many parameters, i.e. anything that will alter the particle's drag. Hence the terminal velocity is most notably dependent upon grain size, the shape (roundness and sphericity) and density of the grains, as well as to the viscosity and density of the fluid.
Single particle drag
Stokes' drag
For dilute suspensions, Stokes' law predicts the settling velocity of small spheres in fluid, either air or water. This originates due to the strength of viscous forces at the surface of the particle providing the majority of the retarding force. Stokes' law finds many applications in the natural sciences, and is given by:

w = 2 (ρp − ρf) g r² / (9 μ)
where w is the settling velocity, ρ is density (the subscripts p and f indicate particle and fluid respectively), g is the acceleration due to gravity, r is the radius of the particle and μ is the dynamic viscosity of the fluid.
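A numerical sketch of this formula follows; the grain and fluid properties are assumed illustrative values for a small quartz grain in water, chosen to keep the Reynolds number within the Stokes range:

# Stokes' law: w = 2 (rho_p - rho_f) g r^2 / (9 mu)
def stokes_velocity(r, rho_p, rho_f, mu, g=9.81):
    return 2.0 * (rho_p - rho_f) * g * r**2 / (9.0 * mu)

r = 20e-6                  # particle radius: 20 micrometres
w = stokes_velocity(r, rho_p=2650.0, rho_f=1000.0, mu=1.0e-3)
print(w)                                # ~1.4e-3 m/s, about 1.4 mm/s
print(1000.0 * w * 2 * r / 1.0e-3)      # Reynolds number ~0.06, in the Stokes range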
Stokes' law applies when the Reynolds number, Re, of the particle is less than 0.1. Experimentally, Stokes' law is found to hold within 1% for , within 3% for and within 9% . With increasing Reynolds numbers, Stokes' law begins to break down due to the increasing importance of fluid inertia, requiring the use of empirical solutions to calculate drag forces.
Newtonian drag
Defining a drag coefficient, Cd, as the ratio of the force experienced by the particle to the impact pressure of the fluid establishes a coefficient that can be considered as the transfer of available fluid force into drag. In this region the inertia of the impacting fluid is responsible for the majority of force transfer to the particle.
For a spherical particle in the Stokes regime this value is not constant, however in the Newtonian drag regime the drag on a sphere can be approximated by a constant, 0.44. This constant value implies that the efficiency of transfer of energy from the fluid to the particle is not a function of fluid velocity.
As such, the terminal velocity of a particle in a Newtonian regime can again be obtained by equating the drag force to the applied force, resulting in the following expression:

w = √( 8 (ρp − ρf) g r / (3 Cd ρf) )
Transitional drag
In the intermediate region between Stokes drag and Newtonian drag, there exists a transitional regime, where the analytical solution to the problem of a falling sphere becomes problematic. To solve this, empirical expressions are used to calculate drag in this region. One such empirical equation is that of Schiller and Naumann, commonly taken as valid for Reynolds numbers up to about 1000:

Cd = (24 / Re) (1 + 0.15 Re^0.687)
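Because Cd depends on the velocity-dependent Reynolds number, the terminal velocity in this regime is usually found iteratively. A minimal fixed-point sketch, with assumed illustrative properties for a 1 mm quartz sphere in water:

def terminal_velocity(d, rho_p, rho_f, mu, g=9.81, iters=50):
    w = 1e-3                                        # initial guess (m/s)
    for _ in range(iters):
        Re = rho_f * w * d / mu
        cd = 24.0 / Re * (1.0 + 0.15 * Re**0.687)   # Schiller-Naumann
        # force balance: buoyant weight = drag on a sphere of diameter d
        w = (4.0 * (rho_p - rho_f) * g * d / (3.0 * cd * rho_f)) ** 0.5
    return w

print(terminal_velocity(1e-3, 2650.0, 1000.0, 1.0e-3))  # ~0.15 m/s, Re ~ 150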
Hindered settling
Stokes, transitional and Newtonian settling describe the behaviour of a single spherical particle in an infinite fluid, known as free settling. However this model has limitations in practical application. Alternate considerations, such as the interaction of particles in the fluid, or the interaction of the particles with the container walls can modify the settling behaviour. Settling that has these forces in appreciable magnitude is known as hindered settling. Subsequently, semi-analytic or empirical solutions may be used to perform meaningful hindered settling calculations.
Applications
Solid-gas flow systems are present in many industrial applications, such as dryers, catalytic reactors, settling tanks, and the pneumatic conveying of solids, among others. Obviously, in industrial operations the drag rule is not as simple as a single sphere settling in a stationary fluid. However, this knowledge indicates how drag behaves in more complex systems, which are designed and studied by engineers applying empirical and more sophisticated tools.
For example, settling tanks are used for separating solids and/or oil from another liquid. In food processing, vegetable matter is crushed and placed inside a settling tank with water; the oil floats to the top of the water and is then collected. In drinking water and waste water treatment, a flocculant or coagulant is often added prior to settling to form larger particles that settle out quickly in a settling tank or (lamella) clarifier, leaving the water with a lower turbidity.
In winemaking, the French term for this process is débourbage. This step usually occurs in white wine production before the start of fermentation.
Settleable solids analysis
Settleable solids are the particulates that settle out of a still fluid. Settleable solids can be quantified for a suspension using an Imhoff cone. The standard Imhoff cone of transparent glass or plastic holds one liter of liquid and has calibrated markings to measure the volume of solids accumulated in the bottom of the conical container after settling for one hour. A standardized Imhoff cone procedure is commonly used to measure suspended solids in wastewater or stormwater runoff. The simplicity of the method makes it popular for estimating water quality. To numerically gauge the stability of suspended solids and predict agglomeration and sedimentation events, zeta potential is commonly analyzed. This parameter indicates the electrostatic repulsion between solid particles and can be used to predict whether aggregation and settling will occur over time.
The water sample to be measured should be representative of the total stream. Samples are best collected from the discharge falling from a pipe or over a weir, because samples skimmed from the top of a flowing channel may fail to capture larger, high-density solids moving along the bottom of the channel. The sampling bucket is vigorously stirred to uniformly re-suspend all collected solids immediately before pouring the volume required to fill the cone. The filled cone is immediately placed in a stationary holding rack to allow quiescent settling. The rack should be located away from heating sources, including direct sunlight, which might cause currents within the cone from thermal density changes of the liquid contents. After 45 minutes of settling, the cone is partially rotated about its axis of symmetry just enough to dislodge any settled material adhering to the side of the cone. Accumulated sediment is observed and measured fifteen minutes later, after one hour of total settling time.
| Physical sciences | Other separations | Chemistry |
286560 | https://en.wikipedia.org/wiki/Bastion | Bastion | A bastion is a structure projecting outward from the curtain wall of a fortification, most commonly angular in shape and positioned at the corners of the fort. The fully developed bastion consists of two faces and two flanks, with fire from the flanks being able to protect the curtain wall and the adjacent bastions. Compared with the medieval fortified towers they replaced, bastion fortifications offered a greater degree of passive resistance and more scope for ranged defence in the age of gunpowder artillery. As military architecture, the bastion is one element in the style of fortification dominant from the mid 16th to mid 19th centuries.
Evolution
By the middle of the 15th century, artillery pieces had become powerful enough to make the traditional medieval round tower and curtain wall obsolete. This was exemplified by the campaigns of Charles VII of France who reduced the towns and castles held by the English during the latter stages of the Hundred Years War, and by the fall of Constantinople in 1453 to the large cannon of the Turkish army.
During the Eighty Years War (1568–1648) Dutch military engineers developed the concepts further by lengthening the faces and shortening the curtain walls of the bastions. The resulting construction was called a bolwerk. To augment this change they placed v-shaped outworks known as ravelins in front of the bastions and curtain walls to protect them from direct artillery fire.
These ideas were further developed and incorporated into the trace italienne forts by Sébastien Le Prestre de Vauban, that remained in use during the Napoleonic Wars.
Effectiveness
Bastions differ from medieval towers in a number of respects. Bastions are lower than towers and are normally of similar height to the adjacent curtain wall. The height of towers, although making them difficult to scale, also made them easy for artillery to destroy. A bastion would normally have a ditch in front, the opposite side of which would be built up above the natural level then slope away gradually. This glacis shielded most of the bastion from the attacker's cannon while the distance from the base of the ditch to the top of the bastion meant it was still difficult to scale.
In contrast to typical late medieval towers, bastions (apart from early examples) were flat sided rather than curved. This eliminated dead ground making it possible for the defenders to fire upon any point directly in front of the bastion.
Bastions also cover a larger area than most towers. This allows more cannons to be mounted and provided enough space for the crews to operate them.
Surviving examples of bastions are usually faced with masonry. Unlike the wall of a tower this was just a retaining wall; cannonballs were expected to pass through this and be absorbed by a greater thickness of hard-packed earth or rubble behind. The top of the bastion was exposed to enemy fire, and normally would not be faced with masonry as cannonballs hitting the surface would scatter lethal stone shards among the defenders.
If a bastion was successfully stormed, it could provide the attackers with a stronghold from which to launch further attacks. Some bastion designs attempted to minimise this problem. This could be achieved by the use of retrenchments in which a trench was dug across the rear (gorge) of the bastion, isolating it from the main rampart.
Types
Various kinds of bastions have been used throughout history:
Solid bastions are those that are filled up entirely and have the ground even with the height of the rampart, without any empty space towards the centre.
Void or hollow bastions are those that have a rampart, or parapet, only around their flanks and faces, so that a void space is left towards the centre. The ground is so low, that if the rampart is taken, no retrenchment can be made in the centre, but what will lie under the fire of the besieged.
A flat bastion is one built in the middle of a curtain, or enclosed court, when the court is too large to be defended by the bastions at its extremes.
A cut bastion is that which has a re-entering angle at the point. It was sometimes also called bastion with a tenaille. Such bastions were used, when without such a structure, the angle would be too acute. The term cut bastion is also used for one that is cut off from the place by some ditch.
A composed bastion is when the two sides of the interior polygon are very unequal, which also makes the gorges unequal.
A regular bastion is one that has proportionate faces, flanks, and gorges, such as the octagonal bastion that is the symbol of the town of Nanaimo in British Columbia.
A deformed or irregular bastion is one which lacks one of its demi-gorges; one side of the interior polygon being too short.
A demi-bastion has only one face and flank. To fortify the angle of a place that is too acute, they cut the point, and place two demi-bastions, which make a tenaille, or re-entry angle. Their chief use is before a hornwork or crownwork.
| Technology | Fortification | null |
286621 | https://en.wikipedia.org/wiki/Web%20colors | Web colors | Web colors are colors used in displaying web pages on the World Wide Web; they can be described by way of three methods: a color may be specified as an RGB triplet, in hexadecimal format (a hex triplet) or according to its common English name in some cases. A color tool or other graphics software is often used to generate color values. In some uses, hexadecimal color codes are specified with notation using a leading number sign (#). A color is specified according to the intensity of its red, green and blue components, each represented by eight bits. Thus, there are 24 bits used to specify a web color within the sRGB gamut, and 16,777,216 colors that may be so specified.
Colors outside the sRGB gamut can be specified in Cascading Style Sheets by making one or more of the red, green and blue components negative or greater than 100%, so the color space is theoretically an unbounded extrapolation of sRGB similar to scRGB. Specifying a non-sRGB color this way requires the RGB() function call. It is impossible with the hexadecimal syntax (and thus impossible in legacy HTML documents that do not use CSS).
The first versions of Mosaic and Netscape Navigator used the X11 color names as the basis for their color lists, as both started as X Window System applications.
Web colors have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, adaptive whitepoint, and viewing conditions. These have been chosen to be similar to many real-world monitors and viewing conditions, to allow rendering to be fairly close to the specified values even without color management. User agents vary in the fidelity with which they represent the specified colors. More advanced user agents use color management to provide better color fidelity; this is particularly important for Web-to-print applications.
Hex triplet
A hex triplet is a six-digit (or eight-digit), three-byte (or four-byte) hexadecimal number used in HTML, CSS, SVG, and other computing applications to represent colors. The bytes represent the red, green, and blue components of the color. The optional fourth byte refers to the alpha channel. One byte represents a number in the range 00 to FF (in hexadecimal notation), or 0 to 255 in decimal notation. This represents the least (0) to the most (255) intensity of each of the color components. Thus web colors specify colors in the 24-bit RGB color scheme. The hex triplet is formed by concatenating three bytes in hexadecimal notation, in the following order:
Byte 1: red value (color type red)
Byte 2: green value (color type green)
Byte 3: blue value (color type blue)
Byte 4 (optional): alpha value
For example, consider the color where the red/green/blue values are decimal numbers: red=123, green=58, blue=30 (a hardwood brown color). The decimal numbers 123, 58, and 30 are equivalent to the hexadecimal numbers 7B, 3A, and 1E, respectively. The hex triplet is obtained by concatenating the six hexadecimal digits together, 7B3A1E in this example.
If any one of the three color values is less than 10 hex (16 decimal), it must be represented with a leading zero so that the triplet always has exactly six digits. For example, the decimal triplet 4, 8, 16 would be represented by the hex digits 04, 08, 10, forming the hex triplet 040810.
The number of colors that can be represented by this system is 256³, 16⁶, or 2²⁴ = 16,777,216.
Shorthand hexadecimal form
An abbreviated, three (hexadecimal)-digit or four-digit form can be used, but can cause errors if software or maintenance scripts are only expecting the long form. Expanding this form to the six-digit form is as simple as repeating each digit: 09C becomes 0099CC as presented on the following CSS example:
.threedigit { color: #09C; }
.sixdigit { color: #0099CC; } /* same color as above */
This shorthand form reduces the palette to 4,096 colors, equivalent of 12-bit color as opposed to 24-bit color using the whole six-digit form (16,777,216 colors). This limitation is sufficient for many text-based documents.
Converting RGB to hexadecimal
RGB values are usually given in the 0–255 range; if they are in the 0–1 range, the values are multiplied by 255 before conversion. This number divided by sixteen (integer division; ignoring any remainder) gives the first hexadecimal digit (between 0 and F, where the letters A to F represent the numbers 10 to 15. See hexadecimal for more details). The remainder gives the second hexadecimal digit. For instance, the RGB value 58 (as shown in the previous example of hex triplets) divides into 3 groups of 16, thus the first digit is 3. A remainder of ten gives the hexadecimal number 3A. Likewise, the RGB value 201 divides into 12 groups of 16, thus the first digit is C. A remainder of nine gives the hexadecimal number C9. This process is repeated for each of the three color values.
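A compact sketch of this digit-by-digit procedure (the helper names are mine, for illustration only):

# Convert one 0-255 channel value to two hex digits via divmod,
# mirroring the divide-by-sixteen-and-take-the-remainder method above.
def channel_to_hex(value):
    digits = "0123456789ABCDEF"
    high, low = divmod(value, 16)     # e.g. 58 -> (3, 10) -> "3A"
    return digits[high] + digits[low]

def rgb_to_hex(r, g, b):
    return "".join(channel_to_hex(v) for v in (r, g, b))

print(rgb_to_hex(123, 58, 30))   # 7B3A1E, the hardwood brown above
print(rgb_to_hex(4, 8, 16))      # 040810, leading zeros preserved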
Conversion between number bases is a common feature of calculators, including both hand-held models and the software calculators bundled with most modern operating systems. Web-based tools specifically for converting color values are also available.
HTML color names
Recent W3C specifications of color names distinguish between basic and extended colors. In HTML and XHTML, colors can be used for text, background color, frame borders, tables, and individual table cells.
Basic colors
The basic colors are 16 colors defined in the HTML 4.01 specification, ratified in 1999, as follows (names are defined in this context to be case-insensitive):
These 16 were labeled as sRGB and included in the HTML 3.0 specification, which noted they were "the standard 16 colors supported with the Windows VGA palette."
Extended colors
Extended colors are the result of merging specifications from HTML 4.01, CSS 2.0, SVG 1.0 and CSS3 User Interfaces (CSS3 UI).
Several colors are defined by web browsers. A particular browser may not recognize all of these colors, but as of 2005, all modern, general-use, graphical browsers support the full list of colors. Many of these colors are from the list of X11 color names distributed with the X Window System. These colors were standardized by SVG 1.0, and are accepted by SVG Full user agents. They are not part of SVG Tiny.
The list of colors shipped with the X11 product varies between implementations and clashes with certain of the HTML names such as green. X11 colors are defined as simple RGB (hence, no particular color space), rather than sRGB. This means that the list of colors found in X11 (e.g., in /usr/lib/X11/rgb.txt) should not directly be used to choose colors for the web.
The list of web "X11 colors" from the CSS3 specification, along with their hexadecimal and decimal equivalents, is shown below. Compare the alphabetical lists in the W3C standards. This includes the common synonyms: aqua (HTML4/CSS 1.0 standard name) and cyan (common sRGB name), fuchsia (HTML4/CSS 1.0 standard name) and magenta (common sRGB name), gray (HTML4/CSS 1.0 standard name) and grey.
CSS colors
The Cascading Style Sheets specification defines the same number of named colors as the HTML 4 spec, namely the 16 HTML colors and 124 colors from the Netscape X11 color list, for a total of 140 names that were recognized by Internet Explorer (IE) 3.0 and Netscape Navigator 3.0. Blooberry.com notes that Opera 2.1 and Safari 1 also included Netscape's expanded list of 140 color names, but that 14 of the names were later found to be missing from Opera 3.5 on Windows 98.
In CSS 2.1, the color 'orange' (one of the 140) was added to the section with the 16 HTML4 colors as a 17th color. The CSS3.0 specification did not include orange in the "HTML4 color keywords" section, which was renamed as "Basic color keywords". In the same reference, the "SVG color keywords" section, was renamed "Extended color keywords", after starting out as "X11 color keywords" in an earlier working draft. The working draft for the level 4 color module combines the Basic and Extended sections together in a simple "Named Colors" section.
CSS 2, SVG and CSS 2.1 allow web authors to use system colors, which are color names whose values are taken from the operating system, picking the operating system's highlighted text color, or the background color for tooltip controls. This enables web authors to style their content in line with the operating system of the user agent. The CSS3 color module has deprecated the use of system colors in favor of CSS3 UI System Appearance property, which itself was subsequently dropped from CSS3.
The CSS3 specification also introduces HSL color space values to style sheets:
/* RGB model */
p { color: #F00 } /* #rgb */
p { color: #FF0000 } /* #rrggbb */
p { color: rgb(255, 0, 0) } /* integer range 0 - 255 */
p { color: rgb(100%, 0%, 0%) } /* float range 0.0% - 100.0% */
/* RGB with alpha channel, added to CSS3 */
p { color: rgba(255, 0, 0, 0.5) } /* 50% opacity, semi-transparent */
/* HSL model, added to CSS3 */
p { color: hsl(0, 100%, 50%) } /* red */
p { color: hsl(120, 100%, 50%) } /* green */
p { color: hsl(120, 100%, 25%) } /* dark green */
p { color: hsl(120, 100%, 75%) } /* light green */
p { color: hsl(120, 50%, 50%) } /* pastel green */
/* HSL model with alpha channel */
p { color: hsla(120, 100%, 50%, 1) } /* green */
p { color: hsla(120, 100%, 50%, 0.5) } /* semi-transparent green */
p { color: hsla(120, 100%, 50%, 0.1) } /* very transparent green */
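As a cross-check of the HSL values above, Python's standard colorsys module can perform the conversion (note that its argument order is hue, lightness, saturation, each scaled to 0-1; the wrapper name is mine):

import colorsys

def hsl_to_hex(h_deg, s_pct, l_pct):
    r, g, b = colorsys.hls_to_rgb(h_deg / 360.0, l_pct / 100.0, s_pct / 100.0)
    return "#%02X%02X%02X" % (round(r * 255), round(g * 255), round(b * 255))

print(hsl_to_hex(0, 100, 50))    # #FF0000  red
print(hsl_to_hex(120, 100, 25))  # #008000  dark green
print(hsl_to_hex(120, 50, 50))   # #40BF40  pastel green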
CSS also supports the special color transparent, which represents an alpha value of zero; by default, transparent is rendered as an invisible nominal black: rgba(0, 0, 0, 0). It was introduced in CSS1 but its scope of use has expanded over the versions.
CSS Color 4
Level 4 of the CSS Color specification introduced several new CSS color formats.
Besides new ways to write colors, it also introduces the concept of mixing colors in a non-sRGB color space, a first step towards fixing a well-known issue in color gradients. Some sections explaining color theory and common operations like gamut mapping are also added to aid implementation.
p { color: #F80A } /* #rgba */
p { color: #FF8800AA } /* #rrggbbaa */
p { color: rgb(255.0 136.0 0.0 / 0.667) } /* float range 0.0 - 255.0 for higher than 8-bit precision */
p { color: rgb(100% 53.3% 0% / 66.7%) } /* float range 0.0% - 100.0% */
p { color: color(sRGB 1 0.533 0 / 0.667) } /* color() function with color space */
Device independent color
CSS Color 4 introduces several different formats for device independent color that can display the entirety of visible color (in a capable screen), including:
CIE Lab and LCH
OKLab and OKLCH (preferred over Lab/LCH)
XYZ (D50 or D65 [default])
Predefined color spaces
A number of RGB spaces with gamuts that are wider than sRGB are also introduced through the new color() function:
Display P3
Prophoto
REC.2020
Adobe 1998 RGB
A linearized variant of sRGB is also defined for color mixing.
Other formats
On 21 June 2014, the CSS WG added the color RebeccaPurple to the Editor's Draft of the Colors module level 4, to commemorate Eric Meyer's daughter Rebecca, who died on 7 June 2014, her sixth birthday.
CSS4 also introduces the HWB color model as an alternative to HSL/HSV.
CSS Color 5
The draft CSS Color 5 specification introduces syntax for mixing and manipulating existing colors, including:
A color-mix() function for mixing colors
Relative color syntax for manipulating components of an existing color
Custom color spaces are also supported via ICC profiles. This allows the use of CMYK on web pages.
Web-safe colors
In the mid-1990s, many displays were only capable of displaying 256 colors, dictated by the hardware or changeable by a "color table". When a color was found (e.g., in an image) that was not available, a different one had to be used. This was done by either using the closest color or by using dithering.
There were various attempts to make a "standard" color palette. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use; it was also selected because it allowed exactly six equally spaced shades of red, green, and blue (6 × 6 × 6 = 216), each from 00 to FF (including both limits).
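The six evenly spaced channel values are 00, 33, 66, 99, CC and FF in hexadecimal, so the full palette can be generated directly; a short sketch:

# Every combination of the six evenly spaced channel values
# yields the 216 "web-safe" colors.
LEVELS = [0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF]

palette = ["%02X%02X%02X" % (r, g, b)
           for r in LEVELS for g in LEVELS for b in LEVELS]

print(len(palette))              # 216
print(palette[0], palette[-1])   # 000000 FFFFFF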
The list of colors was presented as if it had special properties that render it immune to dithering, but on 256-color displays applications could actually set a palette of any selection of colors that they chose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by various browser applications; in practice, the palettes in use in different browsers were not very different.
"Web-safe" colors had a flaw in that, on systems such as X11 where the palette is shared between applications, smaller color cubes (5×5×5 or 4×4×4) were allocated by browsers—the "web-safe" colors would dither on such systems. Different results were obtained by providing an image with a larger range of colors and allowing the browser to quantize the color space if needed, rather than suffer the quality loss of a double quantization.
Through the 2000s, use of 256-color displays in personal computers dropped sharply in favour of 24-bit (TrueColor) displays, and the use of "web-safe" colors has fallen into practical disuse.
The "web-safe" colors do not all have standard names, but each can be specified by an RGB triplet: each component (red, green, and blue) takes one of the six values from the following table (out of the 256 possible values available for each component in full 24-bit color).
The following table shows all of the "web-safe" colors. One shortcoming of the web-safe palette is its small range of light colors for webpage backgrounds, whereas the intensities at the low end of the range, such as the two darkest, are similar to each other, making them hard to distinguish. Values flanked by "*" (asterisk) are part of the "really safe palette;" see Safest web colors, below.
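Because the palette is simply every combination of the six per-channel values, it can be regenerated in a few lines; the Python sketch below (an illustration, not part of any standard) enumerates all 216 colors:
from itertools import product

# The six "web-safe" values per channel: 0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF.
LEVELS = [0x33 * i for i in range(6)]

web_safe = ["#{:02X}{:02X}{:02X}".format(r, g, b)
            for r, g, b in product(LEVELS, repeat=3)]

print(len(web_safe))   # 216
print(web_safe[:3])    # ['#000000', '#000033', '#000066']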
Color table
Safest web colors
Designers were encouraged to stick to these 216 "web-safe" colors in their websites because 8-bit color displays were still common when the 216-color palette was developed. However, David Lehn and Hadley Stern discovered that only 22 of the 216 colors in the web-safe palette are reliably displayed without inconsistent remapping on 16-bit computer displays. They called these 22 colors "the really safe palette"; it consists largely of shades of green, yellow, and cyan.
Accessibility
Color selection
Some browsers and devices do not support colors. For users of such displays, and for blind and colorblind users, Web content that depends on colors can be unusable or difficult to use.
Either no colors should be specified (to invoke the browser's default colors), or both the background and all foreground colors (such as the colors of plain text, unvisited links, hovered links, active links, and visited links) should be specified to avoid black on black or white on white effects.
Color contrast
The Web Content Accessibility Guidelines recommend a contrast ratio of at least 4.5:1 between the relative luminance of text and its background color or at least 3:1 for large text. Enhanced accessibility requires contrast ratios greater than 7:1.
However, addressing accessibility concerns is not simply a matter of increasing the contrast ratio. As a report to the Web Accessibility Initiative indicates, dyslexic readers are better served by contrast ratios below the maximum. The recommendations it cites, off-black (#0A0A0A) on off-white (#FFFFE5) and black (#000000) on cream (#FAFAC8), have contrast ratios of 11.7:1 and 20.3:1 respectively. Among the other color pairs, brown (#282800) on dark green (#A0A000) has a contrast ratio of 3.24:1, which is below the WCAG recommendation; dark brown (#1E1E00) on light green (#B9B900) has a contrast ratio of 4.54:1; and blue (#00007D) on yellow (#FFFF00) has a contrast ratio of 11.4:1. The colors named in the report use different color values than the web colors of the same name.
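The ratios above follow from the WCAG definitions of relative luminance and contrast ratio; the following Python sketch implements those two formulas (the function names are ours):
def channel_lin(c8):
    # Linearize one 8-bit sRGB channel per the WCAG definition.
    c = c8 / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    r, g, b = (channel_lin(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(rgb1, rgb2):
    l1, l2 = sorted((relative_luminance(rgb1), relative_luminance(rgb2)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast_ratio((255, 255, 255), (0, 0, 0)))  # 21.0, the maximum possible ratio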
| Physical sciences | Basics | Physics |
18716923 | https://en.wikipedia.org/wiki/Algebra | Algebra | Algebra is the branch of mathematics that studies certain abstract systems, known as algebraic structures, and the manipulation of expressions within those systems. It is a generalization of arithmetic that introduces variables and algebraic operations other than the standard arithmetic operations, such as addition and multiplication.
Elementary algebra is the main form of algebra taught in schools. It examines mathematical statements using variables for unspecified values and seeks to determine for which values the statements are true. To do so, it uses different methods of transforming equations to isolate variables. Linear algebra is a closely related field that investigates linear equations and combinations of them called systems of linear equations. It provides methods to find the values that solve all equations in the system at the same time, and to study the set of these solutions.
Abstract algebra studies algebraic structures, which consist of a set of mathematical objects together with one or several operations defined on that set. It is a generalization of elementary and linear algebra, since it allows mathematical objects other than numbers and non-arithmetic operations. It distinguishes between different types of algebraic structures, such as groups, rings, and fields, based on the number of operations they use and the laws they follow, called axioms. Universal algebra and category theory provide general frameworks to investigate abstract patterns that characterize different classes of algebraic structures.
Algebraic methods were first studied in the ancient period to solve specific problems in fields like geometry. Subsequent mathematicians examined general techniques to solve equations independent of their specific applications. They described equations and their solutions using words and abbreviations until the 16th and 17th centuries, when a rigorous symbolic formalism was developed. In the mid-19th century, the scope of algebra broadened beyond a theory of equations to cover diverse types of algebraic operations and structures. Algebra is relevant to many branches of mathematics, such as geometry, topology, number theory, and calculus, and other fields of inquiry, like logic and the empirical sciences.
Definition and etymology
Algebra is the branch of mathematics that studies algebraic structures and the operations they use. An algebraic structure is a non-empty set of mathematical objects, such as the integers, together with algebraic operations defined on that set, like addition and multiplication. Algebra explores the laws, general characteristics, and types of algebraic structures. Within certain algebraic structures, it examines the use of variables in equations and how to manipulate these equations.
Algebra is often understood as a generalization of arithmetic. Arithmetic studies operations like addition, subtraction, multiplication, and division, in a particular domain of numbers, such as the real numbers. Elementary algebra constitutes the first level of abstraction. Like arithmetic, it restricts itself to specific types of numbers and operations. It generalizes these operations by allowing indefinite quantities in the form of variables in addition to numbers. A higher level of abstraction is found in abstract algebra, which is not limited to a particular domain and examines algebraic structures such as groups and rings. It extends beyond typical arithmetic operations by also covering other types of operations. Universal algebra is still more abstract in that it is not interested in specific algebraic structures but investigates the characteristics of algebraic structures in general.
The term "algebra" is sometimes used in a more narrow sense to refer only to elementary algebra or only to abstract algebra. When used as a countable noun, an algebra is a specific type of algebraic structure that involves a vector space equipped with a certain type of binary operation. Depending on the context, "algebra" can also refer to other algebraic structures, like a Lie algebra or an associative algebra.
The word algebra comes from the Arabic term al-jabr, which originally referred to the surgical treatment of bonesetting. In the 9th century, the term received a mathematical meaning when the Persian mathematician Muhammad ibn Musa al-Khwarizmi employed it to describe a method of solving equations and used it in the title of a treatise on algebra, The Compendious Book on Calculation by Completion and Balancing, which was later translated into Latin. The word entered the English language in the 16th century from Italian, Spanish, and medieval Latin. Initially, its meaning was restricted to the theory of equations, that is, to the art of manipulating polynomial equations with a view to solving them. This changed in the 19th century when the scope of algebra broadened to cover the study of diverse types of algebraic operations and structures together with their underlying axioms, the laws they follow.
Major branches
Elementary algebra
Elementary algebra, also called school algebra, college algebra, and classical algebra, is the oldest and most basic form of algebra. It is a generalization of arithmetic that relies on variables and examines how mathematical statements may be transformed.
Arithmetic is the study of numerical operations and investigates how numbers are combined and transformed using the arithmetic operations of addition, subtraction, multiplication, division, exponentiation, extraction of roots, and logarithm. For example, the operation of addition combines two numbers, called the addends, into a third number, called the sum, as in 2 + 3 = 5.
Elementary algebra relies on the same operations while allowing variables in addition to regular numbers. Variables are symbols for unspecified or unknown quantities. They make it possible to state relationships for which one does not know the exact values and to express general laws that are true, independent of which numbers are used. For example, the equation 2 × 3 = 3 × 2 belongs to arithmetic and expresses an equality only for these specific numbers. By replacing the numbers with variables, it is possible to express a general law that applies to any possible combination of numbers, like the commutative property of multiplication, which is expressed in the equation a × b = b × a.
Algebraic expressions are formed by using arithmetic operations to combine variables and numbers. By convention, the lowercase letters x, y, and z represent variables. In some cases, subscripts are added to distinguish variables, as in x₁, x₂, and x₃. The lowercase letters a, b, and c are usually used for constants and coefficients. For example, the expression 5x + 3 is an algebraic expression created by multiplying the number 5 with the variable x and adding the number 3 to the result.
Some algebraic expressions take the form of statements that relate two expressions to one another. An equation is a statement formed by comparing two expressions, saying that they are equal. This can be expressed using the equals sign (=), as in 5x + 4 = 9. Inequations involve a different type of comparison, saying that the two sides are different. This can be expressed using symbols such as the less-than sign (<), the greater-than sign (>), and the inequality sign (≠). Unlike other expressions, statements can be true or false, and their truth value usually depends on the values of the variables. For example, the statement x² = 4 is true if x is either 2 or −2 and false otherwise. Equations with variables can be divided into identity equations and conditional equations. Identity equations are true for all values that can be assigned to the variables, such as the equation 2x + 5x = 7x. Conditional equations are only true for some values. For example, the equation x + 4 = 9 is only true if x is 5.
The main goal of elementary algebra is to determine the values for which a statement is true. This can be achieved by transforming and manipulating statements according to certain rules. A key principle guiding this process is that whatever operation is applied to one side of an equation also needs to be done to the other side. For example, if one subtracts 5 from the left side of an equation one also needs to subtract 5 from the right side to balance both sides. The goal of these steps is usually to isolate the variable one is interested in on one side, a process known as solving the equation for that variable. For example, the equation x − 7 = 4 can be solved for x by adding 7 to both sides, which isolates x on the left side and results in the equation x = 11.
There are many other techniques used to solve equations. Simplification is employed to replace a complicated expression with an equivalent simpler one. For example, the expression 7x − 3x can be replaced with the expression 4x, since 7x − 3x = (7 − 3)x = 4x by the distributive property. For statements with several variables, substitution is a common technique to replace one variable with an equivalent expression that does not use this variable. For example, if one knows that y = 3x, then one can simplify the expression 7xy to arrive at 21x². In a similar way, if one knows the value of one variable one may be able to use it to determine the value of other variables.
Algebraic equations can be interpreted geometrically to describe spatial figures in the form of a graph. To do so, the different variables in the equation are understood as coordinates and the values that solve the equation are interpreted as points of a graph. For example, if x is set to zero in the equation y = 3x − 1, then y must be −1 for the equation to be true. This means that the x–y pair (0, −1) is part of the graph of the equation. The x–y pair (0, 7), by contrast, does not solve the equation and is therefore not part of the graph. The graph encompasses the totality of x–y pairs that solve the equation.
Polynomials
A polynomial is an expression consisting of one or more terms that are added or subtracted from each other, like x⁴ + 3xy² + 5x + 1. Each term is either a constant, a variable, or a product of a constant and variables. Each variable can be raised to a positive-integer power. A monomial is a polynomial with one term while two- and three-term polynomials are called binomials and trinomials. The degree of a polynomial is the maximal value (among its terms) of the sum of the exponents of the variables (4 in the above example). Polynomials of degree one are called linear polynomials. Linear algebra studies systems of linear polynomials. A polynomial is said to be univariate or multivariate, depending on whether it uses one or more variables.
Factorization is a method used to simplify polynomials, making it easier to analyze them and determine the values for which they evaluate to zero. Factorization consists in rewriting a polynomial as a product of several factors. For example, the polynomial x² − 3x − 10 can be factorized as (x + 2)(x − 5). The polynomial as a whole is zero if and only if one of its factors is zero, i.e., if x is either −2 or 5. Before the 19th century, much of algebra was devoted to polynomial equations, that is, equations obtained by equating a polynomial to zero. The first attempts for solving polynomial equations were to express the solutions in terms of nth roots. The solution of a second-degree polynomial equation of the form ax² + bx + c = 0 is given by the quadratic formula x = (−b ± √(b² − 4ac)) / (2a).
Solutions for the degrees 3 and 4 are given by the cubic and quartic formulas. There are no general solutions for higher degrees, as proven in the 19th century by the Abel–Ruffini theorem. Even when general solutions do not exist, approximate solutions can be found by numerical tools like the Newton–Raphson method.
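To make the numerical route concrete, here is a minimal Newton–Raphson sketch in Python (a didactic illustration, not a production root-finder); it repeatedly replaces a guess x with x − p(x)/p′(x):
def newton_raphson(p, dp, x, iterations=50):
    # p is the polynomial, dp its derivative; x is the starting guess.
    for _ in range(iterations):
        x = x - p(x) / dp(x)
    return x

# Approximate a root of x^5 - x - 1, a classic example with no solution in radicals.
p  = lambda x: x**5 - x - 1
dp = lambda x: 5 * x**4 - 1
print(newton_raphson(p, dp, 1.0))  # ~1.1673, the real root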
The fundamental theorem of algebra asserts that every univariate polynomial equation of positive degree with real or complex coefficients has at least one complex solution. Consequently, every polynomial of a positive degree can be factorized into linear polynomials. This theorem was proved at the beginning of the 19th century, but this does not close the problem since the theorem does not provide any way for computing the solutions.
Linear algebra
Linear algebra starts with the study of systems of linear equations. An equation is linear if it can be expressed in the form a₁x₁ + a₂x₂ + ... + aₙxₙ = b, where a₁, a₂, ..., aₙ and b are constants. Examples are 3x + 2y = 7 and 5x − y + 4z = 9. A system of linear equations is a set of linear equations for which one is interested in common solutions.
Matrices are rectangular arrays of values that were originally introduced to provide a compact and synthetic notation for systems of linear equations. For example, a system of two linear equations in two unknowns, such as
a₁₁x₁ + a₁₂x₂ = b₁
a₂₁x₁ + a₂₂x₂ = b₂
can be written as the single matrix equation AX = B, where A is the 2 × 2 matrix of coefficients aᵢⱼ, X is the column vector of the unknowns x₁ and x₂, and B is the column vector of the constants b₁ and b₂.
Under some conditions on the number of rows and columns, matrices can be added, multiplied, and sometimes inverted. All methods for solving linear systems may be expressed as matrix manipulations using these operations. For example, solving the above system consists of computing an inverse matrix A⁻¹ such that A⁻¹A = I, where I is the identity matrix. Then, multiplying both members of the above matrix equation on the left by A⁻¹, one gets the solution of the system of linear equations as X = A⁻¹B.
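As a hypothetical numeric instance of this procedure, using NumPy (the system of equations below is our own example):
import numpy as np

# Solve the system  2x + y = 5,  x - 3y = -8  written as AX = B.
A = np.array([[2.0, 1.0],
              [1.0, -3.0]])
B = np.array([5.0, -8.0])

X = np.linalg.solve(A, B)   # numerically stable; preferred over computing inv(A) @ B
print(X)                    # [1. 3.], i.e. x = 1, y = 3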
Methods of solving systems of linear equations range from the introductory, like substitution and elimination, to more advanced techniques using matrices, such as Cramer's rule, Gaussian elimination, and LU decomposition. Some systems of equations are inconsistent, meaning that no solutions exist because the equations contradict each other. Consistent systems have either one unique solution or an infinite number of solutions.
The study of vector spaces and linear maps forms a large part of linear algebra. A vector space is an algebraic structure formed by a set with an addition that makes it an abelian group and a scalar multiplication that is compatible with addition (see vector space for details). A linear map is a function between vector spaces that is compatible with addition and scalar multiplication. In the case of finite-dimensional vector spaces, vectors and linear maps can be represented by matrices. It follows that the theories of matrices and finite-dimensional vector spaces are essentially the same. In particular, vector spaces provide a third way for expressing and manipulating systems of linear equations. From this perspective, a matrix is a representation of a linear map: if one chooses a particular basis to describe the vectors being transformed, then the entries in the matrix give the results of applying the linear map to the basis vectors.
Systems of equations can be interpreted as geometric figures. For systems with two variables, each equation represents a line in two-dimensional space. The point where the two lines intersect is the solution of the full system because this is the only point that solves both the first and the second equation. For inconsistent systems, the two lines run parallel, meaning that there is no solution since they never intersect. If two equations are not independent then they describe the same line, meaning that every solution of one equation is also a solution of the other equation. These relations make it possible to seek solutions graphically by plotting the equations and determining where they intersect. The same principles also apply to systems of equations with more variables, with the difference being that the equations do not describe lines but higher dimensional figures. For instance, equations with three variables correspond to planes in three-dimensional space, and the points where all planes intersect solve the system of equations.
Abstract algebra
Abstract algebra, also called modern algebra, is the study of algebraic structures. An algebraic structure is a framework for understanding operations on mathematical objects, like the addition of numbers. While elementary algebra and linear algebra work within the confines of particular algebraic structures, abstract algebra takes a more general approach that compares how algebraic structures differ from each other and what types of algebraic structures there are, such as groups, rings, and fields. The key difference between these types of algebraic structures lies in the number of operations they use and the laws they obey. In mathematics education, abstract algebra refers to an advanced undergraduate course that mathematics majors take after completing courses in linear algebra.
On a formal level, an algebraic structure is a set of mathematical objects, called the underlying set, together with one or several operations. Abstract algebra is primarily interested in binary operations, which take any two objects from the underlying set as inputs and map them to another object from this set as output. For example, the algebraic structure ⟨ℕ, +⟩ has the natural numbers (ℕ) as the underlying set and addition (+) as its binary operation. The underlying set can contain mathematical objects other than numbers, and the operations are not restricted to regular arithmetic operations. For instance, the underlying set of the symmetry group of a geometric object is made up of geometric transformations, such as rotations, under which the object remains unchanged. Its binary operation is function composition, which takes two transformations as input and has the transformation resulting from applying the first transformation followed by the second as its output.
Group theory
Abstract algebra classifies algebraic structures based on the laws or axioms that its operations obey and the number of operations it uses. One of the most basic types is a group, which has one operation and requires that this operation is associative and has an identity element and inverse elements. An operation is associative if the order of several applications does not matter, i.e., if (a ∘ b) ∘ c is the same as a ∘ (b ∘ c) for all elements. An operation has an identity element or a neutral element if one element e exists that does not change the value of any other element, i.e., if a ∘ e = e ∘ a = a. An operation has inverse elements if for any element a there exists a reciprocal element a⁻¹ that undoes a. If an element operates on its inverse then the result is the neutral element e, expressed formally as a ∘ a⁻¹ = a⁻¹ ∘ a = e. Every algebraic structure that fulfills these requirements is a group. For example, ⟨ℤ, +⟩ is a group formed by the set of integers together with the operation of addition. The neutral element is 0 and the inverse element of any number a is −a. The natural numbers with addition, by contrast, do not form a group since they contain only nonnegative integers and therefore lack inverse elements.
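These axioms can be checked by brute force for a small finite structure, such as the integers modulo 4 under addition; the Python sketch below is purely didactic:
elements = range(4)
op = lambda a, b: (a + b) % 4   # addition modulo 4

associative = all(op(op(a, b), c) == op(a, op(b, c))
                  for a in elements for b in elements for c in elements)
identity = next(e for e in elements
                if all(op(a, e) == op(e, a) == a for a in elements))
has_inverses = all(any(op(a, b) == identity for b in elements) for a in elements)

print(associative, identity, has_inverses)  # True 0 True: Z/4Z is a group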
Group theory examines the nature of groups, with basic theorems such as the fundamental theorem of finite abelian groups and the Feit–Thompson theorem. The latter was a key early step in one of the most important mathematical achievements of the 20th century: the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 2004, that culminated in a complete classification of finite simple groups.
Ring theory and field theory
A ring is an algebraic structure with two operations that work similarly to the addition and multiplication of numbers and are named and generally denoted similarly. A ring is a commutative group under addition: the addition of the ring is associative, commutative, and has an identity element and inverse elements. The multiplication is associative and distributive with respect to addition; that is, a(b + c) = ab + ac and (b + c)a = ba + ca. Moreover, multiplication has an identity element generally denoted as 1. Multiplication need not be commutative; if it is commutative, one has a commutative ring. The ring of integers (ℤ) is one of the simplest commutative rings.
A field is a commutative ring such that 1 ≠ 0 and each nonzero element has a multiplicative inverse. The ring of integers does not form a field because it lacks multiplicative inverses. For example, the multiplicative inverse of 7 is 1/7, which is not an integer. The rational numbers, the real numbers, and the complex numbers each form a field with the operations of addition and multiplication.
Ring theory is the study of rings, exploring concepts such as subrings, quotient rings, polynomial rings, and ideals as well as theorems such as Hilbert's basis theorem. Field theory is concerned with fields, examining field extensions, algebraic closures, and finite fields. Galois theory explores the relation between field theory and group theory, relying on the fundamental theorem of Galois theory.
Theories of interrelations among structures
Besides groups, rings, and fields, there are many other algebraic structures studied by algebra. They include magmas, semigroups, monoids, abelian groups, commutative rings, modules, lattices, vector spaces, algebras over a field, and associative and non-associative algebras. They differ from each other in regard to the types of objects they describe and the requirements that their operations fulfill. Many are related to each other in that a basic structure can be turned into a more advanced structure by adding additional requirements. For example, a magma becomes a semigroup if its operation is associative.
Homomorphisms are tools to examine structural features by comparing two algebraic structures. A homomorphism is a function from the underlying set of one algebraic structure to the underlying set of another algebraic structure that preserves certain structural characteristics. If the two algebraic structures use binary operations and have the form ⟨A, ∘⟩ and ⟨B, ⋆⟩, then the function h: A → B is a homomorphism if it fulfills the following requirement: h(x ∘ y) = h(x) ⋆ h(y) for all x and y in A. The existence of a homomorphism reveals that the operation ⋆ in the second algebraic structure plays the same role as the operation ∘ does in the first algebraic structure. Isomorphisms are a special type of homomorphism that indicates a high degree of similarity between two algebraic structures. An isomorphism is a bijective homomorphism, meaning that it establishes a one-to-one relationship between the elements of the two algebraic structures. This implies that every element of the first algebraic structure is mapped to one unique element in the second structure without any unmapped elements in the second structure.
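A familiar concrete example, assuming nothing beyond standard calculus: the exponential function is a homomorphism from the real numbers under addition to the positive reals under multiplication, since exp(x + y) = exp(x) · exp(y). A quick numerical spot-check in Python:
import math, random

# h = exp is a homomorphism from (R, +) to (R>0, *): h(x + y) = h(x) * h(y).
for _ in range(3):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    assert math.isclose(math.exp(x + y), math.exp(x) * math.exp(y))
print("homomorphism property holds on sampled inputs")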
Another tool of comparison is the relation between an algebraic structure and its subalgebra. The algebraic structure and its subalgebra use the same operations, which follow the same axioms. The only difference is that the underlying set of the subalgebra is a subset of the underlying set of the algebraic structure. All operations in the subalgebra are required to be closed in its underlying set, meaning that they only produce elements that belong to this set. For example, the set of even integers together with addition is a subalgebra of the full set of integers together with addition. This is the case because the sum of two even numbers is again an even number. But the set of odd integers together with addition is not a subalgebra because it is not closed: adding two odd numbers produces an even number, which is not part of the chosen subset.
Universal algebra is the study of algebraic structures in general. As part of its general perspective, it is not concerned with the specific elements that make up the underlying sets and considers operations with more than two inputs, such as ternary operations. It provides a framework for investigating what structural features different algebraic structures have in common. One of those structural features concerns the identities that are true in different algebraic structures. In this context, an identity is a universal equation or an equation that is true for all elements of the underlying set. For example, commutativity is a universal equation that states that a ∘ b is identical to b ∘ a for all elements. A variety is a class of all algebraic structures that satisfy certain identities. For example, if two algebraic structures satisfy commutativity then they are both part of the corresponding variety.
Category theory examines how mathematical objects are related to each other using the concept of categories. A category is a collection of objects together with a collection of morphisms or "arrows" between those objects. These two collections must satisfy certain conditions. For example, morphisms can be joined, or composed: if there exists a morphism from object a to object b, and another morphism from object b to object c, then there must also exist one from object a to object c. Composition of morphisms is required to be associative, and there must be an "identity morphism" for every object. Categories are widely used in contemporary mathematics since they provide a unifying framework to describe and analyze many fundamental mathematical concepts. For example, sets can be described with the category of sets, and any group can be regarded as the morphisms of a category with just one object.
History
The origin of algebra lies in attempts to solve mathematical problems involving arithmetic calculations and unknown quantities. These developments happened in the ancient period in Babylonia, Egypt, Greece, China, and India. One of the earliest documents on algebraic problems is the Rhind Mathematical Papyrus from ancient Egypt, which was written around 1650 BCE. It discusses solutions to linear equations, as expressed in problems like "A quantity; its fourth is added to it. It becomes fifteen. What is the quantity?" Babylonian clay tablets from around the same time explain methods to solve linear and quadratic polynomial equations, such as the method of completing the square.
Many of these insights found their way to the ancient Greeks. Starting in the 6th century BCE, their main interest was geometry rather than algebra, but they employed algebraic methods to solve geometric problems. For example, they studied geometric figures while taking their lengths and areas as unknown quantities to be determined, as exemplified in Pythagoras' formulation of the difference of two squares method and later in Euclid's Elements. In the 3rd century CE, Diophantus provided a detailed treatment of how to solve algebraic equations in a series of books called Arithmetica. He was the first to experiment with symbolic notation to express polynomials. Diophantus's work influenced Arab development of algebra with many of his methods reflected in the concepts and techniques used in medieval Arabic algebra. In ancient China, The Nine Chapters on the Mathematical Art, a book composed over the period spanning from the 10th century BCE to the 2nd century CE, explored various techniques for solving algebraic equations, including the use of matrix-like constructs.
There is no unanimity as to whether these early developments are part of algebra or only precursors. They offered solutions to algebraic problems but did not conceive them in an abstract and general manner, focusing instead on specific cases and applications. This changed with the Persian mathematician al-Khwarizmi, who published his The Compendious Book on Calculation by Completion and Balancing in 825 CE. It presents the first detailed treatment of general methods that can be used to manipulate linear and quadratic equations by "reducing" and "balancing" both sides. Other influential contributions to algebra came from the Arab mathematician Thābit ibn Qurra also in the 9th century and the Persian mathematician Omar Khayyam in the 11th and 12th centuries.
In India, Brahmagupta investigated how to solve quadratic equations and systems of equations with several variables in the 7th century CE. Among his innovations were the use of zero and negative numbers in algebraic equations. The Indian mathematicians Mahāvīra in the 9th century and Bhāskara II in the 12th century further refined Brahmagupta's methods and concepts. In 1247, the Chinese mathematician Qin Jiushao wrote the Mathematical Treatise in Nine Sections, which includes an algorithm for the numerical evaluation of polynomials, including polynomials of higher degrees.
The Italian mathematician Fibonacci brought al-Khwarizmi's ideas and techniques to Europe in books including his Liber Abaci. In 1545, the Italian polymath Gerolamo Cardano published his book Ars Magna, which covered many topics in algebra, discussed imaginary numbers, and was the first to present general methods for solving cubic and quartic equations. In the 16th and 17th centuries, the French mathematicians François Viète and René Descartes introduced letters and symbols to denote variables and operations, making it possible to express equations in an abstract and concise manner. Their predecessors had relied on verbal descriptions of problems and solutions. Some historians see this development as a key turning point in the history of algebra and consider what came before it as the prehistory of algebra because it lacked the abstract nature based on symbolic manipulation.
In the 17th and 18th centuries, many attempts were made to find general solutions to polynomials of degree five and higher. All of them failed. At the end of the 18th century, the German mathematician Carl Friedrich Gauss proved the fundamental theorem of algebra, which describes the existence of zeros of polynomials of any degree without providing a general solution. At the beginning of the 19th century, the Italian mathematician Paolo Ruffini and the Norwegian mathematician Niels Henrik Abel were able to show that no general solution exists for polynomials of degree five and higher. In response to and shortly after their findings, the French mathematician Évariste Galois developed what came later to be known as Galois theory, which offered a more in-depth analysis of the solutions of polynomials while also laying the foundation of group theory. Mathematicians soon realized the relevance of group theory to other fields and applied it to disciplines like geometry and number theory.
Starting in the mid-19th century, interest in algebra shifted from the study of polynomials associated with elementary algebra towards a more general inquiry into algebraic structures, marking the emergence of abstract algebra. This approach explored the axiomatic basis of arbitrary algebraic operations. The invention of new algebraic systems based on different operations and elements accompanied this development, such as Boolean algebra, vector algebra, and matrix algebra. Influential early developments in abstract algebra were made by the German mathematicians David Hilbert, Ernst Steinitz, and Emmy Noether as well as the Austrian mathematician Emil Artin. They researched different forms of algebraic structures and categorized them based on their underlying axioms into types, like groups, rings, and fields.
The idea of the even more general approach associated with universal algebra was conceived by the English mathematician Alfred North Whitehead in his 1898 book A Treatise on Universal Algebra. Starting in the 1930s, the American mathematician Garrett Birkhoff expanded these ideas and developed many of the foundational concepts of this field. The invention of universal algebra led to the emergence of various new areas focused on the algebraization of mathematics, that is, the application of algebraic methods to other branches of mathematics. Topological algebra arose in the early 20th century, studying algebraic structures such as topological groups and Lie groups. In the 1940s and 50s, homological algebra emerged, employing algebraic techniques to study homology. Around the same time, category theory was developed and has since played a key role in the foundations of mathematics. Other developments were the formulation of model theory and the study of free algebras.
Applications
The influence of algebra is wide-reaching, both within mathematics and in its applications to other fields. The algebraization of mathematics is the process of applying algebraic methods and principles to other branches of mathematics, such as geometry, topology, number theory, and calculus. It happens by employing symbols in the form of variables to express mathematical insights on a more general level, allowing mathematicians to develop formal models describing how objects interact and relate to each other.
One application, found in geometry, is the use of algebraic statements to describe geometric figures. For example, the equation y = 3x − 7 describes a line in two-dimensional space while the equation x² + y² + z² = 1 corresponds to a sphere in three-dimensional space. Of special interest to algebraic geometry are algebraic varieties, which are solutions to systems of polynomial equations that can be used to describe more complex geometric figures. Algebraic reasoning can also solve geometric problems. For example, one can determine whether and where the line described by y = x + 1 intersects with the circle described by x² + y² = 25 by solving the system of equations made up of these two equations. Topology studies the properties of geometric figures or topological spaces that are preserved under operations of continuous deformation. Algebraic topology relies on algebraic theories such as group theory to classify topological spaces. For example, homotopy groups classify topological spaces based on the existence of loops or holes in them.
Number theory is concerned with the properties of and relations between integers. Algebraic number theory applies algebraic methods and principles to this field of inquiry. Examples are the use of algebraic expressions to describe general laws, like Fermat's Last Theorem, and of algebraic structures to analyze the behavior of numbers, such as the ring of integers. The related field of combinatorics uses algebraic techniques to solve problems related to counting, arrangement, and combination of discrete objects. An example in algebraic combinatorics is the application of group theory to analyze graphs and symmetries. The insights of algebra are also relevant to calculus, which uses mathematical expressions to examine rates of change and accumulation. It relies on algebra, for instance, to understand how these expressions can be transformed and what role variables play in them. Algebraic logic employs the methods of algebra to describe and analyze the structures and patterns that underlie logical reasoning, exploring both the relevant mathematical structures themselves and their application to concrete problems of logic. It includes the study of Boolean algebra to describe propositional logic as well as the formulation and analysis of algebraic structures corresponding to more complex systems of logic.
Algebraic methods are also commonly employed in other areas, like the natural sciences. For example, they are used to express scientific laws and solve equations in physics, chemistry, and biology. Similar applications are found in fields like economics, geography, engineering (including electronics and robotics), and computer science to express relationships, solve problems, and model systems. Linear algebra plays a central role in artificial intelligence and machine learning, for instance, by enabling the efficient processing and analysis of large datasets. Various fields rely on algebraic structures investigated by abstract algebra. For example, physical sciences like crystallography and quantum mechanics make extensive use of group theory, which is also employed to study puzzles such as Sudoku and Rubik's cubes, and origami. Both coding theory and cryptology rely on abstract algebra to solve problems associated with data transmission, like avoiding the effects of noise and ensuring data security.
Education
Algebra education mostly focuses on elementary algebra, which is one of the reasons why elementary algebra is also called school algebra. It is usually not introduced until secondary education since it requires mastery of the fundamentals of arithmetic while posing new cognitive challenges associated with abstract reasoning and generalization. It aims to familiarize students with the formal side of mathematics by helping them understand mathematical symbolism, for example, how variables can be used to represent unknown quantities. An additional difficulty for students lies in the fact that, unlike arithmetic calculations, algebraic expressions are often difficult to solve directly. Instead, students need to learn how to transform them according to certain laws, often with the goal of determining an unknown quantity.
Some tools to introduce students to the abstract side of algebra rely on concrete models and visualizations of equations, including geometric analogies, manipulatives including sticks or cups, and "function machines" representing equations as flow diagrams. One method uses balance scales as a pictorial approach to help students grasp basic problems of algebra. The mass of some objects on the scale is unknown and represents variables. Solving an equation corresponds to adding and removing objects on both sides in such a way that the sides stay in balance until the only object remaining on one side is the object of unknown mass. Word problems are another tool to show how algebra is applied to real-life situations. For example, students may be presented with a situation in which Naomi's brother has twice as many apples as Naomi. Given that both together have twelve apples, students are then asked to find an algebraic equation that describes this situation (x + 2x = 12) and to determine how many apples Naomi has (x = 4, i.e., four apples).
At the university level, mathematics students encounter advanced algebra topics from linear and abstract algebra. Initial undergraduate courses in linear algebra focus on matrices, vector spaces, and linear maps. Upon completing them, students are usually introduced to abstract algebra, where they learn about algebraic structures like groups, rings, and fields, as well as the relations between them. The curriculum typically also covers specific instances of algebraic structures, such as the systems of the rational numbers, the real numbers, and the polynomials.
| Mathematics | Algebra | null |
18717261 | https://en.wikipedia.org/wiki/Trigonometry | Trigonometry | Trigonometry is a branch of mathematics concerned with relationships between angles and side lengths of triangles. In particular, the trigonometric functions relate the angles of a right triangle with ratios of its side lengths. The field emerged in the Hellenistic world during the 3rd century BC from applications of geometry to astronomical studies. The Greeks focused on the calculation of chords, while mathematicians in India created the earliest-known tables of values for trigonometric ratios (also called trigonometric functions) such as sine.
Throughout history, trigonometry has been applied in areas such as geodesy, surveying, celestial mechanics, and navigation.
Trigonometry is known for its many identities. These trigonometric identities are commonly used to rewrite trigonometric expressions with the aim of simplifying an expression, finding a more useful form of an expression, or solving an equation.
History
Sumerian astronomers studied angle measure, using a division of circles into 360 degrees. They, and later the Babylonians, studied the ratios of the sides of similar triangles and discovered some properties of these ratios but did not turn that into a systematic method for finding sides and angles of triangles. The ancient Nubians used a similar method.
In the 3rd century BC, Hellenistic mathematicians such as Euclid and Archimedes studied the properties of chords and inscribed angles in circles, and they proved theorems that are equivalent to modern trigonometric formulae, although they presented them geometrically rather than algebraically. In 140 BC, Hipparchus (from Nicaea, Asia Minor) gave the first tables of chords, analogous to modern tables of sine values, and used them to solve problems in trigonometry and spherical trigonometry. In the 2nd century AD, the Greco-Egyptian astronomer Ptolemy (from Alexandria, Egypt) constructed detailed trigonometric tables (Ptolemy's table of chords) in Book 1, chapter 11 of his Almagest. Ptolemy used chord length to define his trigonometric functions, a minor difference from the sine convention we use today. (The value we call sin(θ) can be found by looking up the chord length for twice the angle of interest (2θ) in Ptolemy's table, and then dividing that value by two.) Centuries passed before more detailed tables were produced, and Ptolemy's treatise remained in use for performing trigonometric calculations in astronomy throughout the next 1200 years in the medieval Byzantine, Islamic, and, later, Western European worlds.
The modern definition of the sine is first attested in the Surya Siddhanta, and its properties were further documented in the 5th century (AD) by Indian mathematician and astronomer Aryabhata. These Greek and Indian works were translated and expanded by medieval Islamic mathematicians. In 830 AD, Persian mathematician Habash al-Hasib al-Marwazi produced the first table of cotangents. By the 10th century AD, in the work of Persian mathematician Abū al-Wafā' al-Būzjānī, all six trigonometric functions were used. Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values. He also made important innovations in spherical trigonometry. The Persian polymath Nasir al-Din al-Tusi has been described as the creator of trigonometry as a mathematical discipline in its own right. He was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form. He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his On the Sector Figure, he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both these laws. Knowledge of trigonometric functions and methods reached Western Europe via Latin translations of Ptolemy's Greek Almagest as well as the works of Persian and Arab astronomers such as Al Battani and Nasir al-Din al-Tusi. One of the earliest works on trigonometry by a northern European mathematician is De Triangulis by the 15th century German mathematician Regiomontanus, who was encouraged to write, and provided with a copy of the Almagest, by the Byzantine Greek scholar cardinal Basilios Bessarion with whom he lived for several years. At the same time, another translation of the Almagest from Greek into Latin was completed by the Cretan George of Trebizond. Trigonometry was still so little known in 16th-century northern Europe that Nicolaus Copernicus devoted two chapters of De revolutionibus orbium coelestium to explain its basic concepts.
Driven by the demands of navigation and the growing need for accurate maps of large geographic areas, trigonometry grew into a major branch of mathematics. Bartholomaeus Pitiscus was the first to use the word, publishing his Trigonometria in 1595. Gemma Frisius described for the first time the method of triangulation still used today in surveying. It was Leonhard Euler who fully incorporated complex numbers into trigonometry. The works of the Scottish mathematicians James Gregory in the 17th century and Colin Maclaurin in the 18th century were influential in the development of trigonometric series. Also in the 18th century, Brook Taylor defined the general Taylor series.
Trigonometric ratios
Trigonometric ratios are the ratios between edges of a right triangle. These ratios depend only on one acute angle of the right triangle, since any two right triangles with the same acute angle are similar.
So, these ratios define functions of this angle that are called trigonometric functions. Explicitly, they are defined below as functions of the known angle A, where a, b and h refer to the lengths of the sides in the accompanying figure.
In the following definitions, the hypotenuse is the side opposite to the 90-degree angle in a right triangle; it is the longest side of the triangle and one of the two sides adjacent to angle A. The adjacent leg is the other side that is adjacent to angle A. The opposite side is the side that is opposite to angle A. The terms perpendicular and base are sometimes used for the opposite and adjacent sides respectively. See below under Mnemonics.
Sine (denoted sin), defined as the ratio of the side opposite the angle to the hypotenuse.
Cosine (denoted cos), defined as the ratio of the adjacent leg (the side of the triangle joining the angle to the right angle) to the hypotenuse.
Tangent (denoted tan), defined as the ratio of the opposite leg to the adjacent leg.
The reciprocals of these ratios are named the cosecant (csc), secant (sec), and cotangent (cot), respectively: csc A = 1/sin A, sec A = 1/cos A, and cot A = 1/tan A.
The cosine, cotangent, and cosecant are so named because they are respectively the sine, tangent, and secant of the complementary angle, "complementary" being abbreviated to "co-".
With these functions, one can answer virtually all questions about arbitrary triangles by using the law of sines and the law of cosines. These laws can be used to compute the remaining angles and sides of any triangle as soon as two sides and their included angle or two angles and a side or three sides are known.
Mnemonics
A common use of mnemonics is to remember facts and relationships in trigonometry. For example, the sine, cosine, and tangent ratios in a right triangle can be remembered by representing them and their corresponding sides as strings of letters. For instance, a mnemonic is SOH-CAH-TOA:
Sine = Opposite ÷ Hypotenuse
Cosine = Adjacent ÷ Hypotenuse
Tangent = Opposite ÷ Adjacent
One way to remember the letters is to sound them out phonetically (i.e., pronounced as a single word, "sohcahtoa", similar to Krakatoa). Another method is to expand the letters into a sentence, such as "Some Old Hippie Caught Another Hippie Trippin' On Acid" (a sentence more appropriate for high schools is "Some Old Horse Came A-Hopping Through Our Alley").
The unit circle and common trigonometric values
Trigonometric ratios can also be represented using the unit circle, which is the circle of radius 1 centered at the origin in the plane. In this setting, the terminal side of an angle A placed in standard position will intersect the unit circle in a point (x, y), where x = cos A and y = sin A. This representation allows for the calculation of commonly found trigonometric values, such as those in the following table:
angle A: 0°, 30°, 45°, 60°, 90°
sin A: 0, 1/2, √2/2, √3/2, 1
cos A: 1, √3/2, √2/2, 1/2, 0
tan A: 0, √3/3, 1, √3, undefined
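These values can be reproduced numerically with Python's math module (an illustrative check):
import math

for deg in (0, 30, 45, 60, 90):
    a = math.radians(deg)
    tan = "undefined" if deg == 90 else f"{math.tan(a):.4f}"
    print(f"{deg:>2} deg  sin={math.sin(a):.4f}  cos={math.cos(a):.4f}  tan={tan}")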
Trigonometric functions of real or complex variables
Using the unit circle, one can extend the definitions of trigonometric ratios to all positive and negative arguments (see trigonometric function).
Graphs of trigonometric functions
The following table summarizes the properties of the graphs of the six main trigonometric functions:
Inverse trigonometric functions
Because the six main trigonometric functions are periodic, they are not injective (or one-to-one) and thus are not invertible. By restricting the domain of a trigonometric function, however, it can be made invertible.
The names of the inverse trigonometric functions, together with their domains and range, can be found in the following table:
Power series representations
When considered as functions of a real variable, the trigonometric ratios can be represented by an infinite series. For instance, sine and cosine have the following representations:
sin x = x − x³/3! + x⁵/5! − x⁷/7! + ⋯
cos x = 1 − x²/2! + x⁴/4! − x⁶/6! + ⋯
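A direct translation of the sine series into Python shows how quickly its partial sums converge (illustrative sketch):
import math

def sin_series(x, terms=10):
    # Partial sum of x - x^3/3! + x^5/5! - ...
    return sum((-1)**k * x**(2*k + 1) / math.factorial(2*k + 1)
               for k in range(terms))

print(sin_series(1.0), math.sin(1.0))  # both ~0.8414709848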
With these definitions the trigonometric functions can be defined for complex numbers. When extended as functions of real or complex variables, the following formula holds for the complex exponential: e^(x + iy) = e^x (cos y + i sin y).
This complex exponential function, written in terms of trigonometric functions, is particularly useful.
Calculating trigonometric functions
Trigonometric functions were among the earliest uses for mathematical tables. Such tables were incorporated into mathematics textbooks and students were taught to look up values and how to interpolate between the values listed to get higher accuracy. Slide rules had special scales for trigonometric functions.
Scientific calculators have buttons for calculating the main trigonometric functions (sin, cos, tan, and sometimes cis and their inverses). Most allow a choice of angle measurement methods: degrees, radians, and sometimes gradians. Most computer programming languages provide function libraries that include the trigonometric functions. The floating point unit hardware incorporated into the microprocessor chips used in most personal computers has built-in instructions for calculating trigonometric functions.
Other trigonometric functions
In addition to the six ratios listed earlier, there are additional trigonometric functions that were historically important, though seldom used today. These include the chord (crd θ = 2 sin(θ/2)), the versine (versin θ = 1 − cos θ, which appeared in the earliest tables), the coversine (coversin θ = 1 − sin θ), the haversine (haversin θ = (1 − cos θ)/2 = sin²(θ/2)), the exsecant (exsec θ = sec θ − 1), and the excosecant (excsc θ = csc θ − 1). See List of trigonometric identities for more relations between these functions.
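The haversine survives in practice in the haversine formula for great-circle distance; a common Python rendering is sketched below (6371 km is the conventional mean Earth radius, and the city coordinates are rough):
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two points given in degrees.
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    h = math.sin(dphi / 2)**2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2)**2
    return 2 * 6371.0 * math.asin(math.sqrt(h))

print(round(haversine_km(51.5, -0.13, 48.85, 2.35)))  # ~343 km, London to Paris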
Applications
Astronomy
For centuries, spherical trigonometry has been used for locating solar, lunar, and stellar positions, predicting eclipses, and describing the orbits of the planets.
In modern times, the technique of triangulation is used in astronomy to measure the distance to nearby stars, as well as in satellite navigation systems.
Navigation
Historically, trigonometry has been used for locating latitudes and longitudes of sailing vessels, plotting courses, and calculating distances during navigation.
Trigonometry is still used in navigation through such means as the Global Positioning System and artificial intelligence for autonomous vehicles.
Surveying
In land surveying, trigonometry is used in the calculation of lengths, areas, and relative angles between objects.
On a larger scale, trigonometry is used in geography to measure distances between landmarks.
Periodic functions
The sine and cosine functions are fundamental to the theory of periodic functions, such as those that describe sound and light waves. Fourier discovered that every continuous, periodic function could be described as an infinite sum of trigonometric functions.
Even non-periodic functions can be represented as an integral of sines and cosines through the Fourier transform. This has applications to quantum mechanics and communications, among other fields.
Optics and acoustics
Trigonometry is useful in many physical sciences, including acoustics, and optics. In these areas, they are used to describe sound and light waves, and to solve boundary- and transmission-related problems.
Other applications
Other fields that use trigonometry or trigonometric functions include music theory, geodesy, audio synthesis, architecture, electronics, biology, medical imaging (CT scans and ultrasound), chemistry, number theory (and hence cryptology), seismology, meteorology, oceanography, image compression, phonetics, economics, electrical engineering, mechanical engineering, civil engineering, computer graphics, cartography, crystallography and game development.
Identities
Trigonometry has been noted for its many identities, that is, equations that are true for all possible inputs.
Identities involving only angles are known as trigonometric identities. Other equations, known as triangle identities, relate both the sides and angles of a given triangle.
Triangle identities
In the following identities, A, B and C are the angles of a triangle and a, b and c are the lengths of sides of the triangle opposite the respective angles (as shown in the diagram).
Law of sines
The law of sines (also known as the "sine rule") for an arbitrary triangle states:
a / sin A = b / sin B = c / sin C = 2R = abc / (2Δ)
where Δ is the area of the triangle and R is the radius of the circumscribed circle of the triangle: R = abc / (4Δ).
Law of cosines
The law of cosines (known as the cosine formula, or the "cos rule") is an extension of the Pythagorean theorem to arbitrary triangles:
c² = a² + b² − 2ab cos C
or equivalently:
cos C = (a² + b² − c²) / (2ab)
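A worked numerical instance (with our own example values), solving for the third side in Python:
import math

a, b = 8.0, 5.0
C = math.radians(60)                          # included angle
c = math.sqrt(a**2 + b**2 - 2*a*b*math.cos(C))
print(c)                                      # 7.0: a 5-7-8 triangle has a 60-degree angle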
Law of tangents
The law of tangents, developed by François Viète, is an alternative to the law of cosines when solving for the unknown edges of a triangle, providing simpler computations when using trigonometric tables. It is given by:
(a − b) / (a + b) = tan((A − B)/2) / tan((A + B)/2)
Area
Given two sides a and b and the angle between the sides C, the area of the triangle is given by half the product of the lengths of two sides and the sine of the angle between the two sides: Area = (1/2)ab sin C.
Trigonometric identities
Pythagorean identities
The following trigonometric identities are related to the Pythagorean theorem and hold for any value:
sin²A + cos²A = 1
1 + tan²A = sec²A
1 + cot²A = csc²A
The second and third equations are derived from dividing the first equation by cos²A and sin²A, respectively.
Euler's formula
Euler's formula, which states that e^(ix) = cos x + i sin x, produces the following analytical identities for sine, cosine, and tangent in terms of e and the imaginary unit i:
sin x = (e^(ix) − e^(−ix)) / (2i)
cos x = (e^(ix) + e^(−ix)) / 2
tan x = (e^(ix) − e^(−ix)) / (i(e^(ix) + e^(−ix)))
Other trigonometric identities
Other commonly used trigonometric identities include the half-angle identities, the angle sum and difference identities, and the product-to-sum identities.
| Mathematics | Geometry and topology | null |
13010605 | https://en.wikipedia.org/wiki/Load%20management | Load management | Load management, also known as demand-side management (DSM), is the process of balancing the supply of electricity on the network with the electrical load by adjusting or controlling the load rather than the power station output. This can be achieved by direct intervention of the utility in real time, by the use of frequency sensitive relays triggering the circuit breakers (ripple control), by time clocks, or by using special tariffs to influence consumer behavior. Load management allows utilities to reduce demand for electricity during peak usage times (peak shaving), which can, in turn, reduce costs by eliminating the need for peaking power plants. In addition, some peaking power plants can take more than an hour to bring on-line, which makes load management even more critical if, for example, a plant goes off-line unexpectedly. Load management can also help reduce harmful emissions, since peaking plants or backup generators are often dirtier and less efficient than base load power plants. New load-management technologies are constantly under development, both by private industry and by public entities.
Brief history
Modern utility load management began about 1938, using ripple control. By 1948 ripple control was a practical system in wide use.
The Czechs first used ripple control in the 1950s. Early transmitters were low-power compared to modern systems, at only 50 kilovolt-amperes. They were rotating generators that fed a 1050 Hz signal into transformers attached to power distribution networks. Early receivers were electromechanical relays. Later, in the 1970s, transmitters with high-power semiconductors were used. These are more reliable because they have no moving parts. Modern Czech systems send a digital "telegram". Each telegram takes about thirty seconds to send. It has pulses about one second long. There are several formats, used in different districts.
In 1972, Theodore George "Ted" Paraskevakos, while working for Boeing in Huntsville, Alabama, developed a sensor monitoring system which used digital transmission for security, fire, and medical alarm systems as well as meter-reading capabilities for all utilities. This technology was a spin-off of his patented automatic telephone line identification system, now known as caller ID. In 1974, Paraskevakos was awarded a U.S. patent for this technology.
At the request of the Alabama Power Company, Paraskevakos developed a load-management system along with automatic meter-reading technology. In doing so, he utilized the ability of the system to monitor the speed of the watt power meter disc and, consequently, power consumption. This information, along with the time of day, gave the power company the ability to instruct individual meters to manage water heater and air conditioning consumption in order to prevent peaks in usage during the high consumption portions of the day. For this approach, Paraskevakos was awarded multiple patents.
Advantages and operating principles
Since electrical energy is a form of energy that cannot be effectively stored in bulk, it must be generated, distributed, and consumed immediately. When the load on a system approaches the maximum generating capacity, network operators must either find additional supplies of energy or find ways to curtail the load, hence load management. If they are unsuccessful, the system will become unstable and blackouts can occur.
Long-term load management planning may begin by building sophisticated models to describe the physical properties of the distribution network (i.e. topology, capacity, and other characteristics of the lines), as well as the load behavior. The analysis may include scenarios that account for weather forecasts, the predicted impact of proposed load-shed commands, estimated time-to-repair for off-line equipment, and other factors.
The utilization of load management can help a power plant achieve a higher capacity factor, a measure of average capacity utilization. Capacity factor is a measure of the output of a power plant compared to the maximum output it could produce. It is often defined as the ratio of average load to capacity or the ratio of average load to peak load in a period of time. A higher load factor is advantageous because a power plant may be less efficient at low load factors, a high load factor means fixed costs are spread over more kWh of output (resulting in a lower price per unit of electricity), and a higher load factor means greater total output. If the load factor is affected by non-availability of fuel, maintenance shut-downs, unplanned breakdowns, or reduced demand (as consumption patterns fluctuate throughout the day), the generation has to be adjusted, since grid energy storage is often prohibitively expensive.
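As a worked illustration of these ratios (the plant figures below are invented for the example, not data from any real utility):

```python
# Load factor = average load / peak load over a period.
average_load_mw = 600.0   # hypothetical average demand
peak_load_mw = 800.0      # hypothetical peak demand
load_factor = average_load_mw / peak_load_mw        # 0.75

# Capacity factor = energy actually produced / maximum possible output.
capacity_mw = 500.0       # hypothetical nameplate capacity
energy_mwh = 3_066_000.0  # hypothetical annual output
hours_per_year = 8760
capacity_factor = energy_mwh / (capacity_mw * hours_per_year)  # ~0.70

print(f"load factor = {load_factor:.2f}, capacity factor = {capacity_factor:.2f}")
```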
Smaller utilities that buy power instead of generating their own find that they can also benefit by installing a load control system. The penalties they must pay to the energy provider for peak usage can be significantly reduced. Many report that a load control system can pay for itself in a single season.
Comparisons to demand response
When the decision is made to curtail load, it is made on the basis of system reliability. The utility in a sense "owns the switch" and sheds loads only when the stability or reliability of the electrical distribution system is threatened. The utility (being in the business of generating, transporting, and delivering electricity) will not disrupt its business process without due cause. Load management, when done properly, is non-invasive and imposes no hardship on the consumer. The load should be shifted to off-peak hours.
Demand response places the "on-off switch" in the hands of the consumer using devices such as a smart grid controlled load control switch. While many residential consumers pay a flat rate for electricity year-round, the utility's costs actually vary constantly, depending on demand, the distribution network, and the composition of the company's electricity generation portfolio. In a free market, the wholesale price of energy varies widely throughout the day. Demand response programs such as those enabled by smart grids attempt to incentivize the consumer to limit usage based upon cost concerns. As costs rise during the day (as the system reaches peak capacity and more expensive peaking power plants are used), a free market economy should allow the price to rise, and the resulting rise in price should produce a corresponding drop in demand for the commodity. While this works for predictable shortages, many crises develop within seconds due to unforeseen equipment failures. They must be resolved in the same time-frame in order to avoid a power blackout. Many utilities that are interested in demand response have also expressed an interest in load control capability so that they might be able to operate the "on-off switch" before price updates could be published to the consumers.
The application of load control technology continues to grow today with the sale of both radio frequency and powerline communication based systems. Certain types of smart meter systems can also serve as load control systems. Charge control systems can prevent the recharging of electric vehicles during peak hours. Vehicle-to-grid systems can return electricity from an electric vehicle's batteries to the utility, or they can throttle the recharging of the vehicle batteries to a slower rate.
Ripple control
Ripple control is a common form of load control, and is used in many countries around the world, including the United States, Australia, Czech Republic, New Zealand, the United Kingdom, Germany, the Netherlands, and South Africa. Ripple control involves superimposing a higher-frequency signal (usually between 100 and 1600 Hz) onto the standard 50–60 Hz of the main power signal. When receiver devices attached to non-essential residential or industrial loads receive this signal, they shut down the load until the signal is disabled or another frequency signal is received.
Early implementations of ripple control occurred during World War II in various parts of the world using a system that communicates over the electrical distribution system. Early systems used rotating generators attached to distribution networks through transformers. Ripple control systems are generally paired with a two- (or more) tiered pricing system, whereby electricity is more expensive during peak times (evenings) and cheaper during low-usage times (early morning).
Affected residential devices will vary by region, but may include residential electric hot-water heaters, air conditioners, pool pumps, or crop-irrigation pumps. In a distribution network outfitted with load control, these devices are equipped with communicating controllers that can run a program limiting the duty cycle of the equipment under control. Consumers are usually rewarded for participating in the load control program by paying a reduced rate for energy. Proper load management by the utility allows them to practice load shedding to avoid rolling blackouts and reduce costs.
Ripple control can be unpopular because devices can sometimes fail to receive the signal to turn on comfort equipment, e.g. hot water heaters or baseboard electrical heaters. Modern electronic receivers are more reliable than old electromechanical systems, and some modern systems repeat the telegrams to turn on comfort devices. In addition, by popular demand, many ripple control receivers have a switch to force comfort devices on.
Modern ripple controls send a digital telegram, from 30 to 180 seconds long. Originally these were received by electromechanical relays. Now they are often received by microprocessors. Many systems repeat telegrams to assure that comfort devices (e.g. water heaters) are turned on. Since the broadcast frequencies are in the range of human hearing, they often vibrate wires, filament light-bulbs or transformers in an audible way.
The telegrams follow different standards in different areas. For example, in the Czech Republic, different districts use "ZPA II 32S", "ZPA II 64S" and Versacom. ZPA II 32S sends a 2.33 second on, a 2.99 second off, then 32 one-second pulses (either on or off), with an "off time" between each pulse of one second. ZPA II 64S has a much shorter off time, permitting 64 pulses to be sent, or skipped.
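The pulse layout described above can be written out as a simple timing table. The sketch below is assembled only from the timings given in this section (the real ZPA II 32S framing has details not described here, and the function name is invented for illustration):

```python
def zpa_ii_32s_frame(bits):
    """Approximate timing of a ZPA II 32S-style telegram.

    Returns (state, seconds) pairs: a 2.33 s start pulse, a 2.99 s gap,
    then 32 one-second data pulses, each followed by a one-second off time.
    Illustrative only; not a complete description of the real protocol.
    """
    assert len(bits) == 32
    frame = [("on", 2.33), ("off", 2.99)]
    for bit in bits:
        frame.append(("on" if bit else "off", 1.0))  # data pulse
        frame.append(("off", 1.0))                   # inter-pulse off time
    return frame

frame = zpa_ii_32s_frame([1, 0] * 16)
print(round(sum(t for _, t in frame), 2), "seconds")  # total frame duration
```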
Nearby regions use different frequencies or telegrams, to assure that telegrams operate only in the desired region. The transformers that attach local grids to interties intentionally do not have the equipment (bridging capacitors) to pass ripple control signals into long-distance power lines.
Each data pulse of a telegram doubles the number of possible commands, so that 32 pulses permit 2^32 distinct commands. However, in practice, particular pulses are linked to particular types of device or service. Some telegrams have unusual purposes. For example, most ripple control systems have a telegram to set clocks in attached devices, e.g. to midnight.
Zellweger off-peak is one common brand of ripple control systems.
Radio ripple control
In recent years, radio-based load management (sometimes known as "radio ripple control") signalling systems have been introduced to replace traditional power wire ripple signalling systems. Some radio-based load management systems have been criticised for lacking sufficient security measures, potentially compromising power grid security or allowing street lighting to be turned on or off.
Frequency-based decentralized demand control
Greater loads physically slow the rotors of a grid's synchronized generators. This causes AC mains to have a slightly reduced frequency when a grid is heavily loaded. The reduced frequency is immediately sensible across the entire grid. Inexpensive local electronics can easily and precisely measure mains frequency and turn off sheddable loads. In some cases, this feature is nearly free, e.g. if the controlling equipment (such as an electric power meter, or the thermostat in an air-conditioning system) already has a microcontroller. Most electronic electric power meters internally measure frequency, and require only demand control relays to turn off equipment. In other equipment, often the only extra equipment needed is a resistor divider to sense the mains cycle and a Schmitt trigger (a small integrated circuit) so the microcontroller's digital input can sense a reliable, fast digital edge. A Schmitt trigger is already standard equipment on many microcontrollers.
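A minimal sketch of the idea, assuming a hypothetical controller that can read the mains frequency and switch one sheddable load (the 50 Hz nominal value and both thresholds are illustrative, not taken from any standard):

```python
SHED_BELOW_HZ = 49.80     # shed the load when the grid slows under heavy load
RESTORE_ABOVE_HZ = 49.95  # hysteresis: restore only after clear recovery

def control_step(measured_hz: float, load_on: bool) -> bool:
    """One control iteration: decide whether the sheddable load may run."""
    if load_on and measured_hz < SHED_BELOW_HZ:
        return False  # heavily loaded grid: turn the load off
    if not load_on and measured_hz > RESTORE_ABOVE_HZ:
        return True   # grid recovered: comfort load comes back automatically
    return load_on

# Simulated frequency trace on a 50 Hz grid: normal -> overload -> recovery.
state = True
for hz in (50.00, 49.90, 49.70, 49.60, 49.85, 49.97, 50.00):
    state = control_step(hz, state)
    print(f"{hz:.2f} Hz -> load {'ON' if state else 'OFF'}")
```

The hysteresis band mirrors the behaviour described in the next paragraph: because the decision is purely local, the load re-enables itself as the grid frequency recovers, with no telegram needed.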
The main advantage over ripple control is greater customer convenience: unreceived ripple control telegrams can cause a water heater to remain off, causing a cold shower, or an air conditioner to remain off, resulting in a sweltering home. In contrast, as the grid recovers, its frequency naturally rises to normal, so frequency-controlled load control automatically re-enables water heaters, air conditioners and other comfort equipment. The cost of equipment can be less, and there are no concerns about overlapping or unreached ripple control regions, mis-received codes, transmitter power, etc.
The main disadvantage compared to ripple control is a less fine-grained control. For example, a grid authority has only a limited ability to select which loads are shed. In controlled war-time economies, this can be a substantial disadvantage.
The system was invented at the Pacific Northwest National Laboratory (PNNL) in the early 21st century, and has been shown to stabilize grids.
Examples of schemes
In many countries, including the United States, the United Kingdom, and France, power grids routinely use privately held emergency diesel generators in load management schemes.
Florida
The largest residential load control system in the world is found in Florida and is managed by Florida Power and Light. It utilizes 800,000 load control transponders (LCTs) and controls 1,000 MW of electrical power (2,000 MW in an emergency). FPL has been able to avoid the construction of numerous new power plants due to their load management programs.
Australia and New Zealand
Since the 1950s, Australia and New Zealand have had a system of load management based on ripple control, allowing the electricity supply for domestic and commercial water storage heaters to be switched off and on, as well as allowing remote control of nightstore heaters and street lights. Ripple injection equipment located within each local distribution network signals to ripple control receivers at the customer's premises. Control may be done either manually by the local distribution network company in response to local outages or requests to reduce demand from the transmission system operator (i.e. Transpower), or automatically when injection equipment detects mains frequency falling below 49.2 Hz. Ripple control receivers are assigned to one of several ripple channels to allow the network company to turn off supply on only part of the network, and to allow staged restoration of supply to reduce the impact of a surge in demand when power is restored to water heaters after a period of time off.
Depending on the area, the consumer may have two electricity meters, one for normal supply ("Anytime") and one for the load-managed supply ("Controlled"), with Controlled supply billed at a lower rate per kilowatt-hour than Anytime supply. For those with load-managed supply but only a single meter, electricity is billed at the "Composite" rate, priced between Anytime and Controlled.
Czech Republic
The Czechs have operated ripple control systems since the 1950s.
France
France has an EJP tariff, which allows it to disconnect certain loads and to encourage consumers to disconnect certain loads. This tariff is no longer available for new clients (as of July 2009). The Tempo tariff also includes different types of days with different prices, but has been discontinued for new clients as well (as of July 2009). Reduced prices during nighttime are available for customers for a higher monthly fee.
Germany
The distribution system operator Westnetz and gridX piloted a load management solution. The solution enables the grid operator to communicate with local energy management systems and adjust the available load for EV charging in response to the state of the grid.
United Kingdom
In 2009, RLtec in the UK reported that domestic refrigerators were being sold fitted with its dynamic load response systems. In 2011 it was announced that the Sainsbury's supermarket chain would use dynamic demand technology on its heating and ventilation equipment.
In the UK, night storage heaters are often used with a time-switched off-peak supply option (Economy 7 or Economy 10). There is also a programme that allows industrial loads to be disconnected using circuit breakers triggered automatically by frequency-sensitive relays fitted on site. This operates in conjunction with Standing Reserve, a programme using diesel generators. These can also be remotely switched using the BBC Radio 4 longwave Radio Teleswitch service.
SP Transmission deployed a dynamic load management scheme in the Dumfries and Galloway area, using real-time monitoring of embedded generation and disconnecting it should an overload be detected on the transmission network.
| Technology | Electricity transmission and distribution | null |
19857818 | https://en.wikipedia.org/wiki/Mandibular%20fracture | Mandibular fracture | Mandibular fracture, also known as fracture of the jaw, is a break through the mandibular bone. In about 60% of cases the break occurs in two places. It may result in a decreased ability to fully open the mouth. Often the teeth will not feel properly aligned or there may be bleeding of the gums. Mandibular fractures occur most commonly among males in their 30s.
Mandibular fractures are typically the result of trauma. This can include a fall onto the chin or a hit from the side. Rarely they may be due to osteonecrosis or tumors in the bone. The most common area of fracture is at the condyle (36%), body (21%), angle (20%) and symphysis (14%). Rarely the fracture may occur at the ramus (3%) or coronoid process (2%). While a diagnosis can occasionally be made with plain X-ray, modern CT scans are more accurate.
Immediate surgery is not necessarily required. Occasionally people may go home and follow up for surgery in the next few days. A number of surgical techniques may be used including maxillomandibular fixation and open reduction internal fixation (ORIF). People are often put on antibiotics such as penicillin for a brief period of time. The evidence to support this practice, however, is poor.
Signs and symptoms
General
By far, the two most common symptoms described are pain and the feeling that teeth no longer correctly meet (traumatic malocclusion, or disocclusion). The teeth are very sensitive to pressure (proprioception), so even a small change in the location of the teeth will generate this sensation. People will also be very sensitive to touching the area of the jaw that is broken, or in the case of condylar fracture the area just in front of the tragus of the ear.
Other symptoms may include loose teeth (teeth on either side of the fracture will feel loose because the fracture is mobile), numbness (because the inferior alveolar nerve runs along the jaw and can be compressed by a fracture) and trismus (difficulty opening the mouth).
Outside the mouth, signs of swelling, bruising and deformity can all be seen. Condylar fractures are deep, so it is rare to see significant swelling, although the trauma can cause fracture of the bone on the anterior aspect of the external auditory meatus, so bruising or bleeding can sometimes be seen in the ear canal. Mouth opening can be diminished (less than 3 cm). There can be numbness or altered sensation (anesthesia/paraesthesia) in the chin and lower lip (the distribution of the mental nerve).
Intraorally, if the fracture occurs in the tooth-bearing area, a step may be seen between the teeth on either side of the fracture, or a space can be seen (often mistaken for a lost tooth), along with bleeding from the gingiva in the area. There can be an open bite where the lower teeth no longer meet the upper teeth. In the case of a unilateral condylar fracture the back teeth on the side of the fracture will meet and the open bite will get progressively greater towards the other side of the mouth.
Sometimes bruising will develop in the floor of the mouth (sublingual ecchymosis) and the fracture can be moved by moving either side of the fracture segment up and down. For fractures that occur in the non-tooth bearing area (condyle, ramus, and sometimes the angle) an open bite is an important clinical feature since little else, other than swelling, may be apparent.
Condylar
This type of fractured mandible can involve one condyle (unilateral) or both (bilateral). Unilateral condylar fracture may cause restricted and painful jaw movement. There may be swelling over the temporomandibular joint region and bleeding from the ear because of lacerations to the external auditory meatus. The hematoma may spread downwards and backwards behind the ear, which may be confused with Battle's sign (a sign of a base of skull fracture), although this is an uncommon finding, so if present, intra-cranial injury must be ruled out. If the bones fracture and overlie each other there may be shortening of the height of the ramus. This results in gagging of the teeth on the fractured side (the teeth meet too soon on the fractured side, and not on the non-fractured side, i.e. an "open bite" that becomes progressively worse towards the unaffected side). When the mouth is opened, there may be deviation of the mandible towards the fractured side. Bilateral condylar fractures may cause the above signs and symptoms, but on both sides. Malocclusion and restricted jaw movement are usually more severe. Bilateral body or parasymphysis fractures are sometimes termed "flail mandible", and can cause involuntary posterior movement of the tongue with subsequent obstruction of the upper airway. Displacement of the condyle through the roof of the glenoid fossa and into the middle cranial fossa is rare. Other rare complications of mandibular trauma include internal carotid artery injury, and obliteration of the ear canal due to posterior condylar dislocation. Bilateral condylar fractures combined with a symphyseal fracture are sometimes termed a guardsman's fracture. The name comes from this injury occurring in soldiers who faint on parade grounds and strike the floor with their chin.
Diagnosis
Plain film radiography
Traditionally, plain films of the mandible would be exposed but had lower sensitivity and specificity owing to overlap of structures. Views included AP (for parasymphysis), lateral oblique (body, ramus, angle, coronoid process) and Towne's (condyle) views. Condylar fractures can be especially difficult to identify, depending on the direction of condylar displacement or dislocation, so multiple views of the condyle are usually examined, with two views at perpendicular angles.
Panoramic radiography
Panoramic radiographs are tomograms where the mandible is in the focal trough and show a flat image of the mandible. Because the curve of the mandible appears in a 2-dimensional image, fractures are easier to spot leading to an accuracy similar to CT except in the condyle region. In addition, broken, missing or malaligned teeth can often be appreciated on a panoramic image which is frequently lost in plain films. Medial/lateral displacement of the fracture segments and especially the condyle are difficult to gauge so the view is sometimes augmented with plain film radiography or computed tomography for more complex mandible fractures.
Computed tomography
Computed tomography is the most sensitive and specific of the imaging techniques. The facial bones can be visualized as slices through the skeleton in either the axial, coronal or sagittal planes. Images can be reconstructed into a 3-dimensional view, to give a better sense of the displacement of various fragments. 3D reconstruction, however, can mask smaller fractures owing to volume averaging, scatter artifact and surrounding structures simply blocking the view of underlying areas.
Research has shown that panoramic radiography is similar to computed tomography in its diagnostic accuracy for mandible fractures and both are more accurate than plain film radiography. The indications to use CT for mandible fracture vary by region, but it does not seem to add to diagnosis or treatment planning except for comminuted or avulsive type fractures, although there is better clinician agreement on the location and absence of fractures with CT compared to panoramic radiography.
Classification
There are various classification systems of mandibular fractures in use.
Location
This is the most useful classification, because both the signs and symptoms, and also the treatment are dependent upon the location of the fracture. The mandible is usually divided into the following zones for the purpose of describing the location of a fracture (see diagram): condylar, coronoid process, ramus, angle of mandible, body (molar and premolar areas), parasymphysis and symphysis.
Alveolar
This type of fracture involves the alveolus, also termed the alveolar process of the mandible.
Condylar
Condylar fractures are classified by location compared to the capsule of ligaments that hold the temporomandibular joint (intracapsular or extracapsular), dislocation (whether or not the condylar head has come out of the socket (glenoid fossa) as the muscles (lateral pterygoid) tend to pull the condyle anterior and medial) and neck of the condyle fractures. E.g. extracapsular, non-displaced, neck fracture. Pediatric condylar fractures have special protocols for management.
Coronoid
Because the coronoid process of the mandible lies deep to many structures, including the zygomatic complex (ZMC), it is rarely broken in isolation. It usually occurs with other mandibular fractures or with fracture of the zygomatic complex or arch. Isolated fractures of the coronoid process should be viewed with suspicion and fracture of the ZMC should be ruled out.
Ramus
Ramus fractures are said to involve a region inferiorly bounded by an oblique line extending from the lower third molar (wisdom tooth) region to the posteroinferior attachment of the masseter muscle, and which could not be better classified as either condylar or coronoid fractures.
Angle
The angle of the mandible refers to the angle created by the arrangement of the body of the mandible and the ramus. Angle fractures are defined as those that involve a triangular region bounded by the anterior border of masseter muscle and an oblique line extending from the lower third molar (wisdom tooth) region to the posteroinferior attachment of the masseter muscle.
Body
Fractures of the mandibular body are defined as those that involve a region bounded anteriorly by the parasymphysis (defined as a vertical line just distal to the canine tooth) and posteriorly by the anterior border of the masseter muscle.
Parasymphysis
Parasymphyseal fractures are defined as mandibular fractures that involve a region bounded bilaterally by vertical lines just distal to the canine tooth.
Symphysis
Symphyseal fractures are linear fractures that run in the midline of the mandible (the symphysis).
Fracture type
Mandibular fractures are also classified according to categories that describe the condition of the bone fragments at the fracture site and also the presence of communication with the external environment.
Greenstick
Greenstick fractures are incomplete fractures of flexible bone, and for this reason typically occur only in children. This type of fracture generally has limited mobility.
Simple
A simple fracture describes a complete transection of the bone with minimal fragmentation at the fracture site.
Comminuted
The opposite of a simple fracture is a comminuted fracture, where the bone has been shattered into fragments, or there are secondary fractures along the main fracture lines. High velocity injuries (e.g. those caused by bullets, improvised explosive devices, etc.) will frequently cause comminuted fractures.
Compound
A compound fracture is one that communicates with the external environment. In the case of mandibular fractures, communication may occur through the skin of the face or with the oral cavity. Mandibular fractures that involve the tooth-bearing portion of the jaw are by definition compound fractures, because there is at least a communication via the periodontal ligament with the oral cavity and with more displaced fractures there may be frank tearing of the gingival and alveolar mucosa.
Involvement of teeth
When a fracture occurs in the tooth bearing portion of the mandible, whether or not it is dentate or edentulous will affect treatment. Wiring of the teeth helps stabilize the fracture (either during placement of osteosynthesis or as a treatment by itself), so the lack of teeth will guide treatment. When an edentulous mandible (no teeth) is less than 1 cm in height (as measured on panoramic radiograph or CT scan) additional risks apply because the blood flow from the marrow (endosseous) is minimal and the healing bone must rely on blood supply from the periosteum surrounding the bone. If a fracture occurs in a child with mixed dentition different treatment protocols are needed.
Other fractures of the body are classified as open or closed. Because fractures that involve the teeth, by definition, communicate with the mouth, this distinction is largely lost in mandible fractures. Condylar, ramus, and coronoid process fractures are generally closed, whereas angle, body and parasymphysis fractures are generally open.
Displacement
The degree to which the segments are separated. The larger the separation, the more difficult it is to bring them back together (approximate the segments).
Favourability
For angle and posterior body fractures, when the angle of the fracture line is angled back (more posterior at the top of the jaw and more anterior at the bottom of the jaw) the muscles tend to bring the fracture segments together. This is called favorable. When the angle of the fractures is pointing to the front, it is unfavorable.
Age of the fracture
While mandible fractures have similar complication rates whether treated immediately or days later, older fractures are believed to have higher non-union and infection rates although the data on this makes it difficult to draw firm conclusions.
Treatment
Like all fractures, consideration has to be given to other illnesses that might jeopardize the patient, then to reduction and fixation of the fracture itself. Except in avulsive type injuries, or those where there might be airway compromise, a several day delay in the treatment of mandible fractures seems to have little impact on the outcome or complication rates.
General considerations
Since mandible fractures are usually the result of blunt force trauma to the head and face, other injuries need to be considered before the mandible fracture. First and foremost is compromise of the airway. While rare, bilateral mandible fractures that are unstable can cause the tongue to fall back and block the airway. Fractures such as a symphyseal or bilateral parasymphyseal may lead to mobility of the central portion of the mandible where genioglossus attaches, and allow the tongue to fall backwards and block the airway. In larger fractures, or those from high velocity injuries, soft tissue swelling can block the airway.
In addition to the potential for airway compromise, the force delivered to break the jaw can be great enough to either fracture the cervical spine or cause intra-cranial injury (head injury). It is common for both to be assessed with facial fractures.
Finally, vascular injury can result (with particular attention to the internal carotid and jugular) from high velocity injuries or severely displaced mandible fractures.
Loss of consciousness combined with aspiration of tooth fragments, blood and possibly dentures means that the airway may be threatened.
Reduction
Reduction refers to approximating the broken ends of the bone. This is done with either an open technique, where an incision is made, the fracture is found and is physically manipulated into place, or a closed technique, where no incision is made.
The mouth is unique, in that the teeth are well secured to the bone ends but come through epithelium (mucosa). A leg or wrist, for instance, has no such structure to help with a closed reduction. In addition, when the fracture happens to be in a tooth bearing area of the jaws, aligning the teeth well usually results in alignment of the fracture segments.
To align the teeth, circumdental wiring is often used where wire strands (typically 24 gauge or 26 gauge) are wrapped around each tooth then attached to a stainless steel arch bar. When the maxillary (top) and mandibular (bottom) teeth are aligned together, this brings the fracture segments into place. Higher tech solutions are also available, to help reduce the segments with arch bars using bonding technology.
Fixation
Simple fractures are usually treated with closed reduction and indirect skeletal fixation, more commonly referred to as maxillo-mandibular fixation (MMF). The closed reduction is explained above. The indirect skeletal fixation is accomplished by placing an arch bar, secured to the teeth on the maxillary and mandibular dentition, then securing the top and bottom arch bars with wire loops.
Many alternatives exist to secure the maxillary and mandibular dentition including resin bonded arch bars, Ivy loops (small eyelets of wires), orthodontic bands and MMF bone screws where titanium screws with holes in the head of them are screwed into the basal bone of the jaws then secured with wire.
Closed reduction with direct skeletal fixation follows the same premise as MMF, except that wires are passed through the skin and around the bottom jaw of the mandible and through the piriform rim or zygomatic buttresses of the maxilla, then joined to secure the jaws. The option is sometimes used when a patient is edentulous (has no teeth) and rigid internal fixation cannot be used.
Open reduction with direct skeletal fixation allows the bones to be directly manipulated through an incision so that the fractured ends meet; then they can be secured together either rigidly (with screws or plates and screws) or non-rigidly (with transosseous wires). There are a multitude of plate and screw combinations, including compression plates, non-compression plates, lag-screws, mini-plates and biodegradable plates.
External fixation, which can be used with either open or closed reduction uses a pin system, where long screws are passed through the skin and into either side of a fracture segment (typically 2 pins per side) then secured in place using an external fixator. This is a more common approach when the bone is heavily comminuted (shattered into small pieces, for instance in a bullet wound) and when the bone is infected (osteomyelitis).
Regardless of the method of fixation, the bone needs to remain relatively stable for a period of 3–6 weeks. On average, the bone gains 80% of its strength by 3 weeks and 90% by 4 weeks. There is great variation depending on the severity of injury, health of the wound, and age of the patient.
Current clinical evidence
A 2013 Cochrane review assessed clinical studies on surgical (open reduction) and non-surgical (closed reduction) management of mandible fractures that do not involve the condyle. The review found insufficient evidence to recommend the effectiveness of any single intervention.
Special considerations
Condyle
The best treatment for condylar fractures is controversial. There are two main options, namely closed reduction or open reduction and fixation. Closed reduction may involve intermaxillary fixation, where the jaws are splinted together in the correct position for a period of weeks. Open reduction involves surgical exposure of the fracture site, which can be carried out via incisions within the mouth or incisions outside the mouth over the area of the condyle. Open reduction is sometimes combined with use of an endoscope to aid visualization of fracture site. Although closed reduction carries a risk of the bone healing out of position, with consequent alteration of the bite or the creation of facial asymmetry, it does not risk temporary damage to the facial nerve or result in any facial scar that accompanies open reduction. A systematic review was unable to find sufficient evidence of the superiority of one method over another in the management of condylar fractures. Paediatric condylar fractures are especially problematic, owing to the remaining growth potential and possibility of ankylosis of the joint. Early mobilization is often recommended as in the Walker protocol.
Edentulous mandible
A broken jaw that has no teeth in it faces two additional issues. First, the lack of teeth makes reduction and fixation using MMF difficult. Instead of placing circumdental wires around the teeth, existing dentures can be left in (or Gunning splints, a type of temporary denture) and the mandible fixated to the maxilla using skeletal fixation (circummandibular and circumzygomatic wires) or using MMF bone screws. More commonly, open reduction and rigid internal fixation is placed.
When the width of the mandible is less than 1 cm, the jaw loses its endosteal blood supply. Instead, the blood supply comes largely from the periosteum. Open reduction (which normally strips the periosteum during the dissection) can lead to avascular necrosis. In these cases, oral surgeons sometimes opt for external fixation, closed reduction, supraperiosteal dissection or other techniques to maintain the periosteal blood flow.
High velocity injuries
In high velocity injuries, the soft tissue can be severely damaged far from the bullet wound itself due to hydrostatic shock. Because of this the airway must be carefully managed and vessels well examined. Because the jaw can be highly comminuted, MMF and rigid internal fixation can be difficult. Instead, external fixation is often used.
Pathologic fracture
Fractures where large cysts or tumours are in the area (and weaken the jaw), where there is an area of osteomyelitis or where osteonecrosis exists pose special challenges to fixation and healing. Cysts and tumours can limit effective bone-to-bone contact, and osteomyelitis or osteonecrosis compromise blood supply to the bone. In all of these situations, healing will be delayed and sometimes resection is the only alternative.
Prognosis
The healing time for routine mandible fractures is 4–6 weeks whether MMF or rigid internal fixation (RIF) is used. For comparable fractures, patients who received MMF will lose more weight and take longer to regain mouth opening, whereas those who receive RIF have higher infection rates.
The most common long-term complications are loss of sensation in the mandibular nerve, malocclusion and loss of teeth in the line of fracture. The more complicated the fracture (infection, comminution, displacement), the higher the risk of complications.
Condylar fractures have higher rates of malocclusion, which in turn are dependent on the degree of displacement and/or dislocation. When the fracture is intracapsular there is a higher rate of late-term osteoarthritis and the potential for ankylosis, although the latter is a rare complication as long as mobilization is early. Pediatric condylar fractures have higher rates of ankylosis and the potential for growth disturbance.
Rarely, mandibular fracture can lead to Frey's syndrome.
Epidemiology
Mandible fracture causes vary by the time period and the region studied. In North America, blunt force trauma (a punch) is the leading cause of mandible fracture whereas in India, motor vehicle collisions are now a leading cause. On battle grounds, it is more likely to be high velocity injuries (bullets and shrapnel). Prior to the routine use of seat belts, airbags and modern safety measures, motor vehicle collisions were a leading cause of facial trauma. The relationship to blunt force trauma explains why 80% of all mandible fractures occur in males. Mandibular fracture is a rare complication of third molar removal, and may occur during the procedure or afterwards. With respect to trauma patients, roughly 10% have some sort of facial fracture, the majority of which come from motor vehicle collisions. When the person is unrestrained in a car, the risk of fracture rises 50%, and for an unhelmeted motorcyclist the risk rises 4-fold.
History
Management of mandible fractures has been mentioned as early as 1700 B.C. in the Edwin Smith Papyrus and later by Hippocrates in 460 B.C., "Displaced but incomplete fractures of the mandible where continuity of the bone is preserved should be reduced by pressing the lingual surface with the fingers...". Open reduction was described as early as 1869. Since the late 19th century, modern techniques including MMF (see above) have been described with titanium based rigid internal fixation becoming commonplace since the 1970s and biodegradable plates and screws being available since the 1980s.
| Biology and health sciences | Types | Health |
23982752 | https://en.wikipedia.org/wiki/Light-dependent%20reactions | Light-dependent reactions | Light-dependent reactions are certain photochemical reactions involved in photosynthesis, the main process by which plants acquire energy. There are two light dependent reactions: the first occurs at photosystem II (PSII) and the second occurs at photosystem I (PSI).
PSII absorbs a photon to produce a so-called high energy electron which transfers via an electron transport chain to cytochrome b6f and then to PSI. The then-reduced PSI absorbs another photon, producing a more highly reducing electron, which converts NADP+ to NADPH. In oxygenic photosynthesis, the first electron donor is water, creating oxygen (O2) as a by-product. In anoxygenic photosynthesis, various electron donors are used.
Cytochrome b6f and ATP synthase work together to produce ATP (photophosphorylation) in two distinct ways. In non-cyclic photophosphorylation, cytochrome b6f uses electrons from PSII and energy from PSI to pump protons from the stroma to the lumen. The resulting proton gradient across the thylakoid membrane creates a proton-motive force, used by ATP synthase to form ATP. In cyclic photophosphorylation, cytochrome b6f uses electrons and energy from PSI to create more ATP and to stop the production of NADPH. Cyclic phosphorylation is important to create ATP and maintain NADPH in the right proportion for the light-independent reactions.
The net-reaction of all light-dependent reactions in oxygenic photosynthesis is:
2H2O + 2NADP+ + 3ADP + 3Pi + light → O2 + 2H+ + 2NADPH + 3ATP
PSI and PSII are light-harvesting complexes. If a special pigment molecule in a photosynthetic reaction center absorbs a photon, an electron in this pigment attains the excited state and then is transferred to another molecule in the reaction center. This reaction, called photoinduced charge separation, is the start of the electron flow and transforms light energy into chemical forms.
Light dependent reactions
In chemistry, many reactions depend on the absorption of photons to provide the energy needed to overcome the activation energy barrier and hence can be labelled light-dependent. Such reactions range from the silver halide reactions used in photographic film to the creation and destruction of ozone in the upper atmosphere. This article discusses a specific subset of these, the series of light-dependent reactions related to photosynthesis in living organisms.
Reaction center
The reaction center is in the thylakoid membrane. It transfers absorbed light energy to a dimer of chlorophyll pigment molecules near the periplasmic (or thylakoid lumen) side of the membrane. This dimer is called a special pair because of its fundamental role in photosynthesis. This special pair is slightly different in PSI and PSII reaction centers. In PSII, it absorbs photons with a wavelength of 680 nm, and is therefore called P680. In PSI, it absorbs photons at 700 nm and is called P700. In bacteria, the special pair is called P760, P840, P870, or P960. "P" here means pigment, and the number following it is the wavelength of light absorbed.
Electrons in pigment molecules can exist at specific energy levels. Under normal circumstances, they are at the lowest possible energy level, the ground state. However, absorption of light of the right photon energy can lift them to a higher energy level. Any light that has too little or too much energy cannot be absorbed and is reflected. The electron in the higher energy level is unstable and will quickly return to its normal lower energy level. To do this, it must release the absorbed energy. This can happen in various ways. The extra energy can be converted into molecular motion and lost as heat, or re-emitted by the electron as light (fluorescence). The energy, but not the electron itself, may be passed onto another molecule; this is called resonance energy transfer. If an electron of the special pair in the reaction center becomes excited, it cannot transfer this energy to another pigment using resonance energy transfer. Under normal circumstances, the electron would return to the ground state, but because the reaction center is arranged so that a suitable electron acceptor is nearby, the excited electron is taken up by the acceptor. The loss of the electron gives the special pair a positive charge and, as an ionization process, further boosts its energy. The formation of a positive charge on the special pair and a negative charge on the acceptor is referred to as photoinduced charge separation. The electron can be transferred to another molecule. As the ionized pigment returns to the ground state, it takes up an electron and gives off energy to the oxygen evolving complex so it can split water into electrons, protons, and molecular oxygen (after receiving energy from the pigment four times). Plant pigments usually utilize the last two of these reactions to convert the sun's energy into their own.
This initial charge separation occurs in less than 10 picoseconds (10⁻¹¹ seconds). In their high-energy states, the special pigment and the acceptor could undergo charge recombination; that is, the electron on the acceptor could move back to neutralize the positive charge on the special pair. Its return to the special pair would waste a valuable high-energy electron and simply convert the absorbed light energy into heat. In the case of PSII, this backflow of electrons can produce reactive oxygen species leading to photoinhibition. Three factors in the structure of the reaction center work together to suppress charge recombination nearly completely:
Another electron acceptor is less than 1 nanometer away from the first acceptor, and so the electron is rapidly transferred farther away from the special pair.
An electron donor is less than 1 nm away from the special pair, and so the positive charge is neutralized by the transfer of another electron.
The electron transfer back from the electron acceptor to the positively charged special pair is especially slow. The rate of an electron transfer reaction increases with its thermodynamic favorability up to a point and then decreases. The back transfer is so favorable that it takes place in the inverted region where electron-transfer rates become slower.
Thus, electron transfer proceeds efficiently from the first electron acceptor to the next, creating an electron transport chain that ends when it has reached NADPH.
In chloroplasts
The photosynthesis process in chloroplasts begins when an electron of P680 of PSII attains a higher-energy level. This energy is used to reduce a chain of electron acceptors that have subsequently higher redox potentials. This chain of electron acceptors is known as an electron transport chain. When this chain reaches PSI, an electron is again excited, creating a high redox-potential. The electron transport chain of photosynthesis is often put in a diagram called the Z-scheme, because the redox diagram from P680 to P700 resembles the letter Z.
The final product of PSII is plastoquinol, a mobile electron carrier in the membrane. Plastoquinol transfers the electron from PSII to the proton pump, cytochrome b6f. The ultimate electron donor of PSII is water. Cytochrome b6f transfers the electron chain to PSI through plastocyanin molecules. PSI can continue the electron transfer in two different ways. It can transfer the electrons either to plastoquinol again, creating a cyclic electron flow, or to an enzyme called FNR (ferredoxin—NADP+ reductase), creating a non-cyclic electron flow. PSI releases FNR into the stroma, where it reduces NADP+ to NADPH.
Activities of the electron transport chain, especially from cytochrome b6f, lead to pumping of protons from the stroma to the lumen. The resulting transmembrane proton gradient is used to make ATP via ATP synthase.
The overall process of the photosynthetic electron transport chain in chloroplasts is:

H2O → PSII → plastoquinol → b6f → plastocyanin → PSI → NADPH
Photosystem II
PSII is extremely complex, a highly organized transmembrane structure that contains a water-splitting complex, chlorophylls and carotenoid pigments, a reaction center (P680), pheophytin (a pigment similar to chlorophyll), and two quinones. It uses the energy of sunlight to transfer electrons from water to a mobile electron carrier in the membrane called plastoquinone:

2H2O + 2PQ → O2 + 2PQH2 (driven by the energy of absorbed photons)
Plastoquinol, in turn, transfers electrons to cyt b6f, which feeds them into PSI.
Water-splitting complex
The step P680+ → P680 is performed by an imperfectly understood structure embedded within PSII called the water-splitting complex or oxygen-evolving complex (OEC). It catalyzes a reaction that splits water into electrons, protons and oxygen:

2H2O → 4H+ + 4e− + O2

using energy from P680+. The actual steps of the above reaction possibly occur in the following way (Kok's diagram of S-states):

(I) 2H2O (monoxide), (II) OH·/H2O (hydroxide), (III) H2O2 (peroxide), (IV) HO2· (superoxide), (V) O2 (di-oxygen) (Dolai's mechanism).
The electrons are transferred to special chlorophyll molecules (embedded in PSII) that are promoted to a higher-energy state by the energy of photons.
Reaction center
The excitation P680 → P680* of the reaction center pigment P680 occurs here. These special chlorophyll molecules embedded in PSII absorb the energy of photons, with maximal absorption at 680 nm. Electrons within these molecules are promoted to a higher-energy state. This is one of two core processes in photosynthesis, and it occurs with astonishing efficiency (greater than 90%) because, in addition to direct excitation by light at 680 nm, the energy of light first harvested by antenna proteins at other wavelengths in the light-harvesting system is also transferred to these special chlorophyll molecules.
This is followed by the electron transfer P680* → pheophytin, and then on to plastoquinol, which occurs within the reaction center of PSII. The electrons are transferred to plastoquinone, which takes up two protons to form plastoquinol, released into the membrane as a mobile electron carrier. This is the second core process in photosynthesis. The initial stages occur within picoseconds, with an efficiency of 100%. The seemingly impossible efficiency is due to the precise positioning of molecules within the reaction center. This is a solid-state process, not a typical chemical reaction. It occurs within an essentially crystalline environment created by the macromolecular structure of PSII. The usual rules of chemistry (which involve random collisions and random energy distributions) do not apply in solid-state environments.
Link of water-splitting complex and chlorophyll excitation
When the excited chlorophyll P680* passes the electron to pheophytin, it converts to high-energy P680+, which can oxidize the tyrosineZ (or YZ) molecule by ripping off one of its hydrogen atoms. The high-energy oxidized tyrosine gives off its energy and returns to the ground state by taking up a proton and removing an electron from the oxygen-evolving complex and ultimately from water. Kok's S-state diagram shows the reactions of water splitting in the oxygen-evolving complex.
Summary
PSII is a transmembrane structure found in all chloroplasts. It splits water into electrons, protons and molecular oxygen. The electrons are transferred to plastoquinol, which carries them to a proton pump. The oxygen is released into the atmosphere.
The emergence of such an incredibly complex structure, a macromolecule that converts the energy of sunlight into chemical energy and thus potentially useful work with efficiencies that are impossible in ordinary experience, seems almost magical at first glance. Thus, it is of considerable interest that, in essence, the same structure is found in purple bacteria.
Cytochrome b6f
PSII and PSI are connected by a transmembrane proton pump, the cytochrome b6f complex (plastoquinol—plastocyanin reductase). Electrons from PSII are carried by plastoquinol to cyt b6f, where they are removed in a stepwise fashion (re-forming plastoquinone) and transferred to a water-soluble electron carrier called plastocyanin. This redox process is coupled to the pumping of four protons across the membrane. The resulting proton gradient (together with the proton gradient produced by the water-splitting complex in PSII) is used to make ATP via ATP synthase.
The structure and function of cytochrome b6f (in chloroplasts) is very similar to cytochrome bc1 (Complex III in mitochondria). Both are transmembrane structures that remove electrons from a mobile, lipid-soluble electron carrier (plastoquinone in chloroplasts; ubiquinone in mitochondria) and transfer them to a mobile, water-soluble electron carrier (plastocyanin in chloroplasts; cytochrome c in mitochondria). Both are proton pumps that produce a transmembrane proton gradient. In fact, cytochrome b6 and subunit IV are homologous to mitochondrial cytochrome b and the Rieske iron-sulfur proteins of the two complexes are homologous. However, cytochrome f and cytochrome c1 are not homologous.
Photosystem I
PSI accepts electrons from plastocyanin and transfers them either to NADP+, forming NADPH (noncyclic electron transport), or back to cytochrome b6f (cyclic electron transport):
plastocyanin → P700 → P700* → FNR → NADPH
↑ ↓
b6f ← phylloquinone
PSI, like PSII, is a complex, highly organized transmembrane structure that contains antenna chlorophylls, a reaction center (P700), phylloquinone, and a number of iron-sulfur proteins that serve as intermediate redox carriers.
The light-harvesting system of PSI uses multiple copies of the same transmembrane proteins used by PSII. The energy of absorbed light (in the form of delocalized, high-energy electrons) is funneled into the reaction center, where it excites special chlorophyll molecules (P700, with maximum light absorption at 700 nm) to a higher energy level. The process occurs with astonishingly high efficiency.
Electrons are removed from excited chlorophyll molecules and transferred through a series of intermediate carriers to ferredoxin, a water-soluble electron carrier. As in PSII, this is a solid-state process that operates with 100% efficiency.
There are two different pathways of electron transport in PSI. In noncyclic electron transport, ferredoxin carries the electron to the enzyme ferredoxin—NADP+ reductase (FNR) that reduces NADP+ to NADPH. In cyclic electron transport, electrons from ferredoxin are transferred (via plastoquinol) to a proton pump, cytochrome b6f. They are then returned (via plastocyanin) to P700. NADPH and ATP are used to synthesize organic molecules from CO2. The ratio of NADPH to ATP production can be adjusted by adjusting the balance between cyclic and noncyclic electron transport.
It is noteworthy that PSI closely resembles photosynthetic structures found in green sulfur bacteria, just as PSII resembles structures found in purple bacteria.
In bacteria
PSII, PSI, and cytochrome b6f are found in chloroplasts. All plants and all photosynthetic algae contain chloroplasts, which produce NADPH and ATP by the mechanisms described above. In essence, the same transmembrane structures are also found in cyanobacteria.
Unlike plants and algae, cyanobacteria are prokaryotes. They do not contain chloroplasts; rather, they bear a striking resemblance to chloroplasts themselves. This suggests that organisms resembling cyanobacteria were the evolutionary precursors of chloroplasts. One imagines primitive eukaryotic cells taking up cyanobacteria as intracellular symbionts in a process known as endosymbiosis.
Cyanobacteria
Cyanobacteria contain both PSI and PSII. Their light-harvesting system is different from that found in plants (they use phycobilins, rather than chlorophylls, as antenna pigments), but their electron transport chain
H2O → PSII → plastoquinol → b6f → cytochrome c6 → PSI → ferredoxin → NADPH
↑ ↓
b6f ← plastoquinol
is, in essence, the same as the electron transport chain in chloroplasts. The mobile water-soluble electron carrier is cytochrome c6 in cyanobacteria, having been replaced by plastocyanin in plants.
Cyanobacteria can also synthesize ATP by oxidative phosphorylation, in the manner of other bacteria. The electron transport chain is
NADH dehydrogenase → plastoquinol → b6f → cyt c6 → cyt aa3 → O2
where the mobile electron carriers are plastoquinol and cytochrome c6, while the proton pumps are NADH dehydrogenase, cyt b6f and cytochrome aa3 (member of the COX3 family).
Cyanobacteria are the only bacteria that produce oxygen during photosynthesis. Earth's primordial atmosphere was anoxic. Organisms like cyanobacteria produced our present-day oxygen-containing atmosphere.
The other two major groups of photosynthetic bacteria, purple bacteria and green sulfur bacteria, contain only a single photosystem and do not produce oxygen.
Purple bacteria
Purple bacteria contain a single photosystem that is structurally related to PSII in cyanobacteria and chloroplasts:
P870 → P870* → ubiquinone → cyt bc1 → cyt c2 → P870
This is a cyclic process in which electrons are removed from an excited chlorophyll molecule (bacteriochlorophyll; P870), passed through an electron transport chain to a proton pump (cytochrome bc1 complex; similar to the chloroplastic one), and then returned to the chlorophyll molecule. The result is a proton gradient that is used to make ATP via ATP synthase. As in cyanobacteria and chloroplasts, this is a solid-state process that depends on the precise orientation of various functional groups within a complex transmembrane macromolecular structure.
To make NADPH, purple bacteria use an external electron donor (hydrogen, hydrogen sulfide, sulfur, sulfite, or organic molecules such as succinate and lactate) to feed electrons into a reverse electron transport chain.
Green sulfur bacteria
Green sulfur bacteria contain a photosystem that is analogous to PSI in chloroplasts:
P840 → P840* → ferredoxin → NADH
  ↑                 ↓
  cyt c553 ← bc1 ← menaquinol
There are two pathways of electron transfer. In cyclic electron transfer, electrons are removed from an excited chlorophyll molecule, passed through an electron transport chain to a proton pump, and then returned to the chlorophyll. The mobile electron carriers are, as usual, a lipid-soluble quinone and a water-soluble cytochrome. The resulting proton gradient is used to make ATP.
In noncyclic electron transfer, electrons are removed from an excited chlorophyll molecule and used to reduce NAD+ to NADH. The electrons removed from P840 must be replaced. This is accomplished by removing electrons from hydrogen sulfide (H2S), which is oxidized to sulfur (hence the name "green sulfur bacteria").
Purple bacteria and green sulfur bacteria occupy relatively minor ecological niches in the present-day biosphere. They are of interest because of their importance in Precambrian ecologies, and because their methods of photosynthesis were the likely evolutionary precursors of those in modern plants.
History
The first ideas about light being used in photosynthesis were proposed by Jan Ingenhousz in 1779, who recognized that it was sunlight falling on plants that was required, although Joseph Priestley had noted the production of oxygen without the association with light in 1772. Cornelis Van Niel proposed in 1931 that photosynthesis is one case of a general mechanism in which a photon of light is used to photodecompose a hydrogen donor, the hydrogen then being used to reduce CO2.
Then in 1939, Robin Hill demonstrated that isolated chloroplasts would make oxygen, but not fix CO2, showing that the light and dark reactions occur in different places. This later led to the discovery of photosystems I and II.
| Biology and health sciences | Metabolic processes | Biology |
20975731 | https://en.wikipedia.org/wiki/Lamprey | Lamprey | Lampreys (sometimes inaccurately called lamprey eels) are a group of jawless fish comprising the order Petromyzontiformes. The adult lamprey is characterized by a toothed, funnel-like sucking mouth. The common name "lamprey" is probably derived from Latin , which may mean "stone licker" ( "to lick" + "stone"), though the etymology is uncertain. Lamprey is sometimes seen for the plural form.
There are about 38 known extant species of lampreys and around seven known extinct species. They are classified in three families: two small families in the Southern Hemisphere (Geotriidae, Mordaciidae) and one large family in the Northern Hemisphere (Petromyzontidae).
Genetic evidence suggests that lampreys are more closely related to hagfish, the only other living group of jawless fish, than they are to jawed vertebrates, forming the superclass Cyclostomi. The oldest fossils of stem-group lampreys are from the latest Devonian Period, around 360 million years ago, with modern-looking forms only appearing during the Jurassic Period, around 163 million years ago; the modern families likely split from each other sometime between the Middle Jurassic and the end of the Cretaceous.
Modern lampreys spend the majority of their life in the juvenile "ammocoete" stage, in which they burrow into the sediment and filter feed. The adult carnivorous lampreys are the best-known species, and feed by boring into the flesh of other fish (or, in rare cases, marine mammals) to consume flesh and/or blood, but only 18 species of lampreys engage in this predatory lifestyle (with Caspiomyzon suggested to feed on carrion rather than live prey). Of the 18 carnivorous species, nine migrate from saltwater to freshwater to breed (some of them also have freshwater populations), and nine live exclusively in freshwater. All non-carnivorous forms are freshwater species. Adults of the non-carnivorous species do not feed; they live on reserves acquired as ammocoetes.
Distribution
Lampreys live mostly in coastal and fresh waters and are found in most temperate regions. Some species (e.g. Geotria australis, Petromyzon marinus, and Entosphenus tridentatus) travel significant distances in the open ocean, as evidenced by their lack of reproductive isolation between populations. Other species are found in land-locked lakes. Their larvae (ammocoetes) have a low tolerance for high water temperatures, which may explain why they are not distributed in the tropics.
Lamprey distribution may be adversely affected by river habitat loss, overfishing and pollution. In Britain, at the time of the 11th-century Norman Conquest of England, lampreys were found as far upstream in the River Thames as Petersham. The reduction of pollution in the Thames and River Wear has led to recent sightings in London and Chester-le-Street.
Distribution of lampreys may also be adversely affected by dams and other construction projects due to disruption of migration routes and obstruction of access to spawning grounds. Conversely, the construction of artificial channels has exposed new habitats for colonisation, notably in North America, where sea lampreys have become a significant introduced pest in the Great Lakes. Active lamprey control programs are being modified due to concerns over drinking water quality in some areas.
Biology
Anatomy
Adults superficially resemble eels in that they have scaleless, elongated bodies, with the largest species, the sea lamprey, having a maximum body length of around . Lacking paired fins, adult lampreys have one nostril atop the head and seven gill pores on each side of the head.
The brain of the lamprey is divided into the telencephalon, diencephalon, midbrain, cerebellum, and medulla.
Lampreys have been described as the only living vertebrates to have four eyes, having a single pair of regular eyes as well as two parietal eyes: a pineal and a parapineal one (the exception is members of Mordacia). The eyes of juvenile lampreys are poorly developed eyespot-like structures covered in non-transparent skin, while the eyes of adult lampreys are well developed. Accommodation is achieved by flattening the cornea, which pushes the lens towards the retina. The eyes of the family Mordaciidae possess just a single type of photoreceptor (rod-like), those of the family Petromyzontidae possess two photoreceptor types (a cone-like and a rod-like), and those of the family Geotriidae possess five types of photoreceptors.
The buccal cavity, anterior to the gonads, is responsible for attaching the animal, through suction, to either a stone or its prey. This then allows the tongue to make contact with the stone to rasp algae, or tear at the flesh of their prey to yield blood.
The last common ancestor of lampreys appears to have been specialized to feed on the blood and body fluids of other fish after metamorphosis. They attach their mouthparts to the target animal's body, then use three horny plates (laminae) on the tip of their piston-like tongue, one transversely and two longitudinally placed, to scrape through surface tissues until they reach body fluids. The teeth on their oral disc are primarily used to help the animal attach itself to its prey. Made of keratin and other proteins, lamprey teeth have a hollow core to give room for replacement teeth growing under the old ones. Some of the original blood-feeding forms have evolved into species that feed on both blood and flesh, and some have become specialized flesh-eaters that may even invade the internal organs of the host. Tissue feeders can also involve the teeth on the oral disc in the excision of tissue. As a result, the flesh-feeders have smaller buccal glands, since they do not require continuous production of anticoagulant, and they possess mechanisms for preventing solid material from entering the branchial pouches, which could otherwise clog the gills. A study of the stomach contents of some lampreys has shown the remains of intestines, fins, and vertebrae from their prey.
Close to the jaws of juvenile lampreys, a muscular flap-like structure called the velum is present, which serves to generate a water current towards the mouth opening, which enables feeding and respiration.
The unique morphological characteristics of lampreys, such as their cartilaginous skeleton, suggest they are the sister taxon (see cladistics) of all living jawed vertebrates (gnathostomes). They are usually considered the most basal group of the Vertebrata. Instead of true vertebrae, they have a series of cartilaginous structures called arcualia arranged above the notochord. Hagfish, which resemble lampreys, have traditionally been considered the sister taxon of the true vertebrates (lampreys and gnathostomes) but DNA evidence suggests that they are in fact the sister taxon of lampreys.
The heart of the lamprey is anterior to the intestines. It contains the sinus, one atrium, and one ventricle protected by the pericardial cartilages.
The pineal gland, a photosensitive organ that regulates melatonin production by capturing light signals through its photoreceptor cells and converting them into intercellular signals, is located in the midline of the lamprey's body; in lampreys, the pineal eye is accompanied by the parapineal organ.
One of the key physical components of the lamprey is the intestine, which is located ventral to the notochord. The intestine aids in osmoregulation by taking in water from the environment and desalinating it to a state iso-osmotic with the blood, and is also responsible for digestion. Studies have shown that lampreys are among the most energy-efficient swimmers. Their swimming movements generate low-pressure zones around the body, which pull rather than push their bodies through the water.
Different species of lamprey have many shared physical characteristics. The same anatomical structure can serve different functions in the lamprey depending on whether or not it is carnivorous. The mouth and suction capabilities of the lamprey not only allow it to cling to a fish as a parasite, but provide it with limited climbing ability so that it can travel upstream and up ramps or rocks to breed. This ability has been studied in an attempt to better understand how lampreys battle the current and move forward despite only being able to hold onto the rock at a single point. Some scientists are also hoping to design ramps that will optimize the lamprey's climbing ability, as lampreys are valued as food in the Northwest United States and need to travel upstream to reproduce.
Many lampreys exhibit countershading, a form of camouflage. Similarly to many other aquatic species, most lampreys have a dark-colored back, which enables them to blend in with the ground below when seen from above by a predator. Their light-colored undersides allow them to blend in with the bright air and water above them if a predator sees them from below.
Lamprey coloration can also vary according to the region and specific environment in which the species is found. Some species can be distinguished by their unique markings – for example, Geotria australis individuals display two bluish stripes running the length of its body as an adult. These markings can also sometimes be used to determine what stage of the life cycle the lamprey is in; G. australis individuals lose these stripes when they approach the reproductive phase and begin to travel upstream. Another example is Petromyzon marinus, which shifts to more of an orange color as it reaches the reproductive stage in its life cycle.
Genetics and immunology
Northern lampreys (Petromyzontidae) have the highest number of chromosomes (164–174) among vertebrates. Due to certain peculiarities of their adaptive immune system, the study of lampreys provides valuable insight into the evolution of vertebrate adaptive immunity. Lamprey leukocytes express surface variable lymphocyte receptors (VLRs) generated from somatic recombination of leucine-rich repeat gene segments. This convergently evolved characteristic gives them lymphocytes that function like the T cells and B cells of the immune systems of higher vertebrates. Pouched lamprey (Geotria australis) larvae also have a very high tolerance for free iron in their bodies, and have well-developed biochemical systems for detoxification of the large quantities of these metal ions.
Lifecycle
The adults spawn in nests of sand, gravel, and pebbles in clear streams. After hatching from the eggs, young larvae—called ammocoetes—drift downstream with the current until they reach soft, fine sediment in silt beds, where they burrow into silt, mud, and detritus, taking up an existence as filter feeders, collecting detritus, algae, and microorganisms. The eyes of the larvae are underdeveloped, but are capable of discriminating changes in illuminance. Ammocoetes can grow from to about . Many species change color during a diurnal cycle, becoming dark by day and pale at night. The skin also has photoreceptors, light-sensitive cells, most of them concentrated in the tail, which help the larvae stay buried. Lampreys may spend up to eight years as ammocoetes, while species such as the Arctic lamprey may spend only one to two years as larvae, before undergoing a metamorphosis which generally lasts 3–4 months but can vary between species. While metamorphosing, they do not eat.
The rate of water moving across the ammocoetes' feeding apparatus is the lowest recorded in any suspension-feeding animal, so they require water rich in nutrients to fulfill their nutritional needs. While the majority of (invertebrate) suspension feeders thrive in waters containing under 1 mg of suspended organic solids per litre (<1 mg/L), ammocoetes demand a minimum of 4 mg/L, with concentrations in their habitats having been measured at up to 40 mg/L.
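To put these concentrations in perspective, a rough filtering-volume comparison follows. This is only a sketch: the daily ration used is a hypothetical placeholder (the article gives no figure), and perfect particle capture is assumed.

# Rough filtering-volume comparison (a sketch). The daily ration is a
# hypothetical placeholder, and perfect particle capture is assumed.

daily_food_mg = 50.0  # hypothetical daily ration, for illustration only

concentrations_mg_per_l = {
    "typical suspension-feeder habitat": 1.0,   # <1 mg/L, from the text
    "ammocoete minimum":                 4.0,   # from the text
    "richest measured habitat":          40.0,  # from the text
}

for habitat, c in concentrations_mg_per_l.items():
    litres_per_day = daily_food_mg / c  # water that must be filtered
    print(f"{habitat}: {litres_per_day:.1f} L/day at {c} mg/L")

Whatever the true ration, the point is the scaling: at the ammocoete minimum of 4 mg/L, a quarter of the water volume suffices compared with the 1 mg/L typical of other suspension feeders' habitats.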
During metamorphosis the lamprey loses both the gallbladder and the biliary tract, and the endostyle turns into a thyroid gland.
Some species, including those that are not carnivorous and do not feed even following metamorphosis, live in freshwater for their entire lifecycle, spawning and dying shortly after metamorphosing. In contrast, many species are anadromous and migrate to the sea, beginning to prey on other animals while still swimming downstream after their metamorphosis provides them with eyes, teeth, and a sucking mouth. Those that are anadromous are carnivorous, feeding on fishes or marine mammals.
Anadromous lampreys spend up to four years in the sea before migrating back to freshwater, where they spawn. Adults create nests (called redds) by moving rocks, and females release thousands of eggs, sometimes up to 100,000. The male, intertwined with the female, fertilizes the eggs simultaneously. Being semelparous, both adults die after the eggs are fertilized.
Research on sea lampreys has revealed that sexually mature males use a specialized heat-producing tissue in the form of a ridge of fat cells near the anterior dorsal fin to stimulate females. After having attracted a female with pheromones, the heat detected by the female through body contact will encourage spawning.
Classification
Taxonomists place lampreys and hagfish in the subphylum Vertebrata of the phylum Chordata, which also includes the invertebrate subphyla Tunicata (sea-squirts) and the fish-like Cephalochordata (lancelets or Amphioxus). Recent molecular and morphological phylogenetic studies place lampreys and hagfish in the infraphylum Agnatha or Agnathostomata (both meaning without jaws). The other vertebrate infraphylum is Gnathostomata (jawed mouths) and includes the classes Chondrichthyes (sharks), Osteichthyes (bony fishes), Amphibia, Reptilia, Aves, and Mammalia.
Some researchers have classified lampreys as the sole surviving representatives of the Linnean class Cephalaspidomorphi. Cephalaspidomorpha is sometimes given as a subclass of the Cephalaspidomorphi.
Fossil evidence now suggests lampreys and cephalaspids acquired their shared characters by convergent evolution.
The 5th edition of Fishes of the World classifies lampreys within the Class Petromyzontida, a taxon called Petromyzonti in Eschmeyer's Catalog of Fishes.
The debate about their systematics notwithstanding, lampreys constitute a single order Petromyzontiformes. Sometimes still seen is the alternative spelling "Petromyzoniformes", based on the argument that the type genus is Petromyzon and not "Petromyzonta" or similar. Throughout most of the 20th century, both names were used indiscriminately, even by the same author in subsequent publications. In the mid-1970s, the ICZN was called upon to fix one name or the other, and after much debate had to resolve the issue by voting. Thus, in 1980, the spelling with a "t" won out, and in 1981, it became official that all higher-level taxa based on Petromyzon have to start with "Petromyzont-".
Phylogeny based on Brownstein & Near, 2023.
Geotria australis Gray 1851 (Pouched lamprey)
Geotria macrostoma (Burmeister 1868) (Argentinian lamprey)
Mordacia lapicida (Gray 1851) (Chilean lamprey)
Mordacia mordax (Richardson 1846) (Australian lamprey)
Mordacia praecox Potter 1968 (Non-parasitic/Australian brook lamprey)
Petromyzon marinus Linnaeus 1758 (Sea lamprey)
Ichthyomyzon bdellium (Jordan 1885) (Ohio lamprey)
Ichthyomyzon castaneus Girard 1858 (Chestnut lamprey)
Ichthyomyzon fossor Reighard & Cummins 1916 (Northern brook lamprey)
Ichthyomyzon gagei Hubbs & Trautman 1937 (Southern brook lamprey)
Ichthyomyzon greeleyi Hubbs & Trautman 1937 (Mountain brook lamprey)
Ichthyomyzon unicuspis Hubbs & Trautman 1937 (Silver lamprey)
Caspiomyzon wagneri (Kessler 1870) Berg 1906 (Caspian lamprey)
Caspiomyzon graecus (Renaud & Economidis 2010) (Ionian brook lamprey)
Caspiomyzon hellenicus (Vladykov et al. 1982) (Greek lamprey)
Tetrapleurodon geminis Álvarez 1964 (Mexican brook lamprey)
Tetrapleurodon spadiceus (Bean 1887) (Mexican lamprey)
Entosphenus folletti Vladykov & Kott 1976 (Northern California brook lamprey)
Entosphenus lethophagus (Hubbs 1971) (Pit-Klamath brook lamprey)
Entosphenus macrostomus (Beamish 1982) (Lake lamprey)
Entosphenus minimus (Bond & Kan 1973) (Miller Lake lamprey)
Entosphenus similis Vladykov & Kott 1979 (Klamath river lamprey)
Entosphenus tridentatus (Richardson 1836) (Pacific lamprey)
Lethenteron alaskense Vladykov & Kott 1978 (Alaskan brook lamprey)
Lethenteron appendix (DeKay 1842) (American brook lamprey)
Lethenteron camtschaticum (Tilesius 1811) (Arctic lamprey)
Lethenteron kessleri (Anikin 1905) (Siberian brook lamprey)
Lethenteron ninae Naseka, Tuniyev & Renaud 2009 (Western Transcaucasian lamprey)
Lethenteron reissneri (Dybowski 1869) (Far Eastern brook lamprey)
Lethenteron zanandreai (Vladykov 1955) (Lombardy lamprey)
Eudontomyzon stankokaramani (Karaman 1974) (Drin brook lamprey)
Eudontomyzon morii (Berg 1931) (Korean lamprey)
Eudontomyzon danfordi Regan 1911 (Carpathian brook lamprey)
Eudontomyzon mariae (Berg 1931) (Ukrainian brook lamprey)
Eudontomyzon vladykovi (Oliva & Zanandrea 1959) (Vladykov's lamprey)
Lampetra aepyptera (Abbott 1860) (Least brook lamprey)
Lampetra alavariensis Mateus et al. 2013 (Portuguese lamprey)
Lampetra auremensis Mateus et al. 2013 (Qurem lamprey)
Lampetra ayresi (Günther 1870) (Western river lamprey)
Lampetra fluviatilis (Linnaeus 1758) (European river lamprey)
Lampetra hubbsi (Vladykov & Kott 1976) (Kern brook lamprey)
Lampetra lanceolata Kux & Steiner 1972 (Turkish brook lamprey)
Lampetra lusitanica Mateus et al. 2013 (lusitanic lamprey)
Lampetra pacifica Vladykov 1973 (Pacific brook lamprey)
Lampetra planeri (Bloch 1784) (European brook lamprey)
Lampetra richardsoni Vladykov & Follett 1965 (Western brook lamprey)
Recent studies differ regarding the timing of the last common ancestor of all living lampreys, with some suggesting a Middle Jurassic date, around 175 million years ago, while other studies have suggested a younger split, dating to the Late Cretaceous. The older date study posited that the Northern and Southern Hemisphere lampreys diverged as part of the breakup of Pangea, while the Late Cretaceous study suggested that modern lampreys emerged in the Southern Hemisphere. It is thought that most modern lamprey diversity emerged during the Cenozoic, particularly within the last 10–20 million years.
Fossil record
The oldest fossil lamprey, Priscomyzon, is known from the latest Devonian of South Africa, around 360 million years ago, with other stem-group lampreys, like Pipiscius, Mayomyzon and Hardistiella, known from the Carboniferous of North America. These Paleozoic stem-lampreys are small relative to modern lampreys, and while they had well-developed oral discs with a small number of radially arranged teeth, they lacked the specialised, heavily toothed discs with plate-like laminae present in modern lampreys; it is possible that they fed by scraping algae off animals, rather than by predation/parasitism. They also lacked the three-stage life cycle, including ammocoetes, found in modern lampreys, with the juvenile stages of these species closely resembling adults. Myxineidus from the Carboniferous of France, often considered to be a hagfish, has been found to be a lamprey in some studies. The earliest lamprey with the specialised toothed oral disc typical of modern lampreys is Yanliaomyzon from the Middle Jurassic of China, around 163 million years old, which is thought to have had a predatory lifestyle like modern lampreys, and probably had a three-stage life cycle including ammocoetes. Mesomyzon from the Early Cretaceous of China, which displays the three-stage life cycle with ammocoetes, was found in one study to be more closely related to the family Petromyzontidae than to other living lampreys, though other studies have found it to be outside the group containing all living lampreys.
Lamprey and chordate synapomorphies
Synapomorphies are characteristics shared over evolutionary history. Organisms possessing a notochord, dorsal hollow nerve cord, pharyngeal slits, pituitary gland/endostyle, and a post-anal tail during their development are considered chordates. Lampreys possess these characteristics, which define them as chordates. Lamprey anatomy differs considerably depending on the stage of development. The notochord is derived from the mesoderm and is one of the defining characteristics of a chordate; it provides signaling and mechanical cues to help the organism when swimming. The dorsal nerve cord is another characteristic of lampreys that defines them as chordates. During development, this part of the ectoderm rolls up, creating a hollow tube, which is why it is often referred to as the dorsal "hollow" nerve cord. The third chordate feature, the pharyngeal slits, are openings in the pharynx, or throat, which aid filter feeding by allowing the movement of water through the mouth and out of the slits. During the lamprey's larval stage, it feeds by filter feeding. Once lampreys reach their adult phase, they become parasitic on other fish, and these gill slits become very important in aiding the respiration of the organism. The final chordate synapomorphy is the post-anal tail, which is muscular and extends behind the anus.
Oftentimes adult amphioxus and lamprey larvae are compared by anatomists due to their similarities. Similarities between adult amphioxus and lamprey larvae include a pharynx with pharyngeal slits, a notochord, a dorsal hollow nerve cord and a series of somites that extend anterior to the otic vesicle.
Use in research
The lamprey has been extensively studied because its relatively simple brain is thought in many respects to reflect the brain structure of early vertebrate ancestors. Beginning in the 1970s, Sten Grillner and his colleagues at the Karolinska Institute in Stockholm, following on from extensive work on the lamprey started by Carl Rovainen in the 1960s, used the lamprey as a model system to work out the fundamental principles of motor control in vertebrates, starting in the spinal cord and working toward the brain.
In a series of studies by Rovainen and his student James Buchanan, the cells forming the neural circuits within the spinal cord that are capable of generating the rhythmic motor patterns underlying swimming were examined. Note that details are still missing from the network scheme, despite claims by Grillner that the network has been characterised (Parker 2006, 2010). Spinal cord circuits are controlled by specific locomotor areas in the brainstem and midbrain, and these areas are in turn controlled by higher brain structures, including the basal ganglia and tectum.
In a study of the lamprey tectum published in 2007, researchers found that electrical stimulation could elicit eye movements, lateral bending movements, or swimming activity, and that the type, amplitude, and direction of movement varied as a function of the location within the tectum that was stimulated. These findings were interpreted as consistent with the idea that the tectum generates goal-directed locomotion in the lamprey.
Lampreys are used as a model organism in biomedical research, where their large reticulospinal axons are used to investigate synaptic transmission. The axons of lamprey are particularly large and allow for microinjection of substances for experimental manipulation.
They are also capable of full functional recovery after complete spinal cord transection. Another trait is their ability to delete several genes, about 20% of their DNA, from their somatic cell lineages; these genes are vital during development of the embryo, but in humans they can cause problems such as cancer later in life, after they have served their purpose. How the genes destined for deletion are targeted is not yet known.
Relationship with humans
Attacks on humans
Although attacks on humans have been documented, lampreys will generally not attack humans unless starved.
As food
People have long eaten lampreys. They were highly appreciated by the ancient Romans. During the Middle Ages they were widely eaten by the upper classes throughout Europe, especially during Lent, when eating meat was prohibited, due to their meaty taste and texture. King Henry I of England is claimed to have been so fond of lampreys that he often ate them, late into life and poor health, against the advice of his physician concerning their richness, and is said to have died from eating "a surfeit of lampreys". Whether or not his lamprey indulgence actually caused his death is unclear, but the phrase persists in British culture.
A lamprey pie was made for the coronation of Elizabeth II in 1953. Sixty years later, the city of Gloucester had to use fish from North America for her Diamond Jubilee, because few lampreys could be found in the River Severn.
In southwestern Europe (Portugal, Spain, and France), Finland and in Latvia (where lamprey is routinely sold in supermarkets), lampreys are a highly prized delicacy. In Finland (county of Nakkila), and Latvia (Carnikava Municipality), the river lamprey is the local symbol, found on their coats of arms. In 2015 the lamprey from Carnikava was included in the Protected designation of origin list by the European Commission.
Sea lamprey is the most sought-after species in Portugal and one of only two that can legally bear the commercial name "lamprey" (lampreia): the other one being Lampetra fluviatilis, the European river lamprey, both according to Portaria (Government regulation no. 587/2006, from 22 June). "Arroz de lampreia" (lamprey rice) and "Lampreia à Bordalesa" (Bordeaux style lamprey) are some of the most important dishes in Portuguese cuisine.
Lampreys are also consumed in Sweden, Russia, Lithuania, Estonia, Japan, and South Korea. In Finland, they are commonly eaten grilled or smoked, but also pickled, or in vinegar.
The mucus and serum of several lamprey species, including the Caspian lamprey (Caspiomyzon wagneri), river lampreys (Lampetra fluviatilis and L. planeri), and sea lamprey (Petromyzon marinus), are known to be toxic, and require thorough cleaning before cooking and consumption.
In Britain, lampreys are commonly used as bait, normally as dead bait. Northern pike, perch, and chub all can be caught on lampreys. Frozen lampreys can be bought from most bait and tackle shops.
As pests
Sea lampreys have become a major pest in the North American Great Lakes. It is generally believed that they gained access to the lakes via canals during the early 20th century, but this theory is controversial.
They are considered an invasive species, have no natural predators in the lakes, and prey on many species of commercial value, such as lake trout.
Lampreys are now found mostly in the streams that feed the lakes, and are controlled with special barriers to prevent the upstream movement of adults, or by the application of toxicants called lampricides, which are harmless to most other aquatic species. However, these programs are complicated and expensive; they do not eradicate the lampreys from the lakes, but merely keep them in check.
New programs are being developed, including the use of chemically sterilized male lampreys in a method akin to the sterile insect technique. Pheromones critical to lamprey migratory behaviour have also been isolated, their chemical structures determined, and their impact on lamprey behaviour studied in the laboratory and in the wild; active efforts are underway to source the chemicals and to address the regulatory considerations that might allow this strategy to proceed.
Control of sea lampreys in the Great Lakes is conducted by the U.S. Fish and Wildlife Service and the Canadian Department of Fisheries and Oceans, and is coordinated by the Great Lakes Fishery Commission. Lake Champlain, bordered by New York, Vermont, and Quebec, and New York's Finger Lakes are also home to high populations of sea lampreys that warrant control. Lake Champlain's lamprey control program is managed by the New York State Department of Environmental Conservation, the Vermont Department of Fish and Wildlife, and the U.S. Fish and Wildlife Service. New York's Finger Lakes sea lamprey control program is managed solely by the New York State Department of Environmental Conservation.
In folklore
In folklore, lampreys are called "nine-eyed eels". The name derives from misconstruing the seven gill pores behind each eye as additional eyes, and doing the same with the nostril on the top of the head (even though there is only one of those, not one per side). Likewise, in the German language, the word for lamprey is Neunauge, which means "nine-eye". In British folklore, the monster known as the Lambton Worm may have been based on a lamprey, since it is described as an eel-like creature with nine eyes.
In Japanese, lamprey are called yatsume-unagi (八つ目鰻, "eight-eyed eels"), thus excluding the nostril from the count.
In literature
Vedius Pollio kept a pool of lampreys into which slaves who incurred his displeasure would be thrown as food. On one occasion, Vedius was punished by Augustus for attempting to do so in his presence:
This incident was incorporated into the plot of the 2003 novel Pompeii by Robert Harris in the incident of Ampliatus feeding a slave to his lampreys.
Lucius Licinius Crassus was mocked by Gnaeus Domitius Ahenobarbus (cos. 54 BC) for weeping over the death of his pet lamprey:
This story is also found in Aelian (Various Histories VII, 4) and Macrobius (Saturnalia III.15.3). It is included by Hugo von Hofmannsthal in the Chandos Letter:
In George R. R. Martin's novel series, A Song of Ice and Fire, Lord Wyman Manderly is mockingly called "Lord Lamprey" by his enemies in reference to his rumored affinity to lamprey pie and his striking obesity.
Kurt Vonnegut, in his late short story "The Big Space Fuck", posits a future America so heavily polluted – "Everything had turned to shit and beer cans", in his words – that the Great Lakes have been infested with a species of massive, man-eating ambulatory lampreys.
In television
In season 3, episode 5 of "The Borgias", whilst out on a hunting trip, Cesare Borgia's mercenary, Micheletto, kills the King of Naples by pushing him into a pool filled with lampreys that King Ferrante had built during his reign in Naples.
In season 9, episode 16 of "Bones" (American TV series, 2005–2017), Agent Booth grabs a lamprey that is escaping from a bag holding a dead body found in a pond. Later, two lampreys are seen in Hodgins' lab.
| Biology and health sciences | Agnatha | null |
20975773 | https://en.wikipedia.org/wiki/Colossal%20squid | Colossal squid | The colossal squid (Mesonychoteuthis hamiltoni) is the world’s largest squid species and the world’s largest mollusc. It belongs to the Cranchiidae family, that of the cockatoo squids or glass squids.
It is sometimes called the Antarctic cranch squid or giant squid (not to be confused with the giant squid in genus Architeuthis) and is believed to be the largest squid species in terms of mass. It is the only recognized member of the genus Mesonychoteuthis and is known from only a small number of specimens. The species is confirmed to reach a mass of at least , though the largest specimens—known only from beaks found in sperm whale stomachs—may perhaps weigh as much as , making it the largest extant invertebrate. Maximum total length has been estimated between and but the former estimate is more likely. The colossal squid has the largest eyes of any known creature ever to exist, with an estimated diameter of to for the largest collected specimen.
The species has similar anatomy to other members of its family, although it is the only member of Cranchiidae to display hooks on its arms, suckers and tentacles. It is known to inhabit the circumantarctic Southern Ocean. It is presumed to be an ambush predator, and is likely a key prey item of the sperm whale.
The first specimens were discovered and described in 1925. In 1981, an adult specimen was discovered; in 2003, a second specimen was collected. Captured in 2007, the largest colossal squid weighed , and is now on display at the Museum of New Zealand Te Papa Tongarewa.
In 2022–23, several attempts were made by scientists, including the ocean-exploration non-profit KOLOSSAL, to find and film the colossal squid in its natural habitat for the first time and to learn more about its biology and ecological behavior. The science team used a tourism vessel to survey 36 locations throughout the Southern Ocean and may have filmed a small juvenile colossal squid for the first time. Researchers have confirmed that the animal is a species of glass squid, but because of marine snow the footage could not be confirmed without DNA analysis; it may be Galiteuthis glacialis or a new species of glass squid unknown to science.
More expeditions are being planned before 2025, the hundredth anniversary of the first discovery of the colossal squid, in attempts to find and film an adult colossal squid living freely in its natural environment.
Morphology
The colossal squid shares features common to all squids: a mantle for locomotion, one pair of gills, a beak, and certain external characteristics like eight arms and two tentacles, a head, and two fins. In general, the morphology and anatomy of the colossal squid are the same as any other squid's. However, certain morphological characteristics separate the colossal squid from the other squids in its family: it is the only squid in its family whose arms and tentacles are equipped with hooks, either swivelling or three-pointed. Squids in some other families also have hooks, but no other squid in the family Cranchiidae does.
Unlike most squid species, the colossal squid exhibits abyssal gigantism, as it is the heaviest living invertebrate species, reaching weights up to . For comparison, squids typically have a mantle length of about and weigh about .
The giant squid also exhibits abyssal gigantism, but the colossal squid is heavier. Although it is unclear what the maximum weight for colossal squids is, analysis of squid beak dimensions from sperm whale stomachs provided estimates that colossal squids may weigh up to .
The colossal squid also has the largest eyes documented in the animal kingdom, with a diameter of .
Distribution and habitat
The squid's known range extends thousands of kilometres north of Antarctica, to southern South America, southern South Africa, and the southern tip of New Zealand, making it primarily an inhabitant of the entire circumantarctic Southern Ocean. Colossal squid are sighted often near the Cooperation Sea and less often near the Ross Sea, possibly because of its prey and competitor, the Antarctic toothfish. The region between the Weddell Sea and the western Kerguelen archipelago has been deemed a "hotspot" based on characteristics of the habitat. The squid's vertical distribution appears to correlate directly with age: young squid are found between , adolescent squid are found at , and adult squid are found primarily within the mesopelagic and bathypelagic regions of the open ocean.
Behavior
Feeding
Little is known about their behavior, but they are believed to feed on prey such as chaetognaths, large fish such as the Patagonian toothfish, and smaller squid in the deep ocean. A study by Remeslo, Yakushev and Laptikhovsky revealed that Antarctic toothfish make up a significant part of the colossal squid's diet; of the 8,000 toothfish brought aboard trawlers between 2011 and 2014, seventy-one showed clear signs of attack by colossal squid. A study in the Prydz Bay region of Antarctica found squid remains in a female colossal squid's stomach, suggesting the possibility of cannibalism within this species. Studies measuring the δ15N content of the chitinous beaks of cephalopods to determine trophic level have demonstrated that the colossal squid is a top predator whose trophic level is positively correlated with its size. This confirmation of the colossal squid's trophic level suggests that it preys on large fishes and smaller squids, according to its size, and that its predators include sperm whales and sleeper sharks.
Metabolism
The colossal squid is thought to have a very slow metabolic rate, needing only around of prey daily for an adult with a mass of . Estimates of its energy requirements suggest it is a slow-moving ambush predator, using its large eyes primarily for prey-detection rather than engaging in active hunting.
Predation
Many sperm whales have scars on their backs, believed to be caused by the hooks of colossal squid. Colossal squid are a major prey item for sperm whales in the Antarctic; 14% of the squid beaks found in the stomachs of these sperm whales are those of the colossal squid, which indicates that colossal squid make up 77% of the biomass consumed by these whales. Many other animals also feed on colossal squid, including the beaked whales, such as southern bottlenose whales, Cuvier's and Baird's beaked whales; the beaked whales essentially resemble oversized dolphins, some with a more pronounced underbite on their snout (or "beak"). They are among the deepest-diving cetaceans ever recorded, besides the sperm whale. This places the beaked whales as some of the few food competitors of the sperm whale. Other possible squid predators include the pilot whale, killer whales, larger southern elephant seals, Patagonian toothfish, southern sleeper sharks (Somniosus antarcticus), Antarctic toothfish, and albatrosses (e.g., the wandering and sooty albatrosses). However, beaks from mature adults have only been recovered from large predators (i.e. sperm whales and southern sleeper sharks), while the other predators only eat juveniles or young adults.
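The jump from 14% of beaks to 77% of biomass follows from weighting beak counts by per-prey body mass. The sketch below is illustrative only: the two average prey masses are hypothetical values chosen to roughly reproduce the reported share, not data from the article.

# Why 14% of beaks can represent ~77% of biomass (illustrative sketch).
# The two average prey masses are hypothetical, chosen only to roughly
# reproduce the reported share; they are not data from the article.

beak_fraction_colossal = 0.14  # share of beaks, from the text
mass_colossal_kg = 300.0       # assumed mean mass of a colossal squid prey item
mass_other_kg = 14.5           # assumed mean mass of other squid prey

biomass_colossal = beak_fraction_colossal * mass_colossal_kg
biomass_other = (1.0 - beak_fraction_colossal) * mass_other_kg
share = biomass_colossal / (biomass_colossal + biomass_other)
print(f"colossal squid share of consumed biomass: {share:.0%}")  # -> 77%

Because each colossal squid outweighs a typical squid prey item many times over, even a modest share of beak counts can dominate the biomass total.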
Reproduction
Not much is known about the colossal squid's reproductive cycle, although it has two distinct sexes. Many species of squid develop sex-specific organs as they age. Adult female colossal squid have been discovered in much shallower waters than expected, which likely implies that females spawn at shallower depths than they normally inhabit. Additionally, the colossal squid has a high potential fecundity, reaching over 4.2 million oocytes, which is quite unusual compared to other squids in such cold waters. Colossal squid oocytes have been observed at sizes ranging from as large as 3.2 × 2.1 mm to as small as 1.4 × 0.5 mm. Sampling of colossal squid ovaries shows an average of 2,175 eggs per gram. Young squid are thought to spawn near summer at surface temperatures of .
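Taken together, the two figures above imply an ovary mass of roughly 1.9 kg; this combination is our own arithmetic, not a value stated in the text.

# Implied ovary mass from the two figures above (our arithmetic, not a
# value stated in the text).

oocytes_total = 4_200_000  # potential fecundity, from the text
eggs_per_gram = 2175       # mean egg density in sampled ovaries, from the text

ovary_mass_g = oocytes_total / eggs_per_gram
print(f"implied ovary mass: {ovary_mass_g / 1000:.1f} kg")  # -> about 1.9 kg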
Vision
For pelagic organisms of similar weight to the colossal squid, such as the swordfish, the average eye diameter required for visual detection is 10 cm, but the colossal squid's eyes are as large as . The resulting gains in visual detection, including reduced diffraction blurring and greater contrast discrimination, must be beneficial enough to justify the large energetic expense of growing, moving, camouflaging, and maintaining such eyes. Mathematical modelling indicates that the colossal squid's increased pupil size can overcome the visual complications of the pelagic zone (the combination of downwelling daylight, bioluminescence, and light scattering with increasing distance), especially by monitoring larger volumes of water at once and by detecting long-range changes in plankton bioluminescence set off by the physical disruption of large moving objects (e.g., sperm whales).
The colossal squid's eyes glow in the dark via long, rectangular light-producing photophores located next to the lens on the front of both eyeballs. Symbiotic bacteria reside within these photophores and luminesce through chemical reaction.
It is hypothesized that the colossal squid's eyes can detect predator movement beyond 120 m, which is the upper limit of the sperm whale's sonar range.
Hearing
Squid have been found to detect the movement of sound waves via organs called statocysts (similar to the human cochlea). Squid statocysts likely respond to low-frequency sounds less than 500 Hz, similar to pelagic fish. Colossal squid are essentially deaf to high frequencies, such as whale sonar, so they rely largely on visual detection mechanisms to avoid predation.
Taxonomy and history
The colossal squid, species Mesonychoteuthis hamiltoni, was discovered in 1925. This species belongs to the class Cephalopoda and family Cranchiidae.
Most of the time, full colossal squid specimens are not collected; as of 2015, only 12 complete colossal squids had ever been recorded, with only half of these being full adults.
Commonly, beak remnants of the colossal squid are collected; 55 beaks of colossal squids have been recorded in total. Less commonly (four times), a fin, mantle, arm or tentacle of a colossal squid was collected.
Notable discoveries
First specimens
The species was first discovered in the form of two arm crowns found in the stomach of a sperm whale in the winter of 1924–1925. This species, then named Mesonychoteuthis hamiltoni after E. Hamilton who made the initial discovery, was formally described by Guy Coburn Robson in 1925.
Entire specimens
In 1981, a Soviet trawler in the Ross Sea, off the coast of Antarctica, caught a large squid with a total length of over , which was later identified as an immature female of M. hamiltoni. In 2003, a complete specimen of a subadult female was found near the surface, with a total length of and a mantle length of 2.5 m (8 feet 3 inches). In 2005, the first full living specimen was captured at a depth of while taking a toothfish from a longline off South Georgia Island. Although the mantle was not brought aboard, its length was estimated at over 2.5 m (8 feet 3 inches), and the tentacles measured . The animal is thought to have weighed between .
Largest known specimen
The largest recorded specimen was a female (females are thought to be larger than males), captured in February 2007 by a New Zealand fishing boat in the Ross Sea off Antarctica. The squid was close to death when captured and was subsequently taken back to New Zealand for scientific study. The specimen was initially estimated to measure about 10 metres in total length and weigh about 450 kg.
Defrosting and dissection, April–May 2008
Thawing and dissection of the specimen took place at the Museum of New Zealand Te Papa Tongarewa. AUT biologists Steve O'Shea and Kat Bolstad, along with Tsunemi Kubodera, were invited to the museum to aid in the process, joined by marine ecologist Mark Fenwick and Dutch scientist Olaf Blaauw. Media reports suggested scientists at the museum were considering using a giant microwave to defrost the squid, because thawing it at room temperature would take several days and the outside would likely begin to decompose while the core remained frozen. However, they later opted for the more conventional approach of thawing the specimen in a bath of salt water. After thawing, the specimen was found to weigh 495 kg, with a mantle length of 2.5 m and a total length of only 4.2 m, probably because the tentacles shrank once the squid was dead.
Parts of the specimen have been examined:
The beak is considerably smaller than some found in the stomachs of sperm whales, suggesting other colossal squid are much larger than this one.
The eye is wide, with a lens across. This is the largest eye of any known animal. These measurements are of the partly collapsed specimen; alive, the eye was probably 30 to 40 cm (12 to 16 in) across.
Inspection of the specimen with an endoscope revealed ovaries containing thousands of eggs.
Exhibition
The Museum of New Zealand Te Papa Tongarewa began displaying this specimen from 13 December 2008. The exhibition was closed between 2018 and 2019, but is currently open again for public viewing at Te Papa.
Conservation status
The colossal squid has been assessed as "least concern" on the IUCN Red List. Furthermore, colossal squid are not targeted by fishermen; rather, they are only caught when they attempt to feed on fish caught on hooks. Additionally, due to their habitat, interactions between humans and colossal squid are considered rare.
| Biology and health sciences | Cephalopods | Animals |
20976473 | https://en.wikipedia.org/wiki/Eel | Eel | Eels are ray-finned fish belonging to the order Anguilliformes, which consists of eight suborders, 20 families, 164 genera, and about 1000 species. Eels undergo considerable development from the early larval stage to the eventual adult stage and are usually predators.
The term "eel" is also used for some other eel-shaped fish, such as electric eels (genus Electrophorus), swamp eels (order Synbranchiformes), and deep-sea spiny eels (family Notacanthidae). However, these other clades, with the exception of deep-sea spiny eels, whose order Notacanthiformes is the sister clade to true eels, evolved their eel-like shapes independently from the true eels. As a main rule, most eels are marine. Exceptions are the catadromous genus Anguilla and the freshwater moray, which spend most of their life in freshwater, the anadromous rice-paddy eel, which spawns in freshwater, and the freshwater snake eel Stictorhinus.
Description
Eels are elongated fish, ranging in length from in the one-jawed eel (Monognathus ahlstromi) to in the slender giant moray. Adults range in weight from to well over . They possess no pelvic fins, and many species also lack pectoral fins. The dorsal and anal fins are fused with the caudal fin, forming a single ribbon running along much of the length of the animal. Eels swim by generating waves that travel the length of their bodies. They can swim backward by reversing the direction of the wave.
Most eels live in the shallow waters of the ocean and burrow into sand, mud, or amongst rocks. Most eel species are nocturnal and thus are rarely seen. Sometimes, they are seen living together in holes, or "eel pits". Some eels also live in deeper water on the continental shelves and over the slopes, as deep as . Only members of the genus Anguilla regularly inhabit fresh water, but they, too, return to the sea to breed.
The heaviest true eel is the European conger. The maximum size of this species has been reported as reaching a length of and a weight of . Other eels are longer, but do not weigh as much, such as the slender giant moray, which reaches .
Life cycle
Eels begin life as flat, transparent larvae called leptocephali. Eel larvae drift in the sea's surface waters, feeding on marine snow, small particles that float in the water. Eel larvae then metamorphose into glass eels and become elvers before finally seeking out their juvenile and adult habitats. Some anguillid elvers remain in brackish and marine areas close to coastlines, but most enter freshwater, where they travel upstream and are forced to climb obstructions such as weirs, dam walls, and natural waterfalls.
Gertrude Elizabeth Blood found that the eel fisheries at Ballisodare were greatly improved by the hanging of loosely plaited grass ladders over barriers, enabling elvers to ascend more easily.
Classification
Several classifications of eels exist; some, such as FishBase, divide eels into 20 families, whereas other classification systems, such as ITIS and Systema Naturae 2000, include additional eel families, which are noted below.
Genomic studies indicate that there is a monophyletic group that originated among the deep-sea eels.
Taxonomy
The earliest fossil eels are known from the Late Cretaceous (Cenomanian) of Lebanon. These early eels retain primitive traits such as pelvic fins and thus do not appear to be closely related to any extant taxa. Body fossils of modern eels do not appear until the Eocene, although otoliths assignable to extant eel families and even some genera have been recovered from the Campanian and Maastrichtian, indicating some level of diversification among the extant groups prior to the Cretaceous-Paleogene extinction, which is also supported by phylogenetic divergence estimates. One of these otolith taxa, the mud-dwelling Pythonichthys arkansasensis, appears to have thrived in the aftermath of the K-Pg extinction, based on its abundance.
Extant taxa
Taxonomy based on Eschmeyer's Catalog of Fishes:
Order Anguilliformes
Suborder Chlopsoidei
Family Chlopsidae Rafinesque, 1815 (false morays)
Suborder Synaphobranchoidei
Family Protanguillidae G. D. Johnson, Ida & Miya, 2011 (primitive cave eels)
Family Synaphobranchidae J. Y. Johnson, 1862 (cutthroat eels)
Subfamily Simenchelyinae Gill, 1879 (pugnose parasitic eels)
Subfamily Ilyophinae D. S. Jordan & Davis, 1891 (arrowtooth eels or mustard eels)
Subfamily Synaphobranchinae J. Y. Johnson, 1862 (cutthroat eels)
Suborder Anguilloidei
Family Moringuidae Gill, 1885 (spaghetti eels)
Family Anguillidae Rafinesque, 1810 (freshwater eels)
Family Nemichthyidae Kaup, 1859 (snipe eels or threadtail snipe eels)
Family Serrivomeridae Trewavas, 1932 (sawtooth eels)
Family Cyematidae Regan, 1912 (bobtail eels)
Family Monognathidae Trewavas, 1937 (onejaw gulpers)
Family Neocyematidae Poulsen, M. J. Miller, Sado, Hanel, Tsukamoto & Miya, 2018 (orange bobtail eels)
Family Eurypharyngidae Gill, 1883 (gulper eels or pelican eels)
Family Saccopharyngidae Bleeker, 1859 (swallower eels or whiptail gulpers)
Suborder Muraenoidei
Family Heterenchelyidae Regan, 1912 (mud eels)
Family Myrocongridae Gill, 1890 (myroconger eels)
Family Muraenidae Rafinesque, 1815 (moray eels)
Subfamily Uropterygiinae Fowler, 1925 (tailfin moray eels)
Subfamily Muraeninae Rafinesque, 1815 (morays)
Suborder Congroidei
Family Colocongridae Smith, 1976 (shorttail eels)
Family Derichthyidae Gill, 1884 (longneck eels or narrowneck eels)
Family Ophichthidae Günther, 1870 (snake eels and worm eels)
Subfamily Myrophinae Kaup, 1856 (worm eels)
Subfamily Ophichthinae Günther, 1870 (snake eels)
Family Muraenesocidae Kaup, 1859 (pike conger eels)
Family Nettastomatidae Kaup, 1859 (duckbill eels)
Family Congridae Kaup, 1856 (conger eels)
Subfamily Congrinae Kaup, 1856 (congers)
Subfamily Bathymyrinae Böhlke, 1949
Subfamily Heterocongrinae Günther, 1870 (garden eels)
In some classifications, the family Cyematidae of bobtail snipe eels is included in the Anguilliformes, but in the FishBase system that family is included in the order Saccopharyngiformes.
The electric eel of South America is not a true eel but is a South American knifefish more closely related to the carps and catfishes.
Phylogeny
Phylogeny based on Johnson et al. 2012.
Extinct taxa
Based on the Paleobiology Database:
Genus †Abisaadia
Genus †Bolcanguilla
Genus †Eomuraena
Genus †Eomyrophis
Genus †Gazolapodus
Genus †Hayenchelys
Genus †Luenchelys
Genus †Mastygocercus
Genus †Micromyrus
Genus †Mylomyrus
Genus †Palaeomyrus
Genus †Parechelus
Genus †Proserrivomer
Family †Anguillavidae
Family †Anguilloididae
Family †Libanechelyidae
Family †Milananguillidae
Family †Paranguillidae
Family †Patavichthyidae
Family †Proteomyridae
Family †Urenchelyidae
Commercial species
Use by humans
Freshwater eels (unagi) and marine eels (conger eel, anago) are commonly used in Japanese cuisine; foods such as unadon and unajū are popular but expensive. Eels are also very popular in Chinese cuisine, and are prepared in many different ways. Hong Kong eel prices have often reached 1000 HKD (128.86 US dollars) per kg, and once exceeded 5000 HKD per kg. In India, eels are popularly eaten in the Northeast. Freshwater eels, known as Kusia in Assamese, are eaten with curry, often with herbs. The European eel and other freshwater eels are mostly eaten in Europe and the United States; the European eel is considered critically endangered. A traditional east London food is jellied eels, although demand has significantly declined since World War II. The Spanish cuisine delicacy angulas consists of elvers (young eels) sautéed in olive oil with garlic; elvers usually reach prices of up to 1000 euros per kg. New Zealand longfin eel is a traditional Māori food in New Zealand. In Italian cuisine, eels from the Valli di Comacchio, a swampy zone along the Adriatic coast, are especially prized, along with freshwater eels of Bolsena Lake and pond eels from Cabras, Sardinia. In northern Germany, the Netherlands, the Czech Republic, Poland, Denmark, and Sweden, smoked eel is considered a delicacy.
Elvers, often fried, were once a cheap dish in the United Kingdom. During the 1990s, their numbers collapsed across Europe. They became a delicacy, and the UK's most expensive species.
Eels, particularly the moray eel, are popular among marine aquarists.
Eel blood is toxic to humans and other mammals, but both cooking and the digestive process destroy the toxic protein.
High consumption of eels in European countries has led to several eel species being considered endangered.
Sustainable consumption
In 2010, Greenpeace International added the European eel, Japanese eel, and American eel to its seafood red list. Japan consumes more than 70% of the global eel catch.
Etymology
The English name "eel" descends from Old English , Common Germanic *ēlaz. Also from the common Germanic are West Frisian , Dutch , German , and Icelandic . Katz (1998) identifies a number of Indo-European cognates, among them the second part of the Latin word for eels, anguilla, attested in its simplex form illa (in a glossary only), and the Greek word for "eel", egkhelys (the second part of which is attested in Hesychius as elyes). The first compound member, anguis ("snake"), is cognate to other Indo-European words for "snake" (compare Old Irish "eel", Old High German "snake", Lithuanian , Greek ophis, okhis, Vedic Sanskrit áhi, Avestan aži, Armenian auj, iž, Old Church Slavonic *ǫžь, all from Proto-Indo-European *h₁ogʷʰis). The word also appears in the Old English word for "hedgehog", which is (meaning "snake eater"), and perhaps in the egi- of Old High German "wall lizard".
According to this theory, the name Bellerophon (, attested in a variant Ἐλλεροφόντης in Eustathius of Thessalonica) is also related, translating to "the slayer of the serpent" (ahihán). In this theory, the ελλερο- is an adjective form of an older word, ελλυ, meaning "snake", which is directly comparable to Hittite ellu-essar- "snake pit". This myth likely came to Greece via Anatolia. In the Hittite version of the myth, the dragon is called Illuyanka: the illuy- part is cognate to the word illa, and the -anka part is cognate to angu, a word for "snake". Since the words for "snake" (and similarly shaped animals) are often subject to taboo in many Indo-European (and non-Indo-European) languages, no unambiguous Proto-Indo-European form of the word for eel can be reconstructed. It may have been *ēl(l)-u-, *ēl(l)-o-, or something similar.
Timeline of genera
In culture
The large lake of Almere, which existed in the early medieval Netherlands, got its name from the eels which lived in its water (aal or paling is Dutch for eel, and mere is an old word for lake, so Almere means "eel lake"). The name is preserved in the new city of Almere in Flevoland, given in 1984 in memory of this body of water on whose site the town is located.
The daylight passage in the spring of elvers upstream along the Thames was at one time called "eel fare". The word 'elver' is thought to be a corruption of "eel fare".
A famous attraction on the French Polynesian island of Huahine (part of the Society Islands) is the bridge across a stream hosting three- to six-foot-long eels, deemed sacred by local culture.
Eel fishing in Nazi-era Danzig plays an important role in Günter Grass' novel The Tin Drum. The cruelty of humans to eels is used as a metaphor for Nazi atrocities, and the sight of eels being killed by a fisherman triggers the madness of the protagonist's mother.
Sinister implications of eel fishing are also referenced in Jo Nesbø's Cockroaches, the second book of the Harry Hole detective series. The book's background includes a Norwegian village where eels in the nearby sea are rumored to feed on the corpses of drowned humans, making the eating of these eels verge on cannibalism.
The 2019 book The Gospel of the Eels by Patrick Svensson commented on the 'eel question' (origins of the order) and its cultural history.
| Biology and health sciences | Fishes | null |
20976520 | https://en.wikipedia.org/wiki/Cuttlefish | Cuttlefish | Cuttlefish, or cuttles, are marine molluscs of the suborder Sepiina. They belong to the class Cephalopoda which also includes squid, octopuses, and nautiluses. Cuttlefish have a unique internal shell, the cuttlebone, which is used for control of buoyancy.
Cuttlefish have large, W-shaped pupils, eight arms, and two tentacles furnished with denticulated suckers, with which they secure their prey. They generally range in size from 15 to 25 cm, with the largest species, the giant cuttlefish (Sepia apama), reaching 50 cm in mantle length and over 10.5 kg in mass.
Cuttlefish eat small molluscs, crabs, shrimp, fish, octopuses, worms, and other cuttlefish. Their predators include dolphins, larger fish (including sharks), seals, seabirds, and other cuttlefish. The typical life expectancy of a cuttlefish is about 1–2 years. Studies suggest that cuttlefish are among the most intelligent invertebrates, and they have one of the largest brain-to-body size ratios of all invertebrates.
The Greco-Roman world valued the cuttlefish as a source of the unique brown pigment the creature releases from its siphon when it is alarmed. The word for the cuttlefish in both Greek and Latin, sepia, now refers to the reddish-brown color sepia in English.
Etymology
"Cuttle" in "cuttlefish", sometimes called "cuttles", is derived from the Old English name for the species, cudele. The word may be cognate with the Old Norse koddi (cushion) and the Middle Low German Kudel (rag).
Taxonomy
Over 120 species of cuttlefish are currently recognized, grouped into six families divided between two suborders. One superfamily and three families are extinct.
Suborder Sepiina: cuttlefish
Superfamily Vasseurioidea
Family Vasseuriidae
Family Belosepiellidae
Superfamily Sepioidea
Family Belosaepiidae
Family Sepiidae (122 species)
Fossil record
The earliest fossils of cuttlefish are from the end of the Cretaceous period, represented by Ceratisepia from the Late Maastrichtian Maastricht Formation of the Netherlands. Although the Jurassic Trachyteuthis was historically considered possibly related to cuttlefish, later studies considered it to be more closely related to octopuses and vampire squids.
Range and habitat
The family Sepiidae, which contains all cuttlefish, inhabits tropical and temperate ocean waters. They are mostly shallow-water animals, although they are known to go to depths of about 600 m. They have an unusual biogeographic pattern: they are present along the coasts of East and South Asia, Western Europe, and the Mediterranean, as well as all coasts of Africa and Australia, but are totally absent from the Americas. By the time the family evolved, ostensibly in the Old World, the North Atlantic possibly had become too cold and deep for these warm-water species to cross. The common cuttlefish (Sepia officinalis) is found in the Mediterranean, North, and Baltic seas, although populations may occur as far south as South Africa. They are found at sublittoral depths, between the low tide line and the edge of the continental shelf. The common cuttlefish is listed under the Red List category of "least concern" by the IUCN Red List of Threatened Species: while some over-exploitation has occurred in some regions due to large-scale commercial fishing, the species' wide geographic range prevents it from being too threatened. Ocean acidification, caused largely by higher levels of carbon dioxide emitted into the atmosphere, is cited as a potential threat, although some studies suggest that ocean acidification does not impair normal embryonic development, survival rates, or body size.
Anatomy and physiology
Cuttlebone
Unlike other cephalopods, cuttlefish possess a unique internal structure called the cuttlebone, a highly modified internal shell, which is porous and made of aragonite. Except for Spirula, they are the only coleoid cephalopods with a shell whose phragmocone is divided into chambers separated by septa. The pores provide buoyancy, which the cuttlefish regulates by changing the gas-to-liquid ratio in the chambered cuttlebone via the ventral siphuncle. Each species' cuttlebone has a distinct shape, size, and pattern of ridges or texture. The cuttlebone is unique to cuttlefish and is one of the features that distinguish them from their squid relatives.
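The buoyancy mechanism lends itself to a back-of-the-envelope illustration. The following Python sketch is a simplified demonstration only, not a physiological model; the porosity and density values are assumed for illustration (aragonite's density is roughly 2,900 kg/m³, and the pore liquid is taken to be close to seawater).

```python
# Minimal sketch of how the gas-to-liquid ratio in a porous float changes
# net buoyancy. Values are illustrative assumptions, not measured data.

RHO_SEAWATER = 1025.0   # kg/m^3, typical seawater density
RHO_LIQUID   = 1025.0   # kg/m^3, pore liquid assumed similar to seawater
RHO_MATRIX   = 2900.0   # kg/m^3, rough density of the aragonite matrix

def effective_density(gas_fraction, porosity=0.9):
    """Density of a porous float whose pore space holds a gas/liquid mix.
    The mass of the gas itself is neglected as negligible."""
    pore_density = (1.0 - gas_fraction) * RHO_LIQUID
    return (1.0 - porosity) * RHO_MATRIX + porosity * pore_density

for gas_fraction in (0.0, 0.5, 1.0):
    rho = effective_density(gas_fraction)
    lift = RHO_SEAWATER - rho   # positive -> net upward force per m^3
    print(f"gas fraction {gas_fraction:.1f}: "
          f"density {rho:7.1f} kg/m^3, net lift {lift:+7.1f} kg/m^3")
```

With these assumed numbers, the float sinks when its pores are liquid-filled and floats when they are gas-filled, so a small shift in the gas-to-liquid ratio is enough to trim the animal's overall buoyancy.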
Visual system
Cuttlefish, like other cephalopods, have sophisticated eyes. The organogenesis and the final structure of the cephalopod eye fundamentally differ from those of vertebrates, such as humans.
Superficial similarities between cephalopod and vertebrate eyes are thought to be examples of convergent evolution. The cuttlefish pupil is a smoothly curving W-shape. Although cuttlefish cannot see color, they can perceive the polarization of light, which enhances their perception of contrast. They have two spots of concentrated sensor cells on their retinas (known as foveae), one to look more forward, and one to look more backward. The eye changes focus by shifting the position of the entire lens with respect to the retina, instead of reshaping the lens as in mammals. Unlike the vertebrate eye, no blind spot exists, because the optic nerve is positioned behind the retina. They are capable of using stereopsis, enabling them to discern depth/distance because their brain calculates the input from both eyes.
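The lens-translation strategy can be illustrated with the thin-lens equation, 1/f = 1/d_o + 1/d_i. The sketch below is a simplified optics demonstration; the focal length is an assumed, illustrative value, not a measured cuttlefish parameter.

```python
# Thin-lens illustration of focusing by moving a rigid lens rather than
# reshaping it: 1/f = 1/d_object + 1/d_image.
F_MM = 8.0   # assumed focal length in mm (illustrative only)

def image_distance_mm(d_object_mm):
    """Distance behind the lens at which a sharp image forms."""
    return 1.0 / (1.0 / F_MM - 1.0 / d_object_mm)

for d_obj in (50.0, 200.0, 10_000.0):   # near prey ... effectively infinity
    print(f"object at {d_obj:8.0f} mm -> image plane at "
          f"{image_distance_mm(d_obj):5.2f} mm")
# Closer objects push the image plane farther back, so an eye with a rigid
# lens must slide the lens relative to the retina to stay in focus.
```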
The cuttlefish's eyes are thought to be fully developed before birth, and they start observing their surroundings while still in the egg. In consequence, they may prefer to hunt the prey they saw before hatching.
Arms and mantle cavity
Cuttlefish have eight arms and two additional elongated tentacles that are used to grasp prey. The elongated tentacles and mantle cavity serve as defense mechanisms; when approached by a predator, the cuttlefish can suck water into its mantle cavity and spread its arms in order to appear larger than normal. Though the mantle cavity is used for jet propulsion, the main parts of the body that are used for basic mobility are the fins, which can maneuver the cuttlefish in all directions.
Suckers
The suckers of cuttlefish extend most of the length of their arms and along the distal portion of their tentacles. Like other cephalopods, cuttlefish have "taste-by-touch" sensitivity in their suckers, allowing them to discriminate among objects and water currents that they contact.
Circulatory system
The blood of a cuttlefish is an unusual shade of green-blue, because it uses the copper-containing protein haemocyanin to carry oxygen instead of the red, iron-containing protein haemoglobin found in vertebrates' blood. The blood is pumped by three separate hearts: two branchial hearts pump blood to the cuttlefish's pair of gills (one heart for each), and the third pumps blood around the rest of the body. Cuttlefish blood must flow more rapidly than that of most other animals because haemocyanin carries substantially less oxygen than haemoglobin. Unlike most other mollusks, cephalopods like cuttlefish have a closed circulatory system.
Ink
Like other marine mollusks, cuttlefish have ink stores that are used for chemical deterrence, phagomimicry, sensory distraction, and evasion when attacked. The ink's composition makes it dark in color and rich in ammonium salts and amino acids, which may play a role in phagomimicry defenses. The ink can be ejected to create a "smoke screen" to hide the cuttlefish's escape, or released as a pseudomorph of similar size to the cuttlefish, acting as a decoy while the animal swims away.
Human use of this substance is wide-ranging. A common use is in cooking with squid ink to darken and flavor rice and pasta, adding a black tint and a sweet flavor to the food. In addition to food, cuttlefish ink can be used in plastics and for staining materials. The diverse composition of cuttlefish ink, and its deep complexity of colors, allows for dilution and modification of its color. Cuttlefish ink can be used to make noniridescent reds, blues, and greens, subsequently used for biomimetic colors and materials.
Poison and venom
A gene shared between cuttlefish and almost all other cephalopods allows them to produce venom, which they excrete through their beak to help kill prey. Additionally, the muscles of the flamboyant cuttlefish (Metasepia pfefferi) contain a highly toxic, unidentified compound as lethal as the venom of a fellow cephalopod, the blue-ringed octopus. However, this toxin is found only in the muscle and is not injected in any form, classifying the species as poisonous, not venomous.
Sleep-like behavior
Sleep is a state of immobility characterized by being rapidly reversible, homeostatically controlled, and increasing an organism's arousal threshold.
To date, one cephalopod species, Octopus vulgaris, has been shown to satisfy all three criteria. Another, Sepia officinalis, satisfies two of the three but has not yet been tested on the third (arousal threshold). Recent research shows that the sleep-like state in this common cuttlefish species includes predictable periods of rapid eye movement, arm twitching, and rapid chromatophore changes.
Life cycle
The lifespan of a cuttlefish is typically around one to two years, depending on the species. They hatch from eggs fully developed and grow rapidly over their first two months. Before death, cuttlefish go through senescence, during which the cephalopod essentially deteriorates, or rots in place. Their eyesight begins to fail, which impairs their ability to move and hunt efficiently. Once this process begins, cuttlefish tend not to live long, due to predation by other organisms.
Reproduction
Cuttlefish start to actively mate at around five months of age. Male cuttlefish challenge one another for dominance and the best den during mating season. During this challenge, no direct contact is usually made; the animals threaten each other until one of them backs down and swims away. Eventually, the larger male cuttlefish mate with the females by grabbing them with their tentacles, turning the female so that the two animals are face-to-face, then using a specialized tentacle to insert sperm sacs into an opening near the female's mouth. Because males can use their funnels to flush rivals' sperm out of the female's pouch, the male guards the female until she lays the eggs a few hours later. After laying her cluster of eggs, the female cuttlefish secretes ink on them, making them look very similar to grapes. The egg case is produced through a complex capsule of the female accessory genital glands and the ink bag.
On occasion, a large competitor arrives to threaten the male cuttlefish. In these instances, the male first attempts to intimidate the other male. If the competitor does not flee, the male eventually attacks it to force it away. The cuttlefish that can paralyze the other first, by forcing it near its mouth, wins the fight and the female. Since typically four or five (and sometimes as many as 10) males are available for every female, this behavior is inevitable.
Cuttlefish are indeterminate growers, so smaller cuttlefish always have a chance of finding a mate the next year when they are bigger. Additionally, cuttlefish unable to win a direct confrontation with a guard male have been observed employing several other tactics to acquire a mate. The most successful of these is camouflage: smaller cuttlefish use their camouflage abilities to disguise themselves as female cuttlefish. By changing their body color, and even pretending to hold an egg sac, disguised males are able to swim past the larger guard male and mate with the female.
Communication
Cephalopods are able to communicate visually using a diverse range of signals. To produce these signals, cephalopods can vary four types of communication element: chromatic (skin coloration), skin texture (e.g. rough or smooth), posture, and locomotion. Changes in body appearance such as these are sometimes called polyphenism. The common cuttlefish can display 34 chromatic, six textural, eight postural and six locomotor elements, whereas flamboyant cuttlefish use between 42 and 75 chromatic, 14 postural, and seven textural and locomotor elements. The Caribbean reef squid (Sepioteuthis sepioidea) is thought to have up to 35 distinct signalling states.
Chromatic
Cuttlefish are sometimes referred to as the "chameleons of the sea" because of their ability to rapidly alter their skin color – this can occur within one second. Cuttlefish change color and pattern (including the polarization of the reflected light waves), and the shape of the skin to communicate to other cuttlefish, to camouflage themselves, and as a deimatic display to warn off potential predators. Under some circumstances, cuttlefish can be trained to change color in response to stimuli, thereby indicating their color changing is not completely innate.
Cuttlefish can also affect the polarization of light, which can be used to signal to other marine animals, many of which can also sense polarization, and they can influence the color of light as it reflects off their skin. Although cuttlefish (and most other cephalopods) lack color vision, high-resolution polarization vision may provide an alternative mode of receiving contrast information that is just as defined. The cuttlefish's wide pupil may accentuate chromatic aberration, allowing it to perceive color by focusing specific wavelengths onto the retina.
The three broad categories of color patterns are uniform, mottle, and disruptive. Cuttlefish can display as many as 12 to 14 patterns, 13 of which have been categorized as seven "acute" (relatively brief) and six "chronic" (long-lasting) patterns, although other researchers suggest the patterns occur on a continuum.
The color-changing ability of cuttlefish is due to multiple types of cells, arranged (from the skin's surface going deeper) as pigmented chromatophores above a layer of reflective iridophores, with leucophores below them.
Chromatophores
The chromatophores are sacs containing hundreds of thousands of pigment granules and a large membrane that is folded when retracted. Hundreds of muscles radiate from the chromatophore. These are under neural control and when they expand, they reveal the hue of the pigment contained in the sac. Cuttlefish have three types of chromatophore: yellow/orange (the uppermost layer), red, and brown/black (the deepest layer). The cuttlefish can control the contraction and relaxation of the muscles around individual chromatophores, thereby opening or closing the elastic sacs and allowing different levels of pigment to be exposed. Furthermore, the chromatophores contain luminescent protein nanostructures in which tethered pigment granules modify light through absorbance, reflection, and fluorescence between 650 and 720 nm.
For cephalopods in general, the hues of the pigment granules are relatively constant within a species, but can vary slightly between species. For example, the common cuttlefish and the opalescent inshore squid (Doryteuthis opalescens) have yellow, red, and brown, the European common squid (Alloteuthis subulata) has yellow and red, and the common octopus has yellow, orange, red, brown, and black.
In cuttlefish, activation of a chromatophore can expand its surface area by 500%. Up to 200 chromatophores per mm² of skin may occur. In Loligo plei, an expanded chromatophore may be up to 1.5 mm in diameter, but when retracted, it can measure as little as 0.1 mm.
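Treating the chromatophore as a flat disc (an idealization), the Loligo plei diameters quoted above imply an area change far larger than the typical 500% figure, as a quick arithmetic check makes explicit:

```python
import math

# Compare expanded vs. retracted disc areas implied by the L. plei
# diameters quoted above (an idealized flat-disc model).
D_EXPANDED_MM  = 1.5
D_RETRACTED_MM = 0.1

def disc_area(diameter):
    return math.pi * (diameter / 2.0) ** 2

ratio = disc_area(D_EXPANDED_MM) / disc_area(D_RETRACTED_MM)
print(f"expanded area : {disc_area(D_EXPANDED_MM):.3f} mm^2")
print(f"retracted area: {disc_area(D_RETRACTED_MM):.5f} mm^2")
print(f"area ratio    : {ratio:.0f}x")   # (1.5 / 0.1)^2 = 225x
```

The 225-fold ratio shows that the quoted L. plei figures describe an extreme case, well beyond the roughly fivefold (500%) expansion cited as typical.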
Iridophores
Retracting the chromatophores reveals the iridophores and leucophores beneath them, thereby allowing cuttlefish to use another modality of visual signalling brought about by structural coloration.
Iridophores are structures that produce iridescent colors with a metallic sheen. They reflect light using plates of crystalline chemochromes made from guanine. When illuminated, they reflect iridescent colors because of the diffraction of light within the stacked plates. The orientation of the chemochromes determines the nature of the color observed. By using biochromes as colored filters, iridophores create an optical effect known as Tyndall or Rayleigh scattering, producing bright blue or blue-green colors. Iridophores vary in size, but are generally smaller than 1 mm. Squid, at least, are able to change their iridescence, although this takes several seconds or minutes and the mechanism is not understood. Iridescence can also be altered by expanding and retracting the chromatophores above the iridophores; because chromatophores are under direct neural control from the brain, this effect can be immediate.
Cephalopod iridophores polarize light. Cephalopods have a rhabdomeric visual system which means they are visually sensitive to polarized light. Cuttlefish use their polarization vision when hunting for silvery fish (their scales polarize light). Female cuttlefish exhibit a greater number of polarized light displays than males and also alter their behavior when responding to polarized patterns. The use of polarized reflective patterns has led some to suggest that cephalopods may communicate intraspecifically in a mode that is "hidden" or "private" because many of their predators are insensitive to polarized light.
Leucophores
Leucophores, usually located deeper in the skin than iridophores, are also structural reflectors, using crystalline purines, often guanine, to reflect light. Unlike iridophores, however, leucophores have more organized crystals that reduce diffraction. Under a source of white light they produce a white shine, under red light red, and under blue light blue. Leucophores assist in camouflage by providing light areas during background matching (e.g., by resembling light-colored objects in the environment) and disruptive coloration (by making the body appear to be composed of high-contrast patches).
The reflectance spectra of cuttlefish body patterns (stipple, mottle, disruptive) and of several natural substrates can be measured using an optic spectrometer.
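A common generic workflow for such measurements, not necessarily the exact protocol used in the underlying studies, is to normalize raw spectrometer counts against a white reference and a dark reading at each wavelength:

```python
import numpy as np

def percent_reflectance(sample_counts, white_counts, dark_counts):
    """Standard normalization of raw spectrometer counts to percent
    reflectance: R = (S - D) / (W - D) * 100, wavelength by wavelength.
    A generic method sketch; instruments and references vary."""
    s = np.asarray(sample_counts, dtype=float)
    w = np.asarray(white_counts, dtype=float)
    d = np.asarray(dark_counts, dtype=float)
    return (s - d) / (w - d) * 100.0

# Illustrative, made-up readings at three wavelengths (450, 550, 650 nm)
print(percent_reflectance([120, 300, 260], [900, 1000, 950], [20, 22, 21]))
```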
Intraspecific
Cuttlefish sometimes use their color patterns to signal future intent to other cuttlefish. For example, during agonistic encounters, male cuttlefish adopt a pattern called the intense zebra pattern, considered to be an honest signal. If a male intends to attack, it adopts a "dark face" change; otherwise, it remains pale.
In at least one species, female cuttlefish react to their own reflection in a mirror and to other females by displaying a body pattern called "splotch". However, they do not use this display in response to males, inanimate objects, or prey. This indicates they are able to discriminate same-sex conspecifics, even when human observers are unable to discern the sex of a cuttlefish in the absence of sexual dimorphism.
Female cuttlefish signal their receptivity to mating using a display called precopulatory grey. Male cuttlefish sometimes use deception toward guarding males to mate with females. Small males hide their sexually dimorphic fourth arms, change their skin pattern to the mottled appearance of females, and change the shape of their arms to mimic those of nonreceptive, egg-laying females.
Displays on one side of a cuttlefish can be independent of the other side of the body; males can display courtship signals to females on one side while simultaneously showing female-like displays with the other side to stop rival males interfering with their courtship.
Interspecific
Cuttlefish use the deimatic display (a rapid change to black and white with dark 'eyespots' and contour, and spreading of the body and fins) to startle small fish that are unlikely to prey on them, but use the flamboyant display towards larger, more dangerous fish, and give no display at all to chemosensory predators such as crabs and dogfish.
One dynamic pattern shown by cuttlefish is dark mottled waves apparently repeatedly moving down the body of the animals. This has been called the passing cloud pattern. In the common cuttlefish, this is primarily observed during hunting, and is thought to communicate to potential prey – "stop and watch me" – which some have interpreted as a type of "hypnosis".
Camouflage
Cuttlefish are able to rapidly change the color of their skin to match their surroundings and create chromatically complex patterns, despite their inability to perceive color, through a mechanism that is not completely understood. They have been observed assessing their surroundings and matching the color, contrast, and texture of the substrate even in near-total darkness.
The color variations in the mimicked substrate and animal skin are similar. Depending on the species, the skin of cuttlefish responds to substrate changes in distinctive ways. By changing naturalistic backgrounds, the camouflage responses of different species can be measured. Sepia officinalis changes color to match the substrate by disruptive patterning (contrast to break up the outline), whereas S. pharaonis matches the substrate by blending in. Although camouflage is achieved in different ways, and in the absence of color vision, both species change their skin colors to match the substrate. Cuttlefish adapt their camouflage pattern in ways that are specific to a particular habitat: one animal may settle in the sand and appear one way, while another animal a few feet away, in a slightly different microhabitat such as a patch of algae, will be camouflaged quite differently.
Cuttlefish are also able to change the texture of their skin. The skin contains bands of circular muscle which, as they contract, push fluid up; these can be seen as little spikes, bumps, or flat blades. This can help with camouflage when the cuttlefish becomes texturally as well as chromatically similar to objects in its environment, such as kelp or rocks.
Diet
While the preferred diet of cuttlefish is crabs and fish, they feed on small shrimp shortly after hatching.
Human uses
As food
Cuttlefish are caught for food in the Mediterranean, East Asia, the English Channel, and elsewhere.
In East Asia, dried, shredded cuttlefish is a popular snack food. In the Suiyuan shidan, a Qing dynasty manual of Chinese gastronomy, the roe of the cuttlefish is considered a difficult-to-prepare but sought-after delicacy. Cuttlefish thick soup is a signature dish in Taiwan.
Cuttlefish are quite popular in Europe. For example, in northeast Italy, they are used in risotto al nero di seppia (risotto with cuttlefish ink), also found in Croatia and Montenegro as crni rižot (black risotto), and in various recipes (either grilled or stewed) often served together with polenta. Catalan cuisine, especially that of the coastal regions, uses cuttlefish and squid ink in a variety of tapas and dishes such as arròs negre. Breaded and deep-fried cuttlefish is a popular dish in Andalusia. In Portugal, cuttlefish is present in many popular dishes. Chocos com tinta (cuttlefish in black ink), for example, is grilled cuttlefish in a sauce of its own ink. Cuttlefish is also popular in the region of Setúbal, where it is served as deep-fried strips or in a variant of feijoada, with white beans. Black pasta is often made using cuttlefish ink.
Sepia
Cuttlefish ink was formerly an important dye, called sepia. To extract the sepia pigment from a cuttlefish (or squid), the ink sac is removed and dried, then dissolved in a dilute alkali. The resulting solution is filtered to isolate the pigment, which is then precipitated with dilute hydrochloric acid; the isolated precipitate is the sepia pigment. It is relatively chemically inert, which contributes to its longevity. Today, artificial dyes have mostly replaced natural sepia.
Metal casting
Cuttlebone has been used since antiquity to make casts for metal. A model is pushed into the cuttlebone and removed, leaving an impression. Molten gold, silver or pewter can then be poured into the cast.
Smart clothing
Research into replicating biological color-changing has led to engineering artificial chromatophores out of small devices known as dielectric elastomer actuators. Engineers at the University of Bristol have engineered soft materials that mimic the color-changing skin of animals like cuttlefish, paving the way for "smart clothing" and camouflage applications.
Pets
Though cuttlefish are rarely kept as pets, due in part to their fairly short life spans, the most commonly kept are Sepia officinalis and Sepia bandensis. Cuttlefish may fight or even eat each other if there is inadequate tank space for multiple individuals.
| Biology and health sciences | Cephalopods | Animals |
1914541 | https://en.wikipedia.org/wiki/Northern%20giraffe | Northern giraffe | The northern giraffe (Giraffa camelopardalis), also known as three-horned giraffe, is the type species of giraffe, G. camelopardalis, and is native to North Africa, although alternative taxonomic hypotheses have proposed the northern giraffe as a separate species.
Once abundant throughout Africa, into the 19th century the northern giraffe ranged from Senegal, Mali, and Nigeria in West Africa north to Egypt. The similar West African giraffe lived in Algeria and Morocco in ancient times until its extinction, caused by the dry Saharan climate.
Giraffes collectively are listed as Vulnerable on the IUCN Red List, as the global population is thought to consist of about 97,000 individuals as of 2016.
Taxonomy and evolution
The current IUCN taxonomic scheme lists one species of giraffe with the name G. camelopardalis and nine subspecies. A 2021 whole-genome sequencing study suggests that the northern giraffe is a separate species, and postulates the existence of three distinct subspecies and, more recently, one extinct subspecies.
Description
Often mistaken for the southern giraffe, the northern giraffe differs in the shape and size of the two distinctive horn-like protuberances on its forehead, known as ossicones, which are longer and larger than those of the southern giraffe. Male northern giraffes also have a third, cylindrical ossicone in the center of the head, just above the eyes.
Distribution and habitat
Northern giraffes live in savannahs, shrublands, and woodlands. After numerous local extinctions, northern giraffes are the least numerous giraffe species and the most endangered. In East Africa, they are mostly found in Kenya and southwestern Ethiopia, and rarely in northeastern Democratic Republic of the Congo and South Sudan. In Central Africa, there are about 2,000 in the Central African Republic, Chad, and Cameroon. Once widespread in West Africa, a few hundred northern giraffes are now confined to the Dosso Reserve of Kouré, Niger. They are isolated in South Sudan, Kenya, Chad, and Niger, and commonly live both in and outside of protected areas.
The earliest ranges of the northern giraffe were in Chad during the late Pliocene. Once abundant in North Africa, they lived in Algeria from the early Pleistocene of the Quaternary period, and in Morocco, Libya, and Egypt until their extinction there around AD 600, as the drying climate of the Sahara made conditions impossible for giraffes. Giraffe bones and fossils have been found across these countries.
| Biology and health sciences | Giraffidae | Animals |
1914985 | https://en.wikipedia.org/wiki/Lingulata | Lingulata | Lingulata is a class of brachiopods, among the oldest of all brachiopods, having existed since the Cambrian period. They are also among the most morphologically conservative of the brachiopods, having lasted from their earliest appearance to the present with very little change in shape. Shells of living specimens found today in the waters around Japan are almost identical to ancient Cambrian fossils.
The Lingulata have tongue-shaped shells (hence the name Lingulata, from the Latin word for "tongue") with a long fleshy stalk, or pedicle, with which the animal burrows into sandy or muddy sediments. They inhabit vertical burrows in these soft sediments with the anterior end facing up and slightly exposed at the sediment surface. The cilia of the lophophore generate a feeding and respiratory current through the lophophore and mantle cavity. The gut is complete and J-shaped.
Lingulata shells are composed of a combination of calcium phosphate, protein and chitin. This is unlike most other shelled marine animals, whose shells are made of calcium carbonate. The Lingulata are inarticulate brachiopods, so named for the simplicity of their hinge mechanism. This mechanism lacks teeth and is held together only by a complex musculature. Both valves are roughly symmetrical.
The genus Lingula (Bruguiere, 1797) is the oldest known animal genus that still contains extant species. It is primarily an Indo-Pacific genus that is harvested for human consumption in Japan and Australia.
| Biology and health sciences | Lophotrochozoa | Animals |
1915903 | https://en.wikipedia.org/wiki/Tegenaria%20domestica | Tegenaria domestica | The spider species Tegenaria domestica, commonly known as the barn funnel weaver in North America and the domestic house spider in Europe, is a member of the funnel-web family Agelenidae.
Distribution and habitat
Domestic house spiders range nearly worldwide. Their global distribution encompasses Europe, North Africa, parts of the Middle East and Central Asia. They have been introduced to the Americas, Australia, and New Zealand.
In Europe, they are found from as far north as Scandinavia to as far south as Greece and the Mediterranean Sea. The species is recorded in the checklist of Danish spider species.
In North America, the species is found from as far north as maritime Canada down to the Southern United States.
Appearance
Domestic house spiders possess elongated bodies with a somewhat flattened cephalothorax and straight abdomen. Their body/legs ratio is typically 50-60%.
T. domestica is one of the smaller species in the genus Tegenaria; females average slightly larger in body length than males.
It was previously thought to be a close relative of the giant house spider, which has since been moved to the genus Eratigena and separated into three distinct species.
Males are usually distinguished from females by their longer, more agile legs, bloated pedipalps, and more elongated abdomen; other distinctions are strictly behavioral.
The coloring of an adult T. domestica is typically dark orange to brown or beige (sometimes even grayish), with striped legs and two dull, black, longitudinal stripes on the cephalothorax. The abdomen is mottled in brown, beige, and grey, with a pattern of chevrons running lengthwise along the top (similar to an argyle pattern).
Behavior
Barn funnel weavers are active and agile hunters, relying on their vision and movement speed as well as their web. Six of their eight eyes face forward, allowing them to detect movement and focus on prey items. These spiders are also photosensitive, moving toward or fleeing from light depending on the situation.
Like many agelenids, barn funnel weavers are very precise in their movements. Instead of following a continuous gait pattern, they usually move in short intervals, stopping several times before deciding where to head next.
This spider builds a funnel-shaped web to catch its prey. The web usually consists of a multitude of stressed silk threads spun over a flat surface, with a funnel-like structure reaching back into a corner or sheltered area. The spider sits at the back of the funnel, waiting for prey to disturb the web. When the silk threads are disturbed, vibrations travel to the spider, notifying it that prey is at the mouth of the funnel. The spider then rushes out, attacks the prey, and drags it back to the rear of the funnel to consume. These webs can become quite large if undisturbed.
Life cycle
Young T. domestica spiders hatch from the egg sac and grow to maturity within a year. Male numbers peak in the summer months of June and July, indicating that mating typically occurs during this time. The males usually die in autumn soon after mating and rarely live for over a year. As with most spiders, males of the species are often consumed by the females after mating. Females regularly survive the winter and into the next year, provided they find a suitably sheltered area, and may produce a number of egg sacs. Females that dwell indoors typically live for one to two years or more on the same web, with some T. domestica females reportedly surviving for as long as seven years in rarely disturbed, temperate places (attics, basements, cellars, storage rooms, etc.).
Defense mechanisms
T. domestica is not a particularly aggressive species and will often retreat when confronted. As long as its web is undisturbed, the spider will usually withdraw to the funnel tip and stop responding to any movement whatsoever. If the web is attacked and partially destroyed, the spider will attempt to flee the area or may huddle its body into a ball against the wall or some other nearby object. To usher the spider into a container for removal, place the open end in front of the spider and use the container lid, if so equipped, or a similar object to push or corral the spider from behind. Since a spider's first reflex after being disturbed from the rear is to move forward, the spider will usually advance into the container placed in front of it.
Tegenaria species rarely bite. If they do it will be in self-defense, and the bite is unlikely to break the skin.
| Biology and health sciences | Spiders | Animals |
1917015 | https://en.wikipedia.org/wiki/Taxodium%20distichum | Taxodium distichum | Taxodium distichum (baldcypress, bald-cypress, bald cypress, swamp cypress; cipre in Louisiana) is a deciduous conifer in the family Cupressaceae. It is native to the southeastern United States. Hardy and tough, this tree adapts to a wide range of soil types, whether wet, salty, dry, or swampy. It is noted for the russet-red fall color of its lacy needles.
This plant has some cultivated varieties and is often used in groupings in public spaces. Common names include bald cypress, swamp cypress, white cypress, tidewater red cypress, gulf cypress and red cypress.
The bald cypress was designated the official state tree of Louisiana in 1963.
In some cultures, the bald cypress symbolizes longevity, endurance, and mourning.
Bald cypress trees are valued because of their rot-resistant heartwood when the trees are mature. Because of this, the trees are often used for making fence posts, doors, flooring, caskets, and a number of other items.
Description
Taxodium distichum is a large, slow-growing, and long-lived tree. It typically grows to heights of 10–40 m and has a trunk diameter of 1–2 m.
The main trunk is often surrounded by cypress knees. The bark is grayish brown to reddish brown, thin, and fibrous, with a stringy texture and a vertical, interwoven pattern of shallow ridges and narrow furrows.
The leaves are needle-like, simple, alternate, green, and linear, with entire margins. In autumn, they turn yellow or copper red before falling; the bald cypress is deciduous.
This species is monoecious, with male and female cones forming on a single plant on slender, tassel-like structures near the edges of branchlets. The tree produces cones in April, and the seeds ripen in October. The male and female strobili are produced from buds formed in late autumn, with pollination in early winter, and mature in about 12 months. Male cones emerge on slender panicles. Female cones are round, resinous, and green while young, then turn hard and brown as they mature. They are globular, with 20 to 30 spirally arranged, four-sided scales, each bearing one, two, or rarely three triangular seeds; each cone contains 20 to 40 large seeds in total. The cones disintegrate at maturity to release the seeds, which are the largest of any species of Cupressaceae and are produced every year, with heavy crops every 3–5 years. The seedlings have three to nine, but usually six, cotyledons each.
The bald cypress grows in full sunlight to partial shade. This species grows best in wet or well-drained soil but can tolerate dry soil, and it is moderately tolerant of salt spray. It does well in acid, neutral, and alkaline soils, across the full range of light (sandy), medium (loamy), and heavy (clay) soils, and can also grow in saline soils. It can tolerate atmospheric pollution. The cones are often consumed by wildlife.
The tallest known specimen, near Williamsburg, Virginia, is 44.11 m (145 ft) tall, and the stoutest known, in Real County near Leakey, Texas, has a circumference of 475 in (39 ft). The National Champion Bald Cypress is recognized as the largest member of its species in the country and is listed as such on the National Register of Champion Trees by American Forests. It grows in the Cat Island National Wildlife Refuge, near St. Francisville, Louisiana, and is estimated to be approximately 1,500 years old. The oldest known living specimen, found along the Black River in North Carolina, is at least 2,624 years old, making it the oldest living tree in eastern North America.
The Senator, a bald cypress in Longwood, Florida, lost a portion of its height in the hurricane of 1925 and was estimated to be 3,500 years old. It was accidentally burned down in 2012.
"Big Dan" is one of the oldest living specimens and is found near High Springs, Florida at Camp Kulaqua. It is estimated to be 2,704 years old as of 2020. It is growing in the Hornsby Spring swamp run and is more than 35 feet in circumference.
Gallery
Taxonomy
The closely related Taxodium ascendens (pond cypress) is treated by some botanists as a distinct species, while others classify it as merely a variety of bald cypress, as Taxodium distichum var. imbricatum (Nutt.) Croom. It differs in shorter leaves borne on erect shoots, and in ecology, being largely confined to low-nutrient blackwater habitats. A few authors also treat Taxodium mucronatum as a variety of bald cypress, as T. distichum var. mexicanum Gordon, thereby considering the genus as comprising only one species.
Habitat and distribution
The native range extends from southeastern New Jersey south to Florida and west to Central Texas and southeastern Oklahoma, and also inland up the Mississippi River. Ancient bald cypress forests, with some trees more than 1,700 years old, once dominated swamps in the Southeast. The original range had been thought to reach only as far north as Delaware, but researchers have since found a natural forest on the Cape May Peninsula in southern New Jersey. The species can also be found growing outside its natural range, in New York and Pennsylvania.
The largest remaining old-growth stands are at Corkscrew Swamp Sanctuary, near Naples, Florida, and in the Three Sisters tract along eastern North Carolina's Black River. The Corkscrew trees are around 500 years old, and some exceed 40 m in height. In 1985, the Black River trees were cored by a dendrochronologist from the University of Arkansas, who found that some began growing as early as AD 364. A subsequent visit to the area in 2019 revealed a tree dated by its tree-ring count to 605 BC, ranking it as the ninth-oldest known tree in the world.
This species is native to humid climates, where annual precipitation ranges from relatively low totals in Central Texas to much higher totals along the Gulf Coast. Although it grows best in warm climates, the natural northern limit of the species is set not by a lack of cold tolerance but by specific reproductive requirements: further north, regeneration is prevented by ice damage to seedlings. Larger trees are able to tolerate much lower temperatures and lower humidity.
In 2012 scuba divers discovered an underwater cypress forest several miles off the coast of Mobile, Alabama, in 60 feet of water. The forest contains trees that could not be dated with radiocarbon methods, indicating that they are more than 50,000 years old and thus most likely lived in the early glacial interval of the last ice age. The cypress forest is well preserved, and when samples are cut they still smell like fresh cypress. A team, which has not yet published its results in a peer-reviewed journal, is studying the site. One possibility is that Hurricane Katrina exposed the grove of bald cypress, which had been protected under ocean floor sediments.
Reproduction and early growth
The bald cypress is monoecious. Male and female strobili mature in one growing season from buds formed the previous year. The male catkins are borne in slender, purplish, drooping clusters that are conspicuous during the winter on this deciduous conifer. Pollen is shed in March and April. Female conelets are found singly or in clusters of two or three. The globose cones turn from green to brownish-purple as they mature from October to December. The cones consist of 9 to 15 four-sided scales that break away irregularly after maturity. Each scale can bear two (rarely three) irregular, triangular seeds with thick, horny, warty coats and projecting flanges. The number of seeds per cone averages 16 and ranges from 2 to 34. Cleaned seeds number from about 5,600 to 18,430 per kg (2,540 to 8,360 per lb).
Seed production and dissemination
Some seeds are produced every year, and good seed crops occur at three- to five-year intervals. At maturity, the cone scales with their resin-coated seeds adhering to them, or sometimes entire cones, drop to the water or ground. This drop of mature seeds is often hastened by squirrels, which eat bald cypress seeds, but usually drop several scales with undamaged seeds still attached to each cone they pick. Floodwaters spread the scales or cones along streams and are the most important means of seed dissemination.
Seedling development
Germination is epigeal. Under swamp conditions, germination generally takes place on a sphagnum moss or a wet-muck seedbed. Seeds will not germinate under water, but some will remain viable for 30 months under water. By contrast, seeds usually fail to germinate on better drained soils because of the lack of surface water. Thus, a soil saturated but not flooded for a period of one to three months after seedfall is required for germination.
After germination, seedlings must grow fast enough to keep at least part of their crowns above floodwaters for most of the growing season. Bald cypress seedlings can endure partial shading, but require overhead light for good growth. Seedlings in swamps often reach heights of 20–75 cm in their first year. Growth is checked when a seedling is completely submerged by flooding, and prolonged submergence kills the seedling.
In nurseries, Taxodium seeds show an apparent internal dormancy that can be overcome by various treatments, usually including cold stratification or submersion in water for 60 days. Nursery beds are sown in spring with pretreated seeds or in fall with untreated seeds. Seedlings usually reach their full nursery height, up to about 100 cm under fertilized conditions, during their first (and usually only) year in the nursery; average heights and diameters of 1-0 nursery-grown seedlings have also been recorded in a seed source test including 72 families.
Control of competing vegetation may be necessary for a year or more for bald cypress planted outside of swamps. Five years after planting on a harrowed and bedded, poorly drained site in Florida, survival was high, but height growth was minimal, probably because of heavy herbaceous competition. Seedlings grown in a crawfish pond in Louisiana, where weed control and soil moisture were excellent through June, grew well in both height and diameter at breast height over five years; a replicate of the same sources planted in an old soybean field, where weed control and soil moisture were poor, reached the same diameter but a smaller average height. Seedlings planted in a residential yard, weeded, and watered also grew markedly taller over the following three years.
Vegetative reproduction
Bald cypress is one of the few conifer species that sprouts. Vigorous sprouts are generally produced from stumps of young trees, and trees up to 60 years old also send up healthy sprouts if cut during the fall or winter. However, survival of these sprouts is often poor, and those that live are usually poorly shaped and do not make quality sawtimber trees. Stumps of trees up to 200 years old may also sprout, but the sprouts are less vigorous and more subject to wind damage as the stump decays. In the only report on the rooting of bald cypress cuttings found in the literature, cuttings from five-year-old trees rooted better than those from older trees.
Ecology
The seeds remain viable for less than one year, and are dispersed in two ways. One is by water: the seeds float and move on water until flooding recedes or the cone is deposited on shore. The second is by wildlife: squirrels eat seeds, but often drop some scales from the cones they harvest. Seeds do not germinate under water and rarely germinate on well-drained soils; seedlings normally become established on continuously saturated, but not flooded, soils for one to three months. After germination, seedlings must grow quickly to escape floodwaters; they often reach a height of 20–75 cm (up to 100 cm in fertilized nursery conditions) in their first year. Seedlings die if inundated for more than about two to four weeks. Natural regeneration is therefore prevented on sites that are always flooded during the growing season. Although vigorous saplings and stump sprouts can produce viable seed, most specimens do not produce seed until they are about 30 years old. In good conditions, bald cypress grows fairly fast when young, then more slowly with age. Trees have been measured to reach 3 m in five years, 21 m in 41 years, and 36 m in height in 96 years; height growth has largely ceased by the time the trees are 200 years old. Some individuals can live over 1,000 years. Determination of the age of an old tree may be difficult because of frequent missing or false rings of stemwood caused by variable and stressful growing environments.
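The slowdown in height growth is easy to see if the measurements quoted above are converted to average growth rates, a simple arithmetic exercise on the figures given in this section:

```python
# Average height growth rates from the measurements quoted above.
measurements = [(3, 5), (21, 41), (36, 96)]   # (height in m, age in years)

for height_m, age_yr in measurements:
    rate = height_m / age_yr
    print(f"{height_m:2d} m in {age_yr:2d} years -> {rate:.2f} m/yr average")
# The mean rate falls from 0.60 to about 0.51 and then 0.38 m/yr with age,
# consistent with height growth largely ceasing by around 200 years.
```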
Bald cypress trees growing in swamps have a peculiarity of growth called cypress knees: woody projections from the root system that rise above the ground or water. Their function was once thought to be to provide oxygen to the roots, which grow in the low-dissolved-oxygen waters typical of a swamp (as in mangroves). However, evidence for this is scant; in fact, the roots of swamp-dwelling specimens whose knees are removed do not decrease in oxygen content, and the trees continue to thrive. A more likely function is structural support and stabilization. Bald cypress trees growing on flood-prone sites tend to form buttressed bases, while trees grown on drier sites may lack this feature. The buttressed base usually begins at the soil surface and extends up to the maximum annual flooding elevation. Buttressed bases and a strong, intertwined root system allow them to resist very strong winds; even hurricanes rarely overturn them.
Many agents damage T. distichum trees. The main damaging (in some cases lethal) agent is the fungus Lauriliella taxodii, which causes a brown pocket rot known as "pecky cypress". It attacks the heartwood of living trees, usually from the crown down to the roots. A few other fungi attack the sapwood and heartwood, but they do not usually cause serious damage. Insects such as the cypress flea beetle (Systena marginalis) and the bald cypress leafroller (Archips goyerana) can seriously damage trees by destroying leaves, cones, or bark. Nutrias also clip and uproot young bald cypress seedlings, sometimes killing a whole plantation in a short amount of time.
In 2002, the Indiana Department of Natural Resources identified T. distichum as a state protected plant with the status of Threatened. Globally, the species is listed as of Least Concern by the IUCN.
Cultivation and uses
The bald cypress is hardy and can be planted in hardiness zones 4 through 10 in the US. The species is a popular ornamental tree, cultivated for its light, feathery foliage and orangey-brown to dull red autumnal color. In cultivation it thrives on a wide range of soils, including well-drained sites where it would not grow naturally because juvenile seedlings cannot compete with other vegetation. Cultivation is successful far north of its native range, even into southern Canada, and it is also commonly planted in Europe, Asia, and other temperate and subtropical locales. Additionally, it is sometimes planted in gardens and parks in eastern Australia, mostly in temperate to warm-temperate areas, although two trees thriving in an open location along a roadside drain north of Port Douglas, Queensland (at approximately -16.4853970, 145.4134609) suggest the species can also grow well in tropical conditions. It does, however, require hot summers for good growth.
When planted in locales with the cool summers of oceanic climates, growth is healthy but very slow; some specimens in northeastern England have only grown to 4–5 m tall in 50 years and do not produce cones. One of the oldest specimens in Europe was planted in the 1900s in the Arboretum de Pézanin in Burgundy, France. An alley of Louisiana cypress trees was planted in the 18th century in the park of the Château de Rambouillet, southwest of Paris.
Bald cypress produces high merchantable yields: in virgin stands, yields of 112 to 196 m³/ha were common, and some stands may have exceeded 1,000 m³/ha.
Building material
Still usable prehistoric wood is often found in swamps as far north as New Jersey, and occasionally as far north as Connecticut, although it is more common in the southeastern states. This partially mineralized wood is harvested from swamps in the southeastern states, and is greatly prized for special uses such as for carvings. The fungus Lauriliella taxodii causes a specific form of the wood called "pecky cypress", which is used for decorative wall paneling.
The bald cypress was used by Native Americans to create coffins, homes, drums and canoes. Joshua D. Brown, the first settler of Kerrville, Texas, made his living producing shingles from bald cypress trees that grew along the Guadalupe River of the Texas Hill Country.
In the southern United States, the odorless wood, which closely resembles that of Cupressus species, has been valued since colonial times for its resistance to water, making it ideal for use wherever the wood is exposed to the elements. In the first half of the 20th century, it was marketed as "The Wood Eternal".
The lumber is valuable for timber framing, building materials, fence posts, planking in boats, river pilings, doors, blinds, flooring, shingles, garden boxes, caskets, interior trim and cabinetry.
Bald cypress timbers are commonly available in lengths up to 24 feet, with predictable lead times for projects. The wood is a very light tan in color and weathers to a uniform silvery gray; paint and stains adhere well to it. Bald cypress most often sees use in outdoor structures such as timber-frame pavilions, mid-size farmers' markets, porches, exterior awnings, and decorative trusses, where the species' weather resistance helps ensure long life.
| Biology and health sciences | Cupressaceae | Plants |
1917508 | https://en.wikipedia.org/wiki/PC-98 | PC-98 | The , commonly shortened to PC-98 or simply , is a lineup of Japanese 16-bit and 32-bit personal computers manufactured by NEC from 1982 to 2003. While based on Intel processors, it uses an in-house architecture making it incompatible with IBM clones; some PC-98 computers used NEC's own V30 processor. The platform established NEC's dominance in the Japanese personal computer market, and, by 1999, more than 18 million units had been sold. While NEC did not market these specific machines in the West, it sold the NEC APC series, which had similar hardware to early PC-98 models.
The PC-98 was initially released as a business-oriented personal computer with backward compatibility with the successful PC-8800 series. The range of the series was later expanded, and by the 1990s it was used in a variety of fields, including industry, education, and hobbies. NEC succeeded in attracting third-party suppliers and a wide range of users, and the PC-98 dominated the Japanese PC market with more than a 60% share by 1991. IBM clones lacked sufficient graphics capabilities to easily handle Japan's multiple writing systems, in particular kanji with its thousands of characters; in addition, Japanese computer manufacturers each marketed personal computers based on their own proprietary architectures for the domestic market. Global PC manufacturers, with the exception of Apple, had failed to overcome the language barrier, and the Japanese PC market was isolated from the global market.
By 1990, typical CPUs and graphics hardware had improved enough that Japanese text could be handled in software. The DOS/V operating system enabled IBM clones to display Japanese text using a software font alone, giving global PC manufacturers an opening into the Japanese PC market. The PC-98 is a non-IBM-compatible x86-based computer and is thus capable of running ported (and localized) versions of MS-DOS and Microsoft Windows. However, as Windows spread, software developers no longer had to code their software separately for each specific platform. An influx of cheaper clone computers from American vendors, and later the popularity of Windows 95, reduced the demand for PC-98 legacy applications, leading NEC to abandon compatibility with the PC-98 platform in 1997 and release the PC98-NX series of Wintel computers, based on the PC System Design Guide.
History
Background
NEC had developed mainframes since the 1950s. By 1976, the company had the fourth-highest mainframe sales (10.4%) in Japan, after IBM (29.6%), Fujitsu (20.1%), and Hitachi (15.8%). NEC did not have a presence in the consumer market, and its subsidiary, New Nippon Electric (later NEC Home Electronics), had limited success with consumer products. NEC's Information Processing Group, which developed mainframes and minicomputers, had not developed a personal computer because it assumed microprocessors were unsuitable for computing, lacking performance and reliability. However, the Electronic Device Sales Division developed the microprocessor evaluation kit TK-80, which became unexpectedly popular among hobbyists. Goto, one of the TK-80's developers, observed the rise in popularity of personal computers at the 1977 West Coast Computer Faire in San Francisco. Goto and his section manager decided to develop a personal computer despite criticism from the Information Processing Group. Because the division had only a small distribution network of electronic parts stores, they asked New Nippon Electric to sell the personal computers through its consumer distribution network.
The Electronic Device Sales Division launched the PC-8001 in 1979, and it captured 40% of the Japanese personal computer market in 1981, a success remarked upon by NEC's vice president.
In April 1981, NEC divided its personal computer business among three groups, each specializing in a particular series: New Nippon Electric made 8-bit home computers (the PC-6000 series), the Information Processing Group made 16-bit business personal computers, and the Electronic Devices Group made other personal computers such as the PC-8000, PC-8800 and PC-100 series.
Development
In the Information Processing Small Systems Division, Shunzo Hamada directed the project and Kazuya Watanabe did the product planning. The development team initially planned for the new personal computer to be a small version of the business computer line that originated with the 1973 NEAC System 100. Watanabe stated that the personal computer must have Microsoft BASIC, considered compatibility of peripheral devices with previous NEC PCs, and disclosed the specifications of its expansion slot. In September 1981, Hamada asked ASCII's Kazuhiko Nishi to rewrite N88-BASIC to run on the Intel 8086 processor, and Nishi replied that he wanted to talk with Bill Gates. Three months later, Nishi rejected Hamada's request because Microsoft was busy with the development of GW-BASIC and did not want to produce more variants of Microsoft BASIC. Nishi told him, "Microsoft is rewriting a BASIC that has the same functions with more structured internal code, and it will be sold as the definitive 16-bit version named GW-BASIC. We'll provide a BASIC sooner if you choose the Japanese adoption of GW-BASIC." Hamada replied, "As I said, we want a BASIC that is compatible with the previous ones." They could not reach an agreement.
Hamada could not decide which plan to develop, a small business computer or a personal computer, because the potential of Watanabe's plan was uncertain. While visiting software companies to collect and research applications for the PC-8001 and PC-8801, Hamada and Watanabe discovered that the consumer market wanted a 16-bit machine compatible with both PCs. Hamada decided to pursue both plans for different markets. In April 1982, the small business personal computer became the NEC System 20 model 15, which used a proprietary 16-bit microprocessor. The machine was introduced as a new model in the traditional business computer line, so it attracted little attention.
In February 1982, the software development team started reverse-engineering N88-BASIC and designing N88-BASIC(86). After the team finished in March 1982, it started development of the PC-9801 (the N-10 Project). A PC-9801 prototype was completed at the end of July 1982. The code of N88-BASIC(86) was written completely from scratch, but Nishi pointed out that the bytecode matched Microsoft's, and it was unclear whether copyright law applied to bytecode. Nishi suggested to Hamada that NEC purchase Microsoft products in an amount corresponding to the license fee, and that N88-BASIC(86) display copyright notices for both Microsoft and NEC. Hamada agreed.
The team considered third-party developers to be very important for expanding the market, and provided 50–100 prototypes and technical information to independent companies free of charge.
In 1981, the Terminal Units Division of the Information Processing Group also launched the N5200 personal computer series, which was branded as a "personal terminal". It used an Intel 8086 processor and a μPD7220 display controller. Its architecture was similar to that of the PC-98, but it mostly ran the proprietary operating system PTOS. NEC introduced it as an intelligent terminal or a workstation, and it was kept distinct from the personal computer lines. For this market, Fujitsu released a comparable machine in 1981, and IBM Japan released the Multistation 5550 in 1983.
Release and growth
The first model, the PC-9801, launched in October 1982 and employs an 8086 CPU running at a clock speed of 5 MHz, with two μPD7220 display controllers (one for text, the other for video graphics). It shipped with 128 KB of RAM, expandable to 640 KB, and its 8-color display has a maximum resolution of 640×400 pixels.
When the PC-9801 launched in 1982, it was initially priced at 298,000 yen (about US$1,200 in 1982 dollars). It can use PC-88 peripherals such as displays and floppy disk drives, and it can run software developed for N88-BASIC with a few modifications. New buyers had to choose between an expensive 1232 KB 8-inch floppy drive and a smaller-capacity 320 KB 5¼-inch floppy drive. The basic system could display only JIS X 0201 characters, including numbers, English characters, and half-width kana, so most users added an optional kanji ROM board in order to use a Japanese word processor. Its successor, the PC-9801F, employs an 8086-2 CPU, which can selectively run at either 5 or 8 MHz. The F2 model contains two 640 KB 5¼-inch 2DD (quad-density) floppy drives and a JIS level 1 kanji (2,965 characters) font ROM, and was priced at 398,000 yen (about US$1,700 in 1983). It was positively received by engineers and businesses. Ozawa explained why the PC-9801F used a 640 KB floppy drive: "For Japanese business software, 320 KB is small, 640 KB is just barely enough, and 1 MB is preferable. We wanted to choose a 1 MB floppy drive, but the 8-inch drive is expensive, and the 5-inch drive lacks reliability. So we think 640 KB is the best choice. Also, it can read a 320 KB floppy disk."
The Electronic Devices Group launched the PC-100 in October 1983, attempting to offer a GUI similar to that of the Apple Lisa. The PC-100 did not sell well because of its poor timing and high cost. Moreover, its marketing competed with the Information Processing Group's PC-98, which unsettled distributors. In December 1983, NEC vice president Ouchi decided that NEC would consolidate its personal computer business into two divisions: NEC Home Electronics would handle the 8-bit home computer line, and Nippon Electric's Information Processing Group would handle the 16-bit personal computer line. The Electronic Devices Group passed its personal computer business to NEC Home Electronics.
Fujitsu released the FM-16β in December 1984. It has an Intel 80186 CPU at 8 MHz and a 1.2 MB 5¼-inch 2HD (high-density) floppy drive. The FM-16β failed because it bundled CP/M-86 rather than MS-DOS and was marketed by Fujitsu's Electronic Devices department instead of its Computers department. Fujitsu modified these policies in mid-1985, but it was too late. Earlier, Fujitsu had bundled a business software package with the FM-11 (the FM-16β's predecessor); this discouraged users from purchasing third-party software and tied the machine to specific uses, which prevented Fujitsu from expanding its platform.
In response to the FM-16β, NEC introduced the PC-9801M2, which has two 5¼-inch 2HD floppy drives but cannot read 2DD floppy disks. The PC-9801VM, released in July 1985, uses an NEC V30 CPU clocked at 10 MHz. The VM2 model shipped with two 5¼-inch 2HD floppy drives and supports both 2DD and 2HD floppy disks. It became the best-selling computer in Japan, with annual sales of 210,000 units.
NEC permitted software companies to bundle a subset of MS-DOS 2.11 without a license fee between 1983 and 1987; ASCII and Microsoft allowed this so that MS-DOS could enter the market and compete with CP/M-86. It also let users buy a self-contained application package without purchasing the operating system separately. The PC-98 occupied half of the Japanese personal computer market at the end of 1983. As of March 1984, 700 software packages were available for the PC-98. In 1987, NEC announced that one million PC-98s had been shipped and that about 3,000 software packages were available.
NEC took care to maintain compatibility and continuity. The PC-9801VM can select a clock frequency of either 8 or 10 MHz and also offered an optional 8086 board, because the V30 has different instruction timings. The V30 also has unique instructions that are not implemented in other Intel x86 processors. Some PC-98 applications use them, so the PC-9801VX (1986) was designed to run either an Intel 80286 or a V30, selectable by the user. The PC-9801RA (1988) has both an Intel 80386 and a V30. The PC-9801DA (1990) dropped the V30, but its clock speed remained configurable.
NEC focused heavily on financing advertisements and exhibitions, and its spending grew substantially from the 1970s to 1985.
While NEC did not market these specific machines in the West, it sold the NEC APC III, which had similar hardware to early PC-98 models. However, NEC began selling an IBM clone (the APC IV) outside Japan in 1986. By 1990, millions of PC-9800/9801 units had been sold in Japan.
Race with laptops and PC-98 clones
Toshiba had been developing laptop computers since the autumn of 1983, while its desktops failed in the Japanese PC market. In October 1986, Toshiba introduced the J-3100, a version of the T3100 that could handle Japanese text; NEC did not expect it to become the first successful laptop computer. In Japan, a typical office layout is an open office made up of rows of tightly packed desks, so laptop computers were well received by corporate customers. In the same month, NEC introduced the PC-98LT laptop computer. This model had poor compatibility with the PC-9801 and could not generate a significant profit. NEC realized that, despite the difficulty, the PC-98 needed a new custom chipset to make the motherboard smaller.
In March 1987, Epson announced the first PC-98 clone, the PC-286 series of desktop computers. NEC investigated and sued Epson, claiming that its BIOS infringed NEC's copyright. Epson canceled its PC-286 models 1–4 and released the PC-286 model 0, whose BIOS was written by another team under a clean-room design; it did not have a built-in BASIC interpreter. NEC countered that the PC-286 model 0 lacked compatibility with the PC-98. Although it seemed NEC would not be able to win the lawsuit, Epson settled with NEC in November 1987 after considering the damage the dispute would do to its reputation.
The PC-286 model 0 employs an Intel 80286 processor operating at 10 MHz, 25% faster than NEC's PC-9801VX, which used the same CPU at 8 MHz. In June 1987, NEC released a 10 MHz version of the PC-9801VX (the VX01, VX21 and VX41 models). NEC added a BIOS signature check to its operating systems to prevent non-NEC machines from booting them; this was commonly called the "EPSON check". In September 1987, Epson introduced the PC-286V and PC-286U and also released the BASIC Support ROM to add a BASIC interpreter to its computers. Epson also bundled the Software Installation Program, a patch kit that removed the EPSON check. Both machines were well received thanks to their reasonable prices and better compatibility. In 1988, Epson achieved annual sales of 200,000 units and successfully established PC-98 clones in the Japanese PC market.
In October 1987, Epson released the PC-286L, a PC-98 compatible laptop, before NEC had started development of its own. In March 1988, NEC released the PC-9801LV, a fully PC-98 compatible laptop. This was accomplished with three custom VLSI chips, which were also used in desktops such as the PC-9801UV11 and the PC-9801RA.
In July 1989, Toshiba released the J-3100SS branded as the DynaBook, a true laptop computer which was light and battery operated. It made annual sales of 170,000 units. Four months later, NEC released the PC-9801N and branded it as the "98NOTE". The DynaBook started off well but the 98NOTE outsold it in 1990.
Microsoft and other PC manufacturers developed the AX specification in 1987. It allowed IBM PC clones to handle Japanese text by using special video chips, a Japanese keyboard, and software written for the standard. However, AX machines could not break into the Japanese PC market because of their higher cost and smaller library of compatible software.
The Sharp X68000 and the Fujitsu FM Towns were intended to offer multimedia platforms for home users. Both have richer graphics and sound capabilities than the basic configuration of the PC-98. They enjoyed modest success, but not enough to threaten the dominance of the PC-98.
The Nikkei Personal Computing magazine stated in January 1992: "Users choose a PC considering compatibility and expandability. The PC-9800 compatibles are the most popular, and the IBM PC/AT compatibles also gain strong support. PC users are stubborn and conservative. We conclude these opinions are related to the slump in sales of other PCs, including the Fujitsu FMR, Sharp X68000, AX machines, Canon Navi, and rapidly declining 8-bit machines like MSX."
As a PC game platform
In the early 1980s, home users chose 8-bit machines over 16-bit machines because 16-bit systems were expensive and designed exclusively for business. By the mid-1980s, the Japanese home computer market was dominated by the NEC PC-88, the Fujitsu FM-7, and the Sharp X1. In this era, simulation games were the most popular genre on the PC-98, taking advantage of its higher clock speed and larger memory. The Daisenryaku series and the Romance of the Three Kingdoms series were particularly popular, and they established the PC-98 as a PC game platform.
Towards the end of the 1980s, the Japanese PC game platform slowly shifted from the PC-88 to the PC-98, while the X68000 and the FM Towns held niche markets. In the 1990s, many computer role-playing games were developed for the PC-98 or ported from other platforms, such as Brandish, Dungeon Master and the Alone in the Dark series. The higher display resolution and storage capacity allowed better graphics, but because the PC-98 lacked hardware sprites, most games made for the system were slow-paced. Within this limitation, adult dating sims and visual novels such as Dōkyūsei and YU-NO gained popularity as a revival of 1980s adventure games. After the PC-98 declined, many Japanese PC game developers moved to video game consoles, with the exception of eroge makers, whose titles were distributed through computer stores.
Price war with DOS/V PCs
In the 1980s and early 1990s, NEC dominated the Japanese domestic PC market, with more than 60% of PCs sold being PC-9801s or PC-8801s. In 1990, IBM Japan introduced the DOS/V operating system, which enabled the display of Japanese text on standard IBM PC/AT VGA adapters. Other Japanese PC manufacturers joined the PC Open Architecture Developer Group (OADG) organized by IBM Japan and Microsoft. In October 1992, Compaq released a DOS/V computer priced far below the lowest-priced PC-98, triggering a price war in the Japanese PC market. In 1993, Toshiba introduced DOS/V computers, Epson founded Epson Direct Corporation to sell DOS/V computers, and Fujitsu started selling DOS/V computers branded as FMV.
In November 1992, NEC introduced a mid-range Windows PC, the PC-9821, which contained an Intel 386SX processor, a CD-ROM drive, 16-bit PCM audio playback, MS-DOS 5.0A, and Windows 3.0A. In January 1993, the PC-98 desktop range was expanded into three lines: a high-performance Windows-based line named "98MATE", a low-priced MS-DOS line named "98FELLOW", and an all-in-one desktop line named "98MULTi". PC-98s remained popular among Japanese users because of their large application library.
NEC managed to adopt industry standards and reduce costs. From 1993 to 1995, the PC-98 adopted 72-pin SIMMs, the 3.5-inch 1.44 MB floppy format, IDE storage drives, a 640×480 DOS screen mode, 2D GUI accelerator chips, Windows Sound System, PCI slots, and PCMCIA card slots. NEC also outsourced motherboard manufacturing to Taiwanese companies such as ECS and GVC (later acquired by Lite-On).
Decline
While other Japanese domestic platforms had already disappeared, it was Windows 95 that overturned the dominance of the PC-98. In a platform-independent environment, the PC-98's architectural differences offered no benefit and only increased the development resources needed to support it.
During the development of Windows 95, NEC kept an average of 20 engineers at Microsoft's office in Seattle. Even though the PC-98 uses some IBM clone components, Windows requires a special HAL and drivers to support its IRQ assignments, I/O ports, and C-bus. The Nikkei Personal Computing magazine wrote: "The PC-98 features a number of MS-DOS applications, but there is no difference between the PC-98 and PC/AT clones when using Windows 95. The status of the PC-98 series is not based on its hardware features or the number of software titles and peripherals, but on its strength in promotion, parts procurement, and faith in the NEC brand."
In 1997, NEC introduced the PC98-NX series as its main personal computer line; it conformed to the PC System Design Guide and was a Windows-based IBM PC compatible, though not DOS/V compatible. The PC-9801's last successor was the Celeron-based PC-9821Ra43 (with a clock frequency of 433 MHz, using a 440FX chipset-based motherboard design from 1998), which appeared in 2000. NEC announced that the PC-98 would be discontinued in 2003, and a total of 18.3 million PC-98s had shipped when shipments ended in March 2004. The last version of Windows to support the PC-98 is Windows 2000.
Hardware
The PC-98 differs from the IBM PC in many ways. For instance, it uses its own 16-bit C-bus (Cバス) instead of the ISA bus, and its BIOS, I/O port addressing, memory management, and graphics output are also different. However, localized versions of MS-DOS, Unix, OS/2, and Windows run on PC-9801s.
Expansion bus
All PC-98 desktop models use a 100-pin expansion slot with 16 data lines and 24 address lines. The bus frequency is 5, 8, or 10 MHz, depending on the model. The PC-H98 and PC-9821A series computers use a proprietary 32-bit local bus slot alongside the 16-bit slots. The 16-bit expansion bus was also called the C-bus (Compatible Bus). The PC-9821Xf, introduced in 1994, shipped with both C-bus slots and PCI slots on the motherboard, the latter replacing the proprietary local bus slot.
Memory
Many PC-9801 models can increase system memory with expansion boards, daughterboards, or proprietary SIMMs. They are limited to 14.6 MB because of the 24-bit address bus and reserved address space. EMS memory boards for the C-bus are also available. The PC-9821Af, introduced in 1993, shipped with standard 72-pin SIMMs, broke the 14.6 MB barrier, and supported up to 79.6 MB of memory. Later desktop models shipped with standard SIMM or DIMM memory.
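The 14.6 MB ceiling follows from the bus width. Below is a minimal sketch of the arithmetic in Python; the reserved-region sizes (a 384 KB upper area for VRAM and BIOS ROM, plus a 1 MB window at the top of the address space) are assumptions consistent with the cited figure, and the exact layout varies by model:

# Sketch: deriving the PC-98 memory ceiling from its 24-bit address bus.
total = 2 ** 24                  # 16,777,216 bytes (16 MiB) addressable
upper_area = 384 * 1024          # assumed reserved for VRAM/BIOS (0xA0000-0xFFFFF)
top_window = 1024 * 1024         # assumed reserved window at the top of the space
usable = total - upper_area - top_window
print(usable / 2 ** 20)          # 14.625 -> the "14.6 MB" limit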
The PC-98XA (1985) and its successors, called high-resolution machines or simply hi-reso machines, support 768 KB of base memory, but their I/O ports and memory addressing differ considerably from those of normal PC-98s.
Storage
Early PC-9801 models supported 1232 KB 8-inch floppy drives and/or 640 KB 5¼-inch floppy drives, each using different IRQ lines and I/O ports; later models support both interfaces. High-density 5¼-inch and 3½-inch floppy disks use the same logical format and data rate as the 1232 KB 8-inch floppy disks. This format became non-standard as the formats introduced with the IBM PC/AT and PS/2 became the industry standard.
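The stated capacities follow directly from disk geometry. A quick check in Python, assuming the commonly documented geometries (77 cylinders × 2 heads × 8 sectors of 1,024 bytes for the 1232 KB format, and 80 × 2 × 8 × 512 for the 640 KB 2DD format):

# Floppy capacity from geometry: cylinders x heads x sectors x bytes per sector.
def capacity_kib(cylinders, heads, sectors, bytes_per_sector):
    return cylinders * heads * sectors * bytes_per_sector / 1024

print(capacity_kib(77, 2, 8, 1024))  # 1232.0 -> the 1232 KB 8-inch/2HD format
print(capacity_kib(80, 2, 8, 512))   # 640.0  -> the 640 KB 2DD format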
The PC-98 supports up to four floppy drives. If the system is booted from a floppy drive, MS-DOS assigns letters to all of the floppy drives before considering hard drives, and does the opposite if booted from a hard drive. With the OS installed on the hard drive, MS-DOS therefore assigns the hard drive as drive "A:" and the floppy as drive "B:", which caused incompatibilities with Windows PC applications; this can be resolved with the SETUP command in Windows 95 by turning on the "/AT" switch, which assigns the Windows system drive to the standard "C:".
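The assignment rule can be sketched as a simple ordering of device lists (a simplified Python model for illustration only; real MS-DOS also deals with partitions and other device types):

# Simplified sketch of PC-98 MS-DOS drive lettering: the boot device's
# class (floppy or hard disk) is lettered before the other class.
def assign_letters(floppies, hard_disks, booted_from_floppy):
    devices = floppies + hard_disks if booted_from_floppy else hard_disks + floppies
    return {chr(ord("A") + i) + ":": dev for i, dev in enumerate(devices)}

print(assign_letters(["fd0"], ["hd0"], booted_from_floppy=False))
# {'A:': 'hd0', 'B:': 'fd0'} -- the hard disk becomes A: when booted from it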
The PC-98 uses several different interfaces of hard drives. Early models used Shugart Associates System Interface (SASI) or ST506, and later models used SCSI or IDE drives.
Graphics
A standard PC-98 has two μPD7220 display controllers (a master and a slave) with 12 KB and 256 KB of video RAM, respectively. The master display controller provides the video timings and the memory addresses for the character generator, which produces a video signal from two bytes of character code and one byte of attribute data. The font ROM contains over 7,000 glyphs, including the single-byte character set JIS X 0201 and the double-byte character set JIS X 0208, although early models provided the double-byte character set as an option. Each character has a variety of display options, including bits for secret (hidden), blinking, reverse, and underline, and three intensity bits (grayscale or RGB). The other display controller is set to slave mode and connected to 256 KB of planar video memory, allowing it to display 640 × 400 pixel graphics with 16 colors out of a palette of 4,096. This video RAM is divided into pages (2 pages × 4 planes × 32 KB in 640 × 400 with 16 colors), and the programmer can control which page is written to and which page is displayed. The slave display controller synchronizes with the master, so the text screen can be overlaid on the graphics.
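These figures are consistent with simple arithmetic, sketched below in Python: one bit per pixel per plane, four planes giving 2^4 = 16 simultaneous colours, and two pages:

# PC-98 planar graphics memory arithmetic.
width, height = 640, 400
plane_bytes = width * height // 8     # 1 bit per pixel per plane -> 32,000 bytes
planes, pages = 4, 2
print(2 ** planes)                    # 16 colours (out of a 4,096-colour palette)
print(pages * planes * 32 * 1024)     # 262,144 bytes = 256 KB of graphics VRAM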
The high-resolution machines (PC-98XA, XL and PC-H98) offered an 1120 × 750 display mode and aimed for tasks such as CAD and word processing.
The PC-9801U (as an option) and the PC-9801VM introduced a custom chip, the GRCG (GRaphic CharGer), to access several memory planes in parallel. The PC-9801VX introduced a blitter chip called the EGC (Enhanced Graphic Charger), which added raster operations and bit shifting.
In 1993, NEC introduced a 2D Windows accelerator card for the PC-98, called the Window Accelerator Board, which employed an S3 86C928. Video cards for the C-bus, local bus, and PCI were also available from other manufacturers. DirectX 7.0a is the last officially supported version for the PC-98.
Sound
The first generation of PC-9801s (the E, F and M models) only has an internal buzzer. The PC-9801U2 and later models can change the buzzer frequency by controlling the programmable interval timer, like the IBM PC speaker. The PC-8801mkIISR home computer, introduced in 1985, has a Yamaha YM2203 FM synthesizer chip, an Atari joystick port, and BASIC sound commands. The optional PC-9801-26 sound card is based on these features, and in some PC-9801 models it is integrated into the motherboard. It was replaced by the PC-9801-26K, which supports the 80286 CPU. This became the most common sound card for playing in-game music on the PC-98.
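Driving the buzzer through the programmable interval timer works like the IBM PC speaker: the output frequency is the timer's input clock divided by a programmed divisor. A hedged sketch in Python follows; the 2.4576 MHz timer clock is the value commonly cited for 5/10 MHz-class PC-98 machines and is an assumption here, as 8 MHz-class machines used a different clock:

# Sketch: choosing a PIT divisor for a target beep frequency.
TIMER_CLOCK_HZ = 2_457_600            # assumed timer input clock

def pit_divisor(target_hz):
    # The timer toggles its output every `divisor` input ticks,
    # so output frequency = clock / divisor.
    return round(TIMER_CLOCK_HZ / target_hz)

print(pit_divisor(440))               # ~5585, roughly an A4 (440 Hz) beep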
The PC-9801-26K was succeeded by the PC-9801-73 (1991) and PC-9801-86 (1993) sound cards, which employ the YM2608 FM synthesizer and add support for CD-quality PCM playback. The latter card's reasonable price and full backward compatibility with the 26K gained it strong support in PC games. Because of a lack of DMA support and poor sound drivers, it often had issues in Windows, producing popping and clicking sounds. Late PC-9821 models use a Crystal Semiconductor Windows Sound System audio codec to resolve this, but the newer sound chip is not compatible with the older conventional sound cards. The PC-9801-118 (1995) sound card has both the YMF297 (a hybrid of the YM2608 and YMF262) and WSS audio, but its PCM playback is not compatible with the 86 sound card.
Roland released a music production starter kit for the PC-98 that combined the MT-32 synthesizer module, a MIDI interface card, and MIDI editing software. Creative Labs developed a C-bus variant of the Sound Blaster 16.
Keyboard
The first PC-9801 model has the same keyboard layout as the PC-8801, except that it adds the XFER conversion key and five function keys. Later models made minor changes: an NFER key, 15 function keys, LED status indicators, and the replacement of the locking CAPS and kana switches with momentary ones.
Mouse
A bus mouse and interface card kit was introduced for the PC-98 in 1983. The PC-9801F3 and later models have a built-in mouse interface. Although the PS/2 port became popular among IBM PC clones in the 1990s, the bus mouse remained in use until the end of the PC-98 line.
Epson clones
Seiko Epson manufactured PC-98 compatible computers between 1987 and 1995, as well as compatible peripherals.
In the 1980s, Epson's clones were ahead of NEC's machines in features such as performance and portability. In the early 1990s, Epson concentrated on a line of low-priced computers with low profit margins, but they did not sell well, which discouraged resellers. NEC also had strong sales in the enterprise market, which Epson lacked. Meanwhile, manufacturers of DOS/V computers began to acquire sales channels and became competitors to both the PC-98 and its Epson clones. Nikkei Personal Computing magazine reported in 1992 that "NEC has various opinions inside the company about the future PC-98, and it is doubtful whether the PC-98 will continue to be the domestic standard. The decline of the 98 compatible machine business may lead to the decline of the PC-98 itself".
In May 1992, Epson released a high-performance machine, the PC-486GR. It has a 32-bit local bus for graphics processing and an Intel 486SX CPU running at 20 MHz, faster than NEC's flagship PC-9801FA, which had a 486SX running at 16 MHz. In January 1993, NEC released the 98MATE to compete with Epson's clones and DOS/V computers.
From 1992 to 1994, Epson sold about 200,000 PC-98 clones every year. As of 1994, Epson expected only 40% growth in its PC sales by 1995, compared with an expected 100% growth in its peripherals sales. However, Nikkei Personal Computer magazine expected Epson to continue manufacturing PC-98 clones for a while, because NEC still held a 50% share of the Japanese PC market.
AST Research Japan released the DualStation 386 SX/16 in 1990, which was both PC-9801 and IBM PC compatible, but it failed because of poor marketing.
Sharp, Sanyo and Seikosha each worked on PC-98 clones, but all gave up. An executive of Sanyo said, "NEC paid far more attention to its copyright than we had imagined. We decided that the loss to our corporate image would be greater than the profit, and cancelled the 98 compatible machine business."
Software
The PC-98 was primarily used for business and industry in Japan from the 1980s to the mid-1990s. As of September 1992, of 16,000 PC-98 software applications, 60% were corporate business applications (including CAD), 10% were operating systems and development tools, and 10% were educational software, with the rest a mix of graphic design, networking, word processing, and games. The Nikkei Personal Computing magazine reported in 1993 that most home users purchased PCs to complete office work at home. The publisher sent a questionnaire to 2,000 readers; of the 1,227 who answered, 82% used their PCs for word processing, 72% for spreadsheets, 47% as a database, and 43% for games.
Ichitaro, a Japanese word processor for the PC-98 and considered one of its killer applications, was released in 1985 and ported to other machines in 1987. A Japanese version of Lotus 1-2-3 was also ported to the PC-98 first, in 1986. One million copies of all Ichitaro versions and 500,000 copies of Lotus 1-2-3 had shipped by 1991.
PC-98 software generally ran from program and data disks (Disk 0 and 1, or A and B). For example, Ichitaro's system disk contained a runtime version of MS-DOS, the main programs, an input method editor (ATOK), and its dictionary file, filling an entire 1.2 MB floppy disk. In the 1980s, most machines had only two floppy drives, because hard disk drives were an expensive add-on.
NEC provided a variety of operating systems, including CP/M-86, Concurrent CP/M, MS-DOS, PC-UX, OS/2, and Windows (with support ending at Windows 2000). Localized versions of NetWare and FreeBSD were also available.
The PC-98 had many game titles designed for it, many of which made creative use of the system's limitations (it was originally designed as a business machine) to great commercial success. Despite hardware specifications inferior to the FM Towns and the X68000, the massive install base and steady flow of game titles (in particular "dōjin"-style dating sims and RPGs, as well as games such as Policenauts, YU-NO: A Girl Who Chants Love at the Bound of this World, Koutetsu no Kishi, Mayonaka no Tantei Nightwalker, MechWarrior, Rusty, Hiōden: Mamono-tachi tono Chikai, Shūjin e no Pert-em-Hru, Corpse Party, Slayers, J.B. Harold Murder Club and Touhou Project) kept it the favored platform for Japanese PC game developers until the rise of the DOS/V clones.
Models
Partial list of PC-98 models sold in the Japanese market (no 1992–2000 models, no notebook models, etc.).
Timeline of PC-9801 models
Reception
Marketing
A journalist explained in 1988 how NEC established its dominance of Japanese personal computers:
NEC responded quickly to the new demand for business personal computers.
NEC succeeded in attracting many third-party suppliers and dominated software production and distribution.
NEC adopted Microsoft's MS-DOS as an operating system for the PC-98.
Western computers lacked Japanese support because of their limited display resolution and memory, so they could not enter the Japanese PC market until DOS/V and faster hardware arrived; IBM Japan, for example, sold the IBM 5550 instead of the IBM PC. Yoshihiko Hyodo, the programmer who developed the text editor VZ Editor, said that two advantages of the PC-98 were its kanji character memory and its non-interlaced monitor, both of which gave users a more comfortable Japanese environment. A senior vice president of Otsuka Shokai (a computer distributor for enterprises) recalled: "Early users such as Kao already had office automation with the PC-8000, but it lacked speed and kanji support. Then the PC-9800 was released, and it was perfect, so distributors and users immediately switched to it."
Shunzo Hamada of NEC thought the biggest reason for the success of the PC-98 was that NEC secured the cooperation of software companies. He said, "Third-party suppliers of Japanese PCs had already grown to a certain extent. However, it was not because they were organized. They emerged by themselves, and hardware manufacturers didn't touch them. When we developed the PC-9800 series, we changed our approach and made a conscious effort to grow them." The PC-98's large software library assured buyers that the machine could be used for all purposes, although most users actually purchased only a few major software packages.
Ichiran Kou, a computer consultant, pointed out that IBM influenced NEC's strategy. Since 1982, NEC had maintained four personal computer lines covering a wide price range, similar to IBM's mainframe business. However, NEC's computers had poor backward compatibility and were criticized for it by users and software developers. After reforming its personal computer lines in 1983, NEC began expanding the PC-9801 series, and its number of models came to exceed its competitors'.
NEC encouraged third-party developers, as IBM did for the IBM PC. The basic hardware of the PC-98 was also similar to the IBM PC, though not IBM compatible. Kou surmised that NEC avoided releasing an IBM compatible PC because the company was proud of having developed its own mainframes.
Yasuhiro Uchida, a literature professor, wrote an essay titled "Users chose the most playable PC". He felt the PC-98 was an "ordinary" 16-bit personal computer, but that it had plenty of games because it did not reject playfulness. He theorized that Fujitsu did not treat its 16-bit personal computers as game platforms, and that IBM regarded the JX's game support as a minor feature, which made their machines less attractive. He concluded that the real value of personal computers must be discovered not by sellers but by consumers.
Legacy
A writer for ASCII magazine wrote that Japanese input methods and the Japanese video game industry developed significantly in the PC-98 era. Because the PC-98 had a kanji character ROM, Japanese applications were developed for it, which in turn spurred the development of Japanese input methods; the two built on each other. Software companies that developed games for the PC-98 quickly expanded into the video game business on the Famicom platform. He believed most programmers of the time learned computer programming on the PC-98.
Criticism
In the late 1980s, competitors accused NEC of monopolizing the Japanese computer market. Takayoshi Shina, the founder of Sord, said, "The Japanese PC market is suffocating because of one company's dominance. There is no freedom. This is why its prices are three to four times as high as America's. To reach the same international prices as America's, we really need an era of clone computers." A software company also complained that "although there are few excellent engineers in Japan, the more incompatible machines appear, the more development resources are divided."
Unlike IBM PCs and the Apple II, Japanese personal computers had short lifespans; NEC released a new model of the PC-98 every year. When the PC-9801VX01/21/41 models brought a new BASIC interpreter supporting the Enhanced Graphic Charger (EGC) chipset, most commercial software did not use it, as it was written in C. Many developers avoided the chipset because they wanted to keep their software less dependent on a specific platform. One software developer said, "Using the special one (EGC) goes against the trend. I don't want to use it if new machines come out so frequently."
| Technology | Specific hardware | null |
1918188 | https://en.wikipedia.org/wiki/Galley | Galley | A galley is a type of ship optimised for propulsion by oars. Galleys were historically used for warfare, trade, and piracy, mostly in the seas surrounding Europe. The type developed in the Mediterranean world during antiquity and continued to exist in various forms until the early 19th century. It typically had a long, slender hull, shallow draft, and often a low freeboard. Most types of galleys also had sails that could be used in favourable winds, but they relied primarily on oars to move independently of wind and current or in battle. The term "galley" originated from a Greek term for a small type of galley and came into use in English from about 1300. It has occasionally been applied to unrelated vessels with similar military functions that were not Mediterranean in origin, such as medieval Scandinavian longships, 16th-century Acehnese ghalis, and 18th-century North American gunboats.
Galleys were the primary warships of the ancient Mediterranean naval powers, including the Phoenicians, Greeks, and Romans, and remained the dominant type of vessel used for war and piracy in the Mediterranean Sea until the start of the early modern period. A final revival of galley warfare occurred in the Baltic Sea during the 18th-century wars between Russia, Sweden, and Denmark. In the Mediterranean, galleys remained in use until the very end of the 18th century, surviving in part because of their prestige and association with chivalry and land warfare. In war, galleys were used in landing raids and as troop transports, and they were very effective in amphibious warfare. While they usually served in wars or in defense against piracy, galleys also served as trade vessels for high-priority or expensive goods up to the end of the Middle Ages. Their oars guaranteed progress where a sailing ship would have been becalmed, and their large crews could defend against attacks from pirates and raiders. This also made the galley one of the safest and most reliable forms of passenger transport, especially for Christian pilgrims during the High and Late Middle Ages.
For naval combat, galleys were equipped with various weapons: rams and occasionally catapults until late antiquity, Greek fire during the Early Middle Ages, and cannons from the 15th century. However, they relied primarily on their large crews to overpower enemy vessels by boarding. Galleys were the first vessels to use heavy gunpowder artillery effectively against other ships and naval fortifications. Early 16th-century galleys carried heavy guns in the bow, aimed by manoeuvring the entire vessel. Initially, gun galleys posed a serious threat to sailing warships, but they were gradually made obsolete by the development of full-rigged ships with superior broadside armament. Galleys were unsuited to the open ocean, far from land and bases of resupply, and had difficulty in rough weather. Their role as flexible cruisers and patrol craft in the Mediterranean was eventually taken over by xebecs and other oar-sail hybrids.
Oars on ancient galleys were usually arranged in 15–30 pairs, from monoremes with a single line of oars to triremes with three lines of oars in a tiered arrangement. Occasionally, much larger polyremes had multiple rowers per oar and hundreds of rowers per galley. Ancient shipwrights built galleys using a labour-intensive, shell-first mortise-and-tenon technique up until the Early Middle Ages, when it was gradually replaced by the less expensive skeleton-first carvel method. The rowing arrangement was also simplified and eventually developed into a system called alla sensile, with up to three rowers sharing a single bench and handling one oar each; this suited skilled, professional rowers. It was further simplified into the a scaloccio method, in which rowers shared a bench but pulled a single large oar, sometimes with seven or more rowers per oar in the very largest war galleys. This method was more suitable for the use of forced labour, both galley slaves and convicts. Most galleys were equipped with sails that could be used when the wind was favourable: basic square sails until the Early Middle Ages, and later lateen sails.
Etymology
The word galley has been attested in English from about 1300. Variants of the same term were established in many other European languages from around 1500, both as a general term for oared warships and more specifically for the Mediterranean-style vessel. The term derives from the Medieval Greek galea, a smaller version of the dromon, the prime warship of the Byzantine navy. The origin of the Greek word is unclear but may be related to galeos, the Greek word for dogfish shark.
Terminology
Throughout history, there has been a wide variety of terms used for different types of galleys. In modern historical literature, a galley is defined as a vessel relying primarily on oars, but which can also use sails when necessary, and which developed in the Mediterranean. "Galley" is also occasionally used as a generic term for any type of oared vessels that are larger than boats and with similar functions but which are built according to other shipbuilding traditions.
It was only from the Late Middle Ages that a unified galley concept started to come into use. Ancient galleys were named according to the number of oars, the number of banks of oars, or the rows of rowers. The terms are based on contemporary language use combined with more recent compounds of Greek and Latin words. The earliest Greek single-banked galleys are called triaconters (from triakontoros, "thirty-oars") and penteconters (pentekontoros, "fifty-oars"). For later galleys with more than one bank of oars, the terminology is based on Latin numerals with the suffix -reme, from remus, "oar". A monoreme has one bank of oars, a bireme two, and a trireme three. A vessel with four or more banks of oars is not practically feasible under human power, as the additional banks would either interfere with one another or sit too high above the waterline to be workable. In describing galleys, any number higher than three therefore did not refer to banks of oars but to additional rowers per oar. Quinquereme (quinque + remus) was literally a "five-oar", but actually meant that there was more than one rower per oar in a bireme or trireme arrangement. For simplicity, many modern scholars refer to these as "fives", "sixes", "eights", "elevens", etc. Anything above six or seven rows of rowers was uncommon, though an entirely unique "forty" is attested from the 3rd century BC. Any galley with more than three or four lines of rowers is often referred to as a "polyreme".
Medieval and early modern galleys were described in terms of the changing designs that evolved after the ancient designs and rowing arrangements had been forgotten. Among the most important is the Byzantine dromon, the predecessor of the Italian galea sottile, the final form of the Mediterranean war galley. As galleys became an integral part of an advanced, early modern system of warfare and state administration, they were divided into a number of ranked grades based on the size of the vessel and the number of its crew. The most basic types were the large commanders' "lantern galleys", half-galleys, galiots, fustas, brigantines, and other smaller types. Naval historian Jan Glete has described this as a sort of predecessor of the later rating system of the Royal Navy and other sailing fleets in Northern Europe.
Similar oared vessels
Classicist Lionel Casson has applied the term "galley" to oared Viking ships of the Early and High Middle Ages, both their well known longship warships and their less familiar merchant galleys. Oared military vessels built on the British Isles in the 11th to 13th centuries were based on Scandinavian designs, but were referred to as "galleys" because of the similarity in function. Many of them were similar to birlinns (a smaller version of the Highland galley), close relatives of longship types like the snekkja. By the 14th century, they were replaced with balingers in southern Britain while longship-type Highland and Irish galleys and birlinns remained in use throughout the Middle Ages in northern Britain.
The French navy and the Royal Navy built a series of "galley frigates" from around 1670 to 1690: small two-decked sailing cruisers with a single row of oarports on the lower deck, close to the waterline. The three British galley frigates also had distinctive names – James Galley, Charles Galley, and Mary Galley. In the late 18th century, the term "galley" was sometimes used to describe small oared gun-armed vessels. In North America, during the American Revolutionary War and other wars with France and Britain, the early US Navy and the navies it fought built vessels referred to as "galleys" or "row galleys". These are today best described as brigantines or Baltic-style gunboats. The North American "galleys" were classified according to their military role, partly because of technicalities in administration and naval financing. In the latter part of the 19th century, the Royal Navy term for the gig (a ship's boat optimised for propulsion by oar) reserved for the captain's use was "galley", even though it was issued to the ship by the navy dockyard as a "gig".
Early history
Among the earliest known watercraft were canoes made from hollowed-out logs, the earliest ancestors of galleys. Their narrow hulls required them to be paddled in a fixed sitting position facing forward, a less efficient form of propulsion than rowing with proper oars, facing backward. Seagoing paddled craft have been attested by finds of terracotta sculptures and lead models in the region of the Aegean Sea from the 3rd millennium BC. However, archaeologists believe that the Stone Age colonization of islands in the Mediterranean around 8,000 BC required larger seaworthy vessels that were paddled and possibly even equipped with sails. The first evidence of more complex craft considered prototypes for later galleys comes from Ancient Egypt during the Old Kingdom (about 2700–2200 BC). Under the rule of pharaoh Pepi I (2332–2283 BC) these vessels were used to transport troops to raid settlements along the Levantine coast and to ship back slaves and timber. During the reign of Hatshepsut (about 1479–1457 BC), Egyptian galleys traded in luxuries on the Red Sea with the enigmatic Land of Punt, as recorded on wall paintings at the Mortuary Temple of Hatshepsut at Deir el-Bahari.
The first Greek galley-like ships appeared around the second half of the 2nd millennium BC. In the epic poem the Iliad, set in the 12th century BC, oared vessels with a single row of oarsmen were used primarily to transport soldiers between land battles. The first recorded naval battle, the Battle of the Delta between Egyptian forces under Ramesses III and the enigmatic alliance known as the Sea Peoples, occurred as early as 1175 BC. It is the first known engagement between organized armed forces using sea vessels as weapons of war, though primarily as fighting platforms.
The Phoenicians were among the most significant naval civilizations in early classical antiquity, but little detailed evidence has been found of what kind of ships they used. The best depictions found so far have been small, highly stylized images on seals which illustrate crescent-shaped vessels equipped with a single mast and bank of oars. Colorful frescoes at the Minoan settlement on Santorini (about 1600 BC) depict vessels with tents arranged in a ceremonial procession. Some of the vessels are rowed, but others are paddled. This has been interpreted as a possible ritual reenactment of more ancient types of vessels, alluding to a time before rowing was invented. Little is otherwise known about the use and design of Minoan ships.
Mediterranean galleys from around the 9th century BC typically had 15 to 25 pairs of oars ("triaconters" and "penteconters", respectively) on a single level on each side, making them "monoremes". Sometime during the 8th century BC, the first bireme galleys were built by adding a second level of rowers above the first. This created a second bank of oars, adding more propulsive power with the same length of hull, and made galleys faster, more maneuverable, and sturdier. Phoenician shipbuilders were likely the first to build two-level galleys, and bireme designs were soon adopted and further developed by the Greeks. A third bank of oars was added by attaching an outrigger to a bireme: a projecting frame that gave a third rower the additional leverage needed to handle an oar efficiently. It has been hypothesized that early forms of three-banked ships existed as early as 700 BC, but the earliest conclusive written reference dates to 542 BC. These new galleys were called triērēs (literally "three-fitted") in Greek. The Romans later applied the term triremis, the origin of "trireme", the name used most commonly today.
Trade and travel
Until at least the late 2nd century BC, there was no clear distinction between ships of trade and war other than how they were used. River boats plied the waterways of ancient Egypt during the Old Kingdom (2700–2200 BC), and seagoing galley-like vessels were recorded bringing back luxuries from across the Red Sea in the reign of pharaoh Hatshepsut. When rams or cutwaters were fitted to the bows of warships sometime around 700 BC, a more distinct split between warships and trade ships emerged. Phoenician trade galleys were less elongated, carried fewer oars, and relied more on sails. Carthaginian trade galley wrecks found off Sicily, dating to the 3rd or 2nd century BC, had a length-to-breadth ratio of 6:1, between the 4:1 of sailing merchant ships and the 8:1 or 10:1 of war galleys.
Most of the surviving documentary evidence comes from Greek and Roman shipping, though merchant galleys all over the Mediterranean were likely highly similar. In Greek they were referred to as histiokopos ("sail-oar-er") to reflect that they relied on both types of propulsion. In Latin they were called actuaria (navis), "(ship) that moves", stressing that they could make progress regardless of weather conditions. As an example of their speed and reliability, during an instance of the famous "Carthago delenda est" speech, Cato the Elder demonstrated the close proximity of the Roman arch-enemy Carthage by showing his audience a fresh fig that he claimed had been picked in North Africa only three days earlier. Other cargoes carried by galleys included honey, cheese, meat, and live animals intended for gladiatorial combat. The Romans had several types of merchant galleys specialized for various tasks, of which the actuaria, with up to 50 rowers, was the most versatile; others included the phaselus (lit. "bean pod") for passenger transport and the lembus, a small-scale express carrier. Many of these designs continued in use until the Middle Ages.
After the fall of the Western Roman Empire in the 5th century AD, the old Mediterranean economy collapsed and the volume of trade dropped drastically. The Eastern Roman Empire neglected overland trade routes but depended on keeping the sea lanes open to hold the empire together. In 600–750 AD, bulk trade declined while luxury trade increased. Galleys remained in service but were profitable mainly in the luxury trade, which offset their high maintenance cost. In the 10th century, a sharp increase in piracy resulted in larger trade ships with more numerous crews. These were mostly built by the growing maritime republics of Italy, which were emerging as the dominant sea powers, including Venice, Genoa, and Pisa. Their merchant galleys were similar to dromons, but without heavy weapons and both faster and wider. The largest types, used by Venice, had tower-like superstructures, could be manned by crews of up to 1,000 men, and could be employed in warfare when required. A further boost to the development of the large merchant galleys was the increase in Western European pilgrims traveling to the Holy Land. In Northern Europe, Viking longships and their derivations, knarrs, dominated trading and shipping; they functioned and were propelled similarly to Mediterranean galleys but developed from a separate shipbuilding tradition.
In the Mediterranean, merchant galleys continued in use during the High and Late Middle Ages, even as sailing vessels evolved more efficient hulls and rigging. The zenith of merchant galley design came with the state-owned "great galleys" of the Venetian Republic, first built in the 1290s. The great galleys were in all respects larger than contemporary war galleys (up to 46 m) and had a deeper draft, with more room for cargo (140–250 tonnes). With a full complement of rowers ranging from 150 to 180 men, all available to defend the ship from attack, they were also very safe modes of travel. This attracted a business of carrying rich pilgrims to the Holy Land, a trip that could be accomplished in as little as 29 days on the route Venice–Jaffa, despite landfalls for rest and watering or to shelter from rough weather. Later routes linked ports around the Mediterranean, between the Mediterranean and the Black Sea, and between the Mediterranean and Bruges. In 1447, Florentine galleys could stop at as many as 14 ports on their way to and from Alexandria in Egypt.
Ancient and medieval warfare
The earliest use of galleys in warfare was to ferry fighters from one place to another; until the middle of the 2nd millennium BC, they had no real distinction from merchant freighters. Around the 14th century BC, the first dedicated fighting ships were developed, sleeker and with cleaner lines than the bulkier merchantmen. They were used for raiding, capturing merchant ships, and carrying dispatches. During this early period, raiding became the most important form of organized violence in the Mediterranean region. Casson used the example of Homer's works to show that seaborne raiding was considered a common and legitimate occupation among ancient maritime peoples. The later Athenian historian Thucydides described it as having been "without stigma" before his time.
The development of the ram sometime before the 8th century BC changed the nature of naval warfare, which had until then been a matter of boarding and hand-to-hand fighting. With a heavy projection at the foot of the bow, sheathed with metal, usually bronze, a ship could incapacitate an enemy ship by punching a hole in its planking. The relative speed and nimbleness of ships became important, since a slower ship could be outmaneuvered and disabled by a faster one. The earliest designs had only one row of rowers that sat in undecked hulls, rowing against thole pins, or oarports, that were placed directly along the railings. The practical upper limit for wooden constructions fast and maneuverable enough for warfare was around 25–30 oars per side. By adding another level of oars, a development that occurred no later than c. 750 BC, the galley could be made shorter with as many rowers, while making them strong enough to be effective ramming weapons.
The emergence of more advanced states and intensified competition between them spurred the development of advanced galleys with multiple banks of rowers. During the middle of the first millennium BC, the Mediterranean powers developed successively larger and more complex vessels, the most advanced being the classical trireme with up to 170 rowers. Triremes fought several important engagements in the naval battles of the Greco-Persian Wars (502–449 BC) and the Peloponnesian War (431–404 BC), including the Battle of Aegospotami in 405 BC, which sealed the defeat of Athens by Sparta and its allies. The trireme was an advanced ship that was expensive to build and to maintain because of its large crew. By the 5th century BC, war galleys represented the latest in warship technology and could only be built and maintained by sizable states with advanced economies and administrations. They also required considerable skill to row, and oarsmen were mostly free citizens with years of experience at the oar.
Hellenistic era and rise of the Republic
As civilizations around the Mediterranean grew in size and complexity, both their navies and the galleys that made up their numbers became successively larger. The basic design of two or three rows of oars remained the same, but more rowers were added to each oar. The exact reasons are not known, but are believed to have been caused by addition of more troops and the use of more advanced ranged weapons on ships, such as catapults. The size of the new naval forces also made it difficult to find enough skilled rowers for the one-man-per-oar system of the earliest triremes. With more than one man per oar, a single rower could set the pace for the others to follow, meaning that more unskilled rowers could be employed.
The successor states of Alexander the Great's empire built galleys that were like triremes or biremes in oar layout, but manned with additional rowers for each oar. The ruler Dionysius I of Syracuse (reigned 405–367 BC) is credited with pioneering the "five" and "six", meaning five or six rows of rowers plying two or three rows of oars. Ptolemy II (283–246 BC) is known to have built a large fleet of very large galleys with several experimental designs rowed by everything from 12 up to 40 rows of rowers, though most of these are considered to have been quite impractical. Fleets with large galleys were put into action in conflicts such as the Punic Wars (264–146 BC) between the Roman Republic and Carthage, which included massive naval battles with hundreds of vessels and tens of thousands of soldiers, seamen, and rowers.
Roman Imperial era
The Battle of Actium in 31 BC between the forces of Augustus and Mark Antony marked the peak of the Roman fleet arm. After Augustus' victory at Actium, most of the Roman fleet was dismantled and burned. The Roman civil wars had been fought mostly by land forces, and from the 160s until the 4th century AD, no major fleet actions were recorded. During this time, most galley crews were disbanded or employed for entertainment purposes in mock battles or in handling the sail-like sun-screens in the larger Roman arenas. The fleets that remained were treated as auxiliaries of the land forces, and galley crewmen called themselves milites, "soldiers", rather than nautae, "sailors".
The Roman galley fleets were turned into provincial patrol forces that were smaller and relied largely on liburnians, compact biremes with 25 pairs of oars. These were named after an Illyrian tribe known by Romans for their sea roving practices, and these smaller craft were based on, or inspired by, their vessels of choice. The liburnians and other small galleys patrolled the rivers of continental Europe and reached as far as the Baltic, where they were used to fight local uprisings and assist in checking foreign invasions. The Romans maintained numerous bases around the empire: along the rivers of Central Europe, chains of forts along the northern European coasts and the British Isles, Mesopotamia, and North Africa, including Trabzon, Vienna, Belgrade, Dover, Seleucia, and Alexandria. Few actual galley battles in the provinces are found in records. One action in 70 AD at the unspecified location of the "Island of the Batavians" during the Batavian Rebellion was recorded, and included a trireme as the Roman flagship. The last provincial fleet, the classis Britannica, was reduced by the late 200s, though there was a minor upswing under the rule of Constantine (272–337). His rule also saw the last major naval battle of the unified Roman Empire (before the permanent split into Western and Eastern [later "Byzantine"] Empires), the Battle of the Hellespont of 324. Some time after the Battle of the Hellespont, the classical trireme fell out of use, and its design was forgotten.
Early and High Middle Ages
A transition from galley to sailing vessels as the most common types of warships began in the High Middle Ages. Large high-sided sailing ships had always been formidable obstacles for galleys. To low-freeboard oared vessels, the bulkier sailing ships, the cog and the carrack, were almost like floating fortresses, being difficult to board and even harder to capture. Galleys remained useful as warships throughout the entire Middle Ages because of their maneuverability. Sailing ships of the time had only one mast, usually with just a single, large square sail, which made them cumbersome to steer. Though equipped to beat to windward, their performance at this would have been limited. Galleys were therefore important for coastal raiding and amphibious landings, both key elements of medieval warfare.
In the eastern Mediterranean, the Byzantine Empire struggled with incursions by invading Muslim Arabs from the 7th century, leading to fierce competition, a buildup of fleets, and war galleys of increasing size. Soon after conquering Egypt and the Levant, the Arab rulers built ships highly similar to Byzantine dromons with the help of local Coptic shipwrights from former Byzantine naval bases. By the 9th century, the struggle between the Byzantines and Arabs had turned the Eastern Mediterranean into a no-man's land for merchant activity. In the 820s Crete was captured by Al-Andalus Muslims who had fled a failed revolt against the Emirate of Cordoba, turning the island into a base for galley attacks on Christian shipping until the island was recaptured by the Byzantines in 960.
In the western Mediterranean and Atlantic, the division of the Carolingian Empire in the late 9th century brought on a period of instability, which led to increased piracy and raiding in the Mediterranean, particularly by newly arrived Muslim invaders. The situation was worsened by raiding Scandinavian Vikings, who used longships, vessels that in many ways were very close to galleys in design and functionality and that were employed with similar tactics. To counter the threat, local rulers began to build large oared vessels, some with up to 30 pairs of oars, that were larger, faster, and had higher sides than Viking ships. Scandinavian expansion, including incursions into the Mediterranean and attacks on both Muslim Iberia and even Constantinople itself, subsided by the mid-11th century. By this time, greater stability in merchant traffic was achieved by the emergence of Christian kingdoms such as those of France, Hungary, and Poland. Around the same time, Italian port towns and city-states, like Venice, Pisa, and Amalfi, rose on the fringes of the Byzantine Empire as it struggled with eastern threats.
Late Middle Ages
Late medieval maritime warfare was divided into two distinct regions. In the Mediterranean, galleys were used for raiding along coasts and in the constant fighting for naval bases. In the Atlantic and Baltic there was a greater focus on sailing ships, which were used mostly for troop transport, with galleys providing fighting support. Galleys were still widely used in the north and were the most numerous warships used by Mediterranean powers with interests there, especially France, the Iberian kingdoms, and the Italian merchant republics. The kings of France operated the Clos de Galées (literally "galley enclosure") in Rouen during the 14th and 15th centuries, where they had southern-style war galleys built. The Clos was built by the Genoese in 1298, and they continued to dominate shipbuilding there until it was destroyed in 1419 to keep it from falling into English hands.
During the 13th and 14th centuries, the galley evolved into the design that was to remain essentially the same until it was phased out in the early 19th century. The new type descended from the ships used by Byzantine and Muslim fleets in the Early Middle Ages. These were the mainstay of all Christian powers until the 14th century, including the great maritime republics of Genoa and Venice, the Papacy, the Hospitallers, Aragon, and Castile, as well as of various pirates and corsairs. The overall term used for these types of vessels was galee sottili ("slender galleys"). The later Ottoman navy used similar designs, but its galleys were generally faster under sail, and smaller, but slower under oars. Galley designs were intended solely for close action with hand-held weapons and projectile weapons like bows and crossbows. In the 13th century the Iberian Crown of Aragon built several fleets of galleys with high castles, manned with Catalan crossbowmen, and regularly defeated numerically superior Angevin forces.
Transition to sailing ships
During the early 15th century, sailing ships began to dominate naval warfare in northern waters. While the galley still remained the primary warship in southern waters, a similar transition had begun among the Mediterranean powers. A Castilian naval raid on the island of Jersey in 1405 became the first recorded battle in which a Mediterranean power employed a naval force consisting mostly of cogs or carracks rather than oar-powered galleys. The Battle of Gibraltar between Castile and Portugal in 1476 was another important sign of change; it was the first recorded battle in which the primary combatants were full-rigged ships armed with wrought-iron guns on the upper decks and in the waists, foretelling the slow decline of the war galley.
The sailing vessel was always at the mercy of the wind for propulsion, and those that did carry oars were placed at a disadvantage because they were not optimized for oar use. The galley did have disadvantages compared to the sailing vessel, though. Its smaller hull could not hold as much cargo, and this limited its range, as the crew had to replenish foodstuffs more frequently. The low freeboard of the galley meant that in close action with a sailing vessel, the sailing vessel would usually maintain a height advantage. The sailing vessel could also fight more effectively farther out at sea and in rougher wind conditions because of its higher freeboard.
Under sail, an oared warship was placed at much greater risk as a result of the piercings for the oars, which had to be near the waterline and would let water flood into the galley if the vessel heeled too far to one side. These advantages and disadvantages led the galley to be, and remain, a primarily coastal vessel. The shift to sailing vessels in the Mediterranean was the result of the negation of some of the galley's advantages as well as the adoption of gunpowder weapons on a much larger institutional scale. The sailing vessel was propelled in a different manner than the galley, but the tactics were often the same until the 16th century. The space afforded to the sailing vessel for placing larger cannons and other armament mattered little at first, because early gunpowder weapons had limited range and were expensive to produce. The eventual creation of cast iron cannons allowed vessels and armies to be outfitted much more cheaply. The cost of gunpowder also fell in this period.
The armament of both vessel types varied between larger weapons such as bombards and smaller swivel guns. For logistical purposes it became convenient for those with larger shore establishments to standardize upon a given size of cannon. Traditionally the English in the north and the Venetians in the Mediterranean are seen as some of the earliest to move in this direction. The improving sail rigs of northern vessels also allowed them to navigate in the coastal waters of the Mediterranean to a much larger degree than before. Aside from warships, the decrease in the cost of gunpowder weapons also led to the arming of merchants. The larger vessels of the north continued to mature while the galley retained its defining characteristics. Attempts were made to stave this off, such as the addition of fighting castles in the bow, but such additions to counter the threats brought by larger sailing vessels often offset the advantages of the galley.
Early modern war galleys
From around 1450, three major naval powers established a dominance over different parts of the Mediterranean, using galleys as their primary weapons at sea: the Ottomans in the east, Venice in the center, and Habsburg Spain in the west. The core of their fleets was concentrated in the three major, wholly dependable naval bases in the Mediterranean: Constantinople, Venice, and Barcelona. Naval warfare in the 16th-century Mediterranean was fought mostly on a smaller scale, with raiding and minor actions dominating. Only three truly major fleet engagements were actually fought in the 16th century: the battles of Preveza in 1538, Djerba in 1560, and Lepanto in 1571. Lepanto became the last large all-galley battle ever, and was also one of the largest battles in sheer number of participants in early modern Europe before the Napoleonic Wars.
The Mediterranean powers also employed galley forces for conflicts outside the Mediterranean. Spain sent galley squadrons to the Netherlands during the later stages of the Eighty Years' War, where they operated successfully against Dutch forces in the enclosed, shallow coastal waters. From the late 1560s, galleys were also used to transport silver to Genoese bankers to finance Spanish troops against the Dutch uprising. Galleasses and galleys were part of an invasion force of over 16,000 men that conquered the Azores in 1583. Around 2,000 galley rowers were on board ships of the famous 1588 Spanish Armada, though few of these actually made it to the battle itself. Outside European and Middle Eastern waters, Spain built galleys to deal with pirates and privateers in both the Caribbean and the Philippines. Ottoman galleys contested the Portuguese intrusion in the Indian Ocean in the 16th century, but failed against the high-sided, massive Portuguese carracks in open waters. Even though the carracks themselves were soon surpassed by other types of sailing vessels, their greater range, great size, and high superstructures, armed with numerous wrought iron guns, easily outmatched the short-ranged, low-freeboard Turkish galleys. The Spanish used galleys to greater success in their colonial possessions in the Caribbean and the Philippines to hunt pirates, and sporadically used them in the Netherlands and the Bay of Biscay. Spain maintained four permanent galley squadrons to guard its coasts and trade routes against the Ottomans, the French, and their corsairs. Together they formed the largest galley navy in the Mediterranean in the early 17th century. They were the backbone of the Spanish Mediterranean war fleet and were used for ferrying troops, supplies, horses, and munitions to Spain's Italian and African possessions. In Southeast Asia during the 16th and early 17th centuries, the Aceh Sultanate had fleets of up to 100 native galley-like vessels (ghali) as well as smaller rowed vessels, which were described by Europeans as lancarans, galliots, and fustas. Some of these vessels were very large, with heavier armament than standard Mediterranean galleys, with raised platforms for infantry, and some with stern structures similar in height to those of contemporary galleons.
Introduction of guns
Galleys had been synonymous with warships in the Mediterranean for at least 2,000 years, and continued to fulfill that role after the invention of gunpowder and heavy artillery. Though early 20th-century historians often dismissed the galleys as hopelessly outclassed by the first introduction of naval artillery on sailing ships, it was the galley that was initially favored by the introduction of heavy naval guns. Galleys were a more "mature" technology, with long-established tactics and traditions of supporting social institutions and naval organizations. In combination with intensified conflicts, this led to a substantial increase in the size of galley fleets from c. 1520–80, above all in the Mediterranean, but also in other European theatres. Galleys and similar oared vessels remained uncontested as the most effective gun-armed warships in theory until the 1560s, and in practice for a few decades more, and were actually considered a grave risk to sailing warships. They could effectively fight other galleys, attack sailing ships in calm weather or in unfavorable winds (or deny them action if needed), and act as floating siege batteries. They were also unequaled in their amphibious capabilities, even at extended ranges, as exemplified by French interventions as far north as Scotland in the mid-16th century.
Heavy artillery on galleys was mounted in the bow, which aligned easily with the long-standing tactical tradition of attacking head on, bow first. The ordnance on galleys was heavy from its introduction in the 1480s, and capable of quickly demolishing the high, thin medieval stone walls that still prevailed in the 16th century. This temporarily upended the strength of older seaside fortresses, which had to be rebuilt to cope with gunpowder weapons. The addition of guns also improved the amphibious abilities of galleys, as they could make assaults supported with heavy firepower, and could be even more effectively defended when beached stern-first. The accumulation and spread of bronze cannons and small firearms in the Mediterranean during the 16th century increased the cost of warfare, but also made those dependent on them more resilient to manpower losses. Older ranged weapons, like bows or even crossbows, required considerable skill to handle, sometimes a lifetime of practice, while gunpowder weapons required considerably less training to use successfully. According to an influential study by military historian John F. Guilmartin, this transition in warfare, along with the introduction of much cheaper cast iron guns in the 1580s, proved to be the "death knell" for the war galley as a significant military vessel. Gunpowder weapons began to displace men as the fighting power of armed forces, making individual soldiers more deadly and effective. As offensive weapons, firearms could be stored for years with minimal maintenance and did not require the expenses associated with soldiers. Manpower could thus be exchanged for capital investments, something which benefited sailing vessels, already far more economical in their use of manpower. It also served to increase their strategic range and to out-compete galleys as fighting ships.
Zenith in the Mediterranean
Atlantic-style warfare based on large, heavily armed sailing ships began to change naval warfare in the Mediterranean in the early 17th century. In 1616, a small Spanish squadron of five galleons and a patache cruised the eastern Mediterranean and defeated an Ottoman fleet of 55 galleys at the Battle of Cape Celidonia. By 1650, war galleys were used primarily in the struggles between Venice and the Ottoman Empire for strategic island and coastal trading bases, and until the 1720s by both France and Spain for largely amphibious and cruising operations, or in combination with heavy sailing ships in major battles, where they played specialized roles. An example of this was when a Spanish fleet used its galleys in a mixed naval and amphibious action at the second battle of Tarragona in 1641, to break a French naval blockade and land troops and supplies. Even the Venetians, Ottomans, and other Mediterranean powers began to build Atlantic-style warships for use in the Mediterranean in the latter part of the century. Christian and Muslim corsairs had been using galleys in sea roving and in support of the major powers in times of war, but largely replaced them with xebecs, various sail/oar hybrids, and a few remaining light galleys in the early 17th century.
No large all-galley battles were fought after the gigantic clash at Lepanto in 1571, and galleys were mostly used as cruisers or for supporting sailing warships as a rearguard in fleet actions, similar to the duties performed by frigates outside the Mediterranean. They could assist damaged ships out of the line, but generally only in very calm weather, as was the case at the Battle of Málaga in 1704. They could also defeat larger ships that were isolated, as when in 1651 a squadron of Spanish galleys captured a French galleon at Formentera. For small states and principalities as well as groups of private merchants, galleys were more affordable than large and complex sailing warships, and were used as defense against piracy. Galleys required less timber to build, the design was relatively simple, and they carried fewer guns. They were tactically flexible and could be used for naval ambushes as well as amphibious operations. They also required few skilled seamen and were difficult for sailing ships to catch, but were vital in hunting down and catching other galleys and oared raiders.
Decline
The largest galley fleets in the 17th century were operated by the two major Mediterranean powers, France and Spain. France had by the 1650s become the most powerful state in Europe, and expanded its galley forces under the rule of the absolutist "Sun King" Louis XIV. In the 1690s the French galley corps (corps des galères) reached its all-time peak with more than 50 vessels manned by over 15,000 men and officers, becoming the largest galley fleet in the world at the time. Although there was intense rivalry between France and Spain, not a single galley battle occurred between the two great powers during this period, and there were virtually no naval battles between other nations either. During the War of the Spanish Succession, French galleys were involved in actions against Antwerp and Harwich, but due to the intricacies of alliance politics there were never any Franco-Spanish galley clashes. In the first half of the 18th century, the other major galley powers in the Mediterranean, the Order of Saint John based in Malta and the Papal States in central Italy, cut down drastically on their galley forces. Despite the lack of action, the galley corps received vast resources (25–50% of French naval expenditure) during the 1660s. It was maintained as a functional fighting force right up until its abolition in 1748, though its primary function was more that of a symbol of Louis XIV's absolutist ambitions.
The last recorded battle in the Mediterranean where galleys played a significant part was at Matapan in 1717, between the Ottomans and Venice and its allies, though they had little influence on the outcome. Few large-scale naval battles were fought in the Mediterranean throughout most of the remainder of the 18th century. The Tuscan galley fleet was dismantled around 1718, Naples had only four old vessels by 1734 and the French Galley Corps had ceased to exist as an independent arm in 1748. Venice, the Papal States, and the Knights of Malta were the only state fleets that maintained galleys, though in nothing like their previous quantities. By 1790, there were fewer than 50 galleys in service among all the Mediterranean powers, half of which belonged to Venice.
Northern Europe
Oared vessels remained in use in northern waters for a long time, though in a subordinate role and in particular circumstances. In the Italian Wars, French galleys brought up from the Mediterranean to the Atlantic posed a serious threat to the early English Tudor navy during coastal operations. The response came in the building of a considerable fleet of oared vessels, including hybrids with a complete three-masted rig, as well as Mediterranean-style galleys (which were even, experimentally, manned with convicts and slaves). Under King Henry VIII, the English navy used several kinds of vessels adapted to local needs. English galliasses (very different from the Mediterranean vessel of the same name) were employed to cover the flanks of larger naval forces, while pinnaces and rowbarges were used for scouting or even as a backup for the longboats and tenders of the larger sailing ships. During the Dutch Revolt (1566–1609) both the Dutch and Spanish found galleys useful for amphibious operations in the many shallow waters around the Low Countries where deep-draft sailing vessels could not enter.
While galleys were too vulnerable to be used in large numbers in the open waters of the Atlantic, they were well suited for use in much of the Baltic Sea by Denmark-Norway, Sweden, Russia, and some of the Central European powers with ports on the southern coast. There were two types of naval battlegrounds in the Baltic. One was the open sea, suitable for large sailing fleets; the other was the coastal areas and especially the chain of small islands and archipelagos that ran almost uninterrupted from Stockholm to the Gulf of Finland. In these areas, conditions were often too calm, cramped, and shallow for sailing ships, but they were excellent for galleys and other oared vessels. Galleys of the Mediterranean type were first introduced in the Baltic Sea around the mid-16th century as competition between the Scandinavian states of Denmark and Sweden intensified. The Swedish galley fleet was the largest outside the Mediterranean and served as an auxiliary branch of the army. Very little is known about the design of Baltic Sea galleys, except that they were overall smaller than their Mediterranean counterparts and were rowed by army soldiers rather than convicts or slaves.
18th-century Baltic revival
Galleys were introduced to the Baltic Sea in the 16th century, but the details of their designs are lacking due to the absence of records. They might have been built in a more regional style, but the only known depiction from the time shows a typical Mediterranean-style vessel. There is conclusive evidence that Denmark-Norway became the first Baltic power to build classic Mediterranean-style galleys in the 1660s, though they proved to be generally too large to be useful in the shallow waters of the Baltic archipelagos. Sweden and especially Russia began to launch galleys and various rowed vessels in great numbers during the Great Northern War in the first two decades of the 18th century. Sweden was late in building an effective oared fighting fleet (skärgårdsflottan, the archipelago fleet, officially arméns flotta, the fleet of the army), while the Russian galley forces under Tsar Peter I developed into a supporting arm for the sailing navy and a well-functioning auxiliary of the army, which infiltrated and conducted numerous raids on the eastern Swedish coast in the 1710s.
Sweden and Russia became the two main competitors for Baltic dominance in the 18th century, and built the largest galley fleets in the world at the time. They were used for amphibious operations in the Russo-Swedish wars of 1741–43 and 1788–90. The last galleys ever constructed were built by Russia in 1796, and remained in service well into the 19th century, but saw little action. The last time galleys were deployed in action was when the Russian navy was attacked at Åbo (Turku) in 1854 as part of the Crimean War. In the second half of the 18th century, the role of Baltic galleys in coastal fleets was replaced first by hybrid "archipelago frigates" (such as the turuma or pojama) and xebecs, and after the 1790s by various types of gunboats.
Design and construction
The documentary evidence for the construction of ancient galleys is fragmentary, particularly in pre-Roman times. Plans and schematics in the modern sense did not exist until around the 17th century, and nothing comparable has survived from ancient times. How galleys were constructed has therefore been a matter of looking at circumstantial evidence in literature, art, coinage, and monuments that include ships, some of them rendered at natural size. Since war galleys floated even with a ruptured hull and virtually never had any ballast or heavy cargo that could sink them, almost no wrecks have so far been found.
On the funerary monument of the Egyptian king Sahure (2487–2475 BC) in Abusir, there are relief images of vessels with a marked sheer (the upward curvature at each end of the hull), seven pairs of oars along each side (a number likely symbolic rather than realistic), and steering oars in the stern. These vessels had only one mast and vertical stems and sternposts, with the front decorated with an Eye of Horus, the first example of such a decoration. The eye was later used by other Mediterranean cultures to decorate seagoing craft in the belief that it helped to guide the ship safely to its destination. The early Egyptian vessels apparently lacked a keel. To provide stiffening along their length, they had large cables (trusses) connecting stem and stern, resting on massive crutches on deck. These were held in tension to avoid hogging (the bending of the ship's structure upward in the middle) while at sea. In the 15th century BC, Egyptian galley-like craft were still depicted with the distinctive extreme sheer, but had by then developed the distinctive forward-curving stern decorations with ornaments in the shape of lotus flowers. They had possibly developed a primitive type of keel, but still retained the large cables intended to prevent hogging.
The construction of the earliest oared vessels is mostly unknown and highly conjectural. They likely used a mortise construction, but were sewn together rather than pinned together with nails and dowels. Being completely open, they were rowed (or even paddled) from the open deck, and likely had "ram entries", projections from the bow that lowered the resistance of moving through water, making them slightly more hydrodynamic. The first true galleys, the triaconters (literally "thirty-oarers") and penteconters ("fifty-oarers"), were developed from these early designs and set the standard for the larger designs that would come later. They were rowed on only one level, which made them fairly slow, likely only about . By the 8th century BC the first galleys rowed at two levels had been developed, among the earliest being the two-level penteconters, which were considerably shorter than their one-level equivalents, and therefore more maneuverable. They were an estimated 25 m in length and displaced 15 tonnes with 25 pairs of oars. These could have reached an estimated top speed of up to , making them the first genuine warships when fitted with bow rams. They were equipped with a single square sail on a mast set roughly halfway along the length of the hull.
Advent of the trireme
By the 5th century BC, the first triremes were in use by various powers in the eastern Mediterranean. This was a fully developed, highly specialized war galley capable of high speeds and complex maneuvers. At nearly 40 m in length, displacing up to 50 tonnes, it was more than three times as expensive to build as a two-level penteconter. A trireme also had an additional mast with a smaller square sail placed near the bow. Up to 170 oarsmen, seated staggered on three levels, worked one oar each, the oars varying slightly in length between levels. Arrangements of the three levels are believed to have varied, but the best-documented design made use of a projecting structure, or outrigger, where the oarlock in the form of a thole pin was placed. This allowed the outermost row of oarsmen enough leverage for full strokes that made efficient use of their oars.
The first dedicated war galleys fitted with rams were built with a mortise and tenon technique, a so-called shell-first method, in which the planking of the hull was strong enough to hold the ship together structurally and was also watertight without the need for caulking. Hulls had sharp bottoms without keelsons in order to support the structure, and were reinforced by transverse framing secured with dowels with nails driven through them. To prevent the hull from hogging there was a hypozoma (υπόζωμα, "underbelt"), a thick, doubled rope that connected bow with stern. It was kept taut to add strength to the construction along its length, but its exact design and the method of tightening are not known. The ram, the primary weapon of ancient galleys from around the 8th to the 4th century, was not attached directly to the hull but to a structure extending from it. This way the ram could twist off if it got stuck after ramming, rather than breaking the integrity of the hull. The ram fitting consisted of a massive, projecting timber, and the ram itself was a thick bronze casting with horizontal blades that could weigh from 400 kg up to 2 tonnes.
Hellenistic and Roman eras
Galleys from the 4th century BC up to the time of the early Roman Empire in the 1st century AD became successively larger. Three levels of oars was the practical upper limit, but it was improved upon by making ships longer, broader, and heavier, and by placing more than one rower per oar. Naval conflict grew more intense and extensive, and by 100 BC galleys with four, five, or six rows of oarsmen were commonplace and carried large complements of soldiers and catapults. With high freeboard (up to 3 m) and additional tower structures from which missiles could be shot down onto enemy decks, they were intended to be like floating fortresses. Designs with everything from eight rows of oarsmen and upward were built, but most of them are believed to have been impractical show pieces never used in actual warfare. Ptolemy IV, the Greek pharaoh of Egypt 221–205 BC, is recorded as building a gigantic ship with forty rows of oarsmen, though no specification of its design remains. One suggested design was that of a huge trireme catamaran with up to 14 men per oar, and it is assumed that it was intended as a showpiece rather than a practical warship.
With the consolidation of Roman imperial power, the size of both fleets and galleys decreased considerably. The huge polyremes disappeared, and the fleets were equipped primarily with triremes and liburnians, compact biremes with 25 pairs of oars that were well suited for patrol duty and chasing down raiders and pirates. In the northern provinces, oared patrol boats were employed to keep local tribes in check along the shores of rivers like the Rhine and the Danube. As the need for large warships disappeared, the design of the trireme, the pinnacle of ancient warship design, fell into obscurity and was eventually forgotten. The last known reference to triremes in battle is dated to 324, at the Battle of the Hellespont. In the late 5th century, the Byzantine historian Zosimus declared the knowledge of how to build them to have been long since forgotten.
Middle Ages
The primary warship of the Byzantine navy until the 12th century was the dromon and other similar ship types. Considered an evolution of the Roman liburnian, the dromon first appeared in the late 5th century, and the term was commonly used for a specific kind of war galley by the 6th century. The term dromon (literally "runner") itself comes from the Greek root drom-, "to run", and 6th-century authors like Procopius are explicit in their references to the speed of these vessels. During the next few centuries, as the naval struggle with the Arabs intensified, heavier versions with two or possibly even three banks of oars evolved.
The accepted view is that the main developments which differentiated the early dromons from the liburnians, and that henceforth characterized Mediterranean galleys, were the adoption of a full deck, the abandonment of rams on the bow in favor of an above-water spur, and the gradual introduction of lateen sails. The exact reasons for the abandonment of the ram are unclear. Depictions of upward-pointing beaks in the 4th-century Vatican Vergil manuscript may well illustrate that the ram had already been replaced by a spur in late Roman galleys. One possibility is that the change occurred because of the gradual evolution of the ancient shell-first construction method, against which rams had been designed, into the skeleton-first method, which produced a stronger and more flexible hull, less susceptible to ram attacks. At least by the early 7th century, the ram's original function had been forgotten.
The dromons that Procopius described were single-banked ships with probably 25 oars per side. Unlike ancient vessels, which used an outrigger, these oars extended directly from the hull. In the later bireme dromons of the 9th and 10th centuries, the two oar banks were divided by the deck, with the first oar bank situated below and the second above deck; the rowers above deck were expected to fight alongside the marines in boarding operations. The overall length of these ships was probably about 32 meters. The stern had a tent that covered the captain's berth; the prow featured an elevated forecastle that acted as a fighting platform and could house one or more siphons for the discharge of Greek fire; and on the largest dromons, there were wooden castles on either side between the masts, providing archers with elevated firing platforms. The bow spur was intended to ride over an enemy ship's oars, breaking them and rendering it helpless against missile fire and boarding actions.
Development of the galea sottile
From the 12th century, the design of war galleys evolved into the form that would remain largely the same until the building of the last war galleys in the late 18th century. The length-to-breadth ratio was a minimum of 8:1. A rectangular projecting outrigger was added to support the oars, and the rowers' benches were laid out in a diagonal herringbone pattern angled aft, with a central gangway (corsia) running along the centerline. The angling of the benches allowed the rowers to handle individual oars without interfering with each other's movements. The design was based on the form of the galea, the smaller Byzantine galley, and would be known mostly by the Italian term galea sottile (literally "slender galley"). A second, smaller mast was added sometime in the 13th century, and the number of rowers rose from two to three rowers per bench as a standard from the late 13th to the early 14th century. These galleys would make up the bulk of the main war fleets of every major naval power in the Mediterranean, assisted by smaller single-masted vessels, as well as the fleets of Christian and Muslim corsairs. Ottoman galleys were very similar in design, though in general smaller, faster under sail, but slower under oars. The standard size of the galley remained stable from the 14th until the early 16th century, when the introduction of naval artillery began to have effects on design and tactics.
The traditional two side rudders were complemented with a stern rudder sometime after c. 1400 and eventually the side rudders disappeared altogether. It was also during the 15th century that large artillery pieces were first mounted on galleys. Burgundian records from the mid-15th century describe galleys with some form of guns, but do not specify the size. The first conclusive evidence of a large cannon mounted on a galley comes from a woodcut of a Venetian galley in 1486. The first guns were fixed directly on timbers in the bow and aimed directly forward, a placement that would remain largely unchanged until the galley disappeared from active service in the 19th century.
Early modern standardization
With the introduction of guns in the bows of galleys, a permanent wooden structure called a rambade (French: rambade; Italian: rambata; Spanish: arrumbada) was introduced. The rambade became standard on virtually all galleys in the early 16th century. There were some variations in the navies of different Mediterranean powers, but the overall layout was the same. The forward-aiming battery was covered by a wooden platform which gave gunners a minimum of protection, and which functioned as both a staging area for boarding attacks and a firing platform for on-board soldiers. After its introduction, the rambade remained a standard feature of every fighting galley until the very end of the galley era in the early 19th century. By the mid-17th century, galleys reached what has been described as their "final form". Galleys had looked more or less the same for over four centuries, and a fairly standardized classification system had been developed by the Mediterranean bureaucracies, based primarily on the number of benches in a vessel. A Mediterranean galley would have 25 pairs of oars and a total crew of about 500 men. Command galleys ("lantern galleys") were even larger, with 30 pairs of oars and up to seven rowers per oar. Armament consisted of one heavy 24- or 36-pounder and two to four 4- to 12-pounders in the bow. Rows of light swivel guns were often placed along the entire length of the galley on the railings for close-quarter defense. The length-to-width ratio of the ships was about 8:1, with two masts carrying one large lateen sail each. In the Baltic, galleys were generally shorter, with a length-to-width ratio from 5:1 to 7:1, an adaptation to the cramped conditions of the Baltic archipelagos.
A single mainmast was standard on most war galleys until c. 1600. A second, shorter mast could be raised temporarily in the bows, but became permanent by the early 17th century. It was stepped slightly to the side to allow for the recoil of the heavy guns, while the mainmast was placed roughly in the center of the ship. A third, smaller mast further astern, akin to a mizzen mast, was also introduced on large galleys, possibly in the early 17th century, and was standard at least by the early 18th century. Galleys had little room for provisions and depended on frequent resupplying; they were often beached at night to rest the crew and cook meals. Where cooking areas were actually present, they consisted of a clay-lined box with a hearth or similar cooking equipment, fitted in place of a rowing bench, usually on the port (left) side.
Propulsion
Rowing
Ancient rowing was done in a fixed seated position with rowers facing the stern, the most efficient rowing position. A sliding stroke, which would have provided power from both the legs and the arms, was suggested by earlier historians, but no conclusive evidence has supported it. Practical experiments with the full-scale trireme reconstruction Olympias have shown that there was insufficient space to perform a sliding stroke, and that moving or rolling seats would have been highly impractical to construct with ancient technology. Rowers in ancient war galleys sat below the upper deck with little view of their surroundings. The rowing was therefore managed by supervisors, and coordinated with pipes or rhythmic chanting. Galleys were highly maneuverable, able to turn on their axis or even to row backward, though such maneuvers required a skilled and experienced crew. In galleys with an arrangement of three men per oar, as in the larger polyremes, all would be seated, but the rower furthest inboard would perform a stand-and-sit stroke, getting up on his feet to push the oar forward, and then sitting down again to pull it back.
The faster a vessel travels, the more energy it uses. Reaching high speeds requires more energy than a human-powered vessel can sustain. Oar systems generate very low amounts of energy for propulsion (only about 70 W per rower), and the upper limit for rowing in a fixed position is around . Ancient war galleys of the kind used in Classical Greece are considered by modern historians to be the most energy-efficient and fastest galley designs in history. A full-scale replica of a 5th-century BC trireme, the Olympias, was built in 1985–87 and was put through a series of trials to test its performance. The trials showed that a cruising speed of about could be maintained for an entire day of travel. Sprinting speeds of up to were possible, but only for a few minutes, and would tire the crew quickly. Ancient galleys were lightly built, and the original triremes are presumed never to have been surpassed in speed. Medieval galleys are believed to have been considerably slower, especially since they were not designed for ramming. A cruising speed of no more than has been estimated. A sprint speed of up to was possible for 20–30 minutes, but risked exhausting the rowers completely.
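To put the power figure above in perspective, it can be combined with the crew sizes mentioned earlier: a trireme with its full complement of 170 oarsmen, each delivering roughly 70 W, produces on the order of 12 kW in total. The minimal Python sketch below works through that arithmetic; the function name and the crew figure are illustrative assumptions drawn from the numbers cited in this article, not part of any established model.

# Rough propulsive-power estimate for a rowed warship, based on the
# ~70 W per rower figure cited above. Illustrative arithmetic only.
def total_rowing_power(rowers: int, watts_per_rower: float = 70.0) -> float:
    # Combined crew power output in watts.
    return rowers * watts_per_rower

trireme_power = total_rowing_power(170)  # full trireme complement
print(f"170 rowers at 70 W each: {trireme_power / 1000:.1f} kW")
# Prints: 170 rowers at 70 W each: 11.9 kW

Even this peak figure, roughly that of a small outboard motor, underlines why hull weight and rowing efficiency dominated ancient galley design.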
Rowing in headwinds or even moderately rough weather was difficult as well as exhausting. In high seas, ancient galleys would set sail to run before the wind. They were highly susceptible to high waves, and could become unmanageable if the projecting rowing frame became overwhelmed by the waves. Ancient and medieval galleys are assumed to have sailed only with the wind more or less from behind, for a top speed of about in fair conditions.
Sails
In ancient galleys under sail, most of the motive power came from a single square sail, rigged on a mast somewhat forward of the center of the ship, with a smaller mast carrying a headsail in the bow. Triangular lateen sails are attested as early as the 2nd century AD, and gradually became the sail of choice for galleys. By the 9th century, lateens were firmly established as part of the standard galley rig. The lateen rig required a larger crew to handle than a square sail rig, but this was not a problem in the heavily manned galleys. The Byzantine general Belisarius's invasion fleet of 533 was at least partly equipped with lateen sails, making it probable that by that time the lateen had become the standard rig for the dromon, with the traditional square sail gradually falling out of use in medieval Mediterranean seafaring. Unlike with a square sail rig, the spar of a lateen sail did not pivot around the mast. To change tacks, the entire spar had to be raised vertically and passed to the other side of the mast. Since a lateen spar was longer than the mast, and not much shorter than the ship itself, this was more complex and potentially dangerous than tacking with a square rig.
Galley slaves
Contrary to the popular image of rowers chained to the oars, conveyed by movies such as Ben Hur, there is no evidence that ancient navies made regular use of condemned criminals or slaves as oarsmen, with the possible exception of Ptolemaic Egypt. Literary evidence indicates that both Greek and Roman navies relied on paid labor or ordinary soldiers to man their galleys. Slaves were put at the oars only in times of crisis. In some cases, these people would be given freedom after the crisis was averted. Roman merchant vessels (usually sailing vessels) were manned by slaves, sometimes even with slaves as ship's master, but this was seldom the case in merchant galleys.
It was only in the early 16th century that galley slaves became commonplace. Both galley fleets and individual vessels grew in size during the early modern period, which required more rowers. The number of benches could not be increased without lengthening hulls beyond their structural limits, and more than three oars per bench was not practicable. The demand for more rowers also meant that the relatively limited number of skilled oarsmen could not keep up with the needs of the larger fleets. It became increasingly common to man galleys with convicts or slaves, which required a simpler method of rowing. The older method of employing professional rowers rowing alla sensile (one oar per rower, with two to three rowers sharing the same bench) was gradually phased out in favor of rowing a scaloccio, which required less skill. In the latter, a single large oar was used for each bench, with several rowers working the oar together. The number of oarsmen per oar rose from three up to five, and in some of the largest command galleys there could be as many as seven to an oar.
All major Mediterranean powers sentenced criminals to galley service, but initially only in time of war. Christian naval powers such as Spain frequently employed Muslim captives and prisoners of war. The Ottoman navy and its North African corsair allies put Christian prisoners to the oars in large numbers, though mixed with volunteers. Habsburg Spain relied mostly on servile rowers, in great part because its organizational structure was geared toward employing slaves and convicts. Venice was one of the few major Mediterranean powers to use almost only free rowers, a result of its reliance on alla sensile rowing, which required skilled professionals. The Knights of Saint John used slaves extensively, as did the Papal States, Florence, and Genoa. The North African ghazi corsairs relied almost entirely on Christian slaves as rowers.
Armament and combat tactics
In the earliest times of naval warfare, boarding was the only means of deciding a naval engagement, but little to nothing is known about the tactics involved. The first recorded naval battle, the Battle of the Delta, was fought in a close-combat melee with the support of archers, some perched on raised platforms. The Egyptian victory was commemorated on the mortuary temple of Ramesses III at Medinet Habu, which shows intense close-quarters fighting and the use of grapnels thrown into the rigging to capsize enemy ships, throwing their crews into the sea.
Around the 8th century BC, ramming began to be employed as war galleys were equipped with heavy bronze rams. Records of the Persian Wars in the early 5th century BC by the ancient historian Herodotus (c. 484–425 BC) show that by this time ramming tactics had evolved among the Greeks. Ramming itself was done by smashing into the rear or side of an enemy ship, punching a hole in the planking. This would not actually sink an ancient galley unless it was heavily laden with cargo and stores; with a normal load, it was buoyant enough to float even with a breached hull. Breaking the enemy's oars was another way of rendering ships immobile and making them easier targets. If ramming was not possible or successful, the on-board complement of soldiers would attempt to board and capture the enemy vessel by securing it with grappling irons, accompanied by missile fire with arrows or javelins. Trying to set the enemy ship on fire by hurling incendiary missiles or by pouring the contents of fire pots attached to long handles is also thought to have been used, especially since smoke below decks would easily disable rowers.
Ramming tactics were gradually superseded in the last centuries BC by the Macedonians and Romans, both primarily land-based powers. Hand-to-hand fighting with large complements of heavy infantry, supported by ship-borne catapults, dominated the fighting style during the Roman era. Though this decreased the mobility of vessels, it meant that less skill was required from individual oarsmen. Fleets thereby became less dependent on highly skilled rowers with a lifetime of experience at the oar.
Boarding prevails
By the first centuries AD, ramming tactics had completely disappeared along with the knowledge of the design of the ancient trireme. Medieval galleys instead developed a projection, or "spur", in the bow that was designed to break oars and act as a boarding platform for taking enemy ships. The only remaining examples of ramming tactics were occasional attempts to collide with enemy ships in order to destabilize or capsize them.
The Byzantine navy, the largest Mediterranean war fleet throughout most of the Early Middle Ages, employed crescent formations in order to turn the enemy's flanks, as did the Arab fleets that fought them from the 7th century onward. The initial stage in naval battles was an exchange of missiles, ranging from combustible projectiles to arrows, caltrops, and javelins. The aim was not to sink ships, but to deplete the ranks of the enemy crews before the boarding actions that decided the outcome. Byzantine dromons had pavesades (racks of large shields along the railings) which provided protection to the deck crew. Larger ships also had wooden castles between the masts on either side of the upper decks, which allowed archers to shoot from an elevated firing position.
Later medieval navies continued to use similar tactics, with a line abreast formation as standard, as galleys were intended to be fought from the bow. They were at their weakest along the sides, especially in the middle. The crescent formation employed by the Byzantines continued to be used throughout the Middle Ages. The ships on the edges of the crescent would attempt to crash their bows straight into the sides of the enemy ships at the edge of the formation.
Gun galleys
In large-scale galley-to-galley engagements, tactics remained essentially the same until the end of the 16th century, even after the introduction of heavy guns. Since galleys could close to within the reliable maximum range of early naval guns faster than the guns could be reloaded, gun crews would hold their fire until the last possible moment. Shortly before impact, all available guns would fire, similar to infantry tactics in the era of short-range, inaccurate firearms. In extreme cases, such last-second discharges could kill dozens of men instantly, dealing a severe shock to the enemy. Unless one side could outmaneuver the other, lines of galleys would crash into each other head on. Individual ships would then be locked bow to bow in close formation, and each ship would be fought over in close-quarters combat. As long as a vessel was not completely overrun, reinforcements could be fed into the fight from reserve vessels in the rear.
The earliest guns were of large calibers and initially of wrought iron, which made them weak compared to the cast bronze guns that would become standard in the 16th century. Early on, guns were fixed directly to the bow timbers, aimed directly forward in the direction of travel, a placement that would remain essentially the same until the galley disappeared from active service in the early 19th century. The introduction of heavy guns and handheld firearms did not change tactics considerably. If anything, it made the bow even more important in offense, both as a staging area for boarders and as the obvious place for concentrating firepower. The galley itself could easily outperform most sailing vessels before the establishment of the full-rigged ship. It retained a distinct tactical advantage even after the initial introduction of naval artillery because of the ease with which it could be maneuvered to bear its guns upon an opposing vessel.
Symbolism
Galleys were frequently used for ceremonial purposes. In early modern Europe, galleys enjoyed a level of prestige that sailing vessels could not compete with. They were considered to be more closely associated with warfare on land, and were fought with similar tactics. Naval warfare did not have the same association with chivalry and martial prowess as land warfare, which was seen as the ultimate achievement of nobility and royalty. In the Baltic, Gustav I, the first king of the modern Swedish state, showed particular interest in galleys, as was befitting a Renaissance prince. Whenever traveling by sea, Gustav, his court, royal bureaucrats, and his bodyguard would travel by galley. Around the same time, English king Henry VIII had Mediterranean-style galleys built and even manned them with slaves, though the English navy of the time relied mostly on sailing ships.
British naval historian Nicholas Rodger has described the galley as a "supreme symbol of royal power ... derived from its intimate association with armies, and consequently with princes". This association was elevated even further by the French "Sun King", Louis XIV, in the form of a dedicated galley corps. Louis and the French state he ruled created a tool and symbol of royal authority that did little fighting, but was a potent extension of absolutist ambitions. Galleys were built to scale for the royal flotilla on the Grand Canal at the Gardens of Versailles, purely for the amusement of the court. French royal galleys patrolled the Mediterranean, forcing ships of other states to salute the King's banner, convoyed ambassadors and cardinals, and participated obediently in naval parades and royal pageantry. Historian Paul Bamford has described galleys as vessels that "must have appealed to military men and to aristocratic officers ... accustomed to being obeyed and served".
Sentencing criminals, political dissenters, and religious deviants to the galleys also turned the French galley corps into a brutal, cost-effective, and feared prison system. French Protestants were particularly ill-treated in this system. They were only a small minority of the prisoners, but their experiences came to dominate the legacy of the galley corps. In 1909, the French author Albert Savine (1859–1927) wrote that "[a]fter the Bastille, the galleys were the greatest horror of the old regime". Long after convicts stopped serving in the galleys, even after the reign of Napoleon, the term galérien ("galley rower") remained a general term for forced labor and for convicts serving harsh sentences.
| Technology | Naval warfare | null |
134572 | https://en.wikipedia.org/wiki/Mountain%20bike | Mountain bike | A mountain bike (MTB) or mountain bicycle is a bicycle designed for off-road cycling. Mountain bikes share some similarities with other bicycles, but incorporate features designed to enhance durability and performance in rough terrain, which often makes them heavier, more complex and less efficient on smooth surfaces. These typically include a suspension fork, large knobby tires, more durable wheels, more powerful brakes, straight wide handlebars to improve balance and comfort over rough terrain, wide-ratio gearing optimized for topography and application (e.g., steep climbing or fast descending), and often a frame with a suspension mechanism for the rear wheel. Rear suspension is ubiquitous in heavier-duty bikes and now common even in lighter bikes. Dropper seat posts can be installed to allow the rider to quickly adjust the seat height (an elevated seat position is more effective for pedaling, but poses a hazard in aggressive maneuvers).
Mountain bikes are generally specialized for use on mountain trails, single track, fire roads, and other unpaved surfaces. In addition to being used to travel and recreate on those surfaces, many people use mountain bikes primarily on paved surfaces; some may prefer the upright position, plush ride, and stability that mountain bikes often have. Mountain biking terrain commonly has rocks, roots, loose dirt, and steep grades. Many trails have additional technical trail features (TTF) such as log piles, log rides, rock gardens, skinnies, gap jumps, and wall-rides. Mountain bikes are built to handle these types of terrain and features. The heavy-duty construction combined with stronger rims and wider tires has also made this style of bicycle popular with urban riders and couriers who must navigate through potholes and over curbs.
Since the development of the sport of mountain biking in the 1970s, many new subtypes of mountain biking have developed, such as cross-country (XC), trail, all-mountain, enduro, freeride, downhill, and a variety of track and slalom types. Each of these place different demands on the bike, requiring different designs for optimal performance. MTB development has led to an increase in suspension travel, now often up to , and gearing up to 13 speed, to facilitate both climbing and rapid descents. Advances in gearing have also led to the ubiquity of "1x" drivetrains (pronounced "one-by"), simplifying the gearing to one chainring in the front and a wide range cassette at the rear, typically with 9 to 12 sprockets. 1x gearing reduces overall bike weight, increases ground clearance, and greatly simplifies the process of gear selection, but 2- or 3-ring drivetrains are still common on entry-level bikes.
The expressions "all terrain bicycle", "all terrain bike", and the acronym "ATB" are used as synonyms for "mountain bike", but some authors consider them passé.
History
Origins
The original mountain bikes were modified heavy cruiser bicycles used for freewheeling down mountain trails. The sport became popular in the 1970s in Northern California, USA, with riders using older, single-speed balloon tire bicycles to ride down rugged hillsides. These modified bikes were called "ballooners" in California, "klunkers" in Colorado, and "dirt bombers" in Oregon. Joe Breeze, a bicycle frame builder, used this idea and developed what is considered the first mountain bike.
It was not until the late 1970s and early 1980s that road bicycle companies started to manufacture mountain bicycles using high-tech lightweight materials, such as M4 aluminum. The first production mountain bike available was the 1979 Lawwill Pro Cruiser. The frame design was based on a frame that Don Koski fabricated from electrical conduit and a Schwinn Varsity frame. Mert Lawwill had Terry Knight of Oakland build the frames. The bikes sold for about $500 new and were made from 1979 through 1980 (an approximate run of 600 bikes).
The first mass production mountain bike was the Specialized Stumpjumper, first produced in 1981. With the rising popularity of mountain bikes, Randolph (Randy) Ross, executive vice president of Ross Bicycles Inc., was quoted in The New York Times as saying, "I'd say these bikes are one of the biggest things that ever happened to the biking industry", and that their basic look constituted "a total shift in image" for the industry.
Throughout the 1990s and 2000s, mountain biking moved from a little-known sport to a mainstream activity complete with an international racing circuit and a world championship, in addition to various free ride competitions, such as the FMB World Tour and the Red Bull Rampage.
Designs
Mountain bikes can usually be divided into four broad categories based on suspension configuration:
Rigid: A mountain bike with large, knobby tires and straight handlebars, but with neither front nor rear suspension.
Hardtail: A mountain bike equipped with a suspension fork for the front wheel, but otherwise a rigid frame.
Soft tail: A recent addition, a mountain bike with pivots in the frame but no rear shock. The flex of the frame absorbs some vibrations. These bikes are usually cross-country bikes.
Full suspension (or dual suspension): A mountain bike equipped with both front and rear suspension. The front suspension is usually a telescopic fork similar to that of a motorcycle, and the rear is suspended by a mechanical linkage with components for absorbing shock.
Modern designs
Gears
Since the 1980s, mountain bikes have had anywhere from 7 to 36 speeds, with 1 to 4 chainrings on the crankset and 5 to 12 sprockets in the cogset. 30-, 33- and 36-speed mountain bikes were initially considered unworkable: the mud-shedding capabilities of 10-, 11- and 12-speed cassettes, and the intricacies of the corresponding rear derailleurs, were not thought suitable in combination with front shifters, although such cassettes are now commonplace on bicycles with a single front chainring, including many mountain bikes. Many pro-level mountain bikers also took to using a narrower 10-speed road chain with a 9-speed setup in an effort to reduce the weight of their bikes. In early 2009, the component manufacturer SRAM announced the release of its XX groupset, which uses a 2-speed front derailleur and a 10-speed rear derailleur and cassette, similar to that of a road bike. The mud-shedding capability of the 10-speed XX cassette was made suitable for MTB use by extensive Computer Numerical Control (CNC) machining of the cassette. Due to the time and cost involved in such a product, it was aimed only at top-end XC racers. Nevertheless, by 2011 10-speed had become the norm, and the market leader Shimano even offered its budget groupset "Alivio" in a 10-speed version. In July 2012, SRAM announced a 1×11 drivetrain called XX1 that does not use a front derailleur, for lighter weight and simplicity. At the 2014 Commonwealth Games in Glasgow, all leading riders used 1×11 drivetrains. SRAM's 1×12 gearing was introduced in 2016 as SRAM Eagle, giving a single-chainring bike a better ability to climb.
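The trade-off between chainring count and cassette range comes down to simple ratio arithmetic. The sketch below compares the overall ratio spread of a 1x and a 3x drivetrain using illustrative tooth counts (the counts are assumptions chosen for the example, not specifications of any particular groupset):

```python
def gear_range(chainrings, cassette):
    """Overall ratio spread of a drivetrain, as a percentage,
    from the lowest (easiest) to the highest (hardest) gear."""
    ratios = [c / s for c in chainrings for s in cassette]
    return 100 * max(ratios) / min(ratios)

# Only the smallest and largest sprockets matter for the total range.
print(gear_range([32], [10, 52]))          # 1x setup: 520% range
print(gear_range([22, 32, 44], [11, 34]))  # 3x setup: ~618% range
```

A wide-range cassette lets a single chainring approach the spread of a triple while eliminating the front derailleur, which is the design choice behind drivetrains such as XX1 and Eagle.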
Geometry
The critical angles in bicycle geometry are the head angle (the angle of the head tube), and the seat tube angle (the angle of the seat tube). These angles are measured from the horizontal, and drastically affect the rider position and performance characteristics of the bicycle. Mountain bike geometry will often feature a seat tube angle around 73 degrees, with a head tube angle of anywhere from 60 to 73 degrees. The intended application of the bike affects its geometry very heavily. In general, steeper angles (closer to 90 degrees from the horizontal) are more efficient for pedaling up hills and make for sharper handling. Slacker angles (leaning farther from the vertical) are preferred for high speeds and downhill stability.
Suspension
In the past, mountain bikes had a rigid frame and fork. In the early 1990s, the first mountain bikes with suspension forks were introduced. This made riding on rough terrain easier and less physically stressful. The first front suspension forks had about 1.5 to 2 inches (38 to 50 mm) of suspension travel. Once suspension was introduced, bikes with front suspension and rigid, non-suspended rear wheels, or "hardtails", became popular nearly overnight. While the hardtail design has the benefits of lower cost, less maintenance, and better pedaling efficiency, it is slowly losing popularity due to improvements in full-suspension designs. Front suspension forks are now available with far more travel (see above under Designs).
Many new mountain bikes integrate a "full suspension" design, also known as dual suspension, meaning that both the front and rear wheels are fitted with some form of shock absorber where the wheel attaches to the bike. This provides a smoother ride, as the front and rear wheels can travel up and down to absorb the force of obstacles striking the tires. Dual-suspension bikes of a similar quality are considerably more expensive, but this price increase brings a substantial off-road performance upgrade: dual-suspension bikes are much faster on downhill and technical/rough sections than other forms of mountain bike. When a wheel strikes an obstacle, its tendency is to bounce up, and some speed is lost as forward energy is diverted into this upward movement. Dual-suspension bikes mitigate this problem by absorbing the upward force and transmitting it into the shocks of the front and rear wheels, drastically decreasing the translation of forward momentum into useless upward movement. Disadvantages of rear suspension are increased weight, increased price, and, with some designs, decreased pedaling efficiency, which is especially noticeable when cycling on roads and hard trails. Early rear suspension designs were overly heavy and susceptible either to pedaling-induced bobbing or to lockout.
Disc brakes
Most new mountain bikes use disc brakes. They offer much-improved stopping power over rim brakes under all conditions, especially adverse ones: less lever pressure is required, and braking modulation is greater. Because they are located at the center of the wheel (on the wheel hub), they remain drier and cleaner than wheel rims, which are more readily soiled or damaged. The disadvantages of disc brakes are their increased cost and often greater weight. Disc brakes do not allow heat to build up in the tires on long descents; instead, heat builds up in the rotor, which can become extremely hot. There are two kinds of disc brakes: hydraulic brakes, which use oil in the lines to push the brake pads against the rotors and which cost more but work better; and mechanical brakes, which use cables to pull the pads against the rotors.
Wheel and tire design
Typical features of a mountain bike are very wide tires. The original 26-inch wheel diameter with ≈2.125″ width (ISO 559 mm rim diameter) is increasingly being displaced by 29-inch wheels with ≈2.35″ width (ISO 622 mm rim diameter), as well as by the 27.5-inch wheel diameter with ≈2.25″ width (ISO 584 mm rim diameter), particularly on smaller frame sizes for shorter riders. Mountain bikes with 24-inch wheels are also available, sometimes for dirt jumping, or as a junior bike.
Bicycle wheel sizes are not precise measurements. The rim of a 29-inch mountain bike wheel has a bead seat diameter of 622 mm (the term bead seat diameter (BSD) is used in the ETRTO tire and rim sizing system), and the average 29″ mountain bike tire is (in ISO notation) 59-622, which corresponds to an outside diameter of about 29.15 inches (740 mm).
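The outside diameter quoted above follows directly from the ISO designation: the tire adds roughly its nominal section height on each side of the bead seat diameter. A minimal sketch of the arithmetic, treating the tire's height as equal to its nominal 59 mm width (an approximation for round-profile tires):

```python
# ISO/ETRTO designation 59-622: a 59 mm tire section on a 622 mm bead seat.
tire_section_mm = 59
bead_seat_mm = 622

outside_mm = bead_seat_mm + 2 * tire_section_mm  # 740 mm
outside_in = outside_mm / 25.4                   # ~29.1 inches

print(outside_mm, round(outside_in, 2))
```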
622 mm wheels are standard on road bikes and are commonly known as 700C. In some countries, mainly in Continental Europe, 700C (622 mm) wheels are commonly called 28-inch wheels. 24-inch wheels are used for dirt jumping bikes and sometimes on freeride bikes (rear wheel only), as this makes the bike more maneuverable. 29-inch wheels were once used only for cross-country purposes, but are now becoming more commonplace in other disciplines of mountain biking. A mountain bike with 29″ wheels is often referred to as a 29er, and a bike with 27.5-inch wheels is called a "27.5" mountain bike or, as a marketing term, a "650B" bike.
Wheels come in a variety of widths, ranging from standard rims suitable for use with tires in the 1.90 to 2.10 in (48 to 53 mm) size, to widths popular with freeride and downhill bicycles. Although heavier wheelsets are favored in the freeride and downhill disciplines, advances in wheel technology continually shave weight off strong wheels. This is highly advantageous as rolling weight greatly affects handling and control, which are very important to the technical nature of freeride and downhill riding.
The widest wheel/tire widths, typically 3.8 in (97 mm) or larger, are sometimes used by icebikers who use their mountain bikes for winter-time riding in snowy conditions.
Manufacturers produce bicycle tires with a wide variety of tread patterns to suit different needs. Among these styles are: slick street tires, street tires with a center ridge and outer tread, fully knobby, front-specific, rear-specific, and snow studded. Some tires can be specifically designed for use in certain weather (wet or dry) and terrain (hard, soft, muddy, etc.) conditions. Other tire designs attempt to be all-around applicable. Within the same intended application, more expensive tires tend to be lighter and have less rolling resistance. Sticky rubber tires are now available for use on freeride and downhill bikes. While these tires wear down more quickly, they provide greater traction in all conditions, especially during cornering. Tires and rims are available in either tubed or tubeless designs, with tubeless tires recently (2004) gaining favor for their pinch flat resistance.
Tires come in three configurations: tubed, tubeless, and tubeless-ready. Tires with tubes are the standard design and the easiest to use and maintain. Tubeless tires are significantly lighter and often perform better because they can be run at a lower tire pressure, which results in better traction and lower rolling resistance on rough surfaces. Tubeless-ready tires can be used with tubes or set up tubeless; when run without a tube, a liquid sealant is used to secure the seal to the rim. Popular tire manufacturers include Wilderness Trail Bikes, Schwalbe, Maxxis, Nokian, Michelin, Continental, Tioga, Kenda, Hutchinson, Specialized and Panaracer.
Tandems
Mountain bikes are available in tandem configurations. For example, Cannondale and Santana Cycles offer ones without suspension, while Ellsworth, Nicolai, and Ventana manufacture tandems with full suspension.
| Technology | Human-powered transport | null |
134880 | https://en.wikipedia.org/wiki/Eardrum | Eardrum | In the anatomy of humans and various other tetrapods, the eardrum, also called the tympanic membrane or myringa, is a thin, cone-shaped membrane that separates the external ear from the middle ear. Its function is to transmit changes in pressure of sound from the air to the ossicles inside the middle ear, and thence to the oval window in the fluid-filled cochlea. The ear thereby converts and amplifies vibration in the air to vibration in cochlear fluid. The malleus bone bridges the gap between the eardrum and the other ossicles.
Rupture or perforation of the eardrum can lead to conductive hearing loss. Collapse or retraction of the eardrum can cause conductive hearing loss or cholesteatoma.
Structure
Orientation and relations
The tympanic membrane is oriented obliquely in the anteroposterior, mediolateral, and superoinferior planes. Consequently, its superoposterior end lies lateral to its anteroinferior end.
Anatomically, it relates superiorly to the middle cranial fossa, posteriorly to the ossicles and facial nerve, inferiorly to the parotid gland, and anteriorly to the temporomandibular joint.
Regions
The eardrum is divided into two general regions: the pars flaccida and the pars tensa. The relatively fragile pars flaccida lies above the lateral process of the malleus between the Notch of Rivinus and the anterior and posterior malleal folds. Consisting of two layers and appearing slightly pinkish in hue, it is associated with Eustachian tube dysfunction and cholesteatomas.
The larger pars tensa consists of three layers: skin, fibrous tissue, and mucosa. Its thick periphery forms a fibrocartilaginous ring called the annulus tympanicus or Gerlach's ligament, while the central umbo tents inward at the level of the tip of the malleus. The middle fibrous layer, containing radial, circular, and parabolic fibers, encloses the handle of the malleus. Though comparatively robust, the pars tensa is the region more commonly associated with perforations.
Umbo
The manubrium (Latin for "handle") of the malleus is firmly attached to the medial surface of the membrane as far as its center, drawing it toward the tympanic cavity. The lateral surface of the membrane is thus concave. The most depressed aspect of this concavity is termed the umbo (Latin for "shield boss").
Nerve supply
Sensation of the outer surface of the tympanic membrane is supplied mainly by the auriculotemporal nerve, a branch of the mandibular nerve (cranial nerve V3), with contributions from the auricular branch of the vagus nerve (cranial nerve X), the facial nerve (cranial nerve VII), and possibly the glossopharyngeal nerve (cranial nerve IX). The inner surface of the tympanic membrane is innervated by the glossopharyngeal nerve.
Clinical significance
Examination
When the eardrum is illuminated during a medical examination, a cone of light radiates from the tip of the malleus to the periphery in the anteroinferior quadrant; this position is what is known clinically as "5 o'clock".
Rupture
Unintentional perforation (rupture) has been described in blast injuries and air travel, typically in patients experiencing upper respiratory congestion or general Eustachian tube dysfunction that prevents equalization of pressure in the middle ear. It is also known to occur in swimming, diving (including scuba diving), and martial arts.
Patients with tympanic membrane rupture may experience bleeding, tinnitus, hearing loss, or disequilibrium (vertigo). However, they rarely require medical intervention, as between 80 and 95 percent of ruptures recover completely within two to four weeks. The prognosis becomes more guarded as the force of injury increases.
Surgical puncture for treatment of middle ear infections
In some cases, the pressure of fluid in an infected middle ear is great enough to cause the eardrum to rupture naturally. Usually, this consists of a small hole (perforation), from which fluid can drain out of the middle ear. If this does not occur naturally, a myringotomy (tympanotomy, tympanostomy) can be performed. A myringotomy is a surgical procedure in which a tiny incision is created in the eardrum to relieve pressure caused by excessive buildup of fluid, or to drain pus from the middle ear. The fluid or pus comes from a middle ear infection (otitis media), which is a common problem in children. A tympanostomy tube is inserted into the eardrum to keep the middle ear aerated for a prolonged time and to prevent reaccumulation of fluid. Without the insertion of a tube, the incision usually heals spontaneously in two to three weeks. Depending on the type, the tube is either naturally extruded in 6 to 12 months or removed during a minor procedure.
Those requiring myringotomy usually have an obstructed or dysfunctional Eustachian tube that is unable to perform drainage or ventilation in its usual fashion. Before the invention of antibiotics, myringotomy without tube placement was also used as a major treatment of severe acute otitis media.
Society and culture
The Bajau people of the Pacific intentionally rupture their eardrums at an early age to facilitate diving and hunting at sea. Many older Bajau therefore have difficulties hearing.
| Biology and health sciences | Sensory nervous system | Biology |
134971 | https://en.wikipedia.org/wiki/Cochlea | Cochlea | The cochlea is the part of the inner ear involved in hearing. It is a spiral-shaped cavity in the bony labyrinth, in humans making 2.75 turns around its axis, the modiolus. A core component of the cochlea is the organ of Corti, the sensory organ of hearing, which is distributed along the partition separating the fluid chambers in the coiled tapered tube of the cochlea.
Etymology
The name 'cochlea' is derived from the Latin word for snail shell, which in turn is from the Ancient Greek κοχλίας kokhlias ("snail, screw"), and from κόχλος kokhlos ("spiral shell") in reference to its coiled shape; the cochlea is coiled in mammals with the exception of monotremes.
Structure
The cochlea (plural: cochleae) is a spiraled, hollow, conical chamber of bone, in which waves propagate from the base (near the middle ear and the oval window) to the apex (the top or center of the spiral). The spiral canal of the cochlea is a section of the bony labyrinth of the inner ear that is approximately 30 mm long and makes 2.75 turns about the modiolus. The cochlear structures include:
Three scalae or chambers:
the vestibular duct or scala vestibuli (containing perilymph), which lies superior to the cochlear duct and abuts the oval window
the tympanic duct or scala tympani (containing perilymph), which lies inferior to the cochlear duct and terminates at the round window
the cochlear duct or scala media (containing endolymph) a region of high potassium ion concentration that the stereocilia of the hair cells project into
The helicotrema, the location where the tympanic duct and the vestibular duct merge, at the apex of the cochlea
Reissner's membrane, which separates the vestibular duct from the cochlear duct
The osseous spiral lamina, a main structural element that separates the cochlear duct from the tympanic duct
The basilar membrane, a main structural element that separates the cochlear duct from the tympanic duct and determines the mechanical wave propagation properties of the cochlear partition
The organ of Corti, the sensory epithelium, a cellular layer on the basilar membrane, in which sensory hair cells are powered by the potential difference between the perilymph and the endolymph
Hair cells, sensory cells in the organ of Corti, topped with hair-like structures called stereocilia
The spiral ligament is a coiled thickening in the fibrous lining of the cochlear wall. It attaches the membranous cochlear duct to the bony spiral canal.
The cochlea is a portion of the inner ear that looks like a snail shell (cochlea is Greek for snail). The cochlea receives sound in the form of vibrations, which cause the stereocilia to move. The stereocilia then convert these vibrations into nerve impulses which are taken up to the brain to be interpreted. Two of the three fluid sections are canals and the third is the 'organ of Corti' which detects pressure impulses that travel along the auditory nerve to the brain. The two canals are called the vestibular canal and the tympanic canal.
Microanatomy
The walls of the hollow cochlea are made of bone, with a thin, delicate lining of epithelial tissue. This coiled tube is divided through most of its length by an inner membranous partition. Two fluid-filled outer spaces (ducts or scalae) are formed by this dividing membrane. At the top of the snailshell-like coiling tubes, there is a reversal of the direction of the fluid, thus changing the vestibular duct to the tympanic duct. This area is called the helicotrema. This continuation at the helicotrema allows fluid being pushed into the vestibular duct by the oval window to move back out via movement in the tympanic duct and deflection of the round window; since the fluid is nearly incompressible and the bony walls are rigid, it is essential for the conserved fluid volume to exit somewhere.
The lengthwise partition that divides most of the cochlea is itself a fluid-filled tube, the third 'duct'. This central column is called the cochlear duct. Its fluid, endolymph, also contains electrolytes and proteins, but is chemically quite different from perilymph. Whereas the perilymph is rich in sodium ions, the endolymph is rich in potassium ions, which produces an ionic, electrical potential.
The hair cells are arranged in four rows in the organ of Corti along the entire length of the cochlear coil. Three rows consist of outer hair cells (OHCs) and one row consists of inner hair cells (IHCs). The inner hair cells provide the main neural output of the cochlea. The outer hair cells, instead, mainly 'receive' neural input from the brain, which influences their motility as part of the cochlea's mechanical "pre-amplifier". The input to the OHC is from the olivary body via the medial olivocochlear bundle.
The cochlear duct is almost as complex on its own as the ear itself. The cochlear duct is bounded on three sides by the basilar membrane, the stria vascularis, and Reissner's membrane. The stria vascularis is a rich bed of capillaries and secretory cells; Reissner's membrane is a thin membrane that separates endolymph from perilymph; and the basilar membrane is a mechanically somewhat stiff membrane, supporting the receptor organ for hearing, the organ of Corti, and determines the mechanical wave propagation properties of the cochlear system.
Sexual dimorphism
Between males and females, there are differences in the shape of the human cochlea. The variation lies in the twist at the end of the spiral. Because of this difference, and because the cochlea is one of the more durable bones in the skull, it is used in ascertaining the sex of human remains found at archaeological sites.
Function
The cochlea is filled with a watery liquid, the endolymph, which moves in response to the vibrations coming from the middle ear via the oval window. As the fluid moves, the cochlear partition (basilar membrane and organ of Corti) moves; thousands of hair cells sense the motion via their stereocilia, and convert that motion to electrical signals that are communicated via neurotransmitters to many thousands of nerve cells. These primary auditory neurons transform the signals into electrochemical impulses known as action potentials, which travel along the auditory nerve to structures in the brainstem for further processing.
Hearing
The stapes (stirrup) ossicle bone of the middle ear transmits vibrations to the fenestra ovalis (oval window) on the outside of the cochlea, which vibrates the perilymph in the vestibular duct (upper chamber of the cochlea). The ossicles are essential for efficient coupling of sound waves into the cochlea, since the cochlear environment is a fluid–membrane system, and it takes more pressure to move sound through fluid–membrane waves than it does through air. The pressure increase is achieved by the reduction in area, by a factor of about 20, from the tympanic membrane (drum) to the oval window (stapes footplate). Since pressure = force/area, this results in a pressure gain of about 20 times the original sound wave pressure in air. This gain is a form of impedance matching – matching the soundwave travelling through air to that travelling in the fluid–membrane system.
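As a worked sketch of that arithmetic: if the force collected by the eardrum is delivered essentially unchanged to the much smaller oval window (an idealization that ignores the additional lever action of the ossicles), the pressures relate as

\[
p_{\text{oval}} \;=\; \frac{F}{A_{\text{oval}}}
\;=\; p_{\text{tympanic}} \cdot \frac{A_{\text{tympanic}}}{A_{\text{oval}}}
\;\approx\; 20\, p_{\text{tympanic}},
\]

using the roughly 20:1 area ratio stated above.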
At the base of the cochlea, each 'duct' ends in a membranous portal that faces the middle ear cavity: The vestibular duct ends at the oval window, where the footplate of the stapes sits. The footplate vibrates when the pressure is transmitted via the ossicular chain. The wave in the perilymph moves away from the footplate and towards the helicotrema. Since those fluid waves move the cochlear partition that separates the ducts up and down, the waves have a corresponding symmetric part in perilymph of the tympanic duct, which ends at the round window, bulging out when the oval window bulges in.
The perilymph in the vestibular duct and the endolymph in the cochlear duct act mechanically as a single duct, being kept apart only by the very thin Reissner's membrane.
The vibrations of the endolymph in the cochlear duct displace the basilar membrane in a pattern that peaks a distance from the oval window depending upon the soundwave frequency. The organ of Corti vibrates due to outer hair cells further amplifying these vibrations. Inner hair cells are then displaced by the vibrations in the fluid, and depolarise by an influx of K+ via their tip-link-connected channels, and send their signals via neurotransmitter to the primary auditory neurons of the spiral ganglion.
The hair cells in the organ of Corti are tuned to certain sound frequencies by way of their location in the cochlea, due to the degree of stiffness in the basilar membrane. This stiffness is due to, among other things, the thickness and width of the basilar membrane, which is stiffest nearest its beginning at the oval window, where the stapes introduces the vibrations coming from the eardrum. Since its stiffness is high there, only high-frequency vibrations can move the basilar membrane, and thus the hair cells, near the base. The farther a wave travels towards the cochlea's apex (the helicotrema), the less stiff the basilar membrane is; waves slow down and the membrane responds more easily to lower frequencies, so lower frequencies travel farther down the tube before reaching their place of maximal displacement. In addition, in mammals, the cochlea is coiled, which has been shown to enhance low-frequency vibrations as they travel through the fluid-filled coil. This spatial arrangement of sound reception is referred to as tonotopy.
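This position-to-frequency mapping is often summarized by Greenwood's frequency–position function. The sketch below is an illustration using commonly cited human parameter values drawn from the wider literature — the function and its constants are an assumption for the example, not something stated in this article:

```python
def greenwood_frequency(x, A=165.4, a=2.1, K=0.88):
    """Characteristic frequency (Hz) at relative position x along the
    basilar membrane, with x = 0.0 at the apex (helicotrema) and
    x = 1.0 at the base (oval window). Constants are commonly cited
    values for the human cochlea (an assumption for this sketch)."""
    return A * (10 ** (a * x) - K)

print(greenwood_frequency(0.0))  # ~20 Hz: low frequencies peak at the apex
print(greenwood_frequency(1.0))  # ~20,700 Hz: high frequencies peak at the base
```

The exponential form mirrors the text: stiffness, and with it characteristic frequency, falls off steadily from base to apex.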
For very low frequencies (below 20 Hz), the waves propagate along the complete route of the cochlea – differentially up vestibular duct and tympanic duct all the way to the helicotrema. Frequencies this low still activate the organ of Corti to some extent but are too low to elicit the perception of a pitch. Higher frequencies do not propagate to the helicotrema, due to the stiffness-mediated tonotopy.
A very strong movement of the basilar membrane due to very loud noise may cause hair cells to die. This is a common cause of partial hearing loss and is the reason why users of firearms or heavy machinery often wear earmuffs or earplugs.
Pathway to the brain
To transmit the sensation of sound to the brain, where it can be processed into the perception of hearing, hair cells of the cochlea must convert their mechanical stimulation into the electrical signaling patterns of the nervous system. Hair cells are modified neurons, able to generate action potentials which can be transmitted to other nerve cells. These action potential signals travel through the vestibulocochlear nerve to eventually reach the anterior medulla, where they synapse and are initially processed in the cochlear nuclei.
Some processing occurs in the cochlear nuclei themselves, but the signals must also travel to the superior olivary complex of the pons as well as the inferior colliculi for further processing.
Hair cell amplification
Not only does the cochlea "receive" sound; a healthy cochlea also generates and amplifies sound when necessary. Where the organism needs a mechanism to hear very faint sounds, the cochlea amplifies by the reverse transduction of the OHCs, converting electrical signals back to mechanical ones in a positive-feedback configuration. The OHCs have a protein motor called prestin on their outer membranes; it generates additional movement that couples back to the fluid–membrane wave. This "active amplifier" is essential in the ear's ability to amplify weak sounds.
The active amplifier also leads to the phenomenon of soundwave vibrations being emitted from the cochlea back into the ear canal through the middle ear (otoacoustic emissions).
Otoacoustic emissions
Otoacoustic emissions are due to a wave exiting the cochlea via the oval window and propagating back through the middle ear to the eardrum, and out the ear canal, where it can be picked up by a microphone. Otoacoustic emissions are important in some types of tests for hearing impairment, since they are present when the cochlea is working well, and less so when it is suffering from loss of OHC activity. Otoacoustic emissions also exhibit sexual dimorphism, as females tend to display higher-magnitude emissions. Males tend to experience a reduction in otoacoustic emission magnitudes as they age, whereas women do not experience such a change with age.
Role of gap junctions
Gap-junction proteins, called connexins, expressed in the cochlea play an important role in auditory functioning. Mutations in gap-junction genes have been found to cause syndromic and nonsyndromic deafness. Certain connexins, including connexin 30 and connexin 26, are prevalent in the two distinct gap-junction systems found in the cochlea. The epithelial-cell gap-junction network couples non-sensory epithelial cells, while the connective-tissue gap-junction network couples connective-tissue cells. Gap-junction channels recycle potassium ions back to the endolymph after mechanotransduction in hair cells. Importantly, gap junction channels are found between cochlear supporting cells, but not auditory hair cells.
Clinical significance
Physical damage
Damage to the cochlea can result from different incidents or conditions like a severe head injury, a cholesteatoma, an infection, and/or exposure to loud noise which could kill hair cells in the cochlea.
Hearing loss
Hearing loss associated with the cochlea is often a result of damage to or death of the outer and inner hair cells. Outer hair cells are more susceptible to damage, which can result in less sensitivity to weak sounds. Frequency sensitivity is also affected by cochlear damage, which can impair the patient's ability to distinguish between spectral differences of vowels. The effects of cochlear damage on different aspects of hearing loss, such as temporal integration, pitch perception, and frequency determination, are still being studied, given that multiple factors must be taken into account in cochlear research.
Bionics
In 2009, engineers at the Massachusetts Institute of Technology created an electronic chip that can quickly analyze a very large range of radio frequencies while using only a fraction of the power needed for existing technologies; its design specifically mimics a cochlea.
Other animals
The coiled form of cochlea is unique to mammals. In birds and in other non-mammalian vertebrates, the compartment containing the sensory cells for hearing is occasionally also called "cochlea," despite not being coiled up. Instead, it forms a blind-ended tube, also called the cochlear duct. This difference apparently evolved in parallel with the differences in frequency range of hearing between mammals and non-mammalian vertebrates. The superior frequency range in mammals is partly due to their unique mechanism of pre-amplification of sound by active cell-body vibrations of outer hair cells. Frequency resolution is, however, not better in mammals than in most lizards and birds, but the upper frequency limit is – sometimes much – higher. Most bird species do not hear above 4–5 kHz, the currently known maximum being ~ 11 kHz in the barn owl. Some marine mammals hear up to 200 kHz. A long coiled compartment, rather than a short and straight one, provides more space for additional octaves of hearing range, and has made possible some of the highly derived behaviors involving mammalian hearing.
As the study of the cochlea should fundamentally be focused at the level of hair cells, it is important to note the anatomical and physiological differences between the hair cells of various species. In birds, for instance, instead of outer and inner hair cells, there are tall and short hair cells. There are several similarities of note in regard to this comparative data. For one, the tall hair cell is very similar in function to that of the inner hair cell, and the short hair cell, lacking afferent auditory-nerve fiber innervation, resembles the outer hair cell. One unavoidable difference, however, is that while all hair cells are attached to a tectorial membrane in birds, only the outer hair cells are attached to the tectorial membrane in mammals.
Gallery
| Biology and health sciences | Sensory nervous system | Biology |
135316 | https://en.wikipedia.org/wiki/Sexagesimal | Sexagesimal | Sexagesimal, also known as base 60, is a numeral system with sixty as its base. It originated with the ancient Sumerians in the 3rd millennium BC, was passed down to the ancient Babylonians, and is still used—in a modified form—for measuring time, angles, and geographic coordinates.
The number 60, a superior highly composite number, has twelve divisors, namely 1, 2, 3, 4, 5, 6, 10, 12, 15, 20, 30, and 60, of which 2, 3, and 5 are prime numbers. With so many factors, many fractions involving sexagesimal numbers are simplified. For example, one hour can be divided evenly into sections of 30 minutes, 20 minutes, 15 minutes, 12 minutes, 10 minutes, 6 minutes, 5 minutes, 4 minutes, 3 minutes, 2 minutes, and 1 minute. 60 is the smallest number that is divisible by every number from 1 to 6; that is, it is the lowest common multiple of 1, 2, 3, 4, 5, and 6.
In this article, all sexagesimal digits are represented as decimal numbers, except where otherwise noted. For example, the largest sexagesimal digit is "59".
Origin
According to Otto Neugebauer, the origins of sexagesimal are not as simple, consistent, or singular in time as they are often portrayed. Throughout their many centuries of use, which continues today for specialized topics such as time, angles, and astronomical coordinate systems, sexagesimal notations have always contained a strong undercurrent of decimal notation, such as in how sexagesimal digits are written. Their use has also always included (and continues to include) inconsistencies in where and how various bases are to represent numbers even within a single text.
The most powerful driver for rigorous, fully self-consistent use of sexagesimal has always been its mathematical advantages for writing and calculating fractions. In ancient texts this shows up in the fact that sexagesimal is used most uniformly and consistently in mathematical tables of data. Another practical factor that helped expand the use of sexagesimal in the past, even if less consistently than in mathematical tables, was its decided advantage to merchants and buyers for making everyday financial transactions easier when they involved bargaining for and dividing up larger quantities of goods. In the late 3rd millennium BC, Sumerian/Akkadian units of weight included the kakkaru (talent, approximately 30 kg) divided into 60 manû (mina), which was further subdivided into 60 šiqlu (shekel); the descendants of these units persisted for millennia, though the Greeks later coerced this relationship into the more base-10 compatible ratio of a shekel being one 50th of a mina.
Apart from mathematical tables, the inconsistencies in how numbers were represented within most texts extended all the way down to the most basic cuneiform symbols used to represent numeric quantities. For example, the cuneiform symbol for 1 was an ellipse made by applying the rounded end of the stylus at an angle to the clay, while the sexagesimal symbol for 60 was a larger oval or "big 1". But within the same texts in which these symbols were used, the number 10 was represented as a circle made by applying the round end of the stylus perpendicular to the clay, and a larger circle or "big 10" was used to represent 100. Such multi-base numeric quantity symbols could be mixed with each other and with abbreviations, even within a single number. The details and even the magnitudes implied (since zero was not used consistently) were idiomatic to the particular time periods, cultures, and quantities or concepts being represented. In modern times there is the recent innovation of adding decimal fractions to sexagesimal astronomical coordinates.
Usage
Babylonian mathematics
The sexagesimal system as used in ancient Mesopotamia was not a pure base-60 system, in the sense that it did not use 60 distinct symbols for its digits. Instead, the cuneiform digits used ten as a sub-base in the fashion of a sign-value notation: a sexagesimal digit was composed of a group of narrow, wedge-shaped marks representing units up to nine and a group of wide, wedge-shaped marks representing up to five tens. The value of the digit was the sum of the values of its component parts.
Numbers larger than 59 were indicated by multiple symbol blocks of this form in place value notation. Because there was no symbol for zero, it is not always immediately obvious how a number should be interpreted, and its true value must sometimes have been determined by its context. For example, the symbols for 1 and 60 are identical. Later Babylonian texts used a placeholder sign to represent zero, but only in medial positions, never on the right-hand side of a number.
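To make the two-level structure concrete, the sketch below decomposes a number into base-60 digits and then splits each digit into its tens and units wedge counts, the way a scribe would compose it (a minimal illustration; the function names are mine):

```python
def sexagesimal_digits(n):
    """Base-60 digits of a non-negative integer, most significant first."""
    digits = []
    while n:
        n, d = divmod(n, 60)
        digits.append(d)
    return digits[::-1] or [0]

def wedge_counts(digit):
    """Split one sexagesimal digit (0-59) into (tens-wedges, unit-wedges)."""
    return divmod(digit, 10)

n = 4000
print(sexagesimal_digits(n))                  # [1, 6, 40]: 1*3600 + 6*60 + 40
print([wedge_counts(d) for d in sexagesimal_digits(n)])
# [(0, 1), (0, 6), (4, 0)]: e.g. the digit 40 is written as four "ten" marks
```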
Other historical usages
In the Chinese calendar, a system is commonly used in which days or years are named by positions in a sequence of ten stems and in another sequence of 12 branches. The same stem and branch repeat every 60 steps through this cycle.
Book VIII of Plato's Republic involves an allegory of marriage centered on the number 60⁴ = 12,960,000 and its divisors. This number has the particularly simple sexagesimal representation 1,0,0,0,0. Later scholars have invoked both Babylonian mathematics and music theory in an attempt to explain this passage.
Ptolemy's Almagest, a treatise on mathematical astronomy written in the second century AD, uses base 60 to express the fractional parts of numbers. In particular, his table of chords, which was essentially the only extensive trigonometric table for more than a millennium, has fractional parts of a degree in base 60, and was practically equivalent to a modern-day table of values of the sine function.
Medieval astronomers also used sexagesimal numbers to note time. Al-Biruni first subdivided the hour sexagesimally into minutes, seconds, thirds and fourths in 1000 while discussing Jewish months. Around 1235 John of Sacrobosco continued this tradition, although Nothaft thought Sacrobosco was the first to do so. The Parisian version of the Alfonsine tables (ca. 1320) used the day as the basic unit of time, recording multiples and fractions of a day in base-60 notation.
The sexagesimal number system continued to be frequently used by European astronomers for performing calculations as late as 1671. For instance, Jost Bürgi in Fundamentum Astronomiae (presented to Emperor Rudolf II in 1592), his colleague Ursus in Fundamentum Astronomicum, and possibly also Henry Briggs, used multiplication tables based on the sexagesimal system in the late 16th century, to calculate sines.
In the late 18th and early 19th centuries, Tamil astronomers were found to make astronomical calculations, reckoning with shells using a mixture of decimal and sexagesimal notations developed by Hellenistic astronomers.
Base-60 number systems have also been used in some other cultures that are unrelated to the Sumerians, for example by the Ekari people of Western New Guinea.
Modern usage
Modern uses for the sexagesimal system include measuring angles, geographic coordinates, electronic navigation, and time.
One hour of time is divided into 60 minutes, and one minute is divided into 60 seconds. Thus, a measurement of time such as 3:23:17 can be interpreted as a whole sexagesimal number (no sexagesimal point), meaning 3 × 60² + 23 × 60¹ + 17 × 60⁰ = 12,197 seconds. However, each of the three sexagesimal digits in this number (3, 23, and 17) is written using the decimal system.
Similarly, the practical unit of angular measure is the degree, of which there are 360 (six sixties) in a circle. There are 60 minutes of arc in a degree, and 60 arcseconds in a minute.
YAML
In version 1.1 of the YAML data storage format, sexagesimals are supported for plain scalars, and formally specified both for integers and floating point numbers. This has led to confusion; for instance, some MAC addresses would be recognised as sexagesimals and loaded as integers, while others were not and were loaded as strings. In YAML 1.2, support for sexagesimals was dropped.
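The behaviour can be reproduced with a YAML 1.1 parser. The sketch below assumes PyYAML, which implements YAML 1.1; a YAML 1.2 library would return strings instead:

```python
import yaml  # PyYAML implements YAML 1.1, including sexagesimal integers

# A colon-separated run of digits is resolved as a base-60 integer:
print(yaml.safe_load("1:02:03"))   # 3723 == 1*60**2 + 2*60 + 3

# An all-digit MAC-address-like scalar is therefore loaded as an integer,
# while one containing hex letters stays a string:
print(yaml.safe_load("10:20:30"))  # 37230 (an int, probably not intended)
print(yaml.safe_load("0a:1b:2c"))  # '0a:1b:2c' (a str)
```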
Notations
In Hellenistic Greek astronomical texts, such as the writings of Ptolemy, sexagesimal numbers were written using Greek alphabetic numerals, with each sexagesimal digit being treated as a distinct number. Hellenistic astronomers adopted a new symbol for zero, which morphed over the centuries into other forms, including the Greek letter omicron, ο, normally meaning 70, but permissible in a sexagesimal system where the maximum value in any position is 59. The Greeks limited their use of sexagesimal numbers to the fractional part of a number.
In medieval Latin texts, sexagesimal numbers were written using Arabic numerals; the different levels of fractions were denoted minuta (i.e., fraction), minuta secunda, minuta tertia, etc. By the 17th century it became common to denote the integer part of sexagesimal numbers by a superscripted zero, and the various fractional parts by one or more accent marks. John Wallis, in his Mathesis universalis, generalized this notation to include higher multiples of 60: in his example number, the digits to the left are multiplied by successively higher powers of 60, the digits to the right are divided by powers of 60, and the digit marked with the superscripted zero is multiplied by 1. This notation leads to the modern signs for degrees, minutes, and seconds. The same minute and second nomenclature is also used for units of time, and the modern notation for time with hours, minutes, and seconds written in decimal and separated from each other by colons may be interpreted as a form of sexagesimal notation.
In some usage systems, each position past the sexagesimal point was numbered, using Latin or French roots: prime or primus, seconde or secundus, tierce, quatre, quinte, etc. To this day we call the second-order part of an hour or of a degree a "second". Until at least the 18th century, 1/60 of a second was called a "tierce" or "third".
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integer and fractional portions of the number and using a comma (,) to separate the positions within each portion. For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days. This notation is used in this article.
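Neugebauer's notation is easy to evaluate mechanically. The sketch below converts a value such as 29;31,50,8,20 to a decimal number (a minimal illustration; it assumes the integer part is a single decimal number, as in this example, so multi-position integer parts would need the commas to the left of the semicolon handled as well):

```python
def from_neugebauer(s):
    """Evaluate Neugebauer-style sexagesimal notation, e.g. '29;31,50,8,20'."""
    int_part, _, frac_part = s.partition(";")
    value = float(int_part)
    for i, digit in enumerate(frac_part.split(","), start=1):
        value += int(digit) / 60**i
    return value

# The mean synodic month quoted above:
print(from_neugebauer("29;31,50,8,20"))  # ~29.530594 days
```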
Fractions and irrational numbers
Fractions
In the sexagesimal system, any fraction in which the denominator is a regular number (having only 2, 3, and 5 in its prime factorization) may be expressed exactly. Shown here are all fractions of this type in which the denominator is less than or equal to 60:
1/2 = 0;30
1/3 = 0;20
1/4 = 0;15
1/5 = 0;12
1/6 = 0;10
1/8 = 0;7,30
1/9 = 0;6,40
1/10 = 0;6
1/12 = 0;5
1/15 = 0;4
1/16 = 0;3,45
1/18 = 0;3,20
1/20 = 0;3
1/24 = 0;2,30
1/25 = 0;2,24
1/27 = 0;2,13,20
1/30 = 0;2
1/32 = 0;1,52,30
1/36 = 0;1,40
1/40 = 0;1,30
1/45 = 0;1,20
1/48 = 0;1,15
1/50 = 0;1,12
1/54 = 0;1,6,40
1/60 = 0;1
However, numbers that are not regular form more complicated repeating fractions. For example:
1/7 = 0;8,34,17,8,34,17 ... (the sequence of sexagesimal digits 8,34,17 repeats infinitely many times)
1/11 = 0;5,27,16,21,49 repeating
1/13 = 0;4,36,55,23 repeating
1/14 = 0;4, with 17,8,34 then repeating
1/17 = 0;3,31,45,52,56,28,14,7 repeating
1/19 = 0;3,9,28,25,15,47,22,6,18,56,50,31,34,44,12,37,53,41 repeating
1/59 = 0;1 repeating
1/61 = 0;0,59 repeating
The fact that the two numbers that are adjacent to sixty, 59 and 61, are both prime numbers implies that fractions that repeat with a period of one or two sexagesimal digits can only have regular number multiples of 59 or 61 as their denominators, and that other non-regular numbers have fractions that repeat with a longer period.
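Both behaviours — terminating expansions for regular denominators and repeating expansions otherwise — can be checked with exact rational arithmetic. A minimal sketch (the function name is mine):

```python
from fractions import Fraction

def sexagesimal_expansion(frac, max_digits=30):
    """Digits of a positive Fraction < 1 after the sexagesimal point.
    Returns (digits, repeat_start): repeat_start is the index where the
    digits begin repeating, or None if the expansion terminates."""
    digits, seen = [], {}
    r = frac
    while r and r not in seen and len(digits) < max_digits:
        seen[r] = len(digits)
        r *= 60
        d = int(r)
        digits.append(d)
        r -= d
    return digits, seen.get(r)

print(sexagesimal_expansion(Fraction(1, 8)))   # ([7, 30], None): 0;7,30 exactly
print(sexagesimal_expansion(Fraction(1, 7)))   # ([8, 34, 17], 0): 0;8,34,17 repeating
print(sexagesimal_expansion(Fraction(1, 61)))  # ([0, 59], 0): two-digit period
```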
Irrational numbers
The representations of irrational numbers in any positional number system (including decimal and sexagesimal) neither terminate nor repeat.
The square root of 2, the length of the diagonal of a unit square, was approximated by the Babylonians of the Old Babylonian Period as 1;24,51,10, that is, 1 + 24/60 + 51/60² + 10/60³ = 30547/21600 ≈ 1.41421296.
Because √2 ≈ 1.41421356... is an irrational number, it cannot be expressed exactly in sexagesimal (or indeed any integer-base system), but its sexagesimal expansion does begin 1;24,51,10,7,46,6,4,44...
The value of π as used by the Greek mathematician and scientist Ptolemy was 3;8,30 = 377/120 ≈ 3.141666.... Jamshīd al-Kāshī, a 15th-century Persian mathematician, calculated 2π as a sexagesimal expression to its correct value when rounded to nine subdigits; his value for 2π was 6;16,59,28,1,34,51,46,14,50. Like π above, 2π is an irrational number and cannot be expressed exactly in sexagesimal. Its sexagesimal expansion begins 6;16,59,28,1,34,51,46,14,49,55,12,35...
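The leading sexagesimal digits of such irrational values can be reproduced by repeatedly multiplying the fractional part by 60 at sufficient working precision. A minimal sketch for √2:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50   # plenty of working precision for 8 base-60 digits
x = Decimal(2).sqrt()    # ~1.41421356...

digits, frac = [], x - int(x)
for _ in range(8):
    frac *= 60
    d = int(frac)
    digits.append(d)
    frac -= d

print(digits)  # [24, 51, 10, 7, 46, 6, 4, 44] -> 1;24,51,10,7,46,6,4,44
```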
| Mathematics | Basics | null |
3613142 | https://en.wikipedia.org/wiki/Agricultural%20philosophy | Agricultural philosophy | Agricultural philosophy (or philosophy of agriculture) is, roughly and approximately, a discipline devoted to the systematic critique of the philosophical frameworks (or ethical world views) that are the foundation for decisions regarding agriculture. Many of these views are also used to guide decisions dealing with land use in general. (Please see the Wikipedia article on environmental philosophy.) In everyday usage, it can also be defined as the love of, search after, and wisdom associated with agriculture, as one of humanity's founding components of civilization. However, this view is more aptly known as agrarianism. In actuality, agrarianism is only one philosophy or normative framework out of many that people use to guide their decisions regarding agriculture on an everyday basis. The most prevalent of these philosophies will be briefly defined below.
Utilitarian approach
This view was first put forth by Jeremy Bentham and John Stuart Mill. Though there are many varieties of utilitarianism, generally the view is that a morally right action is an action that produces the maximum good for people. This theory is a form of consequentialism, which means that the correct action is understood entirely in terms of the consequences of that action. Utilitarianism is often used when deciding farming issues. For example, farmland is commonly valued based upon its capacity to grow the crops that people want. This approach to valuing land is called Asset Theory (in contrast to Location Theory), and it is based upon utilitarian principles. Another example is when a community decides what to do with a particular parcel of land. Suppose that this community must decide whether to use it for industry, residential purposes, or farming. By using a utilitarian approach, the council would judge which use would benefit the greatest number of people in the community and then make its choice based upon that information. Finally, utilitarianism also forms the foundation for industrial farming, since an increase in yield, which would increase the number of people able to receive goods from farmed land, is judged from this view to be a good action or approach. Indeed, a common argument in favor of industrial agriculture is that it is a good practice because it increases the benefits for humans, benefits such as food abundance and a drop in food prices.
However, several scholars and writers, such as Peter Singer, Aldo Leopold, Vandana Shiva, Barbara Kingsolver, and Wendell Berry have argued against this view. For example, Singer argues that the suffering of animals (farm animals included) should be included in the cost/benefit calculus when deciding whether or not to do an action such as industrial farming. It has also been challenged on the grounds that farmland and farm animals are instrumentalized in this view and not valued in and of themselves. In addition, systems thinkers, deep ecologists, and agrarian philosophers (such as Aldo Leopold & Wendell Berry) critique this view on the grounds that it ignores aspects of farming which are morally applicable and/or intrinsically valuable. The Slow Food Movement and the Buy Local Agricultural Movements are also built upon philosophical views morally opposed to extreme versions of this approach. Other critiques will be explored below when different philosophical approaches to agriculture are briefly explained. However, it is important to note that the utilitarian approach to agriculture is currently the most widespread approach within the modern Western World.
Libertarian approach
Another philosophical approach often used when deciding land or farming issues is libertarianism. Libertarianism is, roughly, the moral view that agents own themselves and have certain moral rights, including the right to acquire property. In a looser sense, libertarianism is commonly identified with the belief that each person has a right to a maximum amount of liberty when this liberty does not interfere with other people's freedom. A well known libertarian theorist is John Hospers. Within this view, property rights are natural rights. Thus, it would be acceptable for a farmer to farm their land inefficiently as long as they do not harm others while doing so. In 1968, Garrett Hardin applied this philosophy to land/farming issues when he argued that the only solution to the "Tragedy of the Commons" was to place soil and water resources into the hands of private citizens. He then supplied utilitarian justifications to support his argument, and indeed one could argue that libertarianism is rooted in utilitarian ideals. However, this leaves libertarian-based land ethics open to the above critiques lodged against utilitarian approaches to agriculture. Even excepting these critiques, the libertarian view has been specifically challenged by the critique that people making self-interested decisions can cause large ecological and social disasters, such as the Dust Bowl disaster. Even so, it is a philosophical view commonly held within the United States and, especially, by U.S. ranchers and farmers.
Egalitarian approach
Egalitarian-based views are often developed as a response to libertarianism. This is because, while libertarianism provides for the maximum amount of human freedom, it does not require a person to help others. It also leads to a grossly uneven distribution of wealth. A well known egalitarian philosopher is John Rawls. When focusing on agriculture, this translates into the uneven distribution of land and food. While both utilitarian and libertarian approaches to agricultural ethics could conceivably rationalize this maldistribution, an egalitarian approach typically favors equality, whether that be equal entitlement or opportunity to employment, or access to food. However, if one recognizes that people have a right to something, then someone has to supply this opportunity or item, whether that be an individual person or the government. Thus, the egalitarian view links land and water with the right to food. With the growth of human populations and the decline of soil and water resources, egalitarianism could provide a strong argument for the preservation of soil fertility and water.
Ecological or systems approach
In addition to utilitarian, libertarian, and egalitarian philosophies, there are normative views based upon the principle that land has intrinsic value, as well as positions arising from an ecological or systems view. Two main examples are James Lovelock's Gaia hypothesis, which postulates that the Earth is an organism, and deep ecology, which argues that human communities are built upon a foundation of the surrounding ecosystems or biotic communities. While these philosophies can be useful for guiding decision making on issues concerning land in general, they have limited usefulness when applied to agriculture because they privilege natural ecosystems, and agricultural ecosystems are often considered not natural. One philosophy grounded in the principle that land has intrinsic value and directly applicable to agriculture is Aldo Leopold's stewardship ethic or land ethic, in which an action is correct if it tends to "preserve the integrity, stability, and beauty of the biotic community". Similar to egalitarian-based land ethics, many of the above philosophies were developed as alternatives to utilitarian and libertarian approaches. Leopold's ethic is currently one of the most popular ecological approaches to agriculture, commonly known as agrarianism. Other agrarianists include Benjamin Franklin, Thomas Jefferson, J. Hector St. John de Crèvecœur (1735–1813), Ralph Waldo Emerson (1803–1882), Henry David Thoreau (1817–1862), John Steinbeck (1902–1968), Wendell Berry (b. 1934), Gene Logsdon (b. 1932), Paul B. Thompson, and Barbara Kingsolver.
| Technology | Academic disciplines | null |
3616597 | https://en.wikipedia.org/wiki/Digital%20photography | Digital photography | Digital photography uses cameras containing arrays of electronic photodetectors interfaced to an analog-to-digital converter (ADC) to produce images focused by a lens, as opposed to an exposure on photographic film. The digitized image is stored as a computer file ready for further digital processing, viewing, electronic publishing, or digital printing. It is a form of digital imaging based on gathering visible light (or for scientific instruments, light in various ranges of the electromagnetic spectrum).
Until the advent of such technology, photographs were made by exposing light-sensitive photographic film and paper, which was processed in liquid chemical solutions to develop and stabilize the image. Digital photographs are typically created solely by computer-based photoelectric and mechanical techniques, without wet bath chemical processing.
In consumer markets, apart from enthusiast digital single-lens reflex cameras (DSLR), most digital cameras now come with an electronic viewfinder, which approximates the final photograph in real-time. This enables the user to review, adjust, or delete a captured photograph within seconds, making this a form of instant photography, in contrast to most photochemical cameras from the preceding era.
Moreover, the onboard computational resources can usually perform aperture adjustment and focus adjustment (via inbuilt servomotors) and set the exposure level automatically, so these technical burdens are removed from the photographer unless the photographer feels competent to intercede (and the camera offers traditional controls). Electronic by nature, most digital cameras are instant, mechanized, and automatic in some or all functions. Digital cameras may emulate traditional manual controls (rings, dials, sprung levers, and buttons) or may instead provide a touchscreen interface for all functions; most camera phones fall into the latter category.
Digital photography spans a wide range of applications with a long history. Much of the technology originated in the space industry, where it pertains to highly customized, embedded systems combined with sophisticated remote telemetry. Any electronic image sensor can be digitized; this was achieved in 1951. The modern era in digital photography is dominated by the semiconductor industry, which evolved later. An early semiconductor milestone was the advent of the charge-coupled device (CCD) image sensor, first demonstrated in April 1970; since then, the field has advanced rapidly, with concurrent advances in photolithographic fabrication.
The first consumer digital cameras were marketed in the late 1990s. Professionals gravitated to digital slowly, converting as their professional work required using digital files to fulfill demands for faster turnaround than conventional methods could allow. Starting around 2000, digital cameras were incorporated into cell phones; in the following years, cell phone cameras became widespread, particularly due to their connectivity to social media and email. Since 2010, the digital point-and-shoot and DSLR cameras have also seen competition from the mirrorless digital cameras, which typically provide better image quality than point-and-shoot or cell phone cameras but are smaller in size and shape than typical DSLRs. Many mirrorless cameras accept interchangeable lenses and have advanced features through an electronic viewfinder, which replaces the through-the-lens viewfinder of single-lens reflex cameras.
History
While digital photography has only relatively recently become mainstream, the late 20th century saw many small developments leading to its creation. The history of digital photography began in the 1950s. In 1951, the first digital signals were saved to magnetic tape via the first video tape recorder. Six years later, in 1957, the first digital image was produced through a computer by Russell Kirsch. It was an image of his son.
The first semiconductor image sensor was the charge-coupled device (CCD), invented by physicists Willard S. Boyle and George E. Smith at Bell Labs in 1969. While researching the metal-oxide semiconductor (MOS) process, they realized that an electric charge was analogous to a magnetic bubble and that the charge could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to the capacitors so that the charge could be stepped along from one to the next. This semiconductor circuit was later used in the first digital video cameras for television broadcasting, and its invention was recognized by a Nobel Prize in Physics in 2009.
The first close-up image of Mars was taken as Mariner 4 flew by it on July 15, 1965, with a digital camera system designed by NASA and JPL. In 1976, the twin Mars Viking Landers produced the first images from the surface of Mars. The imaging process was different from that of a modern digital camera, though the result was similar; Viking used a mechanically scanned facsimile camera rather than a mosaic of solid state sensor elements. This produced a digital image that was stored on tape for later, relatively slow transmission back to Earth.
The first published color digital photograph was produced in 1972 by Michael Francis Tompsett using CCD sensor technology and was featured on the cover of Electronics Magazine. It was a picture of his wife, Margaret Tompsett.
The Cromemco Cyclops, a digital camera developed as a commercial product and interfaced to a microcomputer, was featured in the February 1975 issue of Popular Electronics magazine. It used MOS technology for its image sensor.
An important development in digital image compression technology was the discrete cosine transform (DCT), a lossy compression technique first proposed by Nasir Ahmed while he was working at Kansas State University in 1972. DCT compression is used in the JPEG image standard, which was introduced by the Joint Photographic Experts Group in 1992. JPEG compresses images down to much smaller file sizes, and has become the most widely used image file format. The JPEG standard was largely responsible for popularizing digital photography.
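The heart of JPEG's transform stage can be illustrated in a few lines of Python. The following is a minimal sketch only, not the full JPEG pipeline: it omits color-space conversion, the standard quantization tables, zigzag ordering, and entropy coding, and the uniform step size `q` is an illustrative parameter rather than a value from the standard.

```python
import numpy as np
from scipy.fft import dctn, idctn

def jpeg_like_block(block: np.ndarray, q: float = 20.0) -> np.ndarray:
    """Lossy round-trip of one 8x8 grayscale block, JPEG-style.

    block: 8x8 array of pixel intensities (0-255).
    q: illustrative uniform quantization step (real JPEG uses an 8x8 table).
    """
    # Forward 2-D DCT (type II, orthonormal), after JPEG's level shift.
    coeffs = dctn(block - 128.0, norm="ortho")
    # Quantization: round coefficients to multiples of q. This is where
    # information is discarded; small high-frequency terms become zero.
    quantized = np.round(coeffs / q)
    # Decoder side: dequantize and invert the DCT.
    return idctn(quantized * q, norm="ortho") + 128.0

rng = np.random.default_rng(0)
block = rng.integers(0, 256, size=(8, 8)).astype(float)
print(np.abs(block - jpeg_like_block(block)).max())  # reconstruction error
```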
The first self-contained (portable) digital camera was created in 1975 by Steven Sasson of Eastman Kodak. Sasson's camera used CCD image sensor chips developed by Fairchild Semiconductor in 1973. The camera weighed 8 pounds (3.6 kg), recorded black-and-white images to a cassette tape, had a resolution of 0.01 megapixels (10,000 pixels), and took 23 seconds to capture its first image in December 1975. The prototype camera was a technical exercise, not intended for production. While it was not until 1981 that the first consumer camera was produced by Sony, the groundwork for digital imaging and photography had been laid.
The first digital single-lens reflex (DSLR) camera was the Nikon SVC prototype demonstrated in 1986, followed by the commercial Nikon QV-1000C released in 1988. The first widely commercially available digital camera was the 1990 Dycam Model 1; it also sold as the Logitech Fotoman. It used a CCD image sensor, stored pictures digitally, and connected directly to a computer for downloading images. Originally offered to professional photographers for a hefty price, by the mid-to-late 1990s, due to technology advancements, digital cameras were commonly available to the general public.
The advent of digital photography also gave rise to cultural changes in the field of photography. Darkrooms and hazardous chemicals were no longer required for the post-production of an image – images could now be processed and enhanced on a personal computer. This allowed photographers to be more creative with their processing and editing techniques. As the field became more popular, digital photography and photographers diversified. Digital photography expanded photography from a small, somewhat elite circle to one that encompassed many people.
The camera phone further helped popularize digital photography, along with the Internet, social media, and the JPEG image format. The first cell phones with built-in digital cameras were produced in 2000 by Sharp and Samsung. Small, convenient, and easy to use, camera phones have made digital photography ubiquitous in the daily life of the general public.
Digital camera
Sensors
Image sensors are arrays of electronic devices that convert the optical image created by the camera lens into a digital file that is stored in some digital memory device, inside or outside the camera. Each element of the image sensor array measures the intensity of light hitting a small area of the projected image (a pixel) and converts it to a digital value.
The two main types of sensors are charge-coupled devices (CCD)—in which the photo charge is shifted to a central charge-to-voltage converter—and CMOS or active pixel sensors.
Most cameras for the general consumer market create color images, in which each pixel has a color value from a three-dimensional color space like RGB. Although there is light-sensing technology that can distinguish the wavelength of the light incident on each pixel, most cameras use monochrome sensors that can only record the intensity of that light, over a broad range of wavelengths that includes all the visible spectrum. To obtain color images, those cameras depend on color filters applied over each pixel, typically in a Bayer pattern, or (rarely) on movable filters or light splitters such as dichroic mirrors. The resulting grayscale images are then combined to produce a color image. This step is usually performed by the camera itself, although some cameras may optionally provide the unprocessed grayscale images in a so-called raw image format.
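As a rough illustration of the demosaicing step described above, the sketch below performs naive bilinear interpolation on an RGGB Bayer mosaic. This is a toy under stated assumptions (linear sensor values, an RGGB layout starting at the top-left pixel); real cameras use far more sophisticated, edge-aware algorithms, but the sketch shows how three full color planes are recovered from one measured value per pixel.

```python
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic(mosaic: np.ndarray) -> np.ndarray:
    """Naive bilinear demosaic of an RGGB Bayer mosaic (toy example).

    mosaic: 2-D array; even rows hold R,G pairs, odd rows hold G,B pairs.
    Returns an (H, W, 3) RGB image.
    """
    h, w = mosaic.shape
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    # Kernels that average the nearest samples of each color.
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0

    planes = []
    for mask, kern in ((r_mask, k_rb), (g_mask, k_g), (b_mask, k_rb)):
        sparse = np.where(mask, mosaic, 0.0)  # keep only this color's samples
        planes.append(convolve(sparse, kern, mode="mirror"))
    return np.dstack(planes)
```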
However, some special-purpose cameras, such as those for thermal mapping, or low light viewing, or high speed capture, may record only monochrome (grayscale) images. The Leica Monochrom cameras, for example, opted for a grayscale-only sensor to get better resolution and dynamic range. The reduction from three-dimensional color to grayscale or simulated sepia toning may also be performed by digital post-processing, often as an option in the camera itself. On the other hand, some multispectral cameras may record more than three color coordinates for each pixel.
Multifunctionality and connectivity
In most digital cameras (except some high-end linear array cameras and simple, low-end webcams), a digital memory device is used for storing images, which may be transferred to a computer later. This memory device is usually a memory card; floppy disks and CD-RWs, used in some early models, are now rare.
In addition to taking pictures, digital cameras may also record sound and video. Some function as webcams, some use the PictBridge standard to connect to printers without using a computer, and some can display pictures directly on a television set. Similarly, many camcorders can take still photographs and store them on videotape or flash memory cards with the same functionality as digital cameras.
Digital photography is an example of the shift from analog information to digital information. In the past, conventional photography was an entirely chemical and mechanical process that did not require electricity. Now, modern photography is a digital process in which analog signals are converted to and stored as digital data using built-in computers.
Performance metrics
The quality of a digital image is a composite of various factors, many of which are similar to those of film cameras. Pixel count (typically listed in megapixels, millions of pixels) is only one of the major factors, though it is the most heavily marketed figure of merit. Digital camera manufacturers advertise this figure because consumers can use it to easily compare camera capabilities. It is not, however, the major factor in evaluating a digital camera for most applications. The processing system inside the camera that turns the raw data into a color-balanced and pleasing photograph is usually more critical, which is why some cameras with modest pixel counts outperform cameras with far higher ones.
Resolution in pixels is not the only measure of image quality. A larger sensor with the same number of pixels generally produces a better image than a smaller one. One of the most important benefits of this is a reduction in image noise. This is one of the advantages of DSLR cameras, which have larger sensors than simpler point-and-shoot cameras of the same resolution.
Additional factors that impact the quality of a digital image include:
Lens quality: resolution, distortion, dispersion (see Lens (optics))
Capture medium: CMOS, CCD, negative film, reversal film
Capture format: pixel count, digital file type (RAW, TIFF, JPEG), film format (135 film, 120 film), aspect ratio
Processing: digital or chemical processing of "negative" and "print"
Pixel counts
The number of pixels n for a given maximum resolution (w horizontal pixels by h vertical pixels) is the product n = w × h. For example, an image 1600 × 1200 in size has 1,920,000 pixels, or 1.92 megapixels.
The pixel count quoted by manufacturers can be misleading as it may not be the number of full-color pixels. For cameras using single-chip image sensors, the number claimed is the total number of single-color-sensitive photosensors, whether they have different locations in the plane, as with the Bayer sensor, or in stacks of three co-located photosensors as in the Foveon X3 sensor. However, the images have different numbers of RGB pixels: Bayer-sensor cameras produce as many RGB pixels as photosensors via demosaicing (interpolation), while Foveon sensors produce uninterpolated image files with one-third as many RGB pixels as photosensors. Comparisons of megapixel ratings of these two types of sensors are sometimes a subject of dispute.
The relative increase in detail resulting from an increase in resolution is better compared by looking at the number of pixels across (or down) the picture, rather than the total number of pixels in the picture area. For example, a sensor of 2560 × 1600 sensor elements is described as "4 megapixels" (2560 × 1600 = 4,096,000). Increasing to 3200 × 2048 increases the pixels in the picture to 6,553,600 (6.5 megapixels), a factor of 1.6, but the pixels per cm in the picture (at the same image size) increases by only 1.25 times. A measure of the comparative increase in linear resolution is the square root of the increase in area resolution (i.e., megapixels in the entire image).
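This relationship between area and linear resolution is easy to verify numerically; the following sketch recomputes the figures in the example above.

```python
import math

def megapixels(w: int, h: int) -> float:
    return w * h / 1e6

def linear_gain(old: tuple[int, int], new: tuple[int, int]) -> float:
    """Linear-resolution gain is the square root of the area gain."""
    return math.sqrt((new[0] * new[1]) / (old[0] * old[1]))

print(megapixels(2560, 1600))   # 4.096  ("4 megapixels")
print(megapixels(3200, 2048))   # 6.5536 (a factor of 1.6 more area)
# ~1.265 overall; the 1.25 quoted in the text is the horizontal
# gain 3200/2560 (the example slightly changes the aspect ratio).
print(linear_gain((2560, 1600), (3200, 2048)))
```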
Dynamic range
Practical imaging systems, both digital and film, have a limited "dynamic range": the range of luminosity that can be reproduced accurately. Highlights of the subject that are too bright are rendered as white, with no detail (overexposure); shadows that are too dark are rendered as black (underexposure). With film, the loss of detail in the highlights is not abrupt; the same is true of dark shadows with digital sensors. "Highlight burn-out" of digital sensors is not usually abrupt in output images due to the tone mapping required to fit their large dynamic range into the more limited dynamic range of the output (be it SDR display or printing). Because sensor elements for different colors saturate in turn, there can be hue or saturation shift in burnt-out highlights.
Some digital cameras can show these blown highlights in the image review, allowing the photographer to re-shoot the picture with a modified exposure. Others compensate for the total contrast of a scene by selectively exposing darker pixels longer. A third technique is used by Fujifilm in its FinePix S3 Pro DSLR: the image sensor contains additional photodiodes of lower sensitivity than the main ones; these retain detail in parts of the image too bright for the main sensor.
High-dynamic-range imaging (HDR) addresses this problem by increasing the dynamic range of images by either
increasing the dynamic range of the image sensor, or
using exposure bracketing and post-processing the separate images to create a single image with a higher dynamic range, as sketched below.
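The bracketing-and-merge approach can be sketched in a few lines of Python. This toy example assumes linear images of a static scene taken at known relative exposure times; it scales each frame to a common radiance scale and averages them with weights that distrust near-saturated and near-black pixels. Production HDR pipelines additionally handle frame alignment, camera response curves, and ghost removal.

```python
import numpy as np

def merge_bracketed(frames: list[np.ndarray], times: list[float]) -> np.ndarray:
    """Merge bracketed exposures into one high-dynamic-range radiance map.

    frames: linear images scaled to [0, 1], all of the same static scene.
    times: relative exposure times for each frame.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    weight_sum = np.zeros_like(acc)
    for img, t in zip(frames, times):
        # Hat weighting: trust mid-tones, distrust clipped or noisy extremes.
        w = 1.0 - np.abs(2.0 * img - 1.0)
        acc += w * img / t          # back out exposure time -> radiance
        weight_sum += w
    return acc / np.maximum(weight_sum, 1e-8)
```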
Storage
Many camera phones and most digital cameras use memory cards with flash memory to store image data. The majority of cards for separate cameras are Secure Digital (SD) format, or the older CompactFlash (CF) format; other formats are rare. XQD card format was the last new form of card, targeted at high-definition camcorders and high-resolution digital photo cameras. Most modern digital cameras also use internal memory of limited capacity to hold pictures temporarily, regardless of whether or not the camera is equipped with a memory card. These pictures can then be transferred later to a memory card or external device.
Memory cards can hold vast numbers of photos, requiring attention only when the memory card is full. For most users, this means hundreds of quality photos stored on the same memory card. Images may be transferred to other media for archival or personal use. Cards with high speed and capacity are suited to video and burst mode (capture several photographs in quick succession).
Because photographers rely on the integrity of image files, it is important to take proper care of memory cards. One process is card formatting, which essentially involves scanning the card for possible errors. Common advocacy calls for formatting cards after transferring their images onto a computer. Since cameras typically perform only a quick format, it is advisable to occasionally carry out a more thorough format using appropriate software on a computer.
Comparison with film photography
Advantages already in consumer level cameras
The primary advantage of consumer-level digital cameras is the low recurring cost, as users need not purchase photographic film. Processing costs may be reduced or even eliminated. Digicams tend also to be easier to carry and use than comparable film cameras, and more easily adapt to modern use of pictures. Some, particularly those in smartphones, can send their pictures directly to email, web pages, or other electronic distribution.
Advantages of professional digital cameras
In professional usage, digital cameras offer many advantages in speed, precision, flexibility, ease, and cost.
Immediacy: image review and deletion are possible immediately; lighting and composition can be assessed immediately, which ultimately conserves storage space.
Faster workflow: management (color and file), manipulation, and printing tools are more versatile than conventional film processes. However, batch processing of RAW files can be time-consuming, even on a fast computer.
Faster image ingest: transferring a high-resolution RAW file from a memory card takes no more than a few seconds, versus many minutes to scan film with a high-quality scanner.
Flash: flash can be used to change the lighting, and thus the look, of an image.
Higher image capacity: a single memory card enables long photography sessions without changing film rolls and, for most users, suffices for the lifetime of the camera, whereas film rolls are a recurring cost of film cameras.
Precision and reproducibility of processing: since processing in the digital domain is purely numerical, image processing using deterministic (non-random) algorithms is perfectly reproducible and eliminates variations common with photochemical processing, and enables otherwise difficult or impractical processing techniques.
Digital manipulation: a digital image can be modified and manipulated much more easily and quickly than with traditional negative and print methods.
Manufacturers such as Nikon and Canon have promoted the adoption of digital single-lens reflex cameras (DSLRs) by photojournalists. Images captured at 2+ megapixels are deemed of sufficient quality for small images in newspaper or magazine reproduction. 8- to 24-megapixel images, found in modern digital SLRs, when combined with high-end lenses, can approximate the detail of film prints from 35 mm film-based SLRs.
Disadvantages of digital cameras
Aliasing: as with any sampled signal, the combination of the periodic pixel structure of common electronic image sensors and the periodic structure of photographed objects (typically human-made objects) can cause objectionable aliasing artifacts, such as false colors in images from cameras with a Bayer pattern sensor. Aliasing is also present in film, but typically manifests itself in less obvious ways (such as increased granularity) due to the stochastic grain structure (stochastic sampling) of film.
Electricity-dependent: digital cameras cannot operate without electricity, usually provided via a battery. In contrast, a large number of mechanical film cameras existed, such as the Leica M2. These battery-less devices had advantages over digital devices in harsh or remote conditions.
Limited sensor size: a persistent challenge in semiconductor fabrication is that chips much larger than 1 cm2 are expensive to produce without defects, confining large image sensor formats compatible with traditional 35 mm optics to professional and prosumer markets.
Equivalent features
Image noise and grain
Noise in a digital camera's image may sometimes be visually similar to film grain in a film camera.
Speed of use
Turn-of-the-century digital cameras had a long start-up delay compared to film cameras (that is, the delay from when they are turned on until they are ready to take the first shot), but this is no longer the case for modern digital cameras, which have start-up times under a quarter of a second.
Frame rate
While some film cameras could reach up to 14 frames per second (fps), like the Canon F-1 with its rare high-speed motor drive, professional DSLR cameras can take still photographs at the highest frame rates. While the Sony SLT technology allows rates of up to 12 fps, the Canon EOS-1D X can take stills at a rate of 14 fps. The Nikon F5 is limited to 36 continuous frames (the length of the film) without the cumbersome bulk film back, while the digital Nikon D5 is able to capture over 100 14-bit RAW images before its buffer must be cleared and the remaining space on the storage media can be used.
Image longevity
Depending on the materials and how they are stored, analog photographic film and prints may fade as they age. Similarly, the media on which digital images are stored or printed can decay or become corrupt, leading to a loss of image integrity.
Color reproduction
Color reproduction (gamut) depends on the type and quality of film or sensor used and the quality of the optical system and film processing. Different films and sensors have different color sensitivity; the photographer needs to understand their equipment, the lighting conditions, and the media used to ensure accurate color reproduction. Many digital cameras offer RAW format (sensor data), which makes it possible to choose the color gamut in the development stage regardless of camera settings.
Even in RAW format, however, the sensor and the camera's dynamics can only capture colors within the gamut supported by the hardware. When that image is transferred for reproduction on any device, the widest achievable gamut is the gamut that the end device supports. For a monitor, it is the gamut of the display device. For a photographic print, it is the gamut of the device that prints the image on a specific type of paper.
Professional photographers often use specially designed and calibrated monitors that help them to reproduce color accurately and consistently.
Frame aspect ratios
Most digital point-and-shoot cameras have an aspect ratio of 1.33 (4:3), the same as analog television or early movies. However, a 35 mm picture's aspect ratio is 1.5 (3:2). Several digital cameras take photos in either ratio. Nearly all digital SLRs take pictures in a 3:2 ratio, as most can use lenses designed for 35 mm film. Some photo labs print photos on 4:3 ratio paper, as well as the existing 3:2.
In 2005, Panasonic launched the first consumer camera with a native aspect ratio of 16:9, matching HDTV. This is similar to a 7:4 aspect ratio, which was a common size for APS film.
Different aspect ratios are one of the reasons consumers have issues when cropping photos. An aspect ratio of 4:3 translates to a size of 4.5"×6.0". This loses half an inch when printing on the "standard" size of 4"×6", an aspect ratio of 3:2. Similar cropping occurs when printing on other sizes, such as 5"×7", 8"×10", or 11"×14".
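The crop loss described above is simple proportionality; this sketch computes the fraction of an image lost when a frame of one aspect ratio is printed full-bleed on paper of another.

```python
def crop_loss(img_ar: float, paper_ar: float) -> float:
    """Fraction of image area cropped when filling paper of a different
    aspect ratio (both ratios given as width/height)."""
    wide, narrow = max(img_ar, paper_ar), min(img_ar, paper_ar)
    return 1.0 - narrow / wide

# A 4:3 image filling a 4"x6" (3:2) print loses 1 - (4/3)/(3/2) = 1/9,
# matching the half inch cropped from a 4.5"x6.0" frame.
print(crop_loss(4 / 3, 3 / 2))  # ~0.111
```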
Market impact
In late 2002, the cheapest digital cameras in the United States were available for around $100 (USD). At the same time, many discount stores with photo labs introduced a "digital front end", allowing consumers to obtain true chemical prints (as opposed to ink-jet prints) in an hour. These prices were similar to those of prints made from film negatives.
In July 2003, digital cameras entered the disposable camera market with the release of the Ritz Dakota Digital, a 1.2-megapixel (1280 × 960) CMOS-based digital camera costing only $11. Following the familiar single-use concept long in use with film cameras, Ritz intended the Dakota Digital for single use. When the pre-programmed 25-picture limit was reached, the camera was returned to the store, and the consumer received prints and a CD-ROM with their photos. The camera was then refurbished and resold.
Since the introduction of the Dakota Digital, a number of similar single-use digital cameras have appeared. Most single-use digital cameras are nearly identical to the original Dakota Digital in specifications and function, though a few include superior specifications and more advanced functions (such as higher image resolutions and LCD screens). Most, if not all these single-use digital cameras cost less than $20, not including processing. However, the huge demand for complex digital cameras at competitive prices has often caused manufacturing shortcuts, evidenced by a large increase in customer complaints over camera malfunctions, high parts prices, and short service life. Some digital cameras offer only a 90-day warranty.
Since 2003, digital cameras have outsold film cameras. Prices of 35 mm compact cameras have dropped with manufacturers further outsourcing to countries such as China. Kodak announced in January 2004 that they would no longer sell Kodak-branded film cameras in the developed world. In January 2006, Nikon followed suit and announced they would stop production of all but two models of their film cameras, continuing to produce only the low-end Nikon FM10 and the high-end Nikon F6. In the same month, Konica Minolta announced it was pulling out of the camera business altogether. The prices of 35 mm and Advanced Photo System (APS) compact cameras have dropped, probably due to direct competition from digital cameras and the resulting availability of second-hand film cameras. Pentax have reduced but not halted production of film cameras. The technology has improved so rapidly that one of Kodak's film cameras was discontinued before it was awarded a "camera of the year" award later in the year.
The decline in film camera sales has also led to a decline in purchases of film for such cameras. In November 2004, a German division of Agfa-Gevaert, AgfaPhoto, split off. Within six months it filed for bankruptcy. Konica Minolta Photo Imaging, Inc., ended production of color film and paper worldwide by March 31, 2007. In addition, by 2005, Kodak employed less than a third of the employees it had twenty years earlier. It is not known if these job losses in the film industry have been offset in the digital image industry. Digital cameras have decimated the film photography industry through the declining use of the expensive film rolls and development chemicals previously required to develop the photos. This has had a dramatic effect on companies such as Fuji, Kodak, and Agfa. Many stores that formerly offered photofinishing services or sold film no longer do, or have seen a tremendous decline. In 2012, Kodak filed for bankruptcy after struggling to adapt to the changing industry.
Digital camera sales peaked in March 2012, averaging about 11 million units a month, but sales have declined significantly ever since. By March 2014, about 3 million were purchased each month, about 30 percent of the peak sales total. The decline may have bottomed out, with average sales hovering around 3 million a month. The main competitor is smartphones, most of which have built-in digital cameras and are routinely improved. Like most digital cameras, they also offer the ability to record videos. While smartphones continue to improve on a technical level, their form factor is not optimized for use as a camera, and their battery life is typically more limited compared to a digital camera.
Digital photography has resulted in some positive market impacts as well. The increasing popularity of products such as digital photo frames and canvas prints is a direct result of the increasing popularity of digital photography.
Social impact
Digital photography has made photography available to a larger group of people. New technology and editing programs available to photographers have changed the way photographs are presented to the public. Photographs can be heavily manipulated or photoshopped to look completely different from the originals. Until the advent of the digital camera, amateur photographers used either print or slide film for their cameras. Slides had to be developed and shown to an audience using a slide projector. Digital photography eliminated the delay and cost of film. Consumers became able to view, transfer, edit, and distribute digital images with ordinary home computers rather than using specialized equipment.
Camera phones have recently had a large impact on photography. Users can set their smartphones to upload their photos to the Internet, preserving images even if the camera is destroyed or the photos deleted. Some high-street photography shops have self-service kiosks that allow images to be printed directly from smartphones via Bluetooth technology.
Archivists and historians have noticed the transitory nature of digital media. Unlike film and print which are tangible, digital image storage is ever-changing, with old media and decoding software becoming obsolete or inaccessible by new technologies. Historians are concerned that this is creating a historical void where information is being silently lost within failed or inaccessible digital media. They recommend that professional and amateur users develop strategies for digital preservation by migrating stored digital images from old technologies to new ones. Scrapbookers who may have used film for creating artistic and personal memoirs may need to modify their approach to use and personalize digital photo books, thereby retaining the special qualities of traditional photo albums.
The web has been a popular medium for storing and sharing photos ever since the first photograph was published online by Tim Berners-Lee in 1992 (an image of the CERN house band Les Horribles Cernettes). Today, photo sharing sites such as Flickr, Picasa, and PhotoBucket, as well as social websites, are used by millions of people to share their pictures. Digital photography and social media allow organizations and corporations to make photographs more accessible to a greater and more diverse population. For example, National Geographic Magazine has Twitter, Snapchat, Facebook, and Instagram accounts, each of which includes content aimed at the specific audiences found on its platform.
Digital photography has also impacted other fields, such as medicine. It has allowed doctors to help diagnose diabetic retinopathy, and is used in hospitals to diagnose and treat other diseases.
Digitally altered imagery
In digital art and media art, digital photos are often edited, manipulated, or combined with other digital images. Scanography is a related process in which digital photos are created using a scanner.
New technology in digital cameras and computer editing affects the way photographic images are now perceived. The ability to create and fabricate realistic imagery digitally—as opposed to untouched photos—changes the audience's perception of "truth" in digital photography. Digital manipulation enables pictures to be altered in ways that adjust the perception of reality, both past and present, and thereby shape people's identities, beliefs, and opinions.
Digital photography and social media
In its early stages, photography was mainly used for physically preserving a family's heritage. It has now evolved into a key part of individual identity in the 21st century. Internet users often personally photograph and repost pictures that revolve around the ways they want to personally express themselves and their chosen aesthetic. With the invention of digital photography, photographs became less destructible and more easily maintained throughout the years, living across all types of digital devices. Digital photography advanced the use of photos for communication and identity rather than as a means of remembering.
Widespread access to digital photography has greatly influenced social behavior. The phrase "pics or it didn't happen" reflects the notion that one's life experiences can only be verified by others through photographs.
Filters are commonly used in social digital photography, some of which reflect the nostalgic gap left by the disappearance of film photography. Filters that emulated traditional analog effects (such as film grain, scratches, fading, and Polaroid borders) grew immensely in popularity alongside the idea of social photography, the casual sharing of everyday images. Social photos differ from "true" photography in that they are not meant to carry the same value or artistic qualities.
Recent research and innovation
Advancements in digital photography have accelerated with the introduction of mirrorless cameras. Preferred for their portability and versatility, mirrorless cameras dispense with the bulky reflex mirror of DSLRs, making them smaller, lighter, and quieter – an advantage for discreet work such as wedding or wildlife photography. Instead of a through-the-lens optical viewfinder, they use an electronic viewfinder or LCD screen that displays the image straight from the sensor, including details such as exposure and focus, while retaining manual controls, adjustable settings, and interchangeable lenses. They also offer fast autofocus systems that make capturing moving subjects easier, silent operation, and quick shooting rates. Their drawbacks include a more restricted range of lenses and shorter battery life, although, as of 2024, ongoing advancements in mirrorless technology continue to address these limitations, solidifying their position as a leading choice for photographers.
Research and development continues to refine the lighting, optics, sensors, processing, storage, display, and software used in digital photography. Here are a few examples:
3D models can be created from collections of normal images. The resulting scene can be viewed from novel viewpoints, but creating the model is very computationally intensive. An example is Microsoft's Photosynth, which provided some models of famous places as examples.
Panoramic photographs can be created directly in camera without the need for any external processing. Some cameras feature a 3D Panorama capability, combining shots taken with a single lens from different angles to create a sense of depth.
Virtual-reality photography, the interactive visualization of photos.
High-dynamic-range cameras and displays are commercially available. Sensors with dynamic range in excess of 1,000,000:1 are in development, and software is also available to combine multiple non-HDR images (shot with different exposures) into an HDR image.
Motion blur can be dramatically reduced by a flutter shutter (a flickering shutter that adds a signature to the blur, which postprocessing recognizes). It is not yet commercially available.
Advanced bokeh techniques use a hardware system of 2 sensors, one to take the photo as usual while the other records depth information. Bokeh effect and refocusing can then be applied to an image after the photo is taken.
In advanced cameras and camcorders, the effective sensitivity of the sensor can be manipulated with two or more neutral density filters.
An object's specular reflection can be captured using computer-controlled lights and sensors. This is needed to create attractive images of oil paintings, for instance. It is not yet commercially available, but some museums are starting to use it.
Dust reduction systems help keep dust off of image sensors. Originally introduced only by a few cameras like Olympus DSLRs, they have now become standard in most models and brands of detachable lens cameras, except the low-end or cheap ones.
Other areas of progress include improved sensors, more powerful software, advanced camera processors (sometimes using more than one processor; for instance, the Canon 7D camera has two Digic 4 processors), enlarged gamut displays, built-in GPS and Wi-Fi, and computer-controlled lighting.
Work (thermodynamics)
Thermodynamic work is one of the principal kinds of process by which a thermodynamic system can interact with and transfer energy to its surroundings. This results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work, to lift a weight for example, or cause changes in electromagnetic or gravitational variables. Also, the surroundings can perform thermodynamic work on a thermodynamic system, which is measured by an opposite sign convention.
For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume or magnetic flux density and magnetization.
In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power, measured in joules per second, and denoted with the unit watt (W).
History
1824
Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power for work. Specifically, according to Carnot:
We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.
1845
In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water.
In this experiment, the motion of the paddle wheel, through agitation and friction, heated the body of water, so as to increase its temperature. Both the temperature change of the water and the height of the fall of the weight were recorded. Using these values, Joule was able to determine the mechanical equivalent of heat. Joule estimated a mechanical equivalent of heat to be 819 ft·lbf/Btu (4.41 J/cal). The modern-day definitions of heat, work, temperature, and energy all have connection to this experiment. In this arrangement of apparatus, it never happens that the process runs in reverse, with the water driving the paddles so as to raise the weight, not even slightly. Mechanical work was done by the apparatus of falling weight, pulley, and paddles, which lay in the surroundings of the water. Their motion scarcely affected the volume of the water. A quantity of mechanical work, measured as force × distance in the surroundings, that does not change the volume of the water, is said to be isochoric. Such work reaches the system only as friction, through microscopic modes, and is irreversible. It does not count as thermodynamic work. The energy supplied by the fall of the weight passed into the water as heat.
Overview
Conservation of energy
A fundamental guiding principle of thermodynamics is the conservation of energy. The total energy of a system is the sum of its internal energy, of its potential energy as a whole system in an external force field, such as gravity, and of its kinetic energy as a whole system in motion. Thermodynamics has special concern with transfers of energy, from a body of matter, such as, for example a cylinder of steam, to the surroundings of the body, by mechanisms through which the body exerts macroscopic forces on its surroundings so as to lift a weight there; such mechanisms are the ones that are said to mediate thermodynamic work.
Besides transfer of energy as work, thermodynamics admits transfer of energy as heat. For a process in a closed (no transfer of matter) thermodynamic system, the first law of thermodynamics relates changes in the internal energy (or other cardinal energy function, depending on the conditions of the transfer) of the system to those two modes of energy transfer, as work, and as heat. Adiabatic work is done without matter transfer and without heat transfer. In principle, in thermodynamics, for a process in a closed system, the quantity of heat transferred is defined by the amount of adiabatic work that would be needed to effect the change in the system that is occasioned by the heat transfer. In experimental practice, heat transfer is often estimated calorimetrically, through change of temperature of a known quantity of calorimetric material substance.
Energy can also be transferred to or from a system through transfer of matter. The possibility of such transfer defines the system as an open system, as opposed to a closed system. By definition, such transfer is neither as work nor as heat.
Changes in the potential energy of a body as a whole with respect to forces in its surroundings, and in the kinetic energy of the body moving as a whole with respect to its surroundings, are by definition excluded from the body's cardinal energy (examples are internal energy and enthalpy).
Nearly reversible transfer of energy by work in the surroundings
In the surroundings of a thermodynamic system, external to it, all the various mechanical and non-mechanical macroscopic forms of work can be converted into each other with no limitation in principle due to the laws of thermodynamics, so that the energy conversion efficiency can approach 100% in some cases; such conversion is required to be frictionless, and consequently adiabatic. In particular, in principle, all macroscopic forms of work can be converted into the mechanical work of lifting a weight, which was the original form of thermodynamic work considered by Carnot and Joule (see History section above). Some authors have considered this equivalence to the lifting of a weight as a defining characteristic of work. For example, with the apparatus of Joule's experiment in which, through pulleys, a weight descending in the surroundings drives the stirring of a thermodynamic system, the descent of the weight can be diverted by a re-arrangement of pulleys, so that it lifts another weight in the surroundings, instead of stirring the thermodynamic system.
Such conversion may be idealized as nearly frictionless, though it occurs relatively quickly. It usually comes about through devices that are not simple thermodynamic systems (a simple thermodynamic system is a homogeneous body of material substances). For example, the descent of the weight in Joule's stirring experiment reduces the weight's total energy. It is described as loss of gravitational potential energy by the weight, due to change of its macroscopic position in the gravity field, in contrast to, for example, loss of the weight's internal energy due to changes in its entropy, volume, and chemical composition. Though it occurs relatively rapidly, because the energy remains nearly fully available as work in one way or another, such diversion of work in the surroundings may be idealized as nearly reversible, or nearly perfectly efficient.
In contrast, the conversion of heat into work in a heat engine can never exceed the Carnot efficiency, as a consequence of the second law of thermodynamics. Such energy conversion, through work done relatively rapidly, in a practical heat engine, by a thermodynamic system on its surroundings, cannot be idealized, not even nearly, as reversible.
Thermodynamic work done by a thermodynamic system on its surroundings is defined so as to comply with this principle. Historically, thermodynamics was about how a thermodynamic system could do work on its surroundings.
Work done by and on a simple thermodynamic system
Work done on, and work done by, a thermodynamic system need to be distinguished, through consideration of their precise mechanisms. Work done on a thermodynamic system, by devices or systems in the surroundings, is performed by actions such as compression, and includes shaft work, stirring, and rubbing. Such work done by compression is thermodynamic work as here defined. But shaft work, stirring, and rubbing are not thermodynamic work as here defined, in that they do not change the volume of the system against its resisting pressure. Work without change of volume is known as isochoric work, for example when an agency, in the surroundings of the system, drives a frictional action on the surface or in the interior of the system.
In a process of transfer of energy from or to a thermodynamic system, the change of internal energy of the system is defined in theory by the amount of adiabatic work that would have been necessary to reach the final from the initial state, such adiabatic work being measurable only through the externally measurable mechanical or deformation variables of the system, that provide full information about the forces exerted by the surroundings on the system during the process. In the case of some of Joule's measurements, the process was so arranged that some heating that occurred outside the system (in the substance of the paddles) by the frictional process also led to heat transfer from the paddles into the system during the process, so that the quantity of work done by the surroundings on the system could be calculated as shaft work, an external mechanical variable.
The amount of energy transferred as work is measured through quantities defined externally to the system of interest, and thus belonging to its surroundings. In an important sign convention, preferred in chemistry, work that adds to the internal energy of the system is counted as positive. On the other hand, for historical reasons, an oft-encountered sign convention, preferred in physics, is to consider work done by the system on its surroundings as positive.
Processes not described by macroscopic work
Transfer of thermal energy through direct contact between a closed system and its surroundings is by the microscopic thermal motions of particles and their associated inter-molecular potential energies. The microscopic description of such processes is the province of statistical mechanics, not of macroscopic thermodynamics. Another kind of energy transfer is by radiation. Radiative transfer of energy is irreversible in the sense that it occurs only from a hotter to a colder system. There are several forms of dissipative transduction of energy that can occur internally within a system at a microscopic level, such as friction, including bulk and shear viscosity, chemical reaction, unconstrained expansion as in Joule expansion and in diffusion, and phase change.
Open systems
For an open system, the first law of thermodynamics admits three forms of energy transfer, as work, as heat, and as energy associated with matter that is transferred. The latter cannot be split uniquely into heat and work components.
One-way convection of internal energy is a form of transport of energy but is not, as sometimes mistakenly supposed (a relic of the caloric theory of heat), transfer of energy as heat, because one-way convection is transfer of matter; nor is it transfer of energy as work. Nevertheless, if the wall between the system and its surroundings is thick and contains fluid, in the presence of a gravitational field, convective circulation within the wall can be considered as indirectly mediating transfer of energy as heat between the system and its surroundings, though the source and destination of the transferred energy are not in direct contact.
Fictively imagined reversible thermodynamic "processes"
For purposes of theoretical calculations about a thermodynamic system, one can imagine fictive idealized thermodynamic "processes" that occur so slowly that they do not incur friction within or on the surface of the system; they can then be regarded as virtually reversible. These fictive processes proceed along paths on geometrical surfaces that are described exactly by a characteristic equation of the thermodynamic system. Those geometrical surfaces are the loci of possible states of thermodynamic equilibrium for the system. Really possible thermodynamic processes, occurring at practical rates, even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, always incur friction within the system, and so are always irreversible. The paths of such really possible processes always depart from those geometrical characteristic surfaces. Even when they occur only by work assessed in the surroundings as adiabatic, without heat transfer, such departures always entail entropy production.
Joule heating and rubbing
The definition of thermodynamic work is in terms of the changes of the system's extensive deformation (and chemical constitutive and certain other) state variables, such as volume, molar chemical constitution, or electric polarisation. Examples of state variables that are not extensive deformation or other such variables are temperature T and entropy S, as for example in the expression dU = T dS − P dV. Changes of such variables are not actually physically measurable by use of a single simple adiabatic thermodynamic process; they are processes that occur neither by thermodynamic work nor by transfer of matter, and therefore are said to occur by heat transfer. The quantity of thermodynamic work is defined as work done by the system on its surroundings. According to the second law of thermodynamics, such work is irreversible. To get an actual and precise physical measurement of a quantity of thermodynamic work, it is necessary to take account of the irreversibility by restoring the system to its initial condition by running a cycle, for example a Carnot cycle, that includes the target work as a step. The work done by the system on its surroundings is calculated from the quantities that constitute the whole cycle. A different cycle would be needed to actually measure the work done by the surroundings on the system. This is a reminder that rubbing the surface of a system appears to the rubbing agent in the surroundings as mechanical, though not thermodynamic, work done on the system, not as heat, but appears to the system as heat transferred to the system, not as thermodynamic work. The production of heat by rubbing is irreversible; historically, it was a piece of evidence for the rejection of the caloric theory of heat as a conserved substance. The irreversible process known as Joule heating also occurs through a change of a non-deformation extensive state variable.
Accordingly, in the opinion of Lavenda, work is not as primitive a concept as heat, which can be measured by calorimetry. This opinion does not negate the now customary thermodynamic definition of heat in terms of adiabatic work.
Known as a thermodynamic operation, the initiating factor of a thermodynamic process is, in many cases, a change in the permeability of a wall between the system and the surroundings. Rubbing is not a change in wall permeability. Kelvin's statement of the second law of thermodynamics uses the notion of an "inanimate material agency"; this notion is sometimes regarded as puzzling. The triggering of a process of rubbing can occur only in the surroundings, not in a thermodynamic system in its own state of internal thermodynamic equilibrium. Such triggering may be described as a thermodynamic operation.
Formal definition
In thermodynamics, the quantity of work done by a closed system on its surroundings is defined by factors strictly confined to the interface of the surroundings with the system and to the surroundings of the system, for example, an extended gravitational field in which the system sits, that is to say, to things external to the system.
A main concern of thermodynamics is the properties of materials. Thermodynamic work is defined for the purposes of thermodynamic calculations about bodies of material, known as thermodynamic systems. Consequently, thermodynamic work is defined in terms of quantities that describe the states of materials, which appear as the usual thermodynamic state variables, such as volume, pressure, temperature, chemical composition, and electric polarization. For example, to measure the pressure inside a system from outside it, the observer needs the system to have a wall that can move by a measurable amount in response to pressure differences between the interior of the system and the surroundings. In this sense, part of the definition of a thermodynamic system is the nature of the walls that confine it.
Several kinds of thermodynamic work are especially important. One simple example is pressure–volume work. The pressure of concern is that exerted by the surroundings on the surface of the system, and the volume of interest is the negative of the increment of volume gained by the system from the surroundings. It is usually arranged that the pressure exerted by the surroundings on the surface of the system is well defined and equal to the pressure exerted by the system on the surroundings. This arrangement for transfer of energy as work can be varied in a particular way that depends on the strictly mechanical nature of pressure–volume work. The variation consists in letting the coupling between the system and surroundings be through a rigid rod that links pistons of different areas for the system and surroundings. Then for a given amount of work transferred, the exchange of volumes involves different pressures, inversely with the piston areas, for mechanical equilibrium. This cannot be done for the transfer of energy as heat because of its non-mechanical nature.
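A short calculation makes the piston arrangement concrete. The symbols A1 and A2 (the piston areas on the system and surroundings sides) and dx (the common displacement of the rigid rod) are introduced here purely for illustration. Mechanical equilibrium of the rod requires equal and opposite forces on its two ends, so the pressures differ inversely with the areas while the transmitted work is the same:

\[
P_1 A_1 = F = P_2 A_2 \quad\Rightarrow\quad \frac{P_1}{P_2} = \frac{A_2}{A_1},
\qquad
\delta W = P_1 A_1\,dx = P_2 A_2\,dx .
\]

The volumes exchanged, dV1 = A1 dx and dV2 = A2 dx, thus differ in the same ratio as the areas, as the text describes.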
Another important kind of work is isochoric work, i.e., work that involves no eventual overall change of volume of the system between the initial and the final states of the process. Examples are friction on the surface of the system as in Rumford's experiment; shaft work such as in Joule's experiments; stirring of the system by a magnetic paddle inside it, driven by a moving magnetic field from the surroundings; and vibrational action on the system that leaves its eventual volume unchanged, but involves friction within the system. Isochoric mechanical work for a body in its own state of internal thermodynamic equilibrium is done only by the surroundings on the body, not by the body on the surroundings, so that the sign of isochoric mechanical work with the physics sign convention is always negative.
When work, for example pressure–volume work, is done on its surroundings by a closed system that cannot pass heat in or out because it is confined by an adiabatic wall, the work is said to be adiabatic for the system as well as for the surroundings. When mechanical work is done on such an adiabatically enclosed system by the surroundings, it can happen that friction in the surroundings is negligible, for example in the Joule experiment with the falling weight driving paddles that stir the system. Such work is adiabatic for the surroundings, even though it is associated with friction within the system. Such work may or may not be isochoric for the system, depending on the system and its confining walls. If it happens to be isochoric for the system (and does not eventually change other system state variables such as magnetization), it appears as a heat transfer to the system, and does not appear to be adiabatic for the system.
Sign convention
In the early history of thermodynamics, a positive amount of work done by the system on the surroundings leads to energy being lost from the system. This historical sign convention has been used in many physics textbooks and is used in the present article.
According to the first law of thermodynamics for a closed system, any net change in the internal energy U must be fully accounted for, in terms of heat Q entering the system and work W done by the system:

ΔU = Q − W.
An alternate sign convention is to consider the work performed on the system by its surroundings as positive. This leads to a change in sign of the work, so that ΔU = Q + W. This convention has historically been used in chemistry, and has also been adopted in several modern physics textbooks.
This equation reflects the fact that the heat transferred and the work done are not properties of the state of the system. Given only the initial state and the final state of the system, one can only say what the total change in internal energy was, not how much of the energy went out as heat, and how much as work. This can be summarized by saying that heat and work are not state functions of the system. This is in contrast to classical mechanics, where net work exerted by a particle is a state function.
Pressure–volume work
Pressure–volume work (or PV or P-V work) occurs when the volume of a system changes. PV work is often measured in units of litre-atmospheres, where 1 L·atm = 101.325 J. However, the litre-atmosphere is not a recognized unit in the SI system of units, which measures P in pascals (Pa), V in m3, and PV in joules (J), where 1 J = 1 Pa·m3. PV work is an important topic in chemical thermodynamics.
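The conversion follows directly from the definitions of the units, since 1 L = 10⁻³ m³ and 1 atm = 101,325 Pa:

\[
1\ \text{L·atm} = (10^{-3}\ \text{m}^{3})(101{,}325\ \text{Pa}) = 101.325\ \text{J}.
\]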
For a process in a closed system, occurring slowly enough for accurate definition of the pressure on the inside of the system's wall that moves and transmits force to the surroundings, described as quasi-static, work is represented by the following equation between differentials:

δW = P dV

where
δW (inexact differential) denotes an infinitesimal increment of work done by the system, transferring energy to the surroundings;
P denotes the pressure inside the system, that it exerts on the moving wall that transmits force to the surroundings. In the alternative sign convention the right hand side has a negative sign.
dV (exact differential) denotes an infinitesimal increment of the volume of the system.
Moreover,

W = ∫ P dV, integrated from the initial volume V1 to the final volume V2,

where W denotes the work done by the system during the whole of the reversible process.
The first law of thermodynamics can then be expressed as

dU = δQ − δW.

(In the alternative sign convention where W = work done on the system, dU = δQ + δW. However, dU itself is unchanged.)
Path dependence
PV work is path-dependent and is, therefore, a thermodynamic process function. In general, the term P dV is not an exact differential. The statement that a process is quasi-static gives important information about the process but does not determine the P–V path uniquely, because the path can include several slow goings backwards and forwards in volume, slowly enough to exclude friction within the system occasioned by departure from the quasi-static requirement. An adiabatic wall is one that does not permit passage of energy by conduction or radiation.
The first law of thermodynamics states that ΔU = Q − W.
For a quasi-static adiabatic process, Q = 0, so that ΔU = −W.
Also ΔU = U2 − U1, so that −W = U2 − U1.
It follows that W = U1 − U2, so that the work done in a quasi-static adiabatic process is fixed by the initial and final states alone.
Internal energy is a state function, so its change depends only on the initial and final states of a process. For a quasi-static adiabatic process, the change in internal energy is equal to minus the integral amount of work done by the system, so the work also depends only on the initial and final states of the process and is one and the same for every intermediate path.
If the process path is other than quasi-static and adiabatic, there are indefinitely many different paths, with significantly different work amounts, between the initial and final states. (Again the internal energy change depends only on the initial and final states as it is a state function).
In the current mathematical notation, the differential δW is an inexact differential.
In another notation, δW is written đW (with a horizontal line through the d). This notation indicates that đW is not an exact one-form. The line-through is merely a flag to warn us that there is actually no function (0-form) W which is the potential of đW. If there were, indeed, such a function W, we should be able to use Stokes' theorem to evaluate this putative function, the potential of đW, at the boundary of the path, that is, the initial and final points, and therefore the work would be a state function. This impossibility is consistent with the fact that it does not make sense to refer to the work on a point in the PV diagram; work presupposes a path.
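The path dependence can be made concrete numerically. The following is a minimal sketch (not from the source text; the gas, end states, and paths are illustrative assumptions) comparing ∫ P dV along two quasi-static routes of an ideal gas between the same initial and final states:

```python
import numpy as np

# Minimal sketch: PV work is path-dependent. Two quasi-static paths of an
# ideal gas between the same end states (V1, T1) -> (V2, T1) give different
# values of the integral of P dV. All numbers are illustrative assumptions.
R, n_mol, T1 = 8.314, 1.0, 300.0
V1, V2 = 0.010, 0.020            # volumes in m^3

# Path A: isothermal expansion at T1, with P = nRT1/V.
V = np.linspace(V1, V2, 100001)
W_A = np.trapz(n_mol * R * T1 / V, V)        # = nRT1 * ln(V2/V1)

# Path B: isobaric expansion at P1, then isochoric cooling back to T1 at V2.
P1 = n_mol * R * T1 / V1
W_B = P1 * (V2 - V1)                         # the isochoric step does no PV work

print(W_A, W_B)   # ~1729 J vs ~2494 J
```

Both paths begin and end at the same state, yet the isothermal route yields about 1,729 J while the isobaric-then-isochoric route yields about 2,494 J, so the work cannot be a state function.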
Other mechanical types of work
There are several ways of doing mechanical work, each in some way related to a force acting through a distance. In basic mechanics, the work done by a constant force F on a body displaced a distance s in the direction of the force is given by

W = F s

If the force is not constant, the work done is obtained by integrating the differential amount of work,

W = ∫ F ds
Rotational work
Energy transmission with a rotating shaft is very common in engineering practice. Often the torque T applied to the shaft is constant, which means that the force F applied is constant. For a specified constant torque, the work done during n revolutions is determined as follows: A force F acting through a moment arm r generates a torque T

T = F r, so that F = T/r

This force acts through a distance s, which is related to the radius r by

s = (2πr) n

The shaft work is then determined from:

W_sh = F s = (T/r)(2πr n) = 2πn T

The power transmitted through the shaft is the shaft work done per unit time, which is expressed as

Ẇ_sh = 2πṅT, where ṅ is the number of revolutions per unit time.
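As a quick worked example of the two relations above (the torque, revolution count, and shaft speed are assumed values, not from the text):

```python
import math

# Hypothetical shaft: constant torque, fixed number of revolutions.
T = 200.0            # torque in N*m (assumed)
n = 1000.0           # total number of revolutions (assumed)
n_dot = 3000 / 60    # shaft speed in rev/s (assumed 3000 rpm)

W_sh = 2 * math.pi * n * T        # shaft work, W_sh = 2*pi*n*T
P_sh = 2 * math.pi * n_dot * T    # transmitted power, P_sh = 2*pi*n_dot*T
print(f"W_sh = {W_sh/1e6:.2f} MJ, P_sh = {P_sh/1e3:.1f} kW")  # 1.26 MJ, 62.8 kW
```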
Spring work
When a force is applied on a spring, and the length of the spring changes by a differential amount dx, the work done is

δW_spring = F dx

For linear elastic springs, the displacement x is proportional to the force applied:

F = K x

where K is the spring constant and has the unit of N/m. The displacement x is measured from the undisturbed position of the spring (that is, x = 0 when F = 0). Substituting the two equations gives

W_spring = ½ K (x2² − x1²),

where x1 and x2 are the initial and the final displacement of the spring respectively, measured from the undisturbed position of the spring.
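A short numeric check of the spring-work formula (the spring constant and displacements are assumed, illustrative values):

```python
# Hypothetical linear spring stretched from rest.
K = 5000.0   # spring constant in N/m (assumed)
x1 = 0.0     # initial displacement from the undisturbed position, in m
x2 = 0.03    # final displacement, in m

W_spring = 0.5 * K * (x2**2 - x1**2)  # = 2.25 J
print(W_spring)
```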
Work done on elastic solid bars
Solids are often modeled as linear springs because under the action of a force they contract or elongate, and when the force is lifted, they return to their original lengths, like a spring. This is true as long as the force is in the elastic range, that is, not large enough to cause permanent or plastic deformation. Therefore, the equations given for a linear spring can also be used for elastic solid bars. Alternately, we can determine the work associated with the expansion or contraction of an elastic solid bar by replacing the pressure P by its counterpart in solids, the normal stress σ = F/A, in the work expansion:

W_elastic = ∫_{x1}^{x2} σ A dx

where A is the cross-sectional area of the bar.
Work associated with the stretching of liquid film
Consider a liquid film such as a soap film suspended on a wire frame. Some force is required to stretch this film by the movable portion of the wire frame. This force is used to overcome the microscopic forces between molecules at the liquid-air interface. These microscopic forces are perpendicular to any line in the surface, and the force generated by these forces per unit length is called the surface tension σ, whose unit is N/m. Therefore, the work associated with the stretching of a film is called surface tension work, and is determined from

W_surface = ∫_{A1}^{A2} σ dA

where dA = 2b dx is the change in the surface area of the film and b is the length of the movable wire. The factor 2 is due to the fact that the film has two surfaces in contact with air. The force acting on the movable wire as a result of surface tension effects is F = 2bσ, where σ is the surface tension force per unit length.
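A small worked example of the relations above (the surface tension, wire length, and displacement are assumed values in the range typical of a soap solution):

```python
# Hypothetical soap film on a U-shaped frame with a movable wire of length b.
sigma = 0.025   # surface tension in N/m (assumed)
b = 0.10        # length of the movable wire in m (assumed)
dx = 0.02       # distance the wire is pulled in m (assumed)

dA = 2 * b * dx            # area change; factor 2 for the film's two surfaces
W_surface = sigma * dA     # surface tension work, = 1.0e-4 J
F = 2 * b * sigma          # force on the movable wire, = 5.0e-3 N
print(W_surface, F)
```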
Free energy and exergy
The amount of useful work which may be extracted from a thermodynamic system is determined by the second law of thermodynamics. Under many practical situations this can be represented by the thermodynamic availability, or Exergy, function. Two important cases are: in thermodynamic systems where the temperature and volume are held constant, the measure of useful work attainable is the Helmholtz free energy function; and in systems where the temperature and pressure are held constant, the measure of useful work attainable is the Gibbs free energy.
Non-mechanical forms of work
Non-mechanical work in thermodynamics is work caused by external force fields that a system is exposed to. The action of such forces can be initiated by events in the surroundings of the system, or by thermodynamic operations on the shielding walls of the system.
The non-mechanical work of force fields can have either positive or negative sign, work being done by the system on the surroundings, or vice versa. Work done by force fields can be done indefinitely slowly, so as to approach the fictive reversible quasi-static ideal, in which entropy is not created in the system by the process.
In thermodynamics, non-mechanical work is to be contrasted with mechanical work that is done by forces in immediate contact between the system and its surroundings. If the putative 'work' of a process cannot be defined as either long-range work or else as contact work, then sometimes it cannot be described by the thermodynamic formalism as work at all. Nevertheless, the thermodynamic formalism allows that energy can be transferred between an open system and its surroundings by processes for which work is not defined. An example is when the wall between the system and its surrounds is not considered as idealized and vanishingly thin, so that processes can occur within the wall, such as friction affecting the transfer of matter across the wall; in this case, the forces of transfer are neither strictly long-range nor strictly due to contact between the system and its surroundings; the transfer of energy can then be considered as convection, and assessed in sum just as transfer of internal energy. This is conceptually different from transfer of energy as heat through a thick fluid-filled wall in the presence of a gravitational field, between a closed system and its surroundings; in this case there may be convective circulation within the wall but the process may still be considered as transfer of energy as heat between the system and its surroundings; if the whole wall is moved by the application of force from the surroundings, without change of volume of the wall, so as to change the volume of the system, then it is also at the same time transferring energy as work. A chemical reaction within a system can lead to electrical long-range forces and to electric current flow, which transfer energy as work between system and surroundings, though the system's chemical reactions themselves (except for the special limiting case in which they are driven through devices in the surroundings so as to occur along a line of thermodynamic equilibrium) are always irreversible and do not directly interact with the surroundings of the system.
Non-mechanical work contrasts with pressure–volume work. Pressure–volume work is one of the two mainly considered kinds of mechanical contact work. A force acts on the interfacing wall between system and surroundings. The force is due to the pressure exerted on the interfacing wall by the material inside the system; that pressure is an internal state variable of the system, but is properly measured by external devices at the wall. The work is due to change of system volume by expansion or contraction of the system. If the system expands, in the present article it is said to do positive work on the surroundings. If the system contracts, in the present article it is said to do negative work on the surroundings. Pressure–volume work is a kind of contact work, because it occurs through direct material contact with the surrounding wall or matter at the boundary of the system. It is accurately described by changes in state variables of the system, such as the time courses of changes in the pressure and volume of the system. The volume of the system is classified as a "deformation variable", and is properly measured externally to the system, in the surroundings. Pressure–volume work can have either positive or negative sign. Pressure–volume work, performed slowly enough, can be made to approach the fictive reversible quasi-static ideal.
Non-mechanical work also contrasts with shaft work. Shaft work is the other of the two mainly considered kinds of mechanical contact work. It transfers energy by rotation, but it does not eventually change the shape or volume of the system. Because it does not change the volume of the system it is not measured as pressure–volume work, and it is called isochoric work. Considered solely in terms of the eventual difference between initial and final shapes and volumes of the system, shaft work does not make a change. During the process of shaft work, for example the rotation of a paddle, the shape of the system changes cyclically, but this does not make an eventual change in the shape or volume of the system. Shaft work is a kind of contact work, because it occurs through direct material contact with the surrounding matter at the boundary of the system. A system that is initially in a state of thermodynamic equilibrium cannot initiate any change in its internal energy. In particular, it cannot initiate shaft work. This explains the curious use of the phrase "inanimate material agency" by Kelvin in one of his statements of the second law of thermodynamics. Thermodynamic operations or changes in the surroundings are considered to be able to create elaborate changes such as indefinitely prolonged, varied, or ceased rotation of a driving shaft, while a system that starts in a state of thermodynamic equilibrium is inanimate and cannot spontaneously do that. Thus the sign of shaft work is always negative, work being done on the system by the surroundings. Shaft work can hardly be done indefinitely slowly; consequently it always produces entropy within the system, because it relies on friction or viscosity within the system for its transfer. The foregoing comments about shaft work apply only when one ignores that the system can store angular momentum and its related energy.
Examples of non-mechanical work modes include
Electric field work – where the force is defined by the surroundings' voltage (the electrical potential) and the generalized displacement is change of spatial distribution of electrical charge
Electrical polarization work – where the force is defined by the surroundings' electric field strength and the generalized displacement is change of the polarization of the medium (the sum of the electric dipole moments of the molecules)
Magnetic work – where the force is defined by the surroundings' magnetic field strength and the generalized displacement is change of total magnetic dipole moment
Gravitational work
Gravitational work is defined by the force on a body measured in a gravitational field. It may cause a generalized displacement in the form of change of the spatial distribution of the matter within the system. The system gains internal energy (or other relevant cardinal quantity of energy, such as enthalpy) through internal friction. As seen by the surroundings, such frictional work appears as mechanical work done on the system, but as seen by the system, it appears as transfer of energy as heat. When the system is in its own state of internal thermodynamic equilibrium, its temperature is uniform throughout. If the volume and other extensive state variables, apart from entropy, are held constant over the process, then the transferred heat must appear as increased temperature and entropy; in a uniform gravitational field, the pressure of the system will be greater at the bottom than at the top.
By definition, the relevant cardinal energy function is distinct from the gravitational potential energy of the system as a whole; the latter may also change as a result of gravitational work done by the surroundings on the system. The gravitational potential energy of the system is a component of its total energy, alongside its other components, namely its cardinal thermodynamic (e.g. internal) energy and its kinetic energy as a whole system in motion.
| Physical sciences | Statistical mechanics | Physics |
3618030 | https://en.wikipedia.org/wiki/Astrophysical%20maser | Astrophysical maser | An astrophysical maser is a naturally occurring source of stimulated spectral line emission, typically in the microwave portion of the electromagnetic spectrum. This emission may arise in molecular clouds, comets, planetary atmospheres, stellar atmospheres, or various other conditions in interstellar space.
Background
Discrete transition energy
Like a laser, the emission from a maser is stimulated (or seeded) and monochromatic, having the frequency corresponding to the energy difference between two quantum-mechanical energy levels of the species in the gain medium which have been pumped into a non-thermal population distribution. However, naturally occurring masers lack the resonant cavity engineered for terrestrial laboratory masers. The emission from an astrophysical maser is due to a single pass through the gain medium and therefore generally lacks the spatial coherence and mode purity expected from a laboratory maser.
Nomenclature
Due to the differences between engineered and naturally occurring masers, it is often stated that astrophysical masers are not "true" masers because they lack oscillation cavities. However, the distinction between oscillator-based lasers and single-pass lasers was intentionally disregarded by the laser community in the early years of the technology.
This fundamental incongruency in language has resulted in the use of other paradoxical definitions in the field. For example, if the gain medium of a misaligned laser emits seeded but non-oscillating radiation, it is said to emit amplified spontaneous emission or ASE. This ASE is regarded as unwanted or parasitic. Some researchers would add to this definition the presence of insufficient feedback or unmet lasing threshold: that is, the users wish the system to behave as a laser. The emission from astrophysical masers is, in fact, ASE but is sometimes termed superradiant emission to differentiate it from the laboratory phenomenon. This simply adds to the confusion, since both sources are superradiant. In some laboratory lasers, such as a single pass through a regeneratively amplified Ti:Sapph stage, the physics is directly analogous to an amplified ray in an astrophysical maser.
Furthermore, the practical limits on using the m in maser to stand for microwave are variously drawn. For example, when lasers were initially developed in the visible portion of the spectrum, they were called optical masers. Charles Townes advocated that the m stand for molecule, since energy states of molecules generally provide the masing transition. Along these lines, some use the term laser to describe any system that exploits an electronic transition and the term maser to describe a system that exploits a rotational or vibrational transition, regardless of the output frequency. Some astrophysicists use the term iraser to describe a maser emitting at a wavelength of a few micrometres, even though the optics community terms similar sources lasers.
The term taser has been used to describe laboratory masers in the terahertz regime, although astronomers might call these sub-millimeter masers and laboratory physicists generally call these gas lasers or specifically alcohol lasers in reference to the gain species. The electrical engineering community typically limits the use of the word microwave to frequencies between roughly 1 GHz and 300 GHz; that is, wavelengths between 30 cm and 1 mm, respectively.
Astrophysical conditions
The simple existence of a pumped population inversion is not sufficient for the observation of a maser. For example, there must be velocity coherence along the line of sight so that Doppler shifting does not prevent inverted states in different parts of the gain medium from radiatively coupling. While polarisation in laboratory lasers and masers may be achieved by selectively oscillating the desired modes, polarisation in natural masers will arise only in the presence of a polarisation-state–dependent pump or of a magnetic field in the gain medium.
The radiation from astrophysical masers can be quite weak and may escape detection due to the limited sensitivity, and relative remoteness, of astronomical observatories and due to the sometimes overwhelming spectral absorption from unpumped molecules of the maser species in the surrounding space. This latter obstacle may be partially surmounted through the judicious use of the spatial filtering inherent in interferometric techniques, especially very long baseline interferometry (VLBI).
The study of masers provides valuable information on the conditions—temperature, density, magnetic field, and velocity—in environments of stellar birth and death and the centres of galaxies containing black holes, leading to refinements in existing theoretical models.
Discovery
Historical background
In 1965 an unexpected discovery was made by Weaver et al.: emission lines in space, of unknown origin, at a frequency of 1665 MHz. At this time many researchers still thought that molecules could not exist in space, even though they had been discovered by McKellar in the 1940s, and so the emission was at first attributed to a hypothetical form of interstellar matter named "mysterium", but the emission was soon identified as line emission from hydroxide molecules in compact sources within molecular clouds. More discoveries followed, with water emission in 1969, methanol emission in 1970, and silicon monoxide emission in 1974, all emanating from within molecular clouds. These were termed masers, as from their narrow line widths and high effective temperatures it became clear that these sources were amplifying microwave radiation.
Masers were then discovered around highly evolved late-type stars, named OH/IR stars. First was hydroxide emission in 1968, then water emission in 1969 and silicon monoxide emission in 1974. Masers were discovered in external galaxies in 1973, and in the Solar System in comet halos.
Another unexpected discovery was made in 1982 with the discovery of emission from an extra-galactic source with an unrivalled luminosity about 10⁶ times larger than any previous source. This was termed a megamaser because of its great luminosity; many more megamasers have since been discovered.
A weak disk maser was discovered in 1995 emanating from the star MWC 349A, using NASA's Kuiper Airborne Observatory.
Evidence for an anti-pumped (dasar) sub-thermal population in the 4830 MHz transition of formaldehyde (H2CO) was observed in 1969 by Palmer et al.
Detection
The connection of maser activity with far infrared (FIR) emission has been used to conduct searches of the sky with optical telescopes (because optical telescopes are easier to use for searches of this kind), and likely objects are then checked in the radio spectrum. Particularly targeted are molecular clouds, OH-IR stars, and FIR active galaxies.
Known interstellar species
The following species have been observed in stimulated emission from astronomical environments:
OH
CH
H2CO
H2O
NH3, 15NH3
CH3OH
HNCNH
SiS
HC3N
SiO, 29SiO, 30SiO
HCN, H13CN
H (in MWC 349)
CS
Characteristics of maser radiation
The amplification or gain of radiation passing through a maser cloud is exponential. This has consequences for the radiation it produces:
Beaming
Small path differences across the irregularly shaped maser cloud become greatly distorted by exponential gain. Part of the cloud that has a slightly longer path length than the rest will appear much brighter (as it is the exponent of the path length that is relevant), and so maser spots are typically much smaller than their parent clouds. The majority of the radiation will emerge along this line of greatest path length in a "beam"; this is termed beaming.
Rapid variability
As the gain of a maser depends exponentially on the population inversion and the velocity-coherent path length, any variation of either will itself result in exponential change of the maser output.
Line narrowing
Exponential gain also amplifies the centre of the line shape (Gaussian or Lorentzian, etc.) more than the edges or wings. This results in an emission line shape that is much taller but not much wider. This makes the line appear narrower relative to the unamplified line.
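The narrowing can be illustrated numerically. The sketch below (not from the source; the Gaussian profile and optical depths are illustrative assumptions) amplifies a line as exp(τφ(ν)), as for an unsaturated maser, and measures how the full width at half maximum shrinks:

```python
import numpy as np

# Minimal sketch of maser line narrowing: an unsaturated maser amplifies an
# input line profile phi(nu) as exp(tau * phi), where tau is the line-centre
# optical depth. The tau values here are illustrative.
nu = np.linspace(-5.0, 5.0, 100001)      # frequency offset in Doppler-width units
phi = np.exp(-nu**2)                     # unamplified Gaussian line shape

def fwhm(x, y):
    """Full width at half maximum of a sampled, single-peaked profile."""
    above = x[y >= y.max() / 2.0]
    return above[-1] - above[0]

w0 = fwhm(nu, phi)                       # width of the unamplified line
for tau in (1.0, 5.0, 20.0):
    line = np.exp(tau * phi) - 1.0       # amplified line, input continuum removed
    print(f"tau = {tau:4.1f}: width = {fwhm(nu, line) / w0:.2f} of the original")
    # prints ~0.83, ~0.46, ~0.23
```

For large τ the width falls roughly as 1/√τ, the usual narrowing behaviour of unsaturated exponential gain.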
Saturation
The exponential growth in intensity of radiation passing through a maser cloud continues as long as pumping processes can maintain the population inversion against the growing losses by stimulated emission. While this is so the maser is said to be unsaturated. However, after a point, the population inversion cannot be maintained any longer and the maser becomes saturated. In a saturated maser, amplification of radiation depends linearly on the size of population inversion and the path length. Saturation of one transition in a maser can affect the degree of inversion in other transitions in the same maser, an effect known as competitive gain.
High brightness
The brightness temperature of a maser is the temperature a black body would have if producing the same emission brightness at the wavelength of the maser. That is, if an object had a temperature of about 10⁹ K it would produce as much 1665-MHz radiation as a strong interstellar OH maser. Of course, at 10⁹ K the OH molecule would dissociate (kT is greater than the bond energy), so the brightness temperature is not indicative of the kinetic temperature of the maser gas but is nevertheless useful in describing maser emission. Masers have enormous effective temperatures, many around 10⁹ K, but some of up to 10¹² K and even 10¹⁴ K.
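The dissociation remark can be checked with one line of arithmetic (the O–H bond dissociation energy used below is an approximate assumed value):

```python
k_eV_per_K = 8.617e-5   # Boltzmann constant in eV/K
T_b = 1e9               # brightness temperature in kelvin
OH_bond_eV = 4.4        # approximate OH bond dissociation energy in eV (assumed)
print(k_eV_per_K * T_b / OH_bond_eV)  # ~2e4, so kT far exceeds the bond energy
```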
Polarisation
An important aspect of maser study is polarisation of the emission. Astronomical masers are often very highly polarised, sometimes 100% (in the case of some OH masers) in a circular fashion, and to a lesser degree in a linear fashion. This polarisation is due to some combination of the Zeeman effect, magnetic beaming of the maser radiation, and anisotropic pumping which favours certain magnetic-state transitions.
Many of the characteristics of megamaser emission are different.
Maser environments
Comets
Comets are small bodies (5 to 15 km diameter) of frozen volatiles (e.g., water, carbon dioxide, ammonia, and methane) embedded in a crusty silicate filler that orbit the Sun in eccentric orbits. As they approach the Sun, the volatiles vaporise to form a halo and later a tail around the nucleus. Once vaporised, these molecules can form inversions and mase.
The impact of comet Shoemaker-Levy 9 with Jupiter in 1994 resulted in maser emissions in the 22 GHz region from the water molecule. Despite the apparent rarity of these events, observation of the intense maser emission has been suggested as a detection scheme for extrasolar planets.
Ultraviolet light from the Sun breaks down some water molecules to form hydroxides that can mase. In 1997, 1667-MHz maser emission characteristic of hydroxide was observed from comet Hale-Bopp.
Planetary atmospheres
It is predicted that masers exist in the atmospheres of gas giant planets. Such masers would be highly variable due to planetary rotation (10-hour period for Jovian planets). Cyclotron masers have been detected at the north pole of Jupiter.
Planetary systems
In 2009, S. V. Pogrebenko et al. reported the detection of water masers in the plumes of water associated with the Saturnian moons Hyperion, Titan, Enceladus, and Atlas.
Stellar atmospheres
The conditions in the atmospheres of late-type stars support the pumping of different maser species at different distances from the star. Due to instabilities within the nuclear burning sections of the star, the star experiences periods of increased energy release. These pulses produce a shockwave that forces the atmosphere outward. Hydroxyl masers occur at a distance of about 1,000 to 10,000 astronomical units (AU), water masers at a distance of about 100 to 400 AU, and silicon monoxide masers at a distance of about 5 to 10 AU.
Both radiative and collisional pumping resulting from the shockwave have been suggested as the pumping mechanism for the silicon monoxide masers. These masers diminish at larger radii as the gaseous silicon monoxide condenses into dust, depleting the available maser molecules. For the water masers, the inner and outer radii limits roughly correspond to the density limits for maser operation. At the inner boundary, the collisions between molecules are enough to remove a population inversion. At the outer boundary, the density and optical depth are low enough that the gain of the maser is diminished. The hydroxyl masers are supported by chemical pumping. At the distances where these masers are found, water molecules are dissociated by UV radiation.
Star-forming regions
Young stellar objects and (ultra)compact H II regions embedded in molecular clouds and giant molecular clouds support the bulk of astrophysical masers. Various pumping schemes – both radiative and collisional and combinations thereof – result in the maser emission of multiple transitions of many species. For example, the OH molecule has been observed to mase at 1612, 1665, 1667, 1720, 4660, 4750, 4765, 6031, 6035, and 13441 MHz. Water and methanol masers are also typical of these environments. Relatively rare masers such as ammonia and formaldehyde may also be found in star-forming regions.
Supernova remnants
The 1720 MHz maser transition of hydroxide is known to be associated with supernova remnants that interact with molecular clouds.
Extragalactic sources
While some of the masers in star forming regions can achieve luminosities sufficient for detection from external galaxies (such as the nearby Magellanic Clouds), masers observed from distant galaxies generally arise in wholly different conditions. Some galaxies possess central black holes into which a disk of molecular material (about 0.5 parsec in size) is falling. Excitations of these molecules in the disk or in a jet can result in megamasers with large luminosities. Hydroxyl, water, and formaldehyde masers are known to exist in these conditions.
Ongoing research
Astronomical masers remain an active field of research in radio astronomy and laboratory astrophysics, in part because they are valuable diagnostic tools for astrophysical environments that may otherwise elude rigorous quantitative study, and because they facilitate the study of conditions that are inaccessible in terrestrial laboratories. A global collaboration called the Maser Monitoring Organisation, colloquially known as the M2O, is one prominent group of researchers in this discipline.
Variability
Maser variability is generally understood to mean the change in apparent brightness to the observer. Intensity variations can occur on timescales from days to years indicating limits on maser size and excitation scheme. However, masers change in various ways over various timescales.
Distance determinations
Masers in star-forming regions are known to move across the sky along with the material that is flowing out from the forming star(s). Also, since the emission is a narrow spectral line, line-of-sight velocity can be determined from the Doppler shift variation of the observed frequency of the maser, permitting a three-dimensional mapping of the dynamics of the maser environment. Perhaps the most spectacular success of this technique is the dynamical determination of the distance to the galaxy NGC 4258 from the analysis of the motion of the masers in the black-hole disk.
Also, water masers have been used to estimate the distance and proper motion of galaxies in the Local Group, including that of the Triangulum Galaxy.
VLBI observations of maser sources in late-type stars and star-forming regions provide determinations of their trigonometric parallax and therefore their distance. This method is much more accurate than other distance determinations, and gives information about the galactic distance scale, e.g. the distances of spiral arms.
Open issues
Unlike terrestrial lasers and masers, for which the excitation mechanism is known and engineered, the reverse is true for astrophysical masers. In general, astrophysical masers are discovered empirically and then studied further in order to develop plausible suggestions about possible pumping schemes. Quantification of the transverse size, the spatial and temporal variations, and the polarisation state (typically requiring VLBI telemetry) is useful in the development of a pump theory. Galactic formaldehyde masing is one such example that remains problematic.
On the other hand, some masers have been predicted to occur theoretically but have yet to be observed in nature. For example, the magnetic dipole transitions of the OH molecule near 53 MHz are expected to occur but have yet to be observed, perhaps due to a lack of sensitive equipment.
| Physical sciences | Radio astronomy | Astronomy |
3618245 | https://en.wikipedia.org/wiki/Safety%20glass | Safety glass | Safety glass is glass with additional safety features that make it less likely to break, or less likely to pose a threat when broken. Common designs include toughened glass (also known as tempered glass), laminated glass, and wire mesh glass (also known as wired glass). Toughened glass was invented in 1874 by Francois Barthelemy Alfred Royer de la Bastie. Wire mesh glass was invented in 1892 by Frank Shuman. Laminated glass was invented in 1903 by the French chemist Édouard Bénédictus (1878–1930).
These three approaches can easily be combined, allowing for the creation of glass that is at the same time toughened, laminated, and contains a wire mesh. However, combining a wire mesh with the other techniques is unusual, as it typically compromises their individual qualities. In many developed countries, safety glass is required by building regulations, making properties safer.
Toughened glass
Toughened glass is processed by controlled thermal or chemical treatments to increase its strength compared with normal glass. Tempering, by design, creates balanced internal stresses which causes the glass sheet, when broken, to crumble into small granular chunks of similar size and shape instead of splintering into random, jagged shards. The granular chunks are less likely to cause injury.
As a result of its safety and strength, tempered glass is used in a variety of demanding applications, including passenger vehicle windows, shower doors, architectural glass doors and tables, refrigerator trays, as a component of bulletproof glass, for diving masks, and various types of plates and cookware. In the United States, Federal law has required safety glass in doors and tub and shower enclosures since 1977.
Laminated glass
Laminated glass is composed of layers of glass and plastic held together by an interlayer, typically of polyvinyl butyral (PVB), between its two or more layers of glass. The interlayer keeps the layers of glass bonded even when broken, and its toughening prevents the glass from breaking up into large sharp pieces. This produces a characteristic "spider web" cracking pattern (radial and concentric cracks) when the impact is not enough to completely pierce the glass.
Laminated glass is normally used when there is a possibility of human impact or where the glass could fall if shattered. Skylight glazing and automobile windshields typically use laminated glass. In geographical areas requiring hurricane-resistant construction, laminated glass is often used in exterior storefronts, curtain walls and windows. The PVB interlayer also gives the glass a much higher sound insulation rating, due to the damping effect, and also blocks most of the incoming UV radiation (88% in window glass and 97.4% in windscreen glass).
Wire mesh glass
Wire mesh glass (also known as Georgian Wired Glass) has a grid or mesh of thin metal wire embedded within the glass. Wired glass is used in the US for its fire-resistant abilities, and is well-rated to withstand both heat and hose streams. This is why wired glass is used exclusively on service elevators to prevent fire ingress to the shaft, and also why it is commonly found in institutional settings which are often well-protected and partitioned against fire. The wire prevents the glass from falling out of the frame even if it cracks under thermal stress, and is far more heat-resistant than a laminating material.
Wired glass, as it is typically described, does not perform the function most individuals associate with it. The presence of the wire mesh suggests a strengthening component, as it is metallic and conjures up the idea of rebar in reinforced concrete or other such examples. Despite this impression, wired glass is actually weaker than unwired glass due to the incursions of the wire into the structure of the glass. Wired glass may often cause heightened injury in comparison to unwired glass, as the wire amplifies the irregularity of any fractures. This has led to a decline in its use institutionally, particularly in schools.
In recent years, new materials have become available that offer both fire-ratings and safety ratings so the continued use of wired glass is being debated worldwide. The US International Building Code effectively banned wired glass in 2006.
Canada's building codes still permit the use of wired glass but the codes are being reviewed and traditional wired glass is expected to be greatly restricted in its use. Australia has no similar review taking place.
| Technology | Materials | null |
8371972 | https://en.wikipedia.org/wiki/Tetrigidae | Tetrigidae | Tetrigidae is an ancient family in the order Orthoptera, which also includes crickets, grasshoppers, and their allies. Species within the Tetrigidae are variously called groundhoppers, pygmy grasshoppers, pygmy devils or (mostly historically) "grouse locusts".
Diagnostic characteristics
Tetrigidae are typically less than 20 mm in length and are recognizable by a long pronotum. This pronotum extends over the length of the abdomen, sometimes to the tip of the wings, and ends in a point. In other Orthoptera, the pronotum is short and covers neither the abdomen nor the wings. Tetrigidae are generally cryptic in coloration. Some species have enlarged pronota that mimic leaves, stones or twigs. Other characteristics pygmy grasshoppers exhibit in comparison to other Orthoptera families are the lack of an arolium between the claws, the first thoracic sternite being modified into a collar-like structure called the sternomentum, a tarsal formula of 2-2-3, scaly fore-wings, and developed hindwings.
General biology
In temperate regions, Tetrigidae are generally found along streams and ponds, where they feed on algae and diatoms. The North American species Paratettix aztecus and Paratettix mexicanus, for example, depend on aquatic primary production for between 80% and 100% of their diet. Riparian species are capable of swimming on the surface of the water, and readily leap into the water when alarmed. Some species in the tribe Scelimenini are fully aquatic and capable of swimming underwater.
The highest biodiversity of Tetrigidae is found in tropical forests. Some tropical species are arboreal and live among mosses and lichens in tree buttresses or in the canopy, while others live on the forest floor.
Like other Orthoptera, Tetrigidae have a hemimetabolous development, in which eggs hatch into nymphs. Unlike other temperate Orthoptera, however, temperate Tetrigidae generally overwinter as adults.
Some subfamilies within the Tetrigidae, such as the Batrachideinae, are sometimes elevated to family rank besides the Tetrigidae.
Arulenus miae is a pygmy grasshopper species from the tropical mountainous rainforests of the Philippines. The species was first discovered in a Facebook post.
Etymology
Origin of the name of the family is not completely clear as there are different sources on its etymology. The name may be derived from Latin tetricus or taetricus, meaning harsh, sour, severe. The name may also originate from the earlier name 'Tettigidae', based on Tettix (synonym of Tetrix), which was preoccupied by Tettigidae (synonym of Cicadidae). Because of the preoccupation by the cicadas' family name, the second 't' in 'tt' was changed into 'r', resulting in the word Tetrigidae.
Subfamilies and Genera
Approximately 2,000 species have been described; according to the Orthoptera Species File the following genera are included:
Subfamily Batrachideinae
Auth.: Bolívar, 1887; selected genera:
Batrachidea Serville, 1838
Saussurella Bolívar, 1887
Tettigidea Scudder, 1862
Subfamily Cladonotinae
Auth.: Bolívar, 1887; selected genera:
Tribe Cladonotini Bolívar, 1887
Cladonotus Saussure, 1862
Deltonotus Hancock, 1904
Piezotettix Bolívar, 1887
Tribe Choriphyllini Cadena-Castañeda & Silva, 2019
Choriphyllum Serville, 1838
Phyllotettix Hancock, 1902
Tribe Valalyllini Deranja, Kasalo, Adžić, Franjević & Skejo, 2022
Lepocranus Devriese, 1991
Valalyllum Deranja, Kasalo, Adžić, Franjević & Skejo, 2022
Tribe Xerophyllini Günther, 1979
SE Asia - selected genera:
Potua Bolívar, 1887 (genus group)
Xerophyllum Fairmaire, 1846
Tribe Unassigned
Austrohancockia Günther, 1938
Cota Bolívar, 1887
Epitettix Hancock, 1907
Nesotettix Holdhaus, 1909
Subfamily Lophotettiginae
Auth.: Hancock, 1909
Lophotettix Hancock, 1909
Phelene Bolívar, 1906
Subfamily Metrodorinae
Auth.: Bolívar, 1887; selected genera:
Tribe Amorphopini Günther, 1939
Amorphopus Serville, 1838
Tribe Cleostratini Hancock, 1907
Cleostratus Stål, 1877 (Philippines)
Tribe Clinophaestini Storozhenko, 2013
Birmana Brunner von Wattenwyl, 1893
Clinophaestus Storozhenko, 2013
Tribe Miriatrini Cadena-Castañeda & Cardona, 2015 (monotypic)
Miriatra Bolívar, 1906
Tribe Ophiotettigini Tumbrinck & Skejo, 2017
Ophiotettix Walker, 1871
Uvarovithyrsus Storozhenko, 2016
Tribe Unassigned
Bolivaritettix Günther, 1939
Cleostratoides Storozhenko, 2013
Crimisus Bolívar, 1887
Hildegardia Günther, 1974
Holocerus Bolívar, 1887
Macromotettix Günther, 1939
Mazarredia Bolívar, 1887
Pseudoparatettix Günther, 1937
Pseudoxistrella Liang, 1991
Vaotettix Podgornaya, 1986
Subfamily Scelimeninae
Auth.: Hancock, 1907
Tribe Scelimenini Hancock, 1907; selected genera:
Amphibotettix Hancock, 1906
Austrohancockia Günther, 1938
Bidentatettix Zheng, 1992
Discotettix Costa, 1864
Gavialidium Saussure, 1862
Scelimena Serville, 1838
incertae sedis
Zhengitettix Liang, 1994
Subfamily Tetriginae
Auth.: Serville, 1838
Tribe Dinotettigini Günther, 1979
Afrocriotettix Günther, 1938
Dinotettix Bolívar, 1905
Ibeotettix Rehn, 1930
Lamellitettix Hancock, 1904
Marshallacris Rehn, 1948
Pseudamphinotus Günther, 1979
Tribe Tetrigini Serville, 1838
Clinotettix Bei-Bienko, 1933
Euparatettix Hancock, 1904
Exothotettix Zheng & Jiang, 1993
Hydrotetrix Uvarov, 1926
Paratettix Bolívar, 1887
Pseudosystolederus Günther, 1939 - southern Africa
Tetrix Latreille, 1802 (synonym Depressotetrix Karaman, 1960)
Thibron Rehn, 1939
Tribe unassigned:
Aalatettix Zheng & Mao, 2002
Alulatettix Liang, 1993
Ankistropleuron Bruner, 1910
Bannatettix Zheng, 1993
Bienkotetrix Karaman, 1965
Bufonides Bolívar, 1898
Carolinotettix Willemse, 1951
Coptottigia Bolívar, 1912
Cranotettix Grant, 1955
Ergatettix Kirby, 1914
Flatocerus Liang & Zheng, 1984
Formosatettix Tinkham, 1937
Formosatettixoides Zheng, 1994
Gibbotettix Zheng, 1992
Hedotettix Bolívar, 1887
Leptacrydium Chopard, 1945
Macquillania Günther, 1972
Micronotus Hancock, 1902
Neocoptotettix Shishodia, 1984
Neotettix Hancock, 1898
Nomotettix Morse, 1894
Ochetotettix Morse, 1900
Oxyphyllum Hancock, 1909
Phaesticus Uvarov, 1940
Sciotettix Ichikawa, 2001
Stenodorus Hancock, 1906
Teredorus Hancock, 1907
Tettiella Hancock, 1909
Tettiellona Günther, 1979
Uvarovitettix Bazyluk & Kis, 1960
Xiaitettix Zheng & Liang, 1993
Subfamily Tripetalocerinae
Auth.: Bolívar, 1887
Tripetalocerinae was originally described by Bolívar in 1887 to gather all the Tetrigidae genera of the old world with widened antennae (e.g. Arulenus, Discotettix, Hirrius, Ophiotettix, Tripetalocera). This subfamily today includes only two species in two genera - Tripetalocera (with one species) from India and Borneo and Tripetaloceroides (with one species) from Vietnam and PR China. Members of the subfamily are distinctive within Tetrigidae for their massive antennae built up of only eight segments (other Tetrigidae usually have 11-16, Batrachideinae 18-22). Until recently, the subfamily included two tribes - Tripetalocerini and Clinophaestini (including Clinophaestus and Birmana), but the latter was moved to the subfamily Metrodorinae due to its similarity to Ophiotettigini.
Tripetalocera - monotypic Tripetalocera ferruginea Westwood, 1834
Tripetaloceroides Storozhenko, 2013 - monotypic Tripetaloceroides tonkinensis (Günther, 1938)
Subfamily unassigned
Criotettigini
Auth. Kevan, 1966
Criotettix Bolívar, 1887
Dasyleurotettix Rehn, 1904
Thoradontini
Auth. Kevan, 1966
Eucriotettix Hebard, 1930
Loxilobus Hancock, 1904
Thoradonta Hancock, 1909
Subfamily and tribe unassigned
†Archaeotetrix Sharov, 1968
Bolotettix Hancock, 1907
Coptotettix Bolívar, 1887
Cyphotettix Rehn, 1952
†Eozaentetrix Zessin, 2017
Euloxilobus Sjöstedt, 1936
Paramphinotus Zheng, 2004
Peronotettix Rehn, 1952
Phaesticus Uvarov, 1940 (synonym Flatocerus Liang & Zheng, 1984)
Probolotettix Günther, 1939
†Prototetrix Sharov, 1968
Syzygotettix Günther, 1938
Tettitelum Hancock, 1915
| Biology and health sciences | Orthoptera | Animals |
8372004 | https://en.wikipedia.org/wiki/Periodic%20trends | Periodic trends | In chemistry, periodic trends are specific patterns present in the periodic table that illustrate different aspects of certain elements when grouped by period and/or group. They were discovered by the Russian chemist Dmitri Mendeleev in 1869. Major periodic trends include atomic radius, ionization energy, electron affinity, electronegativity, nucleophilicity, electrophilicity, valency, nuclear charge, and metallic character. Mendeleev built the foundation of the periodic table by organizing the elements based on atomic weight, leaving empty spaces where he believed undiscovered elements would take their places. Mendeleev's discovery of this trend allowed him to predict the existence and properties of three unknown elements, which were later discovered by other chemists and named gallium, scandium, and germanium. English physicist Henry Moseley later discovered that organizing the elements by atomic number instead of atomic weight naturally groups elements with similar properties.
Summary of trends
Atomic radius
The atomic radius is the distance from the atomic nucleus to the outermost electron orbital in an atom. In general, the atomic radius decreases as we move from left-to-right in a period, and it increases when we go down a group. This is because in periods, the valence electrons are in the same outermost shell. The atomic number increases within the same period while moving from left to right, which in turn increases the effective nuclear charge. The increase in attractive forces reduces the atomic radius of elements. When we move down the group, the atomic radius increases due to the addition of a new shell.
Nuclear charge and effective nuclear charge
Nuclear charge is defined as the number of protons in the nucleus of an element. Thus, from left-to-right of a period and top-to-bottom of a group, as the number of protons in the nucleus increases, the nuclear charge will also increase. However, electrons of multi-electron atoms do not experience the entire nuclear charge due to shielding effects from the other electrons. In this case, the nuclear charge of atoms that experience this shielding is referred to as effective nuclear charge. Shielding increases as the number of an atom’s inner shells increases. So from left-to-right of a period, the effective nuclear charge will still increase. But, from top-to-bottom of a group, as the number of shells increases, the effective nuclear charge will decrease.
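As an illustration of effective nuclear charge, here is a hedged sketch using Slater's rules, a standard approximation that the article itself does not describe; the numbers are for the single 3s valence electron of sodium:

```python
# Slater's rules for Na (Z = 11), valence electron in 3s:
# each electron in the n-1 shell shields 0.85, each deeper electron 1.00.
Z = 11
S = 8 * 0.85 + 2 * 1.00   # eight n=2 electrons, two n=1 electrons -> S = 8.8
Z_eff = Z - S             # = 2.2, far below the full nuclear charge of 11
print(Z_eff)
```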
Ionization energy
The ionization energy is the minimum amount of energy that an electron in a gaseous atom or ion has to absorb to come out of the influence of the attracting force of the nucleus. It is also referred to as ionization potential. The first ionization energy is the amount of energy that is required to remove the first electron from a neutral atom. The energy needed to remove the second electron from the neutral atom is called the second ionization energy and so on.
As one moves from left to right across a period in the modern periodic table, the ionization energy increases as the nuclear charge increases and the atomic size decreases. The decrease in the atomic size results in a more potent force of attraction between the electrons and the nucleus. However, if one moves down a group, the ionization energy decreases as atomic size increases due to the addition of a valence shell, thereby diminishing the nucleus's attraction to electrons.
Electron affinity
The energy released when an electron is added to a neutral gaseous atom to form an anion is known as electron affinity. Trend-wise, as one progresses from left to right across a period, the electron affinity will increase as the nuclear charge increases and the atomic size decreases resulting in a more potent force of attraction of the nucleus and the added electron. However, as one moves down in a group, electron affinity decreases because atomic size increases due to the addition of a valence shell, thereby weakening the nucleus's attraction to electrons. Although it may seem that fluorine should have the greatest electron affinity, its small size generates enough repulsion among the electrons, resulting in chlorine having the highest electron affinity in the halogen family.
Electronegativity
The tendency of an atom in a molecule to attract the shared pair of electrons towards itself is known as electronegativity. It is a dimensionless quantity because it is only a tendency. The most commonly used scale to measure electronegativity was designed by Linus Pauling. The scale has been named the Pauling scale in his honour. According to this scale, fluorine is the most electronegative element, while cesium is the least electronegative element.
Trend-wise, as one moves from left to right across a period in the modern periodic table, the electronegativity increases as the nuclear charge increases and the atomic size decreases. However, if one moves down in a group, the electronegativity decreases as atomic size increases due to the addition of a valence shell, thereby decreasing the atom's attraction to electrons.
However, in group XIII (boron family), the electronegativity first decreases from boron to aluminium and then increases down the group. It is due to the fact that the atomic size increases as we move down the group, but at the same time the effective nuclear charge increases due to poor shielding of the inner d and f electrons. As a result, the force of attraction of the nucleus for the electrons increases and hence the electronegativity increases from aluminium to thallium.
Valency
The valency of an element is the number of electrons that must be lost or gained by an atom to obtain a stable electron configuration. In simple terms, it is the measure of the combining capacity of an element to form chemical compounds. Electrons found in the outermost shell are generally known as valence electrons; the number of valence electrons determines the valency of an atom.
Trend-wise, while moving from left to right across a period, the number of valence electrons of elements increases and varies between one and eight. But the valency of elements first increases from 1 to 4, and then it decreases to 0 as we reach the noble gases. However, as we move down in a group, the number of valence electrons generally does not change. Hence, in many cases the elements of a particular group have the same valency. However, this periodic trend is not always followed for heavier elements, especially for the f-block and the transition metals. These elements show variable valency as these elements have a d-orbital as the penultimate orbital and an s-orbital as the outermost orbital. The energies of these (n-1)d and ns orbitals (e.g., 4d and 5s) are relatively close.
Metallic and non-metallic properties
Metallic properties generally increase down the groups, as decreasing attraction between the nuclei and outermost electrons causes these electrons to be more loosely bound and thus able to conduct heat and electricity. Across each period, from left to right, the increasing attraction between the nuclei and the outermost electrons causes the metallic character to decrease. In contrast, the nonmetallic character decreases down the groups and increases across the periods.
Nucleophilicity and Electrophilicity
Electrophilicity refers to the tendency of an electron-deficient species, called an electrophile, to accept electrons. Similarly, nucleophilicity is defined as the affinity of an electron-rich species, known as a nucleophile, to donate electrons to another species. Trends in the periodic table are useful for predicting an element's nucleophilicity and electrophilicity. In general, nucleophilicity decreases as electronegativity increases, meaning that nucleophilicity decreases from left to right across the periodic table. On the other hand, electrophilicity generally increases as electronegativity increases, meaning that electrophilicity follows an increasing trend from left to right on the periodic table. However, the specific molecular or chemical environment of the electrophile also influences electrophilicity. Therefore, electrophilicity cannot be accurately predicted based solely on periodic trends.
| Physical sciences | Periodic table | Chemistry |
8372175 | https://en.wikipedia.org/wiki/Krypton%20difluoride | Krypton difluoride | Krypton difluoride, KrF2, is a chemical compound of krypton and fluorine. It was the first compound of krypton discovered. It is a volatile, colourless solid at room temperature. The structure of the KrF2 molecule is linear, with Kr−F distances of 188.9 pm. It reacts with strong Lewis acids to form salts of the KrF+ and Kr2F3+ cations.
The atomization energy of KrF2 (KrF2(g) → Kr(g) + 2 F(g)) is 21.9 kcal/mol, giving an average Kr–F bond energy of only 11 kcal/mol, the weakest of any isolable fluoride. In comparison, the dissociation of difluorine to atomic fluorine requires cleaving a F–F bond with a bond dissociation energy of 36 kcal/mol. Consequently, KrF2 is a good source of the extremely reactive and oxidizing atomic fluorine. It is thermally unstable, with a decomposition rate of 10% per hour at room temperature. The formation of krypton difluoride is endothermic, with a heat of formation (gas) of 14.4 ± 0.8 kcal/mol measured at 93 °C.
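The average bond energy quoted above follows from dividing the atomization energy over the two Kr–F bonds; a one-line check (values copied from the text):

```python
atomization = 21.9         # kcal/mol, KrF2(g) -> Kr(g) + 2 F(g)
kr_f = atomization / 2     # ~11 kcal/mol per Kr-F bond
f_f = 36.0                 # kcal/mol, F-F bond dissociation energy
print(kr_f, f_f / kr_f)    # Kr-F is roughly a third as strong as F-F
```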
Synthesis
Krypton difluoride can be synthesized using many different methods including electrical discharge, photoionization, hot wire, and proton bombardment. The product can be stored at −78 °C without decomposition.
Electrical discharge
Electric discharge was the first method used to make krypton difluoride. It was also used in the only experiment ever reported to produce krypton tetrafluoride, although the identification of krypton tetrafluoride was later shown to be mistaken. The electrical discharge method involves subjecting 1:1 to 2:1 mixtures of F2 to Kr, at a pressure of 40 to 60 torr, to high-energy electric arcs. Rates of almost 0.25 g/h can be achieved. The problem with this method is that it is unreliable with respect to yield.
Proton bombardment
Using proton bombardment for the production of KrF2 has a maximum production rate of about 1 g/h. This is achieved by bombarding mixtures of Kr and F2 with a proton beam operating at an energy level of 10 MeV and at a temperature of about 133 K. It is a fast method of producing relatively large amounts of KrF2, but requires a source of high-energy protons, which usually would come from a cyclotron.
Photochemical
The successful photochemical synthesis of krypton difluoride was first reported by Lucia V. Streng in 1963. It was next reported in 1975 by J. Slivnik. The photochemical process for the production of KrF2 involves the use of UV light and can produce 1.22 g/h under ideal circumstances. The ideal wavelengths to use are in the range of 303–313 nm. Harder UV radiation is detrimental to the production of KrF2. Using Pyrex glass, Vycor, or quartz will significantly increase yield because they all block harder UV light. In a series of experiments performed by S. A. Kinkead et al., it was shown that a quartz insert (UV cut-off of 170 nm) produced on average 158 mg/h, Vycor 7913 (UV cut-off of 210 nm) produced on average 204 mg/h, and Pyrex 7740 (UV cut-off of 280 nm) produced on average 507 mg/h. It is clear from these results that higher-energy ultraviolet light reduces the yield significantly. The ideal circumstances for the production of KrF2 by a photochemical process appear to occur when krypton is a solid and fluorine is a liquid, which occurs at 77 K. The biggest problem with this method is that it requires the handling of liquid F2, which risks being released if the vessel becomes overpressurized.
Hot wire
The hot wire method for the production of KrF2 uses krypton in a solid state with a hot wire running a few centimeters away from it while fluorine gas is passed over the wire. A large current is run through the wire, causing it to reach temperatures around 680 °C. This causes the fluorine gas to split into its radicals, which can then react with the solid krypton. Under ideal conditions, it has been known to reach a maximum yield of 6 g/h. In order to achieve optimal yields, the gap between the wire and the solid krypton should be 1 cm, giving rise to a temperature gradient of about 900 °C/cm. A major downside to this method is the amount of electricity that has to be passed through the wire; it is dangerous if not properly set up.
Structure
Krypton difluoride can exist in one of two possible crystallographic morphologies: α-phase and β-phase. β-KrF2 generally exists above −80 °C, while α-KrF2 is more stable at lower temperatures. The unit cell of α-KrF2 is body-centred tetragonal.
Reactions
Krypton difluoride is primarily a powerful oxidising and fluorinating agent, more powerful even than elemental fluorine because the Kr–F bond has a lower bond energy than the F–F bond. It has a redox potential of +3.5 V for the KrF2/Kr couple, making it the most powerful known oxidising agent, although a hypothetical species could be even stronger, and nickel tetrafluoride comes close.
For example, krypton difluoride can oxidise gold to its highest-known oxidation state, +5:

7 KrF2 + 2 Au → 2 KrF[AuF6] + 5 Kr

KrF[AuF6] decomposes at 60 °C into gold(V) fluoride and krypton and fluorine gases:

KrF[AuF6] → AuF5 + Kr + F2

KrF2 can also directly oxidise xenon to xenon hexafluoride:

3 KrF2 + Xe → XeF6 + 3 Kr

KrF2 is used to synthesize the highly reactive BrF6+ cation. KrF2 reacts with SbF5 to form the salt KrF+SbF6−; the KrF+ cation is capable of oxidising both BrF5 and ClF5 to BrF6+ and ClF6+, respectively.
KrF2 can also react with elemental silver to produce silver(III) fluoride, AgF3.
Irradiation of a crystal of KrF2 at 77 K with γ-rays leads to the formation of the krypton monofluoride radical, KrF•, a violet-colored species that was identified by its ESR spectrum. The radical, trapped in the crystal lattice, is stable indefinitely at 77 K but decomposes at 120 K.
| Physical sciences | Noble gas compounds | Chemistry |
6390647 | https://en.wikipedia.org/wiki/Wide-field%20Infrared%20Survey%20Explorer | Wide-field Infrared Survey Explorer | Wide-field Infrared Survey Explorer (WISE, observatory code C51, Explorer 92 and MIDEX-6) was a NASA infrared astronomy space telescope in the Explorers Program launched in December 2009. WISE discovered thousands of minor planets and numerous star clusters. Its observations also supported the discovery of the first Y-type brown dwarf and Earth trojan asteroid.
WISE performed an all-sky astronomical survey with images in 3.4, 4.6, 12 and 22 μm wavelength range bands, over ten months using a 40 cm (16 in) diameter infrared telescope in Earth orbit.
After its solid hydrogen coolant was depleted, it was placed in hibernation mode in February 2011.
In 2013, NASA reactivated the WISE telescope to search for near-Earth objects (NEO), such as comets and asteroids, that could collide with Earth.
The reactivation mission was called Near-Earth Object Wide-field Infrared Survey Explorer (NEOWISE). As of August 2023, NEOWISE was 40% through the 20th coverage of the full sky.
Science operations and data processing for WISE and NEOWISE take place at the Infrared Processing and Analysis Center at the California Institute of Technology in Pasadena, California. The WISE All-Sky (WISEA) data, including processed images, source catalogs and raw data, was released to the public on 14 March 2012, and is available at the Infrared Science Archive.
The NEOWISE mission was originally expected to end in early 2025 with the satellite reentering the atmosphere some time after. However, the NEOWISE mission concluded its science survey on 31 July 2024 with the satellite expected to reenter Earth's atmosphere later the same year (2 November 2024). This decision was made due to increased solar activity hastening the decay of its orbit and the lack of an onboard propulsion system for orbital maintenance. The onboard transmitter was turned off on 8 August, marking the formal decommissioning of the spacecraft.
Mission goals
The mission was planned to create infrared images of 99% of the sky, with at least eight images made of each position on the sky in order to increase accuracy. The spacecraft was placed in a 525 km-altitude, circular, polar, Sun-synchronous orbit for its ten-month mission, during which it took 1.5 million images, one every 11 seconds. The satellite orbited above the terminator, its telescope always pointing away from the Earth (while avoiding pointing at the Moon) and its solar cells towards the Sun. Each image covers a 47-arcminute field of view (FoV) with a resolution of 6 arcseconds. Each area of the sky was scanned at least 10 times at the equator; the poles were scanned on theoretically every revolution because the images overlap there. The produced image library contains data on the local Solar System, the Milky Way, and the more distant Universe. Among the objects WISE studied are asteroids, cool and dim stars such as brown dwarfs, and the most luminous infrared galaxies.
Targets within the Solar System
WISE could detect objects warmer than 70–100 K; Kuiper belt objects are generally too cold for it, and Pluto was the only one it detected. A Neptune-sized object would be detectable out to 700 astronomical units (AU), and a Jupiter-mass object out to 1 light year (63,000 AU), where it would still be within the Sun's zone of gravitational control. A larger object of 2–3 Jupiter masses would be visible at a distance of up to 7–10 light years.
At the time of planning, it was estimated that WISE would detect about 300,000 main-belt asteroids, of which approximately 100,000 would be new, and some 700 near-Earth objects (NEOs), including about 300 previously undiscovered. That translates to about 1,000 new main-belt asteroids per day, and 1–3 NEOs per day. The peak of the magnitude distribution for NEOs was expected to be about 21–22 in the V band. WISE would detect each typical Solar System object 10–12 times over about 36 hours, at intervals of 3 hours.
Targets outside the Solar System
Star-forming regions, which are shrouded in interstellar dust, are detectable in the infrared, since electromagnetic radiation at these wavelengths can penetrate the dust. Infrared measurements from the WISE astronomical survey have been particularly effective at unveiling previously undiscovered star clusters. Examples of such embedded star clusters are Camargo 18, Camargo 440, Majaess 101, and Majaess 116. In addition, galaxies of the young Universe and interacting galaxies, where star formation is intensive, are bright in the infrared. At these wavelengths, interstellar gas clouds are also detectable, as are proto-planetary discs. The WISE satellite was expected to find at least 1,000 of these proto-planetary discs.
Spacecraft
The WISE satellite bus was built by Ball Aerospace & Technologies in Boulder, Colorado. The spacecraft is derived from the Ball Aerospace & Technologies RS-300 spacecraft architecture, particularly the NEXTSat spacecraft built for the successful Orbital Express mission launched on 9 March 2007. The flight system has an estimated mass of 661 kg. The spacecraft is three-axis stabilized, with body-fixed solar arrays. It uses a high-gain antenna in the Ku band to transmit to the ground through the geostationary Tracking and Data Relay Satellite System (TDRSS). Ball also performed the testing and flight system integration.
Telescope
Construction of the WISE telescope was divided between Ball Aerospace & Technologies (spacecraft, operations support), SSG Precision Optronics, Inc. (telescope, optics, scan mirror), DRS Technologies and Rockwell International (focal planes), Lockheed Martin (cryostat, cooling for the telescope), and Space Dynamics Laboratory (instruments, electronics, and testing). The program was managed through the Jet Propulsion Laboratory.
The WISE instrument was built by the Space Dynamics Laboratory in Logan, Utah.
Mission
WISE surveyed the sky in four wavelength bands of the infrared at very high sensitivity. Its design goals specified that the full-sky atlas of stacked images have 5-sigma sensitivity limits of 120, 160, 650, and 2600 microjanskys (μJy) at 3.3, 4.7, 12, and 23 μm (microns). WISE achieved 5-sigma sensitivities of at least 68, 98, 860, and 5400 μJy at 3.4, 4.6, 12, and 22 μm for the WISE All-Sky data release. This is a factor of 1,000 better sensitivity than the survey completed in 1983 by the IRAS satellite in the 12 and 23 μm bands, and a factor of 500,000 better than the 1990s survey by the Cosmic Background Explorer (COBE) satellite at 3.3 and 4.7 μm. On the other hand, IRAS could also observe the 60 and 100 μm wavelengths.
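For readers more used to magnitudes than flux densities, these limits can be recast with the standard AB-magnitude relation m = −2.5 log10(Fν / 3631 Jy). The Python sketch below is illustrative only; WISE photometry is natively reported in Vega magnitudes, so the AB values here are just a unit conversion of the design-goal limits quoted above:

```python
import math

# 5-sigma design-goal sensitivity limits from the text, in microjanskys.
limits_ujy = {3.3: 120, 4.7: 160, 12: 650, 23: 2600}

def ab_magnitude(flux_ujy: float) -> float:
    """Convert a flux density in microjanskys to an AB magnitude,
    using the AB zero point of 3631 Jy."""
    return -2.5 * math.log10(flux_ujy * 1e-6 / 3631.0)

for band_um, limit in limits_ujy.items():
    print(f"{band_um} um: {limit} uJy -> AB mag {ab_magnitude(limit):.1f}")
# e.g. 120 uJy at 3.3 um corresponds to AB magnitude ~18.7;
# fainter (smaller) flux limits mean deeper imaging.
```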
Band 1 – 3.4 μm (micrometre) – broad-band sensitivity to stars and galaxies
Band 2 – 4.6 μm – detect thermal radiation from the internal heat sources of sub-stellar objects like brown dwarfs
Band 3 – 12 μm – detect thermal radiation from asteroids
Band 4 – 22 μm – sensitivity to dust in star-forming regions (material with temperatures of 70–100 kelvins)
The primary mission lasted 10 months: one month for checkout, six months for a full-sky survey, then an additional three months of survey until cryogenic coolant (which kept the instruments at 17 K) ran out. The partial second survey pass facilitated the study of changes (e.g. orbital movement) in observed objects.
Congressional hearing - November 2007
On 8 November 2007, the House Committee on Science and Technology's Subcommittee on Space and Aeronautics held a hearing to examine the status of NASA's Near-Earth Object (NEO) survey program. The prospect of using WISE was proposed by NASA officials.
NASA officials told Committee staff that NASA plans to use WISE to detect near-Earth objects in addition to performing its science goals. It was projected that WISE could detect 400 NEOs (or roughly 2% of the estimated NEO population of interest) within its one-year mission.
Results
By October 2010, over 33,500 new asteroids and comets were discovered, and nearly 154,000 Solar System objects had been observed by WISE.
The discovery of an ultra-cool brown dwarf, WISEPC J045853.90+643451.9, about 10–30 light years away from Earth, was announced in late 2010 based on early data. In July 2011, it was announced that WISE had discovered the first Earth trojan asteroid, 2010 TK7. WISE data also led to the discovery of the third-closest star system, Luhman 16.
As of May 2018, WISE / NEOWISE had also discovered 290 near-Earth objects and comets (see section below).
Project milestones
The WISE mission is led by Edward L. Wright of the University of California, Los Angeles. The mission has a long history under Wright's efforts and was first funded by NASA in 1999 as a candidate for a NASA Medium-class Explorer (MIDEX) mission under the name Next Generation Sky Survey (NGSS). The history of the program from 1999 to date is briefly summarized as follows:
January 1999 — NGSS is one of five missions selected for a Phase A study, with an expected selection in late 1999 of two of these five missions for construction and launch, one in 2003 and another in 2004. Mission cost is estimated at US$139 million at this time.
March 1999 — WIRE infrared telescope spacecraft fails within hours of reaching orbit.
October 1999 — Winners of MIDEX study are awarded, and NGSS is not selected.
October 2001 — NGSS proposal is re-submitted to NASA as a MIDEX mission.
April 2002 — NGSS proposal is accepted by the NASA Explorer office to proceed as one of four MIDEX programs for a Pre-Phase A study.
December 2002 — NGSS changes its name to Wide-field Infrared Survey Explorer (WISE).
March 2003 — NASA releases a press release announcing WISE has been selected for an Extended Phase-A study, leading to a decision in 2004 on whether to proceed with the development of the mission.
April 2003 — Ball Aerospace & Technologies is selected as the spacecraft provider for the WISE mission.
April 2004 — WISE is selected as NASA's next MIDEX mission. WISE's cost is estimated at US$208 million at this time.
November 2004 — NASA selects the Space Dynamics Laboratory at Utah State University to build the telescope for WISE.
October 2006 — WISE is confirmed for development by NASA and authorized to proceed with development. Mission cost at this time is estimated to be US$300 million.
14 December 2009 — WISE successfully launched from Vandenberg Air Force Base, California.
29 December 2009 — WISE successfully jettisoned instrument cover.
6 January 2010 — WISE first light image released.
14 January 2010 — WISE begins its regular four-wavelength survey, scheduled for nine months' duration. It is expected to cover 99% of the sky with overlapping images in the first six months, continuing with a second pass until the hydrogen coolant is exhausted about three months later.
25 January 2010 — WISE detects a never-before-seen near Earth asteroid, designated 2010 AB78.
11 February 2010 — WISE detects a previously unknown comet, designated P/2010 B2 (WISE).
25 February 2010 — WISE website reports it has surveyed over 25% of the sky to a depth of 7 overlapping image frames.
10 April 2010 — WISE website reports it has surveyed over 50% of the sky to a depth of 7 overlapping image frames.
26 May 2010 — WISE website reports it has surveyed over 75% of the sky to a depth of 7 overlapping image frames.
16 July 2010 — Press release announces that 100% sky coverage will be completed on 17 July 2010. About half of the sky will be mapped again before the instrument's block of solid hydrogen coolant sublimes and is exhausted.
October 2010 — WISE hydrogen coolant runs out. Start of NASA Planetary Division funded NEOWISE mission.
January 2011 — Entire sky surveyed to an image density of at least 16 frames (i.e. second scan of the sky completed).
Hibernation
17 February 2011 — WISE spacecraft transmitter turned off at 20:00 UTC by principal investigator Ned Wright. The spacecraft will remain in hibernation without ground contacts, awaiting possible future use.
14 April 2011 — Preliminary release of data covering 57% of the sky as seen by WISE.
27 July 2011 — First Earth trojan asteroid discovered from WISE data.
23 August 2011 — WISE confirms the existence of a new class of brown dwarf, the Y dwarf. Some of these stars appear to have temperatures less than 300 K, close to room temperature at about 25 °C. Y dwarfs show ammonia absorption, in addition to methane and water absorption bands displayed by T dwarfs.
14 March 2012 — Release of the WISE All-Sky data to the scientific community.
29 August 2012 — WISE reveals millions of black holes.
20 September 2012 — WISE was successfully contacted to check its status.
21 August 2013 — NASA announced it would recommission WISE with a new mission to search for asteroids.
Reactivation
19 December 2013 — NASA releases a new image taken by the reactivated WISE telescope, following an extended cooling-down phase. The revived NEOWISE mission is underway and collecting data.
7 March 2014 — NASA reports that WISE, after an exhaustive survey, has not been able to uncover any evidence of "planet X", a hypothesized planet within the Solar System.
26 April 2014 — The Penn State Center for Exoplanets and Habitable Worlds reports that WISE has found the coldest known brown dwarf, between −48 °C and −13 °C, 7.2 light years away from the Sun.
21 May 2015 — NASA reports the discovery of WISE J224607.57-052635.0, the most luminous known galaxy in the Universe.
History
Launch
The launch of the Delta II launch vehicle carrying the WISE spacecraft was originally scheduled for 11 December 2009. This attempt was scrubbed to correct a problem with a booster rocket steering engine. The launch was rescheduled for 14 December 2009, and the second attempt launched on time at 14:09:33 UTC from Vandenberg Air Force Base in California. The launch vehicle successfully placed the WISE spacecraft into the planned polar orbit at an altitude of 525 km above the Earth.
WISE avoided the problem that affected Wide Field Infrared Explorer (WIRE), which failed within hours of reaching orbit in March 1999. In addition, WISE was 1,000 times more sensitive than prior surveys such as IRAS, AKARI, and COBE's DIRBE.
"Cold" mission
A month-long checkout after launch found all spacecraft systems functioning normally and both the low- and high-rate data links to the operations center working properly. The instrument cover was successfully jettisoned on 29 December 2009. A first-light image was released on 6 January 2010: an eight-second exposure in the constellation Carina showing infrared light in false color, with blue, green, and red corresponding to 3.4, 4.6, and 12 μm, respectively (three of WISE's four wavelength bands). On 14 January 2010, the WISE mission started its official sky survey.
The WISE group's bid for continued funding for an extended "warm mission" was scored low by a NASA review board, in part because of a lack of outside groups publishing on WISE data. Such a mission would have allowed use of the 3.4 and 4.6 μm detectors after the last of the cryo-coolant had been exhausted, with the goal of completing a second sky survey to detect additional objects and obtain parallax data on putative brown dwarfs. NASA nonetheless extended the mission in October 2010 to search for near-Earth objects (NEOs).
By October 2010, over 33,500 new asteroids and comets were discovered, and over 154,000 Solar System objects were observed by WISE. While active it found dozens of previously unknown asteroids every day. In total, it captured more than 2.7 million images during its primary mission.
NEOWISE (pre-hibernation)
In October 2010, NASA extended the mission by one month with a program called Near-Earth Object WISE (NEOWISE). Due to its success, the program was extended a further three months. The focus was to look for asteroids and comets close to Earth's orbit, using the remaining post-cryogenic detection capability (two of WISE's four detectors operate without cryogen). In February 2011, NASA announced that NEOWISE had discovered many new objects in the Solar System, including twenty comets. During its primary and extended missions, the spacecraft delivered characterizations of 158,000 minor planets, including more than 35,000 newly discovered objects.
Hibernation and recommissioning
After completing a full scan of the asteroid belt for the NEOWISE mission, the spacecraft was put into hibernation on 1 February 2011. The spacecraft was briefly contacted to check its status on 20 September 2012.
On 21 August 2013, NASA announced it would recommission NEOWISE to continue its search for near-Earth objects (NEO) and potentially dangerous asteroids. It would additionally search for asteroids that a robotic spacecraft could intercept and redirect to orbit the Moon. The extended mission would be for three years at a cost of US$5 million per year, and was brought about in part due to calls for NASA to step up asteroid detection after the Chelyabinsk meteor exploded over Russia in February 2013.
NEOWISE was successfully taken out of hibernation in September 2013. With its coolant depleted, the spacecraft's temperature, relatively high after hibernation, was brought back down to operating temperature by having the telescope stare into deep space. Its instruments were then re-calibrated, and the first post-hibernation photograph was taken on 19 December 2013.
NEOWISE (post-hibernation)
The post-hibernation NEOWISE mission was anticipated to discover 150 previously unknown near-Earth objects and to learn more about the characteristics of 2,000 known asteroids. Few of the smallest objects were detected by NEOWISE's automated detection software, known as the WISE Moving Object Processing Software (WMOPS), because it requires five or more detections to be reported. The average albedo of the larger asteroids discovered by NEOWISE is 0.14.
The telescope was turned on again in 2013, and by December 2013 it had cooled down sufficiently to resume observations. Between then and May 2017, the telescope made almost 640,000 detections of over 26,000 previously known objects, including asteroids and comets. In addition, it discovered 416 new objects, about a quarter of which were classified as near-Earth objects.
As of July 2024, WISE / NEOWISE statistics list a total of 399 near-Earth objects (NEOs) discovered by the spacecraft:
365 NEAs (subset of NEOs)
66 PHAs (subset of NEAs)
34 comets
Of the 365 near-Earth asteroids (NEAs), 66 are considered potentially hazardous asteroids (PHAs), a subset of the much larger family of NEOs that is particularly likely to hit Earth and cause significant destruction. NEOs can be divided into NECs (comets only) and NEAs (asteroids only), and further into subcategories such as Atira asteroids, Aten asteroids, Apollo asteroids, Amor asteroids, and the potentially hazardous asteroids (PHAs).
NEOWISE has provided an estimate of the size of over 1,850 near-Earth objects. The NEOWISE mission was extended for two more years (1 July 2021 – 30 June 2023).
NEOWISE's replacement, the next-generation NEO Surveyor, is scheduled to launch in 2028, and will greatly expand on what humans have learned, and continue to learn, from NEOWISE.
"As of August 2023 NEOWISE is 40% through the 20th coverage of the full sky since the start of the Reactivation mission."
End of mission
On 13 December 2023, the Jet Propulsion Laboratory (JPL) announced that the satellite's orbit would decay so far that it would be unusable by early 2025. Increased solar activity as the Sun approaches solar maximum during Solar Cycle 25 was expected to increase atmospheric drag, hastening orbital decay, and the satellite was expected to subsequently reenter Earth's atmosphere. On 8 August 2024, JPL updated its estimate of the reentry to sometime in late 2024 and announced that NEOWISE's science survey had ended on 31 July. NEOWISE entered and burnt up in the Earth's atmosphere at 8:49 p.m. EDT on 1 November 2024.
Data releases
On 14 April 2011, a preliminary release of WISE data was made public, covering 57% of the sky observed by the spacecraft. On 14 March 2012, a new atlas and catalog of the entire infrared sky as imaged by WISE was released to the astronomical community. On 31 July 2012, the NEOWISE Post-Cryo Preliminary Data release was published. A release called AllWISE, combining all data, followed on 13 November 2013. NEOWISE data is released annually.
The WISE data include diameter estimates of intermediate precision, better than those derived from an assumed albedo though not as precise as good direct measurements; they are obtained from the combination of reflected light and thermal infrared emission, using a thermal model of the asteroid to estimate both its diameter and its albedo. In May 2016, technologist Nathan Myhrvold questioned the precision of the diameters and claimed systematic errors arising from the spacecraft's design. The original version of his criticism itself faced criticism for its methodology and did not pass peer review, but a revised version was subsequently published. The same year, an analysis of 100 asteroids by an independent group of astronomers gave results consistent with the original WISE analysis.
unWISE and CatWISE
The AllWISE co-added images were intentionally blurred, which is optimal for detecting isolated point sources but has the disadvantage that many sources in crowded regions go undetected. The unofficial, unblurred coadds of the WISE imaging (unWISE) provide sharper images and mask defects and transients. unWISE coadded images can be searched by coordinates on the unWISE website and are used for the citizen science projects Disk Detective and Backyard Worlds.
In 2019, a preliminary catalog called CatWISE was released. It combines the WISE and NEOWISE data and provides photometry at 3.4 and 4.6 μm, using the unWISE images and the AllWISE pipeline to detect sources. CatWISE includes fainter sources and far more accurate measurements of the motion of objects. The catalog is used to extend the number of discovered brown dwarfs, especially the cold and faint Y dwarfs. CatWISE is led by the Jet Propulsion Laboratory (JPL), California Institute of Technology, with funding from NASA's Astrophysics Data Analysis Program. The CatWISE preliminary catalog can be accessed through the Infrared Science Archive (IRSA).
Discovered objects
In addition to numerous comets and minor planets, WISE and NEOWISE discovered many brown dwarfs, some just a few light years from the Solar System; the first Earth trojan; and some of the most luminous galaxies in the universe.
Nearby stars
Nearby stars discovered using WISE within 30 light years:
Brown dwarfs
The nearest brown dwarfs discovered by WISE within 20 light-years include:
Before the discovery of Luhman 16 in 2013, WISE 1506+7027 was suspected to be the closest brown dwarf on the list of nearest stars.
Directly-imaged exoplanets
Directly imaged exoplanets first detected with WISE. See Definition of exoplanets: the IAU working definition as of 2018 requires Mplanet ≤ 13 MJup and Mplanet/Mcentral < 0.04006. Mmin and Mmax are the lower and upper mass limits of the planet in Jupiter masses.
Disks and young stars
The sensitivity of WISE in the infrared enabled the discovery of disks around young stars and old white dwarf systems. These discoveries usually require a combination of optical, near-infrared, and WISE or Spitzer mid-infrared observations. Examples are the red dwarf WISE J080822.18-644357.3, the brown dwarf WISEA J120037.79-784508.3, and the white dwarf LSPM J0207+3331. The NASA citizen science project Disk Detective uses WISE data. Additionally, researchers have used NEOWISE to discover erupting young stellar objects.
Nebulae
Researchers have discovered a few nebulae using WISE, such as the type Iax supernova remnant Pa 30. Nebulae around the massive B-type stars BD+60° 2668 and ALS 19653, an obscured shell around the Wolf-Rayet star WR 35, and a halo around the planetary nebula known as the Helix Nebula were also discovered with WISE.
Extragalactic discoveries
Active galactic nuclei (AGN) can be identified from their mid-infrared color; for example, one work used a combination of Gaia and unWISE data to identify AGNs. Luminous infrared galaxies can also be detected in the infrared, and one study used SDSS and WISE to identify such galaxies. NEOWISE observed the entire sky for more than 10 years and can be used to find transient events. Some of these discovered transients are tidal disruption events (TDEs) in galaxies and infrared detections of supernovae similar to SN 2010jl.
Minor planets
WISE is credited with discovering 3,088 numbered minor planets. Examples of the mission's numbered minor planet discoveries include:
Comet C/2020 F3 (NEOWISE)
On 27 March 2020, the comet C/2020 F3 (NEOWISE) was discovered by the spacecraft. It eventually became a naked-eye comet and was widely photographed by professional and amateur astronomers. It was the brightest comet visible in the northern hemisphere since Comet Hale–Bopp in 1997.
Gallery
Full sky views by WISE
Selected images by WISE
Map with nearby WISE stars
| Technology | Space-based observatories | null |
6395779 | https://en.wikipedia.org/wiki/Dwarf%20planet | Dwarf planet | A dwarf planet is a small planetary-mass object that is in direct orbit around the Sun, massive enough to be gravitationally rounded, but insufficient to achieve orbital dominance like the eight classical planets of the Solar System. The prototypical dwarf planet is Pluto, which for decades was regarded as a planet before the "dwarf" concept was adopted in 2006.
Dwarf planets are capable of being geologically active, an expectation that was borne out in 2015 by the Dawn mission to Ceres and the New Horizons mission to Pluto. Planetary geologists are therefore particularly interested in them.
Astronomers are in general agreement that at least the nine largest candidates are dwarf planets: in rough order of diameter, Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Sedna, Ceres, and Orcus. Considerable uncertainty remains over the tenth largest candidate, Salacia, which may thus be considered a borderline case. Of these ten, two have been visited by spacecraft (Pluto and Ceres) and seven others have at least one known moon (Eris, Haumea, Makemake, Gonggong, Quaoar, Orcus, and Salacia), which allows their masses and thus an estimate of their densities to be determined. Mass and density in turn can be fit into geophysical models in an attempt to determine the nature of these worlds. Only one, Sedna, has neither been visited nor has any known moons, making an accurate estimate of mass difficult. Some astronomers include many smaller bodies as well, but there is no consensus that these are likely to be dwarf planets.
The term dwarf planet was coined by planetary scientist Alan Stern as part of a three-way categorization of planetary-mass objects in the Solar System: classical planets, dwarf planets, and satellite planets. Dwarf planets were thus conceived of as a category of planet. In 2006, however, the concept was adopted by the International Astronomical Union (IAU) as a category of sub-planetary objects, part of a three-way recategorization of bodies orbiting the Sun: planets, dwarf planets, and small Solar System bodies. Thus Stern and other planetary geologists consider dwarf planets and large satellites to be planets, but since 2006, the IAU and perhaps the majority of astronomers have excluded them from the roster of planets.
History of the concept
Starting in 1801, astronomers discovered Ceres and other bodies between Mars and Jupiter that for decades were considered to be planets. Between then and around 1851, when the number of planets had reached 23, astronomers started using the word asteroid (from Greek, meaning 'star-like' or 'star-shaped') for the smaller bodies and began to distinguish them as minor planets rather than major planets.
With the discovery of Pluto in 1930, most astronomers considered the Solar System to have nine major planets, along with thousands of significantly smaller bodies (asteroids and comets). For almost 50 years, Pluto was thought to be larger than Mercury, but with the discovery in 1978 of Pluto's moon Charon, it became possible to measure Pluto's mass accurately and to determine that it was much smaller than initial estimates: roughly one-twentieth the mass of Mercury, which made Pluto by far the smallest planet. Although it was still more than ten times as massive as the largest object in the asteroid belt, Ceres, it had only one-fifth the mass of Earth's Moon. Furthermore, given Pluto's unusual characteristics, such as its large orbital eccentricity and high orbital inclination, it became evident that it was a different kind of body from any of the other planets.
In the 1990s, astronomers began to find objects in the same region of space as Pluto (now known as the Kuiper belt), and some even farther away.
Many of these shared several of Pluto's key orbital characteristics, and Pluto started being seen as the largest member of a new class of objects, the plutinos. It became clear that either the larger of these bodies would also have to be classified as planets, or Pluto would have to be reclassified, much as Ceres had been reclassified after the discovery of additional asteroids.
This led some astronomers to stop referring to Pluto as a planet. Several terms, including subplanet and planetoid, started to be used for the bodies now known as dwarf planets.
Astronomers were also confident that more objects as large as Pluto would be discovered, and the number of planets would start growing quickly if Pluto were to remain classified as a planet.
Eris (then known as 2003 UB313), a trans-Neptunian object, was discovered in January 2005; it was thought to be slightly larger than Pluto, and some reports informally referred to it as the tenth planet. As a consequence, the issue became a matter of intense debate during the IAU General Assembly in August 2006. The IAU's initial draft proposal included Charon, Eris, and Ceres in the list of planets. After many astronomers objected to this proposal, an alternative was drawn up by the Uruguayan astronomers Julio Ángel Fernández and Gonzalo Tancredi: they proposed an intermediate category for objects large enough to be round but that had not cleared their orbits of planetesimals. Besides dropping Charon from the list, the new proposal also removed Pluto, Ceres, and Eris, because they have not cleared their orbits.
Although concerns were raised about the classification of planets orbiting other stars, the issue was not resolved; it was proposed instead to decide this only when dwarf-planet-size objects start to be observed.
In the immediate aftermath of the IAU definition of dwarf planet, some scientists expressed their disagreement with the IAU resolution. Campaigns included car bumper stickers and T-shirts. Mike Brown (the discoverer of Eris) agrees with the reduction of the number of planets to eight.
NASA announced in 2006 that it would use the new guidelines established by the IAU. Alan Stern, the director of NASA's mission to Pluto, rejects the current IAU definition of planet, both in terms of defining dwarf planets as something other than a type of planet, and in using orbital characteristics (rather than intrinsic characteristics) of objects to define them as dwarf planets. Thus, in 2011, he still referred to Pluto as a planet, and accepted other likely dwarf planets such as Ceres and Eris, as well as the larger moons, as additional planets. Several years before the IAU definition, he used orbital characteristics to separate "überplanets" (the dominant eight) from "unterplanets" (the dwarf planets), considering both types "planets".
Name
Names for large subplanetary bodies include dwarf planet, planetoid (more general term), meso-planet (narrowly used for sizes between Mercury and Ceres), quasi-planet, and (in the transneptunian region) plutoid. Dwarf planet, however, was originally coined as a term for the smallest planets, not the largest sub-planets, and is still used that way by many planetary astronomers.
Alan Stern coined the term dwarf planet, analogous to the term dwarf star, as part of a three-fold classification of planets, and he and many of his colleagues continue to classify dwarf planets as a class of planets. The IAU decided that dwarf planets are not to be considered planets, but kept Stern's term for them. Other terms for the IAU definition of the largest subplanetary bodies that do not have such conflicting connotations or usage include quasi-planet and the older term planetoid ("having the form of a planet"). Michael E. Brown stated that planetoid is "a perfectly good word" that has been used for these bodies for years, and that the use of the term dwarf planet for a non-planet is "dumb", but that it was motivated by an attempt by the IAU division III plenary session to reinstate Pluto as a planet in a second resolution. Indeed, the draft of Resolution 5A had called these median bodies planetoids, but the plenary session voted unanimously to change the name to dwarf planet. The second resolution, 5B, defined dwarf planets as a subtype of planet, as Stern had originally intended, distinguished from the other eight that were to be called "classical planets". Under this arrangement, the twelve planets of the rejected proposal were to be preserved in a distinction between eight classical planets and four dwarf planets. Resolution 5B was defeated in the same session that 5A was passed. Because of the semantic inconsistency of a dwarf planet not being a planet due to the failure of Resolution 5B, alternative terms such as nanoplanet and subplanet were discussed, but there was no consensus among the CSBN to change it.
In most languages equivalent terms have been created by translating dwarf planet more-or-less literally: French planète naine, Spanish planeta enano, German Zwergplanet, Russian karlikovaya planeta, Arabic kaukab qazm, Chinese ǎixíngxīng, and Korean waesohangseong or waehangseong; in Japanese, however, they are called junwakusei, meaning "quasi-planets" or "peneplanets" (pene- meaning "almost").
IAU Resolution 6a of 2006 recognizes Pluto as "the prototype of a new category of trans-Neptunian objects". The name and precise nature of this category were not specified but left for the IAU to establish at a later date; in the debate leading up to the resolution, the members of the category were variously referred to as plutons and plutonian objects but neither name was carried forward, perhaps due to objections from geologists that this would create confusion with their pluton.
On June 11, 2008, the IAU Executive Committee announced a new term, plutoid, and a definition: all trans-Neptunian dwarf planets are plutoids. Other departments of the IAU have rejected the term.
The category of 'plutoid' captured an earlier distinction between the 'terrestrial dwarf' Ceres and the 'ice dwarfs' of the outer Solar system, part of a conception of a threefold division of the Solar System into inner terrestrial planets, central giant planets, and outer ice dwarfs, of which Pluto was the principal member. 'Ice dwarf' also saw some use as an umbrella term for all trans-Neptunian minor planets, or for the ice asteroids of the outer Solar System; one attempted definition was that an ice dwarf "is larger than the nucleus of a normal comet and icier than a typical asteroid."
Since the Dawn mission, it has been recognized that Ceres is a geologically icy body that may have originated from the outer Solar System.
Ceres has since been called an ice dwarf as well.
Criteria
The category dwarf planet arose from a conflict between dynamical and geophysical ideas of what a useful conception of a planet would be. In terms of the dynamics of the Solar System, the major distinction is between bodies that gravitationally dominate their neighbourhood (Mercury through Neptune) and those that do not (such as the asteroids and Kuiper belt objects). A celestial body may have a dynamic (planetary) geology at approximately the mass required for its mantle to become plastic under its own weight, which results in the body acquiring a round shape. Because this requires a much lower mass than gravitationally dominating the region of space near its orbit, there is a population of objects that are massive enough to have a world-like appearance and planetary geology, but not massive enough to clear their neighbourhood. Examples are Ceres in the asteroid belt and Pluto in the Kuiper belt.
Dynamicists usually prefer using gravitational dominance as the threshold for planethood, because from their perspective smaller bodies are better grouped with their neighbours, e.g. Ceres as simply a large asteroid and Pluto as a large Kuiper belt object. Geoscientists usually prefer roundness as the threshold, because from their perspective the internally driven geology of a body like Ceres makes it more similar to a classical planet like Mars, than to a small asteroid that lacks internally driven geology. This necessitated the creation of the category of dwarf planets to describe this intermediate class.
Orbital dominance
Alan Stern and Harold F. Levison introduced a parameter Λ (upper-case lambda) in 2000, expressing the likelihood of an encounter resulting in a given deflection of orbit. The value of this parameter in Stern's model is proportional to the square of the mass and inversely proportional to the period. This value can be used to estimate the capacity of a body to clear the neighbourhood of its orbit: a body with Λ > 1 will eventually clear it. A gap of five orders of magnitude in Λ was found between the smallest terrestrial planets and the largest asteroids and Kuiper belt objects.
Using this parameter, Steven Soter and other astronomers argued for a distinction between planets and dwarf planets based on the inability of the latter to "clear the neighbourhood around their orbits": planets are able to remove smaller bodies near their orbits by collision, capture, or gravitational disturbance (or establish orbital resonances that prevent collisions), whereas dwarf planets lack the mass to do so. Soter went on to propose a parameter he called the planetary discriminant, designated with the symbol µ (mu), that represents an experimental measure of the actual degree of cleanliness of the orbital zone: µ is calculated by dividing the mass of the candidate body by the total mass of the other objects that share its orbital zone, and a body with µ > 100 is deemed to have cleared it.
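As a minimal sketch of how the discriminant works, µ is just a mass ratio. The zone masses in the Python example below are rough, assumed figures chosen for illustration, not Soter's derived values:

```python
def planetary_discriminant(body_mass_kg: float, zone_mass_kg: float) -> float:
    """Soter's mu: the candidate's mass divided by the combined mass of
    the other bodies sharing its orbital zone (mu > 100 means cleared)."""
    return body_mass_kg / zone_mass_kg

# Illustrative, assumed zone masses; the body masses are standard values.
print(planetary_discriminant(5.97e24, 3.5e18))  # Earth: ~1.7e6, far above 100
print(planetary_discriminant(9.4e20, 2.9e21))   # Ceres: ~0.3, far below 100
```

Even with generous assumptions about the debris left near Earth's orbit, the two cases fall on opposite sides of the threshold by many orders of magnitude.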
Jean-Luc Margot refined Stern and Levison's concept to produce a similar parameter Π (upper-case Pi). It is based on theory, avoiding the empirical data used by µ. A value of Π > 1 indicates a planet, and there is again a gap of several orders of magnitude between planets and dwarf planets.
There are several other schemes that try to differentiate between planets and dwarf planets, but the 2006 definition uses this concept.
Hydrostatic equilibrium
Enough internal pressure, caused by the body's gravitation, will turn a body plastic, and enough plasticity will allow high elevations to sink and hollows to fill in, a process known as gravitational relaxation. Bodies smaller than a few kilometers are dominated by non-gravitational forces and tend to have an irregular shape and may be rubble piles. Larger objects, where gravity is significant but not dominant, are potato-shaped; the more massive the body, the higher its internal pressure, the more solid it is and the more rounded its shape, until the pressure is enough to overcome its compressive strength and it achieves hydrostatic equilibrium. Then, a body is as round as it is possible to be, given its rotation and tidal effects, and is an ellipsoid in shape. This is the defining limit of a dwarf planet.
If an object is in hydrostatic equilibrium, a global layer of liquid on its surface would form a surface of the same shape as the body, apart from small-scale surface features such as craters and fissures. The body will have a spherical shape if it does not rotate and an ellipsoidal one if it does. The faster it rotates, the more oblate or even scalene it becomes. If such a rotating body were heated until it melted, its shape would not change. The extreme example of a body that may be scalene due to rapid rotation is Haumea, which is twice as long on its major axis as it is at the poles. If the body has a massive nearby companion, then tidal forces gradually slow its rotation until it is tidally locked; that is, it always presents the same face to its companion. Tidally locked bodies are also scalene, though sometimes only slightly so. Earth's Moon is tidally locked, as are all the rounded satellites of the gas giants. Pluto and Charon are tidally locked to each other, as are Eris and Dysnomia, and probably also Orcus and Vanth.
There are no specific size or mass limits for dwarf planets, as those are not defining features. There is no clear upper limit: an object very far out in the Solar System that is more massive than Mercury might not have had time to clear its neighbourhood, and such a body would fit the definition of dwarf planet rather than planet. Indeed, Mike Brown set out to find such an object. The lower limit is determined by the requirements of achieving and retaining hydrostatic equilibrium, but the size or mass at which an object attains and retains equilibrium depends on its composition and thermal history, not simply its mass. An IAU 2006 press release question-and-answer section estimated that objects with mass above 5×10^20 kg and radius greater than 400 km would "normally" be in hydrostatic equilibrium (the shape "would normally be determined by self-gravity"), but that all borderline cases would need to be determined by observation. This is close to what as of 2019 is believed to be roughly the limit for objects beyond Neptune that are fully compact, solid bodies, with Salacia being a borderline case both for the 2006 Q&A expectations and in more recent evaluations, and with Orcus being just above the expected limit. No other body with a measured mass is close to the expected mass limit, though several without a measured mass approach the expected size limit.
Population of dwarf planets
Though the definition of a dwarf planet is clear, evidence about whether a given trans-Neptunian object is large and malleable enough to be shaped by its own gravitational field is often inconclusive. There are also outstanding questions relating to the interpretation of the IAU criterion in certain instances. Consequently, the number of currently confirmed TNOs that meet the hydrostatic equilibrium criterion is uncertain.
The three objects under consideration during the debates leading up to the 2006 IAU acceptance of the category of dwarf planet – Ceres, Pluto and Eris – are generally accepted as dwarf planets, including by those astronomers who continue to classify dwarf planets as planets. Only one of them – Pluto – has been observed in enough detail to verify that its current shape fits what would be expected from hydrostatic equilibrium. Ceres is close to equilibrium, but some gravitational anomalies remain unexplained. Eris is generally assumed to be a dwarf planet because it is more massive than Pluto.
In order of discovery, these three bodies are:
Ceres – discovered January 1, 1801, and announced January 24, 45 years before Neptune. Considered a planet for half a century before reclassification as an asteroid. Considered a dwarf planet by the IAU since the adoption of Resolution 5A on August 24, 2006.
Pluto – discovered February 18, 1930, and announced March 13. Considered a planet for 76 years. Explicitly reclassified as a dwarf planet by the IAU with Resolution 6A on August 24, 2006. Five known moons.
Eris (2003 UB313) – discovered January 5, 2005, and announced July 29. Called the "tenth planet" in media reports. Considered a dwarf planet by the IAU since the adoption of Resolution 5A on August 24, 2006, and named by the IAU dwarf-planet naming committee on September 13 of that year. One known moon.
The IAU only established guidelines for which committee would oversee the naming of likely dwarf planets: any unnamed trans-Neptunian object with an absolute magnitude brighter than +1 (and hence a minimum diameter of 838 km at the maximum geometric albedo of 1) was to be named by a joint committee consisting of the Minor Planet Center and the planetary working group of the IAU. At the time (and still as of 2023), the only bodies to meet this threshold were Haumea and Makemake. These bodies are generally assumed to be dwarf planets, although they have not yet been demonstrated to be in hydrostatic equilibrium, and there is some disagreement for Haumea.
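The 838 km figure follows from the standard minor-planet relation between absolute magnitude H, geometric albedo p, and diameter D, namely D = (1329 km / √p) × 10^(−H/5). A minimal Python check of the threshold quoted above, for illustration only:

```python
import math

def diameter_km(abs_magnitude: float, geometric_albedo: float) -> float:
    """Standard minor-planet relation: D = 1329 km / sqrt(p) * 10**(-H/5)."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10 ** (-abs_magnitude / 5)

# An H = +1 body at the maximum geometric albedo of 1 is ~838 km across;
# any lower (darker) albedo implies a larger diameter at the same brightness.
print(f"{diameter_km(1.0, 1.0):.1f} km")  # 838.5 km
print(f"{diameter_km(1.0, 0.1):.1f} km")  # 2651.7 km at a dark 10% albedo
```

Because a lower albedo can only make the body larger, the magnitude cut guarantees a minimum diameter of about 838 km regardless of the actual surface reflectivity.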
These five bodies – the three under consideration in 2006 (Pluto, Ceres and Eris) plus the two named in 2008 (Haumea and Makemake) – are commonly presented as the dwarf planets of the Solar System, though the limiting factor (albedo) is not what defines an object as a dwarf planet.
The astronomical community commonly refers to other larger TNOs as dwarf planets as well. At least four additional bodies meet the preliminary criteria of Brown, of Tancredi et al., of Grundy et al., and of Emery et al. for identifying dwarf planets, and are generally called dwarf planets by astronomers as well:
For instance, JPL/NASA called Gonggong a dwarf planet after observations in 2016, and Simon Porter of the Southwest Research Institute spoke of "the big eight [TNO] dwarf planets" in 2018, referring to Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Orcus, and Sedna. The IAU itself has called Quaoar a dwarf planet in a 2022–2023 annual report.
More bodies have been proposed, including by Brown, by Tancredi et al., and by Sheppard et al. Most of the larger bodies have moons, which enables a determination of their mass and thus their density, which in turn informs estimates of whether they could be dwarf planets. The largest TNOs that are not known to have moons are Sedna, 2002 MS4, and Ixion. In particular, Salacia has a known mass and diameter, putting it as a borderline case by the IAU's 2006 Q&A.
At the time Makemake and Haumea were named, it was thought that trans-Neptunian objects (TNOs) with icy cores would require a diameter of only about 400 km (250 mi), or 3% the size of Earth (the size of the moons Mimas, the smallest moon that is round, and Proteus, the largest that is not), to relax into gravitational equilibrium. Researchers thought that the number of such bodies could prove to be around 200 in the Kuiper belt, with thousands more beyond.
This was one of the reasons (keeping the roster of 'planets' to a reasonable number) that Pluto was reclassified in the first place.
Research since then has cast doubt on the idea that bodies that small could have achieved or maintained equilibrium under the typical conditions of the Kuiper belt and beyond.
Individual astronomers have recognized a number of objects as dwarf planets or as likely to prove to be dwarf planets. In 2008, Tancredi et al. advised the IAU to officially accept Orcus, Sedna and Quaoar as dwarf planets (Gonggong was not yet known), though the IAU did not address the issue then and has not since. Tancredi also considered five further TNOs to most likely be dwarf planets as well.
Since 2011, Brown has maintained a list of hundreds of candidate objects, ranging from "nearly certain" to "possible" dwarf planets, based solely on estimated size.
As of September 13, 2019, Brown's list identifies ten trans-Neptunian objects with diameters then thought to be greater than 900 km (the four named by the IAU plus Gonggong, Quaoar, Sedna, Orcus, 2002 MS4, and Salacia) as "near certain" to be dwarf planets, and another 16, with diameters greater than 600 km, as "highly likely". Notably, Gonggong may have a larger diameter than Pluto's round moon Charon (1212 km).
But in 2019, Grundy et al. proposed, based on their studies of Gǃkúnǁʼhòmdímà, that dark, low-density bodies smaller than about 900–1000 km in diameter, such as Salacia and Varda, never fully collapsed into solid planetary bodies and retain internal porosity from their formation (in which case they could not be dwarf planets). They accept that the brighter (albedo > ≈0.2) or denser (> ≈1.4 g/cc) Orcus and Quaoar probably were fully solid.
Salacia was later found to have a somewhat higher density, comparable within uncertainties to that of Orcus, though still with a very dark surface. Despite this determination, Grundy et al. call it "dwarf-planet sized", while calling Orcus a dwarf planet. Later studies on Varda suggest that its density may also be high, though a low density could not be excluded.
In 2023, Emery et al. wrote that near-infrared spectroscopy by the James Webb Space Telescope (JWST) in 2022 suggests that Sedna, Gonggong, and Quaoar underwent internal melting, differentiation, and chemical evolution, like the larger dwarf planets Pluto, Eris, Haumea, and Makemake, but unlike "all smaller KBOs". This is because light hydrocarbons are present on their surfaces (e.g. ethane, acetylene, and ethylene), which implies that methane is continuously being resupplied, and that methane would likely come from internal geochemistry. On the other hand, the surfaces of Sedna, Gonggong, and Quaoar have low abundances of CO and CO2, similar to Pluto, Eris, and Makemake, but in contrast to smaller bodies. This suggests that the threshold for dwarf planethood in the trans-Neptunian region is a diameter of ~900 km (thus including only Pluto, Eris, Haumea, Makemake, Gonggong, Quaoar, Orcus, and Sedna), and that even Salacia may not be a dwarf planet. A 2023 study of 2002 MS4 shows that it probably has an extremely large crater, whose depth takes up 5.7% of its diameter: this is proportionally larger than the Rheasilvia crater on Vesta, which is the reason Vesta is not usually considered a dwarf planet today.
In 2024, Kiss et al. found that Quaoar has an ellipsoidal shape incompatible with hydrostatic equilibrium for its current spin. They hypothesised that Quaoar originally had a rapid rotation and was in hydrostatic equilibrium, but that its shape became "frozen in" and did not change as it spun down due to tidal forces from its moon Weywot. If so, this would resemble the situation of Saturn's moon Iapetus, which is too oblate for its current spin. Iapetus is generally still considered a planetary-mass moon nonetheless, though not always.
Most likely dwarf planets
The trans-Neptunian objects in the following tables, except Salacia, are agreed by Brown, Tancredi et al., Grundy et al., and Emery et al. to be probable dwarf planets, or close to it. Salacia has been included as the largest TNO not generally agreed to be a dwarf planet; it is a borderline body by many criteria, and is therefore italicized. Charon, a moon of Pluto that was proposed as a dwarf planet by the IAU in 2006, is included for comparison. Those objects that have absolute magnitude greater than +1, and so meet the threshold of the joint planet–minor planet naming committee of the IAU, are highlighted, as is Ceres, which the IAU has assumed is a dwarf planet since they first debated the concept.
The masses of given dwarf planets are listed for their systems (if they have satellites) with exceptions for Pluto and Orcus.
Symbols
Ceres and Pluto received planetary symbols, as they were considered to be planets when they were discovered. By the time the others were discovered, planetary symbols had mostly fallen out of use among astronomers. Unicode includes symbols for Quaoar, Sedna, Orcus, Haumea, Eris, Makemake, and Gonggong that are primarily used by astrologers; they were devised by Denis Moskowitz, a software engineer in Massachusetts. NASA has used his Haumea, Eris, and Makemake symbols, as well as the traditional astrological symbol for Pluto when referring to it as a dwarf planet. Symbols have been proposed for the next-largest named candidates, but do not have consistent usage among astrologers. The Unicode proposal for Quaoar, Orcus, Haumea, Makemake, and Gonggong also mentions symbols for the named objects over 600 km in diameter: Salacia, Varda, Ixion, Gǃkúnǁʼhòmdímà, and Varuna.
Exploration
As of 2024, only two missions have targeted and explored dwarf planets up close. On March 6, 2015, the Dawn spacecraft entered orbit around Ceres, becoming the first spacecraft to visit a dwarf planet. On July 14, 2015, the New Horizons space probe flew by Pluto and its five moons.
Ceres displays evidence of active geology such as salt deposits and cryovolcanoes, while Pluto has water-ice mountains drifting in nitrogen-ice glaciers, as well as a significant atmosphere.
Ceres evidently has brine percolating through its subsurface, while there is evidence that Pluto has an actual subsurface ocean.
Dawn had previously orbited the asteroid Vesta. Saturn's moon Phoebe has been imaged by Cassini and before that by Voyager 2, which also encountered Neptune's moon Triton. All three bodies show evidence of once being dwarf planets, and their exploration helps clarify the evolution of dwarf planets.
New Horizons has captured distant images of Triton, Quaoar, Haumea, Eris, and Makemake, as well as the smaller candidates Ixion, , and .
One of the China National Space Administration's two Shensuo probes has been proposed to visit Quaoar in 2040.
Similar objects
A number of bodies physically resemble dwarf planets. These include former dwarf planets, which may still have equilibrium shape or evidence of active geology; planetary-mass moons, which meet the physical but not the orbital definition for dwarf planet; and Charon in the Pluto–Charon system, which is arguably a binary dwarf planet. The categories may overlap: Triton, for example, is both a former dwarf planet and a planetary-mass moon.
Former dwarf planets
Vesta, the next-most-massive body in the asteroid belt after Ceres, was once in hydrostatic equilibrium and is roughly spheroidal, deviating mainly due to massive impacts that formed the Rheasilvia and Veneneia craters after it solidified.
Its dimensions are not consistent with it currently being in hydrostatic equilibrium.
Triton is more massive than Eris or Pluto, has an equilibrium shape, and is thought to be a captured dwarf planet (likely a member of a binary system), but it no longer directly orbits the Sun.
Phoebe is a captured centaur that, like Vesta, is no longer in hydrostatic equilibrium, but is thought to have been so early in its history due to radiogenic heating.
Planetary-mass moons
At least nineteen moons have equilibrium shape from having relaxed under self-gravity at some point, though some have since frozen solid and are no longer in equilibrium. Seven are more massive than either Eris or Pluto. These moons are not physically distinct from the dwarf planets, but do not fit the IAU definition because they do not directly orbit the Sun. (Indeed, Neptune's moon Triton is a captured dwarf planet, and Ceres formed in the same region of the Solar System as the moons of Jupiter and Saturn.) Alan Stern calls planetary-mass moons "satellite planets", one of three categories of planet, together with dwarf planets and classical planets. The term planemo ("planetary-mass object") also covers all three populations.
Charon
There has been some debate as to whether the Pluto–Charon system should be considered a double dwarf planet.
In a draft resolution for the IAU definition of planet, both Pluto and Charon were considered planets in a binary system. The IAU currently says Charon is not considered a dwarf planet but rather a satellite of Pluto, though the idea that Charon might qualify as a dwarf planet may be considered at a later date. Nonetheless, it is no longer clear that Charon is in hydrostatic equilibrium. Also, the location of the barycenter depends not only on the relative masses of the bodies, but also on the distance between them; the barycenter of the Sun–Jupiter orbit, for example, lies outside the Sun, but they are not considered a binary object. Thus, a formal definition of what constitutes a binary (dwarf) planet must be established before Pluto and Charon are formally defined as binary dwarf planets.
| Physical sciences | Planetary science | null |
1291713 | https://en.wikipedia.org/wiki/African%20civet | African civet | The African civet (Civettictis civetta) is a large viverrid native to sub-Saharan Africa, where it is considered common and widely distributed in woodlands and secondary forests. It is listed as Least Concern on the IUCN Red List since 2008. In some countries, it is threatened by hunting, and wild-caught individuals are kept for producing civetone for the perfume industry.
The African civet is primarily nocturnal and spends the day sleeping in dense vegetation, but wakes up at sunset. It is a solitary mammal with a unique coloration: the black and white blotches covering its coarse pelage and rings on the tail are an effective cryptic pattern. The black bands surrounding its eyes closely resemble those of the raccoon. Other distinguishing features are its disproportionately large hindquarters and its erectile dorsal crest. It is an omnivorous generalist, preying on small vertebrates, invertebrates, eggs, carrion, and vegetable matter. It is one of the few carnivores capable of eating toxic invertebrates such as termites and millipedes. It detects prey primarily by smell and sound rather than by sight. It is the only living member of the genus Civettictis.
Taxonomy and evolution
Viverra civetta was the scientific name introduced in 1776 by Johann Christian Daniel von Schreber when he described African civets based on previous descriptions and accounts. Schreber is therefore considered the binomial authority.
In 1915, Reginald Innes Pocock described the structural differences between the feet of African and large Indian civet (Viverra zibetha) specimens in the zoological collection of the Natural History Museum, London. Because of marked differences, he proposed Civettictis as a new genus, with C. civetta as the only species.
The following subspecies were proposed in the 20th century:
C. c. congica described by Ángel Cabrera in 1929 was a zoological specimen from the upper Congo River.
C. c. schwarzi was proposed by Cabrera in 1929 for African civet specimens from East Africa.
C. c. australis described by Bengt G. Lundholm in 1955 was based on a male type specimen and three paratype specimens collected near the Olifants River in northeastern Transvaal province.
C. c. volkmanni also described by Lundholm in 1955 was a specimen from the vicinity of Otavi in Namibia.
C. c. pauli described in 2000 by Dieter Kock, Künzel and Rayaleh was a specimen collected close to the coast near Djibouti.
A 1969 study noted that this civet showed enough differences from the rest of the viverrines in terms of dentition to be classified under its own genus.
Evolution
A 2006 phylogenetic study showed that the African civet is closely related to the genus Viverra. It was estimated that the Civettictis-Viverra clade diverged from Viverricula around 16.2 Mya; the African civet split from Viverra 12.3 Mya. The authors suggested that the subfamily Viverrinae should be bifurcated into Genettinae (Poiana and Genetta) and Viverrinae (Civettictis, Viverra, and Viverricula). The following cladogram is based on this study.
Etymology
The generic name Civettictis is a fusion of the French word civette and the Greek word ictis, meaning "weasel". The specific name civetta and the common name "civet" come from the French civette or the Arabic zabād or sinnawr al-zabād ("civet cat").
Local and indigenous names
In Tigrinya: zibad
Characteristics
The African civet has coarse and wiry fur that varies in colour from white to creamy yellow to reddish on the back. The stripes, spots, and blotches are deep brown to black. Horizontal lines are prominent on the hind limbs; spots are normally present on its midsection and fade into vertical stripes above the forelimbs. Its muzzle is pointed, and its ears are small and rounded. A black band stretches across its small eyes, and two black bands encircle its short, broad neck. The erectile dorsal crest follows the spine from the neck to the base of the tail; its hairs are longer than those of the rest of the pelage.
The sagittal crest of its skull is well developed, providing a large area for attachment of the temporal muscle. The zygomatic arch is robust and provides a large area for attachment of the masseter muscle. This musculature and its strong mandible give it a powerful bite. Its dental formula is . Its black paws are compact with hairless soles and five digits per manus, in which the first digit is slightly set back from the others. Its long, curved claws are semi-retractile. Its head-and-body length is , with a long tail. The average weight is within a range of .
It is the largest viverrid in Africa; among the world's viverrids, only the binturong is likely heavier. Its shoulder height averages . Both males and females have perineal and anal glands, which are bigger in males. The perineal glands are located between the scrotum and the penis in males, and between the anus and the vulva in females.
Distribution and habitat
The African civet typically sleeps during the day in tall grasses near water sources in central and southern Africa. It often inhabits savannahs and forests, and sometimes lives near rivers, as the tall grasses and thickets there provide necessary cover during the day. In Guinea's National Park of Upper Niger, it was recorded during surveys conducted in 1996 to 1997.
In Gabon's Moukalaba-Doudou National Park, it was photographed close to forested areas during a survey in 2012.
In Batéké Plateau National Park, it was recorded in gallery forest along the Mpassa River during surveys conducted between June 2014 and May 2015.
In the Republic of Congo, it was recorded in the Western Congolian forest–savanna mosaic of Odzala-Kokoua National Park during surveys in 2007.
In the transboundary Dinder–Alatash (Sudan and Ethiopia) protected area complex it was recorded during surveys between 2015 and 2018. It is also frequently spotted in Ethiopia's northern Degua Tembien massif.
Behaviour and ecology
African civets deposit their feces in large piles called latrines, or specifically "civetries". The latrines are characterized by fruits, seeds, exoskeletons of insect and millipede rings, and occasionally clumps of grass. The role of civet latrines as a mechanism of seed dispersal and forest regeneration is still being researched. Like felids, male African civets scent mark by spraying urine backwards.
African civets are typically solitary creatures. They use their perineal gland secretion to mark their territories around their civetries. These markings typically follow common routes and paths and lie within 100 meters of civetries 96.72% of the time.
If an African civet feels threatened, it raises its dorsal crest to make itself look larger and thus more formidable and dangerous to attack. This behavior is a defense against predators.
Feeding
Research in southeastern Nigeria revealed that the African civet has an omnivorous diet. It feeds on rodents like giant pouched rats (Cricetomys), Temminck's mouse (Mus musculoides), Tullberg's soft-furred mouse (Praomys tulbergi), greater cane rat (Thryonomys swinderianus), and typical striped grass mouse (Lemniscomys striatus), amphibians and small reptiles like Hallowell's toad (Amietophrynus maculatus), herald snake (Crotaphopeltis hotamboeia), black-necked spitting cobra (Naja nigricollis), common agama (Agama agama), and Mabuya skinks, birds, millipedes, and insects such as Orthoptera, Coleoptera, and Blattodea, as well as carrion, eggs, fruits (such as Strychnos), berries and seeds.
African civets can take prey as large as hares but can be somewhat clumsy killers of sizable prey. The stomach contents of three African civets in Botswana consisted foremost of husks of fan palm (Hyphaene petersiana) and jackalberry (Diospyros mespiliformis), with some remains of African red toad (Schismaderma carens), Acrididae grasshoppers, and larvae of Dytiscidae beetles.
Green grass is also frequently found in feces, and this seems to be linked to the eating of snakes and amphibians.
Reproduction
Captive females are polyestrous. Mating lasts 40 to 70 seconds.
In Southern Africa, African civets probably mate from October to November, and females give birth in the rainy season between January and February.
The average lifespan of a captive African civet is 15 to 20 years. Females create a nest which is normally in dense vegetation and commonly in a hole dug by another animal. Female African civets normally give birth to one to four young. The young are born in advanced stages compared to most carnivores. They are covered in a dark, short fur and can crawl at birth. The young leave the nest after 18 days but are still dependent on the mother for milk and protection for another two months.
Threats
In 2006, it was estimated that about 9,400 African civets are hunted yearly in the Nigerian part and more than 5,800 in the Cameroon part of the Cross-Sanaga-Bioko coastal forests.
Skins and skulls of African civets were found in 2007 at the Dantokpa Market in southern Benin, where it was among the most expensive small carnivores. Local hunters considered it a rare species, indicating that the population declined due to hunting for trade as bushmeat.
The African civet has historically been hunted for the secretion of perineal glands. This secretion is a white or yellow waxy substance called civetone, which has been used as a basic ingredient for many perfumes for hundreds of years. In Ethiopia, African civets are hunted alive, and are kept in small cages. Most die within three weeks after capture, most likely due to stress. Extraction of the civetone is cruel and has been criticised by animal rights activists. The writer Daniel Defoe once invested in a scheme to raise civets in captivity for their secretions.
The population of African civet in Botswana is listed under Appendix III of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES).
| Biology and health sciences | Other carnivora | Animals |
1291730 | https://en.wikipedia.org/wiki/Backfeeding | Backfeeding | Backfeeding is the flow of electric power in the direction reverse to that of the generally understood or typical flow of power. Depending on the source of the power, this reverse flow may be intentional or unintentional. If not prevented (in the case of unintentional backfeeding) or properly performed (in cases of intentional backfeeding), backfeeding may present unanticipated hazards to electrical grid equipment and service personnel.
Types of backfeeding
Intentional backfeeding
Development and economization of consumer power generation equipment such as wind turbines and photovoltaic systems has led to an increase in the number of consumers that may produce more electrical power than they consume during peak generating conditions. If supported by the consumer's electric utility provider, the excess power generated may be fed back into the electrical grid. This process makes the typical consumer a temporary producer while the flow of electrical power remains reversed. When backfeeding is performed this way, electric utility providers will install a specially engineered electrical meter that is capable of net metering.
Unintentional backfeeding
A common source of unintentional backfeeding is an electrical generator (typically a portable generator) that is improperly connected to a building electrical system. A properly installed electrical generator incorporates the use of a transfer switch or generator interlock kit to ensure the incoming electrical service line is disconnected when the generator is providing power to the building. In the absence (or improper usage) of a transfer switch, unintentional backfeeding may occur when the power provided by the electrical generator is able to flow over the electrical service line. Because an electrical transformer is capable of operating in both directions, electrical power generated from equipment on the consumer's premises can backfeed through the transformer and energize the distribution line to which the transformer is connected.
Intrinsic backfeeding
Backfeeding also exists in other instances where a location that is typically a generator becomes a consumer. This is commonly seen when an electrical generation plant is shut down or operating at such a reduced capacity that its parasitic load becomes greater than its generated power. The parasitic power load is the result of the usage of: pumps, facility lighting, HVAC equipment, and other control equipment that must remain active regardless of actual electrical power production. Electrical utilities often take steps to decrease their overall parasitic load to minimize this type of backfeeding and improve efficiency.
Grid design considerations
For manufacturing cost and operational simplicity reasons, most circuit (overcurrent) protection and power quality control (voltage regulation) devices used by electric utility companies are designed with the assumption that power always flows in one direction. An interconnection agreement can be arranged for equipment designed to backfeed from the consumer's equipment to the electrical utility provider's distribution system. This type of interconnection can involve nontrivial engineering and usage of costly specialized equipment designed to keep distribution circuits and equipment properly protected. Such costs may be minimized by limiting distributed generation capacity to less than that which is consumed locally, and guaranteeing this condition by installing a reverse-power cutoff relay that opens if backfeeding occurs.
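As a toy illustration of the reverse-power cutoff just mentioned, the sketch below shows the bare decision logic such a relay implements. It is a minimal sketch, not utility-grade protection code; the sign convention, threshold, and trip delay are all assumptions made for the example.

    # Minimal sketch of reverse-power cutoff logic (illustrative assumptions only).
    # Convention assumed here: positive watts = power drawn from the grid,
    # negative watts = power flowing back toward the grid.
    TRIP_THRESHOLD_W = -100.0   # tolerate brief, small reverse flows
    TRIP_DELAY_SAMPLES = 3      # require sustained reverse flow before opening

    def should_trip(readings_w):
        """True if the most recent samples all show sustained reverse flow."""
        recent = readings_w[-TRIP_DELAY_SAMPLES:]
        return len(recent) == TRIP_DELAY_SAMPLES and all(
            w < TRIP_THRESHOLD_W for w in recent
        )

    # Example: local generation exceeds local load for several samples in a row.
    samples = [450.0, 120.0, -250.0, -310.0, -280.0]
    if should_trip(samples):
        print("Sustained reverse power detected; opening the interconnection.")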
Safety and operational hazards
Because it involves transfer of significant amounts of energy, backfeeding must be carefully controlled and monitored. Personnel working on equipment subject to backfeeding must be aware of all possible power sources, and follow systematic protocols to ensure that equipment is fully de-energized before commencing work, or use special equipment and techniques suitable for working on live equipment.
When working on de-energized power conductors, lineworkers attach temporary protective grounding assemblies or "protective ground sets", which short all conductors to each other and to an earth ground. This ensures that no wires can become energized, whether by accidental switching or by unintentional backfeeding.
Because of the hazards presented by unintentional backfeeding, the use of equipment that defeats engineered or standardized safety mechanisms, such as double-ended power cords (an electrical cord with a male electrical plug on both ends), is illegal and violates the United States National Electrical Code.
| Technology | Concepts | null |
1293340 | https://en.wikipedia.org/wiki/Classical%20field%20theory | Classical field theory | A classical field theory is a physical theory that predicts how one or more fields in physics interact with matter through field equations, without considering effects of quantization; theories that incorporate quantum mechanics are called quantum field theories. In most contexts, 'classical field theory' is specifically intended to describe electromagnetism and gravitation, two of the fundamental forces of nature.
A physical field can be thought of as the assignment of a physical quantity at each point of space and time. For example, in a weather forecast, the wind velocity during a day over a country is described by assigning a vector to each point in space. Each vector represents the direction of the movement of air at that point, so the set of all wind vectors in an area at a given point in time constitutes a vector field. As the day progresses, the directions in which the vectors point change as the directions of the wind change.
The first field theories, Newtonian gravitation and Maxwell's equations of electromagnetic fields were developed in classical physics before the advent of relativity theory in 1905, and had to be revised to be consistent with that theory. Consequently, classical field theories are usually categorized as non-relativistic and relativistic. Modern field theories are usually expressed using the mathematics of tensor calculus. A more recent alternative mathematical formalism describes classical fields as sections of mathematical objects called fiber bundles.
History
Michael Faraday coined the term "field" and introduced lines of force to explain electric and magnetic phenomena. In 1851, Lord Kelvin formalized the concept of the field in different areas of physics.
Non-relativistic field theories
Some of the simplest physical fields are vector force fields. Historically, the first time that fields were taken seriously was with Faraday's lines of force when describing the electric field. The gravitational field was then similarly described.
Newtonian gravitation
The first field theory of gravity was Newton's theory of gravitation in which the mutual interaction between two masses obeys an inverse square law. This was very useful for predicting the motion of planets around the Sun.
Any massive body M has a gravitational field g which describes its influence on other massive bodies. The gravitational field of M at a point r in space is found by determining the force F that M exerts on a small test mass m located at r, and then dividing by m:
\mathbf{g}(\mathbf{r}) = \frac{\mathbf{F}(\mathbf{r})}{m}
Stipulating that m is much smaller than M ensures that the presence of m has a negligible influence on the behavior of M.
According to Newton's law of universal gravitation, F(r) is given by
\mathbf{F}(\mathbf{r}) = -\frac{G M m}{r^2} \hat{\mathbf{r}}
where \hat{\mathbf{r}} is a unit vector pointing along the line from M to m, and G is Newton's gravitational constant. Therefore, the gravitational field of M is
\mathbf{g}(\mathbf{r}) = -\frac{G M}{r^2} \hat{\mathbf{r}}
The experimental observation that inertial mass and gravitational mass are equal to unprecedented levels of accuracy leads to the identification of the gravitational field strength as identical to the acceleration experienced by a particle. This is the starting point of the equivalence principle, which leads to general relativity.
For a discrete collection of masses, M_i, located at points r_i, the gravitational field at a point r due to the masses is
\mathbf{g}(\mathbf{r}) = -G \sum_i \frac{M_i (\mathbf{r} - \mathbf{r}_i)}{|\mathbf{r} - \mathbf{r}_i|^3}
If we have a continuous mass distribution ρ instead, the sum is replaced by an integral,
\mathbf{g}(\mathbf{r}) = -G \int \frac{\rho(\mathbf{x}) (\mathbf{r} - \mathbf{x})}{|\mathbf{r} - \mathbf{x}|^3} \, d^3\mathbf{x}
Note that the direction of the field points from the position r to the position of the masses ri; this is ensured by the minus sign. In a nutshell, this means all masses attract.
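The superposition sum above translates directly into a short numerical sketch. The masses, positions, and field point below are made up for illustration:

    import numpy as np

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def gravitational_field(r, masses, positions):
        """Field g(r) of point masses M_i at positions r_i, by superposition."""
        g = np.zeros(3)
        for M_i, r_i in zip(masses, positions):
            d = r - r_i  # vector from the mass to the field point
            g += -G * M_i * d / np.linalg.norm(d) ** 3  # minus sign: attraction
        return g

    # By symmetry, the field midway between two equal masses cancels to zero.
    masses = [5.0e10, 5.0e10]
    positions = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
    print(gravitational_field(np.array([1.0, 0.0, 0.0]), masses, positions))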
In integral form, Gauss's law for gravity is
\oint_{\partial V} \mathbf{g} \cdot d\mathbf{A} = -4\pi G M
while in differential form it is
\nabla \cdot \mathbf{g} = -4\pi G \rho
Therefore, the gravitational field g can be written in terms of the gradient of a gravitational potential \phi_g(\mathbf{r}):
\mathbf{g}(\mathbf{r}) = -\nabla \phi_g(\mathbf{r})
This is a consequence of the gravitational force F being conservative.
Electromagnetism
Electrostatics
A charged test particle with charge q experiences a force F based solely on its charge. We can similarly describe the electric field E generated by the source charge Q so that F = qE:
\mathbf{E}(\mathbf{r}) = \frac{\mathbf{F}(\mathbf{r})}{q}
Using this and Coulomb's law, the electric field due to a single charged particle is
\mathbf{E}(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0} \frac{Q}{r^2} \hat{\mathbf{r}}
The electric field is conservative, and hence is given by the gradient of a scalar potential,
\mathbf{E}(\mathbf{r}) = -\nabla V(\mathbf{r})
Gauss's law for electricity is, in integral form,
\oint \mathbf{E} \cdot d\mathbf{A} = \frac{Q}{\varepsilon_0}
while in differential form it is
\nabla \cdot \mathbf{E} = \frac{\rho_e}{\varepsilon_0}
Magnetostatics
A steady current I flowing along a path ℓ will exert a force on nearby charged particles that is quantitatively different from the electric field force described above. The force exerted by I on a nearby charge q with velocity v is
\mathbf{F}(\mathbf{r}) = q \mathbf{v} \times \mathbf{B}(\mathbf{r})
where B(r) is the magnetic field, which is determined from I by the Biot–Savart law:
\mathbf{B}(\mathbf{r}) = \frac{\mu_0 I}{4\pi} \int \frac{d\boldsymbol{\ell} \times \hat{\mathbf{r}}}{r^2}
The magnetic field is not conservative in general, and hence cannot usually be written in terms of a scalar potential. However, it can be written in terms of a vector potential, A(r):
\mathbf{B}(\mathbf{r}) = \nabla \times \mathbf{A}(\mathbf{r})
Gauss's law for magnetism in integral form is
\oint \mathbf{B} \cdot d\mathbf{A} = 0
while in differential form it is
\nabla \cdot \mathbf{B} = 0
The physical interpretation is that there are no magnetic monopoles.
Electrodynamics
In general, in the presence of both a charge density ρ(r, t) and current density J(r, t), there will be both an electric and a magnetic field, and both will vary in time. They are determined by Maxwell's equations, a set of differential equations which directly relate E and B to the electric charge density (charge per unit volume) ρ and current density (electric current per unit area) J.
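For reference, the set of equations meant here is the standard one; in SI units and differential form, Maxwell's equations read

\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}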
Alternatively, one can describe the system in terms of its scalar and vector potentials V and A. A set of integral equations known as retarded potentials allow one to calculate V and A from ρ and J, and from there the electric and magnetic fields are determined via the relations
\mathbf{E} = -\nabla V - \frac{\partial \mathbf{A}}{\partial t}, \qquad \mathbf{B} = \nabla \times \mathbf{A}
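The retarded potentials themselves take the standard form (quoted here for concreteness; this assumes the Lorenz gauge, which the text does not state explicitly)

V(\mathbf{r}, t) = \frac{1}{4\pi\varepsilon_0} \int \frac{\rho(\mathbf{r}', t_r)}{|\mathbf{r} - \mathbf{r}'|} \, d^3\mathbf{r}', \qquad \mathbf{A}(\mathbf{r}, t) = \frac{\mu_0}{4\pi} \int \frac{\mathbf{J}(\mathbf{r}', t_r)}{|\mathbf{r} - \mathbf{r}'|} \, d^3\mathbf{r}'

where the retarded time t_r = t - |\mathbf{r} - \mathbf{r}'|/c accounts for the finite propagation speed of the field.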
Continuum mechanics
Fluid dynamics
Fluid dynamics has fields of pressure, density, and flow rate that are connected by conservation laws for energy and momentum. The mass continuity equation is a continuity equation, representing the conservation of mass
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
and the Navier–Stokes equations represent the conservation of momentum in the fluid, found from Newton's laws applied to the fluid,
\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \nabla \cdot \boldsymbol{\tau} + \rho \mathbf{b}
if the density ρ, pressure p, and deviatoric stress tensor τ of the fluid, as well as external body forces b, are all given. The velocity field u is the vector field to solve for.
Other examples
In 1839, James MacCullagh presented field equations to describe reflection and refraction in "An essay toward a dynamical theory of crystalline reflection and refraction".
Potential theory
The term "potential theory" arises from the fact that, in 19th century physics, the fundamental forces of nature were believed to be derived from scalar potentials which satisfied Laplace's equation. Poisson addressed the question of the stability of the planetary orbits, which had already been settled by Lagrange to the first degree of approximation from the perturbation forces, and derived the Poisson's equation, named after him. The general form of this equation is
where σ is a source function (as a density, a quantity per unit volume) and ø the scalar potential to solve for.
In Newtonian gravitation, masses are the sources of the field so that field lines terminate at objects that have mass. Similarly, charges are the sources and sinks of electrostatic fields: positive charges emanate electric field lines, and field lines terminate at negative charges. These field concepts are also illustrated in the general divergence theorem, specifically Gauss's laws for gravity and electricity. For the cases of time-independent gravity and electromagnetism, the fields are gradients of corresponding potentials
\mathbf{g} = -\nabla \phi_g, \qquad \mathbf{E} = -\nabla \phi_e
so substituting these into Gauss' law for each case obtains
\nabla^2 \phi_g = 4\pi G \rho_g, \qquad \nabla^2 \phi_e = -\frac{\rho_e}{\varepsilon_0} = -4\pi k_e \rho_e
where ρg is the mass density, ρe the charge density, G the gravitational constant and ke = 1/4πε0 the electric force constant.
Incidentally, this similarity arises from the similarity between Newton's law of gravitation and Coulomb's law.
In the case where there is no source term (e.g. vacuum, or paired charges), these potentials obey Laplace's equation:
\nabla^2 \phi = 0
For a distribution of mass (or charge), the potential can be expanded in a series of spherical harmonics, and the nth term in the series can be viewed as a potential arising from the 2^n-pole moments (see multipole expansion). For many purposes only the monopole, dipole, and quadrupole terms are needed in calculations.
Relativistic field theory
Modern formulations of classical field theories generally require Lorentz covariance as this is now recognised as a fundamental aspect of nature. A field theory tends to be expressed mathematically by using Lagrangians. This is a function that, when subjected to an action principle, gives rise to the field equations and a conservation law for the theory. The action is a Lorentz scalar, from which the field equations and symmetries can be readily derived.
Throughout we use units such that the speed of light in vacuum is 1, i.e. c = 1.
Lagrangian dynamics
Given a field tensor \phi, a scalar called the Lagrangian density \mathcal{L}(\phi, \partial_\mu \phi, x) can be constructed from \phi and its derivatives.
From this density, the action functional can be constructed by integrating over spacetime,
S[\phi] = \int \mathcal{L}(\phi, \partial_\mu \phi, x) \, \sqrt{-g} \, d^4 x
where \sqrt{-g} \, d^4 x is the volume form in curved spacetime.
Therefore, the Lagrangian itself is equal to the integral of the Lagrangian density over all space.
Then by enforcing the action principle, the Euler–Lagrange equations are obtained:
\frac{\delta S}{\delta \phi} = \frac{\partial \mathcal{L}}{\partial \phi} - \partial_\mu \left( \frac{\partial \mathcal{L}}{\partial (\partial_\mu \phi)} \right) = 0
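As a concrete worked example (a standard textbook case, not one drawn from the text above), take the Lagrangian density of a free real scalar field of mass m,

\mathcal{L} = \tfrac{1}{2} \partial_\mu \phi \, \partial^\mu \phi - \tfrac{1}{2} m^2 \phi^2

Substituting into the Euler–Lagrange equations gives \partial \mathcal{L}/\partial \phi = -m^2 \phi and \partial \mathcal{L}/\partial(\partial_\mu \phi) = \partial^\mu \phi, so the field equation is the Klein–Gordon equation

\partial_\mu \partial^\mu \phi + m^2 \phi = 0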
Relativistic fields
Two of the most well-known Lorentz-covariant classical field theories are now described.
Electromagnetism
Historically, the first (classical) field theories were those describing the electric and magnetic fields (separately). After numerous experiments, it was found that these two fields were related, or, in fact, two aspects of the same field: the electromagnetic field. Maxwell's theory of electromagnetism describes the interaction of charged matter with the electromagnetic field. The first formulation of this field theory used vector fields to describe the electric and magnetic fields. With the advent of special relativity, a more complete formulation using tensor fields was found. Instead of using two vector fields describing the electric and magnetic fields, a tensor field representing these two fields together is used.
The electromagnetic four-potential is defined to be A^a = (\phi, \mathbf{A}), and the electromagnetic four-current j^a = (\rho, \mathbf{j}). The electromagnetic field at any point in spacetime is described by the antisymmetric (0,2)-rank electromagnetic field tensor
F_{ab} = \partial_a A_b - \partial_b A_a
The Lagrangian
To obtain the dynamics for this field, we try to construct a scalar from the field. In the vacuum, we have
\mathcal{L} = -\frac{1}{4\mu_0} F^{ab} F_{ab}
We can use gauge field theory to get the interaction term, and this gives us
\mathcal{L} = -\frac{1}{4\mu_0} F^{ab} F_{ab} - j^a A_a
The equations
To obtain the field equations, the electromagnetic tensor in the Lagrangian density needs to be replaced by its definition in terms of the 4-potential A, and it is this potential which enters the Euler–Lagrange equations. The EM field F is not varied in the EL equations. Therefore,
\frac{\partial \mathcal{L}}{\partial A_b} - \partial_a \left( \frac{\partial \mathcal{L}}{\partial (\partial_a A_b)} \right) = 0
Evaluating the derivative of the Lagrangian density with respect to the field components
\frac{\partial \mathcal{L}}{\partial A_b} = -j^b
and the derivatives of the field components
\frac{\partial \mathcal{L}}{\partial (\partial_a A_b)} = -\frac{1}{\mu_0} F^{ab}
obtains Maxwell's equations in vacuum. The source equations (Gauss' law for electricity and the Maxwell–Ampère law) are
\partial_a F^{ab} = \mu_0 j^b
while the other two (Gauss' law for magnetism and Faraday's law) are obtained from the fact that F is the 4-curl of A, or, in other words, from the fact that the Bianchi identity holds for the electromagnetic field tensor:
F_{ab,c} + F_{bc,a} + F_{ca,b} = 0
where the comma indicates a partial derivative.
Gravitation
After Newtonian gravitation was found to be inconsistent with special relativity, Albert Einstein formulated a new theory of gravitation called general relativity. This treats gravitation as a geometric phenomenon ('curved spacetime') caused by masses and represents the gravitational field mathematically by a tensor field called the metric tensor. The Einstein field equations,
G_{ab} = \kappa T_{ab}
describe how this curvature is produced by matter and radiation, where G_{ab} is the Einstein tensor,
G_{ab} = R_{ab} - \tfrac{1}{2} R g_{ab}
written in terms of the Ricci tensor R_{ab} and Ricci scalar R = g^{ab} R_{ab}, T_{ab} is the stress–energy tensor, and \kappa = 8\pi G (in the units with c = 1 used here) is a constant. In the absence of matter and radiation (including sources) the vacuum field equations,
R_{ab} = 0
can be derived by varying the Einstein–Hilbert action,
S = \frac{1}{2\kappa} \int R \sqrt{-g} \, d^4 x
with respect to the metric, where g is the determinant of the metric tensor g_{ab}. Solutions of the vacuum field equations are called vacuum solutions. An alternative interpretation, due to Arthur Eddington, is that R is fundamental, T is merely one aspect of R, and \kappa is forced by the choice of units.
Further examples
Further examples of Lorentz-covariant classical field theories are
Klein-Gordon theory for real or complex scalar fields
Dirac theory for a Dirac spinor field
Yang–Mills theory for a non-abelian gauge field
Unification attempts
Attempts to create a unified field theory based on classical physics are classical unified field theories. During the years between the two World Wars, the idea of unification of gravity with electromagnetism was actively pursued by several mathematicians and physicists like Albert Einstein, Theodor Kaluza, Hermann Weyl, Arthur Eddington, Gustav Mie and Ernst Reichenbacher.
Early attempts to create such a theory were based on the incorporation of electromagnetic fields into the geometry of general relativity. The first geometrization of the electromagnetic field was proposed in 1918 by Hermann Weyl.
In 1919, a five-dimensional approach was suggested by Theodor Kaluza. From it, a theory called Kaluza–Klein theory was developed, which attempts to unify gravitation and electromagnetism in a five-dimensional space-time.
There are several ways of extending the representational framework for a unified field theory which have been considered by Einstein and other researchers. These extensions are in general based on two options: relaxing the conditions imposed on the original formulation, or introducing other mathematical objects into the theory. An example of the first option is relaxing the restriction to four-dimensional space-time by considering higher-dimensional representations, as used in Kaluza–Klein theory. For the second, the most prominent example arises from the concept of the affine connection, which was introduced into the theory of general relativity mainly through the work of Tullio Levi-Civita and Hermann Weyl.
Further development of quantum field theory changed the focus of searching for unified field theory from classical to quantum description. Because of that, many theoretical physicists gave up looking for a classical unified field theory. Quantum field theory would include unification of two other fundamental forces of nature, the strong and weak nuclear force which act on the subatomic level.
| Physical sciences | Physics basics: General | Physics |
1294453 | https://en.wikipedia.org/wiki/Insular%20biogeography | Insular biogeography | Insular biogeography or island biogeography is a field within biogeography that examines the factors that affect the species richness and diversification of isolated natural communities. The theory was originally developed to explain the pattern of the species–area relationship occurring in oceanic islands. Under either name it is now used in reference to any ecosystem (present or past) that is isolated due to being surrounded by unlike ecosystems, and has been extended to mountain peaks, seamounts, oases, fragmented forests, and even natural habitats isolated by human land development. The field was started in the 1960s by the ecologists Robert H. MacArthur and E. O. Wilson, who coined the term island biogeography in their inaugural contribution to Princeton's Monographs in Population Biology series, which attempted to predict the number of species that would exist on a newly created island.
Definitions
For biogeographical purposes, an insular environment or "island" is any area of habitat suitable for a specific ecosystem, surrounded by an expanse of unsuitable habitat. While this may be a traditional island—a mass of land surrounded by water—the term may also be applied to many nontraditional "islands", such as the peaks of mountains, isolated springs or lakes, and non-contiguous woodlands. The concept is often applied to natural habitats surrounded by human-altered landscapes, such as expanses of grassland surrounded by highways or housing tracts, and national parks. Additionally, what is insular for one organism may not be so for others: some organisms located on mountaintops may also be found in the valleys, while others may be restricted to the peaks.
Theory
The theory of insular biogeography proposes that the number of species found in an undisturbed insular environment ("island") is determined by immigration and extinction, and further that isolated populations may follow different evolutionary routes, as shown by Darwin's observation of finches in the Galapagos Islands. Immigration and emigration are affected by the distance of an island from a source of colonists (the distance effect). Usually this source is the mainland, but it can also be other islands. Islands that are more isolated are less likely to receive immigrants than islands that are less isolated.
The rate of extinction once a species manages to colonize an island is affected by island size; this is the species-area curve or effect. Larger islands contain larger habitat areas and opportunities for more different varieties of habitat. Larger habitat size reduces the probability of extinction due to chance events. Habitat heterogeneity increases the number of species that will be successful after immigration.
Over time, the countervailing forces of extinction and immigration result in an equilibrium level of species richness.
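This balance can be made concrete with the simplest version of the MacArthur–Wilson model, in which immigration declines and extinction rises linearly with the number of resident species. The linear forms and all parameter values below are assumptions chosen for illustration:

    # Minimal MacArthur-Wilson sketch: richness settles where the curves cross.
    P = 100      # size of the mainland source pool (assumed)
    I0 = 10.0    # immigration rate onto an empty island (assumed)
    e = 0.2      # per-species extinction rate (assumed)

    def immigration(S):
        return I0 * (1 - S / P)  # fewer new arrivals as the island fills up

    def extinction(S):
        return e * S             # more resident species, more extinctions

    S = 0.0
    for _ in range(200):         # simple forward iteration toward equilibrium
        S += 0.1 * (immigration(S) - extinction(S))

    # Setting immigration(S) = extinction(S) gives S* = I0 * P / (I0 + e * P).
    print(round(S, 1), I0 * P / (I0 + e * P))  # both are about 33.3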
Modifications
In addition to having an effect on immigration rates, isolation can also affect extinction rates. Populations on islands that are less isolated are less likely to go extinct because individuals from the source population and other islands can immigrate and "rescue" the population from extinction; this is known as the rescue effect.
In addition to having an effect on extinction, island size can also affect immigration rates. Species may actively target larger islands for their greater number of resources and available niches; or, larger islands may accumulate more species by chance just because they are larger. This is the target effect.
Influencing factors
Degree of isolation (distance to nearest neighbour, and mainland)
Length of isolation (time)
Size of island (larger area usually facilitates greater diversity)
The habitat suitability which includes:
Climate (tropical versus arctic, humid versus arid, variability, etc.)
Initial plant and animal composition if previously attached to a larger land mass (e.g. marsupials, primates)
The current species composition
Location relative to ocean currents (influences nutrient, fish, bird, and seed flow patterns)
Location relative to dust blow (influences nutrients)
Serendipity (the impacts of chance arrivals)
Human activity
Species-area relationships
Species–area relationships show the relationship between a given area and the species richness within that area. This concept comes from the theory of island biogeography, and is well illustrated on islands because they are relatively isolated; the species immigrating to and going extinct on an island are therefore more limited and easier to track. Species richness is expected to increase with area: for example, as the area of a series of islands increases, the species richness of primary producers increases with it. Island species–area relationships behave somewhat differently from mainland species–area relationships, but comparisons between the two can still prove useful.
The species–area relationship equation is
S = c A^z
In this equation, S represents the measure of diversity of a species (for example, the number of species), c is a constant representing the y-intercept, A represents the area of the island or space being examined, and z represents the slope of the species–area curve.
This function can also be expressed as a logarithmic function:
\log S = \log c + z \log A
This expression allows the function to be drawn as a straight line. However, the core meaning of the function is the same: the area of the island dictates the species–area relationship.
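In practice, c and z are usually estimated by fitting a straight line to the logarithmic form. Below is a minimal sketch with made-up survey data:

    import numpy as np

    # Hypothetical island areas (km^2) and observed species counts.
    A = np.array([1.0, 10.0, 100.0, 1000.0])
    S = np.array([5, 12, 30, 74])

    # Least-squares fit of log S = log c + z * log A.
    z, log_c = np.polyfit(np.log(A), np.log(S), 1)
    print(f"z ~ {z:.2f}, c ~ {np.exp(log_c):.2f}")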
Historical record
The theory can be studied through fossils, which provide a record of life on Earth. 300 million years ago, Europe and North America lay on the equator and were covered by steamy tropical rainforests. Climate change devastated these tropical rainforests during the Carboniferous Period, and as the climate grew drier, the rainforests fragmented. The shrunken islands of forest were uninhabitable for amphibians but were well suited to reptiles, which became more diverse and even varied their diet in the rapidly changing environment; this Carboniferous rainforest collapse triggered an evolutionary burst among reptiles.
Research experiments
The theory of island biogeography was experimentally tested by E. O. Wilson and his student Daniel Simberloff in the mangrove islands of the Florida Keys. Species richness on several small mangrove islands was surveyed. The islands were then fumigated with methyl bromide to clear their arthropod communities, and the immigration of species onto the islands was monitored. Within a year, the islands had been recolonized to pre-fumigation levels. However, Simberloff and Wilson contended that this final species richness was oscillating in quasi-equilibrium. Islands closer to the mainland recovered faster, as predicted by the theory of island biogeography. The effect of island size was not tested, since all islands were of approximately equal size.
Research conducted at the rainforest research station on Barro Colorado Island has yielded a large number of publications concerning the ecological changes following the formation of islands, such as the local extinction of large predators and the subsequent changes in prey populations.
Applications to Island Like Systems (ILS)
The theory of island biogeography was originally used to study oceanic islands, but its concepts can be extrapolated to other areas of study. Island species dynamics give information about how species move and interact within Island Like Systems (ILS). Rather than actual islands, ILS are defined primarily by their isolation within an ecosystem. In the case of an island, the area referred to as the matrix is usually the body of water surrounding it, and the mainland is often the nearest non-island piece of land. Similarly, in an ILS the "mainland" is the source of immigrating species, but the matrix is far more varied. By considering how isolated ecosystems, for example a pond surrounded by land, resemble island ecosystems, it can be understood how theories and phenomena that hold for islands can be applied to ILS. The overall immigration and extinction patterns outlined in the theory of island biogeography also play out between ecosystems on the mainland.
The concepts of area of an island and the level of isolation from a mainland as presented in the theory of island biogeography, apply to ILS. The main difference is in the dynamics of area and isolation. For example, an ILS may have a changing area because of seasons, which may impact its degree of isolation. Resource availability plays an important role in the conditions that an island is under. This is another factor that changes in ILS in comparison to real islands, since generally there is a greater resource availability in some ILS than true islands.
Species–area relationships, as described above, can be applied to Island Like Systems (ILS) as well. It is typically observed that species richness increases with the area of an ecosystem. One major difference is that z-values are generally lower for ILS than for true islands; these values also vary between true islands and ILS, and among types of ILS.
Applications in conservation biology
Within a few years of the publishing of the theory, its potential application to the field of conservation biology had been realised and was being vigorously debated in ecological circles. The idea that reserves and national parks formed islands inside human-altered landscapes (habitat fragmentation), and that these reserves could lose species as they 'relaxed towards equilibrium' (that is they would lose species as they achieved their new equilibrium number, known as ecosystem decay) caused a great deal of concern. This is particularly true when conserving larger species which tend to have larger ranges. A study by William Newmark, published in the journal Nature and reported in The New York Times, showed a strong correlation between the size of a protected U.S. National Park and the number of species of mammals.
This led to the debate known as single large or several small (SLOSS), described by writer David Quammen in The Song of the Dodo as "ecology's own genteel version of trench warfare". In the years after the publication of Wilson and Simberloff's papers, ecologists found more examples of the species–area relationship, and conservation planning took the view that one large reserve could hold more species than several smaller reserves, and that larger reserves should be the norm in reserve design. This view was championed in particular by Jared Diamond. It caused concern among other ecologists, including Dan Simberloff, who considered it an unproven over-simplification that would damage conservation efforts, arguing that habitat diversity was as important as, or more important than, size in determining the number of species protected.
Island biogeography theory also led to the development of wildlife corridors as a conservation tool to increase connectivity between habitat islands. Wildlife corridors can increase the movement of species between parks and reserves and therefore increase the number of species that can be supported, but they can also allow the spread of disease and pathogens between populations, complicating the simple prescription that connectivity is good for biodiversity.
In terms of species diversity, island biogeography most directly describes allopatric speciation, in which new gene pools arise out of natural selection acting on isolated gene pools. Island biogeography is also useful in considering sympatric speciation, the idea of different species arising from one ancestral species in the same area. Interbreeding between two differently adapted populations would ordinarily prevent speciation, but in some species sympatric speciation appears to have occurred.
| Biology and health sciences | Ecology | Biology |
1294851 | https://en.wikipedia.org/wiki/Millimetre%20of%20mercury | Millimetre of mercury | A millimetre of mercury is a manometric unit of pressure, formerly defined as the extra pressure generated by a column of mercury one millimetre high, and currently defined as exactly 133.322387415 pascals, or approximately 133.322 pascals. It is denoted mmHg or mm Hg.
Although not an SI unit, the millimetre of mercury is still often encountered in some fields; for example, it is still widely used in medicine, as demonstrated in the medical literature indexed in PubMed. The U.S. and European guidelines on hypertension, in using millimetres of mercury for blood pressure, reflect the fact, common knowledge among health care professionals, that this is the usual unit of blood pressure in clinical medicine.
One millimetre of mercury is approximately 1 torr, which is 1/760 of standard atmospheric pressure (101325/760 Pa ≈ 133.322 Pa). Although the two units are not equal, the relative difference (less than 0.000015%) is negligible for most practical uses.
History
For much of human history, the pressure of gases like air was ignored, denied, or taken for granted, but as early as the 6th century BC, Greek philosopher Anaximenes of Miletus claimed that all things are made of air that is simply changed by varying levels of pressure. He could observe water evaporating, changing to a gas, and felt that this applied even to solid matter. More condensed air made colder, heavier objects, and expanded air made lighter, hotter objects. This was akin to how gases become less dense when warmer and more dense when cooler.
In the 17th century, Evangelista Torricelli conducted experiments with mercury that allowed him to measure the presence of air. He would dip a glass tube, closed at one end, into a bowl of mercury and raise the closed end up out of it, keeping the open end submerged. The weight of the mercury would pull it down, leaving a partial vacuum at the far end. This validated his belief that air/gas has mass, creating pressure on things around it. Previously, the more popular conclusion, even for Galileo, was that air was weightless and that it was vacuum that provided force, as in a siphon. The discovery helped bring Torricelli to his famous conclusion that humans live submerged at the bottom of an ocean of air.
This test, known as Torricelli's experiment, was essentially the first documented pressure gauge.
Blaise Pascal went farther, having his brother-in-law try the experiment at different altitudes on a mountain, and finding indeed that the farther down in the ocean of atmosphere, the higher the pressure.
Mercury manometers were the first accurate pressure gauges. They are less used today due to mercury's toxicity, the mercury column's sensitivity to temperature and local gravity, and the greater convenience of other instrumentation. They displayed the pressure difference between two fluids as a vertical difference between the mercury levels in two connected reservoirs.
An actual mercury column reading may be converted to more fundamental units of pressure by multiplying the difference in height between two mercury levels by the density of mercury and the local gravitational acceleration. Because the specific weight of mercury depends on temperature and surface gravity, both of which vary with local conditions, specific standard values for these two parameters were adopted. This resulted in defining a "millimetre of mercury" as the pressure exerted at the base of a column of mercury 1 millimetre high with a precise density of 13595.1 kg/m³ when the acceleration due to gravity is exactly 9.80665 m/s².
The density 13595.1 kg/m³ chosen for this definition is the approximate density of mercury at 0 °C, and 9.80665 m/s² is standard gravity. The use of an actual column of mercury to measure pressure normally requires correction for the density of mercury at the actual temperature and the sometimes significant variation of gravity with location, and may be further corrected to take account of the density of the measured air, water or other fluid.
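The conversion just described is plain arithmetic, so it can be checked directly; the sketch below uses the standard density and gravity values given above:

    # Pressure at the base of a mercury column: p = rho * g * h
    rho = 13595.1    # density of mercury at 0 degrees C, kg/m^3
    g = 9.80665      # standard gravity, m/s^2
    h = 0.001        # 1 mm column height, in metres

    p = rho * g * h
    print(f"1 mmHg = {p:.9f} Pa")            # 133.322387415 Pa

    # The torr is instead defined as 1/760 of a standard atmosphere.
    torr = 101325 / 760
    print(f"relative difference vs torr = {(p - torr) / torr:.2e}")  # ~1.4e-07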
Each millimetre of mercury can be divided into 1000 micrometres of mercury, denoted μmHg or simply microns.
Relation to the torr
The precision of modern transducers is often insufficient to show the difference between the torr and the millimetre of mercury. The difference between these two units is about one part in seven million, or 0.000015%. By the same factor, a millitorr is slightly less than a micrometre of mercury.
Use in medicine and physiology
In medicine, pressure is still generally measured in millimetres of mercury. These measurements are in general given relative to the current atmospheric pressure: for example, a blood pressure of 120 mmHg, when the current atmospheric pressure is 760 mmHg, means 880 mmHg relative to perfect vacuum.
Routine pressure measurements in medicine include:
Blood pressure, measured with a sphygmomanometer
Intraocular pressure, with a tonometer
Cerebrospinal fluid pressure
Intracranial pressure
Intramuscular pressure (compartment syndrome)
Central venous pressure
Pulmonary artery catheterization
Mechanical ventilation
In physiology manometric units are used to measure Starling forces.
| Physical sciences | Pressure | Basics and measurement |
1295947 | https://en.wikipedia.org/wiki/Dysthymia | Dysthymia | Dysthymia ( ), also known as persistent depressive disorder (PDD), is a mental and behavioral disorder, specifically a disorder primarily of mood, consisting of similar cognitive and physical problems as major depressive disorder, but with longer-lasting symptoms. The concept was used by Robert Spitzer as a replacement for the term "depressive personality" in the late 1970s.
The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) indicates that persistent depressive disorder— the new name for what was called dysthymic disorder in DSM-IV—is a serious state of chronic depression, which persists for at least two years (one year for children and adolescents). Dysthymia causes substantial distress and functional impairment—not as much as major depressive disorder, but close.
As dysthymia is a chronic disorder, those with the condition may experience symptoms for many years before it is diagnosed, if diagnosis occurs at all. As a result, they may believe that depression is a part of their character, so they may not even discuss their symptoms with doctors, family members or friends. In terms of official diagnostic entities, DSM-5 removed dysthymic disorder (DSM-IV), and introduced a new psychiatric construct: persistent depressive disorder. The DSM-5 authors explained that this new disorder subsumes the DSM-IV diagnoses of chronic major depressive disorder and dysthymic disorder. This change arose from research showing no evidence for meaningful differences between chronic major depressive disorder and dysthymic disorder.
Signs and symptoms
The characteristics of dysthymia include an extended period of depressed mood combined with at least two other symptoms, which may include insomnia or hypersomnia, fatigue or low energy, changes in eating (more or less), low self-esteem, or feelings of hopelessness. Poor concentration or difficulty making decisions is treated as another possible symptom. Irritability is one of the more common symptoms in children and adolescents.
Mild degrees of dysthymia may result in people withdrawing from stress and avoiding opportunities for failure. In more severe cases of dysthymia, people may withdraw from daily activities. They will usually find little pleasure in usual activities and pastimes.
Diagnosis of dysthymia can be difficult because of the subtle nature of the symptoms and patients can often hide them in social situations, making it challenging for others to detect symptoms. Additionally, dysthymia often occurs at the same time as other psychological disorders, which adds a level of complexity in determining the presence of dysthymia, particularly because there is often an overlap in the symptoms of disorders.
There is a high incidence of comorbid illness in those with dysthymia. Suicidal behavior is also a particular problem with those with dysthymia. It is vital to look for signs of major depression, panic disorder, generalised anxiety disorder, alcohol and substance use disorders, and personality disorder.
Causes
There are no known biological causes that apply consistently to all cases of dysthymia, which suggests that the disorder has diverse origins. There are, however, some indications of a genetic predisposition to dysthymia: "The rate of depression in the families of people with dysthymia is as high as fifty percent for the early-onset form of the disorder". Other factors linked with dysthymia include stress, social isolation, and lack of social support.
In a study using identical and fraternal twins, results indicated that there is a stronger likelihood of identical twins both having depression than fraternal twins. This provides support for the idea that dysthymia is in part hereditary.
Co-occurring conditions
Dysthymia often co-occurs with other mental disorders. A "double depression" is the occurrence of episodes of major depression in addition to dysthymia. Switching between periods of dysthymic moods and periods of hypomanic moods is indicative of cyclothymia, which is a mild variant of bipolar disorder.
"At least three-quarters of patients with dysthymia also have a chronic physical illness or another psychiatric disorder such as one of the anxiety disorders, cyclothymia, drug addiction, or alcoholism". Common co-occurring conditions include major depression (up to 75%), anxiety disorders (up to 50%), personality disorders (up to 40%), somatoform disorders (up to 45%) and substance use disorders (up to 50%). People with dysthymia have a higher-than-average chance of developing major depression. A 10-year follow-up study found that 95% of dysthymia patients had an episode of major depression. When an intense episode of depression occurs on top of dysthymia, the state is called "double depression."
Double depression
Double depression occurs when a person experiences a major depressive episode on top of the already-existing condition of dysthymia. It is difficult to treat, as patients accept these major depressive symptoms as a natural part of their personality or as a part of their life that is outside of their control. The fact that people with dysthymia may accept these worsening symptoms as inevitable can delay treatment. When and if such people seek out treatment, the treatment may not be very effective if only the symptoms of the major depression are addressed, but not the dysthymic symptoms.
Patients with double depression tend to report significantly higher levels of hopelessness than is normal. This can be a useful symptom for mental health services providers to focus on when working with patients to treat the condition. Additionally, cognitive therapies can be effective for working with people with double depression in order to help change negative thinking patterns and give individuals a new way of seeing themselves and their environment.
It has been suggested that the best way to prevent double depression is by treating the dysthymia. A combination of antidepressants and cognitive therapies can be helpful in preventing major depressive symptoms from occurring. Additionally, exercise and good sleep hygiene (e.g., improving sleep patterns) are thought to have an additive effect on treating dysthymic symptoms and preventing them from worsening.
Pathophysiology
There is evidence that there may be neurological indicators of early onset dysthymia. There are several brain structures (corpus callosum and frontal lobe) that are different in women with dysthymia than in those without dysthymia. This may indicate that there is a developmental difference between these two groups.
Another study, which used fMRI techniques to assess the differences between individuals with dysthymia and other people, found additional support for neurological indicators of the disorder. This study found several areas of the brain that function differently. The amygdala (associated with processing emotions such as fear) was more activated in dysthymia patients. The study also observed increased activity in the insula (which is associated with sad emotions). Finally, there was increased activity in the cingulate gyrus (which serves as the bridge between attention and emotion).
A study comparing healthy individuals to people with dysthymia indicates there are other biological indicators of the disorder. An anticipated result appeared as healthy individuals expected fewer negative adjectives to apply to them, whereas people with dysthymia expected fewer positive adjectives to apply to them in the future. Biologically these groups are also differentiated in that healthy individuals showed greater neurological anticipation for all types of events (positive, neutral, or negative) than those with dysthymia. This provides neurological evidence of the dulling of emotion that individuals with dysthymia have learned to use to protect themselves from overly strong negative feelings, compared to healthy people.
There is some evidence of a genetic basis for all types of depression, including dysthymia. A study using identical and fraternal twins indicated that there is a stronger likelihood of identical twins both having depression than fraternal twins. This provides support for the idea that dysthymia is caused in part by heredity.
A new model has recently surfaced in the literature regarding the HPA axis (structures in the brain that get activated in response to stress) and its involvement with dysthymia (e.g. phenotypic variations of corticotropin releasing hormone (CRH) and arginine vasopressin (AVP), and down-regulation of adrenal functioning) as well as forebrain serotonergic mechanisms. Since this model is highly provisional, further research is still needed.
Diagnosis
The Diagnostic and Statistical Manual of Mental Disorders IV (DSM-IV), published by the American Psychiatric Association, characterizes dysthymic disorder. The essential symptom involves the individual feeling depressed for the majority of days, and parts of the day, for at least two years. Low energy, disturbances in sleep or in appetite, and low self-esteem typically contribute to the clinical picture as well. Those with the condition have often experienced dysthymia for many years before it is diagnosed. People around them often describe them in words similar to "just a moody person". The following are the diagnostic criteria:
During a majority of days for two years or more, the adult patient reports depressed mood, or appears depressed to others for most of the day.
When depressed, the patient has two or more of:
decreased or increased appetite;
decreased or increased sleep (insomnia or hypersomnia);
fatigue or low energy;
reduced self-esteem;
decreased concentration or problems making decisions;
feelings of hopelessness or pessimism.
During this two-year period, the above symptoms are never absent longer than two consecutive months.
During the two-year period, the patient may have had a persistent major depressive episode.
The patient has not had any manic, hypomanic, or mixed episodes.
The patient has never fulfilled criteria for cyclothymic disorder.
The depression does not exist only as part of a chronic psychosis (such as schizophrenia or delusional disorder).
The symptoms are often not directly caused by a medical illness or by substances, including substance use or other medications.
The symptoms may cause significant problems or distress in social, work, academic, or other major areas of life functioning.
In children and adolescents, mood can be irritable, and duration must be at least one year, in contrast to two years needed for diagnosis in adults.
Early onset (diagnosis before age 21) is associated with more frequent relapses, psychiatric hospitalizations, and more co-occurring conditions. For younger adults with dysthymia, there is a higher co-occurrence in personality abnormalities and the symptoms are likely chronic. However, in older adults with dysthymia, the psychological symptoms are associated with medical conditions and/or stressful life events and losses.
Dysthymia can be contrasted with major depressive disorder by assessing the acute nature of the symptoms: dysthymia is far more chronic (long-lasting), whereas in major depressive disorder symptoms may be present for as little as two weeks. Dysthymia also often presents at an earlier age than major depressive disorder.
Prevention
Though there is no clear-cut way to prevent dysthymia from occurring, there are some suggestions to help reduce its effects. Since dysthymia often appears first in childhood, it is important to identify children who may be at risk. It may be beneficial to work with children in helping to control their stress, increase resilience, boost self-esteem, and provide strong social support networks. These tactics may be helpful in warding off or delaying dysthymic symptoms.
Treatments
Persistent depressive disorder can be treated with psychotherapy and pharmacotherapy. The overall rate and degree of treatment success is somewhat lower than for non-chronic depression, and a combination of psychotherapy and pharmacotherapy shows best results.
Therapy
Psychotherapy can be effective in treating dysthymia.
In a meta-analytic study from 2010, psychotherapy had a small but significant effect when compared to control groups. However, psychotherapy is significantly less effective than pharmacotherapy in direct comparisons.
There are many different types of therapy, and some are more effective than others.
The most empirically studied type of treatment is cognitive behavioral therapy. This type of therapy is very effective for non-chronic depression, and it appears to be effective for chronic depression as well.
Cognitive behavioral analysis system of psychotherapy (CBASP) has been designed specifically to treat PDD. Empirical results on this form of therapy are inconclusive: While one study showed remarkably high treatment success rates, a later, even larger study showed no significant benefit of adding CBASP to treatment with antidepressants.
Schema therapy and psychodynamic psychotherapy have been used for PDD, though good empirical results are lacking.
Interpersonal psychotherapy has also been said to be effective in treating the disorder, though it only shows marginal benefit when added to treatment with antidepressants.
Medications
In a 2010 meta-analysis, the benefit of pharmacotherapy was limited to selective serotonin reuptake inhibitors (SSRIs) rather than tricyclic antidepressants (TCA).
According to a 2014 meta-analysis, antidepressants are at least as effective for persistent depressive disorder as for major depressive disorder.
The first line of pharmacotherapy is usually SSRIs, due to their purportedly more tolerable nature and fewer side effects compared with the irreversible monoamine oxidase inhibitors or tricyclic antidepressants. Studies have found that the mean response to antidepressant medications for people with dysthymia is 55%, compared with a 31% response rate to a placebo. The most commonly prescribed antidepressants/SSRIs for dysthymia are escitalopram, citalopram, sertraline, fluoxetine, paroxetine, and fluvoxamine. It often takes an average of 6–8 weeks before the patient begins to feel these medications' therapeutic effects. Additionally, STAR*D, a multi-clinic governmental study, found that people with depression will generally need to try different medications before finding one that works specifically for them. Research shows that 1 in 4 of those who switch medications get better results, regardless of whether the second medication is an SSRI or some other type of antidepressant.
In a meta-analytic study from 2005, it was found that SSRIs and TCAs are equally effective in treating dysthymia. They also found that MAOIs have a slight advantage over the use of other medication in treating this disorder. However, the author of this study cautions that MAOIs should not necessarily be the first line of defense in the treatment of dysthymia, as they are often less tolerable than their counterparts, such as SSRIs.
Tentative evidence supports the use of amisulpride to treat dysthymia but with increased side effects.
Combination treatment
When pharmacotherapy alone is compared with combined treatment with pharmacotherapy plus psychotherapy, there is a strong trend in favour of combined treatment. Working with a psychotherapist to address the causes and effects of the disorder, in addition to taking antidepressants to help eliminate the symptoms, can be extremely beneficial. This combination is often the preferred method of treatment for those who have dysthymia. Looking at various studies involving treatment for dysthymia, 75% of people responded positively to a combination of cognitive behavioral therapy (CBT) and pharmacotherapy, whereas only 48% of people responded positively to just CBT or medication alone.
A 2019 Cochrane review of 10 studies involving 840 participants could not conclude with certainty that continued pharmacotherapy with antidepressants (those used in the studies) was effective in preventing relapse or recurrence of persistent depressive disorder. The body of evidence was too small for any greater certainty although the study acknowledges that continued psychotherapy may be beneficial when compared to no treatment.
Treatment resistance
Because of dysthymia's chronic nature, treatment resistance is somewhat common. In such a case, augmentation is often recommended. Such treatment augmentations can include lithium pharmacology, thyroid hormone augmentation, amisulpride, buspirone, bupropion, guanfacine, stimulants, and mirtazapine. Additionally, if the person also has seasonal affective disorder, light therapy can be useful in helping augment therapeutic effects.
Epidemiology
Globally, the one-year incidence is about 105 million people (1.53% of the global population). Research suggests incidence rates of 1.8% for women and 1.3% for men. In the U.S. general population, research suggests a lifetime prevalence rate of 3 to 6 percent. In primary care settings, the lifetime prevalence rate is 5 to 15 percent.
| Biology and health sciences | Mental disorders | Health |
1295963 | https://en.wikipedia.org/wiki/Empennage | Empennage | The empennage ( or ), also known as the tail or tail assembly, is a structure at the rear of an aircraft that provides stability during flight, in a way similar to the feathers on an arrow. The term derives from the French language verb which means "to feather an arrow". Most aircraft feature an empennage incorporating vertical and horizontal stabilising surfaces which stabilise the flight dynamics of yaw and pitch, as well as housing control surfaces.
In spite of effective control surfaces, many early aircraft that lacked a stabilising empennage were virtually unflyable. Even so-called "tailless aircraft" usually have a tail fin (usually a vertical stabiliser). Heavier-than-air aircraft without any kind of empennage (such as the Northrop B-2) are rare, and generally use specially shaped airfoils whose trailing edges provide pitch stability, and rearwards-swept wings, often with dihedral, to provide the necessary yaw stability. In some aircraft with swept wings, the airfoil section or angle of incidence may change radically towards the tip.
Structure
Structurally, the empennage consists of the entire tail assembly, including the tailfin, the tailplane and the part of the fuselage to which these are attached. On an airliner this would be all the flying and control surfaces behind the rear pressure bulkhead.
The front (usually fixed) section of the tailplane is called the horizontal stabiliser and is used to provide pitch stability. The rear section of the tailplane is called the elevator, and is a movable aerofoil that controls changes in pitch, the up-and-down motion of the aircraft's nose. In some aircraft the horizontal stabilizer and elevator are one unit, and to control pitch the entire unit moves as one. This is known as a stabilator or full-flying stabiliser.
The vertical tail structure has a fixed front section called the vertical stabiliser, used to control yaw, the side-to-side motion of the aircraft's nose. The rear section of the vertical fin is the rudder, a movable aerofoil that is used to turn the aircraft's nose right or left. When used in combination with the ailerons, the result is a banking turn, or coordinated turn, the essential feature of aircraft movement.
Some aircraft are fitted with a tail assembly that is hinged to pivot in two axes forward of the fin and stabiliser, in an arrangement referred to as a movable tail. The entire empennage is rotated vertically to actuate the horizontal stabiliser, and sideways to actuate the fin.
The aircraft's cockpit voice recorder, flight data recorder and emergency locator transmitter (ELT) are often located in the empennage, because the aft of the aircraft provides better protection for these in most aircraft crashes.
Trim
In some aircraft trim devices are provided to eliminate the need for the pilot to maintain constant pressure on the elevator or rudder controls.
The trim device may be:
a trim tab on the rear of the elevators or rudder which act to change the aerodynamic load on the surface. Usually controlled by a cockpit wheel or crank.
an adjustable stabiliser into which the stabiliser may be hinged at its spar and adjustably jacked a few degrees in incidence either up or down. Usually controlled by a cockpit crank.
a bungee trim system which uses a spring to provide an adjustable preload in the controls. Usually controlled by a cockpit lever.
an anti-servo tab used to trim some elevators and stabilators as well as increased control force feel. Usually controlled by a cockpit wheel or crank.
a servo tab used to move the main control surface, as well as act as a trim tab. Usually controlled by a cockpit wheel or crank.
Multi-engined aircraft often have trim tabs on the rudder to reduce the pilot effort required to keep the aircraft straight in situations of asymmetrical thrust, such as single engine operations.
Tail configurations
Aircraft empennage designs may be classified broadly according to the fin and tailplane configurations.
The overall shapes of individual tail surfaces (tailplane planforms, fin profiles) are similar to wing planforms.
Tailplanes
The tailplane comprises the tail-mounted fixed horizontal stabiliser and movable elevator. Besides its planform, it is characterised by:
Configuration – tailless or canard.
Location of tailplane – mounted high, mid or low on the fuselage, fin or tail booms.
Fixed stabiliser and movable elevator surfaces, or a single combined stabilator or "all-flying tail", as on the General Dynamics F-111 Aardvark.
Some locations have been given special names:
Conventional tail – The vertical stabiliser and horizontal stabilisers are mounted to the rear of the fuselage. This is the simplest configuration that performs all three aspects of the function of a tail: trim, stability, and control. Around 60% of current aircraft designs — and about 80% ever — incorporate this type of tail. Examples are found on aircraft of every size and role, from general aviation types like the ubiquitous Cessna 172 to the largest airliners ever flown, such as the Airbus A380. Examples of this type of tail were in use as early as the Blériot VII of 1907.
Cruciform tail – The horizontal stabilisers are placed midway up the vertical stabiliser, giving the appearance of a cross when viewed from the front. Cruciform tails are often used to keep the horizontal stabilisers out of the engine wake, while avoiding many of the disadvantages of a T-tail. Examples include the Hawker Sea Hawk and Douglas A-4 Skyhawk.
T-tail – The horizontal stabiliser is mounted on top of the fin, creating a "T" shape when viewed from the front. T-tails keep the stabilisers out of the engine wake, and give better pitch control. T-tails have a good glide ratio, and are more efficient on low speed aircraft. However, the T-tail has several disadvantages. It is more likely to enter a deep stall, and is more difficult to recover from a spin. For this reason a small secondary stabiliser or tail-let may be fitted lower down where it will be in free air when the aircraft is stalled. A T-tail must be stronger, and therefore heavier than a conventional tail. T-tails also tend to have a larger radar cross section. Examples include the Gloster Javelin and McDonnell Douglas DC-9.
Fins
The fin comprises the fixed vertical stabiliser and rudder. Besides its profile, it is characterised by:
Number of fins – usually one or two.
Location of fins – on the fuselage (over or under), tailplane, tail booms or wings
Twin fins may be mounted at various points:
Twin tail A twin tail, also called an H-tail, consists of two small vertical stabilisers on either side of the horizontal stabiliser. Examples include the Antonov An-225 Mriya, B-25 Mitchell, Avro Lancaster, and ERCO Ercoupe.
Twin boom A twin boom has two fuselages or booms, with a vertical stabiliser on each, and a horizontal stabiliser between them. Examples include the Northrop P-61 Black Widow, P-38 Lightning, de Havilland Sea Vixen, Sadler Vampire, and Edgley Optica.
Wing mounted – midwing, as on the F7U Cutlass, or on the wing tips, as on the Handley Page Manx and Rutan Long-EZ.
Unusual fin configurations include:
No fin – as on the McDonnell Douglas X-36. This configuration is sometimes incorrectly referred to as "tailless".
Multiple fins – examples include the Lockheed Constellation (three), Bellanca 14-13 (three), and the Northrop Grumman E-2 Hawkeye (four).
Ventral fin – underneath the fuselage. Often used in addition to a conventional fin, as on the North American X-15 and Dornier Do 335.
V, Y and X tails
An alternative to the fin-and-tailplane approach is provided by the V-tail and X-tail designs. Here, the tail surfaces are set at diagonal angles, with each surface contributing to both pitch and yaw. The control surfaces, sometimes called ruddervators, act differentially to provide yaw control (in place of the rudder) and act together to provide pitch control (in place of the elevator).
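The differential and collective action described above can be written as a simple two-channel mixer. The following Python sketch is a hypothetical illustration of ruddervator mixing; the function, unit gains, and sign conventions are assumptions for the example, not any particular aircraft's control law.

def ruddervator_mix(pitch_cmd: float, yaw_cmd: float) -> tuple:
    """Hypothetical V-tail mixer: a pitch command moves both surfaces
    together (elevator function), while a yaw command moves them
    differentially (rudder function). Units, gains, and sign
    conventions are arbitrary here and vary between aircraft."""
    left_surface = pitch_cmd + yaw_cmd
    right_surface = pitch_cmd - yaw_cmd
    return left_surface, right_surface

# A pure pitch command deflects both ruddervators the same way...
assert ruddervator_mix(1.0, 0.0) == (1.0, 1.0)
# ...while a pure yaw command deflects them in opposite directions.
assert ruddervator_mix(0.0, 1.0) == (1.0, -1.0)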
V tail: A V-tail can be lighter than a conventional tail in some situations and produce less drag, as on the Fouga Magister trainer, Northrop Grumman RQ-4 Global Hawk RPV and X-37 spacecraft. A V-tail may also have a smaller radar signature. Other aircraft featuring a V-tail include the Beechcraft Model 35 Bonanza, and Davis DA-2. A slight modification to the V-tail can be found on the Waiex and Monnett Moni called a Y-tail.
Inverted V tail: The unmanned Predator uses an inverted V-tail, as do the Lazair and Mini-IMP.
Y tail: A V-tail with an added lower vertical fin (generally used to protect an aft propeller), as on the LearAvia Lear Fan.
X tail: The Lockheed XFV featured an "X" tail, which was reinforced and fitted with a wheel on each surface so that the craft could sit on its tail and take off and land vertically.
Outboard tail
An outboard tail is split in two, with each half mounted on a short boom just behind and outboard of each wing tip. It comprises outboard horizontal stabilizers (OHS) and may or may not include additional boom-mounted vertical stabilizers (fins). In this position, the tail surfaces interact constructively with the wingtip vortices and, with careful design, can significantly reduce drag to improve efficiency, without adding unduly to the structural loads on the wing.
The configuration was first developed during World War II by Richard Vogt and George Haag at Blohm & Voss. The Skoda-Kauba SL6 tested the proposed control system in 1944 and, following several design proposals, an order was received for the Blohm & Voss P 215 just weeks before the war ended. The outboard tail reappeared on the Scaled Composites SpaceShipOne in 2003 and SpaceShipTwo in 2010.
Tailless aircraft
A tailless aircraft (often tail-less) traditionally has all its horizontal control surfaces on its main wing surface. It has no horizontal stabiliser – either tailplane or canard foreplane (nor does it have a second wing in tandem arrangement). A "tailless" type usually still has a vertical stabilising fin (vertical stabiliser) and control surface (rudder). However, NASA adopted the "tailless" description for the novel X-36 research aircraft which has a canard foreplane but no vertical fin.
The most successful tailless configuration has been the tailless delta, especially for combat aircraft.
| Technology | Aircraft components | null |
1297380 | https://en.wikipedia.org/wiki/Police%20car | Police car | A police car is an emergency vehicle used by police for transportation during patrols and responses to calls for service. A type of emergency vehicle, police cars are used by police officers to patrol a beat, quickly reach incident scenes, and transport and temporarily detain suspects.
Police vehicles, like other emergency vehicles, usually bear livery (markings) to distinguish them as such. They often use emergency lights (usually blue, red, or blue and red) and sirens to warn other motorists of their presence, especially when responding to calls for service. Police cars usually contain communication devices, weaponry, and a variety of equipment for dealing with emergency situations.
History
The first police car was an electric wagon used by the Akron Police Department in Akron, Ohio, in 1899. The first operator of the police patrol wagon was Officer Louis Mueller, Sr. It could reach and travel before its battery needed to be recharged. The car was designed by city mechanical engineer Frank Loomis. The US$2,400 vehicle was equipped with electric lights, gongs, and a stretcher. The car's first assignment was to pick up a drunken man at the junction of Main and Exchange streets.
Ford introduced the flathead V8 in the 1932 Ford as the first mass-marketed V8 car; this low-priced V8 became popular with police in the United States, establishing strong brand loyalty that continued into the 21st century. Starting in the 1940s, major American automakers, namely the Big Three, began to manufacture specialized police cars. Over time, these became their own dedicated police fleet offerings, such as the Ford Police Interceptor and Chevrolet 9C1.
In the United Kingdom, Captain Athelstan Popkess, Chief Constable of the Nottingham City Police from 1930 to 1959, transformed British police from their Victorian era foot patrol beat model to the modern car-based reactive response model, through his development of the "Mechanized Division", which used two-way radio communication between police command and police cars. Under Popkess, the Nottingham City Police began to use police cars as an asset that police tactics centered around, such as overlaying police car patrol sectors over foot patrol beats and using police cars to pick up foot patrol officers while responding to crimes.
Increased car ownership in the post-World War II economic expansion led to police cars becoming significantly more common in most developed countries, as police jurisdictions expanded farther out into residential and suburban areas, car-oriented urban planning and highways dominated cities, vehicular crimes and police evasion in cars increased, and more equipment was issued to police officers, to the point that vehicles became practically necessary for modern law enforcement.
Types
Various types of police car exist. Depending on the organization of the law enforcement agency, the class of vehicle used as a police car, and the environmental factors of the agency's jurisdiction, many of the types below may or may not exist in certain fleets, or their capabilities may be merged to create all-rounded units with shared vehicles as opposed to specialized units with separate vehicles.
Patrol car
A patrol car is a police car used for standard patrol. Used to replace traditional foot patrols, the patrol car's primary function is to provide transportation for regular police duties, such as responding to calls, enforcing laws, or simply establishing a more visible police presence while on patrol. Driving a patrol car allows officers to reach their destinations more quickly and to cover more ground compared to other methods. Patrol cars are typically designed to be identifiable as police cars to the public and thus almost always have proper markings, roof-mounted emergency lights, and sirens.
Response car
A response car, also known as a pursuit car, area car, rapid response unit, or fast response car, is a police car used to ensure quick responses to emergencies compared to patrol cars. It is likely to be of a higher specification, capable of higher speeds, and often fitted with unique markings and increased-visibility emergency lights. These cars are generally only used to respond to emergency incidents and may carry specialized equipment not used in regular patrol cars, such as long arms.
Traffic car
A traffic car, also known as a highway patrol car, traffic enforcement unit, speed enforcement unit, or road policing unit, is a police car tasked with enforcing traffic laws and conducting traffic stops, typically on major roadways such as highways. They are often relatively high-performance vehicles compared to patrol cars, as they must be capable of catching up to fast-moving vehicles. They may have specific markings or special emergency lights to either improve or hinder visibility. Alternatively, some traffic cars may use the same models as patrol cars, and may barely differ from them aside from markings, radar speed guns, and traffic-oriented equipment.
Unmarked car
An unmarked car is a police car that lacks markings and easily visible or roof-mounted emergency lights. They are used for varying purposes, ranging from standard patrol and traffic enforcement to sting operations and detective work. They have the advantage of not being immediately recognizable, and are considered a valuable tool in catching criminals in the commission of a crime or by surprise. The resemblance an unmarked police car has to a civilian vehicle varies based on its application: it may use the same model as marked patrol cars and be virtually identical to them aside from the lack of roof-mounted emergency lights, with pushbars and spotlights clearly visible. Alternatively, it may use a common civilian vehicle model that blends in with traffic, with emergency lights embedded in the grille or capable of being hidden and revealed; Japanese unmarked cars, for example, have retractable beacons built into the roof.
Unmarked cars typically use regular civilian license plates, occasionally even in jurisdictions where emergency vehicles and government vehicles use unique license plates, though some agencies or jurisdictions may be able to use the unique plates anyway; for example, American federal law enforcement agencies may use either government plates or regular license plates.
The term "undercover car" is often used to describe unmarked cars. However, this usage is erroneous: unmarked cars are police cars that lack markings but have police equipment, emergency lights, and sirens, while undercover cars lack these entirely and are essentially civilian vehicles used by law enforcement in undercover operations to avoid detection.
The close resemblance of unmarked cars to civilian cars has created concerns of police impersonation. Some police officers advise motorists that they do not have to pull over in a secluded location and instead can wait until they reach somewhere safer. In the UK, officers must be wearing uniforms in order to make traffic stops. Motorists can also ask for the officer's badge and identification or call an emergency number or a police non-emergency number to confirm if the police unit is genuine.
Ghost car
A ghost car, also known as a stealth car or semi-marked car, is a police car that combines elements of both an unmarked car and a marked patrol car, featuring markings that are either similar colors to the vehicle's body paint, or are reflective graphics that are difficult to see unless illuminated by lights or viewed at certain angles. Ghost cars are often used for traffic enforcement, though they may also be used in lieu of unmarked cars in jurisdictions where they are prohibited or have their enforcement capabilities limited, such as being unable to conduct traffic stops. In these instances, the markings on ghost cars may be sufficient to legally count as marked police cars, despite the markings being difficult to see.
Utility vehicle
A utility vehicle is a police car used for utility or support purposes as opposed to regular police duties. Utility vehicles are usually all-wheel drive vehicles with cargo space such as SUVs, pickup trucks, vans, utes, or off-road vehicles. They are often used to transport or tow assets such as trailers, equipment, or other vehicles such as police boats; they are alternatively used for or are capable of off-roading, especially in fleets where most other vehicles cannot do so. They can also be used for animal control, if that is the responsibility of police within that jurisdiction. Some utility vehicles can be used for transporting teams of officers and occasionally have facilities to securely detain and transport a small number of suspects, provided there is enough seating space.
Police dog vehicle
A police dog vehicle, also known as a K-9 vehicle or a police dog unit, is a police car modified to transport police dogs. The models used for these vehicles range from the same as patrol cars to dedicated SUVs, pickup trucks, or vans. To provide sufficient space for the police dog, there is usually a cage in the trunk or rear seats with enough space for the dog, though some agencies may put the cage in the front passenger seat, or may lack a cage entirely and simply have the dog in the rear compartment. There may or may not be space to transport detainees or additional officers. Police dog vehicles almost always have markings noting they have a police dog on board, typically just the agency's standard markings with the added notice.
Decoy car
A decoy car is a police car used to establish a police presence, typically to deter traffic violations or speeding, without a police officer actually being present. They may be older models retired from use, civilian cars modified to resemble police cars, or demonstration vehicles. In some instances, a "decoy car" may not be a vehicle at all, but rather a life-sized cutout or sign depicting a police car. Use of decoy cars is intended to ensure crime deterrence without having to commit manpower, allowing the officer that would otherwise be there to be freed up for other assignments.
In the United Kingdom, decoy liveried police cars and vans may be parked on filling station forecourts to deter motorists dispensing fuel then making off without payment, also known as "bilking".
The use of decoy cars is entirely up to the agency, though in 2005, the Virginia General Assembly considered a bill that would make decoy cars a legal requirement for police. The bill stated in part: "Whenever any law-enforcement vehicle is permanently taken out of service... such vehicle shall be placed at a conspicuous location within a highway median in order to deter violations of motor vehicle laws at that location. Such vehicles shall... be rotated from one location to another as needed to maintain their deterrent effect."
Surveillance car
A surveillance car is a police car used for surveillance purposes. Usually SUVs, vans, or trucks, surveillance cars can be marked, unmarked, undercover, or disguised, and may be crewed or remotely monitored. They are used to gather evidence of criminal offenses or provide better vantage points at events or high-traffic areas. The surveillance method used varies, and may include CCTV, hidden cameras, wiretapping devices, or even aerial platforms. Some surveillance cars may also be used as bait cars, deployed to catch car thieves.
Armored vehicle
A police armored vehicle, also known as a SWAT vehicle, tactical vehicle, or rescue vehicle, is an armored vehicle used in a police capacity. They are typically four-wheeled armored vehicles with similar configurations to military light utility vehicles, infantry mobility vehicles, internal security vehicles, MRAPs, or similar armored personnel carriers, that lack mounted and installed weaponry. As their name suggests, they are typically used to transport police tactical units such as SWAT teams, though they may also be used in riot control or to establish police presence at events.
Mobile command center
A mobile command center, also known as an emergency operations center, mobile command post, or mobile police station, is a truck used to provide a central command center at the scene of an incident, or to establish a visible police presence or temporary police station at an event.
Bomb disposal vehicle
A bomb disposal vehicle is a vehicle used by bomb disposal squads to transport equipment and bomb disposal robots, or to store bombs for later disposal. They are often vans or trucks, typically with at least one bomb containment chamber installed in the rear of the vehicle, and ramps to allow bomb disposal robots to access the vehicle. Bomb disposal vehicles are generally not explosive-resistant and are only used for transporting explosives for disposal, not actively disposing of them.
Armed vehicle
An armed police vehicle is a police vehicle that has lethal weaponry installed on it. These are often technicals or light utility vehicles with machine gun turrets, and may or may not lack emergency lights and sirens. Armed police vehicles are very rare and are usually only used in wartime, in regions with very high violent crime rates, or where combat with organized crime or insurgencies is common to the point that armed police vehicles are necessary; for example, the Iraqi Police received technicals during the Iraq War, and the National Police of Ukraine used armed vehicles during the 2022 Russian invasion of Ukraine, including the STREIT Group Spartan and a modified BMW 6 Series with a mounted machine gun.
These should not be confused with police vehicles that have turrets but do not have guns, which are often just police armored vehicles or, if less-lethal munitions are used, riot control vehicles.
Riot control vehicle
A riot control vehicle, also known as a riot suppression vehicle or simply a riot vehicle, is an armored or reinforced police vehicle used for riot control. A wide array of vehicles, from armored SUVs and vans to dedicated trucks and armored personnel carriers, are used by law enforcement to suppress or intimidate riots, protests, and public order crimes; hold and reinforce a police barricade to keep the scene contained; or simply transport officers and equipment at the scene in a manner safer than what could be achieved with a standard police car.
Common modifications include tear gas launchers, shields, and caged windows. Some riot control vehicles also include less-lethal weaponry and devices, such as water cannons and long-range acoustic devices.
Community engagement, liaison, and demonstration vehicles
A community engagement vehicle, also known as a liaison vehicle, demonstration vehicle, or parade car, is a police car used for display and community policing purposes, but not for patrol duties. These are often performance cars, modified cars, classic police cars, or vehicles seized from convicted criminals and converted to police cars that are used to represent the agency in parades, promote a specific program (such as the D.A.R.E. program), or help build connections between law enforcement and those that the vehicle appeals to.
Some cars can be visibly marked but not fitted with audio or visual warning devices. These are often used by community liaison officers, administrative staff, or high-ranking officers for transport to meetings, engagements, and community events.
Some vehicles are produced by automotive manufacturers with police markings to showcase them to police departments; these are usually concepts, prototypes, or reveals of their police fleet offerings. Emergency vehicle equipment manufacturers such as Federal Signal, Whelen, and Code 3 also use unofficial police cars to demonstrate their emergency vehicle equipment.
Equipment
Police cars are usually passenger car models which are upgraded to the specifications required by the purchasing police service. Several vehicle manufacturers provide a "police package" option, which is built to police specifications from the factory. Agencies may add to these modifications by adding their own equipment and making their own modifications after purchasing a vehicle.
Mechanical modifications
Modifications a police car might undergo include adjustments for higher durability, speed, high-mileage driving, and long periods of idling at a higher temperature. This is usually accomplished through installing heavy duty suspension, brakes, calibrated speedometer, tires, alternator, transmission, and cooling systems. The car's stock engine may be modified or replaced by a more powerful engine from another vehicle from the manufacturer. The car's electrical system may also be upgraded to accommodate for the additional electronic police equipment.
Warning systems
Police vehicles are often fitted with audible and visual warning systems to alert other motorists of their approach or position on the road. In many countries, use of the audible and visual warnings affords the officer a degree of exemption from road traffic laws (such as the right to exceed speed limits, or to treat red stop lights as a yield sign) and may also suggest a duty on other motorists to yield for the police car and allow it to pass.
Warning systems on a police vehicle can be of two types: passive or active.
Passive visual warnings
Passive visual warnings are the livery markings on the vehicle. Police vehicle markings usually make use of bright colors or strong contrast with the base color of the vehicle. Some police cars have retroreflective markings that reflect light for better visibility at night, though others may only have painted on or non-reflective markings. Examples of markings and designs used in police liveries include black and white, Battenburg markings, Sillitoe tartan, and "jam sandwich" markings.
Police vehicle markings include, at the very least, the word "police" (or a similar applicable phrase if the agency does not use that term, such as "sheriff", "gendarmerie", "state trooper", "public safety" etc.) and the agency's name or jurisdiction (such as "national police" or "Chicago Police"). Also common are the agency's seal, the jurisdiction's seal, and a unit number. Text is usually in the national language or local language, though other languages may be used where appropriate, such as in ethnic enclaves or areas with large numbers of tourists.
Unmarked vehicles generally lack passive visual warnings, while ghost cars have markings that are visible only at certain angles, such as from the rear or sides, making them appear unmarked when viewed from the front.
Another unofficial passive visual warning of police vehicles can simply be the vehicle's silhouette if its use as a police car is common, such as that of the Ford Crown Victoria in North America, or the presence of emergency vehicle equipment on the vehicle, such as a pushbar or a roof-mounted lightbar.
Active visual warnings
Active visual warnings are the emergency lights on the vehicle. These lights are used while responding to attract the attention of other road users and coerce them into yielding for the police car to pass. The colors used by police car lights depend on the jurisdiction, though they are commonly blue and red. Several types of flashing lights are used, such as rotating beacons, halogen lamps, or LED strobes. Some agencies use arrow sticks to direct traffic, or message display boards to provide short messages or instructions to motorists. The headlights and tail lights of some vehicles can be made to flash, or small strobe lights can be fitted in the vehicle lights.
Audible warnings
Audible warnings are the sirens on the vehicle. These sirens alert road users to the presence of an emergency vehicle before they can be seen, to warn of their approach. The first audible warnings were mechanical bells, mounted to either the front or roof of the car. A later development was the rotating air siren, which makes noise when air moves past it. Most modern police vehicles use electronic sirens, which can produce a range of different noises. Different models and manufacturers have distinct siren noises; one siren model, the Rumbler, emits a low frequency sound that can be felt through vibrations, allowing those who would not otherwise hear the siren or see the emergency vehicle to still know it is approaching.
Different siren noises may be used depending on traffic conditions and the context. For example, on a clear road, "wail" (a long up-and-down unbroken tone) is often used, whereas in heavy slow traffic or at intersections, "yelp" (essentially a sped-up wail) may be preferred. Other noises are used in certain countries and jurisdictions, such as "phaser" (a series of brief sped-up beeps) and "hi-lo" (a two-tone up-down sound). Some vehicles may also be fitted with electronic airhorns.
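The tones described above differ mainly in how fast the pitch sweeps between its low and high ends. The NumPy sketch below generates wail-like and yelp-like tones; the frequency range and sweep rates are illustrative guesses, not any manufacturer's specification.

import numpy as np

def siren(sweep_hz: float, duration_s: float = 4.0,
          f_low: float = 650.0, f_high: float = 1300.0,
          sample_rate: int = 44100) -> np.ndarray:
    """Return a siren-like tone whose pitch sweeps between f_low and
    f_high sweep_hz times per second (all values illustrative)."""
    t = np.linspace(0.0, duration_s, int(sample_rate * duration_s), endpoint=False)
    # Triangle wave between 0 and 1 giving the instantaneous pitch position.
    sweep = np.abs(((t * sweep_hz) % 1.0) * 2.0 - 1.0)
    freq = f_low + (f_high - f_low) * sweep
    # Integrate the instantaneous frequency to obtain the phase.
    phase = 2.0 * np.pi * np.cumsum(freq) / sample_rate
    return np.sin(phase)

wail = siren(sweep_hz=0.25)  # one slow up-and-down sweep over ~4 seconds
yelp = siren(sweep_hz=4.0)   # the same sweep repeated much faster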
Police-specific equipment
A wide range of equipment is carried in police cars, used to make police work easier or safer. The installation of this equipment partially transforms the police car into a mobile office: police officers use their car to fill out different forms, print documents, type on a computer or a console, and examine different screens, all while driving. Ergonomics in the layout and installation of these items in the police car plays an important role in the comfort and safety of the police officers at work, and in preventing injuries such as back pain and musculoskeletal disorders.
Communication devices
Police radio systems are generally standard equipment in police cars, used to communicate between the officers assigned to the car and the dispatcher. Mobile data terminals are also common as alternative ways to communicate with the dispatcher or receive important information, and are typically a tablet or a dashboard-mounted laptop installed in the car.
Suspect transport enclosure
Suspect transport enclosures are typically located at the rear of the vehicle, taking up the rear seats or rear compartment. The seats are sometimes modified to be a hard metal or plastic bench. Separating the transport enclosure is often a partition, a barrier between the front and rear compartments typically made of metal with a window made of reinforced glass, clear plastic, or metal mesh or bars. Some police cars do not have partitions; in these instances, another officer may have to sit in the rear to secure the detainee, or a dedicated transport vehicle may be called.
Weapon storage
Weapons may be stored in the trunk or front compartment of the vehicle. In countries where police officers are already armed with handguns, long guns such as rifles or shotguns may be kept on a gun rack in the front or in the trunk, alongside ammunition. In countries where police are not armed or do not keep their guns on them, handguns may be kept in the car instead; for example, Norwegian Police Service officers are issued handguns, but they keep them in a locked compartment in their car that requires high-ranking authorization to access. Less-lethal weaponry and riot gear may also be stored in the trunk.
Rescue equipment
Rescue equipment such as first aid kits, dressing, fire extinguishers, defibrillators, and naloxone kits are often kept in police cars to provide first aid and rescue when necessary.
Scene equipment
Tools such as barricade tape, traffic cones, traffic barricades, and road flares are often kept in police cars to secure scenes for further investigation.
Recording equipment
Recording equipment such as dashcams and interior cameras are installed in some police cars to make audio and video recordings of incidents, police interactions, and evidence.
Detectors
Detector devices such as radar speed guns, automatic number-plate recognition, and LoJack are used in some police cars, typically in traffic enforcement, to detect speeding violations, read multiple plates for flags (such as warrants or lack of insurance) without having to manually check, and track stolen cars, respectively.
Pushbar
Pushbars, also known as bullbars, rambars, or nudge bars, are fitted to the chassis of a police car to augment the front bumper. They allow the car to push disabled vehicles out of a roadway, breach small and light objects, and conduct PIT maneuvers with less damage to the front of the vehicle. Pushbar designs vary; some are small and only protect the grille, while others have extensions that shield as far as the headlights. Some pushbars also have emergency lights installed on them, providing additional visual warnings.
Spotlights
Spotlights are small searchlights typically installed on the A-pillar of a police car. They are used to provide light in darkened areas or where necessary, such as down alleyways or into a suspect's car during a nighttime traffic stop. These spotlights can be aimed and activated by the officers inside the vehicle. Usually, one or two are installed on the car, though more may occasionally be installed on the roof, grille, bumper, or pushbar.
Run lock
Run locks allow the vehicle's engine to be left running without the keys being in the ignition. This allows adequate power to be supplied to the vehicle's equipment at the scene of an incident without battery drain. The vehicle can only be driven after inserting the keys; if the keys are not inserted, the engine will switch off if the handbrake is disengaged or the footbrake is activated.
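The interlock described above reduces to a small piece of decision logic. A minimal Python sketch follows, with invented signal names; real implementations are vehicle-specific.

def engine_should_keep_running(key_inserted: bool,
                               run_lock_active: bool,
                               handbrake_engaged: bool,
                               footbrake_pressed: bool) -> bool:
    """Hypothetical run-lock interlock. With the key inserted the
    engine runs normally; without it, the engine keeps running only
    while the run lock is active, the handbrake remains engaged and
    the footbrake is untouched."""
    if key_inserted:
        return True
    return run_lock_active and handbrake_engaged and not footbrake_pressed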
Ballistic protection
Some police cars can be optionally upgraded with bullet-resistant armor in the car doors. The armor is typically made from ceramic ballistic plates and aramid baffles. A 2016 news report said that Ford sells 5 to 10 percent of their American police vehicles with ballistic protection in the doors. In 2017, New York City Mayor Bill de Blasio announced that all NYPD patrol cars would have bullet-resistant door panels and bullet-resistant window inserts installed.
Use by country
Police vehicles in Armenia
Police vehicles in Australia
Police vehicles in Austria
Police vehicles in Belgium
Police vehicles in China
Police vehicles in the Czech Republic
Police vehicles in Denmark
Police vehicles in France
Police vehicles in Germany
Police vehicles in Greece
Police vehicles in Hong Kong
Police vehicles in Hungary
Police vehicles in Iceland
Police vehicles in India
Police vehicles in Indonesia
Police vehicles in Italy
Police vehicles in Japan
Police vehicles in Malaysia
Police vehicles in the Netherlands
Police vehicles in New Zealand
Police vehicles in The Philippines
Police vehicles in Poland
Police vehicles in Romania
Police vehicles in Russia
Police vehicles in South Africa
Police vehicles in Sweden
Police vehicles in Taiwan
Police vehicles in Turkey
Police vehicles in Ukraine
Police vehicles in the United Kingdom
Police vehicles in the United States and Canada
Police vehicles in Vietnam
Police vehicles in South Korea
| Technology | Specific-purpose transportation | null |
1297539 | https://en.wikipedia.org/wiki/Free%20particle | Free particle | In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point in space.
Classical free particle
The classical free particle is characterized by a fixed velocity v. The momentum is given by
$$\mathbf{p} = m\mathbf{v}$$
and the kinetic energy (equal to the total energy) by
$$E = \frac{1}{2}mv^2 = \frac{p^2}{2m},$$
where m is the mass of the particle and v is the vector velocity of the particle.
Quantum free particle
Mathematical description
A free particle with mass $m$ in non-relativistic quantum mechanics is described by the free Schrödinger equation:
$$-\frac{\hbar^2}{2m}\nabla^2\psi(\mathbf{r}, t) = i\hbar\frac{\partial}{\partial t}\psi(\mathbf{r}, t),$$
where ψ is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by a complex plane wave:
$$\psi(\mathbf{r}, t) = A e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)},$$
with amplitude A. The angular frequency obeys two different rules according to the particle's mass:
if the particle has mass $m$: $\omega = \frac{\hbar k^2}{2m}$ (or equivalently $E = \frac{\hbar^2 k^2}{2m}$), as derived just below;
if the particle is a massless particle: $\omega = kc$.
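The rule for a massive particle follows from substituting the plane wave into the free Schrödinger equation:
$$i\hbar\frac{\partial}{\partial t} A e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} = \hbar\omega\,\psi, \qquad -\frac{\hbar^2}{2m}\nabla^2 A e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)} = \frac{\hbar^2 k^2}{2m}\,\psi,$$
so the two sides agree only when $\hbar\omega = \hbar^2 k^2/2m$, i.e. $\omega = \hbar k^2/2m$.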
The eigenvalue spectrum is infinitely degenerate, since to each eigenvalue $E>0$ there corresponds an infinite number of eigenfunctions, one for each possible direction of $\mathbf{k}$.
The de Broglie relations $\mathbf{p} = \hbar\mathbf{k}$ and $E = \hbar\omega$ apply. Since the potential energy is (stated to be) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics:
$$E = \frac{p^2}{2m} = \frac{\hbar^2 k^2}{2m}.$$
As for all quantum particles, free or bound, the Heisenberg uncertainty principles apply. Since the plane wave has definite momentum (definite energy), the probability density of finding the particle is uniform over all of space. In other words, the wave function is not normalizable in a Euclidean space; these stationary states cannot correspond to physically realizable states.
Measurement and calculations
The normalization condition for the wave function states that if a wavefunction belongs to the quantum state space $L^2(\mathbb{R}^3)$, then the integral of the probability density function
$$\rho(\mathbf{r}, t) = \psi^*(\mathbf{r}, t)\,\psi(\mathbf{r}, t) = |\psi(\mathbf{r}, t)|^2,$$
where * denotes complex conjugate, over all space is the probability of finding the particle in all space, which must be unity if the particle exists:
$$\int_{\text{all space}} |\psi(\mathbf{r}, t)|^2\, d^3\mathbf{r} = 1.$$
The state of a free particle given by plane wave solutions is not normalizable, as
$$\int_{\text{all space}} \left|A e^{i(\mathbf{k}\cdot\mathbf{r} - \omega t)}\right|^2 d^3\mathbf{r} = |A|^2 \int_{\text{all space}} d^3\mathbf{r} = \infty$$
for any fixed time $t$. Using wave packets, however, the states can be expressed as functions that are normalizable.
Wave packet
Using the Fourier inversion theorem, the free particle wave function may be represented by a superposition of momentum eigenfunctions, i.e., a wave packet:
$$\psi(\mathbf{r}, t) = \frac{1}{(2\pi)^{3/2}} \int \hat\psi_0(\mathbf{k})\, e^{i(\mathbf{k}\cdot\mathbf{r} - \omega(\mathbf{k})\,t)}\, d^3\mathbf{k},$$
where
$$\omega(\mathbf{k}) = \frac{\hbar k^2}{2m}$$
and $\hat\psi_0(\mathbf{k})$ is the Fourier transform of a "sufficiently nice" initial wavefunction $\psi(\mathbf{r}, 0)$.
The expectation value of the momentum p for the complex plane wave is
$$\langle\mathbf{p}\rangle = \hbar\mathbf{k},$$
and for the general wave packet it is
$$\langle\mathbf{p}\rangle = \int \hbar\mathbf{k}\, |\hat\psi_0(\mathbf{k})|^2\, d^3\mathbf{k}.$$
The expectation value of the energy E is
$$\langle E\rangle = \int \frac{\hbar^2 k^2}{2m}\, |\hat\psi_0(\mathbf{k})|^2\, d^3\mathbf{k}.$$
Group velocity and phase velocity
The phase velocity is defined to be the speed at which a plane wave solution propagates, namely
$$v_p = \frac{\omega}{k} = \frac{\hbar k}{2m} = \frac{p}{2m}.$$
Note that $\frac{p}{2m}$ is not the speed of a classical particle with momentum $p$; rather, it is half of the classical velocity.
Meanwhile, suppose that the initial wave function is a wave packet whose Fourier transform $\hat\psi_0$ is concentrated near a particular wave vector $\mathbf{k}_0$. Then the group velocity of the plane wave is defined as
$$v_g = \nabla_{\mathbf{k}}\,\omega(\mathbf{k})\Big|_{\mathbf{k}=\mathbf{k}_0} = \frac{\hbar\mathbf{k}_0}{m} = \frac{\mathbf{p}}{m},$$
which agrees with the formula for the classical velocity of the particle. The group velocity is the (approximate) speed at which the whole wave packet propagates, while the phase velocity is the speed at which the individual peaks in the wave packet move. The figure illustrates this phenomenon, with the individual peaks within the wave packet propagating at half the speed of the overall packet.
Spread of the wave packet
The notion of group velocity is based on a linear approximation to the dispersion relation $\omega(k)$ near a particular value of $k$. In this approximation, the amplitude of the wave packet moves at a velocity equal to the group velocity without changing shape. This result is an approximation that fails to capture certain interesting aspects of the evolution of a free quantum particle. Notably, the width of the wave packet, as measured by the uncertainty in the position, grows linearly in time for large times. This phenomenon is called the spread of the wave packet for a free particle.
Specifically, it is not difficult to compute an exact formula for the uncertainty $(\Delta X)_{\psi(t)}$ as a function of time, where $X$ is the position operator. Working in one spatial dimension for simplicity, we have:
$$(\Delta X)_{\psi(t)}^2 = \frac{t^2}{m^2}(\Delta P)_{\psi_0}^2 + \frac{2t}{m}\left(\frac{\langle XP + PX\rangle_{\psi_0}}{2} - \langle X\rangle_{\psi_0}\langle P\rangle_{\psi_0}\right) + (\Delta X)_{\psi_0}^2,$$
where $\psi_0$ is the time-zero wave function. The expression in parentheses in the second term on the right-hand side is the quantum covariance of $X$ and $P$.
Thus, for large positive times, the uncertainty in $X$ grows linearly, with the coefficient of $t$ equal to $(\Delta P)_{\psi_0}/m$. If the momentum of the initial wave function is highly localized, the wave packet will spread slowly and the group-velocity approximation will remain good for a long time. Intuitively, this result says that if the initial wave function has a very sharply defined momentum, then the particle has a sharply defined velocity and will (to good approximation) propagate at this velocity for a long time.
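These statements are straightforward to verify numerically. The Python sketch below (with the convenience units $\hbar = m = 1$ and illustrative packet parameters) evolves a one-dimensional Gaussian wave packet exactly in Fourier space, where free evolution simply multiplies each mode by $e^{-i\hbar k^2 t/2m}$, and prints the packet's centre and width over time.

import numpy as np

hbar = m = 1.0                       # convenient units
N, L = 4096, 400.0                   # grid points and box length
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

k0, sigma = 2.0, 2.0                 # illustrative mean wave number and initial width
psi0 = np.exp(-(x ** 2) / (4 * sigma ** 2) + 1j * k0 * x)
psi0 /= np.sqrt(np.sum(np.abs(psi0) ** 2) * (L / N))   # normalize

def evolve(psi, t):
    # Free evolution is exact in Fourier space: each mode picks up
    # the phase factor exp(-i * hbar * k^2 * t / (2 m)).
    return np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * hbar * k ** 2 * t / (2 * m)))

for t in (0.0, 10.0, 20.0, 40.0):
    rho = np.abs(evolve(psi0, t)) ** 2 * (L / N)        # probability per grid point
    mean_x = np.sum(x * rho)
    width = np.sqrt(np.sum((x - mean_x) ** 2 * rho))
    print(f"t = {t:5.1f}   <x> = {mean_x:7.2f}   dx = {width:6.2f}")

With these parameters, the centre of the packet advances at the group velocity $\hbar k_0/m = 2$, while the width grows with time and approaches the linear growth rate derived above.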
Relativistic quantum free particle
There are a number of equations describing relativistic particles: see relativistic wave equations.
| Physical sciences | Basics_4 | Physics |
1297796 | https://en.wikipedia.org/wiki/Oxygen%20mask | Oxygen mask | An oxygen mask is a mask that provides a method to transfer breathing oxygen gas from a storage tank to the lungs. Oxygen masks may cover only the nose and mouth (oral nasal mask) or the entire face (full-face mask). They may be made of plastic, silicone, or rubber.
In certain circumstances, oxygen may be delivered via a nasal cannula instead of a mask.
Medical plastic oxygen masks
Medical plastic oxygen masks are used primarily by medical care providers for oxygen therapy because they are disposable, which reduces cleaning costs and infection risks. Mask design determines the accuracy of the oxygen delivered, and many different medical situations require treatment with oxygen.
Oxygen occurs naturally in room air at 21%, and higher percentages are often essential in medical treatment. At these higher percentages, oxygen is classified as a drug: too much can be harmful to a patient's health, resulting in oxygen dependence over time and, in extreme circumstances, blindness. For these reasons, oxygen therapy is closely monitored. Masks are light in weight and attached using an elasticated headband or ear loops. They are transparent, allowing the face to remain visible for patient assessment by healthcare providers and reducing the sensation of claustrophobia experienced by some patients when wearing an oxygen mask. The vast majority of patients having an operation will at some stage wear an oxygen mask; they may alternatively wear a nasal cannula, but oxygen delivered in this way is less accurate and restricted in concentration.
The global disposable oxygen masks market, according to Altus Market Research, has the potential to grow by US$1.1 billion between 2019 and 2023, with the pace of growth also expected to accelerate over this period.
Silicone and rubber masks
Silicone and rubber oxygen masks are heavier than plastic masks. They are designed to provide a good seal for long-duration use by aviators, medical research subjects, and patients in hyperbaric chambers or others who require administration of pure oxygen, such as victims of carbon monoxide poisoning and decompression sickness. Dr. Arthur H. Bulbulian pioneered the first modern viable oxygen mask, worn by World War II pilots and used by hospitals. Valves inside these tight-fitting masks control the flow of gases into and out of the masks, so that rebreathing of exhaled gas is minimised.
Hoses and tubing and oxygen regulators
Hoses or tubing connect an oxygen mask to the oxygen supply. Hoses are larger in diameter than tubing and can allow greater oxygen flow. When a hose is used it may have a ribbed or corrugated design to allow bending of the hose while preventing twisting and cutting off the oxygen flow. The quantity of oxygen delivered from the storage tank to the oxygen mask is controlled by a valve called a regulator. Some types of oxygen masks have a breathing bag made of plastic or rubber attached to the mask or oxygen supply hose to store a supply of oxygen to allow deep breathing without waste of oxygen with use of simple fixed flow regulators.
Oxygen masks for aviators
History
An early 1919 high-altitude oxygen system used a vacuum flask of liquid oxygen to supply two people for one hour at altitude. The liquid passed through several warming stages before use, because the expansion as it evaporated, absorbing the latent heat of vaporization, would otherwise make the gasified oxygen so cold that it could cause instant frostbite of the lungs.
The first successful oxygen mask was created in 1941 by the Armenian-born Dr. Arthur Bulbulian, who worked in the field of facial prosthetics.
Many designs of aviator's oxygen masks contain a microphone to transmit speech to other crew members and to the aircraft's radio. Military aviators' oxygen masks have face pieces that partially cover the sides of the face and protect the face against flash burns, flying particles, and effects of a high speed air stream hitting the face during emergency evacuation from the aircraft by ejection seat or parachute. They are often part of a pressure suit or intended for use with a flight helmet.
Regulations
Three main kinds of oxygen masks are used by pilots and crews who fly at high altitudes: continuous flow, diluter demand, and pressure demand.
In a continuous-flow system, oxygen is provided to the user continuously. It does not matter if the user is exhaling or inhaling as oxygen is flowing from the time the system is activated. Below the oxygen mask is a rebreather bag that collects oxygen during exhalation and as a result allows a higher flow rate during the inhalation cycle.
Diluter-demand and pressure-demand masks supply oxygen only when the user inhales. They each require a good seal between the mask and the user's face.
In a diluter-demand system, as the altitude increases (ambient pressure, and therefore the partial pressure of ambient oxygen, decreases), the oxygen flow increases such that the partial pressure of oxygen is roughly constant. Diluter-demand oxygen systems can be used up to .
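To illustrate the constant-partial-pressure idea numerically, the Python sketch below estimates the oxygen fraction a diluter-demand system would need to supply so that the inspired oxygen partial pressure stays at its sea-level value. The isothermal barometric model and its parameters are textbook approximations chosen for the example, not a description of any actual regulator.

import math

def ambient_pressure_kpa(altitude_m: float) -> float:
    # Crude isothermal barometric model with an ~8400 m scale height.
    return 101.325 * math.exp(-altitude_m / 8400.0)

def required_o2_fraction(altitude_m: float) -> float:
    # Fraction of oxygen needed so that the inspired O2 partial pressure
    # equals its sea-level value of 0.21 * 101.325 kPa.
    target_ppo2 = 0.21 * 101.325
    return min(1.0, target_ppo2 / ambient_pressure_kpa(altitude_m))

for alt in (0, 3000, 6000, 9000, 12000):
    print(f"{alt:6d} m: required O2 fraction ~ {required_o2_fraction(alt):.2f}")
# In this crude model the fraction reaches 1.0 (pure oxygen) near 13 km,
# illustrating why systems that dilute ambient air have an altitude ceiling.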
In a pressure-demand system, oxygen in the mask is above ambient pressure, permitting breathing above . Because the pressure inside the mask is greater than the pressure around the user's torso, inhalation is easy, but exhalation requires more effort. Aviators are trained in pressure-demand breathing in altitude chambers. Because they seal tightly, pressure-demand-type oxygen masks are also used in hyperbaric oxygen chambers and for oxygen breathing research projects with standard oxygen regulators.
Supplemental oxygen is needed for flying more than 30 minutes at cabin pressure altitudes of or higher, pilots must use oxygen at all times above and each occupant must be provided supplemental oxygen above .
Aviation passenger masks and emergency oxygen systems
Most commercial aircraft are fitted with oxygen masks for use when cabin pressurization fails. In general, commercial aircraft are pressurized so that the cabin air is at a pressure equivalent to no more than altitude (usually somewhat lower altitude), where one can breathe normally without an oxygen mask. If the oxygen pressure in the cabin drops below a safe level, risking hypoxia, compartments containing the oxygen masks will open automatically, either above or in front of the passenger and crew seats, and in the lavatories.
In the early years of commercial flight, before pressurized cabins were invented, airliner passengers sometimes had to wear oxygen masks during routine flights.
Self-contained breathing apparatus (SCBA)
Firefighters and emergency service workers use full face masks that provide breathing air as well as eye and face protection. These masks are typically attached to a tank carried upon the back of the wearer and are called self-contained breathing apparatuses (SCBA). Open circuit SCBAs do not normally supply oxygen, as it is not necessary and constitutes an easily avoidable fire hazard. Rebreather SCBAs usually supply oxygen as this is the lightest and most compact option, and uses a simpler mechanism than other types of rebreather.
Specialized masks for astronauts
Specialized full-face masks that supply oxygen or other breathing gases are used by astronauts to remove nitrogen from their blood before space walks (EVA).
Specialized masks for pets
Specialized snout masks which supply oxygen to revive family pets have been donated to fire departments.
Oxygen delivery to divers
Divers only use pure oxygen for accelerated decompression, or from oxygen rebreathers at shallow depths where the risk of acute oxygen toxicity is acceptable. Oxygen supply during in-water decompression is via a rebreather, open circuit diving regulator, full-face mask, or diving helmet which has been prepared for oxygen service.
Built-in breathing system
Oxygen supply to divers in decompression chambers is preferably through a built-in breathing system, which uses an oxygen mask plumbed into supply and exhaust hoses which supply oxygen from outside the chamber, and discharge the exhaled oxygen-rich gas outside the chamber, using a system equivalent to two demand valves, one upstream of the diver, to supply oxygen on demand, and the other downstream, to exhaust exhaled gas on demand, so that the oxygen partial pressure in the chamber is limited to relatively safe levels. If oxygen masks are used that discharge into the chamber, the chamber air must be replaced frequently to keep the oxygen level within safe operating limits.
Anesthesia oxygen masks
Anesthesia masks are face masks designed to administer anesthetic gases to a patient through inhalation. They are made of anti-static silicone or rubber, as a static electricity spark may ignite some anesthetic gases, and are either black rubber or clear silicone. Anesthesia masks fit over the mouth and nose and have a double hose system: one hose carries inhaled anesthetic gas to the mask, and the other brings exhaled anesthetic gas back to the machine. Anesthesia masks have four-point head strap harnesses to fit securely on the head and hold the mask in place while the anesthetist controls the gases and oxygen inhaled.
Masks for high-altitude climbers
Oxygen masks are used by climbers of high peaks such as Mount Everest. Because of the severe cold and harsh conditions, oxygen masks for use at extreme altitude must be robust and effective. The oxygen storage tanks used with the masks (called oxygen bottles) are made of lightweight, high-strength metals and are wrapped in high-strength fiber such as Kevlar. These special oxygen bottles are filled with oxygen at very high pressure, which provides a longer supply of breathing oxygen than standard-pressure bottles. These systems are generally only used above .
In recent years oxygen mask systems for high-altitude climbing which pump oxygen constantly have been increasingly replaced by systems supplying oxygen on demand via nasal cannulas.
Oxygen helmets
Oxygen helmets are used in hyperbaric oxygen chambers for oxygen administration. They are transparent, lightweight plastic helmets, resembling space suit helmets, with a seal that goes around the wearer's neck. They offer a good visual field. Lightweight plastic hoses supply oxygen to the helmet and carry exhaled gas to the outside of the chamber. Oxygen helmets are often preferred for oxygen administration in hyperbaric oxygen chambers for children and for patients who are uncomfortable wearing an oxygen mask.
Mask retention systems
Medical oxygen masks are held in place by medical personnel or the user by hand, or they may be fitted with a lightweight elastic headband so the mask can be removed quickly. Full-face masks are secured by several straps. Tightly fitting oxygen masks are secured at four points by two head straps. Aviators' masks are often equipped with "quick don" harnesses that allow those in pressurized aircraft to rapidly don the masks in emergencies. Military aviators' oxygen masks are secured to flight helmets with quick-release systems.
| Technology | Food, water and health | null |
13020935 | https://en.wikipedia.org/wiki/Gold%20dredge | Gold dredge | A gold dredge is a placer mining machine that extracts gold from sand, gravel, and dirt using water and mechanical methods.
The original gold dredges were large, multi-story machines built in the first half of the 1900s.
Small suction machines are currently marketed as "gold dredges" to individuals seeking gold: just offshore from the beach of Nome, Alaska, for instance.
A large gold dredge uses a mechanical method to excavate material (sand, gravel, dirt, etc.) using steel "buckets" on a circular, continuous "bucketline" at the front end of the dredge. The material is then sorted and sifted using water. On large gold dredges, the buckets dump the material into a steel rotating cylinder (a specific type of trommel called "the screen") that is sloped downward toward a rubber belt (the stacker), which carries away oversize material (rocks) and dumps it behind the dredge. The cylinder has many holes in it to allow undersized material (including gold) to fall into a sluice box. The material that is washed or sorted away is called tailings. The rocks deposited behind the dredge (by the stacker) are called "tailing piles". The holes in the screen are sized to screen out rocks (e.g., 3/4-inch holes send anything larger than 3/4 inch to the stacker).
Concept
The basic concept of retrieving gold via placer mining has not changed since antiquity. The concept is that the gold in sand or soil will settle to the bottom because gold is heavy/dense, and dirt, sand and rock will wash away, leaving the gold behind. The original methods to perform placer mining involved gold panning, sluice boxes, and rockers. Each method involves washing sand, gravel and dirt in water. Gold then settles to the bottom of the pan, or into the bottom of the riffles of the sluice box. The gold dredge is the same concept but on a much larger scale.
Gold dredges are an important tool of gold miners around the world. They allow profitable mining at relatively low operational costs. Even though the concept is simple in principle, dredges can be engineered in different ways, allowing them to capture different sizes of gold particles. Hence the efficiency of gold dredges differs greatly depending on their specifications.
History
By the mid to late 1850s the easily accessible placer gold in California was gone, but much gold remained. The challenge of retrieving the gold took a professional mining approach to make it pay: giant machines and giant companies. Massive floating dredges scooped up millions of tons of river gravels, as steam and electrical power became available in the early 1900s.
The last giant gold dredge in California was the Natomas Number 6 dredge operating in Folsom, California, which ceased operations on 12 February 1962 as the cost of operation began to exceed the value of the gold recovered. Many of these large dredges still exist today in state-sponsored heritage areas (Sumpter Valley Gold Dredge) or as tourist attractions (Dredge No. 4 National Historic Site of Canada).
Gold dredges were used in New Zealand from the 1860s, although the earlier dredges were of primitive design and not very successful. Much of the New Zealand dredge technology was developed locally. The first really successful bucket dredge for gold mining was that of Choie Sew Hoy, also known as Charles Sew Hoy, in 1889. This dredge was able to work river banks and flats, as well as the bottoms of streams. It became the prototype for many similar dredges, and led to a boom in gold dredging in the South Island: in Otago rivers like the Shotover River, Clutha River and the Molyneaux River, and in West Coast rivers like the Grey River (where the last gold dredge worked until 2004). A New Zealand-born mining entrepreneur, Charles Lancelot Garland, brought the technology to New South Wales, Australia, launching the first dredge there in March 1899 and triggering a major revival of the alluvial gold mining industry. Gold dredges also operated extensively in Victoria and Queensland. Dredges were also used to mine placer deposits of other minerals, such as tin ore. In later years, some dredges were electrically powered. A gold dredge was working at Porcupine Flat, near Maldon, Victoria, until 1984.
From Australia, in turn, gold dredging technology spread to New Guinea, at the time an Australian territory, in the 1930s. Due to the remote locations of the goldfields and absence of roads in New Guinea, parts of dredges were carried to site by air and the dredge was assembled there.
Today
Since the late 1960s, dredging has returned as a popular form of gold mining. Advances in technology allow a small dredge to be carried by a single person to a remote location and to profitably process gravel banks on streams that previously were inaccessible to the giant dredges of the 1930s.
Today's dredges are versatile and popular, comprising both floating surface dredges, which use suction to draw gravel from the bottom, and submersible dredges.
Large dredges are still operating in several countries of South America (Peru, Brazil, Guyana, Colombia), Asia (Russia, China, Mongolia, Papua New Guinea) and Africa (Sierra Leone). In 2015, gold miner Tony Beets reconstructed a 70-year-old dredge (as seen in the TV series Gold Rush on the Discovery Channel). As of 2016, this is the only operating large dredge in the Klondike. However, he is currently working on restoring a second dredge 33% larger than the first. In Season 7, Episode 20, titled "Dredge vs Washplant", a two-day test showed that the running costs of the dredge were approximately 25% of those of running a washplant and feeding it with heavy equipment.
Environmental impact studies show no clear positive benefits from suction dredging and potential negative impacts on stream systems. Small scale suction dredging in rivers and streams remains a controversial land management topic and the subject of much political turmoil.
| Technology | Metallurgy | null |
13021878 | https://en.wikipedia.org/wiki/Geothermal%20power | Geothermal power | Geothermal power is electrical power generated from geothermal energy. Technologies in use include dry steam power stations, flash steam power stations and binary cycle power stations. Geothermal electricity generation is currently used in 26 countries, while geothermal heating is in use in 70 countries.
As of 2019, worldwide geothermal power capacity amounts to 15.4 gigawatts (GW), of which 23.9% (3.68 GW) is installed in the United States. International markets grew at an average annual rate of 5 percent over the three years to 2015, and global geothermal power capacity is expected to reach 14.5–17.6 GW by 2020. Based on current geologic knowledge and technology, the Geothermal Energy Association (GEA) estimates that only 6.9% of total global potential has been tapped so far, while the IPCC reported geothermal power potential to be in the range of 35 GW to 2 TW. Countries generating more than 15 percent of their electricity from geothermal sources include El Salvador, Kenya, the Philippines, Iceland, New Zealand, and Costa Rica. Indonesia has an estimated potential of 29 GW of geothermal energy resources, the largest in the world; in 2017, its installed capacity was 1.8 GW.
Geothermal power is considered to be a sustainable, renewable source of energy because the heat extraction is small compared with the Earth's heat content. The greenhouse gas emissions of geothermal electric stations average 45 grams of carbon dioxide per kilowatt-hour of electricity, or less than 5% of those of conventional coal-fired plants.
As a source of renewable energy for both power and heating, geothermal has the potential to meet 3 to 5% of global demand by 2050. With economic incentives, it is estimated that by 2100 it will be possible to meet 10% of global demand with geothermal power.
History and development
In the 20th century, demand for electricity led to the consideration of geothermal power as a generating source. Prince Piero Ginori Conti tested the first geothermal power generator on 4 July 1904 in Larderello, Italy. It successfully lit four light bulbs. Later, in 1911, the world's first commercial geothermal power station was built there. Experimental generators were built in Beppu, Japan and the Geysers, California, in the 1920s, but Italy was the world's only industrial producer of geothermal electricity until 1958.
In 1958, New Zealand became the second major industrial producer of geothermal electricity when its Wairakei station was commissioned. Wairakei was the first station to use flash steam technology. Over the past 60 years, net fluid production has been in excess of 2.5 km³. Subsidence at Wairakei-Tauhara has been an issue in a number of formal hearings related to environmental consents for expanded development of the system as a source of renewable energy.
In 1960, Pacific Gas and Electric began operation of the first successful geothermal electric power station in the United States at The Geysers in California. The original turbine lasted for more than 30 years and produced 11 MW net power.
An organic fluid based binary cycle power station was first demonstrated in 1967 in the Soviet Union and later introduced to the United States in 1981, following the 1970s energy crisis and significant changes in regulatory policies. This technology allows the use of temperature resources as low as 81 °C. In 2006, a binary cycle station in Chena Hot Springs, Alaska, came on-line, producing electricity from a record low fluid temperature of 57 °C (135 °F).
Geothermal electric stations have until recently been built exclusively where high-temperature geothermal resources are available near the surface. The development of binary cycle power plants and improvements in drilling and extraction technology may enable enhanced geothermal systems over a much greater geographical range. Demonstration projects are operational in Landau-Pfalz, Germany, and Soultz-sous-Forêts, France, while an earlier effort in Basel, Switzerland was shut down after it triggered earthquakes. Other demonstration projects are under construction in Australia, the United Kingdom, and the United States of America.
The thermal efficiency of geothermal electric stations is low, around 7–10%, because geothermal fluids are at a low temperature compared with steam from boilers. By the laws of thermodynamics this low temperature limits the efficiency of heat engines in extracting useful energy during the generation of electricity. Exhaust heat is wasted, unless it can be used directly and locally, for example in greenhouses, timber mills, and district heating. The efficiency of the system does not affect operational costs as it would for a coal or other fossil fuel plant, but it does factor into the viability of the station. In order to produce more energy than the pumps consume, electricity generation requires high-temperature geothermal fields and specialized heat cycles. Because geothermal power does not rely on variable sources of energy, unlike, for example, wind or solar, its capacity factor can be quite large – up to 96% has been demonstrated. However, the global average capacity factor was 74.5% in 2008, according to the IPCC.
Resources
The Earth's heat content is about . This heat naturally flows to the surface by conduction at a rate of 44.2 TW and is replenished by radioactive decay at a rate of 30 TW. These power rates are more than double humanity's current energy consumption from primary sources, but most of this power is too diffuse (approximately 0.1 W/m² on average) to be recoverable. The Earth's crust effectively acts as a thick insulating blanket which must be pierced by fluid conduits (of magma, water or other) to release the heat underneath.
Electricity generation requires high-temperature resources that can only come from deep underground. The heat must be carried to the surface by fluid circulation, either through magma conduits, hot springs, hydrothermal circulation, oil wells, drilled water wells, or a combination of these. This circulation sometimes exists naturally where the crust is thin: magma conduits bring heat close to the surface, and hot springs bring the heat to the surface. If a hot spring is not available, a well must be drilled into a hot aquifer. Away from tectonic plate boundaries the geothermal gradient is 25–30 °C per kilometre (km) of depth in most of the world, so wells would have to be several kilometres deep to permit electricity generation. The quantity and quality of recoverable resources improves with drilling depth and proximity to tectonic plate boundaries.
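As a rough illustration of why such depths are needed, the required well depth for a target fluid temperature follows directly from the gradient; a minimal sketch, in which the 15 °C surface temperature and the 150 °C target are illustrative assumptions rather than figures from this article:

```python
def depth_for_temperature(target_c, surface_c=15.0, gradient_c_per_km=27.5):
    """Estimate the well depth (km) needed to reach a target rock temperature,
    assuming a linear geothermal gradient (25-30 degC/km away from plate boundaries)."""
    return (target_c - surface_c) / gradient_c_per_km

# Reaching ~150 degC, roughly what flash-steam generation requires, at a mid-range gradient:
print(f"{depth_for_temperature(150.0):.1f} km")  # ~4.9 km, i.e. several kilometres deep
```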
In ground that is hot but dry, or where water pressure is inadequate, injected fluid can stimulate production. Developers bore two holes into a candidate site, and fracture the rock between them with explosives or high-pressure water. Then they pump water or liquefied carbon dioxide down one borehole, and it comes up the other borehole as a gas. This approach is called hot dry rock geothermal energy in Europe, or enhanced geothermal systems in North America. Much greater potential may be available from this approach than from conventional tapping of natural aquifers.
Estimates of the electricity generating potential of geothermal energy vary from 35 to 2000 GW depending on the scale of investments. This does not include non-electric heat recovered by co-generation, geothermal heat pumps and other direct use. A 2006 report by the Massachusetts Institute of Technology (MIT) that included the potential of enhanced geothermal systems estimated that investing US$1 billion in research and development over 15 years would allow the creation of 100 GW of electrical generating capacity by 2050 in the United States alone. The MIT report estimated that over would be extractable, with the potential to increase this to over 2,000 ZJ with technology improvements – sufficient to provide all the world's present energy needs for several millennia.
At present, geothermal wells are rarely more than deep. Upper estimates of geothermal resources assume wells as deep as . Drilling near this depth is now possible in the petroleum industry, although it is an expensive process. The deepest research well in the world, the Kola Superdeep Borehole (KSDB-3), is deep.
Wells drilled to depths greater than generally incur drilling costs in the tens of millions of dollars. The technological challenges are to drill wide bores at low cost and to break larger volumes of rock.
Geothermal power is considered to be sustainable because the heat extraction is small compared to the Earth's heat content, but extraction must still be monitored to avoid local depletion. Although geothermal sites are capable of providing heat for many decades, individual wells may cool down or run out of water. The three oldest sites, at Larderello, Wairakei, and the Geysers have all reduced production from their peaks. It is not clear whether these stations extracted energy faster than it was replenished from greater depths, or whether the aquifers supplying them are being depleted. If production is reduced, and water is reinjected, these wells could theoretically recover their full potential. Such mitigation strategies have already been implemented at some sites. The long-term sustainability of geothermal energy has been demonstrated at the Larderello field in Italy since 1913, at the Wairakei field in New Zealand since 1958, and at the Geysers field in California since 1960.
Power station types
Geothermal power stations are similar to other steam turbine thermal power stations in that heat from a fuel source (in geothermal's case, the Earth's core) is used to heat water or another working fluid. The working fluid is then used to turn a turbine of a generator, thereby producing electricity. The fluid is then cooled and returned to the heat source.
Dry steam power stations
Dry steam stations are the simplest and oldest design. There are few power stations of this type, because they require a resource that produces dry steam, but they are the most efficient, with the simplest facilities. At these sites, there may be liquid water present in the reservoir, but only steam, not water, is produced to the surface. Dry steam power directly uses geothermal steam of 150 °C or greater to turn turbines. As the turbine rotates it drives a generator that produces electricity. The steam is then exhausted to a condenser, where it turns back into liquid water. After the water is cooled it flows down a pipe that conducts the condensate back into deep wells, where it can be reheated and produced again. At The Geysers in California, after the first 30 years of power production, the steam supply had depleted and generation was substantially reduced. To restore some of the former capacity, supplemental water injection was developed during the 1990s and 2000s, including utilization of effluent from nearby municipal sewage treatment facilities.
Flash steam power stations
Flash steam stations pull deep, high-pressure hot water into lower-pressure tanks and use the resulting flashed steam to drive turbines. They require fluid temperatures of at least 180 °C, usually more. This is the most common type of station in operation today. Flash steam plants use geothermal reservoirs of water with temperatures greater than 360 °F (182 °C). The hot water flows up through wells in the ground under its own pressure. As it flows upward, the pressure decreases and some of the hot water is transformed into steam. The steam is then separated from the water and used to power a turbine/generator. Any leftover water and condensed steam may be injected back into the reservoir, making this a potentially sustainable resource.
Binary cycle power stations
Binary cycle power stations are the most recent development, and can accept fluid temperatures as low as 57 °C. The moderately hot geothermal water is passed through a heat exchanger with a secondary fluid that has a much lower boiling point than water. This causes the secondary fluid to flash-vaporize, and the vapor then drives the turbines. This is the most common type of geothermal electricity station being constructed today. Both Organic Rankine and Kalina cycles are used. The thermal efficiency of this type of station is typically about 10–13%. Binary cycle power plants have an average unit capacity of 6.3 MW, compared with 30.4 MW at single-flash power plants, 37.4 MW at double-flash plants, and 45.4 MW at plants working on superheated steam.
Worldwide production
The International Renewable Energy Agency has reported that 14,438 megawatts (MW) of geothermal power was online worldwide at the end of 2020, generating 94,949 GWh of electricity. In theory, the world's geothermal resources are sufficient to supply humanity with energy. However, only a tiny fraction of them can currently be exploited profitably.
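Taken together, the two figures just quoted imply the worldwide capacity factor; a minimal check using only the numbers in this paragraph:

```python
capacity_gw = 14.438       # 14,438 MW online worldwide at the end of 2020
generation_gwh = 94_949    # GWh of electricity generated in 2020

# Capacity factor = actual generation / generation if running at full capacity all year.
capacity_factor = generation_gwh / (capacity_gw * 8760)  # 8,760 hours in a year
print(f"{capacity_factor:.1%}")  # ~75.1%, close to the ~74.5% IPCC average cited earlier
```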
Al Gore said in The Climate Project Asia Pacific Summit that Indonesia could become a super power country in electricity production from geothermal energy. In 2013 the publicly owned electricity sector in India announced a plan to develop the country's first geothermal power facility in the landlocked state of Chhattisgarh.
Geothermal power in Canada has high potential due to its position on the Pacific Ring of Fire. The region of greatest potential is the Canadian Cordillera, stretching from British Columbia to the Yukon, where estimates of generating output have ranged from 1,550 MW to 5,000 MW.
The geography of Japan is uniquely suited for geothermal power production. Japan has numerous hot springs that could provide fuel for geothermal power plants, but a massive investment in Japan's infrastructure would be necessary.
Utility-grade stations
The largest group of geothermal power plants in the world is located at The Geysers, a geothermal field in California, United States. As of 2021, five countries (Kenya, Iceland, El Salvador, New Zealand, and Nicaragua) generate more than 15% of their electricity from geothermal sources.
The following table lists these data for each country:
total generation from geothermal in terawatt-hours,
percent of that country's generation that was geothermal,
total geothermal capacity in gigawatts,
percent growth in geothermal capacity, and
the geothermal capacity factor for that year.
Data are for the year 2021. Data are sourced from the EIA. Only includes countries with more than 0.01 TWh of generation. Links for each location go to the relevant geothermal power page, when available.
Environmental impact
Existing geothermal electric stations that fall within the 50th percentile of all total life cycle emissions studies reviewed by the IPCC produce on average 45 kg of CO2-equivalent emissions per megawatt-hour of generated electricity (kg CO2 eq/MW·h). For comparison, a coal-fired power plant emits 1,001 kg of CO2 equivalent per megawatt-hour when not coupled with carbon capture and storage (CCS). As many geothermal projects are situated in volcanically active areas that naturally emit greenhouse gases, it is hypothesized that geothermal plants may actually decrease the rate of de-gassing by reducing the pressure on underground reservoirs.
Stations that experience high levels of acids and volatile chemicals are usually equipped with emission-control systems to reduce the exhaust. Geothermal stations can also inject these gases back into the earth as a form of carbon capture and storage, such as in New Zealand and in the CarbFix project in Iceland.
Other stations, like the Kızıldere geothermal power plant, exhibit the capability to utilize geothermal fluids to process carbon dioxide gas into dry ice at two nearby plants, resulting in very little environmental impact.
In addition to dissolved gases, hot water from geothermal sources may hold in solution trace amounts of toxic chemicals, such as mercury, arsenic, boron, antimony, and salt. These chemicals come out of solution as the water cools, and can cause environmental damage if released. The modern practice of injecting geothermal fluids back into the Earth to stimulate production has the side benefit of reducing this environmental risk.
Station construction can adversely affect land stability. Subsidence has occurred in the Wairakei field in New Zealand. Enhanced geothermal systems can trigger earthquakes due to water injection. The project in Basel, Switzerland was suspended because more than 10,000 seismic events measuring up to 3.4 on the Richter Scale occurred over the first 6 days of water injection. The risk of geothermal drilling leading to uplift has been experienced in Staufen im Breisgau.
Geothermal has minimal land and freshwater requirements. Geothermal stations use 404 square meters per GW·h versus 3,632 and 1,335 square meters for coal facilities and wind farms respectively. They use 20 litres of freshwater per MW·h versus over 1000 litres per MW·h for nuclear, coal, or oil.
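Expressed as ratios, these land-use figures compare as follows; a minimal sketch using only the numbers quoted above:

```python
land_m2_per_gwh = {"geothermal": 404, "coal": 3632, "wind": 1335}

baseline = land_m2_per_gwh["geothermal"]
for source, area in land_m2_per_gwh.items():
    print(f"{source}: {area / baseline:.1f}x the land area of geothermal per GW·h")
# geothermal: 1.0x, coal: ~9.0x, wind: ~3.3x
```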
Local climate cooling is possible as a result of the operation of geothermal circulation systems. However, according to an estimate by the Leningrad Mining Institute in the 1980s, any such cool-down would be negligible compared to natural climate fluctuations.
While volcanic activity produces geothermal energy, it also poses risks: the Puna Geothermal Venture has still not returned to full capacity after the 2018 lower Puna eruption.
Economics
Geothermal power requires no fuel; it is therefore immune to fuel cost fluctuations. However, capital costs tend to be high. Drilling accounts for over half the costs, and exploration of deep resources entails significant risks. A typical well doublet in Nevada can support 4.5 megawatts (MW) of electricity generation and costs about $10 million to drill, with a 20% failure rate.
In total, electrical station construction and well drilling costs about 2–5 million € per MW of electrical capacity, while the levelised energy cost is 0.04–0.10 € per kW·h. Enhanced geothermal systems tend to be on the high side of these ranges, with capital costs above $4 million per MW and levelized costs above $0.054 per kW·h in 2007.
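A rough per-megawatt drilling cost can be worked out from the Nevada doublet figures above; treating the 20% failure rate as a simple multiplier on the expected drilling outlay is a modelling assumption of this sketch, not a claim from the article:

```python
doublet_cost_usd = 10e6     # ~$10 million to drill one well doublet
doublet_capacity_mw = 4.5   # a typical doublet supports ~4.5 MW
failure_rate = 0.20         # 20% of drilling attempts fail

# Expected drilling cost per successful doublet, then per MW of supported capacity.
expected_cost_usd = doublet_cost_usd / (1 - failure_rate)
print(f"${expected_cost_usd / doublet_capacity_mw / 1e6:.1f}M per MW")  # ~$2.8M/MW, drilling alone
```

At roughly $2.8 million per MW for drilling alone, this sits plausibly within the 2–5 million € per MW total construction range quoted above.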
Research suggests in-reservoir storage could increase the economic viability of enhanced geothermal systems in energy systems with a large share of variable renewable energy sources.
Geothermal power is highly scalable: a small power station can supply a rural village, though initial capital costs can be high.
The most developed geothermal field is the Geysers in California. In 2008, this field supported 15 stations, all owned by Calpine, with a total generating capacity of 725 MW.
| Technology | Energy and fuel | null |
27208069 | https://en.wikipedia.org/wiki/Contact%20fuze | Contact fuze | A contact fuze, impact fuze, percussion fuze or direct-action (D.A.) fuze (UK) is the fuze that is placed in the nose of a bomb or shell so that it will detonate on contact with a hard surface.
Many impacts are unpredictable: they may involve a soft surface, or an off-axis grazing impact. The pure contact fuze is often unreliable in such cases and so a more sensitive graze fuze or inertia fuze is used instead. The two types are often combined in the same mechanism.
Artillery fuzes
The British Army's first useful impact fuze for high-explosive shells was the Fuze No. 106 of World War I. This used a simple protruding plunger or striker at the nose, which was pushed back to drive a firing pin into the detonator. Its ability to burst immediately at ground level was used to clear the barbed wire entanglements of no man's land, rather than burying itself first and leaving a deep, but useless, crater. The striker was protected by a safety cap that was removed before loading, but there was no other safety mechanism.
The simplest form of artillery contact fuze is a soft metal nose to the shell, filled with a fulminating explosive such as lead azide. An example is the British World War II Fuze, Percussion, D.A., No. 233 ('direct action'). The primary explosive transmits its detonation to an explosive booster within the fuze, then in turn to the main charge of the shell. As an artillery shell lands with a considerable impact, the "soft" nose may be made robust enough to be adequately safe for careful handling, without requiring any protection cap or safety mechanism. As a matter of normal practice though, fuzes and shells are transported separately and the fuze is only installed shortly before use. These simple contact fuzes are generally used for anti-tank shells, rather than high-explosive.
A more sophisticated fuze is the double-acting fuze, which is sensitive to both contact and grazing. An example of such a double-acting fuze is the British WW II Fuze, D.A. and percussion, No. 119. This fuze uses a nose striker, as for the original No. 106, but is rather more complex, with an added inertia mechanism for grazing impacts and also three automatic safety devices. Simple contact impacts drive the striker back into the detonating pellet, as before. Graze impacts trigger the inertia mechanism, where instead the pellet in a heavy carrying plug travels forwards onto the striker. The striker is protected in storage by a nose safety cap. Normally this is removed before loading, but it may also be left in place if the target is behind cover. This reduces the sensitivity of the striker to light impacts through vegetation, but the fuze will still function through the inertia mechanism or through a hard impact. Three safety devices are provided: one released by inertia during firing, which then unlocks a second that is released by the centrifugal force of the spinning shell. These are mechanical locks that prevent the striker contacting the pellet. A third device is a centrifugal shutter that initially blocks propagation from the detonator pellet to the booster explosive.
Most artillery contact fuzes act immediately, although some may have a delay. This allows a high-explosive or semi-armour-piercing shell to penetrate a wall before exploding, thus achieving the most damage inside the building. Where a shell is used against strong armour and requires all of its explosive power merely to penetrate, a delay is not appropriate. Most such delayed fuzes are thus switchable to a "superquick" or immediate mode.
Timed fuzes are used for airbursts. They take their delay time (½ second or longer) from firing, not from impact. These fuzes may also offer a contact fuzed ability. As this type of fuze is complex and more sensitive, they usually have a deliberate safing mechanism such as an arming wire that must be removed before use.
Air-dropped bomb fuzes
Fuzes for air-dropped bombs have generally used an internally mounted inertia fuze, triggered by the sudden deceleration on impact. Owing to the risk of an aircraft crash, or even the need to land with an undropped bomb still on board, these are protected by sophisticated safety systems so that the fuze can only be triggered after it has been dropped intentionally.
Stabo
The German Stachelbombe (nose-spike bomb) or stabo of WWII was a standard bomb, from 50 kg to 500 kg, modified for use from low altitude. To avoid the risk of ricochet from the ground, a nose spike was fitted to penetrate first and anchor the bomb against bouncing — without this, there was a risk of the dropping aircraft not only missing the target, but also being damaged by its own weapon. As the German electric fuzes had an arming delay after dropping, and the bombs were dropped at such low altitude as to leave insufficient time for this to arm, they were also sometimes fitted with additional contact fuzes on the tips of these nose spikes.
Similar devices were employed by Soviet forces, in a similar ground attack role using the Il-2 Sturmovik.
Fat Man
Notable examples of air-dropped bombs that did use contact fuzes include the Fat Man atomic bomb dropped on Nagasaki. The bomb was intended for air burst detonation and was fitted with both radar height-finding and barometric fuzes. As the device was so secret, and the risk of informative fragments or plutonium being recovered after a failed drop was considered to be unacceptable, it was fitted with supplementary contact fuzes that were only intended to destroy the weapon beyond recognition. Four AN-219 piezo-electric impact fuzes were fitted to the nose of the bomb casing.
BLU-82
The BLU-82 was a large conventional explosive bomb, used to make helicopter landing clearings in forests. The intended fuzing was an extremely low air burst of only a few feet, so as to maximize the clearance effect and minimize cratering. The fuze was a mechanical impact fuze on a nose spike.
Other fuzes
The contact fuze is set off when a series of connected crush switches placed on the exterior nose of the ordnance device make contact with the ground. The contact with solid ground activates the interior firing circuits, which leads to the detonation of the ordnance. A cone-shaped cover over the device prevents premature detonation while the device is being loaded and carried to the desired location by aircraft.
| Technology | Explosive weapons | null |
26846483 | https://en.wikipedia.org/wiki/Nucleic%20acid%20structure | Nucleic acid structure | Nucleic acid structure refers to the structure of nucleic acids such as DNA and RNA. Chemically speaking, DNA and RNA are very similar. Nucleic acid structure is often divided into four different levels: primary, secondary, tertiary, and quaternary.
Primary structure
Primary structure consists of a linear sequence of nucleotides that are linked together by phosphodiester bonds. It is this linear sequence of nucleotides that makes up the primary structure of DNA or RNA. Nucleotides consist of three components:
Nitrogenous base
Adenine
Guanine
Cytosine
Thymine (present in DNA only)
Uracil (present in RNA only)
5-carbon sugar: deoxyribose (found in DNA) or ribose (found in RNA).
One or more phosphate groups.
The nitrogen bases adenine and guanine are purines and form a glycosidic bond between their N9 nitrogen and the 1'-OH group of the deoxyribose. Cytosine, thymine, and uracil are pyrimidines, hence the glycosidic bonds form between their N1 nitrogen and the 1'-OH of the deoxyribose. For both the purine and pyrimidine bases, the phosphate group forms a bond with the deoxyribose sugar through an ester bond between one of its negatively charged oxygen groups and the 5'-OH of the sugar. The polarity in DNA and RNA is derived from the oxygen and nitrogen atoms in the backbone. Nucleic acids are formed when nucleotides come together through phosphodiester linkages between the 5' and 3' carbon atoms.
A nucleic acid sequence is the order of nucleotides within a DNA (GACT) or RNA (GACU) molecule, denoted by a series of letters. Sequences are written from the 5' end to the 3' end and determine the covalent structure of the entire molecule. A sequence is complementary to another when the base at each position pairs with the base opposite it; because the strands are antiparallel, the complement is also read in the reverse order. An example of a complementary sequence to AGCT is TCGA. DNA is double-stranded, containing both a sense strand and an antisense strand; the antisense strand is the complement of the sense strand.
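A minimal sketch of complementarity in code, reproducing the AGCT/TCGA example above; the function names are illustrative:

```python
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(seq):
    """Base-by-base complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in seq)

def reverse_complement(seq):
    """Complement read in the reverse order, i.e. the antisense strand written 5' to 3'."""
    return complement(seq)[::-1]

print(complement("AGCT"))          # TCGA, as in the example above
print(reverse_complement("AGCT"))  # AGCT -- this particular sequence is palindromic
```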
Complexes with alkali metal ions
There are three potential metal binding groups on nucleic acids: the phosphate, sugar, and base moieties. Solid-state structures of complexes with alkali metal ions have been reviewed.
Secondary structure
DNA
Secondary structure is the set of interactions between bases, i.e., which parts of strands are bound to each other. In the DNA double helix, the two strands of DNA are held together by hydrogen bonds. The nucleotides on one strand base-pair with the nucleotides on the other strand. The secondary structure is responsible for the shape that the nucleic acid assumes. The bases in DNA are classified as purines and pyrimidines. The purines are adenine and guanine. Purines consist of a double-ring structure, a six-membered and a five-membered ring containing nitrogen. The pyrimidines are cytosine and thymine. They have a single-ring structure, a six-membered ring containing nitrogen. A purine base always pairs with a pyrimidine base: guanine (G) pairs with cytosine (C), and adenine (A) pairs with thymine (T) or uracil (U). DNA's secondary structure is predominantly determined by base-pairing of the two polynucleotide strands wrapped around each other to form a double helix. Although the two strands are aligned by hydrogen bonds in base pairs, the stronger forces holding the two strands together are stacking interactions between the bases. These stacking interactions are stabilized by Van der Waals forces and hydrophobic interactions, and show a large amount of local structural variability. There are also two grooves in the double helix, which are called the major groove and the minor groove based on their relative size.
RNA
The secondary structure of RNA consists of a single polynucleotide. Base pairing in RNA occurs when the RNA folds back on itself between regions of complementarity. Both single- and double-stranded regions are often found in RNA molecules.
The four basic elements in the secondary structure of RNA are:
Helices
Bulges
Loops
Junctions
The antiparallel strands form a helical shape. Bulges and internal loops are formed by separation of the double helical tract on either one strand (bulge) or on both strands (internal loops) by unpaired nucleotides.
The stem-loop or hairpin loop is the most common element of RNA secondary structure. A stem-loop is formed when an RNA chain folds back on itself to form a double-helical tract called the 'stem', while the unpaired nucleotides form a single-stranded region called the 'loop'. A tetraloop is a hairpin whose loop contains exactly four nucleotides. There are three common families of tetraloop in ribosomal RNA: UNCG, GNRA, and CUUG (N is any of the four nucleotides and R is a purine). UNCG is the most stable tetraloop.
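The wildcard letters in these family names (N for any nucleotide, R for a purine) translate directly into sequence patterns; a minimal sketch that only pattern-matches a sequence string (identifying true tetraloops would also require the secondary structure), and the example sequence is invented:

```python
import re

# Tetraloop families named above: N = any nucleotide, R = purine (A or G).
FAMILIES = {"UNCG": "U[ACGU]CG", "GNRA": "G[ACGU][AG]A", "CUUG": "CUUG"}

def find_tetraloop_motifs(rna):
    """Report which tetraloop family motifs occur in an RNA sequence."""
    return {name: re.findall(pattern, rna) for name, pattern in FAMILIES.items()}

print(find_tetraloop_motifs("GGACUUCGGUCC"))
# {'UNCG': ['UUCG'], 'GNRA': [], 'CUUG': []} -- a UUCG loop of the stable UNCG family
```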
A pseudoknot is an RNA secondary structure first identified in turnip yellow mosaic virus. It is minimally composed of two helical segments connected by single-stranded regions or loops. H-type fold pseudoknots are the best characterized. In the H-type fold, nucleotides in the hairpin loop pair with bases outside the hairpin stem, forming a second stem and loop. This produces a pseudoknot with two stems and two loops. Pseudoknots are functional elements in RNA structure with diverse functions and are found in most classes of RNA.
Secondary structure of RNA can be predicted from experimental data on the secondary structure elements: helices, loops, and bulges. The DotKnot-PW method is used for comparative pseudoknot prediction. The main step in the DotKnot-PW method is scoring the similarities found in stems, secondary elements, and H-type pseudoknots.
Tertiary structure
Tertiary structure refers to the locations of the atoms in three-dimensional space, taking into consideration geometrical and steric constraints. It is a higher order than the secondary structure, in which large-scale folding in a linear polymer occurs and the entire chain is folded into a specific 3-dimensional shape. There are four areas in which the structural forms of DNA can differ.
Handedness – right or left
Length of the helix turn
Number of base pairs per turn
Difference in size between the major and minor grooves
The tertiary arrangement of DNA's double helix in space includes B-DNA, A-DNA, and Z-DNA. Triple-stranded DNA structures have been demonstrated in repetitive polypurine:polypyrimidine Microsatellite sequences and Satellite DNA.
B-DNA is the most common form of DNA in vivo and is a narrower, more elongated helix than A-DNA. Its wide major groove makes it more accessible to proteins. On the other hand, it has a narrow minor groove. B-DNA's favored conformations occur at high water concentrations; the hydration of the minor groove appears to favor B-DNA. B-DNA base pairs are nearly perpendicular to the helix axis. The sugar pucker, which determines whether the helix exists in the A-form or the B-form, is C2'-endo in B-DNA.
A-DNA is a form of the DNA duplex observed under dehydrating conditions. It is shorter and wider than B-DNA. RNA adopts this double helical form, and RNA-DNA duplexes are mostly A-form, but B-form RNA-DNA duplexes have been observed. In localized single-strand dinucleotide contexts, RNA can also adopt the B-form without pairing to DNA. A-DNA has a deep, narrow major groove which does not make it easily accessible to proteins. On the other hand, its wide, shallow minor groove makes it accessible to proteins but with lower information content than the major groove. Its favored conformation occurs at low water concentrations. A-DNA's base pairs are tilted relative to the helix axis, and are displaced from the axis. The sugar pucker occurs at the C3'-endo and, in RNA, the 2'-OH inhibits the C2'-endo conformation. Long considered little more than a laboratory artifice, A-DNA is now known to have several biological functions.
Z-DNA is a relatively rare left-handed double helix. Given the proper sequence and superhelical tension, it can be formed in vivo but its function is unclear. It has a narrower, more elongated helix than A or B. Z-DNA's major groove is not really a groove, and it has a narrow minor groove. The most favored conformation occurs when there are high salt concentrations. There are some base substitutions but they require an alternating purine-pyrimidine sequence. The N2-amino of G H-bonds to 5' PO, which explains the slow exchange of protons and the need for the G purine. Z-DNA base pairs are nearly perpendicular to the helix axis. Z-DNA does not contain single base pairs but rather a GpC repeat with P-P distances varying for GpC and CpG. On the GpC stack there is good base overlap, whereas on the CpG stack there is less overlap. Z-DNA's zigzag backbone is due to the C sugar conformation compensating for the G glycosidic bond conformation. The conformation of G is syn, C2'-endo; for C it is anti, C3'-endo.
A linear DNA molecule having free ends can rotate, to adjust to changes of various dynamic processes in the cell, by changing how many times the two chains of its double helix twist around each other. Some DNA molecules are circular and are topologically constrained. More recently, circular RNA was also described as a natural, pervasive class of nucleic acids, expressed in many organisms (see CircRNA).
A covalently closed, circular DNA (also known as cccDNA) is topologically constrained, as the number of times the chains coil around one another cannot change. This cccDNA can be supercoiled, which is the tertiary structure of DNA. Supercoiling is characterized by the linking number, twist, and writhe. The linking number (Lk) for circular DNA is defined as the number of times one strand would have to pass through the other strand to completely separate the two strands. The linking number for circular DNA can only be changed by breaking a covalent bond in one of the two strands. Always an integer, the linking number of a cccDNA is the sum of two components: twist (Tw) and writhe (Wr).
Twist is the number of times the two strands of DNA are twisted around each other. Writhe is the number of times the DNA helix crosses over itself. DNA in cells is negatively supercoiled and has the tendency to unwind. Hence the separation of strands is easier in negatively supercoiled DNA than in relaxed DNA. The two forms of supercoiling are solenoidal and plectonemic. The plectonemic supercoil is found in prokaryotes, while solenoidal supercoiling is mostly seen in eukaryotes.
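In the notation just introduced, the topological constraint can be written compactly; the worked numbers below (a 5,250 bp plasmid and the roughly 10.5 base pairs per helical turn of relaxed B-DNA) are standard illustrative values, not figures from this article:

```latex
\mathrm{Lk} = \mathrm{Tw} + \mathrm{Wr}, \qquad
\mathrm{Lk}_0 \approx \frac{5250\ \text{bp}}{10.5\ \text{bp/turn}} = 500
```

A negatively supercoiled copy of such a plasmid with Lk = 475 would carry ΔLk = −25, which must be partitioned between reduced twist and negative writhe, since Lk itself cannot change without a strand break.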
Quaternary structure
The quaternary structure of nucleic acids is similar to that of protein quaternary structure. Although some of the concepts are not exactly the same, the quaternary structure refers to a higher level of organization of nucleic acids. Moreover, it refers to interactions of the nucleic acids with other molecules. The most common form of higher-level organization of nucleic acids is chromatin, which involves interactions of DNA with the small proteins histones. The quaternary structure also refers to the interactions between separate RNA units in the ribosome or spliceosome.
| Biology and health sciences | Nucleic acids | Biology |
841860 | https://en.wikipedia.org/wiki/Oligomer | Oligomer | In chemistry and biochemistry, an oligomer is a molecule that consists of a few repeating units which could be derived, actually or conceptually, from smaller molecules, monomers. The name is composed of Greek elements oligo-, "a few" and -mer, "parts". An adjective form is oligomeric.
The oligomer concept is contrasted to that of a polymer, which is usually understood to have a large number of units, possibly thousands or millions. However, there is no sharp distinction between these two concepts. One proposed criterion is whether the molecule's properties vary significantly with the removal of one or a few of the units.
An oligomer with a specific number of units is referred to by the Greek prefix denoting that number, with the ending -mer: thus dimer, trimer, tetramer, pentamer, and hexamer refer to molecules with two, three, four, five, and six units, respectively. The units of an oligomer may be arranged in a linear chain (as in melam, a dimer of melamine); a closed ring (as in 1,3,5-trioxane, a cyclic trimer of formaldehyde); or a more complex structure (as in tellurium tetrabromide, a tetramer of TeBr4 with a cube-like core). If the units are identical, one has a homo-oligomer; otherwise one may use hetero-oligomer. An example of a homo-oligomeric protein is collagen, which is composed of three identical protein chains.
Some biologically important oligomers are macromolecules like proteins or nucleic acids; for instance, hemoglobin is a protein tetramer. An oligomer of amino acids is called an oligopeptide or just a peptide. An oligosaccharide is an oligomer of monosaccharides (simple sugars). An oligonucleotide is a short single-stranded fragment of nucleic acid such as DNA or RNA, or similar fragments of analogs of nucleic acids such as peptide nucleic acid or Morpholinos.
The units of an oligomer may be connected by covalent bonds, which may result from bond rearrangement or condensation reactions, or by weaker forces such as hydrogen bonds.
The term multimer is used in biochemistry for oligomers of proteins that are not covalently bound. The major capsid protein VP1 that comprises the shell of polyomaviruses is a self-assembling multimer of 72 pentamers held together by local electric charges.
Many oils are oligomeric, such as liquid paraffin. Plasticizers are oligomeric esters widely used to soften thermoplastics such as PVC. They may be made from monomers by linking them together, or by separation from the higher fractions of crude oil. Polybutene is an oligomeric oil used to make putty.
Oligomerization is a chemical process that converts monomers to macromolecular complexes through a finite degree of polymerization. Telomerization is an oligomerization carried out under conditions that result in chain transfer, limiting the size of the oligomers. (This concept is not to be confused with the formation of a telomere, a region of highly repetitive DNA at the end of a chromosome.)
Green oil
In the oil and gas industry, green oil refers to oligomers formed in the C2, C3, and C4 hydrogenation reactors of ethylene plants and other petrochemical production facilities; it is a mixture of C4 to C20 unsaturated and reactive components, with about 90% aliphatic dienes and 10% alkanes plus alkenes. Various heterogeneous and homogeneous catalysts are operative in producing green oils via the oligomerization of alkenes.
| Physical sciences | Polymers | Chemistry |
842360 | https://en.wikipedia.org/wiki/Paleomagnetism | Paleomagnetism | Paleomagnetism (occasionally palaeomagnetism) is the study of prehistoric Earth's magnetic fields recorded in rocks, sediment, or archeological materials. Geophysicists who specialize in paleomagnetism are called paleomagnetists.
Certain magnetic minerals in rocks can record the direction and intensity of Earth's magnetic field at the time they formed. This record provides information on the past behavior of the geomagnetic field and the past location of tectonic plates. The record of geomagnetic reversals preserved in volcanic and sedimentary rock sequences (magnetostratigraphy) provides a time-scale that is used as a geochronologic tool.
Evidence from paleomagnetism led to the revival of the continental drift hypothesis and its transformation into the modern theory of plate tectonics. Apparent polar wander paths provided the first clear geophysical evidence for continental drift, while marine magnetic anomalies did the same for seafloor spreading. Paleomagnetic data continues to extend the history of plate tectonics back in time, constraining the ancient position and movement of continents and continental fragments (terranes).
The field of paleomagnetism also encompasses equivalent measurements of samples from other Solar System bodies, such as Moon rocks and meteorites, where it is used to investigate the ancient magnetic fields of those bodies and dynamo theory. Paleomagnetism relies on developments in rock magnetism and overlaps with biomagnetism, magnetic fabrics (used as strain indicators in rocks and soils), and environmental magnetism.
History
As early as the 18th century, it was noticed that compass needles deviated near strongly magnetized outcrops. In 1797, Alexander von Humboldt attributed this magnetization to lightning strikes (and lightning strikes do often magnetize surface rocks). 19th century studies of the direction of magnetization in rocks showed that some recent lavas were magnetized parallel to Earth's magnetic field. Early in the 20th century, work by David, Bernard Brunhes and Paul Louis Mercanton showed that many rocks were magnetized antiparallel to the field. Japanese geophysicist Motonori Matuyama showed in the late 1920s that Earth's magnetic field reversed in the mid-Quaternary, a reversal now known as the Brunhes–Matuyama reversal.
British physicist P.M.S. Blackett provided a major impetus to paleomagnetism by inventing a sensitive astatic magnetometer in 1956. His intent was to test his theory that the geomagnetic field was related to Earth's rotation, a theory that he ultimately rejected; but the astatic magnetometer became the basic tool of paleomagnetism and led to a revival of the theory of continental drift.
Alfred Wegener first proposed in 1915 that continents had once been joined together and had since moved apart. Although he produced an abundance of circumstantial evidence, his theory met with little acceptance for two reasons: (1) no mechanism for continental drift was known, and (2) there was no way to reconstruct the movements of the continents over time. Keith Runcorn and Edward A. Irving constructed apparent polar wander paths for Europe and North America. These curves diverged but could be reconciled if it was assumed that the continents had been in contact up to 200 million years ago. This provided the first clear geophysical evidence for continental drift. Then in 1963, Morley, Vine and Matthews showed that marine magnetic anomalies provided evidence for seafloor spreading.
Fields
Paleomagnetism is studied on a number of scales:
Geomagnetic secular variation is the small-scale changes in the direction and intensity of Earth's magnetic field. The magnetic north pole is constantly shifting relative to the axis of rotation of Earth. Magnetism is a vector and so magnetic field variation is studied by palaeodirectional measurements of magnetic declination and magnetic inclination and palaeointensity measurements.
Magnetostratigraphy uses the polarity reversal history of Earth's magnetic field recorded in rocks to determine the age of those rocks. Reversals have occurred at irregular intervals throughout Earth's history. The age and pattern of these reversals is known from the study of sea floor spreading zones and the dating of volcanic rocks.
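A minimal sketch of the matching idea behind this dating method, under heavily simplified assumptions: the reference column below is invented, the polarity intervals are treated as equal in length, and real correlations use interval durations together with independent age ties:

```python
REFERENCE = "NRNNRRNRNNNRRN"  # hypothetical dated reference column (N = normal, R = reversed)

def candidate_positions(measured, reference=REFERENCE):
    """All offsets at which a measured polarity sequence matches the reference column."""
    return [i for i in range(len(reference) - len(measured) + 1)
            if reference[i:i + len(measured)] == measured]

print(candidate_positions("RRNR"))  # [4] -- a unique match ties the section to the reference ages
```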
Principles
The study of paleomagnetism is possible because iron-bearing minerals such as magnetite may record past polarity of Earth's magnetic field. Magnetic signatures in rocks can be recorded by several different mechanisms.
Thermoremanent magnetization
Iron-titanium oxide minerals in basalt and other igneous rocks may preserve the direction of Earth's magnetic field when the rocks cool through the Curie temperatures of those minerals. The Curie temperature of magnetite, a spinel-group iron oxide, is about , whereas most basalt and gabbro are completely crystallized at temperatures below . Hence, the mineral grains are not rotated physically to align with Earth's magnetic field, but rather they may record the orientation of that field. The record so preserved is called a thermoremanent magnetization (TRM).
Because complex oxidation reactions may occur as igneous rocks cool after crystallization, the orientations of Earth's magnetic field are not always accurately recorded, nor is the record necessarily maintained. Nonetheless, the record has been preserved well enough in basalts of oceanic crust to have been critical in the development of theories of sea floor spreading related to plate tectonics.
TRM can also be recorded in pottery kilns, hearths, and burned adobe buildings. The discipline based on the study of thermoremanent magnetisation in archaeological materials is called archaeomagnetic dating. Although the Māori people of New Zealand do not make pottery, their 700- to 800-year-old steam ovens, or hāngī, provide adequate archaeomagnetic material.
Detrital remanent magnetization
In a completely different process, magnetic grains in sediments may align with the magnetic field during or soon after deposition; this is known as detrital remanent magnetization. If the magnetization is acquired as the grains are deposited, the result is a depositional detrital remanent magnetization; if it is acquired soon after deposition, it is a post-depositional detrital remanent magnetization.
Chemical remanent magnetization
In a third process, magnetic grains grow during chemical reactions and record the direction of the magnetic field at the time of their formation. The field is said to be recorded by chemical remanent magnetization (CRM). A common form is held by the mineral hematite, another iron oxide. Hematite forms through chemical oxidation reactions of other minerals in the rock, including magnetite. Red beds, which are clastic sedimentary rocks (such as sandstones), are red because of hematite that formed during sedimentary diagenesis. The CRM signatures in red beds can be quite useful, and they are common targets in magnetostratigraphy studies.
Isothermal remanent magnetization
Remanence that is acquired at a fixed temperature is called isothermal remanent magnetization (IRM). Remanence of this sort is not useful for paleomagnetism, but it can be acquired as a result of lightning strikes. Lightning-induced remanent magnetization can be distinguished by its high intensity and rapid variation in direction over scales of centimeters.
IRM is often induced in drill cores by the magnetic field of the steel core barrel. This contaminant is generally parallel to the barrel, and most of it can be removed by heating up to about 400 °C or demagnetizing in a small alternating field. In the laboratory, IRM is induced by applying fields of various strengths and is used for many purposes in rock magnetism.
Viscous remanent magnetization
Viscous remanent magnetization is remanence that is acquired by ferromagnetic materials influenced by a magnetic field for some time. In rocks, this remanence is typically aligned in the direction of the modern-day geomagnetic field. The fraction of a rock's overall magnetization that is a viscous remanent magnetization depends on the magnetic mineralogy.
Sampling
The oldest rocks on the ocean floor are 200 Ma: very young when compared with the oldest continental rocks which date from 3.8 Ga. In order to collect paleomagnetic data dating beyond 200 Ma, scientists turn to magnetite-bearing samples on land to reconstruct Earth's ancient field orientation. Paleomagnetists, like many geologists, gravitate towards outcrops because layers of rock are exposed. Road cuts are a convenient man-made source of outcrops.
"And everywhere, in profusion along this half mile of [roadcut], there are small, neatly cored holes ... appears to be a Hilton for wrens and purple martins."
There are two main goals of sampling:
Retrieve samples with accurate orientations, and
Reduce statistical uncertainty.
One way to achieve the first goal is to use a rock coring drill that has an auger tipped with diamond bits. The drill cuts a cylindrical space around some rock. Into this space is inserted a pipe with a compass and inclinometer attached. These provide the orientations. Before this device is removed, a mark is scratched on the sample. After the sample is broken off, the mark can be augmented for clarity.
Applications
Paleomagnetic evidence of both reversals and polar wandering data was instrumental in verifying the theories of continental drift and plate tectonics in the 1960s and 1970s. Some applications of paleomagnetic evidence to reconstruct histories of terranes have continued to arouse controversies. Paleomagnetic evidence is also used in constraining possible ages for rocks and processes and in reconstructions of the deformational histories of parts of the crust.
Reversal magnetostratigraphy is often used to estimate the age of sites bearing fossils and hominin remains. Conversely, for a fossil of known age, the paleomagnetic data can fix the latitude at which the fossil was laid down. Such a paleolatitude provides information about the geological environment at the time of deposition. Paleomagnetic studies are combined with geochronological methods to determine absolute ages for rocks in which the magnetic record is preserved. For igneous rocks such as basalt, commonly used methods include potassium–argon and argon–argon geochronology.
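In practice, such a paleolatitude is usually derived from the inclination I of the remanent magnetization via the geocentric axial dipole relation tan I = 2 tan λ. A minimal sketch of that conversion (the function name and sample value are illustrative, not from any particular library):

```python
import math

def paleolatitude(inclination_deg: float) -> float:
    """Paleolatitude (degrees) from magnetic inclination using the
    geocentric axial dipole relation tan(I) = 2 * tan(latitude)."""
    i = math.radians(inclination_deg)
    return math.degrees(math.atan(math.tan(i) / 2.0))

# A remanence dipping at 49 degrees implies deposition at roughly
# 30 degrees paleolatitude.
print(round(paleolatitude(49.0), 1))  # ~29.9
```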
| Physical sciences | Geochronology | Earth science |
843566 | https://en.wikipedia.org/wiki/Military%20base | Military base | A military base is a facility directly owned and operated by or for the military or one of its branches that shelters military equipment and personnel, and facilitates training and operations. A military base always provides accommodations for one or more units, but it may also be used as a command center, training ground or proving ground. In most cases, military bases rely on outside help to operate. However, certain complex bases are able to endure on their own for long periods because they are able to provide food, drinking water, and other necessities for their inhabitants while under siege. Bases for military aviation are called air bases. Bases for military ships are called naval bases.
Jurisdictional definition
Military bases within the United States are considered federal property and are subject to federal law. Civilians (such as family members of military officers) living on military bases are generally subject to the civil and criminal laws of the states where the bases are located. Military bases can range from small outposts to military cities containing up to 100,000 people. A military base may belong to a different nation or state than the territory surrounding it.
Naming
The name used generally refers to the type of military activity that takes place at the base, as well as the traditional nomenclature used by a branch of service.
A military base may go by any of a number of names, such as the following:
Ammunition dump
Armory
Arsenal
Advance airfield
Barracks
Border outpost
Cantonment
Casern
Combat outpost (COP)
Facility
Fire support base (FSB, FB)
Forward Operating Base (FOB)
Forward operating site (FOS)
Fortification
Garrison
Installation
Joint base
Magazine
Main Operating Base (MOB)
Marine Corps base
Military airbase, airfield or field
Military camp
Missile launch facility
Naval air station
Naval base
Naval dockyard
Naval shipyard
Observation post
Outpost (OP)
Presidio
Proving ground
Reservation
Satellite airfield
Station
Submarine base
Training camp
Types of establishment
Depending on the context, the term "military base" may refer to any establishment (usually permanent) that houses a nation's armed forces, or even organized paramilitary forces such as the police, constabulary, militia, or national guards. Alternatively, the term may refer solely to an establishment which is used only by an army (or possibly other land fighting related forces, such as marines) to the exclusion of a base used by either an air force or a navy. This is consistent with the different meanings of the word 'military'.
Some examples of permanent military bases used by the navies and air forces of the world are the HMNB Portsmouth in Portsmouth, UK, the Naval Air Station Whidbey Island, Washington State, US, or Ramstein Air Base, Germany (the last two are each designated as a Main Operating Base). Other examples of non- or semi-permanent military bases include a Forward Operating Base (FOB), a Logistics Base (Log base) and a Fire Base (FB).
A military base may also contain large concentrations of military supplies in order to support military logistics. Most military bases are restricted to the public and usually only authorized personnel may enter them (be it military personnel or their relatives and authorized civilian personnel).
In addition to the main military facilities on a certain installation, military bases usually (but not always) have various different facilities for military personnel. These facilities vary from country to country. Military bases can provide housing for military personnel, a post office and dining facilities (restaurants). They may also provide support facilities such as fast food restaurants, gas stations, chapels, schools, banks, thrift stores, a hospital or clinic (dental or health clinics, as well as veterinarian clinics), lodging, movie theaters, and, in some countries, retail stores (usually a supermarket such as Commissary and a Department Store, such as AAFES). On American military installations, Family, Morale, Welfare and Recreation (FMWR) provides facilities such as fitness centers, libraries, golf courses, travel centers, community service centers, campgrounds, child development centers, youth centers, automotive workshops, hobby/arts and crafts centers, bowling centers, and community centers.
Bases used by the United States Air Force Reserve tend to be active USAF bases. However, there are a few Air Reserve Bases, such as Dobbins ARB, Georgia, and Grissom ARB, Indiana, both of which are former active-duty USAF bases. Facilities of the Air National Guard are often located on civil airports in a secure cantonment area not accessible to the general public, though some units are based on USAF bases, and a few on ANG-operated bases, such as Selfridge ANGB, Michigan. Support facilities on Air National Guard and Air Force Reserve installations tend not to be as extensive as those on active bases: they usually do not have on-base lodging (though Kingsley Field ANGB, Oregon, is an exception), clinics (except for drill days), or retail stores (although some have small convenience stores).
In Russian usage, "military base" or "naval base" is not limited to denoting a specific fenced facility and usually encompasses a broad territory within which a number of discrete facilities may be located. As examples, 1) the Russian Sevastopol Naval Base comprises individual facilities located within the city of Sevastopol proper (waterfront moorings, weapons stores, a headquarters compound, and a naval infantry base) as well as an airfield at Kacha north of the city; 2) the Leningrad Naval Base comprises all naval facilities in the greater St. Petersburg area including training schools, commissioning institutes, the naval academy, and the Kronshtadt base on Kotlin island.
Overseas military base
An overseas military base is a military base that is geographically located outside of the territory of the country whose armed forces are the principal occupants of the base.
Such bases may be established by treaties between the governing power in the host country and another country which needs to establish the military base in the host country for various reasons, usually strategic and logistic.
Furthermore, overseas military bases often serve as the source of the military brat subculture due to the children of the bases' occupant military being born or raised in the host country but raised with a remote parental knowledge of the occupant military's home country.
British military bases
In the 18th and 19th centuries the Royal Engineers were largely responsible for erecting military bases in the British Isles and the British Empire. In 1792 the Chief Engineer was instructed to prepare the Barrack Construction estimates for Parliament and at the same time the Department of the Barrackmaster-General was established.
During the period from the 1840s through the 1860s barracks were constructed under supervision of the Royal Engineers in:
Bristol (1847)
Preston (1848)
Tower of London (1851)
Sheerness (1854)
Sheffield (1854)
Curragh Camp (1855)
Devonport (1856)
Chelsea (1861)
The Cardwell Reforms (1872) ushered in another period of intensive Barrack building at Aldershot, Portsmouth, Plymouth, London, Woking, Woolwich, Dublin, Belfast, Malta, Gibraltar and the Cape of Good Hope.
In 1959 the Corps' Work Services was transferred to the civilian War Department Works Organization (later renamed Property Services Agency (PSA)) and by 1965 the (Specialist Teams Royal Engineers (STRE)) were formed to plan and execute Works projects worldwide.
Some British and Commonwealth naval bases are traditionally named, commissioned, and administered as though they were naval ships. For this reason they are sometimes called stone frigates.
Related term
Billet
Kaserne
| Technology | Mixed-use buildings | null |
844186 | https://en.wikipedia.org/wiki/Modern%20physics | Modern physics | Modern physics is a branch of physics that developed in the early 20th century and onward or branches greatly influenced by early 20th century physics. Notable branches of modern physics include quantum mechanics, special relativity, and general relativity.
Classical physics is typically concerned with everyday conditions: speeds are much lower than the speed of light, sizes are much greater than that of atoms, and energies are relatively small. Modern physics, however, is concerned with more extreme conditions, such as high velocities that are comparable to the speed of light (special relativity), small distances comparable to the atomic radius (quantum mechanics), and very high energies (relativity). In general, quantum and relativistic effects are believed to exist across all scales, although these effects may be very small at human scale. While quantum mechanics is compatible with special relativity (See: Relativistic quantum mechanics), one of the unsolved problems in physics is the unification of quantum mechanics and general relativity, which the Standard Model of particle physics currently cannot account for.
Modern physics is an effort to understand the underlying processes of the interactions of matter using the tools of science and engineering. In a literal sense, the term modern physics means up-to-date physics. In this sense, a significant portion of so-called classical physics is modern. However, since roughly 1890, new discoveries have caused significant paradigm shifts: especially the advent of quantum mechanics (QM) and relativity (ER). Physics that incorporates elements of either QM or ER (or both) is said to be modern physics. It is in this latter sense that the term is generally used.
Modern physics is often encountered when dealing with extreme conditions. Quantum mechanical effects tend to appear when dealing with "lows" (low temperatures, small distances), while relativistic effects tend to appear when dealing with "highs" (high velocities, large distances), the "middles" being classical behavior. For example, when analyzing the behavior of a gas at room temperature, most phenomena will involve the (classical) Maxwell–Boltzmann distribution. However, near absolute zero, the Maxwell–Boltzmann distribution fails to account for the observed behavior of the gas, and the (modern) Fermi–Dirac or Bose–Einstein distributions have to be used instead.
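For orientation, the mean occupation numbers of the three statistics mentioned above can be written as one family of formulas (a standard result; ε is the energy of a state and μ the chemical potential):

$$\langle n(\varepsilon)\rangle = \frac{1}{e^{(\varepsilon-\mu)/k_B T} + a}, \qquad a = \begin{cases} 0 & \text{(Maxwell–Boltzmann)} \\ +1 & \text{(Fermi–Dirac)} \\ -1 & \text{(Bose–Einstein)} \end{cases}$$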
Very often, it is possible to find – or "retrieve" – the classical behavior from the modern description by analyzing the modern description at low speeds and large distances (by taking a limit, or by making an approximation). When doing so, the result is called the classical limit.
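A textbook instance of taking such a classical limit is the relativistic kinetic energy: expanding in powers of v/c,

$$E_k = (\gamma - 1)mc^2 = \frac{1}{2}mv^2 + \frac{3}{8}\frac{mv^4}{c^2} + \cdots,$$

so for v ≪ c only the first term survives and the Newtonian expression is recovered.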
Hallmarks
The following topics are generally regarded as the "core" of the foundation of modern physics:
| Physical sciences | Physics basics: General | Physics |
844783 | https://en.wikipedia.org/wiki/Forgetful%20functor | Forgetful functor | In mathematics, in the area of category theory, a forgetful functor (also known as a stripping functor) 'forgets' or drops some or all of the input's structure or properties 'before' mapping to the output. For an algebraic structure of a given signature, this may be expressed by curtailing the signature: the new signature is an edited form of the old one. If the signature is left as an empty list, the functor simply takes the underlying set of a structure. Because many structures in mathematics consist of a set with additional structure, a forgetful functor that maps to the underlying set is the most common case.
Overview
As an example, there are several forgetful functors from the category of commutative rings. A (unital) ring, described in the language of universal algebra, is an ordered tuple (R, +, ×, −, 0, 1) satisfying certain axioms, where + and × are binary functions on the set R, − is a unary operation corresponding to additive inverse, and 0 and 1 are nullary operations giving the identities of the two binary operations. Deleting the 1 gives a forgetful functor to the category of rings without unit; it simply "forgets" the unit. Deleting × and 1 yields a functor to the category of abelian groups, which assigns to each ring R the underlying additive abelian group of R. To each morphism of rings is assigned the same function considered merely as a morphism of addition between the underlying groups. Deleting all the operations gives the functor to the underlying set R.
It is beneficial to distinguish between forgetful functors that "forget structure" and those that "forget properties". For example, in the above example of commutative rings, in addition to those functors that delete some of the operations, there are functors that forget some of the axioms. There is a functor from the category CRing to Ring that forgets the axiom of commutativity, but keeps all the operations. Occasionally the object may include extra sets not defined strictly in terms of the underlying set (in this case, which part to consider the underlying set is a matter of taste, though this is rarely ambiguous in practice). For these objects, there are more general forgetful functors that forget the extra sets.
Most common objects studied in mathematics are constructed as underlying sets along with extra sets of structure on those sets (operations on the underlying set, privileged subsets of the underlying set, etc.) which may satisfy some axioms. For these objects, a commonly considered forgetful functor is as follows.
Let C be any category based on sets, e.g. groups—sets of elements—or topological spaces—sets of 'points'. As usual, write Ob(C) for the objects of C and write Fl(C) for the morphisms of the same. Consider the rule:
For all A in Ob(C), U(A) = the underlying set of A;
For all u in Fl(C), U(u) = the morphism u, as a map of sets.
The functor U is then the forgetful functor from C to Set, the category of sets.
Forgetful functors are almost always faithful. Concrete categories have forgetful functors to the category of sets—indeed they may be defined as those categories that admit a faithful functor to that category.
Forgetful functors that only forget axioms are always fully faithful, since every morphism that respects the structure between objects that satisfy the axioms automatically also respects the axioms. Forgetful functors that forget structures need not be full; some morphisms don't respect the structure. These functors are still faithful however because distinct morphisms that do respect the structure are still distinct when the structure is forgotten. Functors that forget the extra sets need not be faithful, since distinct morphisms respecting the structure of those extra sets may be indistinguishable on the underlying set.
In the language of formal logic, a functor of the first kind removes axioms, a functor of the second kind removes predicates, and a functor of the third kind removes types. An example of the first kind is the forgetful functor Ab → Grp. One of the second kind is the forgetful functor Ab → Set. A functor of the third kind is the functor Mod → Ab, where Mod is the fibred category of all modules over arbitrary rings. To see this, just choose a ring homomorphism between the underlying rings that does not change the ring action. Under the forgetful functor, this morphism yields the identity. Note that an object in Mod is a tuple, which includes a ring and an abelian group, so which to forget is a matter of taste.
Left adjoints of forgetful functors
Forgetful functors tend to have left adjoints, which are 'free' constructions. For example:
free module: the forgetful functor U from Mod_R (the category of R-modules) to Set has left adjoint Free_R, with X ↦ Free_R(X), the free R-module with basis X.
free group
free lattice
tensor algebra
free category, adjoint to the forgetful functor from categories to quivers
universal enveloping algebra
For a more extensive list, see (Mac Lane 1997).
As this is a fundamental example of adjoints, we spell it out:
adjointness means that given a set X and an object (say, an R-module) M, maps of sets X → U(M) correspond to maps of modules Free_R(X) → M: every map of sets yields a map of modules, and every map of modules comes from a map of sets.
In the case of vector spaces, this is summarized as:
"A map between vector spaces is determined by where it sends a basis, and a basis can be mapped to anything."
Symbolically:
Hom_Mod_R(Free_R(X), M) ≅ Hom_Set(X, U(M)).
The unit of the free–forgetful adjunction is the "inclusion of a basis": X → U(Free_R(X)).
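As a concrete finite illustration of this correspondence, the sketch below models free vector spaces over the rationals as dictionaries of coefficients; the names are ad hoc for the example, not a standard library API:

```python
# Sketch of the free-forgetful adjunction for vector spaces over Q.
# A "vector" in the free space on a set is a dict {basis element: coefficient}.
from fractions import Fraction

def unit(x):
    """Unit of the adjunction: include a basis element x as the
    formal linear combination 1*x."""
    return {x: Fraction(1)}

def extend(f):
    """Given a map of sets f: X -> V (values are coefficient dicts),
    return its unique linear extension Free(X) -> V."""
    def linear_map(vec):
        out = {}
        for basis_elt, coeff in vec.items():
            for b, c in f(basis_elt).items():
                out[b] = out.get(b, Fraction(0)) + coeff * c
        return out
    return linear_map

# A set map sending the basis element "x" to 2u + v determines a
# unique linear map; evaluating it on the vector 3x gives 6u + 3v.
f = lambda s: {"u": Fraction(2), "v": Fraction(1)}
F = extend(f)
print(F(unit("x")))           # {'u': Fraction(2, 1), 'v': Fraction(1, 1)}, i.e. f itself
print(F({"x": Fraction(3)}))  # {'u': Fraction(6, 1), 'v': Fraction(3, 1)}
```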
Fld, the category of fields, furnishes an example of a forgetful functor with no adjoint. There is no field satisfying a free universal property for a given set.
| Mathematics | Category theory | null |
844811 | https://en.wikipedia.org/wiki/Ceiling | Ceiling | A ceiling is an overhead interior roof that covers the upper limits of a room. It is not generally considered a structural element, but a finished surface concealing the underside of the roof structure or the floor of a story above. Ceilings can be decorated to taste, and there are many examples of frescoes and artwork on ceilings, especially within religious buildings. A ceiling can also be the upper limit of a tunnel.
The most common type of ceiling is the dropped ceiling, which is suspended from structural elements above. Panels of drywall are fastened either directly to the ceiling joists or to a few layers of moisture-proof plywood which are then attached to the joists. Pipework or ducts can be run in the gap above the ceiling, and insulation and fireproofing material can be placed here. Alternatively, ceilings may be spray painted, leaving the pipework and ducts exposed but painted, and insulated with spray foam.
A subset of the dropped ceiling is the suspended ceiling, wherein a network of aluminum struts, as opposed to drywall, are attached to the joists, forming a series of rectangular spaces. Individual pieces of cardboard are then placed inside the bottom of those spaces so that the outer side of the cardboard, interspersed with aluminum rails, is seen as the ceiling from below. This makes it relatively easy to repair the pipes and insulation behind the ceiling, since all that is necessary is to lift off the cardboard, rather than digging through the drywall and then replacing it.
Other types of ceiling include the cathedral ceiling, the concave or barrel-shaped ceiling, the stretched ceiling and the coffered ceiling. Coving often links the ceiling to the surrounding walls. Ceilings can play a part in reducing fire hazard, and a system is available for rating the fire resistance of dropped ceilings.
Types
Ceilings are classified according to their appearance or construction. A cathedral ceiling is any tall ceiling area similar to those in a church. A dropped ceiling is one in which the finished surface is constructed anywhere from a few inches or centimeters to several feet or a few meters below the structure above it. This may be done for aesthetic purposes, such as achieving a desirable ceiling height; or practical purposes such as acoustic damping or providing a space for HVAC or piping. An inverse of this would be a raised floor. A concave or barrel-shaped ceiling is curved or rounded upward, usually for visual or acoustical value, while a coffered ceiling is divided into a grid of recessed square or octagonal panels, also called a "lacunar ceiling". A cove ceiling uses a curved plaster transition between wall and ceiling; it is named for cove molding, a molding with a concave curve. A stretched ceiling (or stretch ceiling) uses a number of individual panels using material such as PVC fixed to a perimeter rail.
Elements
Ceilings have frequently been decorated with fresco painting, mosaic tiles and other surface treatments. While hard to execute (at least in place) a decorated ceiling has the advantage that it is largely protected from damage by fingers and dust. In the past, however, this was more than compensated for by the damage from smoke from candles or a fireplace. Many historic buildings have celebrated ceilings. Perhaps the most famous is the Sistine Chapel ceiling by Michelangelo.
Ceiling height, particularly in the case of low ceilings, may have psychological impacts.
Fire-resistance rated ceilings
The most common ceiling that contributes to fire-resistance ratings in commercial and residential construction is the dropped ceiling. In the case of a dropped ceiling, the rating is achieved by the entire system: the structure above, from which the ceiling is suspended (which could be a concrete floor or a timber floor), the suspension mechanism, and, finally, the lowest membrane or dropped ceiling. Between the structure that the dropped ceiling is suspended from and the dropped membrane, such as a T-bar ceiling or a layer of drywall, there is often some room for mechanical and electrical piping, wiring and ducting to run.
An independent ceiling, however, can be constructed such that it has a stand-alone fire-resistance rating. Such systems must be tested without the benefit of being suspended from a slab above in order to prove that the resulting system is capable of holding itself up. This type of ceiling would be installed to protect items above from fire.
Gallery
| Technology | Architectural elements | null |
845042 | https://en.wikipedia.org/wiki/Grass%20carp | Grass carp | The grass carp (Ctenopharyngodon idella) is a species of large herbivorous freshwater fish in the family Cyprinidae, native to the Pacific Far East, with a native range stretching from northern Vietnam to the Amur River on the Sino-Russian border. This Asian carp is the only species of the genus Ctenopharyngodon.
Grass carp are resident fish of large turbid rivers and associated floodplain lakes/wetlands with a wide range of temperature tolerance, and spawn at temperatures of . It has been cultivated as a food fish in China for centuries, being known as one of the "Four Great Domestic Fish" (), but was later introduced to Europe and the United States for aquatic weed control, becoming the fish species with the largest reported farmed production globally, over five million tonnes per year.
Appearance and anatomy
Grass carp have elongated, chubby, torpedo-shaped body forms. The terminal mouth is slightly oblique with non-fleshy, firm lips, and no barbels. The complete lateral line contains 40 to 42 scales. Broad, ridged pharyngeal teeth are arranged in a "2, 4-4, 2" formula. The dorsal fin has eight to 10 soft rays, and the anal fin is set closer to the tail than most cyprinids. Body color is dark olive, shading to brownish-yellow on the sides, with a white belly and large, slightly outlined scales.
Grass carp grow very rapidly. Young fish stocked in the spring at will reach over by fall. The typical length is about . The maximum length is and they grow to .
Ecology
Grass carp inhabit lakes, ponds, pools and backwaters of large rivers, preferring large, slow-flowing or standing water bodies with abundant vegetation. In the wild, grass carp spawn in fast-moving rivers, and their eggs, which are slightly heavier than water, develop while drifting downstream, kept in suspension by turbulence. Grass carp require long rivers for the survival of the eggs and very young fish, and the eggs are thought to die if they sink to the bottom.
Adult grass carp feed primarily on aquatic plants, both higher aquatic plants and submerged terrestrial vegetation, but may also eat detritus, insects and other invertebrates. They eat up to three times their own body weight daily, and thrive in small lakes and backwaters that provide an abundant supply of vegetation.
According to one study, grass carp live 5–9 years, with the oldest surviving 11 years. In Silver Lake, Washington, a thriving population of grass carp is passing the 15-year mark.
Introduced species
Grass carp have been introduced to many countries around the world. In the Northern Hemisphere, countries and territories of introduction include Japan, the Philippines, Malaysia, India, Pakistan, Iran, Israel, the United States, Mexico, Sweden, Denmark, the United Kingdom, France, Germany, the Netherlands, Switzerland, Italy, Poland, the Czech Republic, Slovakia, Romania, Croatia, Slovenia, Serbia, Montenegro, Bosnia and Herzegovina and Macedonia. In the Southern Hemisphere, they have been introduced to Argentina, Venezuela, Australia, New Zealand, Fiji and South Africa. Grass carp are known to have spawned and established self-reproducing populations in only six of the many larger Northern Hemisphere rivers into which they have been stocked. Their failure to establish populations in other rivers suggests they have quite specific reproductive requirements.
In the United States, the species was first imported in 1963 from Taiwan and Malaysia to aquaculture facilities in Alabama and Arkansas. The first release is believed to have been an accidental escape in 1966 from the U.S. Fish and Wildlife Service's Fish Farming Experimental Station in Stuttgart, Arkansas, followed by planned introductions beginning in 1969. Subsequently, authorized, illegal and accidental introductions have been widespread; by the 1970s, the species had been introduced to 40 states, and it has since been reported in 45 of the country's 50 states. In 2013, it was determined to be reproducing in the Great Lakes Basin. It is still stocked in many states as an effective biocontrol for undesirable aquatic vegetation, many species of which are themselves introduced.
Use
Weed control
Grass carp were introduced into New Zealand in 1966 to control the growth of aquatic plants. Unlike the other introduced fish brought to New Zealand, the potential value and impact of grass carp were investigated in secure facilities prior to their use in field trials. They are now approved by the New Zealand government for aquatic weed control, although each instance requires specific authorization. In the Netherlands, the species was also introduced in 1973 to control over-abundant aquatic weeds. The release was controlled and regulated by the Dutch Ministry of Agriculture, Nature, and Food Quality. In both of these countries, control is made easier because grass carp are very unlikely to reproduce naturally, owing to their very specific breeding requirements, but elsewhere, control is obtained by the use of sterile, triploid fish.
Food
Grass carp is one of the most common freshwater farmed fish in China, being one of the Four Domestic Fish (四大家鱼) alongside the black carp, silver carp, and bighead carp. Its meat is tender and contains few bones. Grass carp is a featured dish in many Chinese cuisines, such as Cantonese cuisine. In some Asian countries, it is believed that ingestion of raw bile or entire gall bladders of the grass carp may improve visual acuity and health. However, it may in fact cause severe poisoning.
Fishing for grass carp
Grass carp grow large and are strong fighters when hooked on a line, but because of their vegetarian habits and their wariness, they can be difficult to catch via angling. The IGFA World record for a grass carp caught on line and hook is , caught in Bulgaria in 2009. The fish are also popular sport fish in areas where bowfishing is legal.
Where grass carp populations are maintained through stocking as a biocontrol for noxious weeds, fishermen are typically asked to return any caught to the water alive and unharmed.
| Biology and health sciences | Cypriniformes | Animals |
845253 | https://en.wikipedia.org/wiki/Gonostomatidae | Gonostomatidae | The Gonostomatidae are a family of mesopelagic marine fish, commonly named bristlemouths, lightfishes, or anglemouths. It is a relatively small family, containing only eight known genera and 32 species. However, bristlemouths make up for their lack of diversity with relative abundance, numbering in the hundreds of trillions to quadrillions. The genus Cyclothone (with 13 species) is thought to be one of the most abundant vertebrate genera in the world.
The fossil record of this family dates back to the Miocene epoch. Living bristlemouths were discovered by William Beebe in the early 1930s and described by L. S. Berg in 1958. The fish are mostly found in the Atlantic, Indian, and Pacific Oceans, although the species Cyclothone microdon may be found in Arctic waters. They have elongated bodies from in length. They have a number of green or red light-producing photophores aligned along the undersides of their heads or bodies. Their chief common name, bristlemouth, comes from their odd, equally sized, and bristle-like teeth. They are typically black in color which provides camouflage from predators in deep, dark waters. They mainly feed on zooplankton and small crustaceans due to their small size.
Morphology
Bristlemouths are protandrous hermaphrodites: they begin their lives as males, and some of them later switch to female. Male bristlemouths are smaller than females.
Bristlemouths have large jaws that are capable of catching prey larger than themselves. In S. glarisianus (a bristlemouth species), the lower jaw is about 70% as long as the entire head. The lower jaw is not used to masticate prey; it is therefore hypothesized that bristlemouths swallow their prey tail first.
Bristlemouths are extremely small, measuring on average . Bristlemouths have elongated bodies, small eyes, short snouts, large mouths, and large jaws. The dorsal fin begins in line with the anal fin. Bristlemouth species differ mainly in the intensity of their pigmentation and in photophore size; for the majority of the species, the morphology is otherwise the same.
Bristlemouths are mostly dark in pigmentation but can at times appear translucent. Bristlemouths have a pineal organ that detects slowly changing ambient light, which gives them control over their circadian clock and seasonal behavior.
Due to the small size of the fish, they are easy prey to dragonfish and fangtooths.
Taxonomy
Some classifications include the genera Pollichthys and Vinciguerria, but this article follows FishBase in placing them in the family Phosichthyidae.
Some classifications include species in the genus Zaphotias, but these are junior synonyms of the species Bonapartia pedaliota.
Feeding habits
Bristlemouths feed mostly on zooplankton and small crustaceans; crustaceans make up 92 to 98% of their diet. A minor part of their diet comes from opportunistic encounters with smaller fish. Fish prey is consumed mainly by individuals ranging from 70 mm to 75 mm in length. Bristlemouths do not show seasonal trends in their feeding habits.
Bristlemouths are diel vertical migrators, moving closer to the surface waters at night in order to find more food. Of the thirteen bristlemouth species, eight have been found near the surface, consistent with this diel vertical migration (DVM) behavior.
Bristlemouths are able to efficiently capture their prey due to their bioluminescent nature.
Bioluminescence
Bristlemouths are light-emitting fish that rely on their bioluminescence for different purposes. Some rely on it to find prey while others use it to avoid predation. However, the most common use of their bioluminescence is to signal between fish, in the same way people "dance or wear bright colors at the nightclub."
| Biology and health sciences | Stomiiformes | Animals |
18733230 | https://en.wikipedia.org/wiki/Leading-edge%20slat | Leading-edge slat | A slat is an aerodynamic surface on the leading edge of the wing of a fixed-wing aircraft. When retracted, the slat lies flush with the rest of the wing. A slat is deployed by sliding forward, opening a slot between the wing and the slat. Air from below the slat flows through the slot and replaces the boundary layer that has travelled at high speed around the leading edge of the slat, losing a significant amount of its kinetic energy due to skin friction drag. When deployed, slats allow the wings to operate at a higher angle of attack before stalling. With slats deployed an aircraft can fly at slower speeds, allowing it to take off and land in shorter distances. They are used during takeoff and landing and while performing low-speed maneuvers which may take the aircraft close to a stall. Slats are retracted in normal flight to minimize drag.
Slats are high-lift devices typically used on aircraft intended to operate within a wide range of speeds. Flap systems running along the trailing edge of the wing are common on all aircraft.
Types
Types include:
Automatic The spring-loaded slat lies flush with the wing leading edge, held in place by the force of the air acting on it. As the aircraft slows down, the aerodynamic force is reduced and the springs extend the slats. Sometimes referred to as Handley-Page slats.
Fixed The slat is permanently extended. This is sometimes used on specialist low-speed aircraft (these are referred to as slots) or when simplicity takes precedence over speed.
Powered The slat extension can be controlled by the pilot. This is commonly used on airliners.
Operation
The chord of the slat is typically only a few percent of the wing chord. The slats may extend over the outer third of the wing, or they may cover the entire leading edge. Many early aerodynamicists, including Ludwig Prandtl, believed that slats work by inducing a high energy stream to the flow of the main airfoil, thus re-energizing its boundary layer and delaying stall. In reality, the slat does not give the air in the slot a high velocity (it actually reduces its velocity) and also it cannot be called high-energy air since all the air outside the actual boundary layers has the same total heat. The actual effects of the slat are:
The slat effect The velocities at the leading edge of the downstream element (main airfoil) are reduced due to the circulation of the upstream element (slat) thus reducing the pressure peaks of the downstream element.
The circulation effect The circulation of the downstream element increases the circulation of the upstream element thus improving its aerodynamic performance.
The dumping effect The discharge velocity at the trailing edge of the slat is increased due to the circulation of the main airfoil thus alleviating separation problems or increasing lift.
Off the surface pressure recovery The deceleration of the slat wake occurs in an efficient manner, out of contact with a wall.
Fresh boundary layer effect Each new element starts with a fresh boundary layer at its leading edge. Thin boundary layers can withstand stronger adverse gradients than thick ones.
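The practical payoff of these effects is a higher maximum lift coefficient, and hence a lower stall speed, since in steady level flight lift must equal weight: W = ½ρV²SC_L,max. A rough sketch of the resulting speed reduction, with all numbers illustrative rather than taken from any particular aircraft:

```python
import math

def stall_speed(weight_n, rho, wing_area_m2, cl_max):
    """Stall speed in m/s from W = 0.5 * rho * V^2 * S * CL_max."""
    return math.sqrt(2 * weight_n / (rho * wing_area_m2 * cl_max))

W, RHO, S = 50_000.0, 1.225, 25.0   # illustrative weight (N), sea-level air density, wing area
cl_clean, cl_slats = 1.5, 2.1       # assumed CL_max without and with slats deployed

print(round(stall_speed(W, RHO, S, cl_clean), 1))  # ~46.7 m/s clean
print(round(stall_speed(W, RHO, S, cl_slats), 1))  # ~39.4 m/s with slats deployed
```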
The slat has a counterpart found in the wings of some birds, the alula, a feather or group of feathers which the bird can extend under control of its "thumb".
History
Slats were first developed by Gustav Lachmann in 1918. The stall-related crash in August 1917 of a Rumpler C aeroplane prompted Lachmann to develop the idea, and a small wooden model was built in 1917 in Cologne. In Germany in 1918 Lachmann presented a patent for leading-edge slats. However, the German patent office at first rejected it, as the office did not believe that the stall could be postponed by dividing the wing.
Independently of Lachmann, Handley Page Ltd in Great Britain also developed the slotted wing as a way to postpone the stall by delaying separation of the flow from the upper surface of the wing at high angles of attack, and applied for a patent in 1919; to avoid a patent challenge, they reached an ownership agreement with Lachmann. That year, an Airco DH.9 was fitted with slats and test flown. Later, an Airco DH.9A was modified as a monoplane with a large wing fitted with full-span leading edge slats and trailing-edge ailerons (i.e. what would later be called trailing-edge flaps) that could be deployed in conjunction with the leading-edge slats to test improved low-speed performance. This was later known as the Handley Page H.P.20. Several years later, having subsequently taken employment at the Handley-Page aircraft company, Lachmann was responsible for a number of aircraft designs, including the Handley Page Hampden.
Licensing the design became one of the company's major sources of income in the 1920s. The original designs were in the form of a fixed slot near the leading edge of the wing, a design that was used on a number of STOL aircraft.
During World War II, German aircraft were commonly fitted with a more advanced version of the slat that reduced drag by being pushed back flush against the leading edge of the wing by air pressure, popping out when the angle of attack increased to a critical angle. Notable slats of that time belonged to the German Fieseler Fi 156 Storch; these were similar in design to retractable slats, but were fixed and non-retractable. This design feature allowed the aircraft to take off into a light wind in less than 45 m (150 ft), and land in 18 m (60 ft). Aircraft designed by the Messerschmitt company employed automatic, spring-loaded leading-edge slats as a general rule, except for the Alexander Lippisch-designed Messerschmitt Me 163B Komet rocket fighter, which instead used fixed slots built integrally with, and just behind, the wing panel's outer leading edges.
Post-World War II, slats have also been used on larger aircraft and generally operated by hydraulics or electricity. The A-4 Skyhawk slats were spring loaded and deployed by the air load below certain speeds.
Research
Several technology research and development efforts exist to integrate the functions of flight control systems such as ailerons, elevators, elevons, flaps, and flaperons into wings to perform the aerodynamic purpose with several advantages: lower mass, cost, drag, and inertia (for faster, stronger control response), less complexity (mechanically simpler, fewer moving parts or surfaces, less maintenance), and a smaller radar cross-section for stealth. These may be used in many unmanned aerial vehicles (UAVs) and 6th generation fighter aircraft.
One promising approach that could rival slats is the flexible wing. In flexible wings, much or all of a wing surface can change shape in flight to deflect air flow. The X-53 Active Aeroelastic Wing is a NASA effort. The adaptive compliant wing is a military and commercial effort.
| Technology | Aircraft components | null |
6410946 | https://en.wikipedia.org/wiki/Atmosphere%20of%20Venus | Atmosphere of Venus | The atmosphere of Venus is the very dense layer of gases surrounding the planet Venus. Venus's atmosphere is composed of 96.5% carbon dioxide and 3.5% nitrogen, with other chemical compounds present only in trace amounts. It is much denser and hotter than that of Earth; the temperature at the surface is 740 K (467 °C, 872 °F), and the pressure is about 93 bar (9.3 MPa), roughly the pressure found about 900 m (3,000 ft) under water on Earth. The atmosphere of Venus supports decks of opaque clouds of sulfuric acid that cover the entire planet, preventing optical Earth-based and orbital observation of the surface. Information about surface topography has been obtained exclusively by radar imaging.
Aside from the very surface layers, the atmosphere is in a state of vigorous circulation. The upper layer of the troposphere exhibits a phenomenon of super-rotation, in which the atmosphere circles the planet in just four Earth days, much faster than the planet's sidereal day of 243 days. The winds supporting super-rotation blow at a speed of 100 m/s (≈360 km/h or 220 mph) or more. Winds move at up to 60 times the speed of the planet's rotation, while Earth's fastest winds are only 10% to 20% of its rotation speed. On the other hand, the wind speed becomes increasingly slower as the elevation from the surface decreases, with the breeze barely reaching the speed of 2.8 m/s (≈10 km/h or 6.2 mph) on the surface. Near the poles are anticyclonic structures called polar vortices. Each vortex is double-eyed and shows a characteristic S-shaped pattern of clouds. Above the troposphere there is an intermediate layer, the mesosphere, which separates it from the thermosphere. The thermosphere is also characterized by strong circulation, but very different in its nature—the gases heated and partially ionized by sunlight in the sunlit hemisphere migrate to the dark hemisphere where they recombine and downwell.
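The four-day circulation figure follows directly from the quoted wind speed; as a quick consistency check (planetary radius and cloud-top altitude are assumed round values):

```python
import math

R_VENUS_KM = 6052.0   # mean planetary radius (assumed round value)
CLOUD_TOP_KM = 65.0   # approximate altitude of the super-rotating cloud tops
WIND_M_S = 100.0      # zonal wind speed quoted above

circumference_m = 2 * math.pi * (R_VENUS_KM + CLOUD_TOP_KM) * 1000
print(round(circumference_m / WIND_M_S / 86_400, 1))  # ~4.4 Earth days
```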
Unlike Earth, Venus lacks a magnetic field. Its ionosphere separates the atmosphere from outer space and the solar wind. This ionized layer excludes the solar magnetic field, giving Venus a distinct magnetic environment. This is considered Venus's induced magnetosphere. Lighter gases, including water vapour, are continuously blown away by the solar wind through the induced magnetotail. It is speculated that the atmosphere of Venus up to around 4 billion years ago was more like that of the Earth with liquid water on the surface. A runaway greenhouse effect may have been caused by the evaporation of the surface water and subsequent rise of the levels of other greenhouse gases.
Despite the harsh conditions on the surface, the atmospheric pressure and temperature at about 50 km to 65 km above the surface of the planet are nearly the same as that of the Earth, making its upper atmosphere the most Earth-like area in the Solar System, even more so than the surface of Mars. Due to the similarity in pressure and temperature and the fact that breathable air (21% oxygen, 78% nitrogen) is a lifting gas on Venus in the same way that helium is a lifting gas on Earth, the upper atmosphere has been proposed as a location for both exploration and colonization.
History
Christiaan Huygens was the first to hypothesize the existence of an atmosphere on Venus. In Book II of Cosmotheoros, published in 1698, he writes:
Decisive evidence for the atmosphere of Venus was provided by Mikhail Lomonosov, based on his observation of the transit of Venus in 1761 in a small observatory near his house in Saint Petersburg, Russia.
Structure and composition
Composition
The atmosphere of Venus is composed of 96.5% carbon dioxide, 3.5% nitrogen, and traces of other gases, most notably sulfur dioxide. The amount of nitrogen in the atmosphere is relatively small compared to the amount of carbon dioxide, but because the atmosphere is so much thicker than that on Earth, its total nitrogen content is roughly four times higher than Earth's, even though on Earth nitrogen makes up about 78% of the atmosphere.
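The "roughly four times" figure can be checked with back-of-envelope numbers; the sketch below treats the quoted percentages as mass fractions and assumes round values for the two total atmospheric masses:

```python
VENUS_ATM_KG = 4.8e20   # total mass of Venus's atmosphere (assumed value)
EARTH_ATM_KG = 5.1e18   # total mass of Earth's atmosphere (assumed value)

venus_n2 = VENUS_ATM_KG * 0.035   # 3.5% nitrogen on Venus
earth_n2 = EARTH_ATM_KG * 0.78    # ~78% nitrogen on Earth

print(round(venus_n2 / earth_n2, 1))  # ~4.2, i.e. "roughly four times"
```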
The atmosphere contains a range of compounds in small quantities, including some based on hydrogen, such as hydrogen chloride (HCl) and hydrogen fluoride (HF). There is carbon monoxide, water vapour and atomic oxygen as well. Hydrogen is in relatively short supply in the Venusian atmosphere. A large amount of the planet's hydrogen is theorised to have been lost to space, with the remainder being mostly bound up in water vapour and sulfuric acid (H2SO4). Strong evidence of significant hydrogen loss over the historical evolution of the planet is the very high D–H ratio measured in the Venusian atmosphere. The ratio is about 0.015–0.025, which is 100–150 times higher than the terrestrial value of 1.6×10^−4. According to some measurements, in the upper atmosphere of Venus the D/H ratio is 1.5 times higher than in the bulk atmosphere.
Phosphine
In 2020, there was considerable discussion regarding whether phosphine (PH3) might be present in trace amounts in Venus's atmosphere. This would be noteworthy as phosphine is a potential biomarker indicating the presence of life. This was prompted by an announcement in September 2020 that this compound had been detected in trace amounts. No known abiotic source present on Venus could produce phosphine in the quantities detected. On review, an interpolation error was discovered that resulted in multiple spurious spectroscopic lines, including the spectral feature of phosphine. Re-analyses of the data with the corrected algorithm either did not detect phosphine or detected it at a much lower concentration of 1 ppb.
The announcement prompted re-analysis of Pioneer Venus data, which found that part of the chlorine and all of the hydrogen sulfide spectral features are instead phosphine-related, implying a lower-than-thought concentration of chlorine and a non-detection of hydrogen sulfide. Another re-analysis of archived infrared spectral measurements by the NASA Infrared Telescope Facility in 2015 did not reveal any phosphine in the Venusian atmosphere, placing an upper limit for phosphine concentration at 5 ppb—a quarter of the spectroscopic value reported in September.
In 2022, no phosphine detection with an upper limit concentration of 0.8 ppb was announced for Venusian altitudes of 75–110 km.
In September 2024, a preliminary analysis of the JCMT-Venus data confirmed the existence of phosphine in the atmosphere of Venus, at a concentration of 300 ppb at an altitude of 55 km. Further data processing is still needed to measure the phosphine concentration deeper in the Venusian cloud deck.
Ammonia
Ammonia was tentatively detected in the atmosphere of Venus by two atmospheric probes, Venera 8 and the Pioneer Venus Multiprobe, although the detections were rejected at the time because of the poorly characterized behavior of the sensors in the Venusian environment and because ammonia was believed to be chemically unstable in the strongly oxidizing atmosphere of Venus.
Troposphere
The atmosphere is divided into a number of sections depending on altitude. The densest part of the atmosphere, the troposphere, begins at the surface and extends upwards to 65 km. The winds are slow near the surface, but at the top of the troposphere the temperature and pressure reach Earth-like levels and clouds pick up speed to 100 m/s (360 km/h).
The atmospheric pressure at the surface of Venus is about 92 times that of the Earth, similar to the pressure found below the surface of the ocean. The atmosphere has a mass of 4.8×10^20 kg, about 93 times the mass of the Earth's total atmosphere. The density of the air at the surface is 65 kg/m3, which is 6.5% that of liquid water on Earth. The pressure found on Venus's surface is high enough that the carbon dioxide is technically no longer a gas, but a supercritical fluid. This supercritical carbon dioxide forms a kind of sea, with 6.5% the density of water, that covers the entire surface of Venus. This sea of supercritical carbon dioxide transfers heat very efficiently, buffering the temperature changes between night and day (which last 56 terrestrial days). Possibly higher atmospheric pressures in Venus's past might have created an even more fluid-like layer of supercritical carbon dioxide that shaped Venus's landscape; altogether, it is unclear how the supercritical environment behaves and is shaped.
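As a rough cross-check, the quoted surface density is close to what the ideal gas law gives for pure CO2 at 92 atm and 740 K, even though supercritical CO2 is not exactly an ideal gas under these conditions:

```python
R = 8.314           # J/(mol*K), universal gas constant
M_CO2 = 0.044       # kg/mol, molar mass of CO2
P = 92 * 101_325    # surface pressure in Pa (92 Earth atmospheres)
T = 740.0           # surface temperature in K

rho = P * M_CO2 / (R * T)   # ideal gas law: rho = P*M/(R*T)
print(round(rho, 1))        # ~66.7 kg/m^3, near the quoted 65 kg/m^3
```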
The large amount of CO2 in the atmosphere together with water vapour and sulfur dioxide create a strong greenhouse effect, trapping solar energy and raising the surface temperature to around 740 K (467 °C), hotter than the surface of any other terrestrial planet in the Solar System, even that of Mercury, despite Venus being located farther from the Sun and receiving only 25% of the solar energy (per unit area) that Mercury does. The average temperature on the surface is above the melting points of lead (600 K, 327 °C), tin (505 K, 232 °C), and zinc (693 K, 420 °C). The thick troposphere also makes the difference in temperature between the day and night side small, even though the slow retrograde rotation of the planet causes a single solar day to last 116.5 Earth days. The surface of Venus spends 58.3 days in darkness before the sun rises again behind the clouds.
The troposphere on Venus contains 99% of the atmosphere by mass. 90% of the atmosphere of Venus is within 28 km (17.5 mi) of the surface; by comparison, 90% of the atmosphere of Earth is within 16 km (10 mi) of the surface. At a height of 50 km (31 mi) the atmospheric pressure is approximately equal to that at the surface of Earth. On the night side of Venus clouds can still be found at 80 km (50 mi) above the surface.
The altitude of the troposphere most similar to Earth is near the tropopause—the boundary between troposphere and mesosphere. It is located slightly above 50 km. According to measurements by the Magellan and Venus Express probes, the altitude from 52.5 to 54 km has a temperature between 293 K (20 °C) and 310 K (37 °C), and the altitude at 49.5 km above the surface is where the pressure becomes the same as Earth at sea level. As crewed ships sent to Venus would be able to compensate for differences in temperature to a certain extent, anywhere from about 50 to 54 km or so above the surface would be the easiest altitude in which to base an exploration or colony, where the temperature would be in the crucial "liquid water" range of 273 K (0 °C) to 323 K (50 °C) and the air pressure the same as habitable regions of Earth. As CO2 is heavier than air, the colony's air (nitrogen and oxygen) could keep the structure floating at that altitude like a dirigible.
Circulation
The circulation in Venus's troposphere follows the so-called cyclostrophic flow. Its windspeeds are roughly determined by the balance of the pressure gradient and centrifugal forces in almost purely zonal flow. In contrast, the circulation in the Earth's atmosphere is governed by the geostrophic balance. Venus's windspeeds can be directly measured only in the upper troposphere (tropopause), between 60 and 70 km altitude, which corresponds to the upper cloud deck. The cloud motion is usually observed in the ultraviolet part of the spectrum, where the contrast between clouds is the highest. The linear wind speeds at this level are about 100 ± 10 m/s at lower than 50° latitude. They are retrograde in the sense that they blow in the direction of the retrograde rotation of the planet. The winds quickly decrease towards the higher latitudes, eventually reaching zero at the poles. Such strong cloud-top winds cause a phenomenon known as the super-rotation of the atmosphere. In other words, these high-speed winds circle the whole planet faster than the planet itself rotates. The super-rotation on Venus is differential, which means that the equatorial troposphere super-rotates more slowly than the troposphere at the midlatitudes. The winds also have a strong vertical gradient. They decline deep in the troposphere at a rate of 3 m/s per km. The winds near the surface of Venus are much slower than those on Earth. They actually move at only a few kilometres per hour (generally less than 2 m/s and with an average of 0.3 to 1.0 m/s), but due to the high density of the atmosphere at the surface, this is still enough to transport dust and small stones across the surface, much like a slow-moving current of water.
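In its simplest form, the cyclostrophic balance mentioned above equates the centrifugal acceleration of the zonal flow with the meridional pressure-gradient force. Schematically, for zonal wind u at latitude φ on a planet of radius r, with density ρ and pressure p:

$$\frac{u^2 \tan\varphi}{r} = -\frac{1}{\rho\, r}\,\frac{\partial p}{\partial \varphi}$$

This contrasts with Earth's geostrophic balance, in which the Coriolis force rather than the centrifugal term balances the pressure gradient.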
All winds on Venus are ultimately driven by convection. Hot air rises in the equatorial zone, where solar heating is concentrated and flows to the poles. Such an almost-planetwide overturning of the troposphere is called Hadley circulation. However, the meridional air motions are much slower than zonal winds. The poleward limit of the planet-wide Hadley cell on Venus is near ±60° latitudes. Here air starts to descend and returns to the equator below the clouds. This interpretation is supported by the distribution of the carbon monoxide, which is also concentrated in the vicinity of ±60° latitudes. Poleward of the Hadley cell a different pattern of circulation is observed. In the latitude range 60°–70° cold polar collars exist. They are characterized by temperatures about 30–40 K lower than in the upper troposphere at nearby latitudes. The lower temperature is probably caused by the upwelling of the air in them and by the resulting adiabatic cooling. Such an interpretation is supported by the denser and higher clouds in the collars. The clouds lie at 70–72 km altitude in the collars—about 5 km higher than at the poles and low latitudes. A connection may exist between the cold collars and high-speed midlatitude jets in which winds blow as fast as 140 m/s. Such jets are a natural consequence of the Hadley-type circulation and should exist on Venus between 55 and 60° latitude.
Odd structures known as polar vortices lie within the cold polar collars. They are giant hurricane-like storms four times larger than their terrestrial analogs. Each vortex has two "eyes"—the centres of rotation, which are connected by distinct S-shaped cloud structures. Such double-eyed structures are also called polar dipoles. Vortices rotate with a period of about 3 days in the direction of general super-rotation of the atmosphere. The linear wind speeds are 35–50 m/s near their outer edges and zero at the poles. The temperature at the cloud-tops in each polar vortex is much higher than in the nearby polar collars, reaching 250 K (−23 °C). The conventional interpretation of the polar vortices is that they are anticyclones with downwelling in the centre and upwelling in the cold polar collars. This type of circulation resembles a winter polar anticyclonic vortex on Earth, especially the one found over Antarctica. The observations in the various infrared atmospheric windows indicate that the anticyclonic circulation observed near the poles penetrates as deep as 50 km altitude, i.e. to the base of the clouds. The polar upper troposphere and mesosphere are extremely dynamic; large bright clouds may appear and disappear over the space of a few hours. One such event was observed by Venus Express between 9 and 13 January 2007, when the south polar region became brighter by 30%. This event was probably caused by an injection of sulfur dioxide into the mesosphere, which then condensed, forming a bright haze. The two eyes in the vortices have yet to be explained.
The first vortex on Venus was discovered at the north pole by the Pioneer Venus mission in 1978. The second large "double-eyed" vortex, at the south pole of Venus, was discovered in the summer of 2006 by Venus Express, which came as no surprise.
Images from the Akatsuki orbiter revealed something similar to jet stream winds in the low and middle cloud region, which extends from 45 to 60 km in altitude. The wind speed maximized near the equator. In September 2017, JAXA scientists named this phenomenon "Venusian equatorial jet".
Upper atmosphere and ionosphere
The mesosphere of Venus extends from 65 km to 120 km in height, and the thermosphere begins at approximately 120 km, eventually reaching the upper limit of the atmosphere (exosphere) at about 220 to 350 km. The exosphere begins when the atmosphere becomes so thin that the average number of collisions per air molecule is less than one.
The mesosphere of Venus can be divided into two layers: the lower one between 62 and 73 km and the upper one between 73 and 95 km. In the first layer the temperature is nearly constant at 230 K (−43 °C). This layer coincides with the upper cloud deck. In the second layer, the temperature starts to decrease again, reaching about 165 K (−108 °C) at the altitude of 95 km, where mesopause begins. It is the coldest part of the Venusian dayside atmosphere. In the dayside mesopause, which serves as a boundary between the mesosphere and thermosphere and is located between 95 and 120 km, temperature increases to a constant—about 300–400 K (27–127 °C)—value prevalent in the thermosphere. In contrast, the nightside Venusian thermosphere is the coldest place on Venus with temperature as low as 100 K (−173 °C). It is even called a cryosphere.
The circulation patterns in the upper mesosphere and thermosphere of Venus are completely different from those in the lower atmosphere. At altitudes 90–150 km the Venusian air moves from the dayside to nightside of the planet, with upwelling over sunlit hemisphere and downwelling over dark hemisphere. The downwelling over the nightside causes adiabatic heating of the air, which forms a warm layer in the nightside mesosphere at the altitudes 90–120 km. The temperature of this layer—230 K (−43 °C)—is far higher than the typical temperature found in the nightside thermosphere—100 K (−173 °C). The air circulated from the dayside also carries oxygen atoms, which after recombination form excited molecules of oxygen in the long-lived singlet state (1Δg), which then relax and emit infrared radiation at the wavelength 1.27 μm. This radiation from the altitude range 90–100 km is often observed from the ground and spacecraft. The nightside upper mesosphere and thermosphere of Venus is also the source of non-local thermodynamic equilibrium emissions of CO2 and nitric oxide molecules, which are responsible for the low temperature of the nightside thermosphere.
The Venus Express probe has shown through stellar occultation that the atmospheric haze extends much further up on the night side than on the day side. On the day side the cloud deck has a thickness of 20 km and extends up to about 65 km, whereas on the night side the cloud deck, in the form of a thick haze, reaches up to 90 km in altitude, well into the mesosphere, continuing even further to 105 km as a more transparent haze. In 2011, the spacecraft discovered that Venus has a thin ozone layer at an altitude of 100 km.
Venus has an extended ionosphere located at altitudes of 120–300 km. The ionosphere almost coincides with the thermosphere. High levels of ionization are maintained only over the dayside of the planet; over the nightside the concentration of electrons is almost zero. The ionosphere of Venus consists of three layers: v1 between 120 and 130 km, v2 between 140 and 160 km and v3 between 200 and 250 km. There may be an additional layer near 180 km. The maximum electron volume density (number of electrons in a unit of volume) of about 3 × 10^11 m^−3 is reached in the v2 layer near the subsolar point. The upper boundary of the ionosphere (the ionopause) is located at altitudes of 220–375 km and separates the plasma of planetary origin from that of the induced magnetosphere. The main ionic species in the v1 and v2 layers is the O2+ ion, whereas the v3 layer consists of O+ ions. The ionospheric plasma is observed to be in motion; solar photoionization on the dayside and ion recombination on the nightside are the processes mainly responsible for accelerating the plasma to the observed velocities. The plasma flow appears to be sufficient to maintain the nightside ionosphere at or near the observed median level of ion densities.
Induced magnetosphere
Venus is known to lack an intrinsic magnetic field. The reason for its absence is not at all clear, but it may be related to a reduced intensity of convection in the Venusian mantle. Venus only has an induced magnetosphere formed by the Sun's magnetic field carried by the solar wind. This process can be understood as the field lines wrapping around an obstacle, in this case Venus. The induced magnetosphere of Venus has a bow shock, magnetosheath, magnetopause and magnetotail with the current sheet.
At the subsolar point the bow shock stands 1900 km (0.3 Rv, where Rv is the radius of Venus) above the surface of Venus. This distance was measured in 2007 near the solar activity minimum. Near the solar activity maximum it can be several times further from the planet. The magnetopause is located at the altitude of 300 km. The upper boundary of the ionosphere (ionopause) is near 250 km. Between the magnetopause and ionopause there exists a magnetic barrier—a local enhancement of the magnetic field, which prevents the solar plasma from penetrating deeper into the Venusian atmosphere, at least near solar activity minimum. The magnetic field in the barrier reaches up to 40 nT. The magnetotail continues up to ten radii from the planet. It is the most active part of the Venusian magnetosphere. There are reconnection events and particle acceleration in the tail. The energies of electrons and ions in the magnetotail are around 100 and 1000 eV respectively.
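As a quick sanity check on the quoted standoff distance, the following one-liner converts 1900 km into Venus radii; the mean radius of 6051.8 km is a standard value, and the snippet is purely illustrative.

```python
# Express the subsolar bow-shock standoff (1900 km above the surface, from
# the text) in Venus radii; 6051.8 km is the standard mean radius of Venus.
R_VENUS_KM = 6051.8
standoff_km = 1900.0
print(f"{standoff_km / R_VENUS_KM:.2f} Rv")  # ~0.31 Rv, matching the quoted 0.3 Rv
```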
Due to the lack of an intrinsic magnetic field on Venus, the solar wind penetrates relatively deep into the planetary exosphere and causes substantial atmospheric loss. The loss happens mainly via the magnetotail. Currently the main ion types being lost are O+, H+ and He+. The ratio of hydrogen to oxygen losses is around 2 (i.e. almost stoichiometric for water), indicating the ongoing loss of water.
Clouds
Venusian clouds are thick and are composed mainly (75–96%) of sulfuric acid droplets. These clouds obscure the surface of Venus from optical imaging, and reflect about 75% of the sunlight that falls on them. The geometric albedo, a common measure of reflectivity, is the highest of any planet in the Solar System. This high reflectivity could provide a probe exploring the cloud tops with sufficient solar energy that solar cells could be fitted anywhere on the craft. The density of the clouds is highly variable, with the densest layer at about 48.5 km reaching 0.1 g/m3, similar to the lower range of cumulonimbus storm clouds on Earth.
The cloud cover is such that it reflects more than 60% of the solar light Venus receives, leaving the surface with typical light levels of 14,000 lux, comparable to that on Earth "in the daytime with overcast clouds". The equivalent visibility is about three kilometers, but this will likely vary with the wind conditions. Little to no solar energy could conceivably be collected by solar panels on a surface probe. In fact, due to the thick, highly reflective cloud cover, the total solar energy received by the surface of the planet is less than that of the Earth, despite its proximity to the Sun.
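The claim that Venus's surface receives less total solar energy than Earth's, despite its proximity to the Sun, can be checked with a hedged back-of-the-envelope estimate; the Bond albedo values below (~0.75 for Venus, ~0.31 for Earth) are commonly cited approximations, not figures from this article.

```python
# Back-of-the-envelope comparison of solar power absorbed per unit area.
# Assumed inputs: solar constant 1361 W/m^2 at 1 AU, Venus at 0.723 AU,
# Bond albedos ~0.75 (Venus) and ~0.31 (Earth) -- approximate literature values.
SOLAR_CONSTANT_1AU = 1361.0          # W/m^2
VENUS_DISTANCE_AU = 0.723

flux_at_venus = SOLAR_CONSTANT_1AU / VENUS_DISTANCE_AU**2   # ~2600 W/m^2
absorbed_venus = flux_at_venus * (1 - 0.75)                 # ~650 W/m^2
absorbed_earth = SOLAR_CONSTANT_1AU * (1 - 0.31)            # ~940 W/m^2
print(absorbed_venus < absorbed_earth)                      # True
```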
Sulfuric acid is produced in the upper atmosphere by the Sun's photochemical action on carbon dioxide, sulfur dioxide, and water vapour. Ultraviolet photons of wavelengths less than 169 nm can photodissociate carbon dioxide into carbon monoxide and monatomic oxygen. Monatomic oxygen is highly reactive; when it reacts with sulfur dioxide, a trace component of the Venusian atmosphere, the result is sulfur trioxide, which can combine with water vapour, another trace component of Venus's atmosphere, to yield sulfuric acid.
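The reaction chain described in this paragraph can be summarized schematically as follows (a sketch of the net steps, ignoring intermediate species and rate details):

```latex
\begin{align*}
\mathrm{CO_2} + h\nu\ (\lambda < 169\,\mathrm{nm}) &\longrightarrow \mathrm{CO} + \mathrm{O} \\
\mathrm{SO_2} + \mathrm{O} &\longrightarrow \mathrm{SO_3} \\
\mathrm{SO_3} + \mathrm{H_2O} &\longrightarrow \mathrm{H_2SO_4}
\end{align*}
```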
Surface-level humidity is less than 0.1%. Venus's sulfuric acid rain never reaches the ground, but is evaporated by the heat before reaching the surface, a phenomenon known as virga. It is theorized that early volcanic activity released sulfur into the atmosphere and that the high temperatures prevented it from being trapped into solid compounds on the surface, as happened on Earth. Besides sulfuric acid, cloud droplets can contain a wide array of sulfate salts, raising the pH of the droplets to 1.0 in one scenario proposed to explain the sulfur dioxide measurements.
In 2009, a prominent bright spot in the atmosphere was noted by an amateur astronomer and photographed by Venus Express. Its cause is currently unknown, with surface volcanism advanced as a possible explanation.
Lightning
The clouds of Venus may be capable of producing lightning, but the debate is ongoing, with volcanic lightning and sprites also under discussion. The Soviet Venera 9 and 10 orbiters obtained ambiguous optical and electromagnetic evidence of lightning. The Venera 11, 12, 13, and 14 landers attempted to observe lightning; no lightning activity was recorded, but very low frequency (VLF) waves were detected during descent. The European Space Agency's Venus Express in 2007 detected whistler waves that could be attributed to lightning. Their intermittent appearance suggests a pattern associated with weather activity. According to the whistler observations, the lightning rate is at least half that on Earth and may be similar. However, the Venus Express findings are incompatible with data from the JAXA Akatsuki spacecraft, which indicate a very low flash rate. Recent work from a Parker Solar Probe flyby indicates that the whistler waves propagate towards Venus rather than away from it, suggesting a non-planetary origin.
The Pioneer Venus Orbiter (PVO) was equipped with an electric field detector specifically to detect lightning and the Venera 13 and 14 missions included a radio receiver and point discharge sensor to search for thunderstorms. Other missions equipped with instruments that could search for lightning included Venera 9 which had a visible spectrometer; Pioneer which had a star sensor; and VEGA which had a photometer.
The mechanism generating lightning on Venus, if present, remains unknown. Whilst the sulfuric acid cloud droplets can become charged, the atmosphere may be too electrically conductive for the charge to be sustained, preventing lightning.
Lightning could potentially contribute to atmospheric chemistry: the heat of a discharge could break apart molecules containing carbon, oxygen, sulfur, nitrogen, and hydrogen (carbon dioxide, nitrogen gas, sulfuric acid, and water), which would then recombine to form new molecules ("carbon oxides", "suboxides", "sulfur oxides", "oxygen", "elemental sulfur", "nitrogen oxides", "sulfuric acid clusters", "polysulfur oxides", "carbon soot", etc.). Lightning could contribute to the production of carbon monoxide and oxygen gas by converting sulfur and sulfur dioxide into sulfuric acid, and water and sulfur dioxide into sulfur, helping to sustain the clouds. Regardless of how frequent lightning on Venus is, it is important to study because it is a potential hazard for spacecraft.
Throughout the 1980s, it was thought that the cause of the night-side glow ("ashen light") on Venus was lightning; however, Venus lightning may be too weak to cause it.
Possibility of life
Due to the harsh conditions on the surface, little of the planet has been explored; moreover, life elsewhere in the universe need not resemble life as currently understood, and the full tenacity of life on Earth itself has not yet been established. Creatures known as extremophiles exist on Earth, preferring extreme habitats. Thermophiles and hyperthermophiles thrive at temperatures reaching above the boiling point of water, acidophiles thrive at a pH level of 3 or below, polyextremophiles can survive a variety of extreme conditions, and many other types of extremophiles exist on Earth.
The surface temperature of Venus (over 450 °C) is far beyond the extremophile range, which extends only tens of degrees beyond 100 °C. However, the lower temperature of the cloud tops means that life could plausibly exist there, the same way that bacteria have been found living and reproducing in clouds on Earth. Any such bacteria living in the cloud tops, however, would have to be hyper-acidophilic, due to the concentrated sulfuric acid environment. Microbes in the thick, cloudy atmosphere could be protected from solar radiation by the sulfur compounds in the air.
The Venusian atmosphere has been found to be sufficiently out of equilibrium as to require further investigation. Analysis of data from the Venera, Pioneer, and Magellan missions has found hydrogen sulfide (later disputed) and sulfur dioxide (SO2) together in the upper atmosphere, as well as carbonyl sulfide (OCS). The first two gases react with each other, implying that something must produce them. Carbonyl sulfide is difficult to produce inorganically, but it is present in the Venusian atmosphere. However, the planet's volcanism could explain the presence of carbonyl sulfide. In addition, one of the early Venera probes detected large amounts of toxic chlorine just below the Venusian cloud deck.
It has been proposed that microbes at this level could be soaking up ultraviolet light from the Sun as a source of energy, which could be a possible explanation for the "unknown UV absorber" seen as dark patches on UV images of the planet. The existence of this "unknown UV absorber" prompted Carl Sagan to publish an article in 1963 proposing the hypothesis of microorganisms in the upper atmosphere as the agent absorbing the UV light. In 2012, the abundance and vertical distribution of these unknown ultraviolet absorbers in the Venusian atmosphere were investigated through analysis of Venus Monitoring Camera images, but their composition is still unknown. In 2016, disulfur dioxide was identified as a possible candidate for causing the so far unexplained UV absorption of the Venusian atmosphere. The dark patches of "unknown UV absorbers" are prominent enough to influence the weather on Venus. In 2021, it was suggested that the color of the "unknown UV absorber" matches that of "red oil", a known substance comprising mixed organic carbon compounds dissolved in concentrated sulfuric acid.
In September 2020, research studies led by Cardiff University using the James Clerk Maxwell and ALMA radio telescopes reported the detection of phosphine in Venus's atmosphere that was not linked to any known abiotic method of production present, or possible, under Venusian conditions. Phosphine is extremely hard to make, and the chemistry in the Venusian clouds should destroy the molecules before they could accumulate to the observed amounts. The phosphine was detected at heights of at least 48 km above the surface of Venus, and primarily at mid-latitudes, with none detected at the poles. Scientists note that the detection itself could be further verified beyond the use of multiple telescopes detecting the same signal, as the phosphine fingerprint described in the study could theoretically be a false signal introduced by the telescopes or by data processing. The detection was later suggested to be a false positive, or a true signal with a much over-estimated amplitude, compatible with a 1 ppb concentration of phosphine. A re-analysis of the ALMA dataset in April 2021 recovered the 20 ppb phosphine signal with a signal-to-noise ratio of 5.4, and by August 2021 it was confirmed that the suspected sulfur dioxide contamination contributed only 10% of the tentative signal in the phosphine spectral line band.
Evolution
Through studies of the present cloud structure and the geology of the surface, combined with the fact that the luminosity of the Sun has increased by 25% since around 3.8 billion years ago, it is thought that the early environment of Venus was more like that of Earth, with liquid water on the surface. At some point in the evolution of Venus, a runaway greenhouse effect occurred, leading to the current greenhouse-dominated atmosphere. The timing of this transition away from an Earthlike environment is not known, but is estimated to have occurred around 4 billion years ago. The runaway greenhouse effect may have been caused by the evaporation of the surface water and the rise in the levels of greenhouse gases that followed. Venus's atmosphere has therefore received a great deal of attention from those studying climate change on Earth.
There are no geologic forms on the planet to suggest the presence of water over the past billion years. However, there is no reason to suppose that Venus was an exception to the processes that formed Earth and gave it its water during its early history, possibly from the original rocks that formed the planet or later on from comets. The common view among research scientists is that water would have existed for about 600 million years on the surface before evaporating, though some such as David Grinspoon believe that up to 2 billion years could also be plausible. This longer timescale for the persistence of oceans is also supported by General Circulation Model simulations incorporating the thermal effects of clouds on an evolving Venusian hydrosphere.
The early Earth during the Hadean eon is believed by most scientists to have had a Venus-like atmosphere, with roughly 100 bar of CO2 and a surface temperature of 230 °C, and possibly even sulfuric acid clouds, until about 4.0 billion years ago, by which time plate tectonics were in full force and, together with the early water oceans, removed the CO2 and sulfur from the atmosphere. Early Venus would thus most likely have had water oceans like the Earth, but any plate tectonics would have ended when Venus lost its oceans. Its surface is estimated to be about 500 million years old, so it would not be expected to show evidence of plate tectonics.
Observations and measurement from Earth
In 1761, Russian polymath Mikhail Lomonosov observed an arc of light surrounding the part of Venus off the Sun's disc at the beginning of the egress phase of the transit and concluded that Venus has an atmosphere. In 1940, Rupert Wildt calculated that the amount of CO2 in the Venusian atmosphere would raise surface temperature above the boiling point for water. This was confirmed when Mariner 2 made radiometer measurements of the temperature in 1962. In 1967, Venera 4 confirmed that the atmosphere consisted primarily of carbon dioxide.
The upper atmosphere of Venus can be measured from Earth when the planet crosses the Sun in a rare event known as a solar transit. The last solar transit of Venus occurred in 2012. Using quantitative astronomical spectroscopy, scientists were able to analyze sunlight that passed through the planet's atmosphere to reveal chemicals within it. As the technique of analysing light to discover information about a planet's atmosphere only first showed results in 2001, this was the first opportunity to gain conclusive results in this way on the atmosphere of Venus since observation of solar transits began. This solar transit was a rare opportunity considering the lack of information on the atmosphere between 65 and 85 km. The solar transit in 2004 enabled astronomers to gather a large amount of data useful not only in determining the composition of the upper atmosphere of Venus, but also in refining techniques used in searching for extrasolar planets. The atmosphere, mostly CO2, absorbs near-infrared radiation, making it easy to observe. During the 2004 transit, the absorption in the atmosphere as a function of wavelength revealed the properties of the gases at that altitude. The Doppler shift of the gases also enabled wind patterns to be measured.
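The wind measurement relies on the non-relativistic Doppler relation; in the sketch below, the 1.6 μm wavelength (a near-infrared CO2 band chosen for illustration) and the 100 m/s wind speed are assumed example values, not figures from the transit studies.

```latex
\frac{\Delta\lambda}{\lambda_0} = \frac{v}{c}
\qquad\Longrightarrow\qquad
\Delta\lambda \approx 1.6\,\mu\mathrm{m}\times\frac{100\ \mathrm{m/s}}{3\times 10^{8}\ \mathrm{m/s}}\approx 0.5\ \mathrm{pm}.
```

The wind signal is thus a sub-picometre line shift, which is why high-resolution spectroscopy is required.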
A solar transit of Venus is an extremely rare event, and the last solar transit of the planet before 2004 was in 1882. The most recent solar transit was in 2012; the next one will not occur until 2117.
Space missions
Recent and current spaceprobes
The Venus Express spacecraft formerly in orbit around the planet probed deeper into the atmosphere using infrared imaging spectroscopy in the 1–5 μm spectral range.
The JAXA probe Akatsuki (Venus Climate Orbiter), launched in May 2010, was designed to study the planet for a period of two years, including the structure and activity of the atmosphere, but it failed to enter Venus orbit in December 2010. A second attempt to achieve orbit succeeded on 7 December 2015. Designed specifically to study the planet's climate, Akatsuki is the first meteorology satellite to orbit Venus (the first for a planet other than Earth). One of its five cameras, known as "IR2", can probe the atmosphere of the planet underneath its thick clouds, in addition to its movement and the distribution of trace components. With a highly eccentric orbit (periapsis altitude of 400 km and apoapsis of 310,000 km), it can take close-up photographs of the planet, and should also be able to confirm the presence of both active volcanoes and lightning.
Proposed missions
The Venus In-Situ Explorer, proposed under NASA's New Frontiers program, is a probe that would aid in understanding the processes on the planet that led to climate change, as well as paving the way towards a later sample return mission.
A craft called the Venus Mobile Explorer has been proposed by the Venus Exploration Analysis Group (VEXAG) to study the composition, including isotopic measurements, of the surface and the atmosphere, for about 90 days. The mission has not been selected for launch.
After missions discovered the reality of the harsh nature of the planet's surface, attention shifted towards other targets such as Mars. There have been a number of proposed missions afterward, however, and many of these involve the little-known upper atmosphere. The Soviet Vega program in 1985 dropped two balloons into the atmosphere, but these were battery-powered and lasted for only about two Earth days each before running out of power. Since then, there has been no exploration of the upper atmosphere. In 2002, the NASA contractor Global Aerospace proposed a balloon that would be capable of staying in the upper atmosphere for hundreds of Earth days as opposed to two.
A solar flyer has also been proposed by Geoffrey A. Landis in place of a balloon, and the idea has been featured from time to time since the early 2000s. Venus has a high albedo and reflects most of the sunlight that shines on it, making the surface quite dark; the upper atmosphere at 60 km has an upward solar intensity of 90%, meaning that solar panels on both the top and the bottom of a craft could be used with nearly equal efficiency. In addition, the slightly lower gravity, high air pressure and slow rotation, which allows for perpetual solar power, make this part of the planet well suited to exploration. The proposed flyer would operate best at an altitude where sunlight, air pressure, and wind speed would enable it to remain in the air perpetually, with slight dips down to lower altitudes for a few hours at a time before returning to higher altitudes. As sulfuric acid in the clouds at this height is not a threat to a properly shielded craft, this so-called "solar flyer" would be able to survey the region between 45 km and 60 km indefinitely, until mechanical error or unforeseen problems caused it to fail. Landis also proposed that rovers similar to Spirit and Opportunity could explore the surface, with the difference that Venus surface rovers would be "dumb" rovers controlled by radio signals from computers located in the flyer above, requiring only parts such as motors and transistors that can withstand the surface conditions, and not the weaker microelectronic parts that cannot be made resistant to the heat, pressure and acidic conditions.
Russian space science plans include the launch of the Venera-D (Venus-D) probe in 2029. The main scientific goals of the Venera-D mission are investigation of the structure and chemical composition of the atmosphere and investigation of the upper atmosphere, ionosphere, electrical activity, magnetosphere, and escape rate. It has been proposed to fly together with Venera-D an inflatable aircraft designed by Northrop Grumman, called Venus Atmospheric Maneuverable Platform (VAMP).
The High Altitude Venus Operational Concept (HAVOC) is a NASA concept for a crewed exploration of Venus. Rather than traditional landings, it would send crews into the upper atmosphere, using dirigibles. Other proposals from the late 2010s include VERITAS, Venus Origins Explorer, VISAGE, and VICI. In June 2018, NASA also awarded a contract to Black Swift Technologies for a concept study of a Venus glider that would exploit wind shear for lift and speed.
In June 2021, NASA selected the DAVINCI+ mission to send an atmospheric probe to Venus in the late 2020s. DAVINCI+ will measure the composition of Venus' atmosphere to understand how it formed and evolved, as well as determine whether the planet ever had an ocean. The mission consists of a descent sphere that will plunge through the planet's thick atmosphere, measuring noble gases and other elements to understand Venus' climate change. This will be the first U.S.-led mission to Venus' atmosphere since 1978.
| Physical sciences | Solar System | Astronomy |
19879434 | https://en.wikipedia.org/wiki/Fang | Fang | A fang is a long, pointed tooth. In mammals, a fang is a modified maxillary tooth, used for biting and tearing flesh. In snakes, it is a specialized tooth that is associated with a venom gland (see snake venom). Spiders also have external fangs, which are part of the chelicerae.
Fangs are most common in carnivores or omnivores, but some herbivores, such as fruit bats, have them as well. They are generally used to hold or swiftly kill prey, such as in large cats. Omnivorous animals, such as bears, use their fangs when hunting fish or other prey, but they are not needed for consuming fruit. Some apes also have fangs, which they use for threats and fighting. However, the relatively short canines of humans are not considered to be fangs.
Fangs in religion, mythology and legend
Certain mythological and legendary creatures such as dragons, gargoyles, demons and yakshas are commonly depicted with prominent fangs. The fangs of vampires are one of their defining characteristics.
The iconographic representation of some Hindu deities includes fangs, to symbolize the ability to hunt and kill. Two examples are the fierce warrior goddess Chamunda and the god of death Yama in some iconographic representations. Fangs are also common among guardian figures such as Verupaksha in Buddhist art in China and East Asia, as well as Rangda in Balinese Hinduism.
| Biology and health sciences | Gastrointestinal tract | Biology |
19881804 | https://en.wikipedia.org/wiki/Fuze | Fuze | In military munitions, a fuze (sometimes fuse) is the part of the device that initiates its function. In some applications, such as torpedoes, a fuze may be identified by function as the exploder. The relative complexity of even the earliest fuze designs can be seen in cutaway diagrams.
A fuze is a device that detonates a munition's explosive material under specified conditions. In addition, a fuze will have safety and arming mechanisms that protect users from premature or accidental detonation. For example, an artillery fuze's battery is activated by the high acceleration of cannon launch, and the fuze must be spinning rapidly before it will function. "Complete bore safety" can be achieved with mechanical shutters that isolate the detonator from the main charge until the shell is fired.
A fuze may contain only the electronic or mechanical elements necessary to signal or actuate the detonator, but some fuzes contain a small amount of primary explosive to initiate the detonation. Fuzes for large explosive charges may include an explosive booster.
Etymology
Some professional publications about explosives and munitions distinguish the "fuse" and "fuze" spelling. The UK Ministry of Defence states (emphasis in original):
Fuse: Cord or tube for the transmission of flame or explosion usually consisting of cord or rope with gunpowder or high explosive spun into it. (The spelling fuze may also be met for this term, but fuse is the preferred spelling in this context.)
Fuze: A device with explosive components designed to initiate a main charge. (The spelling fuse may also be met for this term, but fuze is the preferred spelling in this context.)
Historically, it was spelled with either 's' or 'z', and both spellings can still be found. In the United States and some military forces, fuze is used to denote a sophisticated ignition device incorporating mechanical and/or electronic components (for example a proximity fuze for an artillery shell, magnetic/acoustic fuze on a sea mine, spring-loaded grenade fuze, pencil detonator or anti-handling device) as opposed to a simple burning fuse.
Munition types
The usage situation and the characteristics of the munition it is intended to activate affect the design of a fuze, e.g. its safety and actuation mechanisms.
Artillery
Artillery fuzes are tailored to function in the special circumstances of artillery projectiles. The relevant factors are the projectile's initial rapid acceleration, high velocity and usually rapid rotation, which affect both safety and arming requirements and options, and the target may be moving or stationary. Artillery fuzes may be initiated by a timer mechanism, impact or detection of proximity to the target, or a combination of these.
Grenades
Requirements for a grenade fuze are defined by the projectile's small size and slow delivery over a short distance. This necessitates manual arming before throwing as the grenade has insufficient initial acceleration for arming to be driven by "setback" and no rotation to drive arming by centrifugal force.
Aerial bombs
Aerial bombs can be detonated either by a fuze, which contains a small explosive charge to initiate the main charge, or by a "pistol", a firing pin in a case which strikes the detonator when triggered. The pistol may be considered a part of the mechanical fuze assembly.
Landmines
The main design consideration is that the munition the fuze is intended to actuate is stationary, and that the target itself moves to make contact with it.
Naval mines
Relevant design factors in naval mine fuzes are that the mine may be static or moving downward through the water, and the target is typically moving on or below the water surface, usually above the mine.
Activation mechanisms
Time
Time fuzes detonate after a set period of time by using one or more combinations of mechanical, electronic, pyrotechnic or even chemical timers. Depending on the technology used, the device may self-destruct (or render itself safe without detonation) some seconds, minutes, hours, days, or even months after being deployed.
Early artillery time fuzes were nothing more than a hole filled with gunpowder leading from the surface to the centre of the projectile. The flame from the burning of the gunpowder propellant ignited this "fuze" on firing, and it burned through to the centre during flight, finally igniting or exploding whatever the projectile was filled with.
By the 19th century devices more recognisable as modern artillery "fuzes" were being made of carefully selected wood and trimmed to burn for a predictable time after firing. These were still typically fired from smoothbore muzzle-loaders with a relatively large gap between the shell and barrel, and still relied on flame from the gunpowder propellant charge escaping past the shell on firing to ignite the wood fuze and hence initiate the timer.
In the mid-to-late 19th century, adjustable metal time fuzes, the forerunners of today's time fuzes, containing burning gunpowder as the delay mechanism, became common, in conjunction with the introduction of rifled artillery. Rifled guns introduced a tight fit between shell and barrel and hence could no longer rely on the flame from the propellant to initiate the timer. The new metal fuzes typically use the shock of firing ("setback") and/or the projectile's rotation to "arm" the fuze and initiate the timer, hence introducing a safety factor previously absent.
As late as World War I, some countries were still using hand-grenades with simple black match fuses much like those of modern fireworks: the infantryman lit the fuse before throwing the grenade and hoped the fuse burned for the several seconds intended. These were soon superseded in 1915 by the Mills bomb, the first modern hand grenade with a relatively safe and reliable time fuze initiated by pulling out a safety pin and releasing an arming handle on throwing.
Modern time fuzes often use an electronic delay system.
Impact
Impact, percussion or contact fuzes detonate when their forward motion rapidly decreases, typically on physically striking an object such as the target. The detonation may be instantaneous or deliberately delayed to occur a preset fraction of a second after penetration of the target. An instantaneous "Superquick" fuze will detonate instantly on the slightest physical contact with the target. A fuze with a graze action will also detonate on change of direction caused by a slight glancing blow on a physical obstruction such as the ground.
Impact fuzes in artillery usage may be mounted in the shell nose ("point detonating") or shell base ("base detonating").
Inertia
Inertial fuzes are triggered when the entity carrying them (for example, a torpedo, air-dropped bomb, sea mine, or booby trap) experiences a sudden (or gradual, depending on the design) acceleration, deceleration, or impact. In this way they are both similar to and different from impact fuzes. Whereas impact fuzes usually require physical contact with, or an impact against, a hard surface, inertial fuzes can trigger from any change of momentum. This allows them to be mounted deep inside the entity carrying them, rather than on its exterior. Designs vary. Some can be passively safe, ignoring all changes of momentum below a certain threshold, thereby functioning similarly to impact fuzes without the limitation of being externally mounted. Other designs can be passively dangerous, using other energy sources such as gravity or an electrical battery to greatly amplify slight changes in inertia over time. One easy way of visualizing a passively safe inertial fuze is to picture a marble in a bowl: the device is triggered when the marble rolls to the rim of the bowl. In contrast, a passively dangerous inertial fuze would be similar to a marble on a smooth, flat plate. The former uses gravity to actively suppress weak forces acting on the marble, whereas the latter uses gravity to actively amplify them. Passively dangerous inertial fuzes are commonly employed in anti-handling devices.
For comparatively low-velocity munitions such as torpedoes, inertial fuzes of the pendulum and swing-arm types have been used historically. An early example of a pendulum fuze can be seen in the design of the Brennan torpedo. Inertial fuzing is also used in the design of HESH munitions, since their concept precludes the use of contact fuzing.
Proximity
Proximity fuzes cause a missile warhead or other munition (e.g. air-dropped bomb or sea mine) to detonate when it comes within a certain pre-set distance of the target, or vice versa. Proximity fuzes utilize sensors incorporating one or more combinations of the following: radar, active sonar, passive acoustic, infrared, magnetic, photoelectric, seismic or even television cameras. These may take the form of an anti-handling device designed specifically to kill or severely injure anyone who tampers with the munition in some way e.g. lifting or tilting it. Regardless of the sensor used, the pre-set triggering distance is calculated such that the explosion will occur sufficiently close to the target that it is either destroyed or severely damaged.
Remote detonation
Remote detonators use wires or radio waves to remotely command the device to detonate.
Barometric
Barometric fuzes cause a bomb to detonate at a certain pre-set altitude above sea level by means of a radar, barometric altimeter or an infrared rangefinder.
Combinations
A fuze assembly may include more than one fuze in series or parallel arrangements. The RPG-7 usually has an impact (PIBD) fuze in parallel with a 4.5 second time fuze, so detonation should occur on impact, but otherwise takes place after 4.5 seconds. Military weapons containing explosives have fuzing systems including a series time fuze to ensure that they do not initiate (explode) prematurely within a danger distance of the munition launch platform. In general, the munition has to travel a certain distance, wait for a period of time (via a clockwork, electronic or chemical delay mechanism), or have some form of arming pin or plug removed. Only when these processes have occurred will the arming process of the series time fuze be complete. Mines often have a parallel time fuze to detonate and destroy the mine after a pre-determined period to minimize casualties after the anticipated duration of hostilities. Detonation of modern naval mines may require simultaneous detection of a series arrangement of acoustic, magnetic, and/or pressure sensors to complicate mine-sweeping efforts.
Safety and arming mechanisms
The multiple safety/arming features in the M734 fuze used for mortars are representative of the sophistication of modern electronic fuzes.
Safety/arming mechanisms can be as simple as the spring-loaded safety levers on M67 or RGD-5 grenade fuzes, which will not initiate the explosive train so long as the pin is kept in the grenade, or the safety lever is held down on a pinless grenade. Alternatively, it can be as complex as the electronic timer-countdown on an influence sea mine, which gives the vessel laying it sufficient time to move out of the blast zone before the magnetic or acoustic sensors are fully activated.
In modern artillery shells, most fuzes incorporate several safety features to prevent a fuze arming before it leaves the gun barrel. These safety features may include arming on "setback" or by centrifugal force, and often both operating together. Set-back arming uses the inertia of the accelerating artillery shell to remove a safety feature as the projectile accelerates from rest to its in-flight speed. Rotational arming requires that the artillery shell reach a certain rpm before centrifugal forces cause a safety feature to disengage or move an arming mechanism to its armed position. Artillery shells are fired through a rifled barrel, which forces them to spin during flight.
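To make the dual-safety idea concrete, here is a hedged sketch, in Python, of an arming check that requires both environmental conditions described above; the threshold values are invented for illustration and do not describe any real fuze.

```python
# Illustrative sketch (not a real fuze specification): arming requires both
# launch "setback" acceleration and a minimum spin rate, the two independent
# safety features described above. Threshold values are invented.
SETBACK_THRESHOLD_G = 1000.0    # sustained launch acceleration, in g
SPIN_THRESHOLD_RPM = 5000.0     # rotation imparted by the rifled barrel

def is_armed(peak_acceleration_g: float, spin_rpm: float) -> bool:
    """Arm only if BOTH environmental conditions of a real firing are met,
    so that a drop or a fire cannot arm the fuze on its own."""
    return (peak_acceleration_g >= SETBACK_THRESHOLD_G
            and spin_rpm >= SPIN_THRESHOLD_RPM)

print(is_armed(12000.0, 15000.0))  # True: fired from a gun
print(is_armed(150.0, 0.0))        # False: dropped on the ground
```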
In other cases the bomb, mine or projectile has a fuze that prevents accidental initiation e.g. stopping the rotation of a small propeller (unless a lanyard pulls out a pin) so that the striker-pin cannot hit the detonator even if the weapon is dropped on the ground. These types of fuze operate with aircraft weapons, where the weapon may have to be jettisoned over friendly territory to allow a damaged aircraft to continue to fly. The crew can choose to jettison the weapons safe by dropping the devices with safety pins still attached, or drop them live by removing the safety pins as the weapons leave the aircraft.
Aerial bombs and depth charges can be nose and tail fuzed using different detonator/initiator characteristics so that the crew can choose which effect fuze will suit target conditions that may not have been known before the flight. The arming switch is set to one of safe, nose, or tail at the crew's choice.
Base fuzes are also used by artillery and tanks for shells of the 'squash head' type. Some types of armour piercing shells have also used base fuzes, as have nuclear artillery shells.
The most sophisticated fuze mechanisms of all are those fitted to nuclear weapons, and their safety/arming devices are correspondingly complex. In addition to PAL protection, the fuzing used in nuclear weapons features multiple, highly sophisticated environmental sensors, e.g. sensors requiring highly specific acceleration and deceleration profiles before the warhead can be fully armed. The intensity and duration of the acceleration/deceleration must match the environmental conditions which the bomb or missile warhead would actually experience when dropped or fired. Furthermore, these events must occur in the correct order. As an additional safety precaution, most modern nuclear weapons utilize a timed two-point detonation system, such that only a precisely timed firing of both detonators in sequence will produce the conditions needed for a fission reaction.
Note: some fuzes, e.g. those used in air-dropped bombs and landmines may contain anti-handling devices specifically designed to kill bomb disposal personnel. The technology to incorporate booby-trap mechanisms in fuzes has existed since at least 1940 e.g. the German ZUS40 anti-removal bomb fuze.
Reliability
A fuze must be designed to function appropriately considering the relative movement of the munition with respect to its target. The target may move past stationary munitions like land mines or naval mines; or the target may be approached by a rocket, torpedo, artillery shell, or air-dropped bomb. Timing of fuze function may be described as optimum if detonation occurs when target damage will be maximized, early if detonation occurs prior to optimum, late if detonation occurs past optimum, or dud if the munition fails to detonate. Any given batch of a specific design may be tested to determine the anticipated percentages of early, optimum, late, and dud function expected from that fuze installation.
Combination fuze design attempts to maximize optimum detonation while recognizing dangers of early fuze function (and potential dangers of late function for subsequent occupation of the target zone by friendly forces or for gravity return of anti-aircraft munitions used in defense of surface positions.) Series fuze combinations minimize early function by detonating at the latest activation of the individual components. Series combinations are useful for safety arming devices, but increase the percentage of late and dud munitions. Parallel fuze combinations minimize duds by detonating at the earliest activation of individual components, but increase the possibility of premature early function of the munition. Sophisticated military munition fuzes typically contain an arming device in series with a parallel arrangement of sensing fuzes for target destruction and a time fuze for self-destruction if no target is detected.
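The series/parallel timing logic described above can be made concrete with a small, purely illustrative sketch; the function names and example activation times are invented, and real fuzing systems are of course far more involved.

```python
# Illustrative sketch of series vs. parallel fuze combination logic.
# Activation times are in seconds; None marks a channel that never
# activates (a dud). These helpers model only the timing logic above.

def series(times):
    """Series: the chain functions only when ALL channels have activated,
    i.e. at the latest activation; any dud channel makes the chain a dud."""
    if any(t is None for t in times):
        return None
    return max(times)

def parallel(times):
    """Parallel: the combination functions at the EARLIEST activation;
    it is a dud only if every channel is a dud."""
    live = [t for t in times if t is not None]
    return min(live) if live else None

channels = [4.5, None, 2.0]   # e.g. a time fuze, a failed sensor, an impact fuze
print(series(channels))       # None -> series propagates the dud (fewer earlies)
print(parallel(channels))     # 2.0  -> parallel fires earliest (fewer duds)
```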
Gallery
| Technology | Explosive weapons | null |
1919367 | https://en.wikipedia.org/wiki/QCD%20matter | QCD matter | Quark matter or QCD matter (quantum chromodynamic) refers to any of a number of hypothetical phases of matter whose degrees of freedom include quarks and gluons, of which the prominent example is quark-gluon plasma. Several series of conferences in 2019, 2020, and 2021 were devoted to this topic.
Quarks are liberated into quark matter at extremely high temperatures and/or densities, and some of them are still only theoretical as they require conditions so extreme that they cannot be produced in any laboratory, especially not at equilibrium conditions. Under these extreme conditions, the familiar structure of matter, where the basic constituents are nuclei (consisting of nucleons which are bound states of quarks) and electrons, is disrupted. In quark matter it is more appropriate to treat the quarks themselves as the basic degrees of freedom.
In the standard model of particle physics, the strong force is described by the theory of QCD. At ordinary temperatures or densities this force just confines the quarks into composite particles (hadrons) of size around 10^−15 m = 1 femtometer = 1 fm (corresponding to the QCD energy scale ΛQCD ≈ 200 MeV) and its effects are not noticeable at longer distances.
However, when the temperature reaches the QCD energy scale (T of order 10^12 kelvins) or the density rises to the point where the average inter-quark separation is less than 1 fm (quark chemical potential μ around 400 MeV), the hadrons are melted into their constituent quarks, and the strong interaction becomes the dominant feature of the physics. Such phases are called quark matter or QCD matter.
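As a hedged order-of-magnitude check of the scales quoted above, the following snippet uses the standard natural-unit conversion constants (ħc ≈ 197.327 MeV·fm and k_B ≈ 8.617 × 10^−11 MeV/K, both CODATA values); it is illustrative only.

```python
# Order-of-magnitude check of the QCD scales quoted in the text.
HBAR_C_MEV_FM = 197.327     # hbar*c in MeV*fm (CODATA)
K_B_MEV_PER_K = 8.617e-11   # Boltzmann constant in MeV/K (CODATA)

# Lambda_QCD ~ 200 MeV corresponds to a length of roughly 1 fm:
print(HBAR_C_MEV_FM / 200.0)    # ~0.99 fm, the hadron size quoted above

# T ~ 10^12 K expressed in energy units:
print(K_B_MEV_PER_K * 1e12)     # ~86 MeV, the same order as the QCD scale
```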
The strength of the color force makes the properties of quark matter unlike gas or plasma, instead leading to a state of matter more reminiscent of a liquid. At high densities, quark matter is a Fermi liquid, but is predicted to exhibit color superconductivity at high densities and temperatures below 10^12 K.
Occurrence
Natural occurrence
According to the Big Bang theory, in the early universe at high temperatures, when the universe was only a few tens of microseconds old, matter took the form of a hot phase of quark matter called the quark–gluon plasma (QGP).
Compact stars (neutron stars). A neutron star is much cooler than 10^12 K, but gravitational collapse has compressed it to such high densities that it is reasonable to surmise that quark matter may exist in the core. Compact stars composed mostly or entirely of quark matter are called quark stars or strange stars.
QCD matter may exist within the collapsar of a gamma-ray burst, where temperatures as high as 6.7 × 10^13 K may be generated.
At this time no star with properties expected of these objects has been observed, although some evidence has been provided for quark matter in the cores of large neutron stars.
Strangelets. These are theoretically postulated (but as yet unobserved) lumps of strange matter comprising nearly equal amounts of up, down and strange quarks. Strangelets are supposed to be present in the galactic flux of high energy particles and should therefore theoretically be detectable in cosmic rays here on Earth, but no strangelet has been detected with certainty.
Cosmic ray impacts. Cosmic rays comprise many different particles, including highly accelerated atomic nuclei, particularly those of iron.
Laboratory experiments suggest that the inevitable interaction with heavy noble gas nuclei in the upper atmosphere would lead to quark–gluon plasma formation.
Quark matter with baryon number over about 300 may be more stable than nuclear matter. This form of baryonic matter could possibly form a continent of stability.
Laboratory experiments
Even though quark-gluon plasma can only occur under quite extreme conditions of temperature and/or pressure, it is being actively studied at particle colliders, such as the Large Hadron Collider LHC at CERN and the Relativistic Heavy Ion Collider RHIC at Brookhaven National Laboratory.
In these collisions, the plasma only occurs for a very short time before it spontaneously disintegrates. The plasma's physical characteristics are studied by detecting the debris emanating from the collision region with large particle detectors.
Heavy-ion collisions at very high energies can produce small short-lived regions of space whose energy density is comparable to that of the 20-microsecond-old universe. This has been achieved by colliding heavy nuclei such as lead nuclei at high speeds, and the first claim of quark–gluon plasma formation came from the SPS accelerator at CERN in February 2000.
This work has been continued at more powerful accelerators, such as RHIC in the US, and as of 2010 at the European LHC at CERN located in the border area of Switzerland and France. There is good evidence that the quark–gluon plasma has also been produced at RHIC.
Thermodynamics
The context for understanding the thermodynamics of quark matter is the standard model of particle physics, which contains six different flavors of quarks, as well as leptons like electrons and neutrinos. These interact via the strong interaction, electromagnetism, and also the weak interaction, which allows one flavor of quark to turn into another. Electromagnetic interactions occur between particles that carry electrical charge; strong interactions occur between particles that carry color charge.
The correct thermodynamic treatment of quark matter depends on the physical context. For large quantities that exist for long periods of time (the "thermodynamic limit"), we must take into account the fact that the only conserved charges in the standard model are quark number (equivalent to baryon number), electric charge, the eight color charges, and lepton number. Each of these can have an associated chemical potential. However, large volumes of matter must be electrically and color-neutral, which determines the electric and color charge chemical potentials. This leaves a three-dimensional phase space, parameterized by quark chemical potential, lepton chemical potential, and temperature.
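One conventional way to formalize this counting is through the grand potential; the sketch below collapses the eight color chemical potentials into a single schematic label for brevity, so it is a notational illustration rather than a full treatment.

```latex
\Omega = \Omega\!\left(T,\ \mu_B,\ \mu_L,\ \mu_Q,\ \mu_{\mathrm{color}}\right),
\qquad
n_Q = -\frac{\partial \Omega}{\partial \mu_Q} = 0,
\qquad
n_{\mathrm{color}} = -\frac{\partial \Omega}{\partial \mu_{\mathrm{color}}} = 0.
```

The two neutrality conditions fix μ_Q and μ_color, leaving the three-dimensional (T, μ_B, μ_L) phase space described in the text.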
In compact stars quark matter would occupy cubic kilometers and exist for millions of years, so the thermodynamic limit is appropriate. However, the neutrinos escape, violating lepton number, so the phase space for quark matter in compact stars only has two dimensions, temperature (T) and quark number chemical potential μ. A strangelet is not in the thermodynamic limit of large volume, so it is like an exotic nucleus: it may carry electric charge.
A heavy-ion collision is in neither the thermodynamic limit of large volumes nor long times. Putting aside questions of whether it is sufficiently equilibrated for thermodynamics to be applicable, there is certainly not enough time for weak interactions to occur, so flavor is conserved, and there are independent chemical potentials for all six quark flavors. The initial conditions (the impact parameter of the collision, the number of up and down quarks in the colliding nuclei, and the fact that they contain no quarks of other flavors) determine the chemical potentials.
Phase diagram
The phase diagram of quark matter is not well known, either experimentally or theoretically. A commonly conjectured form of the phase diagram is shown in the figure to the right. It is applicable to matter in a compact star, where the only relevant thermodynamic potentials are quark chemical potential μ and temperature T.
For guidance it also shows the typical values of μ and T in heavy-ion collisions and in the early universe. For readers who are not familiar with the concept of a chemical potential, it is helpful to think of μ as a measure of the imbalance between quarks and antiquarks in the system. Higher μ means a stronger bias favoring quarks over antiquarks. At low temperatures there are no antiquarks, and then higher μ generally means a higher density of quarks.
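For a free-gas sketch of why μ measures the quark–antiquark imbalance (interactions are neglected here, so this is only a schematic illustration), the Fermi–Dirac occupations are:

```latex
n_q(p) = \frac{1}{e^{(E_p - \mu)/T} + 1},
\qquad
n_{\bar{q}}(p) = \frac{1}{e^{(E_p + \mu)/T} + 1}.
```

A positive μ enhances the quark occupation and suppresses the antiquark occupation; as T → 0 the antiquark occupation vanishes, matching the statement that at low temperatures there are no antiquarks.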
Ordinary atomic matter as we know it is really a mixed phase, droplets of nuclear matter (nuclei) surrounded by vacuum, which exists at the low-temperature phase boundary between vacuum and nuclear matter, at μ = 310 MeV and T close to zero. If we increase the quark density (i.e. increase μ) keeping the temperature low, we move into a phase of more and more compressed nuclear matter. Following this path corresponds to burrowing more and more deeply into a neutron star.
Eventually, at an unknown critical value of μ, there is a transition to quark matter. At ultra-high densities we expect to find the color-flavor-locked (CFL) phase of color-superconducting quark matter. At intermediate densities we expect some other phases (labelled "non-CFL quark liquid" in the figure) whose nature is presently unknown. They might be other forms of color-superconducting quark matter, or something different.
Now, imagine starting at the bottom left corner of the phase diagram, in the vacuum where μ = T = 0. If we heat up the system without introducing any preference for quarks over antiquarks, this corresponds to moving vertically upwards along the T axis. At first, quarks are still confined and we create a gas of hadrons (pions, mostly). Then around T = 150 MeV there is a crossover to the quark gluon plasma: thermal fluctuations break up the pions, and we find a gas of quarks, antiquarks, and gluons, as well as lighter particles such as photons, electrons, positrons, etc. Following this path corresponds to travelling far back in time (so to say), to the state of the universe shortly after the big bang (where there was a very tiny preference for quarks over antiquarks).
The line that rises up from the nuclear/quark matter transition and then bends back towards the T axis, with its end marked by a star, is the conjectured boundary between confined and unconfined phases. Until recently it was also believed to be a boundary between phases where chiral symmetry is broken (low temperature and density) and phases where it is unbroken (high temperature and density). It is now known that the CFL phase exhibits chiral symmetry breaking, and other quark matter phases may also break chiral symmetry, so it is not clear whether this is really a chiral transition line. The line ends at the "chiral critical point", marked by a star in this figure, which is a special temperature and density at which striking physical phenomena, analogous to critical opalescence, are expected.
A complete description of the phase diagram requires a complete understanding of dense, strongly interacting hadronic matter and strongly interacting quark matter from some underlying theory, e.g. quantum chromodynamics (QCD). However, because such a description requires a proper understanding of QCD in its non-perturbative regime, which is still far from complete, any theoretical advance remains very challenging.
Theoretical challenges: calculation techniques
The phase structure of quark matter remains mostly conjectural because it is difficult to perform calculations predicting the properties of quark matter. The reason is that QCD, the theory describing the dominant interaction between quarks, is strongly coupled at the densities and temperatures of greatest physical interest, and hence it is very hard to obtain any predictions from it. Here are brief descriptions of some of the standard approaches.
Lattice gauge theory
The only first-principles calculational tool currently available is lattice QCD, i.e. brute-force computer calculations. Because of a technical obstacle known as the fermion sign problem, this method can only be used at low density and high temperature (μ < T), and it predicts that the crossover to the quark–gluon plasma will occur around T = 150 MeV. However, it cannot be used to investigate the interesting color-superconducting phase structure at high density and low temperature.
Weak coupling theory
Because QCD is asymptotically free, it becomes weakly coupled at unrealistically high densities, and diagrammatic methods can be used. Such methods show that the CFL phase occurs at very high density. At high temperatures, however, diagrammatic methods are still not under full control.
Models
To obtain a rough idea of what phases might occur, one can use a model that has some of the same properties as QCD, but is easier to manipulate. Many physicists use Nambu–Jona-Lasinio models, which contain no gluons, and replace the strong interaction with a four-fermion interaction. Mean-field methods are commonly used to analyse the phases. Another approach is the bag model, in which the effects of confinement are simulated by an additive energy density that penalizes unconfined quark matter.
Effective theories
Many physicists simply give up on a microscopic approach, and make informed guesses of the expected phases (perhaps based on NJL model results). For each phase, they then write down an effective theory for the low-energy excitations, in terms of a small number of parameters, and use it to make predictions that could allow those parameters to be fixed by experimental observations.
Other approaches
There are other methods that are sometimes used to shed light on QCD, but for various reasons have not yet yielded useful results in studying quark matter.
1/N expansion
Treat the number of colors N, which is actually 3, as a large number, and expand in powers of 1/N. It turns out that at high density the higher-order corrections are large, and the expansion gives misleading results.
Supersymmetry
Adding scalar quarks (squarks) and fermionic gluons (gluinos) to the theory makes it more tractable, but the thermodynamics of quark matter depends crucially on the fact that only fermions can carry quark number, and on the number of degrees of freedom in general.
Experimental challenges
Experimentally, it is hard to map the phase diagram of quark matter because it is difficult to reach sufficiently high temperatures and densities in the laboratory using collisions of relativistic heavy ions as experimental tools. However, these collisions will ultimately provide information about the crossover from hadronic matter to QGP. It has been suggested that observations of compact stars may also constrain the information about the high-density low-temperature region. Models of the cooling, spin-down, and precession of these stars offer information about the relevant properties of their interiors. As observations become more precise, physicists hope to learn more.
One of the natural subjects for future research is the search for the exact location of the chiral critical point. Some ambitious lattice QCD calculations may have found evidence for it, and future calculations will clarify the situation. Heavy-ion collisions might be able to measure its position experimentally, but this will require scanning across a range of values of μ and T.
Evidence
In 2020, evidence was provided that the cores of neutron stars with mass ~2M⊙ were likely composed of quark matter. The result was based on neutron-star tidal deformability during a neutron star merger as measured by gravitational-wave observatories, leading to an estimate of star radius, combined with calculations of the equation of state relating the pressure and energy density of the star's core. The evidence was strongly suggestive but did not conclusively prove the existence of quark matter.
| Physical sciences | Particle physics: General | Physics |
1921259 | https://en.wikipedia.org/wiki/Windcatcher | Windcatcher | A windcatcher, wind tower, or wind scoop () is a traditional architectural element used to create cross ventilation and passive cooling in buildings. Windcatchers come in various designs, depending on whether local prevailing winds are unidirectional, bidirectional, or multidirectional, on how they change with altitude, on the daily temperature cycle, on humidity, and on how much dust needs to be removed. Despite the name, windcatchers can also function without wind.
Neglected by modern architects in the latter half of the 20th century, the early 21st century saw them used again to increase ventilation and cut power demand for air-conditioning. Generally, the cost of construction for a windcatcher-ventilated building is less than that of a similar building with conventional heating, ventilation, and air conditioning (HVAC) systems. The maintenance costs are also lower. Unlike powered air-conditioning and fans, windcatchers are silent and continue to function when the electrical grid power fails (a particular concern in places where grid power is unreliable or expensive).
Windcatchers rely on local weather and microclimate conditions, and not all techniques will work everywhere; local factors must be taken into account in design. Windcatchers of varying designs are widely used in North Africa, West Asia, and India. The idea is simple and widespread: there is evidence that windcatchers have been in use for many millennia, and nothing clearly rules out their use in prehistory. The "place of invention" of windcatchers is thus intensely disputed; Egypt, Iran, and the United Arab Emirates all claim it.
Windcatchers vary dramatically in shape, including height, cross-sectional area, and internal sub-divisions and filters.
Windcatching has gained some ground in Western architecture, and there are several commercial products using the name windcatcher. Some modern windcatchers use sensor-controlled moving parts or even solar-powered fans to make semi-passive ventilation and semi-passive cooling systems.
Windscoops have long been used on ships, for example in the form of a dorade box. Windcatchers have also been used experimentally to cool outdoor areas in cities, with mixed results; traditional methods include narrow, walled spaces, parks and winding streets, which act as cold-air reservoirs, and takhtabush-like arrangements (see sections on night flushing and convection, below).
Location
The construction of a windcatcher depends on the prevailing wind direction at that specific location: if the wind tends to blow from only one side, it may have only one opening, and no internal partitions. In areas with more variable wind directions, there may also be radial internal walls, which divide the windtower into vertical sections. These sections are like parallel chimneys, but with openings to the side, pointing in multiple directions. More sections reduce the flow rate, but increase the efficiency at suboptimal wind angles. If the wind hits the opening square-on, it will go in, but if it hits it at a sufficiently oblique angle, it will tend to slip around the tower, instead.
Windcatchers in areas with stronger winds will have smaller total cross-sections, and areas with very hot wind may have many smaller shafts in order to cool the incoming air. Windtowers with square horizontal cross-sections are more efficient than round ones, as the sharp angles make the flow less laminar, encouraging flow separation; suitable shaping increases suction.
Taller windcatchers catch higher winds. Higher winds blow stronger and cooler (and in a different direction). Higher air is also usually less dusty.
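As a rough illustration of how wind strengthens with height, an empirical power-law wind profile is often used. The sketch below is a minimal Python example under assumed values (the reference speed, reference height, and shear exponent are all hypothetical); it is not taken from any windcatcher design guide.

# Empirical power-law wind profile: u(z) = u_ref * (z / z_ref) ** alpha.
# alpha is a terrain-dependent shear exponent, assumed here to be 0.25.
def wind_speed_at_height(u_ref, z_ref, z, alpha=0.25):
    """Estimate the wind speed at height z from a measurement u_ref taken at z_ref."""
    return u_ref * (z / z_ref) ** alpha

# Example: 3 m/s measured at 2 m suggests noticeably faster wind at tower height.
for z in (10.0, 30.0):
    print(f"{z:>4.0f} m: {wind_speed_at_height(3.0, 2.0, z):.2f} m/s")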
If the wind is dusty or polluted, or there are insect-borne illnesses such as malaria and dengue fever, then air filtering may be necessary. Some dust can be dumped at the bottom of the windcatcher as the air slows, and more can be filtered out by suitable plantings or insect mesh. Physical filters generally reduce throughflow, unless the flow is very gusty. It may also be possible to fully or partially close the windcatcher off.
The short, wide, right-triangle-prism windcatchers (malqafs) are usually bidirectional, set in symmetrical pairs, and are often used with a salsabil (evaporative cooling unit) and a roof-lantern vent. Wide malqafs are more often used in damper climates, where high-volume air flow matters more than evaporative cooling. In hotter climates they are narrower, and air is cooled on its way in. They are more commonly used in Africa. Badgirs, on the other hand, are multisided (usually 4-sided) and are typically tall towers (up to 34 meters) which can be closed in winter. They are more common in the Persian Gulf region and in areas with dust storms. Taller windcatchers also have a stronger stack effect.
Cooling methods
Night-flushing cools the house by increasing ventilation at night, when the outdoor air is cooler; windtowers can assist night flushing.
A windcatcher can also cool air by drawing it over cool objects. In arid climates, the daily temperature swings are often extreme, with desert temperatures often dipping below freezing at night. The thermal inertia of the soil evens out the daily and even annual temperature swings. Even the thermal inertia of thick masonry walls will keep a building warmer at night and cooler during the day. Windcatchers can thus cool by drawing air over night- or winter-cooled materials, which act as heat reservoirs.
Windcatchers that cool by drawing air over water use the water as a heat reservoir, but if the air is dry, they are also cooling the air with evaporative cooling. The heat in the air goes into evaporating some of the water, and will not be released until the water re-condenses. This is a very effective way of cooling dry air.
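The energy balance behind evaporative cooling can be sketched with a short calculation. The Python snippet below is only an illustrative estimate, not a design method: the latent heat and specific heat are standard round figures, and the amount of moisture picked up by the air is an assumed example value.

# How much evaporation cools an air stream: the heat that vaporizes the water
# is drawn from the air as sensible heat (illustrative values only).
L_V = 2.45e6   # J/kg, latent heat of vaporization of water near room temperature
C_P = 1005.0   # J/(kg K), specific heat of air at constant pressure

def evaporative_temperature_drop(water_evaporated_kg_per_kg_air):
    """Temperature drop (K) of air that evaporates the given mass of water per kg of dry air."""
    return L_V * water_evaporated_kg_per_kg_air / C_P

# Example: picking up 4 g of vapour per kg of dry air cools the stream by roughly 10 K.
print(f"{evaporative_temperature_drop(0.004):.1f} K")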
Simply moving the air also has a cooling effect. Humans cool themselves using evaporative cooling when they sweat. A draft disrupts the boundary layer of body-warmed and water-saturated air clinging to the skin, so a human will feel cooler in moving air than in stagnant air of the same temperature.
Airflow forces
The windcatcher can function in two ways: directing airflow using the pressure of wind blowing into the windcatcher, or directing airflow using buoyancy forces from temperature gradients (stack effect). The relative importance of these two forces has been debated. The importance of windpressure increases with increasing wind speed, and is generally more important than buoyancy under most conditions in which the windcatcher is working effectively.
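The relative size of the two driving forces can be illustrated with textbook expressions for the wind's dynamic pressure and the stack pressure. The sketch below uses assumed numbers (wind speed, tower height, and temperature difference are hypothetical) and is not a statement about any particular building.

# Compare the wind's dynamic pressure with the buoyancy (stack) pressure over a tower.
RHO = 1.2   # kg/m^3, approximate air density
G = 9.81    # m/s^2

def dynamic_pressure(wind_speed_m_s):
    """Dynamic pressure of the wind, 0.5 * rho * v^2, in pascals."""
    return 0.5 * RHO * wind_speed_m_s ** 2

def stack_pressure(height_m, t_inside_k, t_outside_k):
    """Stack-effect pressure difference over a warm column of the given height, in pascals."""
    return RHO * G * height_m * (t_inside_k - t_outside_k) / t_inside_k

# Example: a modest 4 m/s breeze already outweighs the stack pressure of a 10 m tower
# whose air is 8 K warmer than the outdoor air.
print(f"wind : {dynamic_pressure(4.0):.1f} Pa")
print(f"stack: {stack_pressure(10.0, 308.0, 300.0):.1f} Pa")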
Airflow speed is also important, especially for evaporative cooling: since evaporative cooling works only on dry air and itself humidifies the air, a steady supply of fresh dry air is needed. It is possible for a windtower-ventilated building to have very high flow rates; 30 air changes per hour were measured in one experiment. Uniform, stable flow with no stagnant corners is important. Turbulent flow should therefore be avoided; laminar flow is more effective at maintaining human comfort (for an extreme example, see Tesla valve).
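For context, an air-change rate simply relates a volumetric flow to a room volume; the short sketch below uses hypothetical numbers chosen only to show how a figure like 30 air changes per hour arises.

# Air changes per hour (ACH) from a volumetric flow rate and a room volume.
def air_changes_per_hour(flow_m3_per_s, volume_m3):
    """How many times per hour the room's air is replaced at the given flow rate."""
    return 3600.0 * flow_m3_per_s / volume_m3

# Example: 0.5 m^3/s flowing through a 60 m^3 room replaces its air 30 times per hour.
print(air_changes_per_hour(0.5, 60.0))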
Other elements are often used in combination with the windcatchers to cool and ventilate: courtyards, domes, walls, and fountains, for instance, as integral parts of an overall ventilation and heat-management strategy.
Wind pressure
If a windcatcher's open side faces the prevailing wind, it can "catch" it, and bring it down into the heart of the building. Suction from the lee side of a windtower is also an important driving force, usually somewhat more constant and less gusty than the pressure on the upwind side (see Venturi effect and Bernoulli's principle).
Routing the wind through the building cools the people in the building interior. The air flows through the house, and leaves from the other side, creating a through-draft; the rate of airflow itself can provide a cooling effect. Windcatchers have been employed in this manner for thousands of years.
The windtower essentially creates a pressure gradient to draw air through the building. Windtowers topped with horizontal airfoils have been built to enhance these pressure gradients. The shape of the traditional roof also creates suction as wind blows over it.
Convection
Buoyancy is usually not the main effect driving windcatcher air circulation during the day.
In a windless environment, a windcatcher can still function using the stack effect. The hot air, which is less dense, tends to travel upwards and escape out the top of the house via the windtower.
Heating of the windtower itself can heat the air inside (making it a solar chimney), so that it rises and pulls air out of the top of the house, creating a draft. This effect can be enhanced with a heat source at the bottom of the windtower (such as humans, ~80 Watts each), but this heats the house and makes it less comfortable. A more practical technique is to cool the air as it flows down and in, using heat reservoirs and/or evaporative cooling.
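A common first estimate of buoyancy-driven flow uses the standard stack-ventilation formula Q = Cd * A * sqrt(2 * g * H * dT / T_inside). The Python sketch below applies it with assumed values (opening area, tower height, temperatures, and discharge coefficient are all hypothetical), so it is only an order-of-magnitude illustration.

# Buoyancy-driven (stack) flow through a tower outlet, textbook formula with assumed inputs.
import math

G = 9.81  # m/s^2

def stack_flow(area_m2, height_m, t_in_k, t_out_k, discharge_coeff=0.6):
    """Volumetric flow (m^3/s) driven by a warm air column of the given height."""
    dt = max(t_in_k - t_out_k, 0.0)
    return discharge_coeff * area_m2 * math.sqrt(2.0 * G * height_m * dt / t_in_k)

# Example: a 0.5 m^2 outlet on an 8 m tower with indoor air 6 K warmer than outside.
print(f"{stack_flow(0.5, 8.0, 303.0, 297.0):.2f} m^3/s")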
A takhtabush is a space similar to the ancient Roman tablinum, opening both onto a heavily shaded courtyard and onto a rear garden court (the garden side being shaded with a lattice). It is designed to capture a cross-draft. The breeze is at least partly driven by convection (since one court will generally be warmer than the other), and may also be driven by wind pressure and evaporative cooling, so the garden and courtyard are used as windcatchers.
Buoyancy forces are used to cause night flushing.
Night flushing (colder air)
The diurnal temperature cycle means that the night air is colder than the daytime air; in arid climates, much colder. This creates appreciable buoyancy forces. Buildings may be designed to spontaneously increase ventilation at night.
Courtyards in hot climates fill with cold air at night. This cold air then flows from the courtyard into adjacent rooms. The cold night air will flow in easily, as it is more dense than the rising warm air it is displacing. But in the day, the courtyard walls and awning shade it, while the air outside is heated by the sun. The cool masonry will also chill the nearby air. The courtyard air will become stably stratified, the hot air floating on top of the cold air with little mixing. The fact that the openings are at the top will trap the cool air below, though it cannot cause the temperature to drop below the nightly minimum temperature. This mechanism also works in windtowers.
Subterranean cooling
A windcatcher can also cool air by bringing it into contact with cool thermal masses. These are often found underground.
Below approximately 6 m of depth, soil and groundwater stay at about the annual mean temperature; this is the depth used by many ground-source heat pumps (often loosely called "geothermal heat pumps" by laypeople). As noted above, the thermal inertia of the soil evens out the daily and even annual temperature swings, and even the thermal inertia of thick masonry walls will keep a building warmer at night and cooler during the day. In hot-arid climates, thick walls with high thermal mass (adobe, stone, brick) are common, though thinner walls with high resistance to heat transmission are sometimes used in modern construction. Windcatchers can thus cool by drawing air over night- or winter-cooled materials, which act as heat reservoirs.
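The claim that deep soil stays near the annual mean follows from the damping of a periodic temperature wave in a conducting half-space. The sketch below is a minimal Python illustration with an assumed soil thermal diffusivity and an assumed surface temperature swing; real soils vary.

# Damping of the annual temperature wave with depth in soil (assumed properties).
import math

ALPHA = 5e-7                          # m^2/s, assumed soil thermal diffusivity
OMEGA = 2 * math.pi / (365 * 86400)   # rad/s, angular frequency of the annual cycle

def damping_depth(alpha=ALPHA, omega=OMEGA):
    """Depth over which the temperature swing shrinks by a factor of e."""
    return math.sqrt(2 * alpha / omega)

def swing_at_depth(surface_swing_k, depth_m):
    """Amplitude of the annual temperature swing remaining at the given depth."""
    return surface_swing_k * math.exp(-depth_m / damping_depth())

# Example: a 15 K annual swing at the surface shrinks to about 1 K at 6 m depth.
print(f"{swing_at_depth(15.0, 6.0):.1f} K")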
Windcatchers are also often used to ventilate lower-level indoor spaces (e.g. shabestans), which maintain frigid temperatures in the middle of the day even without windcatchers. Ice houses are traditionally used to store water frozen overnight in desert areas, or over winter in temperate areas. They may use windcatchers to circulate air into an underground or semi-underground chamber, evaporatively cooling the ice so that it melts only slowly and stays fairly dry. At night, the windcatchers may even bring sub-freezing night air underground, helping to freeze ice.
Evaporative cooling
In dry climates, the evaporative cooling effect may be used by placing water at the air intake, such that the draft draws air over water and then into the house. For this reason, it is sometimes said that the fountain, in the architecture of hot, arid climates, is like the fireplace in the architecture of cold climates.
Windcatchers are used for evaporative cooling in combination with a qanat, or underground canal (which also makes use of the subterranean heat reservoir described above). In this method, the open side of the tower faces away from the direction of the prevailing wind (the tower's orientation may be adjusted by directional ports at the top). When only the leeward side is left open, air is drawn upwards using the Coandă effect. This pulls air into an intake on the other side of the building. The hot air brought down into the qanat tunnel is cooled by coming into contact with the water flow and the surrounding earth. The soil below ground level stays cool by virtue of being several meters below the surface. The insulation and heat capacity of the overlying earth maintains the same stable temperature day and night, and as nights in arid climates are quite cold, often below freezing, that stable temperature is quite cool. The air is also evaporatively cooled when some of the water in the qanat evaporates as the hot, dry surface air passes over it; the heat energy in the air is absorbed as energy of vaporization. The dry air is thus also humidified before entering the building. The cooled air is drawn up through the house and finally out the windcatcher, again by the Coandă effect. On the whole, the cool air flows through the building, decreasing the structure's overall temperature.
A salsabil is a type of fountain with a thin sheet of flowing water, shaped to maximize surface area and thus evaporative cooling. Windcatchers used with salsabils can maximize the flow of unsaturated air over the water surface and carry the cooled air to where it is needed in the building.
Wetted matting can also be hung inside the windcatcher to cool incoming air. This can reduce flow, especially in weak winds. However, it can also produce a downdraft of cool air in windless conditions. The evaporative cooling within a windtower causes the air in the tower to sink, driving circulation. This is called passive downdraught evaporative cooling (PDEC). It may also be generated using spray nozzles (which have a tendency to get blocked if the water is hard) or cold-water cooling coils (like hydronic underfloor heating in reverse).
Windcatchers and climate change
Windcatchers can be used for mitigation of climate change, as they can "reduce the building's energy consumption and carbon footprint", and for adaptation to climate change, because they facilitate cooling in a warmer climate. Windcatchers can keep the temperature inside a house substantially lower than the outdoor temperature.
A window windcatcher can reduce the total energy use of a building by 23.3%.
Regional use
Africa
Egypt
In Egypt windcatchers are known as malqaf (pl. malaqif). They are generally shaped as right triangular prisms with the vertical side left open and facing directly up or down wind (one of each per building). They work best if oriented within 10 degrees of the wind direction; larger angles allow the wind to escape. Windcatchers were used in traditional ancient Egyptian architecture, and only started to fall out of use in the mid-20th century. Their use is now being re-examined, as air conditioning accounts for 60% of Egypt's peak electrical power demand (and thus the need for 60% of its generating capacity).
Windcatchers in Egypt are often used in conjunction with other passive cooling elements.
Middle East and Asia
Windcatchers are a common feature across many Middle Eastern countries, influenced by the spread of Islamic culture.
Iran
In Iran, a windcatcher is called a bâdgir, from bâd "wind" + gir "catcher". The devices were used in Achaemenid architecture. They are used in the hot, dry areas of the Central Iranian Plateau, and in the hot, humid coastal regions.
Central Iran shows large diurnal temperature variation with an arid climate. Most buildings are constructed from thick ceramic with high insulation values. Towns centered on desert oases tend to be packed very closely together with high walls and ceilings, maximizing shade at ground level. The heat of direct sunlight is minimized with small windows that face away from the sun.
The windcatcher's effectiveness has led to its routine use as a refrigerating device in Iran. Many traditional water reservoirs (ab anbars), which are capable of storing water at near-freezing temperatures during summer months, are built with windcatchers. The evaporative cooling effect is strongest in the driest climates, such as on the Iranian plateau, leading to the ubiquitous use of windcatchers in drier areas such as Yazd, Kerman, Kashan, Sirjan, Nain, and Bam.
Windcatchers tend to have one, four, or eight openings. In the city of Yazd, all windcatchers are four- or eight-sided. The construction of a windcatcher depends on the direction of airflow at that specific location: if the wind tends to blow from only one side, it is built with only one downwind opening. This is the style most commonly seen in Meybod, 50 kilometers from Yazd: the windcatchers are short and have a single opening.
Windcatchers in Iran may be quite elaborate, due to their use as status symbols.
A small windcatcher is called a shish-khan in traditional Persian architecture. Shish-khans can still be seen on top of ab anbars in Qazvin and other northern cities in Iran. These seem to function more as ventilators than as the temperature regulators seen in the central deserts of Iran.
Australia
Council House 2 in Melbourne, Australia, has 3-story-tall "shower towers", made of cloth kept wet by a showerhead trickling at the top of each one. Evaporative cooling chills the air, which then descends into the building.
Europe
France
The Saint-Étienne Métropole's Zénith is a multi-purpose hall built in Auvergne-Rhône-Alpes (inland southern France). It incorporates a very large aluminium windcatcher, which is much lighter than the equivalent masonry windcatcher would be. The size of the windcatcher allows it to work in any wind direction; the cross-sectional area perpendicular to the wind flow remains large.
United Kingdom
The Bluewater Shopping Centre in Kent uses windcatcher towers. The Queen's Building of De Montfort University in Leicester uses stack-effect towers to ventilate.
Americas
A windcatcher has been used in the visitor center at Zion National Park, Utah, where it functions without the addition of mechanical devices in order to regulate temperature.
| Technology | Heating and cooling | null |
1922158 | https://en.wikipedia.org/wiki/Salix%20babylonica | Salix babylonica | Salix babylonica (Babylon willow or weeping willow; ) is a species of willow native to dry areas of northern China, Korea, Mongolia, Japan, and Siberia but cultivated for millennia elsewhere in Asia, being traded along the Silk Road to southwest Asia and Europe.
Description
Salix babylonica is a medium- to large-sized deciduous tree, growing up to tall. It grows rapidly, but has a short lifespan, between 40 and 75 years. The shoots are yellowish-brown, with small buds. The leaves are alternate and spirally arranged, narrow, light green, long and broad, with finely serrate margins and long acuminate tips; they turn gold-yellow in autumn. The flowers are arranged in catkins produced early in the spring; it is dioecious, with the male and female catkins on separate trees.
Taxonomy
Salix babylonica was described and named scientifically by Carl Linnaeus in 1736, who knew the species as the pendulous-branched ("weeping") variant then recently introduced into the Clifford garden at Hartekamp in the Netherlands.
Horticultural selections and related hybrids
Early Chinese cultivar selections include the original weeping willow, Salix babylonica 'Pendula', in which the branches and twigs are strongly pendulous; it was presumably spread along ancient trade routes. These distinctive trees were subsequently introduced into England from Aleppo in northern Syria in 1730, and rapidly became naturalised, growing well along rivers and in parks. These plants are all female, readily propagated vegetatively, and capable of hybridizing with various other kinds of willows, but they do not breed true from seed; the tree is very easily grown by vegetative propagation.
Two cultivated hybrids between pendulous Salix babylonica and other species of Salix willows also have pendulous branchlets, and are more commonly planted than S. babylonica itself:
Salix × pendulina, a hybrid with S. babylonica accepted as the female parent, but with the male parent unidentified, probably being either S. euxina or S. × fragilis, but perhaps S. pentandra. Of these possibilities, S. × fragilis is itself a hybrid, with S. alba and S. euxina as parental species.
Salix × sepulcralis is a hybrid between S. alba and S. babylonica.
Cultivars derived from either of these hybrids are generally better adapted than S. babylonica to the more humid climates of most heavily populated regions of Europe and North America.
Relation to Salix matsudana
A similar willow species also native to northern China, Salix matsudana (Chinese willow), is now included in Salix babylonica as a synonym by many botanists, including the Russian willow expert Alexey Skvortsov. The only reported difference between the two species is that S. matsudana has two nectaries in each female flower, whereas S. babylonica has only one; however, this character is variable in many willows (for example, crack willow, Salix × fragilis, can have either one or two), so even this difference may not be taxonomically significant.
A horticultural variant with twisted twigs and trunk, the corkscrew willow (S. matsudana var. tortuosa), is widely planted.
Cultivation
Salix babylonica, especially its pendulous-branched ("weeping") form, has been introduced into many other areas, including Europe and the southeastern United States, but beyond China, it has not generally been as successfully cultivated as some of its hybrid derivatives, being sensitive to late-spring frosts. In the more humid climates of much of Europe and eastern North America, it is susceptible to a canker disease, willow anthracnose (Marssonina salicicola), which makes infected trees very short-lived and unsightly.
Cultivars
Salix babylonica (Babylon willow) has many cultivars, including:
'Babylon' (synonym: 'Napoleon') is the most widely grown cultivar of S. babylonica, with its typical weeping branches.
'Crispa' (synonym: 'Annularis') is a mutant of 'Babylon', with spirally curled leaves.
Various cultivars of Salix matsudana (Chinese willow) are now often included within Salix babylonica, treated more broadly, including:
'Pendula' is one of the best weeping trees, with a silvery shine; it is hardier and more disease-resistant.
'Tortuosa' is an upright tree with twisted and contorted branches, marketed as corkscrew willow.
Yet other weeping willow cultivars are derived from interspecific Salix hybrids, including S. babylonica in their parentage. The most widely grown weeping willow cultivar is Salix × sepulcralis 'Chrysocoma', with bright yellowish branchlets.
Uses
Peking willow is a popular ornamental tree in northern China, and is also grown for wood production and shelterbelts there, being particularly important around the oases of the Gobi Desert, protecting agricultural land from desert winds.
Origin
The epithet babylonica in this Chinese species' scientific name (S. babylonica), as well as the related common names "Babylon willow" or "Babylon weeping willow", derive from a misunderstanding by Linnaeus that this willow was the tree described in the Bible in the opening of Psalm 137 (here in Latin and English translations):
From the Clementine Vulgate (Latin, 1592):
Super flumina Babylonis illic sedimus et flevimus, cum recordaremur Sion.
In salicibus in medio ejus suspendimus organa nostra....
Here, "salicibus" is the dative plural of the Latin noun salix, the willows, used by Linnaeus as the name for the willow genus Salix.
From the King James Version (English, 1611):
By the rivers of Babylon, there we sat down, yea, we wept, when we remembered Zion.
We hanged our harps upon the willows in the midst thereof.
From the Revised Standard Version (English, 1952):
By the waters of Babylon, there we sat down and wept, when we remembered Zion
On the willows there we hung up our lyres....
Despite these Biblical references to "willows", whether in Latin or English, the trees growing in Babylon along the Euphrates River in ancient Mesopotamia (modern Iraq) and named gharab in early Hebrew, are not willows (Salix) in either the modern or the classical sense, but the Euphrates poplar (Populus euphratica), with willow-like leaves on long, drooping shoots, in the related genus Populus. Both Populus and Salix are in the plant family Salicaceae, the willow family.
These Babylonian trees are correctly called poplars, not willows, in the New International Version of the Bible (English, 1978):
By the rivers of Babylon we sat and wept when we remembered Zion
There on the poplars we hung our harps.
Explanatory notes
| Biology and health sciences | Malpighiales | Plants |
1922671 | https://en.wikipedia.org/wiki/Cumulus%20mediocris%20cloud | Cumulus mediocris cloud | Cumulus mediocris is a low to middle level cloud with some vertical extent (Family D1) of the genus cumulus, larger in vertical development than Cumulus humilis. It also may exhibit small protuberances from the top and may show the cauliflower form characteristic of cumulus clouds. Cumulus mediocris clouds do not generally produce precipitation of more than very light intensity, but can further advance into clouds such as Cumulus congestus or Cumulonimbus, which do produce precipitation and severe storms.
Cumulus mediocris is also classified as a low cloud and is coded CL2 by the World Meteorological Organization.
Description
Cumulus mediocris is brilliantly white when sunlit, and is dark underneath. A single pattern-based variety, Cumulus radiatus, is sometimes seen when the individual clouds are arranged into parallel rows. The resulting formations are known as "cloud streets" and are aligned approximately parallel to the wind.
Cumulus mediocris may show precipitation-based supplementary features such as virga and praecipitatio. The pannus supplementary feature is sometimes seen with precipitating Cumulus mediocris, but in this case the CL7 reporting code normally used to identify pannus is usually superseded by CL2, owing to the additional presence of significant vertical development. Pileus (cap cloud), velum (apron), arcus (roll or shelf cloud), and tuba (vertical column) features are also occasionally seen with Cumulus mediocris. Cumulus mediocris may form as a result of a partial transformation of altocumulus or stratocumulus. This genus and species type may also be the result of a complete transformation of stratocumulus or stratus.
Forecasting
These clouds are common in the advance of a cold front or in unstable atmospheric conditions such as an area of low pressure. They can grow into larger cumulus congestus which could bring rain and winds. The presence of cumulus mediocris in the morning or early afternoon indicates significant instability in the atmosphere which will likely lead to thunderstorms later in the afternoon or evening.
Formation
Like any cumulus cloud, Cumulus mediocris forms via convection in thermal air columns. Pockets of air that are warmer than their surroundings (due to ground-surface irregularities or other factors) are less dense and therefore buoyant. As these air pockets rise, they cool, eventually reaching their dew point and condensing to form Cumulus humilis clouds. If the thermals are powerful enough, they continue to push air upwards and the Cumulus humilis clouds develop into Cumulus mediocris clouds.
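A rising thermal's condensation level can be estimated with a common rule of thumb: roughly 125 m of lift for each degree Celsius of spread between the surface temperature and the surface dew point. The Python sketch below uses assumed surface values and is only a back-of-the-envelope estimate, not a forecasting method.

# Rule-of-thumb cumulus cloud-base (lifting condensation level) estimate.
def cloud_base_height_m(surface_temp_c, surface_dewpoint_c):
    """Approximate cloud-base height: about 125 m per degree C of temperature/dew-point spread."""
    return 125.0 * (surface_temp_c - surface_dewpoint_c)

# Example: a 28 C surface temperature with an 18 C dew point puts the cloud base near 1250 m.
print(f"{cloud_base_height_m(28.0, 18.0):.0f} m")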
| Physical sciences | Clouds | Earth science |
1923966 | https://en.wikipedia.org/wiki/Carbonate%20rock | Carbonate rock | Carbonate rocks are a class of sedimentary rocks composed primarily of carbonate minerals. The two major types are limestone, which is composed of calcite or aragonite (different crystal forms of CaCO3), and dolomite rock (also known as dolostone), which is composed of dolomite (CaMg(CO3)2). They are usually classified on the basis of texture and grain size. Importantly, carbonate rocks can also exist as metamorphic and igneous rocks. When carbonate rocks are metamorphosed and recrystallized, marble is created. Rare igneous carbonate rocks exist as intrusive carbonatites and, more rarely still, as volcanic carbonate lava.
Carbonate rocks are also crucial components to understanding geologic history due to processes such as diagenesis in which carbonates undergo compositional changes based on kinetic effects. The correlation between this compositional change and temperature can be exploited to reconstruct past climate as is done in paleoclimatology. Carbonate rocks can also be used for understanding various other systems as described below.
Limestone
Limestone is the most common carbonate rock; it is a sedimentary rock made of calcium carbonate with two main polymorphs, calcite and aragonite. While the chemical composition of the two minerals is the same, their physical properties differ significantly because of their different crystalline forms. Calcite is the form most commonly found on the seafloor, while aragonite is found more often in biological organisms.
Calcite
Calcite can be either dissolved by groundwater or precipitated by groundwater, depending on several factors including the water temperature, pH, and dissolved ion concentrations. Calcite exhibits an unusual characteristic called retrograde solubility in which it becomes less soluble in water as the temperature increases. When conditions are right for precipitation, calcite forms mineral coatings that cement the existing rock grains together or it can fill fractures.
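Whether calcite precipitates or dissolves is often summarized by a saturation state, the ion concentration product divided by a solubility product. The Python sketch below is purely illustrative: the concentrations and the stoichiometric solubility constant are assumed, round, seawater-like numbers, not measurements.

# Illustrative calcite saturation state: Omega = [Ca2+][CO3 2-] / Ksp*.
# Omega > 1 favours precipitation; Omega < 1 favours dissolution.
def calcite_saturation(ca_mol_per_kg, co3_mol_per_kg, ksp_stoichiometric):
    """Ratio of the ion concentration product to the solubility product."""
    return (ca_mol_per_kg * co3_mol_per_kg) / ksp_stoichiometric

# Example with rough, assumed surface-seawater-like values:
omega = calcite_saturation(1.03e-2, 2.0e-4, 4.3e-7)
print(f"Omega = {omega:.1f}")   # well above 1, so precipitation is favoured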
Aragonite
Compared to calcite, aragonite is less stable and more soluble, and can thus be converted to calcite under certain conditions. In solution, magnesium ions promote aragonite growth because they inhibit calcite precipitation. This inhibition is often exploited in biology, where organisms precipitate calcium carbonate for structural features such as skeletons and shells.
Dolostone
The discovery of dolomite rock, or dolostone, was first published in 1791; it has since been found across the Earth's crust in rocks of many different ages. Because the rock is made of calcium, magnesium, and carbonate ions, its mineral crystal structure can be visualized as intermediate between those of calcite and magnesite. Owing to this composition, the dolomite mineral present in dolostone can be classified by its varying degree of calcium, and occasionally iron, content.
Calcian dolomite
Calcium-rich dolomite, or calcian dolomite, is dolomite which has more calcium than magnesium in its mineral form. It is the most common form of dolomite, both in nature and in laboratory synthesis. When formed in the oceans, this dolomite can be metastable. Its structure differs only minimally from that of ordinary dolomite, likely as a result of formation after initial crystal growth.
Ferroan dolomite / ankerite
Iron-rich dolomite, or ferroan dolomite, is dolomite which contains significant trace levels of iron. Because of the similar ionic radii of iron(II) and magnesium, iron(II) can easily substitute for magnesium to form ferroan dolomite; manganese can also substitute at this site. The iron-rich end of the series is defined as ankerite. The exact delineation between which minerals are considered ferroan dolomite and which are ankerite is unclear. Ankerite with the "pure" CaFe(CO3)2 chemical formula has yet to be found in nature.
Significance
Carbonate rocks are significant both for human understanding of Earth's atmospheric and geologic history and as a resource for modern uses such as concrete.
Limestone and concrete
Limestone is often used in concrete as a powder because of its low cost. During the production of the cement used in concrete, however, the breakdown of limestone releases carbon dioxide and contributes significantly to the greenhouse effect. There is a significant amount of research into the ideal quantity of calcium carbonate (derived from limestone) in concrete and into whether other compounds can provide the same economic and structural benefits.
Paleoclimatology from carbonate minerals
Many forms of paleoclimatology exist whereby carbonate rocks can be used to determine past climate. Corals and sediments are well-known proxies for these reconstructions. Corals are marine organisms with calcium carbonate skeletons which grow in ways specific to the oceanic conditions at the time of growth. Diagenesis refers to the process whereby sediments are converted to sedimentary rock; it includes biological activity, erosion, and other chemical reactions. Because the composition of coral skeletons correlates strongly with seawater temperature, they can be used as proxies for understanding past climate. Specifically, the ratio of strontium to calcium in the aragonite of a coral skeleton can be used, alongside other proxies such as oxygen isotopic ratios, to reconstruct climate variability during the coral's growth. This works because strontium sometimes substitutes for calcium in calcium carbonate, to a degree that depends on temperature.
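In practice, a coral Sr/Ca record is converted to temperature through a linear calibration fitted for a particular site and species. The Python sketch below shows the shape of such a conversion with assumed, typical-looking coefficients; the intercept and slope are illustrative only and should not be read as a published calibration.

# Illustrative coral Sr/Ca paleothermometer: invert a linear calibration
# Sr/Ca = intercept + slope * SST (Sr/Ca in mmol/mol, SST in degrees C).
def sst_from_sr_ca(sr_ca_mmol_per_mol, intercept=10.5, slope=-0.06):
    """Sea-surface temperature implied by a Sr/Ca ratio under an assumed linear calibration."""
    return (sr_ca_mmol_per_mol - intercept) / slope

# Example: a measured Sr/Ca of 8.9 mmol/mol implies a temperature near 27 C
# under these assumed coefficients.
print(f"{sst_from_sr_ca(8.9):.1f} C")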
Similar to the use of compositional changes in coral skeletons as proxies for climate conditions, compositional changes in marine sediments can be used for the same purpose (and more). Changes in the trace-metal ratios of the carbonate minerals found there can also be used to infer the characteristics of the parent carbonate rocks.
| Physical sciences | Sedimentary rocks | Earth science |
1924025 | https://en.wikipedia.org/wiki/Tench | Tench | The tench or doctor fish (Tinca tinca) is a fresh- and brackish-water fish of the order Cypriniformes found throughout Eurasia from Western Europe including Britain and Ireland east into Asia as far as the Ob and Yenisei Rivers. It is also found in Lake Baikal. It normally inhabits slow-moving freshwater habitats, particularly lakes and lowland rivers.
Taxonomy
The tench was first formally described as Cyprinus tinca by Carl Linnaeus in 1758 in the 10th edition of Systema Naturae, with its type locality given as "European lakes". In 1764 François Alexandre Pierre de Garsault proposed the new monospecific genus Tinca, with Cyprinus tinca as the type species by absolute tautonymy. The 5th edition of Fishes of the World classified Tinca in the subfamily Tincinae, alongside the genus Tanichthys, while other authorities classified both of these genera in the subfamily Leuciscinae with other Eurasian minnows; more recent phylogenetic studies, however, support placing the tench in its own family, Tincidae. The name Tincidae was first proposed in 1878 by David Starr Jordan.
Ecology
The tench is most often found in still waters with a clay or muddy substrate and abundant vegetation. This species is rare in clear waters across stony substrate, and is absent altogether from fast-flowing streams. It tolerates water with a low oxygen concentration, being found in waters where even the carp cannot survive.
Tench feed mostly at night with a preference for animals, such as chironomids, on the bottom of eutrophic waters and snails and pea clams in well-vegetated waters.
Breeding takes place in shallow water, usually among aquatic plants, where the sticky green eggs can be deposited. Spawning usually occurs in summer, and as many as 300,000 eggs may be produced. Growth is rapid, and fish may put on considerable weight within the first year.
Morphology
Tench have a stocky, carp-like shape and olive-green skin, darker above and almost golden below. The tail fin is square in shape. The other fins are distinctly rounded in shape. The mouth is rather narrow and provided at each corner with a very small barbel.
Maximum size is , though most specimens are much smaller. A record fish caught in 2001 in England had a weight of . The eyes are small and red-orange in colour. Females can reach weights of around , although is considered large. Males rarely reach over . Sexual dimorphism is strong, males can be recognised by having larger, more curved pelvic fins extending beyond the anus and noticeable muscles around the base of these fins generally absent in females. Males also possess a very thick and flattened outer ray to the ventral fins. Adult females may have a more convex ventral profile when compared with males.
The tench has very small scales, which are deeply embedded in a thick skin, making it as slippery as an eel. Folklore has it that this slime cured any sick fish that rubbed against it, and from this belief arose the name doctor fish.
Golden tench
An artificially bred variety of tench called the golden tench is a popular ornamental fish for ponds. This form varies in colour from pale gold through to dark red, and some fish have black or red spots on the flanks and fins. Though superficially similar to the goldfish, the golden tench has much smaller scales, which give it a rather different appearance.
Economic significance
Tench are edible, working well in recipes that would otherwise call for carp, but are not commonly consumed. They are shoaling fish that are popular quarries for coarse angling in rivers, lakes and canals. Tench, particularly golden tench, are also kept as ornamental fish in ponds as they are bottom feeders that help to keep the waterways clean and healthy.
Angling
Large tench may be found in gravel pits or deep, slow-moving waters with a clayey or silty bottom and copious aquatic vegetation. The best methods and baits for catching tench are float fishing and ledgering with a swim feeder, using maggots, sweetcorn, pellets, bread, and worms. Heavier fish are very strong fighters when caught on a rod.
| Biology and health sciences | Cypriniformes | Animals |
494553 | https://en.wikipedia.org/wiki/Fagus%20grandifolia | Fagus grandifolia | Fagus grandifolia, the American beech or North American beech, is the only species of beech native to North America. Its current range comprises the eastern United States, isolated pockets of Mexico and southeastern Canada. Prior to the glacial maximum of the Pleistocene epoch, the tree flourished over most of North America, reaching California.
Description
Fagus grandifolia is a large deciduous tree growing to tall, with smooth, silver-gray bark. The leaves are dark green, simple and sparsely-toothed with small teeth that terminate each vein, long (rarely ), with a short petiole. The winter twigs are distinctive among North American trees, being long and slender ( by ) with two rows of overlapping scales on the buds. Beech buds are distinctly thin and long, resembling cigars; this characteristic makes beech trees relatively easy to identify. The tree is monoecious, with flowers of both sexes on the same tree. The fruit is a small, sharply-angled nut, borne in pairs in a soft-spined, four-lobed husk. It has two means of reproduction: one is through the usual dispersal of seedlings, and the other is through root sprouts, which grow into new trees.
Taxonomy
Trees in the southern half of the range are sometimes distinguished as a variety, F. grandifolia var. caroliniana, but this is not considered distinct in the Flora of North America. The Mexican beech (F. grandifolia subsp. mexicana), native to the mountains of central Mexico, is closely related, and is treated as a subspecies of American beech, but some botanists classify it as a distinct species. The only Fagus species found in the Western Hemisphere (assuming the Mexican subspecies is treated as such), F. grandifolia is believed to have spanned the width of the North American continent all the way to the Pacific coast before the last ice age.
Two subspecies are generally recognized:
Etymology
The genus name Fagus is Latin for "beech", and the specific epithet grandifolia comes from grandis "large" and folium "leaf", in reference to the American beech's larger leaves when compared to the European beech.
Distribution and habitat
The American beech is native to eastern North America, from Nova Scotia west to southern Ontario in southeastern Canada, west to Wisconsin and south to eastern Texas and northern Florida in the United States, as well as the states of Hidalgo, Veracruz, Tamaulipas, Puebla, San Luis Potosí, and Tabasco in Mexico. Mature specimens are rare in lowland areas as early settlers quickly discovered that the presence of the tree indicated good farmland.
The American beech is a shade-tolerant species, commonly found in forests in the final stage of succession. Few trees in its natural range other than sugar maple match it for shade tolerance. Ecological succession is essentially the process of forests changing their composition through time; it is a pattern of events often observed on disturbed sites. Although sometimes found in pure stands, it is more often associated with sugar maple (forming the beech–maple climax community), yellow birch, and eastern hemlock, typically on moist, well-drained slopes and rich bottomlands. Near its southern limit, it often shares canopy dominance with southern magnolia. Although it has a reputation for slow growth (sometimes only 13 feet in 20 years), rich soil and ample moisture will greatly speed the process up. American beech favours a well-watered, but also well-drained spot and is intolerant of urban pollution, salt, and soil compaction. It also casts heavy shade and is an extremely thirsty tree with high moisture requirements compared to oaks, so it has a dense, shallow root system.
Ecology
The mast (crop of nuts) from American beech provides food for numerous species of animals. Among vertebrates alone, these include various birds including ruffed grouse and wild turkeys, raccoons, foxes, white-tailed deer, rabbits, squirrels, opossums, pheasants, black bears, and porcupines. Beech nuts were one of the primary foods of the now-extinct passenger pigeon; the clearing of beech and oak forests is pointed to as one of the major factors that may have contributed to the bird's extinction. Some Lepidoptera caterpillars feed on beeches. Deer occasionally browse on beech foliage, but it is not a preferred food.
Diseases and pests
Beech bark disease has become a major killer of beech trees in the Northeastern United States. This disease occurs when the European beech scale insect, Cryptococcus fagisuga, attacks the bark, creating a wound that is then infected by Neonectria ditissima or Neonectria faginata, two species of fungi. This causes a canker to develop and the tree is eventually killed.
Beech leaf disease is caused by the nematode Litylenchus crenatae mccannii. It was discovered in Ohio in 2012 and identified as far south as Virginia in 2022. Beech leaf disease causes severe damage to the American beech and also to the related European beech.
The beech leaf-miner weevil, a species native to Europe, has been identified in North America as a cause of defoliation of American beech trees. American beech trees have small gaps and crevices at the base of their trunks in which the pests overwinter before eventually making their way to the buds of the trees and laying eggs on the underside of the leaves. Once hatched, the larvae mine the leaves, causing destruction of the foliage.
Beech blight aphids colonize branches of the tree, but without serious harm to otherwise healthy trees. Below these colonies, deposits of sooty mold develop caused by the fungus Scorias spongiosa growing saprophytically on the honeydew the insects exude. This is also harmless to the trees.
Despite their high moisture needs, beeches succumb to flooding easily and their thin bark invites damage from animals, fire, and human activities. Late spring frosts can cause complete defoliation of the tree, although they typically recover by using reserve pools of sugar. The trunks of mature beeches often rot and develop cavities that are used by wildlife for habitation.
Uses
American beech is an important tree in forestry. The wood is hard and difficult to cut or split, although it is not exceptionally heavy, and it rots relatively easily. It is used for a wide variety of purposes, most notably bentwood furniture, as beech wood bends easily when steamed. It also makes high-quality, long-burning firewood.
Like European beech bark, the American beech bark is smooth and uniform, making it an attraction for people to carve names, dates, decorative symbols such as love hearts or gang identifiers, and other material into its surface. One such beech tree in Louisville, Kentucky, in what is now the southern part of Iroquois Park, bore the legend "D. Boone kill a Bar 1803." The beech finally fell over in 1916 during a storm; its age was estimated at around 325 years. Its trunk is now on display at the Filson Historical Society.
It is sometimes planted as an ornamental tree, but even within its native area, it is planted much less often than the European beech. Although American beech can handle hotter climates, its European cousin is faster-growing and more pollution-tolerant, in addition to being easier to propagate.
American beech does not produce significant quantities of nuts until the tree is about 40 years old. Large crops are produced by 60 years. The oldest documented tree is 246 years old. The fruit is a triangle-shaped shell containing 2–3 nuts inside, but many of them do not fill in, especially on solitary trees. Beech nuts are sweet and nutritious, can be eaten raw by wildlife and humans, or can be cooked. They can also be roasted and ground into a coffee substitute.
The leaves are edible when cooked. The inner bark can be dried and pulverized into bread flour as an emergency food.
In culture
Numerous place names in North America are named Beechwood.
In John Steinbeck's novel East of Eden, a character returns from the Civil War with a wooden leg he carved from beechwood.
| Biology and health sciences | Fagales | Plants |