Cell signaling
https://en.wikipedia.org/wiki/Cell%20signaling
In biology, cell signaling (cell signalling in British English) is the process by which a cell interacts with itself, other cells, and the environment. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes.
Typically, the signaling process involves three components: the signal, the receptor, and the effector.
In biology, signals are mostly chemical in nature, but can also be physical cues such as pressure, voltage, temperature, or light. Chemical signals are molecules with the ability to bind and activate a specific receptor. These molecules, also referred to as ligands, are chemically diverse, including ions (e.g. Na+, K+, Ca2+), lipids (e.g. steroids, prostaglandins), peptides (e.g. insulin, ACTH), carbohydrates, glycosylated proteins (proteoglycans), and nucleic acids. Peptide and lipid ligands are particularly important, as most hormones belong to these classes of chemicals. Peptides are usually polar, hydrophilic molecules. As such, they are unable to diffuse freely across the lipid bilayer of the plasma membrane, so their action is mediated by a receptor bound to the cell membrane. On the other hand, liposoluble chemicals such as steroid hormones can diffuse passively across the plasma membrane and interact with intracellular receptors.
Cell signaling can occur over short or long distances, and can be further classified as autocrine, intracrine, juxtacrine, paracrine, or endocrine. Autocrine signaling occurs when the chemical signal acts on the same cell that produced the signaling chemical. Intracrine signaling occurs when the chemical signal produced by a cell acts on receptors located in the cytoplasm or nucleus of the same cell. Juxtacrine signaling occurs between physically adjacent cells. Paracrine signaling occurs between nearby cells. Endocrine interaction occurs between distant cells, with the chemical signal usually carried by the blood.
Receptors are complex proteins or tightly bound multimers of proteins, located in the plasma membrane or within the interior of the cell, such as in the cytoplasm, organelles, and nucleus. Receptors have the ability to detect a signal either by binding to a specific chemical or by undergoing a conformational change when interacting with physical agents. It is the specificity of the chemical interaction between a given ligand and its receptor that confers the ability to trigger a specific cellular response. Receptors can be broadly classified into cell membrane receptors and intracellular receptors.
Cell membrane receptors can be further classified into ion channel linked receptors, G-Protein coupled receptors and enzyme linked receptors.
Ion channel receptors are large transmembrane proteins with a ligand-activated gate function. When these receptors are activated, they may allow or block passage of specific ions across the cell membrane. Most receptors activated by physical stimuli such as pressure or temperature belong to this category.
G-protein receptors are multimeric proteins embedded within the plasma membrane. These receptors have extracellular, trans-membrane and intracellular domains. The extracellular domain is responsible for the interaction with a specific ligand. The intracellular domain is responsible for the initiation of a cascade of chemical reactions which ultimately triggers the specific cellular function controlled by the receptor.
Enzyme-linked receptors are transmembrane proteins with an extracellular domain responsible for binding a specific ligand and an intracellular domain with enzymatic or catalytic activity. Upon activation the enzymatic portion is responsible for promoting specific intracellular chemical reactions.
Intracellular receptors have a different mechanism of action. They usually bind to lipid soluble ligands that diffuse passively through the plasma membrane such as steroid hormones. These ligands bind to specific cytoplasmic transporters that shuttle the hormone-transporter complex inside the nucleus where specific genes are activated and the synthesis of specific proteins is promoted.
The effector component of the signaling pathway begins with signal transduction. In this process, the signal, by interacting with the receptor, starts a series of molecular events within the cell leading to the final effect of the signaling process. Typically the final effect consists of the activation of an ion channel (ligand-gated ion channel) or the initiation of a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify or modulate a signal: activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial signal (the first messenger). The downstream effects of these signaling pathways may include additional enzymatic activities such as proteolytic cleavage, phosphorylation, methylation, and ubiquitination.
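To make the amplification arithmetic concrete, the following minimal Python sketch models a hypothetical second messenger cascade in which each active molecule at one stage activates many molecules at the next; the number of stages and the 100-fold gain per stage are illustrative assumptions, not measured values.

```python
# Illustrative sketch of signal amplification in a second messenger cascade.
# The three stages and the 100-fold gain per stage are hypothetical numbers.

def amplified_output(receptors_activated: int, stage_gains: list[int]) -> int:
    """Number of final effector molecules activated by the cascade."""
    signal = receptors_activated
    for gain in stage_gains:   # each cascade stage multiplies the signal
        signal *= gain
    return signal

# A few activated receptors can drive millions of downstream events:
print(amplified_output(10, [100, 100, 100]))  # 10 * 100^3 = 10,000,000
```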
Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transport, or even as a result of cell damage.
Each cell is programmed to respond to specific extracellular signal molecules; this responsiveness is the basis of development, tissue repair, immunity, and homeostasis. Errors in signaling interactions may cause diseases such as cancer, autoimmunity, and diabetes.
Taxonomic range
In many small organisms such as bacteria, quorum sensing enables individuals to begin an activity only when the population is sufficiently large. This signaling between cells was first observed in the marine bacterium Aliivibrio fischeri, which produces light when the population is dense enough. The mechanism involves the production and detection of a signaling molecule, and the regulation of gene transcription in response. Quorum sensing operates in both gram-positive and gram-negative bacteria, and both within and between species.
In slime molds, individual cells aggregate together to form fruiting bodies and eventually spores, under the influence of a chemical signal, known as an acrasin. The individuals move by chemotaxis, i.e. they are attracted by the chemical gradient. Some species use cyclic AMP as the signal; others such as Polysphondylium violaceum use a dipeptide known as glorin.
In plants and animals, signaling between cells occurs either through release into the extracellular space, divided in paracrine signaling (over short distances) and endocrine signaling (over long distances), or by direct contact, known as juxtacrine signaling such as notch signaling. Autocrine signaling is a special case of paracrine signaling where the secreting cell has the ability to respond to the secreted signaling molecule. Synaptic signaling is a special case of paracrine signaling (for chemical synapses) or juxtacrine signaling (for electrical synapses) between neurons and target cells.
Extracellular signal
Synthesis and release
Many cell signals are carried by molecules that are released by one cell and move to make contact with another cell. Signaling molecules can belong to several chemical classes: lipids, phospholipids, amino acids, monoamines, proteins, glycoproteins, or gases. Signaling molecules binding surface receptors are generally large and hydrophilic (e.g. TRH, vasopressin, acetylcholine), while those entering the cell are generally small and hydrophobic (e.g. glucocorticoids, thyroid hormones, cholecalciferol, retinoic acid); important exceptions to both are numerous, however, and the same molecule can act both via surface receptors and in an intracrine manner, to different effects. In animal cells, specialized cells release these hormones and send them through the circulatory system to other parts of the body. They then reach target cells, which can recognize and respond to the hormones and produce a result. This is also known as endocrine signaling. Plant growth regulators, or plant hormones, move through cells or diffuse through the air as a gas to reach their targets. Hydrogen sulfide is produced in small amounts by some cells of the human body and has a number of biological signaling functions. Only two other such gases are currently known to act as signaling molecules in the human body: nitric oxide and carbon monoxide.
Exocytosis
Exocytosis is the process by which a cell transports molecules such as neurotransmitters and proteins out of the cell. As an active transport mechanism, exocytosis requires the use of energy to transport material. Exocytosis and its counterpart, endocytosis (the process that brings substances into the cell), are used by all cells, because most chemical substances important to them are large polar molecules that cannot pass through the hydrophobic portion of the cell membrane by passive transport. Exocytosis releases large numbers of molecules at once; thus it is a form of bulk transport. Exocytosis occurs via secretory portals at the cell plasma membrane called porosomes. Porosomes are permanent cup-shaped lipoprotein structures at the cell plasma membrane, where secretory vesicles transiently dock and fuse to release intra-vesicular contents from the cell.
In exocytosis, membrane-bound secretory vesicles are carried to the cell membrane, where they dock and fuse at porosomes and their contents (i.e., water-soluble molecules) are secreted into the extracellular environment. This secretion is possible because the vesicle transiently fuses with the plasma membrane. In the context of neurotransmission, neurotransmitters are typically released from synaptic vesicles into the synaptic cleft via exocytosis; however, neurotransmitters can also be released via reverse transport through membrane transport proteins.
Forms of cell signaling
Autocrine
Autocrine signaling involves a cell secreting a hormone or chemical messenger (called the autocrine agent) that binds to autocrine receptors on that same cell, leading to changes in the cell itself. This can be contrasted with paracrine signaling, intracrine signaling, or classical endocrine signaling.
Intracrine
In intracrine signaling, the signaling chemicals are produced inside the cell and bind to cytosolic or nuclear receptors without being secreted from the cell. This retention of the signal within the producing cell is what sets intracrine signaling apart from other cell signaling mechanisms such as autocrine signaling. In both autocrine and intracrine signaling, the signal has an effect on the cell that produced it.
Juxtacrine
Juxtacrine signaling is a type of cell–cell or cell–extracellular matrix signaling in multicellular organisms that requires close contact. There are three types:
A membrane ligand (protein, oligosaccharide, lipid) and a membrane protein of two adjacent cells interact.
A communicating junction links the intracellular compartments of two adjacent cells, allowing transit of relatively small molecules.
An extracellular matrix glycoprotein and a membrane protein interact.
Additionally, in unicellular organisms such as bacteria, juxtacrine signaling means interactions by membrane contact. Juxtacrine signaling has been observed for some growth factors, cytokine and chemokine cellular signals, playing an important role in the immune response. Juxtacrine signaling via direct membrane contacts is also present between neuronal cell bodies and motile processes of microglia, both during development and in the adult brain.
Paracrine
In paracrine signaling, a cell produces a signal to induce changes in nearby cells, altering the behaviour of those cells. Signaling molecules known as paracrine factors diffuse over a relatively short distance (local action), as opposed to cell signaling by endocrine factors, hormones which travel considerably longer distances via the circulatory system; juxtacrine interactions; and autocrine signaling. Cells that produce paracrine factors secrete them into the immediate extracellular environment. Factors then travel to nearby cells in which the gradient of factor received determines the outcome. However, the exact distance that paracrine factors can travel is not certain.
Paracrine signals such as retinoic acid target only cells in the vicinity of the emitting cell. Neurotransmitters represent another example of a paracrine signal.
Some signaling molecules can function as both a hormone and a neurotransmitter. For example, epinephrine and norepinephrine can function as hormones when released from the adrenal gland and are transported to the heart by way of the blood stream. Norepinephrine can also be produced by neurons to function as a neurotransmitter within the brain. Estrogen can be released by the ovary and function as a hormone or act locally via paracrine or autocrine signaling.
Although paracrine signaling elicits a diverse array of responses in the induced cells, most paracrine factors utilize a relatively streamlined set of receptors and pathways. In fact, different organs in the body, even between different species, are known to utilize similar sets of paracrine factors in development. The highly conserved receptors and pathways can be organized into four major families based on similar structures: the fibroblast growth factor (FGF) family, the Hedgehog family, the Wnt family, and the TGF-β superfamily. Binding of a paracrine factor to its respective receptor initiates signal transduction cascades, eliciting different responses.
Endocrine
Endocrine signals are called hormones. Hormones are produced by endocrine cells and they travel through the blood to reach all parts of the body. Specificity of signaling can be controlled if only some cells can respond to a particular hormone. Endocrine signaling involves the release of hormones by internal glands of an organism directly into the circulatory system, regulating distant target organs. In vertebrates, the hypothalamus is the neural control center for all endocrine systems. In humans, the major endocrine glands are the thyroid gland and the adrenal glands. The study of the endocrine system and its disorders is known as endocrinology.
Receptors
Cells receive information from their neighbors through a class of proteins known as receptors. Receptors may bind with some molecules (ligands) or may interact with physical agents like light, temperature, and mechanical pressure. Reception occurs when the target cell (any cell with a receptor protein specific to the signal molecule) detects a signal, usually in the form of a small, water-soluble molecule, via binding to a receptor protein on the cell surface. Alternatively, once inside the cell, the signaling molecule can bind to intracellular receptors or other elements, or stimulate enzyme activity (e.g. gases), as in intracrine signaling.
Signaling molecules interact with a target cell as a ligand to cell surface receptors, and/or by entering into the cell through its membrane or endocytosis for intracrine signaling. This generally results in the activation of second messengers, leading to various physiological effects. In many mammals, early embryo cells exchange signals with cells of the uterus. In the human gastrointestinal tract, bacteria exchange signals with each other and with human epithelial and immune system cells. For the yeast Saccharomyces cerevisiae during mating, some cells send a peptide signal (mating factor pheromones) into their environment. The mating factor peptide may bind to a cell surface receptor on other yeast cells and induce them to prepare for mating.
Cell surface receptors
Cell surface receptors play an essential role in the biological systems of single- and multi-cellular organisms, and malfunction or damage to these proteins is associated with cancer, heart disease, and asthma. These transmembrane receptors are able to transmit information from outside the cell to the inside because they change conformation when a specific ligand binds to them. There are three major types: ion channel linked receptors, G protein-coupled receptors, and enzyme-linked receptors.
Ion channel linked receptors
Ion channel linked receptors are a group of transmembrane ion-channel proteins which open to allow ions such as Na+, K+, Ca2+, and/or Cl− to pass through the membrane in response to the binding of a chemical messenger (i.e. a ligand), such as a neurotransmitter.
When a presynaptic neuron is excited, it releases a neurotransmitter from vesicles into the synaptic cleft. The neurotransmitter then binds to receptors located on the postsynaptic neuron. If these receptors are ligand-gated ion channels, a resulting conformational change opens the ion channels, which leads to a flow of ions across the cell membrane. This, in turn, results in either a depolarization, for an excitatory receptor response, or a hyperpolarization, for an inhibitory response.
These receptor proteins are typically composed of at least two different domains: a transmembrane domain which includes the ion pore, and an extracellular domain which includes the ligand binding location (an allosteric binding site). This modularity has enabled a 'divide and conquer' approach to finding the structure of the proteins (crystallising each domain separately). The function of such receptors located at synapses is to convert the chemical signal of presynaptically released neurotransmitter directly and very quickly into a postsynaptic electrical signal. Many ligand-gated ion channels (LICs) are additionally modulated by allosteric ligands, channel blockers, ions, or the membrane potential. LICs are classified into three superfamilies which lack evolutionary relationship: cys-loop receptors, ionotropic glutamate receptors and ATP-gated channels.
G protein–coupled receptors
G protein-coupled receptors are a large group of evolutionarily related cell surface receptors that detect molecules outside the cell and activate cellular responses. Because they couple with G proteins and pass through the cell membrane seven times, they are also called seven-transmembrane receptors. The G protein acts as a "middle man", transferring the signal from its activated receptor to its target, and therefore indirectly regulates that target protein. Ligands can bind either to the extracellular N-terminus and loops (e.g. glutamate receptors) or to a binding site within the transmembrane helices (rhodopsin-like family). They are all activated by agonists, although spontaneous auto-activation of an empty receptor can also be observed.
G protein-coupled receptors are found only in eukaryotes, including yeast, choanoflagellates, and animals. The ligands that bind and activate these receptors include light-sensitive compounds, odors, pheromones, hormones, and neurotransmitters, and vary in size from small molecules to peptides to large proteins. G protein-coupled receptors are involved in many diseases.
There are two principal signal transduction pathways involving the G protein-coupled receptors: cAMP signal pathway and phosphatidylinositol signal pathway. When a ligand binds to the GPCR it causes a conformational change in the GPCR, which allows it to act as a guanine nucleotide exchange factor (GEF). The GPCR can then activate an associated G protein by exchanging the GDP bound to the G protein for a GTP. The G protein's α subunit, together with the bound GTP, can then dissociate from the β and γ subunits to further affect intracellular signaling proteins or target functional proteins directly depending on the α subunit type (Gαs, Gαi/o, Gαq/11, Gα12/13).
G protein-coupled receptors are an important drug target: approximately 34% of all Food and Drug Administration (FDA) approved drugs target 108 members of this family, with a global sales volume estimated at 180 billion US dollars. It is estimated that GPCRs are targets for about 50% of drugs currently on the market, mainly due to their involvement in signaling pathways related to many diseases, e.g. mental disorders, metabolic (including endocrinological) disorders, immunological disorders (including viral infections), cardiovascular and inflammatory diseases, disorders of the senses, and cancer. The long-known association between GPCRs and many endogenous and exogenous substances, resulting in e.g. analgesia, is another dynamically developing field of pharmaceutical research.
Enzyme-linked receptors
Enzyme-linked receptors (or catalytic receptors) are transmembrane receptors that, upon activation by an extracellular ligand, cause enzymatic activity on the intracellular side. Hence a catalytic receptor is an integral membrane protein possessing both catalytic and receptor functions.
They have two important domains, an extracellular ligand-binding domain and an intracellular domain with a catalytic function, connected by a single transmembrane helix. The signaling molecule binds to the receptor on the outside of the cell and causes a conformational change that activates the catalytic function on the intracellular side. Examples of the enzymatic activity include:
Receptor tyrosine kinase, as in fibroblast growth factor receptor. Most enzyme-linked receptors are of this type.
Serine/threonine-specific protein kinase, as in bone morphogenetic protein
Guanylate cyclase, as in atrial natriuretic factor receptor
Intracellular receptors
Intracellular receptors exist freely in the cytoplasm or nucleus, or can be bound to organelles or membranes. For example, the presence of nuclear and mitochondrial receptors is well documented. The binding of a ligand to the intracellular receptor typically induces a response in the cell. Intracellular receptors often have a level of specificity that allows them to initiate particular responses only when bound to a corresponding ligand. Intracellular receptors typically act on lipid-soluble molecules. The receptors bind to a group of DNA-binding proteins; upon ligand binding, the receptor-ligand complex translocates to the nucleus, where it can alter patterns of gene expression.
Steroid hormone receptor
Steroid hormone receptors are found in the nucleus, cytosol, and also on the plasma membrane of target cells. They are generally intracellular receptors (typically cytoplasmic or nuclear) and initiate signal transduction for steroid hormones which lead to changes in gene expression over a time period of hours to days. The best studied steroid hormone receptors are members of the nuclear receptor subfamily 3 (NR3) that include receptors for estrogen (group NR3A) and 3-ketosteroids (group NR3C). In addition to nuclear receptors, several G protein-coupled receptors and ion channels act as cell surface receptors for certain steroid hormones.
Mechanisms of receptor down-regulation
Receptor-mediated endocytosis is a common way of turning receptors "off". Endocytic down-regulation is regarded as a means of reducing receptor signaling. The process involves the binding of a ligand to the receptor, which triggers the formation of coated pits; the coated pits transform into coated vesicles and are transported to the endosome.
Receptor phosphorylation is another type of receptor down-regulation: such biochemical changes can reduce receptor affinity for a ligand.
Prolonged occupation of a receptor can reduce its sensitivity. This results in receptor adaptation, in which the receptor no longer responds to the signaling molecule. Many receptors have the ability to change in response to ligand concentration.
Signal transduction pathways
When binding to the signaling molecule, the receptor protein changes in some way and starts the process of transduction, which can occur in a single step or as a series of changes in a sequence of different molecules (called a signal transduction pathway). The molecules that compose these pathways are known as relay molecules. The multistep process of the transduction stage is often composed of the activation of proteins by addition or removal of phosphate groups, or even the release of other small molecules or ions that can act as messengers. The amplification of the signal is one of the benefits of this multistep sequence. Other benefits include more opportunities for regulation than simpler systems offer, and the fine-tuning of the response, in both unicellular and multicellular organisms.
In some cases, receptor activation caused by ligand binding to a receptor is directly coupled to the cell's response to the ligand. For example, the neurotransmitter GABA can activate a cell surface receptor that is part of an ion channel. GABA binding to a GABAA receptor on a neuron opens a chloride-selective ion channel that is part of the receptor. GABAA receptor activation allows negatively charged chloride ions to move into the neuron, which inhibits the ability of the neuron to produce action potentials. However, for many cell surface receptors, ligand-receptor interactions are not directly linked to the cell's response. The activated receptor must first interact with other proteins inside the cell before the ultimate physiological effect of the ligand on the cell's behavior is produced. Often, the behavior of a chain of several interacting cell proteins is altered following receptor activation. The entire set of cell changes induced by receptor activation is called a signal transduction mechanism or pathway.
A more complex signal transduction pathway is the MAPK/ERK pathway, which involves changes of protein–protein interactions inside the cell, induced by an external signal. Many growth factors bind to receptors at the cell surface and stimulate cells to progress through the cell cycle and divide. Several of these receptors are kinases that start to phosphorylate themselves and other proteins when binding to a ligand. This phosphorylation can generate a binding site for a different protein and thus induce protein–protein interaction. In this case, the ligand (called epidermal growth factor, or EGF) binds to the receptor (called EGFR). This activates the receptor to phosphorylate itself. The phosphorylated receptor binds to an adaptor protein (GRB2), which couples the signal to further downstream signaling processes. For example, one of the signal transduction pathways that are activated is called the mitogen-activated protein kinase (MAPK) pathway. The signal transduction component labeled as "MAPK" in the pathway was originally called "ERK," so the pathway is called the MAPK/ERK pathway. The MAPK protein is an enzyme, a protein kinase that can attach phosphate to target proteins such as the transcription factor MYC and, thus, alter gene transcription and, ultimately, cell cycle progression. Many cellular proteins are activated downstream of the growth factor receptors (such as EGFR) that initiate this signal transduction pathway.
Some signaling transduction pathways respond differently, depending on the amount of signaling received by the cell. For instance, the hedgehog protein activates different genes, depending on the amount of hedgehog protein present.
Complex multi-component signal transduction pathways provide opportunities for feedback, signal amplification, and interactions inside one cell between multiple signals and signaling pathways.
A specific cellular response is the result of the transduced signal in the final stage of cell signaling. This response can be essentially any cellular activity present in the body, from rearrangement of the cytoskeleton to catalysis by an enzyme. These three steps of cell signaling ensure that the right cells behave as instructed, at the right time, and in synchronization with other cells and their own functions within the organism. Ultimately, the end of a signaling pathway leads to the regulation of a cellular activity. This response can take place in the nucleus or in the cytoplasm of the cell. A majority of signaling pathways control protein synthesis by turning certain genes on and off in the nucleus.
In unicellular organisms such as bacteria, signaling can be used to 'activate' peers from a dormant state, enhance virulence, defend against bacteriophages, etc. In quorum sensing, which is also found in social insects, the multiplicity of individual signals can create a positive feedback loop, generating a coordinated response. In this context, the signaling molecules are called autoinducers. This signaling mechanism may have been involved in evolution from unicellular to multicellular organisms. Bacteria also use contact-dependent signaling, notably to limit their growth.
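As a toy illustration of this density-dependent positive feedback, the Python sketch below lets each cell secrete autoinducer at a basal rate and switch to a higher rate once the shared concentration crosses a threshold; all rates, the threshold, and the decay constant are hypothetical values chosen only to show the qualitative behavior.

```python
# Toy model of quorum-sensing positive feedback (all parameters hypothetical).
BASAL, INDUCED = 0.1, 2.0   # autoinducer secretion rates per cell per step
THRESHOLD = 5.0             # concentration that triggers induced secretion
DECAY = 0.05                # fraction of autoinducer lost per step

def simulate(cells: int, steps: int) -> float:
    """Return the shared autoinducer level after the given number of steps."""
    level = 0.0
    for _ in range(steps):
        rate = INDUCED if level >= THRESHOLD else BASAL
        level = level * (1 - DECAY) + cells * rate
    return level

# A small population never crosses the threshold; a dense one locks on.
print(simulate(1, 10))    # stays low: basal secretion decays away
print(simulate(100, 10))  # crosses the threshold and self-amplifies
```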
Signaling molecules that act between organisms, rather than between the cells of one organism, are often called pheromones. They can have such purposes as alerting against danger, indicating food supply, or assisting in reproduction.
Short-term cellular responses
Regulating gene activity
Notch signaling pathway
Notch is a cell surface protein that functions as a receptor. Animals have a small set of genes that code for signaling proteins that interact specifically with Notch receptors and stimulate a response in cells that express Notch on their surface. Molecules that activate (or, in some cases, inhibit) receptors can be classified as hormones, neurotransmitters, cytokines, and growth factors, in general called receptor ligands. Ligand-receptor interactions such as the Notch receptor interaction are known to be the main interactions responsible for cell signaling mechanisms and communication. Notch acts as a receptor for ligands that are expressed on adjacent cells. While some receptors are cell-surface proteins, others are found inside cells. For example, estrogen is a hydrophobic molecule that can pass through the lipid bilayer of the membranes. As part of the endocrine system, intracellular estrogen receptors from a variety of cell types can be activated by estrogen produced in the ovaries.
In the case of Notch-mediated signaling, the signal transduction mechanism can be relatively simple: activation of Notch causes the Notch protein to be altered by a protease, and part of the Notch protein is released from the cell surface membrane and takes part in gene regulation. Cell signaling research involves studying the spatial and temporal dynamics of both receptors and the components of signaling pathways that are activated by receptors in various cell types. Emerging methods for single-cell mass-spectrometry analysis promise to enable studying signal transduction with single-cell resolution.
In notch signaling, direct contact between cells allows for precise control of cell differentiation during embryonic development. In the worm Caenorhabditis elegans, two cells of the developing gonad each have an equal chance of terminally differentiating or becoming a uterine precursor cell that continues to divide. The choice of which cell continues to divide is controlled by competition of cell surface signals. One cell will happen to produce more of a cell surface protein that activates the Notch receptor on the adjacent cell. This activates a feedback loop or system that reduces Notch expression in the cell that will differentiate and that increases Notch on the surface of the cell that continues as a stem cell.
Body louse
https://en.wikipedia.org/wiki/Body%20louse
The body louse (Pediculus humanus humanus, also known as Pediculus humanus corporis) or the cootie is a hematophagic ectoparasitic louse that infests humans. It is one of three lice which infest humans, the other two being the head louse and the crab louse or pubic louse.
Body lice may lay eggs on the host hairs and clothing, but clothing is where the majority of eggs are usually secured.
Since body lice cannot jump or fly, they spread by direct contact with another person or more rarely by contact with clothing or bed sheets that are infested.
Body lice are disease vectors and can transmit pathogens that cause human diseases such as epidemic typhus, trench fever, and relapsing fever. In developed countries, infestations are only a problem in areas of poverty where there is poor body hygiene, crowded living conditions, and a lack of access to clean clothing. Outbreaks can also occur in situations where large groups of people are forced to live in unsanitary conditions. These types of outbreaks are seen globally in prisons, homeless populations, refugees of war, or when natural disasters occur and proper sanitation is not available.
Life cycle and morphology
Pediculus humanus humanus (the body louse) is indistinguishable in appearance from Pediculus humanus capitis (the head louse), and the two subspecies will interbreed under laboratory conditions. In their natural state, however, they occupy different habitats and do not usually meet. They can feed up to five times a day. Adults can live for about thirty days, but if they are separated from their host they will die within two days. If the conditions are favorable, the body louse can reproduce rapidly. After the final molt, female and male lice will mate immediately. A female louse can lay up to 200–300 eggs during her lifetime.
The life cycle of the body louse consists of three stages: egg, nymph, and adult.
Eggs (also called nits, see head louse nits) are attached to the clothes or hairs by the female louse, using a secretion of the accessory glands that holds the egg in place until it hatches, while the nits (empty egg shells) may remain for months on the clothing. They are oval and usually yellow to white in color; at optimal temperature and humidity, the new lice hatch from the egg within 6 to 9 days after being laid.
A nymph is an immature louse that hatches from the egg. Immediately after hatching it starts feeding on the host's blood and then returns to the clothing until the next blood-meal. The nymph will molt three times before the adult louse emerges. The nymph usually takes 9–12 days to develop into an adult louse.
The adult body louse is about 2.5–3.5 mm long, and like a nymph it has six legs. It is wingless and is tan to grayish-white in color.
The two P. humanus subspecies are morphologically nearly identical. Their heads are short, with two antennae split into five segments each; the thorax is compact; and the seven-segmented abdomen bears lateral paratergal plates.
Origins
The body louse diverged from the head louse around 170,000 years ago, establishing the latest date for the adoption of clothing by humans. Body lice were first described by Carl Linnaeus in the 10th edition of Systema Naturae. The human body louse had its genome sequenced in 2010, and at that time it had the smallest known insect genome.
The body louse belongs to the phylum Arthropoda, class Insecta, order Psocodea and family Pediculidae. There are roughly 5,000 described species of lice worldwide, with about 4,000 parasitizing birds and a further 800 parasitizing mammals. Lice on mammals descend from a common ancestor that lived on Afrotheria, which originally acquired lice via host-switching from an ancient avian host.
Signs and symptoms
Since an infestation can include thousands of lice, with each of them biting five times a day, the bites can cause strong itching, especially at the beginning of the infestation, that can result in skin excoriations and secondary infections. If an individual is exposed to a long-term infestation, they may experience apathy, lethargy and fatigue.
Treatment
In principle, body louse infestations can be controlled by periodically changing clothes and bedding. Thereafter, clothes, towels, and bedding should be washed in hot water and dried using a hot cycle. The itching can be treated with topical and systemic corticosteroids and antihistamines. In case of secondary infections, antibiotics can be used to control the bacterial infection. When regular changing of clothes and bedding is not possible, the infested items can be treated with insecticides.
Diseases caused
Unlike other species of lice, body lice can act as vectors of disease. The most important pathogens which are transmitted by them are Rickettsia prowazekii (causes epidemic typhus), Borrelia recurrentis (causes relapsing fever), and Bartonella quintana (causes trench fever).
Epidemic typhus can be treated with one dose of doxycycline, but if left untreated, the fatality rate is 30%. Relapsing fever can be treated with tetracycline; if left untreated, it has a fatality rate between 10 and 40%, depending on the severity of the disease. Trench fever can be treated with either doxycycline or gentamicin; if left untreated, the fatality rate is less than 1%.
Planthopper
https://en.wikipedia.org/wiki/Planthopper
A planthopper is any insect in the infraorder Fulgoromorpha, in the suborder Auchenorrhyncha, a group exceeding 12,500 described species worldwide. The name comes from their remarkable resemblance to leaves and other parts of the plants in their environment, and from the fact that they often "hop" for quick transportation in a similar way to grasshoppers. However, planthoppers generally walk very slowly. Distributed worldwide, all members of this group are plant-feeders, though few are considered pests. Fulgoromorphs are most reliably distinguished from the other Auchenorrhyncha by two features: the bifurcate (Y-shaped) anal vein in the forewing, and the thickened, three-segmented antennae, with a generally round or egg-shaped second segment (pedicel) that bears a fine filamentous arista.
Overview
Planthoppers are laterally flattened and hold their broad wings vertically, in a tent-like fashion, concealing the sides of the body and part of the legs. Nymphs of many planthoppers produce wax from special glands on the abdominal terga and other parts of the body. These are hydrophobic and help conceal the insects. Adult females of many families also produce wax which may be used to protect eggs.
Planthopper nymphs also possess a biological gear mechanism at the base of the hind legs, which keeps the legs in synchrony when the insects jump. The gears, not present in the adults, were known for decades before the recent description of their function.
Planthoppers are often vectors for plant diseases, especially phytoplasmas which live in the phloem of plants and can be transmitted by planthoppers when feeding.
A number of extinct planthopper taxa are known from the fossil record, such as the Lutetian-age Emiliana from the Green River Formation (Eocene) in Colorado.
Both planthopper adults and nymphs feed by sucking sap from plants; in so doing, the nymphs produce copious quantities of honeydew, on which sooty mould often grows. One species considered to be a pest is Haplaxius crudus, which is a vector for lethal yellowing, a palm disease that nearly killed off the Jamaican Tall coconut variety.
Classification
The infraorder contains two superfamilies, Fulgoroidea and Delphacoidea. As mentioned under Auchenorrhyncha, some authors use the name Archaeorrhyncha as a replacement for the Fulgoromorpha.
Superfamily Fulgoroidea
Acanaloniidae
Achilidae
Achilixiidae
Caliscelidae
Derbidae
Dictyopharidae
Eurybrachidae (= Eurybrachyidae)
Flatidae
Fulgoridae
Gengidae
Hypochthonellidae
Issidae (sometimes includes Caliscelidae)
Kinnaridae
Lophopidae
Meenoplidae
Nogodinidae
Ricaniidae
Tettigometridae
Tropiduchidae
Superfamily Delphacoidea
Cixiidae
Delphacidae
Extinct families include:
†Dorytocidae Emeljanov and Shcherbakov 2018, monotypic, Burmese amber, Cenomanian
†Fulgoridiidae Handlirsch 1939 Early-Upper Jurassic, Eurasia
†Jubisentidae Zhang et al. 2019 Burmese amber, Cenomanian
†Katlasidae Luo et al. 2020, monotypic, Burmese amber, Cenomanian
†Lalacidae Hamilton 1990 Crato Formation, Brazil; Lushangfen Formation and Yixian Formation, China; Aptian
†Mimarachnidae Shcherbakov 2007 Early Cretaceous to early Late Cretaceous, Eurasia
†Neazoniidae Szwedo 2007 Lebanese amber, Barremian; Charentese amber, France, Cenomanian
†Perforissidae Shcherbakov 2007 Early Cretaceous to early Late Cretaceous; Argentina, Lebanon, Mongolia, Myanmar, Russia, Spain, New Jersey
†Qiyangiricaniidae Szwedo et al. 2011 monotypic, Guanyintan Formation, China, Toarcian
†Weiwoboidae Lin et al. 2010 monotypic, Yunnan, China, Eocene
†Szeiiniidae Zhang et al. 2021 monotypic, Shaanxi, China, Late Triassic
†Yetkhatidae Song et al. 2019 Burmese amber, Cenomanian
Aluminium carbide
https://en.wikipedia.org/wiki/Aluminium%20carbide
Aluminium carbide, chemical formula Al4C3, is a carbide of aluminium. It has the appearance of pale yellow to brown crystals. It is stable up to 1400 °C. It decomposes in water with the production of methane.
Structure
Aluminium carbide has an unusual crystal structure that consists of alternating layers of Al2C and Al2C2. Each aluminium atom is coordinated to 4 carbon atoms to give a tetrahedral arrangement. Carbon atoms exist in 2 different binding environments; one is a deformed octahedron of 6 Al atoms at a distance of 217 pm. The other is a distorted trigonal bipyramidal structure of 4 Al atoms at 190–194 pm and a fifth Al atom at 221 pm.
Other carbides (IUPAC nomenclature: methides) also exhibit complex structures.
Reactions
Aluminium carbide hydrolyses with evolution of methane. The reaction proceeds at room temperature but is rapidly accelerated by heating.
Al4C3 + 12 H2O → 4 Al(OH)3 + 3 CH4
Similar reactions occur with other protic reagents:
Al4C3 + 12 HCl → 4 AlCl3 + 3 CH4
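As a rough worked example of the hydrolysis stoichiometry above (one mole of Al4C3 yielding three moles of methane), the following Python sketch estimates the methane volume released by a sample; the 10 g sample mass and the ideal-gas molar volume at 25 °C are illustrative assumptions, not figures from the text.

```python
# Worked stoichiometry for Al4C3 + 12 H2O -> 4 Al(OH)3 + 3 CH4.
# Standard molar masses; the 10 g sample is an arbitrary illustration.
M_AL, M_C = 26.982, 12.011            # g/mol
M_AL4C3 = 4 * M_AL + 3 * M_C          # ≈ 143.96 g/mol

sample_g = 10.0
mol_al4c3 = sample_g / M_AL4C3
mol_ch4 = 3 * mol_al4c3               # 3 mol CH4 per mol Al4C3

MOLAR_VOLUME = 24.465                 # L/mol, ideal gas at 25 °C and 1 atm
print(f"{mol_ch4 * MOLAR_VOLUME:.2f} L of methane")  # ≈ 5.10 L
```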
Reactive hot isostatic pressing ("hipping") of appropriate mixtures of Ti, Al4C3, and graphite at ≈40 MPa and 1300 °C yields predominantly single-phase samples of Ti2AlC0.5N0.5 after 15 hours, and predominantly single-phase samples of Ti2AlC (titanium aluminium carbide) after 30 hours.
Preparation
Aluminium carbide is prepared by direct reaction of aluminium and carbon in an electric arc furnace.
4 Al + 3 C → Al4C3
An alternative reaction begins with alumina, but it is less favorable because of generation of carbon monoxide.
2 Al2O3 + 9 C → Al4C3 + 6 CO
Silicon carbide also reacts with aluminium to yield Al4C3. This conversion limits the mechanical applications of SiC, because Al4C3 is more brittle than SiC.
4 Al + 3 SiC → Al4C3 + 3 Si
In aluminium-matrix composites reinforced with silicon carbide, the chemical reactions between silicon carbide and molten aluminium generate a layer of aluminium carbide on the silicon carbide particles, which decreases the strength of the material, although it increases the wettability of the SiC particles. This tendency can be decreased by coating the silicon carbide particles with a suitable oxide or nitride, preoxidation of the particles to form a silica coating, or using a layer of sacrificial metal.
An aluminium-aluminium carbide composite material can be made by mechanical alloying, by mixing aluminium powder with graphite particles.
Occurrence
Small amounts of aluminium carbide are a common impurity of technical calcium carbide. In electrolytic manufacturing of aluminium, aluminium carbide forms as a corrosion product of the graphite electrodes.
In metal matrix composites based on aluminium matrix reinforced with non-metal carbides (silicon carbide, boron carbide, etc.) or carbon fibres, aluminium carbide often forms as an unwanted product. In case of carbon fibre, it reacts with the aluminium matrix at temperatures above 500 °C; better wetting of the fibre and inhibition of chemical reaction can be achieved by coating it with e.g. titanium boride.
Applications
Aluminium carbide particles finely dispersed in aluminium matrix lower the tendency of the material to creep, especially in combination with silicon carbide particles.
Aluminium carbide can be used as an abrasive in high-speed cutting tools. It has approximately the same hardness as topaz.
Reflex hammer
https://en.wikipedia.org/wiki/Reflex%20hammer
A reflex hammer is a medical instrument used by practitioners to test deep tendon reflexes, the best known possibly being the patellar reflex. Testing for reflexes is an important part of the neurological physical examination in order to detect abnormalities in the central or peripheral nervous system.
Reflex hammers can also be used for chest percussion.
Models of reflex hammer
Prior to the development of specialized reflex hammers, hammers specific for percussion of the chest were used to elicit reflexes. However, this proved to be cumbersome, as the weight of the chest percussion hammer was insufficient to generate an adequate stimulus for a reflex.
Starting in the late 19th century, several models of specific reflex hammers were created:
The Taylor or tomahawk reflex hammer was designed by John Madison Taylor in 1888 and is the most well known reflex hammer in the USA. It consists of a triangular rubber component which is attached to a flat metallic handle. The traditional Taylor hammer is significantly lighter in weight when compared to the heavier European hammers.
The Queen Square reflex hammer was designed for use at the National Hospital for Nervous Diseases (now the National Hospital for Neurology and Neurosurgery) in Queen Square, London. It was originally made with a bamboo or cane handle of varying length, averaging 25 to 40 centimetres (10 to 16 inches), attached to a 5-centimetre (2-inch) metal disk with a plastic bumper. The Queen Square hammer is also now made with plastic molds, and often has a sharp tapered end to allow for testing of plantar reflexes, though this is no longer recommended due to tightened infection control. It is the reflex hammer of choice of UK neurologists.
The Babinski reflex hammer was designed by Joseph Babiński in 1912 and is similar to the Queen Square hammer, except that it has a metallic handle that is often detachable. Babinski hammers can also be telescoping, allowing for compact storage. Babinski's hammer was popularized in clinical use in America by the neurologist Abraham Rabiner, who was given the instrument as a peace offering by Babinski after the two brawled at a black tie affair in Vienna.
The Trömner reflex hammer was designed by Ernst Trömner. This model is shaped like a two-headed mallet. The larger mallet is used to elicit tendon stretch reflexes, and the smaller mallet is used to elicit percussion myotonia.
Other reflex hammer types include the Buck, Berliner and Stookey reflex hammers.
There are numerous models available from various commercial sources.
Method of use
The strength of a reflex is used to gauge central and peripheral nervous system disorders, with the former resulting in hyperreflexia, or exaggerated reflexes, and the latter resulting in hyporeflexia, or diminished reflexes. However, the strength of the stimulus used to elicit the reflex also affects its magnitude. Attempts have been made to determine the force required to elicit a reflex, but the results vary depending on the hammer used and are difficult to quantify.
The Taylor hammer is usually held at the end by the physician, and the entire device is swung in an arc-like motion onto the tendon in question. The Queen Square and Babinski hammers are usually held perpendicular to the tendon in question, and are passively swung with gravity assistance onto the tendon.
The Jendrassik maneuver, which entails interlocking of flexed fingers to distract a patient and prime the reflex response, can also be used to accentuate reflexes. In cases of hyperreflexia, the physician may place his finger on top of the tendon, and tap the finger with the hammer. Sometimes a reflex hammer may not be necessary to elicit hyperreflexia, with finger tapping over the tendon being sufficient as a stimulus.
Water tower
https://en.wikipedia.org/wiki/Water%20tower
A water tower is an elevated structure supporting a water tank constructed at a height sufficient to pressurize a distribution system for potable water, and to provide emergency storage for fire protection. Water towers often operate in conjunction with underground or surface service reservoirs, which store treated water close to where it will be used. Other types of water towers may only store raw (non-potable) water for fire protection or industrial purposes, and may not necessarily be connected to a public water supply.
Water towers are able to supply water even during power outages, because they rely on hydrostatic pressure produced by elevation of water (due to gravity) to push the water into domestic and industrial water distribution systems; however, they cannot supply the water for a long time without power, because a pump is typically required to refill the tower. A water tower also serves as a reservoir to help with water needs during peak usage times. The water level in the tower typically falls during the peak usage hours of the day, and then a pump fills it back up during the night. This process also keeps the water from freezing in cold weather, since the tower is constantly being drained and refilled.
History
Although the use of elevated water storage tanks has existed since ancient times in various forms, the modern use of water towers for pressurized public water systems developed during the mid-19th century, as steam-pumping became more common, and better pipes that could handle higher pressures were developed. In the United Kingdom, standpipes consisted of tall, exposed, N-shaped pipes, used for pressure relief and to provide a fixed elevation for steam-driven pumping engines which tended to produce a pulsing flow, while the pressurized water distribution system required constant pressure. Standpipes also provided a convenient fixed location to measure flow rates. Designers typically enclosed the riser pipes in decorative masonry or wooden structures. By the late 19th century, standpipes grew to include storage tanks to meet the ever-increasing demands of growing cities.
Many early water towers are now considered historically significant and have been included in various heritage listings around the world. Some are converted to apartments or exclusive penthouses. In certain areas, such as New York City in the United States, smaller water towers are constructed for individual buildings. In California and some other states, domestic water towers enclosed by siding (tankhouses) were once built (1850s–1930s) to supply individual homes; windmills pumped water from hand-dug wells up into the tank.
Water towers were used to supply water stops for steam locomotives on railroad lines. Early steam locomotives required water stops at frequent intervals.
Design and construction
A variety of materials can be used to construct a typical water tower; steel and reinforced or prestressed concrete are most often used (with wood, fiberglass, or brick also in use), incorporating an interior coating to protect the water from any effects of the lining material. The reservoir in the tower may be spherical, cylindrical, or ellipsoidal.
Pressurization occurs through the hydrostatic pressure of the elevated water: each metre of water column contributes about 9.8 kPa of pressure (roughly 0.43 psi per foot of elevation). About 30 m (100 ft) of elevation therefore produces roughly 300 kPa (43 psi), which is enough pressure to operate and provide for most domestic water pressure and distribution system requirements.
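These figures follow directly from the hydrostatic relation P = ρgh. The short Python sketch below evaluates it using standard physical constants; the 30 m tower height is an illustrative example value.

```python
# Hydrostatic pressure at the base of a water column: P = rho * g * h.
RHO_WATER = 1000.0   # kg/m^3, density of water
G = 9.81             # m/s^2, gravitational acceleration

def pressure_kpa(height_m: float) -> float:
    """Gauge pressure in kPa produced by a water column of given height."""
    return RHO_WATER * G * height_m / 1000.0

h = 30.0  # metres of elevation (example value)
print(f"{pressure_kpa(h):.0f} kPa ({pressure_kpa(h) * 0.145038:.1f} psi)")
# -> 294 kPa (42.7 psi)
```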
The height of the tower provides the pressure for the water supply system, and it may be supplemented with a pump. The volume of the reservoir and diameter of the piping provide and sustain flow rate. However, relying on a pump to provide pressure is expensive; to keep up with varying demand, the pump would have to be sized to meet peak demands. During periods of low demand, jockey pumps are used to meet these lower water flow requirements. The water tower reduces the need for electrical consumption of cycling pumps and thus the need for an expensive pump control system, as this system would have to be sized sufficiently to give the same pressure at high flow rates.
Very high volumes and flow rates are needed when fighting fires. With a water tower present, pumps can be sized for average demand, not peak demand; the water tower can provide water pressure during the day and pumps will refill the water tower when demands are lower.
Using wireless sensor networks to monitor water levels inside the tower allows municipalities to automatically monitor and control pumps without installing and maintaining expensive data cables.
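A minimal sketch of the hysteresis logic such a monitoring system might implement, starting the pump when the tank runs low and stopping it when the tank is nearly full; the setpoint values and sensor readings below are hypothetical, not taken from any real utility's configuration.

```python
# Hysteresis (deadband) pump control driven by telemetered tank levels.
LOW_SETPOINT = 0.40   # fraction of tank full at which the pump starts
HIGH_SETPOINT = 0.95  # fraction of tank full at which the pump stops

def update_pump(level: float, pump_on: bool) -> bool:
    """Return the new pump state given the current tank level (0.0-1.0)."""
    if level <= LOW_SETPOINT:
        return True
    if level >= HIGH_SETPOINT:
        return False
    return pump_on  # inside the deadband: keep the current state

# Example: a sequence of readings arriving from a wireless level sensor.
pump = False
for reading in [0.80, 0.55, 0.38, 0.60, 0.96]:
    pump = update_pump(reading, pump)
    print(f"level={reading:.2f} pump={'ON' if pump else 'OFF'}")
```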
Architecture
Three architectural approaches to incorporating these tanks into the design of a building can be seen on East 57th Street in New York City: a fully enclosed and ornately decorated brick structure; a simple unadorned roofless brick structure hiding most of the tank but revealing its top; and a simple utilitarian structure that makes no effort to hide the tanks or otherwise incorporate them into the design of the building.
The technology dates to at least the 19th century, and for a long time New York City required that all buildings higher than six stories be equipped with a rooftop water tower. Two companies in New York build water towers, both of which are family businesses in operation since the 19th century.
The original water tower builders were barrel makers who expanded their craft to meet a modern need as buildings in the city grew taller. Even today, no sealant is used to hold the water in. The wooden walls of the water tower are held together with steel cables or straps, but water leaks through the gaps when first filled. As the water saturates the wood, it swells, the gaps close, and the tank becomes impermeable. Rooftop water towers store water until it is needed in the building below. The upper portion of the water is skimmed off the top for everyday use, while the water in the bottom of the tower is held in reserve to fight fire. When the water drops below a certain level, a pressure switch, level switch, or float valve will activate a pump or open a public water line to refill the water tower.
Architects and builders have taken varied approaches to incorporating water towers into the design of their buildings. On many large commercial buildings, water towers are completely hidden behind an extension of the facade of the building. For cosmetic reasons, apartment buildings often enclose their tanks in rooftop structures, either simple unadorned rooftop boxes, or ornately decorated structures intended to enhance the visual appeal of the building. Many buildings, however, leave their water towers in plain view atop utilitarian framework structures.
Water towers are common in India, where the electricity supply is erratic in most places.
If the pumps fail (such as during a power outage), then water pressure will be lost, causing potential public health concerns. Many U.S. states require a "boil-water advisory" to be issued if water pressure drops below a specified minimum. This advisory presumes that the lower pressure might allow pathogens to enter the system.
Some have been converted to serve modern purposes, as for example, the Wieża Ciśnień (Wrocław water tower) in Wrocław, Poland which is today a restaurant complex. Others have been converted to residential use.
Historically, railroads that used steam locomotives required a means of replenishing the locomotive's tenders. Water towers were common along the railroad. The tenders were usually replenished by water cranes, which were fed by a water tower.
Some water towers are also used as observation towers, and some house restaurants, such as the Goldbergturm in Sindelfingen, Germany, or the second of the three Kuwait Towers in the State of Kuwait. It is also common to use water towers as the location of low-power transmission equipment in the UHF range, for instance for closed rural broadcasting service, amateur radio, or cellular telephone service.
In hilly regions, local topography can be substituted for structures to elevate the tanks. These tanks are often nothing more than concrete cisterns terraced into the sides of local hills or mountains, but function identically to the traditional water tower. The tops of these tanks can be landscaped or used as park space, if desired.
Spheres and spheroids
The Chicago Bridge and Iron Company has built many of the water spheres and spheroids found in the United States. The website World's Tallest Water Sphere draws a distinction between a water sphere and a water spheroid.
The Union Watersphere is a water tower topped with a sphere-shaped water tank in Union, New Jersey, and characterized as the World's Tallest Water Sphere.
A Star-Ledger article suggested that a water tower in Erwin, North Carolina, completed in early 2012, had become the World's Tallest Water Sphere. However, photographs of the Erwin water tower revealed the new tower to be a water spheroid.
The water tower in Braman, Oklahoma, built by the Kaw Nation and completed in 2010, is slightly taller than the Union Watersphere, but it too is a spheroid.
Another tower in Oklahoma, built in 1986 and billed as the "largest water tower in the country", is located in Edmond.
The Earthoid, a perfectly spherical tank located in Germantown, Maryland, takes its name from being painted to resemble a globe of the world.
The golf ball-shaped tank of the water tower at Gonzales, California is supported by three tubular legs.
The Watertoren (water towers) in Eindhoven, Netherlands, completed in 1970, consist of three spherical tanks mounted on three spires.
Decoration
Water towers can be surrounded by ornate coverings, including fancy brickwork or a large ivy-covered trellis, or they can be simply painted. Some city water towers have the name of the city painted in large letters on the roof, as a navigational aid to aviators and motorists. Sometimes the decoration can be humorous, as with water towers built side by side and labeled HOT and COLD. Cities in the United States possessing side-by-side water towers labeled HOT and COLD include Granger, Iowa; Canton, Kansas; Pratt, Kansas; and St. Clair, Missouri. Eveleth, Minnesota at one time had two such towers, but no longer does.
Many small towns in the United States use their water towers to advertise local tourism, their local high school sports teams, or other locally notable facts. A "mushroom" water tower was built in Örebro, Sweden and holds almost two million gallons of water.
Tallest
Alternatives
Alternatives to water towers are simple pumps mounted on top of the water pipes to increase the water pressure. This new approach is more straightforward, but also more subject to potential public health risks; if the pumps fail, then loss of water pressure may result in entry of contaminants into the water system. Most large water utilities do not use this approach, given the potential risks.
Examples
Australia
Bankstown Reservoir, Sydney
Austria
Wasserturm Amstetten
(Water tower with transmission antenna)
Belgium
Mechelen-Zuid Watertoren
Brazil
Nave Espacial de Varginha in Varginha
Canada
Guaranteed Pure Milk bottle in Montreal, Quebec
Croatia
Vukovar water tower in Vukovar.
Denmark
Svaneke water tower
Finland
Mustankallio water tower in Lahti
Germany
Lüneburg Water Tower
Heidelberg TV Tower (TV tower with water reservoir)
Mannheim Water Tower (built 1886–1889)
Kuwait
Kuwait Towers, which include two water reservoirs, and Kuwait Water Towers (mushroom towers) in Kuwait City
India
Tala tank in Kolkata
Italy
Ginosa Water Tower
Netherlands
Amsterdamsestraatweg Water Tower in Utrecht
Eindhoven Water Towers in Eindhoven
Poldertoren in Emmeloord
Water Tower Simpelveld in Simpelveld
Water Tower Hellevoetsluis in Hellevoetsluis
Poland
Wrocław Water Tower
Old Water Tower, Bydgoszcz
Romania
Fabric Water Tower
Iosefin Water Tower
Oltenița Water Tower
Turnu Măgurele Water Tower
Slovakia
Water Tower in Komárno
Water Tower in Trnava
Slovenia
Brežice Water Tower in Brežice
Sweden
Vanadislundens water reservoir (Stockholm)
United Kingdom
Cardiff Central Station Water Tower
Dock Tower in Grimsby
House in the Clouds in Thorpeness, Suffolk
Jumbo in Colchester, Essex
Norton Water Tower in Norton, Cheshire
Tilehurst Water Tower in Reading
Tower Park in Poole, Dorset
Cranhill, Garthamlock and Drumchapel in Glasgow, and Tannochside just outside the city
United States
Brooks Catsup Bottle Water Tower near Collinsville, Illinois
Chicago Water Tower in Chicago, Illinois
Florence Y'all Water Tower in Florence, Kentucky
Lawson Tower in Scituate, Massachusetts
Leaning Water Tower in Groom, Texas
North Point Water Tower in Milwaukee, Wisconsin
Peachoid next to I-85 on the edge of Gaffney, South Carolina
Show Place Arena water tower in Upper Marlboro, Maryland
Union Watersphere in Union Township, New Jersey
Volunteer Park Water Tower in Capitol Hill, Seattle, Washington
Warner Bros. Water Tower in Burbank, California (In the animated TV series Animaniacs, it was used to incarcerate the characters Yakko, Wakko, and Dot, as well as to serve as their home.)
Weehawken Water Tower in Weehawken, New Jersey
Ypsilanti Water Tower in Ypsilanti, Michigan (Winner of the Most Phallic Building contest in 2003)
Standpipe
A standpipe is a water tower which is cylindrical (or nearly cylindrical) throughout its whole height, rather than an elevated tank on supports with a narrower pipe leading to and from the ground.
There were originally over 400 standpipe water towers in the United States, but very few remain today, including:
Addison Standpipe, in Addison, Michigan
Belton Standpipe in Belton, South Carolina (also in Allendale and Walterboro)
Belton Standpipe in Belton, Texas
Bellevue Standpipe (actually a water tank, not a tower), in Boston, Massachusetts
Chicago Water Tower, in Chicago, Illinois
Cochituate standpipe, in Boston, Massachusetts
Craig, Nebraska standpipe
Eden Park Stand Pipe, in Cincinnati
Evansville Standpipe (a steel tower), in Evansville, Wisconsin
Fall River Waterworks, in Fall River, Massachusetts
Forbes Hill Standpipe, in Quincy, Massachusetts
Louisville Water Tower, in Louisville, Kentucky
North Point Water Tower, in Milwaukee, Wisconsin
Reading Standpipe (demolished in 1999 and replaced by a modern steel tower), in Reading, Massachusetts
Roxbury High Fort contains the Cochituate Standpipe
St. Louis, Missouri has three standpipe water towers which are on the National Register of Historic Places.
Bissell Tower (also known as the Red Tower)
Compton Hill Tower
Grand Avenue Water Tower
Thomas Hill Standpipe, in Bangor, Maine
Ypsilanti Water Tower, in Ypsilanti, Michigan
Bremen Water Tower, in Bremen, Indiana
Gallery
Likelihood function
A likelihood function (often simply called the likelihood) measures how well a statistical model explains observed data by calculating the probability of seeing that data under different parameter values of the model. It is constructed from the joint probability distribution of the random variable that (presumably) generated the observations. When evaluated on the actual data points, it becomes a function solely of the model parameters.
In maximum likelihood estimation, the argument that maximizes the likelihood function serves as a point estimate for the unknown parameter, while the Fisher information (often approximated by the likelihood's Hessian matrix at the maximum) gives an indication of the estimate's precision.
In contrast, in Bayesian statistics, the estimate of interest is the converse of the likelihood, the so-called posterior probability of the parameter given the observed data, which is calculated via Bayes' rule.
Definition
The likelihood function, parameterized by a (possibly multivariate) parameter θ, is usually defined differently for discrete and continuous probability distributions (a more general definition is discussed below). Given a probability density or mass function
x ↦ f(x ∣ θ),
where x is a realization of the random variable X, the likelihood function is
θ ↦ f(x ∣ θ),
often written
L(θ ∣ x).
In other words, when f(x ∣ θ) is viewed as a function of x with θ fixed, it is a probability density function, and when viewed as a function of θ with x fixed, it is a likelihood function. In the frequentist paradigm, the notation f(x ∣ θ) is often avoided and instead f(x; θ) or f(x, θ) are used to indicate that θ is regarded as a fixed unknown quantity rather than as a random variable being conditioned on.
The likelihood function does not specify the probability that θ is the truth, given the observed sample X = x. Such an interpretation is a common error, with potentially disastrous consequences (see prosecutor's fallacy).
Discrete probability distribution
Let X be a discrete random variable with probability mass function p depending on a parameter θ. Then the function
L(θ ∣ x) = p_θ(x) = P_θ(X = x),
considered as a function of θ, is the likelihood function, given the outcome x of the random variable X. Sometimes the probability of "the value x of X for the parameter value θ" is written as P(X = x ∣ θ) or P(X = x; θ). The likelihood is the probability that a particular outcome x is observed when the true value of the parameter is θ, equivalent to the probability mass on x; it is not a probability density over the parameter θ. The likelihood, L(θ ∣ x), should not be confused with P(θ ∣ x), which is the posterior probability of θ given the data x.
Example
Consider a simple statistical model of a coin flip: a single parameter p_H that expresses the "fairness" of the coin. The parameter is the probability that a coin lands heads up ("H") when tossed. p_H can take on any value within the range 0.0 to 1.0. For a perfectly fair coin, p_H = 0.5.
Imagine flipping a fair coin twice, and observing two heads in two tosses ("HH"). Assuming that each successive coin flip is i.i.d., then the probability of observing HH is
P(HH ∣ p_H = 0.5) = 0.5² = 0.25.
Equivalently, the likelihood of observing "HH" assuming p_H = 0.5 is
L(p_H = 0.5 ∣ HH) = 0.25.
This is not the same as saying that P(p_H = 0.5 ∣ HH) = 0.25, a conclusion which could only be reached via Bayes' theorem given knowledge about the marginal probabilities P(p_H = 0.5) and P(HH).
Now suppose that the coin is not a fair coin, but instead that p_H = 0.3. Then the probability of two heads on two flips is
P(HH ∣ p_H = 0.3) = 0.3² = 0.09.
Hence
L(p_H = 0.3 ∣ HH) = 0.09.
More generally, for each value of p_H, we can calculate the corresponding likelihood L(p_H ∣ HH) = p_H². The result of such calculations is displayed in Figure 1. The integral of p_H² over [0, 1] is 1/3; likelihoods need not integrate or sum to one over the parameter space.
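The whole example can be checked numerically. The following sketch (our own, with hypothetical function names) evaluates the likelihood curve L(p_H ∣ HH) = p_H² and confirms that it integrates to 1/3 rather than 1:

```python
# Numeric check of the coin-flip likelihood example.
import numpy as np

def likelihood_hh(p_heads):
    """Likelihood of observing 'HH' in two i.i.d. tosses: L(p | HH) = p**2."""
    return np.asarray(p_heads) ** 2

print(likelihood_hh(0.5))  # 0.25
print(likelihood_hh(0.3))  # 0.09 (up to floating-point rounding)

# Likelihoods need not integrate to 1 over the parameter space: the average of
# p**2 over a fine uniform grid on [0, 1] approximates the integral, 1/3.
p = np.linspace(0.0, 1.0, 100_001)
print(likelihood_hh(p).mean())  # ~0.33333
```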
Continuous probability distribution
Let X be a random variable following an absolutely continuous probability distribution with density function f (a function of x) which depends on a parameter θ. Then the function
L(θ ∣ x) = f_θ(x),
considered as a function of θ, is the likelihood function (of θ, given the outcome X = x). Again, L is not a probability density or mass function over θ, despite being a function of θ given the observation X = x.
Relationship between the likelihood and probability density functions
The use of the probability density in specifying the likelihood function above is justified as follows. Given an observation x_j, the likelihood for the interval [x_j, x_j + h], where h > 0 is a constant, is given by L(θ ∣ x ∈ [x_j, x_j + h]). Observe that
argmax_θ L(θ ∣ x ∈ [x_j, x_j + h]) = argmax_θ (1/h) L(θ ∣ x ∈ [x_j, x_j + h]),
since h is positive and constant. Because
(1/h) L(θ ∣ x ∈ [x_j, x_j + h]) = (1/h) Pr(x_j ≤ x ≤ x_j + h ∣ θ) = (1/h) ∫_{x_j}^{x_j + h} f(x ∣ θ) dx,
where f(x ∣ θ) is the probability density function, it follows that
argmax_θ L(θ ∣ x ∈ [x_j, x_j + h]) = argmax_θ (1/h) ∫_{x_j}^{x_j + h} f(x ∣ θ) dx.
The first fundamental theorem of calculus provides that
lim_{h → 0⁺} (1/h) ∫_{x_j}^{x_j + h} f(x ∣ θ) dx = f(x_j ∣ θ).
Then
argmax_θ L(θ ∣ x_j) = argmax_θ lim_{h → 0⁺} (1/h) ∫_{x_j}^{x_j + h} f(x ∣ θ) dx = argmax_θ f(x_j ∣ θ).
Therefore,
argmax_θ L(θ ∣ x_j) = argmax_θ f(x_j ∣ θ),
and so maximizing the probability density at x_j amounts to maximizing the likelihood of the specific observation x_j.
In general
In measure-theoretic probability theory, the density function is defined as the Radon–Nikodym derivative of the probability distribution relative to a common dominating measure. The likelihood function is this density interpreted as a function of the parameter, rather than the random variable. Thus, we can construct a likelihood function for any distribution, whether discrete, continuous, a mixture, or otherwise. (Likelihoods are comparable, e.g. for parameter estimation, only if they are Radon–Nikodym derivatives with respect to the same dominating measure.)
The above discussion of the likelihood for discrete random variables uses the counting measure, under which the probability density at any outcome equals the probability of that outcome.
Likelihoods for mixed continuous–discrete distributions
The above can be extended in a simple way to allow consideration of distributions which contain both discrete and continuous components. Suppose that the distribution consists of a number of discrete probability masses p_k(θ) and a density f(x ∣ θ), where the sum of all the p's added to the integral of f is always one. Assuming that it is possible to distinguish an observation corresponding to one of the discrete probability masses from one which corresponds to the density component, the likelihood function for an observation from the continuous component can be dealt with in the manner shown above, while for an observation from the discrete component the likelihood function is simply
L(θ ∣ x) = p_k(θ),
where k is the index of the discrete probability mass corresponding to observation x, because maximizing the probability mass (or probability) at x amounts to maximizing the likelihood of the specific observation.
The fact that the likelihood function can be defined in a way that includes contributions that are not commensurate (the density and the probability mass) arises from the way in which the likelihood function is defined up to a constant of proportionality, where this "constant" can change with the observation , but not with the parameter .
Regularity conditions
In the context of parameter estimation, the likelihood function is usually assumed to obey certain conditions, known as regularity conditions. These conditions are assumed in various proofs involving likelihood functions, and need to be verified in each particular application. For maximum likelihood estimation, the existence of a global maximum of the likelihood function is of the utmost importance. By the extreme value theorem, it suffices that the likelihood function is continuous on a compact parameter space for the maximum likelihood estimator to exist. While the continuity assumption is usually met, the compactness assumption about the parameter space is often not, as the bounds of the true parameter values might be unknown. In that case, concavity of the likelihood function plays a key role.
More specifically, if the likelihood function is twice continuously differentiable on the k-dimensional parameter space Θ, assumed to be an open connected subset of ℝᵏ, there exists a unique maximum if the matrix of second partials
H(θ) ≡ [ ∂²L / ∂θ_i ∂θ_j ], i, j = 1, …, k,
is negative definite for every θ ∈ Θ at which the gradient ∇L ≡ [ ∂L / ∂θ_i ] vanishes,
and if the likelihood function approaches a constant on the boundary of the parameter space ∂Θ, i.e.,
lim_{θ → ∂Θ} L(θ) = 0,
which may include the points at infinity if Θ is unbounded. Mäkeläinen and co-authors prove this result using Morse theory while informally appealing to a mountain pass property. Mascarenhas restates their proof using the mountain pass theorem.
In the proofs of consistency and asymptotic normality of the maximum likelihood estimator, additional assumptions are made about the probability densities that form the basis of a particular likelihood function. These conditions were first established by Chanda. In particular, for almost all x, and for all θ ∈ Θ, the derivatives
∂ log f / ∂θ_r , ∂² log f / ∂θ_r ∂θ_s , ∂³ log f / ∂θ_r ∂θ_s ∂θ_t
exist for all r, s, t = 1, …, k in order to ensure the existence of a Taylor expansion. Second, for almost all x and for every θ ∈ Θ it must be that
| ∂f / ∂θ_r | < F_r(x) , | ∂²f / ∂θ_r ∂θ_s | < F_rs(x) , | ∂³f / ∂θ_r ∂θ_s ∂θ_t | < H_rst(x),
where ∫ H_rst(z) dz ≤ M < ∞. This boundedness of the derivatives is needed to allow for differentiation under the integral sign. And lastly, it is assumed that the information matrix,
I(θ) = ∫ (∂ log f / ∂θ_r)(∂ log f / ∂θ_s) f dz,
is positive definite and finite. This ensures that the score has a finite variance.
The above conditions are sufficient, but not necessary. That is, a model that does not meet these regularity conditions may or may not have a maximum likelihood estimator of the properties mentioned above. Further, in case of non-independently or non-identically distributed observations additional properties may need to be assumed.
In Bayesian statistics, almost identical regularity conditions are imposed on the likelihood function in order to prove asymptotic normality of the posterior probability, and therefore to justify a Laplace approximation of the posterior in large samples.
Likelihood ratio and relative likelihood
Likelihood ratio
A likelihood ratio is the ratio of any two specified likelihoods, frequently written as:
Λ(θ₁ : θ₂ ∣ x) = L(θ₁ ∣ x) / L(θ₂ ∣ x).
The likelihood ratio is central to likelihoodist statistics: the law of likelihood states that the degree to which data (considered as evidence) supports one parameter value versus another is measured by the likelihood ratio.
In frequentist inference, the likelihood ratio is the basis for a test statistic, the so-called likelihood-ratio test. By the Neyman–Pearson lemma, this is the most powerful test for comparing two simple hypotheses at a given significance level. Numerous other tests can be viewed as likelihood-ratio tests or approximations thereof. The asymptotic distribution of the log-likelihood ratio, considered as a test statistic, is given by Wilks' theorem.
The likelihood ratio is also of central importance in Bayesian inference, where it is known as the Bayes factor, and is used in Bayes' rule. Stated in terms of odds, Bayes' rule states that the posterior odds of two alternatives, A₁ and A₂, given an event B, is the prior odds times the likelihood ratio. As an equation:
O(A₁ : A₂ ∣ B) = O(A₁ : A₂) · Λ(A₁ : A₂ ∣ B).
The likelihood ratio is not directly used in AIC-based statistics. Instead, what is used is the relative likelihood of models (see below).
In evidence-based medicine, likelihood ratios are used in diagnostic testing to assess the value of performing a diagnostic test.
Relative likelihood function
Since the actual value of the likelihood function depends on the sample, it is often convenient to work with a standardized measure. Suppose that the maximum likelihood estimate for the parameter θ is θ̂. Relative plausibilities of other θ values may be found by comparing the likelihoods of those other values with the likelihood of θ̂. The relative likelihood of θ is defined to be
R(θ) = L(θ ∣ x) / L(θ̂ ∣ x).
Thus, the relative likelihood is the likelihood ratio (discussed above) with the fixed denominator L(θ̂ ∣ x). This corresponds to standardizing the likelihood to have a maximum of 1.
Likelihood region
A likelihood region is the set of all values of θ whose relative likelihood is greater than or equal to a given threshold. In terms of percentages, a p% likelihood region for θ is defined to be
{ θ : R(θ) ≥ p/100 }.
If θ is a single real parameter, a p% likelihood region will usually comprise an interval of real values. If the region does comprise an interval, then it is called a likelihood interval.
Likelihood intervals, and more generally likelihood regions, are used for interval estimation within likelihoodist statistics: they are similar to confidence intervals in frequentist statistics and credible intervals in Bayesian statistics. Likelihood intervals are interpreted directly in terms of relative likelihood, not in terms of coverage probability (frequentism) or posterior probability (Bayesianism).
Given a model, likelihood intervals can be compared to confidence intervals. If θ is a single real parameter, then under certain conditions, a 14.65% likelihood interval (about 1:7 likelihood) for θ will be the same as a 95% confidence interval (19/20 coverage probability). In a slightly different formulation suited to the use of log-likelihoods (see Wilks' theorem), the test statistic is twice the difference in log-likelihoods and the probability distribution of the test statistic is approximately a chi-squared distribution with degrees-of-freedom (df) equal to the difference in df's between the two models (therefore, the e⁻² likelihood interval is the same as the 0.954 confidence interval, assuming the difference in df's to be 1).
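The quoted correspondences can be reproduced directly from the chi-squared distribution. A short check (our own, assuming Wilks' approximation with one degree of freedom):

```python
# Relative-likelihood cutoffs versus confidence levels under Wilks' theorem.
import math
from scipy.stats import chi2

# A 95% confidence interval corresponds to the cutoff exp(-q/2), where q is
# the 0.95 quantile of chi-squared with 1 df:
print(math.exp(-chi2.ppf(0.95, df=1) / 2))  # ~0.1465, the 14.65% region

# Conversely, the e^-2 likelihood cutoff gives a test statistic of 2*2 = 4,
# whose chi-squared (1 df) coverage is:
print(chi2.cdf(4.0, df=1))  # ~0.954
```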
Likelihoods that eliminate nuisance parameters
In many cases, the likelihood is a function of more than one parameter but interest focuses on the estimation of only one, or at most a few of them, with the others being considered as nuisance parameters. Several alternative approaches have been developed to eliminate such nuisance parameters, so that a likelihood can be written as a function of only the parameter (or parameters) of interest: the main approaches are profile, conditional, and marginal likelihoods. These approaches are also useful when a high-dimensional likelihood surface needs to be reduced to one or two parameters of interest in order to allow a graph.
Profile likelihood
It is possible to reduce the dimensions by concentrating the likelihood function for a subset of parameters by expressing the nuisance parameters as functions of the parameters of interest and replacing them in the likelihood function. In general, for a likelihood function depending on the parameter vector θ that can be partitioned into θ = (θ₁ : θ₂), and where a correspondence θ̂₂ = θ̂₂(θ₁) can be determined explicitly, concentration reduces the computational burden of the original maximization problem.
For instance, in a linear regression with normally distributed errors, y = Xβ + u, the coefficient vector could be partitioned into β = (β₁ : β₂) (and consequently the design matrix into X = (X₁ : X₂)). Maximizing with respect to β₂ yields an optimal value function β₂(β₁). Using this result, the maximum likelihood estimator for β₁ can then be derived as
β̂₁ = ( X₁ᵀ (I − P₂) X₁ )⁻¹ X₁ᵀ (I − P₂) y,
where P₂ = X₂ (X₂ᵀ X₂)⁻¹ X₂ᵀ is the projection matrix of X₂. This result is known as the Frisch–Waugh–Lovell theorem.
Since graphically the procedure of concentration is equivalent to slicing the likelihood surface along the ridge of values of the nuisance parameter that maximizes the likelihood function, creating an isometric profile of the likelihood function for a given , the result of this procedure is also known as profile likelihood. In addition to being graphed, the profile likelihood can also be used to compute confidence intervals that often have better small-sample properties than those based on asymptotic standard errors calculated from the full likelihood.
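As a concrete sketch of profiling out a nuisance parameter (our own construction, not taken from the article): for a normal model with the mean μ of interest and the scale σ as nuisance, the inner maximization over σ has a closed form, leaving a curve in μ alone.

```python
# Profile log-likelihood for the mean of a normal model, profiling out sigma.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.5, size=50)

def profile_loglik_mu(mu, x):
    """log L(mu, sigma_hat(mu) | x), where sigma_hat(mu)^2 = mean((x - mu)^2)
    maximizes the likelihood over sigma for the fixed mu."""
    sigma2_hat = np.mean((x - mu) ** 2)
    return -0.5 * len(x) * (np.log(2 * np.pi * sigma2_hat) + 1)

mus = np.linspace(1.0, 3.0, 201)
values = [profile_loglik_mu(m, data) for m in mus]
print(mus[int(np.argmax(values))])  # close to data.mean(), as expected
```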
Conditional likelihood
Sometimes it is possible to find a sufficient statistic for the nuisance parameters, and conditioning on this statistic results in a likelihood which does not depend on the nuisance parameters.
One example occurs in 2×2 tables, where conditioning on all four marginal totals leads to a conditional likelihood based on the non-central hypergeometric distribution. This form of conditioning is also the basis for Fisher's exact test.
Marginal likelihood
Sometimes we can remove the nuisance parameters by considering a likelihood based on only part of the information in the data, for example by using the set of ranks rather than the numerical values. Another example occurs in linear mixed models, where considering a likelihood for the residuals only after fitting the fixed effects leads to residual maximum likelihood estimation of the variance components.
Partial likelihood
A partial likelihood is an adaptation of the full likelihood such that only a part of the parameters (the parameters of interest) occur in it. It is a key component of the proportional hazards model: using a restriction on the hazard function, the likelihood does not contain the shape of the hazard over time.
Products of likelihoods
The likelihood, given two or more independent events, is the product of the likelihoods of each of the individual events:
L(θ ∣ A ∩ B) = L(θ ∣ A) · L(θ ∣ B).
This follows from the definition of independence in probability: the probabilities of two independent events happening, given a model, is the product of the probabilities.
This is particularly important when the events are from independent and identically distributed random variables, such as independent observations or sampling with replacement. In such a situation, the likelihood function factors into a product of individual likelihood functions.
The empty product has value 1, which corresponds to the likelihood, given no event, being 1: before any data, the likelihood is always 1. This is similar to a uniform prior in Bayesian statistics, but in likelihoodist statistics this is not an improper prior because likelihoods are not integrated.
Log-likelihood
The log-likelihood function is the logarithm of the likelihood function, often denoted by a lowercase l or ℓ, to contrast with the uppercase L for the likelihood. Because logarithms are strictly increasing functions, maximizing the likelihood is equivalent to maximizing the log-likelihood. But for practical purposes it is more convenient to work with the log-likelihood function in maximum likelihood estimation, in particular since most common probability distributions—notably the exponential family—are only logarithmically concave, and concavity of the objective function plays a key role in the maximization.
Given the independence of each event, the overall log-likelihood of an intersection equals the sum of the log-likelihoods of the individual events. This is analogous to the fact that the overall log-probability is the sum of the log-probabilities of the individual events. In addition to the mathematical convenience of this, the adding process of log-likelihood has an intuitive interpretation, often expressed as "support" from the data. When the parameters are estimated using the log-likelihood for the maximum likelihood estimation, each data point is used by being added to the total log-likelihood. As the data can be viewed as evidence that supports the estimated parameters, this process can be interpreted as "support from independent evidence adds", and the log-likelihood is the "weight of evidence". Interpreting negative log-probability as information content or surprisal, the support (log-likelihood) of a model, given an event, is the negative of the surprisal of the event, given the model: a model is supported by an event to the extent that the event is unsurprising, given the model.
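The additivity is easy to demonstrate numerically; the snippet below (our own) compares the sum of log-densities with the log of the product of densities for three independent standard-normal observations:

```python
# Log-likelihoods of independent observations add.
import numpy as np
from scipy.stats import norm

x = np.array([0.3, -1.2, 0.8])
mu, sigma = 0.0, 1.0

sum_of_logs = norm.logpdf(x, loc=mu, scale=sigma).sum()
log_of_product = np.log(np.prod(norm.pdf(x, loc=mu, scale=sigma)))
print(np.isclose(sum_of_logs, log_of_product))  # True
```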
A logarithm of a likelihood ratio is equal to the difference of the log-likelihoods:
log( L(θ₁) / L(θ₂) ) = ℓ(θ₁) − ℓ(θ₂).
Just as the likelihood, given no event, is 1, the log-likelihood, given no event, is 0, which corresponds to the value of the empty sum: without any data, there is no support for any models.
Graph
The graph of the log-likelihood is called the support curve (in the univariate case).
In the multivariate case, the concept generalizes into a support surface over the parameter space.
It has a relation to, but is distinct from, the support of a distribution.
The term was coined by A. W. F. Edwards in the context of statistical hypothesis testing, i.e. whether or not the data "support" one hypothesis (or parameter value) being tested more than any other.
The log-likelihood function being plotted is used in the computation of the score (the gradient of the log-likelihood) and Fisher information (the curvature of the log-likelihood). Thus, the graph has a direct interpretation in the context of maximum likelihood estimation and likelihood-ratio tests.
Likelihood equations
If the log-likelihood function is smooth, its gradient with respect to the parameter, known as the score and written s(θ), exists and allows for the application of differential calculus. The basic way to maximize a differentiable function is to find the stationary points (the points where the derivative is zero); since the derivative of a sum is just the sum of the derivatives, but the derivative of a product requires the product rule, it is easier to compute the stationary points of the log-likelihood of independent events than of the likelihood of independent events.
The equations defined by the stationary point of the score function serve as estimating equations for the maximum likelihood estimator.
In that sense, the maximum likelihood estimator is implicitly defined by the value at 0 of the inverse function s⁻¹ : 𝔼ᵈ → Θ, where 𝔼ᵈ is the d-dimensional Euclidean space, and Θ is the parameter space. Using the inverse function theorem, it can be shown that s⁻¹ is well-defined in an open neighborhood about 0 with probability going to one, and that θ̂ = s⁻¹(0) is a consistent estimate of the true parameter. As a consequence there exists a sequence of estimators for which the score vanishes asymptotically almost surely and which converges to the true parameter. A similar result can be established using Rolle's theorem.
The second derivative evaluated at the maximum θ̂, known as the Fisher information, determines the curvature of the likelihood surface, and thus indicates the precision of the estimate.
Exponential families
The log-likelihood is also particularly useful for exponential families of distributions, which include many of the common parametric probability distributions. The probability distribution function (and thus likelihood function) for exponential families contain products of factors involving exponentiation. The logarithm of such a function is a sum of products, again easier to differentiate than the original function.
An exponential family is one whose probability density function is of the form (for some functions, writing ⟨·, ·⟩ for the inner product):
f(x ∣ θ) = h(x) exp( ⟨η(θ), T(x)⟩ − A(θ) ).
Each of these terms has an interpretation, but simply switching from probability to likelihood and taking logarithms yields the sum:
ℓ(θ ∣ x) = ⟨η(θ), T(x)⟩ − A(θ) + log h(x).
The η(θ) and h(x) each correspond to a change of coordinates, so in these coordinates, the log-likelihood of an exponential family is given by the simple formula:
ℓ(η ∣ x) = ⟨η, T(x)⟩ − A(η).
In words, the log-likelihood of an exponential family is the inner product of the natural parameter η and the sufficient statistic T(x), minus the normalization factor (log-partition function) A(η). Thus for example the maximum likelihood estimate can be computed by taking derivatives of the sufficient statistic T and the log-partition function A.
Example: the gamma distribution
The gamma distribution is an exponential family with two parameters, α and β. The likelihood function is
L(α, β ∣ x) = ( β^α / Γ(α) ) x^(α−1) e^(−βx).
Finding the maximum likelihood estimate of β for a single observed value x looks rather daunting. Its logarithm is much simpler to work with:
log L(α, β ∣ x) = α log β − log Γ(α) + (α − 1) log x − βx.
To maximize the log-likelihood, we first take the partial derivative with respect to β:
∂ log L(α, β ∣ x) / ∂β = α/β − x.
If there are a number of independent observations x₁, …, x_n, then the joint log-likelihood will be the sum of individual log-likelihoods, and the derivative of this sum will be a sum of derivatives of each individual log-likelihood:
∂/∂β Σᵢ log L(α, β ∣ xᵢ) = nα/β − Σᵢ xᵢ.
To complete the maximization procedure for the joint log-likelihood, the equation is set to zero and solved for β:
β̂ = α / x̄.
Here β̂ denotes the maximum-likelihood estimate, and x̄ = (1/n) Σᵢ xᵢ is the sample mean of the observations.
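The closed form can be sanity-checked against a numerical maximization (our own sketch; note that scipy parameterizes the gamma density with scale = 1/β):

```python
# Check that the numeric MLE of beta matches alpha / sample mean.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import gamma

alpha = 3.0
rng = np.random.default_rng(1)
x = rng.gamma(shape=alpha, scale=1.0 / 2.0, size=10_000)  # true beta = 2

def neg_loglik(beta):
    return -gamma.logpdf(x, a=alpha, scale=1.0 / beta).sum()

res = minimize_scalar(neg_loglik, bounds=(0.01, 10.0), method="bounded")
print(res.x, alpha / x.mean())  # the two values should agree closely
```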
Background and interpretation
Historical remarks
The term "likelihood" has been in use in English since at least late Middle English. Its formal use to refer to a specific function in mathematical statistics was proposed by Ronald Fisher, in two research papers published in 1921 and 1922. The 1921 paper introduced what is today called a "likelihood interval"; the 1922 paper introduced the term "method of maximum likelihood". Quoting Fisher:
The concept of likelihood should not be confused with probability as mentioned by Sir Ronald Fisher
Fisher's invention of statistical likelihood was in reaction against an earlier form of reasoning called inverse probability. His use of the term "likelihood" fixed the meaning of the term within mathematical statistics.
A. W. F. Edwards (1972) established the axiomatic basis for use of the log-likelihood ratio as a measure of relative support for one hypothesis against another. The support function is then the natural logarithm of the likelihood function. Both terms are used in phylogenetics, but were not adopted in a general treatment of the topic of statistical evidence.
Interpretations under different foundations
Among statisticians, there is no consensus about what the foundation of statistics should be. There are four main paradigms that have been proposed for the foundation: frequentism, Bayesianism, likelihoodism, and AIC-based. For each of the proposed foundations, the interpretation of likelihood is different. The four interpretations are described in the subsections below.
Frequentist interpretation
Bayesian interpretation
In Bayesian inference, one can speak about the likelihood of any proposition or random variable given another random variable: for example, the likelihood of a parameter value or of a statistical model (see marginal likelihood), given specified data or other evidence. The likelihood function remains the same entity, with the additional interpretations of (i) a conditional density of the data given the parameter (since the parameter is then a random variable) and (ii) a measure or amount of information brought by the data about the parameter value or even the model. Due to the introduction of a probability structure on the parameter space or on the collection of models, it is possible that a parameter value or a statistical model have a large likelihood value for given data, and yet have a low probability, or vice versa; this is often the case in medical contexts. Following Bayes' rule, the likelihood when seen as a conditional density can be multiplied by the prior probability density of the parameter and then normalized, to give a posterior probability density. More generally, the likelihood of an unknown quantity X given another unknown quantity Y is proportional to the probability of Y given X.
Likelihoodist interpretation
In frequentist statistics, the likelihood function is itself a statistic that summarizes a single sample from a population, whose calculated value depends on a choice of several parameters θ1 ... θp, where p is the count of parameters in some already-selected statistical model. The value of the likelihood serves as a figure of merit for the choice used for the parameters, and the parameter set with maximum likelihood is the best choice, given the data available.
The specific calculation of the likelihood is the probability that the observed sample would be assigned, assuming that the model chosen and the values of the several parameters θ give an accurate approximation of the frequency distribution of the population that the observed sample was drawn from. Heuristically, it makes sense that a good choice of parameters is one which renders the sample actually observed the maximum possible post-hoc probability of having happened. Wilks' theorem quantifies the heuristic rule by showing that the difference between the logarithm of the likelihood generated by the estimate's parameter values and the logarithm of the likelihood generated by the population's "true" (but unknown) parameter values is asymptotically χ² distributed.
Each independent sample's maximum likelihood estimate is a separate estimate of the "true" parameter set describing the population sampled. Successive estimates from many independent samples will cluster together with the population's "true" set of parameter values hidden somewhere in their midst. The difference in the logarithms of the maximum likelihood and adjacent parameter sets' likelihoods may be used to draw a confidence region on a plot whose co-ordinates are the parameters θ1 ... θp. The region surrounds the maximum-likelihood estimate, and all points (parameter sets) within that region differ at most in log-likelihood by some fixed value. The χ2 distribution given by Wilks' theorem converts the region's log-likelihood differences into the "confidence" that the population's "true" parameter set lies inside. The art of choosing the fixed log-likelihood difference is to make the confidence acceptably high while keeping the region acceptably small (narrow range of estimates).
As more data are observed, instead of being used to make independent estimates, they can be combined with the previous samples to make a single combined sample, and that large sample may be used for a new maximum likelihood estimate. As the size of the combined sample increases, the size of the likelihood region with the same confidence shrinks. Eventually, either the size of the confidence region is very nearly a single point, or the entire population has been sampled; in both cases, the estimated parameter set is essentially the same as the population parameter set.
AIC-based interpretation
Under the AIC paradigm, likelihood is interpreted within the context of information theory.
Oil shale
Oil shale is an organic-rich fine-grained sedimentary rock containing kerogen (a solid mixture of organic chemical compounds) from which liquid hydrocarbons can be produced. In addition to kerogen, the general composition of oil shales constitutes inorganic substance and bitumens. Based on their depositional environment, oil shales are classified as marine, lacustrine and terrestrial oil shales. Oil shales differ from oil-bearing shales, shale deposits that contain petroleum (tight oil) that is sometimes produced from drilled wells. Examples of oil-bearing shales are the Bakken Formation, Pierre Shale, Niobrara Formation, and Eagle Ford Formation. Accordingly, shale oil produced from oil shale should not be confused with tight oil, which is also frequently called shale oil.
A 2016 estimate of global deposits set the total world resources of oil shale at the equivalent of trillions of barrels of oil in place. Oil shale has gained attention as a potential abundant source of oil. However, the various attempts to develop oil shale deposits have had only limited success. Only Estonia and China have well-established oil shale industries, and Brazil, Germany, and Russia utilize oil shale to some extent.
Oil shale can be burned directly in furnaces as a low-grade fuel for power generation and district heating or used as a raw material in chemical and construction-materials processing. Heating oil shale to a sufficiently high temperature causes the chemical process of pyrolysis to yield a vapor. Upon cooling the vapor, the liquid unconventional oil, called shale oil, is separated from combustible oil-shale gas. Shale oil is a substitute for conventional crude oil; however, extracting shale oil is costlier than the production of conventional crude oil both financially and in terms of its environmental impact. Oil-shale mining and processing raise a number of environmental concerns, such as land use, waste disposal, water use, waste-water management, greenhouse-gas emissions and air pollution.
Geology
Oil shale, an organic-rich sedimentary rock, belongs to the group of sapropel fuels. It does not have a definite geological definition nor a specific chemical formula, and its seams do not always have discrete boundaries. Oil shales vary considerably in their mineral content, chemical composition, age, type of kerogen, and depositional history, and not all oil shales would necessarily be classified as shales in the strict sense. According to the petrologist Adrian C. Hutton of the University of Wollongong, oil shale is "not a geological nor a geochemically distinctive rock but rather an 'economic' term". Their common defining feature is low solubility in low-boiling organic solvents and generation of liquid organic products on thermal decomposition. Geologists can classify oil shales on the basis of their composition as carbonate-rich shales, siliceous shales, or cannel shales.
Oil shale differs from bitumen-impregnated rocks (other so-called unconventional resources such as oil sands and petroleum reservoir rocks), humic coals and carbonaceous shale. While oil sands do originate from the biodegradation of oil, heat and pressure have not (yet) transformed the kerogen in oil shale into petroleum, which means its maturation does not exceed early mesocatagenetic. Oil shales differ also from oil-bearing shales, shale deposits that contain tight oil that is sometimes produced from drilled wells. Examples of oil-bearing shales are the Bakken Formation, Pierre Shale, Niobrara Formation, and Eagle Ford Formation. Accordingly, shale oil produced from oil shale should not be confused with tight oil, which is also frequently called shale oil.
The general composition of oil shales comprises inorganic matrix, bitumens, and kerogen. While the bitumen portion of oil shales is soluble in carbon disulfide, the kerogen portion is insoluble in carbon disulfide and may contain iron, vanadium, nickel, molybdenum, and uranium. Oil shale contains a lower percentage of organic matter than coal. In commercial grades of oil shale the ratio of organic matter to mineral matter lies approximately between 0.75:5 and 1.5:5. At the same time, the organic matter in oil shale has an atomic ratio of hydrogen to carbon (H/C) approximately 1.2 to 1.8 times lower than for crude oil and about 1.5 to 3 times higher than for coals. The organic components of oil shale derive from a variety of organisms, such as the remains of algae, spores, pollen, plant cuticles and corky fragments of herbaceous and woody plants, and cellular debris from other aquatic and land plants. Some deposits contain significant fossils; Germany's Messel Pit has the status of a UNESCO World Heritage Site. The mineral matter in oil shale includes various fine-grained silicates and carbonates. The inorganic matrix can contain quartz, feldspar, clay (mainly illite and chlorite), carbonate (calcite and dolomite), pyrite and some other minerals.
Another classification, known as the van Krevelen diagram, assigns kerogen types, depending on the hydrogen, carbon, and oxygen content of oil shales' original organic matter. The most commonly used classification of oil shales, developed between 1987 and 1991 by Adrian C. Hutton, adapts petrographic terms from coal terminology. This classification designates oil shales as terrestrial, lacustrine (lake-bottom-deposited), or marine (ocean bottom-deposited), based on the environment of the initial biomass deposit. Known oil shales are predominantly of aquatic (marine, lacustrine) origin. Hutton's classification scheme has proven useful in estimating the yield and composition of the extracted oil.
Resource
As source rocks for most conventional oil reservoirs, oil shale deposits are found in all world oil provinces, although most of them are too deep to be exploited economically. As with all oil and gas resources, analysts distinguish between oil shale resources and oil shale reserves. "Resources" refer to all oil shale deposits, while "reserves" represent those deposits from which producers can extract oil shale economically using existing technology. Since extraction technologies develop continuously, planners can only estimate the amount of recoverable kerogen. Although resources of oil shale occur in many countries, only 33 countries possess known deposits of potential economic value. Well-explored deposits, potentially classifiable as reserves, include the Green River deposits in the western United States, the Tertiary deposits in Queensland, Australia, deposits in Sweden and Estonia, the El-Lajjun deposit in Jordan, and deposits in France, Germany, Brazil, China, southern Mongolia and Russia. These deposits have given rise to expectations of yielding at least 40 liters of shale oil per tonne of oil shale, using the Fischer Assay.
A 2016 estimate set the total world resources of oil shale as equivalent to trillions of barrels of shale oil, with deposits in the United States accounting for more than 80% of the world total; other significant resource holders are China, Russia, and Brazil. For comparison, the world's proven conventional oil reserves at the same time were far smaller. The largest deposits in the world occur in the United States in the Green River Formation, which covers portions of Colorado, Utah, and Wyoming; about 70% of this resource lies on land owned or managed by the United States federal government. The amount of economically recoverable oil shale is unknown.
History
Humans have used oil shale as a fuel since prehistoric times, since it generally burns without any processing. Around 3000 BC, "rock oil" was used in Mesopotamia for road construction and making architectural adhesives. Britons of the Iron Age used tractable oil shales to fashion cists for burial, or simply polished them to create ornaments.
In the 10th century, the Arab physician Masawaih al-Mardini (Mesue the Younger) described a method of extraction of oil from "some kind of bituminous shale". The first patent for extracting oil from oil shale was British Crown Patent 330 granted in 1694 to Martin Eele, Thomas Hancock and William Portlock, who had "found a way to extract and make great quantities of pitch, tarr, and oyle out of a sort of stone".
Modern industrial mining of oil shale began in 1837 in Autun, France, followed by exploitation in Scotland, Germany, and several other countries. Operations during the 19th century focused on the production of kerosene, lamp oil, and paraffin; these products, supplied largely from Scottish oil shales, helped meet the growing demand for lighting that arose during the Industrial Revolution. Fuel oil, lubricating oil and grease, and ammonium sulfate were also produced. Scottish production peaked around 1913, when 120 oil-shale works produced 3,332,000 tonnes of oil shale, generating around 2% of the global production of petroleum. The Scottish oil-shale industry expanded immediately before World War I partly because of limited access to conventional petroleum resources and the mass production of automobiles and trucks, which accompanied an increase in gasoline consumption, but mostly because the British Admiralty required a reliable fuel source for its fleet as war in Europe loomed.
Although the Estonian and Chinese oil-shale industries continued to grow after World War II, most other countries abandoned their projects because of high processing costs and the availability of cheaper petroleum. Following the 1973 oil crisis, world production of oil shale reached a peak of 46 million tonnes in 1980 before falling to about 16 million tonnes in 2000, because of competition from cheap conventional petroleum in the 1980s.
On 2 May 1982, known in some circles as "Black Sunday", Exxon canceled its US$5 billion Colony Shale Oil Project near Parachute, Colorado, because of low oil prices and increased expenses, laying off more than 2,000 workers and leaving a trail of home foreclosures and small business bankruptcies. In 1986, President Ronald Reagan signed into law the Consolidated Omnibus Budget Reconciliation Act of 1985, which among other things abolished the United States' Synthetic Liquid Fuels Program.
The global oil-shale industry began to revive at the beginning of the 21st century. In 2003, an oil-shale development program restarted in the United States. Authorities introduced a commercial leasing program permitting the extraction of oil shale and oil sands on federal lands in 2005, in accordance with the Energy Policy Act of 2005.
Industry
Oil shale is utilized primarily in Brazil, China, and Estonia, and to some extent in Germany and Russia. Several additional countries have started assessing their reserves or have built experimental production plants, while others have phased out their oil shale industry. Oil shale serves for oil production in Estonia, Brazil, and China; for power generation in Estonia, China, and Germany; for cement production in Estonia, Germany, and China; and for use in chemical industries in China, Estonia, and Russia.
About 80% of oil shale used globally is extracted in Estonia, mainly because Estonia operates several oil-shale-fired power plants, which have an installed capacity of 2,967 megawatts (MW). By comparison, China's oil shale power plants have an installed capacity of 12 MW, and Germany's have 9.9 MW. A 470 MW oil shale power plant in Jordan was under construction as of 2020. Israel, Romania and Russia have in the past run power plants fired by oil shale but have shut them down or switched to other fuel sources such as natural gas. Other countries, such as Egypt, have had plans to construct power plants fired by oil shale, while Canada and Turkey had plans to burn oil shale along with coal for power generation. Oil shale serves as the main fuel for power generation only in Estonia, where 90.3% of the country's electrical generation in 2016 was produced from oil shale.
According to the World Energy Council, in 2008 the total production of shale oil from oil shale was 930,000 tonnes, of which China produced 375,000 tonnes, Estonia 355,000 tonnes, and Brazil 200,000 tonnes. In comparison, production of conventional oil and natural gas liquids in 2008 amounted to 3.95 billion tonnes.
Extraction and processing
Most exploitation of oil shale involves mining followed by shipping elsewhere, after which the shale is burned directly to generate electricity or undergoes further processing. The most common methods of mining are open-pit mining and strip mining. These procedures remove most of the overlying material to expose the deposits of oil shale, and become practical when the deposits occur near the surface. Underground mining of oil shale, which removes less of the overlying material, employs the room-and-pillar method.
The extraction of the useful components of oil shale usually takes place above ground (ex-situ processing), although several newer technologies perform this underground (on-site or in-situ processing). In either case, the chemical process of pyrolysis converts the kerogen in the oil shale to shale oil (synthetic crude oil) and oil shale gas. Most conversion technologies involve heating shale in the absence of oxygen to a temperature at which kerogen decomposes (pyrolyses) into gas, condensable oil, and a solid residue; this usually takes place at several hundred degrees Celsius. The process of decomposition begins at relatively low temperatures (around 300 °C) but proceeds more rapidly and more completely at higher temperatures.
In-situ processing involves heating the oil shale underground. Such technologies can potentially extract more oil from a given area of land than ex-situ processes, since they can access the material at greater depths than surface mines can. Several companies have patented methods for in-situ retorting. However, most of these methods remain in the experimental phase. Two in-situ processes can be distinguished: true in-situ processing does not involve mining the oil shale, while modified in-situ processing involves removing part of the oil shale and bringing it to the surface, in order to create permeability for gas flow in a rubble chimney; explosives rubblize the remaining oil-shale deposit.
Hundreds of patents for oil shale retorting technologies exist; however, only a few dozen have undergone testing. By 2006, only four technologies remained in commercial use: Kiviter, Galoter, Fushun, and Petrosix.
Applications and products
Oil shale is utilized as a fuel for thermal power plants, burning it (like coal) to drive steam turbines; some of these plants employ the resulting heat for district heating of homes and businesses. In addition to its use as a fuel, oil shale may also serve in the production of specialty carbon fibers, adsorbent carbons, carbon black, phenols, resins, glues, tanning agents, mastic, road bitumen, cement, bricks, construction and decorative blocks, soil additives, fertilizers, rock-wool insulation, glass, and pharmaceutical products. However, oil shale use for production of these items remains small or only in experimental development. Some oil shales yield sulfur, ammonia, alumina, soda ash, uranium, and nahcolite as shale-oil extraction byproducts. Between 1946 and 1952, a marine type of Dictyonema shale served for uranium production in Sillamäe, Estonia, and between 1950 and 1989 Sweden used alum shale for the same purposes. Oil shale gas has served as a substitute for natural gas, but producing oil shale gas as a natural-gas substitute has remained economically infeasible.
The shale oil derived from oil shale does not directly substitute for crude oil in all applications. It may contain higher concentrations of olefins, oxygen, and nitrogen than conventional crude oil. Some shale oils may have higher sulfur or arsenic content. By comparison with West Texas Intermediate, the benchmark standard for crude oil in the futures-contract market, the Green River shale oil sulfur content ranges from near 0% to 4.9% (0.76% on average), whereas West Texas Intermediate's sulfur content has a maximum of 0.42%. The sulfur content in shale oil from Jordan's oil shales may be as high as 9.5%. The arsenic content, for example, becomes an issue for Green River formation oil shale. The higher concentrations of these materials mean that the oil must undergo considerable upgrading (hydrotreating) before serving as oil-refinery feedstock. Above-ground retorting processes tended to yield a lower API gravity shale oil than the in situ processes. Shale oil serves best for producing middle distillates such as kerosene, jet fuel, and diesel fuel. Worldwide demand for these middle distillates, particularly for diesel fuels, increased rapidly in the 1990s and 2000s. However, appropriate refining processes equivalent to hydrocracking can transform shale oil into a lighter-range hydrocarbon (gasoline).
Economics
The various attempts to develop oil shale deposits have succeeded only when the cost of shale-oil production in a given region comes in below the price of crude oil or its other substitutes (the break-even price). According to a 2005 survey conducted by the RAND Corporation, the cost of producing a barrel of oil at a surface retorting complex in the United States (comprising a mine, retorting plant, upgrading plant, supporting utilities, and spent shale reclamation) would range between US$70–95 ($440–600/m3, adjusted to 2005 values). This estimate considers varying levels of kerogen quality and extraction efficiency. In order to run a profitable operation, the price of crude oil would need to remain above these levels. The analysis also discussed the expectation that processing costs would drop after the establishment of the complex: the hypothetical unit would see a cost reduction of 35–70% after its initial production run. Assuming a steady increase in output during each year after the start of commercial production, RAND predicted the costs would decline to $35–48 per barrel ($220–300/m3) within 12 years, and further to $30–40 per barrel ($190–250/m3) thereafter. In 2010, the International Energy Agency estimated, based on the various pilot projects, that investment and operating costs would be similar to those of Canadian oil sands, meaning production would be economic at prices above $60 per barrel at then-current costs. This figure does not account for carbon pricing, which adds additional cost. According to the New Policies Scenario introduced in its World Energy Outlook 2010, a price of $50 per tonne of emitted CO2 would add an additional $7.50 cost per barrel of shale oil. As of November 2021, the price of a tonne of CO2 exceeded $60.
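The carbon-price figure can be checked with simple arithmetic: a $7.50 per-barrel surcharge at $50 per tonne of CO2 implies roughly 0.15 tonnes of CO2 emitted per barrel (our own back-of-envelope reading of the IEA numbers):

```python
# Implied emissions intensity behind the quoted carbon surcharge.
carbon_price = 50.0  # USD per tonne of CO2 (IEA scenario)
surcharge = 7.50     # USD per barrel of shale oil
print(surcharge / carbon_price)  # 0.15 tonnes of CO2 per barrel
```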
A 1972 publication in the journal Pétrole Informations compared shale-based oil production unfavorably with coal liquefaction. The article portrayed coal liquefaction as less expensive, generating more oil, and creating fewer environmental impacts than extraction from oil shale, citing a higher conversion ratio of oil per ton of coal than of shale oil per ton of oil shale.
A critical measure of the viability of oil shale as an energy source lies in the ratio of the energy produced by the shale to the energy used in its mining and processing, a ratio known as "energy return on investment" (EROI). A 1984 study estimated the EROI of the various known oil-shale deposits as varying between 0.7 and 13.3, although known oil-shale extraction development projects assert an EROI between 3 and 10. According to the World Energy Outlook 2010, the EROI of ex-situ processing is typically 4 to 5, while that of in-situ processing may be as low as 2. However, according to the IEA, most of the energy used can be provided by burning the spent shale or oil-shale gas. To increase efficiency when retorting oil shale, researchers have proposed and tested several co-pyrolysis processes.
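EROI itself is just a ratio, so the figures above translate directly: an EROI of 4 means four units of energy delivered per unit expended. A trivial illustration, with made-up energy quantities chosen to land in the cited range:

```python
# Energy return on investment (EROI) = energy delivered / energy expended.
def eroi(energy_out_gj: float, energy_in_gj: float) -> float:
    return energy_out_gj / energy_in_gj

print(eroi(100.0, 25.0))  # 4.0, typical of ex-situ processing per the IEA
print(eroi(100.0, 50.0))  # 2.0, the low end cited for in-situ processing
```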
Environmental considerations
Mining oil shale involves numerous environmental impacts, more pronounced in surface mining than in underground mining. These include acid drainage induced by the sudden rapid exposure and subsequent oxidation of formerly buried materials; the introduction of metals including mercury into surface-water and groundwater; increased erosion, sulfur-gas emissions; and air pollution caused by the production of particulates during processing, transport, and support activities.
Oil-shale extraction can damage the biological and recreational value of land and the ecosystem in the mining area. Combustion and thermal processing generate waste material. In addition, the atmospheric emissions from oil shale processing and combustion include carbon dioxide, a greenhouse gas. Environmentalists oppose production and usage of oil shale, as it creates even more greenhouse gases than conventional fossil fuels. Experimental in situ conversion processes and carbon capture and storage technologies may reduce some of these concerns in the future, but at the same time they may cause other problems, including groundwater pollution. Among the water contaminants commonly associated with oil shale processing are oxygen and nitrogen heterocyclic hydrocarbons. Commonly detected examples include quinoline derivatives, pyridine, and various alkyl homologues of pyridine, such as picoline and lutidine.
Water concerns are sensitive issues in arid regions, such as the western U.S. and Israel's Negev Desert, where plans exist to expand oil-shale extraction despite a water shortage. Depending on technology, above-ground retorting uses between one and five barrels of water per barrel of produced shale oil. A 2008 programmatic environmental impact statement issued by the U.S. Bureau of Land Management stated that surface mining and retort operations produce considerable quantities of waste water per unit of processed oil shale. In situ processing, according to one estimate, uses about one-tenth as much water.
Environmental activists, including members of Greenpeace, have organized strong protests against the oil shale industry. As one result, Queensland Energy Resources put the proposed Stuart Oil Shale Project in Australia on hold in 2004.
Extraterrestrial oil shale
Some comets contain massive amounts of an organic material almost identical to high-grade oil shale, the equivalent of cubic kilometers of such material mixed with other material; for instance, corresponding hydrocarbons were detected in a probe fly-by through the tail of Halley's Comet in 1986.
Natural transformation
In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure (i.e., the composition of morphisms) of the categories involved. Hence, a natural transformation can be considered to be a "morphism of functors". Informally, the notion of a natural transformation states that a particular map between functors can be done consistently over an entire category.
Indeed, this intuition can be formalized to define so-called functor categories. Natural transformations are, after categories and functors, one of the most fundamental notions of category theory and consequently appear in the majority of its applications.
Definition
If $F$ and $G$ are functors between the categories $C$ and $D$ (both from $C$ to $D$), then a natural transformation $\eta$ from $F$ to $G$ is a family of morphisms that satisfies two requirements.
The natural transformation must associate, to every object $X$ in $C$, a morphism $\eta_X \colon F(X) \to G(X)$ between objects of $D$. The morphism $\eta_X$ is called the component of $\eta$ at $X$.
Components must be such that for every morphism $f \colon X \to Y$ in $C$ we have:
$$\eta_Y \circ F(f) = G(f) \circ \eta_X.$$
The last equation can conveniently be expressed by the following commutative diagram (the naturality square):
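A standard rendering of the square, reconstructed here since the article's original diagram image is not reproduced:
$$\begin{array}{ccc}
F(X) & \xrightarrow{\ F(f)\ } & F(Y) \\
{\scriptstyle \eta_X}\big\downarrow & & \big\downarrow{\scriptstyle \eta_Y} \\
G(X) & \xrightarrow{\ G(f)\ } & G(Y)
\end{array}$$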
If both $F$ and $G$ are contravariant, the vertical arrows in the diagram above are reversed. If $\eta$ is a natural transformation from $F$ to $G$, we also write $\eta \colon F \to G$ or $\eta \colon F \Rightarrow G$. This is also expressed by saying the family of morphisms $\eta_X \colon F(X) \to G(X)$ is natural in $X$.
If, for every object $X$ in $C$, the morphism $\eta_X$ is an isomorphism in $D$, then $\eta$ is said to be a natural isomorphism (or sometimes natural equivalence or isomorphism of functors). Two functors $F$ and $G$ are called naturally isomorphic or simply isomorphic if there exists a natural isomorphism from $F$ to $G$.
An infranatural transformation $\eta$ from $F$ to $G$ is simply a family of morphisms $\eta_X \colon F(X) \to G(X)$, for all $X$ in $C$. Thus a natural transformation is an infranatural transformation for which $\eta_Y \circ F(f) = G(f) \circ \eta_X$ for every morphism $f \colon X \to Y$. The naturalizer of $\eta$, $\operatorname{nat}(\eta)$, is the largest subcategory of $C$ containing all the objects of $C$ on which $\eta$ restricts to a natural transformation.
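As an informal illustration of the definition (an addition of this revision, not part of the original article), parametrically polymorphic functions in Haskell behave like natural transformations between Functors, with one component per type and the naturality square holding automatically by parametricity:

{-# LANGUAGE RankNTypes #-}

-- A natural transformation between Haskell Functors f and g:
-- a single uniform definition supplies the component at every type x.
type Nat f g = forall x. f x -> g x

-- Example: the component family safeHead_X : [X] -> Maybe X.
safeHead :: Nat [] Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x

-- Naturality: for every function k, fmap k . safeHead == safeHead . fmap k.
main :: IO ()
main = print ( fmap (+ 1) (safeHead [1, 2, 3 :: Int])     -- Just 2
             , safeHead (fmap (+ 1) [1, 2, 3 :: Int]) )   -- Just 2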
Examples
Opposite group
Statements such as
"Every group is naturally isomorphic to its opposite group"
abound in modern mathematics. We will now give the precise meaning of this statement as well as its proof. Consider the category
of all groups with group homomorphisms as morphisms. If $(G, *)$ is a group, we define its opposite group $(G^{\mathrm{op}}, *^{\mathrm{op}})$ as follows: $G^{\mathrm{op}}$ is the same set as $G$, and the operation $*^{\mathrm{op}}$ is defined by $a *^{\mathrm{op}} b = b * a$. All multiplications in $G^{\mathrm{op}}$ are thus "turned around". Forming the opposite group becomes a (covariant) functor from $\mathbf{Grp}$ to $\mathbf{Grp}$ if we define $f^{\mathrm{op}} = f$ for any group homomorphism $f \colon G \to H$. Note that $f^{\mathrm{op}}$ is indeed a group homomorphism from $G^{\mathrm{op}}$ to $H^{\mathrm{op}}$:
$$f^{\mathrm{op}}(a *^{\mathrm{op}} b) = f(b * a) = f(b) * f(a) = f^{\mathrm{op}}(a) *^{\mathrm{op}} f^{\mathrm{op}}(b).$$
The content of the above statement is:
"The identity functor is naturally isomorphic to the opposite functor "
To prove this, we need to provide isomorphisms $\eta_G \colon G \to G^{\mathrm{op}}$ for every group $G$, such that the above diagram commutes. Set $\eta_G(a) = a^{-1}$. The formulas
$$\eta_G(a * b) = (a * b)^{-1} = b^{-1} * a^{-1} = a^{-1} *^{\mathrm{op}} b^{-1} = \eta_G(a) *^{\mathrm{op}} \eta_G(b)$$
and $\eta_{G^{\mathrm{op}}}(\eta_G(a)) = (a^{-1})^{-1} = a$ show that $\eta_G$ is a group homomorphism with inverse $\eta_{G^{\mathrm{op}}}$. To prove the naturality, we start with a group homomorphism $f \colon G \to H$ and show $\eta_H \circ f = f^{\mathrm{op}} \circ \eta_G$, i.e. $(f(a))^{-1} = f^{\mathrm{op}}(a^{-1})$ for all $a$ in $G$. This is true since $f^{\mathrm{op}} = f$ and every group homomorphism has the property $(f(a))^{-1} = f(a^{-1})$.
Modules
Let $\varphi \colon M \to M'$ be an $R$-module homomorphism of right modules. For every left module $N$ there is a natural map $\varphi \otimes N \colon M \otimes_R N \to M' \otimes_R N$; these maps form a natural transformation $\varphi \otimes {-} \colon M \otimes_R {-} \Rightarrow M' \otimes_R {-}$. For every right module $N$ there is a natural map $\operatorname{Hom}_R(\varphi, N) \colon \operatorname{Hom}_R(M', N) \to \operatorname{Hom}_R(M, N)$ defined by $f \mapsto f \circ \varphi$; these maps form a natural transformation $\operatorname{Hom}_R(M', {-}) \Rightarrow \operatorname{Hom}_R(M, {-})$.
Abelianization
Given a group $G$, we can define its abelianization $G^{\mathrm{ab}} = G/[G, G]$. Let $\pi_G \colon G \to G^{\mathrm{ab}}$ denote the projection map onto the cosets of $[G, G]$. This homomorphism is "natural in $G$", i.e., it defines a natural transformation, which we now check. Let $H$ be a group. For any homomorphism $f \colon G \to H$, we have that $[G, G]$ is contained in the kernel of $\pi_H \circ f$, because any homomorphism into an abelian group kills the commutator subgroup. Then $\pi_H \circ f$ factors through $G^{\mathrm{ab}}$ as $f^{\mathrm{ab}} \circ \pi_G = \pi_H \circ f$ for the unique homomorphism $f^{\mathrm{ab}} \colon G^{\mathrm{ab}} \to H^{\mathrm{ab}}$. This makes ${}^{\mathrm{ab}} \colon \mathbf{Grp} \to \mathbf{Grp}$ a functor and $\pi$ a natural transformation, but not a natural isomorphism, from the identity functor to ${}^{\mathrm{ab}}$.
Hurewicz homomorphism
Functors and natural transformations abound in algebraic topology, with the Hurewicz homomorphisms serving as examples. For any pointed topological space $(X, x)$ and positive integer $n$ there exists a group homomorphism
$$h_n \colon \pi_n(X, x) \to H_n(X)$$
from the $n$-th homotopy group of $(X, x)$ to the $n$-th homology group of $X$. Both $\pi_n$ and $H_n$ are functors from the category Top* of pointed topological spaces to the category Grp of groups, and $h_n$ is a natural transformation from $\pi_n$ to $H_n$.
Determinant
Given commutative rings $R$ and $S$ with a ring homomorphism $f \colon R \to S$, the respective groups of invertible $n \times n$ matrices $\operatorname{GL}_n(R)$ and $\operatorname{GL}_n(S)$ inherit a homomorphism which we denote by $\operatorname{GL}_n(f)$, obtained by applying $f$ to each matrix entry. Similarly, $f$ restricts to a group homomorphism $f^{*} \colon R^{*} \to S^{*}$, where $R^{*}$ denotes the group of units of $R$. In fact, $\operatorname{GL}_n$ and ${*}$ are functors from the category of commutative rings $\mathbf{CRing}$ to $\mathbf{Grp}$.
The determinant on the group $\operatorname{GL}_n(R)$, denoted by $\det_R$, is a group homomorphism
$$\det_R \colon \operatorname{GL}_n(R) \to R^{*}$$
which is natural in $R$: because the determinant is defined by the same formula for every ring, $f^{*} \circ \det_R = \det_S \circ \operatorname{GL}_n(f)$ holds. This makes the determinant a natural transformation from $\operatorname{GL}_n$ to ${*}$.
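The naturality condition is again a commutative square (reconstructed here; the article's original diagram image is not reproduced):
$$\begin{array}{ccc}
\operatorname{GL}_n(R) & \xrightarrow{\ \operatorname{GL}_n(f)\ } & \operatorname{GL}_n(S) \\
{\scriptstyle \det_R}\big\downarrow & & \big\downarrow{\scriptstyle \det_S} \\
R^{*} & \xrightarrow{\ f^{*}\ } & S^{*}
\end{array}$$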
Double dual of a vector space
For example, if $K$ is a field, then for every vector space $V$ over $K$ we have a "natural" injective linear map $V \to V^{**}$ from the vector space into its double dual. These maps are "natural" in the following sense: the double dual operation is a functor, and the maps are the components of a natural transformation from the identity functor to the double dual functor.
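A loose Haskell sketch of why this map needs no choices (an analogy assumed by this revision, with types standing in for vector spaces and a fixed type k for the ground field): the evaluation map into the double dual is definable uniformly for every type.

-- The "double dual" of a type a relative to a fixed result type k.
newtype DoubleDual k a = DoubleDual { runDoubleDual :: (a -> k) -> k }

-- The component eta_a : a -> a**. It evaluates functionals at x and
-- uses no property of a, which is the informal content of "natural".
eta :: a -> DoubleDual k a
eta x = DoubleDual (\phi -> phi x)

main :: IO ()
main = print (runDoubleDual (eta (21 :: Int)) (* 2))  -- 42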
Finite calculus
For every abelian group $G$, the set $\operatorname{Hom}_{\mathbf{Set}}(\mathbb{Z}, U(G))$ of functions from the integers to the underlying set of $G$ forms an abelian group $V_{\mathbb{Z}}(G)$ under pointwise addition. (Here $U$ is the standard forgetful functor $U \colon \mathbf{Ab} \to \mathbf{Set}$.) Given an $\mathbf{Ab}$ morphism $\varphi \colon G \to G'$, the map $V_{\mathbb{Z}}(\varphi) \colon V_{\mathbb{Z}}(G) \to V_{\mathbb{Z}}(G')$ given by left composing $\varphi$ with the elements of the former is itself a homomorphism of abelian groups; in this way we obtain a functor $V_{\mathbb{Z}} \colon \mathbf{Ab} \to \mathbf{Ab}$. The finite difference operator $\Delta_G$ taking each function $f \colon \mathbb{Z} \to U(G)$ to $\Delta(f) \colon n \mapsto f(n+1) - f(n)$ is a map from $V_{\mathbb{Z}}(G)$ to itself, and the collection $\Delta$ of such maps gives a natural transformation $\Delta \colon V_{\mathbb{Z}} \Rightarrow V_{\mathbb{Z}}$.
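A runnable sketch of the naturality check (a simplification assumed here: Haskell's Num stands in for the abelian group, and doubling plays the role of the homomorphism $\varphi$):

-- V(G): functions from the integers into G, with morphisms acting
-- by post-composition (the functor V_Z restricted to Num types).
type V a = Integer -> a

vmap :: (a -> b) -> V a -> V b
vmap phi f = phi . f

-- Finite difference: (delta f)(n) = f(n+1) - f(n).
delta :: Num a => V a -> V a
delta f n = f (n + 1) - f n

-- Naturality of delta at the additive homomorphism (*2):
-- delta (vmap (*2) f) and vmap (*2) (delta f) agree pointwise.
main :: IO ()
main = do
  let f n = n * n :: Integer
  print (delta (vmap (* 2) f) 5)  -- 22
  print (vmap (* 2) (delta f) 5)  -- 22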
Tensor-hom adjunction
Consider the category $\mathbf{Ab}$ of abelian groups and group homomorphisms. For all abelian groups $X$, $Y$ and $Z$ we have a group isomorphism
$$\operatorname{Hom}(X \otimes Y, Z) \cong \operatorname{Hom}(X, \operatorname{Hom}(Y, Z)).$$
These isomorphisms are "natural" in the sense that they define a natural transformation between the two involved functors $\mathbf{Ab}^{\mathrm{op}} \times \mathbf{Ab}^{\mathrm{op}} \times \mathbf{Ab} \to \mathbf{Ab}$.
(Here "op" is the opposite category of $\mathbf{Ab}$, not to be confused with the trivial opposite group functor on $\mathbf{Ab}$!)
This is formally the tensor-hom adjunction, and is an archetypal example of a pair of adjoint functors. Natural transformations arise frequently in conjunction with adjoint functors, and indeed, adjoint functors are defined by a certain natural isomorphism. Additionally, every pair of adjoint functors comes equipped with two natural transformations (generally not isomorphisms) called the unit and counit.
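For intuition only (an analogy added in this revision, not the Ab-enriched statement itself): in the category of Haskell types, the corresponding product/exponential adjunction is witnessed by curry and uncurry, and its naturality in all three variables again follows from parametricity.

-- curry and uncurry witness Hom(X x Y, Z) =~ Hom(X, Hom(Y, Z)) for types.
toHom :: ((x, y) -> z) -> (x -> (y -> z))
toHom = curry

fromHom :: (x -> (y -> z)) -> ((x, y) -> z)
fromHom = uncurry

-- Round trip: fromHom . toHom == id and toHom . fromHom == id.
main :: IO ()
main = print (fromHom (toHom fst) ('a' :: Char, 1 :: Int))  -- 'a'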
Unnatural isomorphism
The notion of a natural transformation is categorical, and states (informally) that a particular map between functors can be done consistently over an entire category. Informally, a particular map (esp. an isomorphism) between individual objects (not entire categories) is referred to as a "natural isomorphism", meaning implicitly that it is actually defined on the entire category, and defines a natural transformation of functors; formalizing this intuition was a motivating factor in the development of category theory.
Conversely, a particular map between particular objects may be called an unnatural isomorphism (or "an isomorphism that is not natural") if the map cannot be extended to a natural transformation on the entire category. Given an object $X$, a functor $G$ (taking for simplicity the first functor to be the identity) and an isomorphism $\eta \colon X \to G(X)$, proof of unnaturality is most easily shown by giving an automorphism $A \colon X \to X$ that does not commute with this isomorphism (so $\eta \circ A \neq G(A) \circ \eta$). More strongly, if one wishes to prove that $X$ and $G(X)$ are not naturally isomorphic, without reference to a particular isomorphism, this requires showing that for any isomorphism $\eta$, there is some $A$ with which it does not commute; in some cases a single automorphism $A$ works for all candidate isomorphisms $\eta$, while in other cases one must show how to construct a different $A_\eta$ for each isomorphism. The maps of the category play a crucial role – any infranatural transformation is natural if the only maps are the identity map, for instance.
This is similar (but more categorical) to concepts in group theory or module theory, where a given decomposition of an object into a direct sum is "not natural", or rather "not unique", as automorphisms exist that do not preserve the direct sum decomposition.
Some authors distinguish notationally, using $\cong$ for a natural isomorphism and $\approx$ for an unnatural isomorphism, reserving $=$ for equality (usually equality of maps).
Example: fundamental group of torus
As an example of the distinction between the functorial statement and individual objects, consider homotopy groups of a product space, specifically the fundamental group of the torus.
The homotopy groups of a product space are naturally the product of the homotopy groups of the components, with the isomorphism given by projection onto the two factors, fundamentally because maps into a product space are exactly products of maps into the components – this is a functorial statement.
However, the torus (which is abstractly a product of two circles) has fundamental group isomorphic to $\mathbb{Z}^2$, but the splitting $\pi_1(T, t_0) \approx \mathbb{Z} \times \mathbb{Z}$ is not natural. Note the use of $\approx$, $\cong$, and $=$:
$$\pi_1(T, t_0) \approx \pi_1(S^1, x_0) \times \pi_1(S^1, y_0) \cong \mathbb{Z} \times \mathbb{Z} = \mathbb{Z}^2.$$
This abstract isomorphism with a product is not natural, as some isomorphisms of $T$ do not preserve the product: the self-homeomorphism of $T$ (thought of as the quotient space $\mathbb{R}^2/\mathbb{Z}^2$) given by $(x, y) \mapsto (x + y, y)$ (geometrically a Dehn twist about one of the generating curves) acts as the matrix $\left(\begin{smallmatrix} 1 & 1 \\ 0 & 1 \end{smallmatrix}\right)$ on $\mathbb{Z}^2$ (it is in the general linear group $\operatorname{GL}(2, \mathbb{Z})$ of invertible integer matrices), which does not preserve the decomposition as a product because it is not diagonal. However, if one is given the torus as a product – equivalently, given a decomposition of the space – then the splitting of the group follows from the general statement earlier. In categorical terms, the relevant category (preserving the structure of a product space) is "maps of product spaces, namely a pair of maps between the respective components".
Naturality is a categorical notion, and requires being very precise about exactly what data is given – the torus as a space that happens to be a product (in the category of spaces and continuous maps) is different from the torus presented as a product (in the category of products of two spaces and continuous maps between the respective components).
Example: dual of a finite-dimensional vector space
Every finite-dimensional vector space is isomorphic to its dual space, but there may be many different isomorphisms between the two spaces. There is in general no natural isomorphism between a finite-dimensional vector space and its dual space. However, related categories (with additional structure and restrictions on the maps) do have a natural isomorphism, as described below.
The dual space of a finite-dimensional vector space is again a finite-dimensional vector space of the same dimension, and these are thus isomorphic, since dimension is the only invariant of finite-dimensional vector spaces over a given field. However, in the absence of additional constraints (such as a requirement that maps preserve the chosen basis), the map from a space to its dual is not unique, and thus such an isomorphism requires a choice, and is "not natural". On the category of finite-dimensional vector spaces and linear maps, one can define an infranatural isomorphism from vector spaces to their dual by choosing an isomorphism for each space (say, by choosing a basis for every vector space and taking the corresponding isomorphism), but this will not define a natural transformation. Intuitively this is because it required a choice, rigorously because any such choice of isomorphisms will not commute with, say, the zero map; see below for detailed discussion.
Starting from finite-dimensional vector spaces (as objects) and the identity and dual functors, one can define a natural isomorphism, but this requires first adding additional structure, then restricting the maps from "all linear maps" to "linear maps that respect this structure". Explicitly, for each vector space $V$, require that it comes with the data of an isomorphism to its dual, $\eta_V \colon V \to V^{*}$. In other words, take as objects vector spaces with a nondegenerate bilinear form $b_V(v, w) := \eta_V(v)(w)$. This defines an infranatural isomorphism (an isomorphism for each object). One then restricts the maps to only those maps $T \colon V \to U$ that commute with the isomorphisms, $T^{*} \circ \eta_U \circ T = \eta_V$, or in other words, preserve the bilinear form: $b_U(T(v), T(w)) = b_V(v, w)$. (These maps define the naturalizer of the isomorphisms.) The resulting category, with objects finite-dimensional vector spaces with a nondegenerate bilinear form, and maps linear transforms that respect the bilinear form, by construction has a natural isomorphism from the identity to the dual (each space has an isomorphism to its dual, and the maps in the category are required to commute). Viewed in this light, this construction (add transforms for each object, restrict maps to commute with these) is completely general, and does not depend on any particular properties of vector spaces.
In this category (finite-dimensional vector spaces with a nondegenerate bilinear form, maps linear transforms that respect the bilinear form), the dual of a map between vector spaces can be identified as a transpose. Often for reasons of geometric interest this is specialized to a subcategory, by requiring that the nondegenerate bilinear forms have additional properties, such as being symmetric (orthogonal matrices), symmetric and positive definite (inner product space), symmetric sesquilinear (Hermitian spaces), skew-symmetric and totally isotropic (symplectic vector space), etc. – in all these categories a vector space is naturally identified with its dual, by the nondegenerate bilinear form.
Operations with natural transformations
Vertical composition
If $\eta \colon F \Rightarrow G$ and $\mu \colon G \Rightarrow H$ are natural transformations between functors $F, G, H \colon C \to D$, then we can compose them to get a natural transformation $\mu \circ \eta \colon F \Rightarrow H$.
This is done componentwise:
$$(\mu \circ \eta)_X = \mu_X \circ \eta_X.$$
This vertical composition of natural transformations is associative and has an identity, and allows one to consider the collection of all functors itself as a category (see below under Functor categories).
The identity natural transformation $\operatorname{id}_F$ on a functor $F$ has components $(\operatorname{id}_F)_X = \operatorname{id}_{F(X)}$.
For $\eta \colon F \Rightarrow G$, $\operatorname{id}_G \circ \eta = \eta = \eta \circ \operatorname{id}_F$.
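Continuing the Haskell analogy from the Definition section (again an illustration assumed by this revision, not part of the original article), vertical composition is just componentwise function composition, and the identity transformation is id:

{-# LANGUAGE RankNTypes #-}

type Nat f g = forall x. f x -> g x

-- Vertical composition: compose the components at each type x.
vert :: Nat g h -> Nat f g -> Nat f h
vert mu eta = mu . eta

-- The identity natural transformation on any functor.
idNat :: Nat f f
idNat = id

-- Example: reverse ([] ~> []) followed by safeHead ([] ~> Maybe)
-- yields a "safe last" transformation [] ~> Maybe.
safeHead :: Nat [] Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x

safeLast :: Nat [] Maybe
safeLast = vert safeHead reverse

main :: IO ()
main = print (safeLast [1, 2, 3 :: Int])  -- Just 3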
Horizontal composition
If $\eta \colon F \Rightarrow G$ is a natural transformation between functors $F, G \colon C \to D$ and $\epsilon \colon J \Rightarrow K$ is a natural transformation between functors $J, K \colon D \to E$, then the composition of functors allows a composition of natural transformations $\epsilon * \eta \colon J \circ F \Rightarrow K \circ G$ with components
$$(\epsilon * \eta)_X = \epsilon_{G(X)} \circ J(\eta_X).$$
By using whiskering (see below), we can write
$$(\epsilon * \eta)_X = (\epsilon G)_X \circ (J\eta)_X,$$
hence
$$\epsilon * \eta = (\epsilon G) \circ (J\eta).$$
This horizontal composition of natural transformations is also associative with identity.
This identity is the identity natural transformation on the identity functor, i.e., the natural transformation that associates to each object its identity morphism: for an object $X$ in a category $C$, $(\operatorname{id}_{\operatorname{id}_C})_X = \operatorname{id}_X$.
For $\eta \colon F \Rightarrow G$ with $F, G \colon C \to D$, $\operatorname{id}_{\operatorname{id}_D} * \eta = \eta = \eta * \operatorname{id}_{\operatorname{id}_C}$.
As identity functors $\operatorname{id}_C$ and $\operatorname{id}_D$ are functors, the identity for horizontal composition is also the identity for vertical composition, but not vice versa.
Whiskering
Whiskering is an external binary operation between a functor and a natural transformation.
If $\eta \colon F \Rightarrow G$ is a natural transformation between functors $F, G \colon C \to D$, and $H \colon D \to E$ is another functor, then we can form the natural transformation $H\eta \colon H \circ F \Rightarrow H \circ G$ by defining
$$(H\eta)_X = H(\eta_X).$$
If on the other hand $K \colon B \to C$ is a functor, the natural transformation $\eta K \colon F \circ K \Rightarrow G \circ K$ is defined by
$$(\eta K)_X = \eta_{K(X)}.$$
It is also a horizontal composition where one of the natural transformations is an identity natural transformation:
$$H\eta = \operatorname{id}_H * \eta \quad\text{and}\quad \eta K = \eta * \operatorname{id}_K.$$
Note that $\operatorname{id}_H$ (resp. $\operatorname{id}_K$) is generally not the left (resp. right) identity of horizontal composition ($H\eta \neq \eta$ and $\eta K \neq \eta$ in general), except if $H$ (resp. $K$) is the identity functor of the category $D$ (resp. $C$).
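In the same hedged Haskell reading used above, horizontal composition composes across an application of fmap, and the two whiskerings are its special cases with an identity on one side:

{-# LANGUAGE RankNTypes #-}

type Nat f g = forall x. f x -> g x

-- Horizontal composition (eps * eta), with components
-- eps_{G x} . fmap eta_x : J (F x) -> K (G x).
horiz :: Functor j => Nat j k -> Nat f g -> j (f x) -> k (g x)
horiz eps eta = eps . fmap eta

-- Whiskering H eta = id_H * eta: apply the components under H.
whiskerL :: Functor h => Nat f g -> h (f x) -> h (g x)
whiskerL eta = fmap eta

-- Whiskering eta K = eta * id_K: the component of eta at the type (k x).
whiskerR :: Nat f g -> f (k x) -> g (k x)
whiskerR eta = eta

safeHead :: Nat [] Maybe
safeHead []      = Nothing
safeHead (x : _) = Just x

main :: IO ()
main = print (horiz safeHead safeHead [[1, 2], [3 :: Int]])  -- Just (Just 1)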
Interchange law
The two operations are related by an identity which exchanges vertical composition with horizontal composition: if we have four natural transformations $\eta \colon F \Rightarrow G$ and $\mu \colon G \Rightarrow H$ between functors $F, G, H \colon C \to D$, and $\eta' \colon F' \Rightarrow G'$ and $\mu' \colon G' \Rightarrow H'$ between functors $F', G', H' \colon D \to E$ (the configuration originally shown in an accompanying image), then the following identity holds:
$$(\mu' \circ \eta') * (\mu \circ \eta) = (\mu' * \mu) \circ (\eta' * \eta).$$
Vertical and horizontal compositions are also linked through identity natural transformations:
for $F \colon C \to D$ and $G \colon D \to E$, $\operatorname{id}_G * \operatorname{id}_F = \operatorname{id}_{G \circ F}$.
As whiskering is horizontal composition with an identity, the interchange law gives immediately the compact formulas of horizontal composition of $\eta \colon F \Rightarrow G$ and $\epsilon \colon J \Rightarrow K$ without having to analyze components and the commutative diagram:
$$\epsilon * \eta = (\epsilon G) \circ (J\eta) = (K\eta) \circ (\epsilon F).$$
Functor categories
If $C$ is any category and $I$ is a small category, we can form the functor category $C^I$ having as objects all functors from $I$ to $C$ and as morphisms the natural transformations between those functors. This forms a category since for any functor $F$ there is an identity natural transformation $1_F$ (which assigns to every object $X$ the identity morphism on $F(X)$) and the composition of two natural transformations (the "vertical composition" above) is again a natural transformation.
The isomorphisms in $C^I$ are precisely the natural isomorphisms. That is, a natural transformation $\eta \colon F \Rightarrow G$ is a natural isomorphism if and only if there exists a natural transformation $\epsilon \colon G \Rightarrow F$ such that $\eta \circ \epsilon = 1_G$ and $\epsilon \circ \eta = 1_F$.
The functor category $C^I$ is especially useful if $I$ arises from a directed graph. For instance, if $I$ is the category of the directed graph $\bullet \to \bullet$, then $C^I$ has as objects the morphisms of $C$, and a morphism between $\varphi \colon U \to V$ and $\psi \colon X \to Y$ in $C^I$ is a pair of morphisms $f \colon U \to X$ and $g \colon V \to Y$ in $C$ such that the "square commutes", i.e. $\psi \circ f = g \circ \varphi$.
More generally, one can build the 2-category whose
0-cells (objects) are the small categories,
1-cells (arrows) between two objects $C$ and $D$ are the functors from $C$ to $D$,
2-cells between two 1-cells (functors) $F \colon C \to D$ and $G \colon C \to D$ are the natural transformations from $F$ to $G$.
The horizontal and vertical compositions are the compositions between natural transformations described previously. A functor category is then simply a hom-category in this category (smallness issues aside).
More examples
Every limit and colimit provides an example for a simple natural transformation, as a cone amounts to a natural transformation with the diagonal functor as domain. Indeed, if limits and colimits are defined directly in terms of their universal property, they are universal morphisms in a functor category.
Yoneda lemma
If $X$ is an object of a locally small category $C$, then the assignment $Y \mapsto \operatorname{Hom}_C(X, Y)$ defines a covariant functor $F_X \colon C \to \mathbf{Set}$. This functor is called representable (more generally, a representable functor is any functor naturally isomorphic to this functor for an appropriate choice of $X$). The natural transformations from a representable functor to an arbitrary functor $F \colon C \to \mathbf{Set}$ are completely known and easy to describe; this is the content of the Yoneda lemma.
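In the hedged Haskell reading used in earlier examples, the lemma says that natural transformations out of the representable functor ((->) a) into any Functor f are in bijection with the elements of f a:

{-# LANGUAGE RankNTypes #-}

-- Natural transformations Hom(a, -) => f correspond to elements of f a.
toNat :: Functor f => f a -> (forall x. (a -> x) -> f x)
toNat fa k = fmap k fa

fromNat :: (forall x. (a -> x) -> f x) -> f a
fromNat t = t id  -- evaluate at the identity morphism

main :: IO ()
main = print (fromNat (toNat [1, 2, 3 :: Int]))  -- [1,2,3]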
Historical notes
Saunders Mac Lane, one of the founders of category theory, is said to have remarked, "I didn't invent categories to study functors; I invented them to study natural transformations." Just as the study of groups is not complete without a study of homomorphisms, so the study of categories is not complete without the study of functors. The reason for Mac Lane's comment is that the study of functors is itself not complete without the study of natural transformations.
The context of Mac Lane's remark was the axiomatic theory of homology. Different ways of constructing homology could be shown to coincide: for example in the case of a simplicial complex the groups defined directly would be isomorphic to those of the singular theory. What cannot easily be expressed without the language of natural transformations is how homology groups are compatible with morphisms between objects, and how two equivalent homology theories not only have the same homology groups, but also the same morphisms between those groups.
Biodiversity
Biodiversity is the variability of life on Earth. It can be measured on various levels. There is for example genetic variability, species diversity, ecosystem diversity and phylogenetic diversity. Diversity is not distributed evenly on Earth. It is greater in the tropics as a result of the warm climate and high primary productivity in the region near the equator. Tropical forest ecosystems cover less than one-fifth of Earth's terrestrial area and contain about 50% of the world's species. There are latitudinal gradients in species diversity for both marine and terrestrial taxa.
Since life began on Earth, six major mass extinctions and several minor events have led to large and sudden drops in biodiversity. The Phanerozoic aeon (the last 540 million years) marked a rapid growth in biodiversity via the Cambrian explosion. In this period, the majority of multicellular phyla first appeared. The next 400 million years included repeated, massive biodiversity losses. Those events have been classified as mass extinction events. In the Carboniferous, rainforest collapse may have led to a great loss of plant and animal life. The Permian–Triassic extinction event, 251 million years ago, was the worst; vertebrate recovery took 30 million years.
Human activities have led to an ongoing biodiversity loss and an accompanying loss of genetic diversity. This process is often referred to as Holocene extinction, or sixth mass extinction. For example, it was estimated in 2007 that up to 30% of all species will be extinct by 2050. Destroying habitats for farming is a key reason why biodiversity is decreasing today. Climate change also plays a role. This can be seen for example in the effects of climate change on biomes. This anthropogenic extinction may have started toward the end of the Pleistocene, as some studies suggest that the megafaunal extinction event that took place around the end of the last ice age partly resulted from overhunting.
Definitions
Biologists most often define biodiversity as the "totality of genes, species and ecosystems of a region". An advantage of this definition is that it presents a unified view of the traditional types of biological variety previously identified:
taxonomic diversity (usually measured at the species diversity level)
ecological diversity (often viewed from the perspective of ecosystem diversity)
morphological diversity (which stems from genetic diversity and molecular diversity)
functional diversity (which is a measure of the number of functionally disparate species within a population (e.g. different feeding mechanism, different motility, predator vs prey, etc.))
Biodiversity is most commonly used to replace the more clearly defined and long-established terms species diversity and species richness. However, there is no single concrete definition of biodiversity, and the term continues to be redefined. Other definitions include (in chronological order):
An explicit definition consistent with this interpretation was first given in a paper by Bruce A. Wilcox commissioned by the International Union for the Conservation of Nature and Natural Resources (IUCN) for the 1982 World National Parks Conference. Wilcox's definition was "Biological diversity is the variety of life forms...at all levels of biological systems (i.e., molecular, organismic, population, species and ecosystem)...".
A 1984 publication by Wilcox defined biodiversity genetically, as the diversity of alleles, genes and organisms; work in this tradition studies processes such as mutation and gene transfer that drive evolution.
The 1992 United Nations Earth Summit defined biological diversity as "the variability among living organisms from all sources, including, inter alia, terrestrial, marine and other aquatic ecosystems and the ecological complexes of which they are part: this includes diversity within species, between species and of ecosystems". This definition is used in the United Nations Convention on Biological Diversity.
Gaston and Spicer's definition in their book "Biodiversity: an introduction" in 2004 is "variation of life at all levels of biological organization".
The Food and Agriculture Organization of the United Nations (FAO) defined biodiversity in 2019 as "the variability that exists among living organisms (both within and between species) and the ecosystems of which they are part."
Number of species
According to estimates by Mora et al. (2011), there are approximately 8.7 million terrestrial species and 2.2 million oceanic species. The authors note that these estimates are strongest for eukaryotic organisms and likely represent the lower bound of prokaryote diversity. Other estimates include:
220,000 vascular plants, estimated using the species-area relation method
0.7-1 million marine species
10–30 million insects (of which some 0.9 million are known today);
5–10 million bacteria;
1.5-3 million fungi, estimates based on data from the tropics, long-term non-tropical sites and molecular studies that have revealed cryptic speciation. Some 0.075 million species of fungi had been documented by 2001;
1 million mites
The number of microbial species is not reliably known, but the Global Ocean Sampling Expedition dramatically increased the estimates of genetic diversity by identifying an enormous number of new genes from near-surface plankton samples at various marine locations, initially over the 2004–2006 period. The findings may eventually cause a significant change in the way science defines species and other taxonomic categories.
Since the rate of extinction has increased, many extant species may become extinct before they are described. Not surprisingly, among the Animalia the most studied groups are birds and mammals, whereas fishes and arthropods are the least studied animal groups.
Current biodiversity loss
During the last century, decreases in biodiversity have been increasingly observed. It was estimated in 2007 that up to 30% of all species will be extinct by 2050. Of these, about one eighth of known plant species are threatened with extinction. Estimates reach as high as 140,000 species per year (based on species–area theory). This figure indicates unsustainable ecological practices, because only a small number of new species emerge each year. The rate of species loss is greater now than at any time in human history, with extinctions occurring at rates hundreds of times higher than background extinction rates, and it is expected to grow in the upcoming years. As of 2012, some studies suggest that 25% of all mammal species could be extinct in 20 years.
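For context, the species–area relation behind such extrapolations has the standard form below (a textbook sketch added here; the exponent value is a typical assumption, not a figure from this article):
$$S = cA^{z} \qquad\Rightarrow\qquad \frac{S_{\text{new}}}{S_{\text{old}}} = \left(\frac{A_{\text{new}}}{A_{\text{old}}}\right)^{z}.$$
With a commonly used $z \approx 0.25$, reducing a habitat to 10% of its original area predicts that about $0.1^{0.25} \approx 56\%$ of its species eventually persist, i.e. an eventual loss of roughly 44%.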
In absolute terms, the planet has lost 58% of its biodiversity since 1970 according to a 2016 study by the World Wildlife Fund. The Living Planet Report 2014 claims that "the number of mammals, birds, reptiles, amphibians, and fish across the globe is, on average, about half the size it was 40 years ago". Within that figure, terrestrial wildlife declined by 39%, marine wildlife by 39% and freshwater wildlife by 76%. Biodiversity took the biggest hit in Latin America, plummeting 83 percent. High-income countries showed a 10% increase in biodiversity, which was canceled out by a loss in low-income countries. This is despite the fact that high-income countries use five times the ecological resources of low-income countries, which was explained as a result of a process whereby wealthy nations are outsourcing resource depletion to poorer nations, which are suffering the greatest ecosystem losses.
A 2017 study published in PLOS One found that the biomass of insect life in Germany had declined by three-quarters in the last 25 years. Dave Goulson of Sussex University stated that their study suggested that humans "appear to be making vast tracts of land inhospitable to most forms of life, and are currently on course for ecological Armageddon. If we lose the insects then everything is going to collapse."
In 2020 the World Wide Fund for Nature published a report saying that "biodiversity is being destroyed at a rate unprecedented in human history". The report found that the populations of the examined species had declined by an average of 68% between 1970 and 2016.
Of 70,000 monitored species, around 48% are experiencing population declines from human activity (in 2023), whereas only 3% have increasing populations.
Rates of decline in biodiversity in the current sixth mass extinction match or exceed rates of loss in the five previous mass extinction events in the fossil record. Biodiversity loss is in fact "one of the most critical manifestations of the Anthropocene" (since around the 1950s); the continued decline of biodiversity constitutes "an unprecedented threat" to the continued existence of human civilization. The reduction is caused primarily by human impacts, particularly habitat destruction.
Since the Stone Age, species loss has accelerated above the average basal rate, driven by human activity. Estimates of species losses are at a rate 100–10,000 times as fast as is typical in the fossil record.
Loss of biodiversity results in the loss of natural capital that supplies ecosystem goods and services. Species today are being wiped out at a rate 100 to 1,000 times higher than baseline, and the rate of extinctions is increasing. This process destroys the resilience and adaptability of life on Earth.
In 2006, many species were formally classified as rare, endangered or threatened; moreover, scientists have estimated that millions more species are at risk which have not been formally recognized. About 40 percent of the 40,177 species assessed using the IUCN Red List criteria are now listed as threatened with extinction—a total of 16,119. As of late 2022, 9,251 species were listed by the IUCN as critically endangered.
Numerous scientists and the IPBES Global Assessment Report on Biodiversity and Ecosystem Services assert that human population growth and overconsumption are the primary factors in this decline. However, other scientists have criticized this finding and say that loss of habitat caused by "the growth of commodities for export" is the main driver.
Some studies have however pointed out that habitat destruction for the expansion of agriculture and the overexploitation of wildlife are the more significant drivers of contemporary biodiversity loss, not climate change.
Distribution
Biodiversity is not evenly distributed, rather it varies greatly across the globe as well as within regions and seasons. Among other factors, the diversity of all living things (biota) depends on temperature, precipitation, altitude, soils, geography and the interactions between other species. The study of the spatial distribution of organisms, species and ecosystems, is the science of biogeography.
Diversity consistently measures higher in the tropics and in other localized regions such as the Cape Floristic Region and lower in polar regions generally. Rain forests that have had wet climates for a long time, such as Yasuní National Park in Ecuador, have particularly high biodiversity.
There is local biodiversity, which directly impacts daily life, affecting the availability of fresh water, food choices, and fuel sources for humans. Regional biodiversity includes habitats and ecosystems that synergize and either overlap or differ on a regional scale. National biodiversity determines a country's ability to thrive according to its habitats and ecosystems on a national scale; within a country, endangered species are supported first at the national level and then internationally. Ecotourism may be utilized to support the economy, encouraging tourists to continue to visit and support the species and ecosystems they visit while they enjoy the available amenities. International biodiversity impacts global livelihoods, food systems, and health. Pollution, overconsumption, and climate change can devastate international biodiversity. Nature-based solutions are a critical tool for a global resolution. Many species are in danger of becoming extinct and need world leaders to be proactive with the Kunming-Montreal Global Biodiversity Framework.
Terrestrial biodiversity is thought to be up to 25 times greater than ocean biodiversity. Forests harbour most of Earth's terrestrial biodiversity. The conservation of the world's biodiversity is thus utterly dependent on the way in which we interact with and use the world's forests. A new method used in 2011 put the total number of species on Earth at 8.7 million, of which 2.1 million were estimated to live in the ocean. However, this estimate seems to under-represent the diversity of microorganisms. Forests provide habitats for 80 percent of amphibian species, 75 percent of bird species and 68 percent of mammal species. About 60 percent of all vascular plants are found in tropical forests. Mangroves provide breeding grounds and nurseries for numerous species of fish and shellfish and help trap sediments that might otherwise adversely affect seagrass beds and coral reefs, which are habitats for many more marine species. Forests span around 4 billion acres (nearly a third of the Earth's land mass) and are home to approximately 80% of the world's biodiversity. About 1 billion hectares are covered by primary forests. Over 700 million hectares of the world's woods are officially protected.
The biodiversity of forests varies considerably according to factors such as forest type, geography, climate and soils – in addition to human use. Most forest habitats in temperate regions support relatively few animal and plant species and species that tend to have large geographical distributions, while the montane forests of Africa, South America and Southeast Asia and lowland forests of Australia, coastal Brazil, the Caribbean islands, Central America and insular Southeast Asia have many species with small geographical distributions. Areas with dense human populations and intense agricultural land use, such as Europe, parts of Bangladesh, China, India and North America, are less intact in terms of their biodiversity. Northern Africa, southern Australia, coastal Brazil, Madagascar and South Africa, are also identified as areas with striking losses in biodiversity intactness. European forests in EU and non-EU nations comprise more than 30% of Europe's land mass (around 227 million hectares), representing an almost 10% growth since 1990.
Latitudinal gradients
Generally, there is an increase in biodiversity from the poles to the tropics. Thus localities at lower latitudes have more species than localities at higher latitudes. This is often referred to as the latitudinal gradient in species diversity. Several ecological factors may contribute to the gradient, but the ultimate factor behind many of them is the greater mean temperature at the equator compared to that at the poles.
Even though terrestrial biodiversity declines from the equator to the poles, some studies claim that this characteristic is unverified in aquatic ecosystems, especially in marine ecosystems. The latitudinal distribution of parasites does not appear to follow this rule. Also, in terrestrial ecosystems the soil bacterial diversity has been shown to be highest in temperate climatic zones, and has been attributed to carbon inputs and habitat connectivity.
In 2016, an alternative hypothesis ("the fractal biodiversity") was proposed to explain the biodiversity latitudinal gradient. In this study, the species pool size and the fractal nature of ecosystems were combined to clarify some general patterns of this gradient. This hypothesis considers temperature, moisture, and net primary production (NPP) as the main variables of an ecosystem niche and as the axes of the ecological hypervolume. In this way, it is possible to build fractal hypervolumes, whose fractal dimension rises to three moving towards the equator.
Biodiversity hotspots
A biodiversity hotspot is a region with a high level of endemic species that have experienced great habitat loss. The term hotspot was introduced in 1988 by Norman Myers. While hotspots are spread all over the world, the majority are forest areas and most are located in the tropics.
Brazil's Atlantic Forest is considered one such hotspot, containing roughly 20,000 plant species, 1,350 vertebrates and millions of insects, about half of which occur nowhere else. The island of Madagascar and India are also particularly notable. Colombia is characterized by high biodiversity, with the highest rate of species by area unit worldwide and the largest number of endemics (species that are not found naturally anywhere else) of any country. About 10% of the species of the Earth can be found in Colombia, including over 1,900 species of bird, more than in Europe and North America combined; Colombia has 10% of the world's mammal species, 14% of the amphibian species and 18% of the bird species of the world. Madagascar dry deciduous forests and lowland rainforests possess a high ratio of endemism. Since the island separated from mainland Africa 66 million years ago, many species and ecosystems have evolved independently. Indonesia's 17,000 islands contain 10% of the world's flowering plants, 12% of mammals and 17% of reptiles, amphibians and birds—along with nearly 240 million people. Many regions of high biodiversity and/or endemism arise from specialized habitats which require unusual adaptations, for example, alpine environments in high mountains, or Northern European peat bogs.
Accurately measuring differences in biodiversity can be difficult. Selection bias amongst researchers may contribute to biased empirical research for modern estimates of biodiversity. In 1768, Rev. Gilbert White succinctly observed of his Selborne, Hampshire "all nature is so full, that that district produces the most variety which is the most examined."
Evolution over geologic timeframes
Biodiversity is the result of 3.5 billion years of evolution. The origin of life has not been established by science; however, some evidence suggests that life may already have been well-established only a few hundred million years after the formation of the Earth. Until approximately 2.5 billion years ago, all life consisted of microorganisms – archaea, bacteria, and single-celled protozoans and protists.
Biodiversity grew fast during the Phanerozoic (the last 540 million years), especially during the so-called Cambrian explosion—a period during which nearly every phylum of multicellular organisms first appeared. However, recent studies suggest that this diversification had started earlier, at least in the Ediacaran, and that it continued in the Ordovician. Over the next 400 million years or so, invertebrate diversity showed little overall trend and vertebrate diversity shows an overall exponential trend. This dramatic rise in diversity was marked by periodic, massive losses of diversity classified as mass extinction events. A significant loss occurred in anamniotic limbed vertebrates when rainforests collapsed in the Carboniferous, but amniotes seem to have been little affected by this event; their diversification slowed down later, around the Asselian/Sakmarian boundary, in the early Cisuralian (Early Permian), about 293 Ma ago. The worst was the Permian-Triassic extinction event, 251 million years ago. Vertebrates took 30 million years to recover from this event.
The most recent major mass extinction event, the Cretaceous–Paleogene extinction event, occurred 66 million years ago. This period has attracted more attention than others because it resulted in the extinction of the dinosaurs, which were represented by many lineages at the end of the Maastrichtian, just before that extinction event. However, many other taxa were affected by this crisis, which affected even marine taxa, such as ammonites, which also became extinct around that time.
The biodiversity of the past is called Paleobiodiversity. The fossil record suggests that the last few million years featured the greatest biodiversity in history. However, not all scientists support this view, since there is uncertainty as to how strongly the fossil record is biased by the greater availability and preservation of recent geologic sections. Some scientists believe that corrected for sampling artifacts, modern biodiversity may not be much different from biodiversity 300 million years ago, whereas others consider the fossil record reasonably reflective of the diversification of life. Estimates of the present global macroscopic species diversity vary from 2 million to 100 million, with a best estimate of somewhere near 9 million, the vast majority arthropods. Diversity appears to increase continually in the absence of natural selection.
Diversification
The existence of a global carrying capacity, limiting the amount of life that can live at once, is debated, as is the question of whether such a limit would also cap the number of species. While records of life in the sea show a logistic pattern of growth, life on land (insects, plants and tetrapods) shows an exponential rise in diversity. As one author states, "Tetrapods have not yet invaded 64 percent of potentially habitable modes and it could be that without human influence the ecological and taxonomic diversity of tetrapods would continue to increase exponentially until most or all of the available eco-space is filled."
It also appears that the diversity continues to increase over time, especially after mass extinctions.
On the other hand, changes through the Phanerozoic correlate much better with the hyperbolic model (widely used in population biology, demography and macrosociology, as well as fossil biodiversity) than with exponential and logistic models. The latter models imply that changes in diversity are guided by a first-order positive feedback (more ancestors, more descendants) and/or a negative feedback arising from resource limitation. The hyperbolic model implies a second-order positive feedback. Differences in the strength of the second-order feedback due to different intensities of interspecific competition might explain the faster rediversification of ammonoids in comparison to bivalves after the end-Permian extinction. The hyperbolic pattern of the world population growth arises from a second-order positive feedback between the population size and the rate of technological growth. The hyperbolic character of biodiversity growth can be similarly accounted for by a feedback between diversity and community structure complexity. The similarity between the curves of biodiversity and human population probably comes from the fact that both are derived from the interference of the hyperbolic trend with cyclical and stochastic dynamics.
Most biologists agree however that the period since human emergence is part of a new mass extinction, named the Holocene extinction event, caused primarily by the impact humans are having on the environment. It has been argued that the present rate of extinction is sufficient to eliminate most species on the planet Earth within 100 years.
New species are regularly discovered (on average between 5,000 and 10,000 new species each year, most of them insects) and many, though discovered, are not yet classified (estimates are that nearly 90% of all arthropods are not yet classified). Most of the terrestrial diversity is found in tropical forests and, in general, the land has more species than the ocean; some 8.7 million species may exist on Earth, of which some 2.1 million live in the ocean.
Species diversity in geologic time frames
It is estimated that 5 to 50 billion species have existed on the planet. Assuming that there may be a maximum of about 50 million species currently alive, it stands to reason that greater than 99% of the planet's species went extinct prior to the evolution of humans. Estimates on the number of Earth's current species range from 10 million to 14 million, of which about 1.2 million have been documented and over 86% have not yet been described. However, a May 2016 scientific report estimates that 1 trillion species are currently on Earth, with only one-thousandth of one percent described. The total amount of related DNA base pairs on Earth is estimated at 5.0 × 10^37 and weighs 50 billion tonnes. In comparison, the total mass of the biosphere has been estimated to be as much as four trillion tons of carbon. In July 2016, scientists reported identifying a set of 355 genes from the last universal common ancestor (LUCA) of all organisms living on Earth.
The age of Earth is about 4.54 billion years. The earliest undisputed evidence of life dates at least from 3.7 billion years ago, during the Eoarchean era after a geological crust started to solidify following the earlier molten Hadean eon. There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. Other early physical evidence of a biogenic substance is graphite in 3.7 billion-year-old meta-sedimentary rocks discovered in Western Greenland. More recently, in 2015, "remains of biotic life" were found in 4.1 billion-year-old rocks in Western Australia. According to one of the researchers, "If life arose relatively quickly on Earth...then it could be common in the universe."
Role and benefits of biodiversity
Ecosystem services
There have been many claims about biodiversity's effect on the ecosystem services, especially provisioning and regulating services. Some of those claims have been validated, some are incorrect and some lack enough evidence to draw definitive conclusions.
Ecosystem services have been grouped in three types:
Provisioning services which involve the production of renewable resources (e.g.: food, wood, fresh water)
Regulating services which are those that lessen environmental change (e.g.: climate regulation, pest/disease control)
Cultural services represent human value and enjoyment (e.g.: landscape aesthetics, cultural heritage, outdoor recreation and spiritual significance)
Experiments with controlled environments have shown that humans cannot easily build ecosystems to support human needs; for example insect pollination cannot be mimicked, though there have been attempts to create artificial pollinators using unmanned aerial vehicles. The economic activity of pollination alone represented between $2.1 billion and $14.6 billion in 2003. Other sources have reported somewhat conflicting results and in 1997 Robert Costanza and his colleagues reported the estimated global value of ecosystem services (not captured in traditional markets) at an average of $33 trillion annually.
Provisioning services
With regards to provisioning services, greater species diversity has the following benefits:
Greater species diversity of plants increases fodder yield (synthesis of 271 experimental studies).
Greater genetic diversity of plants (i.e. diversity within a single species) increases overall crop yield (synthesis of 575 experimental studies), although another review of 100 experimental studies reported mixed evidence.
Greater species diversity of trees increases overall wood production (synthesis of 53 experimental studies). However, there is not enough data to draw a conclusion about the effect of tree trait diversity on wood production.
Regulating services
With regards to regulating services, greater species diversity has the following benefits:
Greater species diversity
of fish increases the stability of fisheries yield (synthesis of 8 observational studies)
of plants increases carbon sequestration, but note that this finding only relates to actual uptake of carbon dioxide and not long-term storage (synthesis of 479 experimental studies)
of plants increases soil nutrient remineralization (synthesis of 103 experimental studies), increases soil organic matter (synthesis of 85 experimental studies) and decreases disease prevalence on plants (synthesis of 107 experimental studies)
of natural pest enemies decreases herbivorous pest populations (data from two separate reviews: a synthesis of 266 experimental and observational studies and a synthesis of 18 observational studies), although another review of 38 experimental studies found mixed support for this claim, suggesting that in cases where mutual intraguild predation occurs, a single predatory species is often more effective
Agriculture
Agricultural diversity can be divided into two categories: intraspecific diversity, which includes the genetic variation within a single species, like the potato (Solanum tuberosum) that is composed of many different forms and types (e.g. in the U.S. they might compare russet potatoes with new potatoes or purple potatoes, all different, but all part of the same species, S. tuberosum). The other category of agricultural diversity is called interspecific diversity and refers to the number and types of different species.
Agricultural diversity can also be divided by whether it is 'planned' diversity or 'associated' diversity. This is a functional classification that we impose and not an intrinsic feature of life or diversity. Planned diversity includes the crops which a farmer has encouraged, planted or raised (e.g. crops, covers, symbionts, and livestock, among others), which can be contrasted with the associated diversity that arrives among the crops, uninvited (e.g. herbivores, weed species and pathogens, among others).
Associated biodiversity can be damaging or beneficial. The beneficial associated biodiversity include for instance wild pollinators such as wild bees and syrphid flies that pollinate crops and natural enemies and antagonists to pests and pathogens. Beneficial associated biodiversity occurs abundantly in crop fields and provide multiple ecosystem services such as pest control, nutrient cycling and pollination that support crop production.
Although about 80 percent of humans' food supply comes from just 20 kinds of plants, humans use at least 40,000 species. Earth's surviving biodiversity provides resources for increasing the range of food and other products suitable for human use, although the present extinction rate shrinks that potential.
Human health
Biodiversity's relevance to human health is becoming an international political issue, as scientific evidence builds on the global health implications of biodiversity loss. This issue is closely linked with the issue of climate change, as many of the anticipated health risks of climate change are associated with changes in biodiversity (e.g. changes in populations and distribution of disease vectors, scarcity of fresh water, impacts on agricultural biodiversity and food resources etc.). This is because the species most likely to disappear are those that buffer against infectious disease transmission, while surviving species tend to be the ones that increase disease transmission, such as that of West Nile Virus, Lyme disease and Hantavirus, according to a study co-authored by Felicia Keesing, an ecologist at Bard College, and Drew Harvell, associate director for Environment of the Atkinson Center for a Sustainable Future (ACSF) at Cornell University.
Some of the health issues influenced by biodiversity include dietary health and nutrition security, infectious disease, medical science and medicinal resources, social and psychological health. Biodiversity is also known to have an important role in reducing disaster risk and in post-disaster relief and recovery efforts.
Biodiversity provides critical support for drug discovery and the availability of medicinal resources. A significant proportion of drugs are derived, directly or indirectly, from biological sources: at least 50% of the pharmaceutical compounds on the US market are derived from plants, animals and microorganisms, while about 80% of the world population depends on medicines from nature (used in either modern or traditional medical practice) for primary healthcare. Only a tiny fraction of wild species has been investigated for medical potential.
Marine ecosystems are particularly important, although inappropriate bioprospecting can increase biodiversity loss, as well as violating the laws of the communities and states from which the resources are taken.
Business and industry
Many industrial materials derive directly from biological sources. These include building materials, fibers, dyes, rubber, and oil. Biodiversity is also important to the security of resources such as water, timber, paper, fiber, and food. As a result, biodiversity loss is a significant risk factor in business development and a threat to long-term economic sustainability.
Cultural and aesthetic value
Philosophically it could be argued that biodiversity has intrinsic aesthetic and spiritual value to mankind in and of itself. This idea can be used as a counterweight to the notion that tropical forests and other ecological realms are only worthy of conservation because of the services they provide.
Biodiversity also affords many non-material benefits including spiritual and aesthetic values, knowledge systems and education.
Measuring biodiversity
Analytical limits
Less than 1% of all species that have been described have been studied beyond noting their existence. The vast majority of Earth's species are microbial. Contemporary biodiversity physics is "firmly fixated on the visible [macroscopic] world". For example, microbial life is metabolically and environmentally more diverse than multicellular life (see e.g., extremophile). "On the tree of life, based on analyses of small-subunit ribosomal RNA, visible life consists of barely noticeable twigs. The inverse relationship of size and population recurs higher on the evolutionary ladder—to a first approximation, all multicellular species on Earth are insects". Insect extinction rates are high—supporting the Holocene extinction hypothesis.
Biodiversity changes (other than losses)
Natural seasonal variations
Biodiversity naturally varies due to seasonal shifts. Spring's arrival enhances biodiversity as numerous species breed and feed, while winter's onset temporarily reduces it as some insects perish and migrating animals leave. Additionally, the seasonal fluctuation in plant and invertebrate populations influences biodiversity.
Introduced and invasive species
Barriers such as large rivers, seas, oceans, mountains and deserts encourage diversity by enabling independent evolution on either side of the barrier, via the process of allopatric speciation. The term invasive species is applied to species that breach the natural barriers that would normally keep them constrained. Without barriers, such species occupy new territory, often supplanting native species by occupying their niches, or by using resources that would normally sustain native species.
Species are increasingly being moved by humans (on purpose and accidentally). Some studies say that diverse ecosystems are more resilient and resist invasive plants and animals. Many studies cite effects of invasive species on natives, but not extinctions.
Invasive species seem to increase local (alpha) diversity, which decreases turnover of diversity (beta diversity). Overall gamma diversity may be lowered because species are going extinct because of other causes, but even some of the most insidious invaders (e.g.: Dutch elm disease, emerald ash borer, chestnut blight in North America) have not caused their host species to become extinct. Extirpation, population decline and homogenization of regional biodiversity are much more common. Human activities have frequently been the cause of invasive species circumventing their barriers, by introducing them for food and other purposes. Human activities therefore allow species to migrate to new areas (and thus become invasive) on time scales much shorter than those historically required for a species to extend its range.
At present, several countries have already imported so many exotic species, particularly agricultural and ornamental plants, that their indigenous fauna/flora may be outnumbered. For example, the introduction of kudzu from Southeast Asia to Canada and the United States has threatened biodiversity in certain areas. Other examples are pines, which have invaded forests, shrublands and grasslands in the southern hemisphere.
Hybridization and genetic pollution
Endemic species can be threatened with extinction through the process of genetic pollution, i.e. uncontrolled hybridization, introgression and genetic swamping. Genetic pollution leads to homogenization or replacement of local genomes as a result of either a numerical and/or fitness advantage of an introduced species.
Hybridization and introgression are side-effects of introduction and invasion. These phenomena can be especially detrimental to rare species that come into contact with more abundant ones. The abundant species can interbreed with the rare species, swamping its gene pool. This problem is not always apparent from morphological (outward appearance) observations alone. Some degree of gene flow is normal adaptation and not all gene and genotype constellations can be preserved. However, hybridization with or without introgression may, nevertheless, threaten a rare species' existence.
Conservation
Conservation biology matured in the mid-20th century as ecologists, naturalists and other scientists began to research and address issues pertaining to global biodiversity declines.
The conservation ethic advocates management of natural resources for the purpose of sustaining biodiversity in species, ecosystems, the evolutionary process and human culture and society.
Conservation biology is reforming around strategic plans to protect biodiversity. Preserving global biodiversity is a priority in strategic conservation plans that are designed to engage public policy and concerns affecting local, regional and global scales of communities, ecosystems and cultures. Action plans identify ways of sustaining human well-being, employing natural capital, macroeconomic policies including economic incentives, and ecosystem services.
In EU Directive 1999/22/EC, zoos are described as having a role in the preservation of the biodiversity of wild animals by conducting research or participating in breeding programs.
Protection and restoration techniques
Removal of exotic species will allow the species that they have negatively impacted to recover their ecological niches. Exotic species that have become pests can be identified taxonomically (e.g., with Digital Automated Identification SYstem (DAISY), using the barcode of life). Removal is practical only given large groups of individuals due to the economic cost.
As sustainable populations of the remaining native species in an area become assured, "missing" species that are candidates for reintroduction can be identified using databases such as the Encyclopedia of Life and the Global Biodiversity Information Facility.
Biodiversity banking places a monetary value on biodiversity. One example is the Australian Native Vegetation Management Framework.
Gene banks are collections of specimens and genetic material. Some banks intend to reintroduce banked species to the ecosystem (e.g., via tree nurseries).
Reduction and better targeting of pesticides allows more species to survive in agricultural and urbanized areas.
Location-specific approaches may be less useful for protecting migratory species. One approach is to create wildlife corridors that correspond to the animals' movements. National and other boundaries can complicate corridor creation.
Protected areas
Protected areas, including forest reserves and biosphere reserves, serve many functions, including affording protection to wild animals and their habitat. Protected areas have been set up all over the world with the specific aim of protecting and conserving plants and animals. Some scientists have called on the global community to designate 30 percent of the planet as protected areas by 2030, and 50 percent by 2050, in order to mitigate biodiversity loss from anthropogenic causes. The target of protecting 30% of the area of the planet by the year 2030 (30 by 30) was adopted by almost 200 countries in the 2022 United Nations Biodiversity Conference. At the moment of adoption (December 2022), 17% of land territory and 10% of ocean territory were protected. In a study published 4 September 2020 in Science Advances, researchers mapped out regions that can help meet critical conservation and climate goals.
Protected areas safeguard nature and cultural resources and contribute to livelihoods, particularly at the local level. There are over 238,563 designated protected areas worldwide, equivalent to 14.9 percent of the earth's land surface, varying in their extension, level of protection, and type of management (IUCN, 2018).
The benefits of protected areas extend beyond their immediate environment and time. In addition to conserving nature, protected areas are crucial for securing the long-term delivery of ecosystem services. They provide numerous benefits including the conservation of genetic resources for food and agriculture, the provision of medicine and health benefits, the provision of water, recreation and tourism, and for acting as a buffer against disaster. Increasingly, there is acknowledgement of the wider socioeconomic values of these natural ecosystems and of the ecosystem services they can provide.
National parks and wildlife sanctuaries
A national park is a large natural or near-natural area set aside to protect large-scale ecological processes, which also provides a foundation for environmentally and culturally compatible spiritual, scientific, educational, recreational and visitor opportunities. These areas are selected by governments or private organizations to protect natural biodiversity along with its underlying ecological structure and supporting environmental processes, and to promote education and recreation. The International Union for Conservation of Nature (IUCN) and its World Commission on Protected Areas (WCPA) have defined "national park" as their Category II type of protected area. Wildlife sanctuaries, by contrast, aim only at the conservation of species.
Forest protected areas
Forest protected areas are a subset of all protected areas in which a significant portion of the area is forest. This may be the whole or only a part of the protected area. Globally, 18 percent of the world's forest area, or more than 700 million hectares, fall within legally established protected areas such as national parks, conservation areas and game reserves.
There is an estimated 726 million ha of forest in protected areas worldwide. Of the six major world regions, South America has the highest share of forests in protected areas, at 31 percent. These forests play a vital role in harboring more than 45,000 floral and 81,000 faunal species, of which 5,150 floral and 1,837 faunal species are endemic, that is, confined to a specific geographical area. In addition, there are 60,065 different tree species in the world.
In forest reserves, rights to activities like hunting and grazing are sometimes given to communities living on the fringes of the forest, who sustain their livelihood partially or wholly from forest resources or products.
Approximately 50 million hectares (or 24%) of European forest land is protected for biodiversity and landscape protection. Forests allocated for soil, water, and other ecosystem services encompass around 72 million hectares (32% of European forest area).
Role of society
Transformative change
In 2019, a summary for policymakers of the largest, most comprehensive study to date of biodiversity and ecosystem services, the Global Assessment Report on Biodiversity and Ecosystem Services, was published by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES). It stated that "the state of nature has deteriorated at an unprecedented and accelerating rate". To fix the problem, humanity will need a transformative change, including sustainable agriculture, reductions in consumption and waste, fishing quotas and collaborative water management.
The concept of nature-positive is playing a role in mainstreaming the goals of the Global Biodiversity Framework (GBF) for biodiversity. The aim of mainstreaming is to embed biodiversity considerations into public and private practice to conserve and sustainably use biodiversity on global and local levels. The concept of nature-positive refers to the societal goal to halt and reverse biodiversity loss, measured from a baseline of 2020 levels, and to achieve full so-called "nature recovery" by 2050.
Citizen science
Citizen science, also known as public participation in scientific research, has been widely used in environmental sciences and is particularly popular in a biodiversity-related context. It has been used to enable scientists to involve the general public in biodiversity research, thereby enabling the scientists to collect data that they would otherwise not have been able to obtain.
Volunteer observers have made significant contributions to on-the-ground knowledge about biodiversity, and recent improvements in technology have helped increase the flow and quality of occurrences from citizen sources. A 2016 study published in Biological Conservation registers the massive contributions that citizen scientists already make to data mediated by the Global Biodiversity Information Facility (GBIF). Despite some limitations of the dataset-level analysis, it is clear that nearly half of all occurrence records shared through the GBIF network come from datasets with significant volunteer contributions. Recording and sharing observations are enabled by several global-scale platforms, including iNaturalist and eBird.
Legal status
International
United Nations Convention on Biological Diversity (1992) and Cartagena Protocol on Biosafety;
UN BBNJ (High Seas Treaty) 2023 Intergovernmental conference on an international legally binding instrument under the UNCLOS on the conservation and sustainable use of marine biological diversity of areas beyond national jurisdiction (GA resolution 72/249)
Convention on International Trade in Endangered Species (CITES);
Ramsar Convention (Wetlands);
Bonn Convention on Migratory Species;
UNESCO Convention concerning the Protection of the World's Cultural and Natural Heritage (indirectly by protecting biodiversity habitats)
UNESCO Global Geoparks
Regional Conventions such as the Apia Convention
Bilateral agreements such as the Japan-Australia Migratory Bird Agreement.
Global agreements such as the Convention on Biological Diversity give "sovereign national rights over biological resources" (not property). The agreements commit countries to "conserve biodiversity", "develop resources for sustainability" and "share the benefits" resulting from their use. Biodiverse countries that allow bioprospecting or collection of natural products expect a share of the benefits, rather than allowing the individual or institution that discovers/exploits the resource to capture them privately. Bioprospecting can become a type of biopiracy when such principles are not respected.
Sovereignty principles can rely upon what is better known as Access and Benefit Sharing Agreements (ABAs). The Convention on Biological Diversity implies informed consent between the source country and the collector, to establish which resource will be used and for what, and to settle on a fair agreement on benefit sharing.
On 19 December 2022, during the 2022 United Nations Biodiversity Conference, every country on earth, with the exception of the United States and the Holy See, signed on to an agreement that includes protecting 30% of land and oceans by 2030 (30 by 30) and 22 other targets intended to reduce biodiversity loss. The agreement also includes restoring 30% of the earth's degraded ecosystems and increasing funding for biodiversity issues.
European Union
In May 2020, the European Union published its Biodiversity Strategy for 2030. The biodiversity strategy is an essential part of the climate change mitigation strategy of the European Union. Of the 25% of the European budget that will go to fighting climate change, a large part will go to restoring biodiversity and to nature-based solutions.
The EU Biodiversity Strategy for 2030 includes the following targets:
Protect 30% of the sea territory and 30% of the land territory, especially old-growth forests.
Plant 3 billion trees by 2030.
Restore at least 25,000 kilometers of rivers, so that they become free-flowing.
Reduce the use of pesticides by 50% by 2030.
Increase organic farming. The linked EU program From Farm to Fork sets a target of 25% of EU agriculture being organic by 2030.
Increase biodiversity in agriculture.
Allocate €20 billion per year to the issue and make it part of business practice.
Approximately half of global GDP depends on nature. In Europe, many parts of the economy that generate trillions of euros per year depend on nature. The benefits of Natura 2000 alone in Europe are €200–300 billion per year.
National level laws
Biodiversity is taken into account in some political and judicial decisions:
The relationship between law and ecosystems is very ancient and has consequences for biodiversity. It is related to private and public property rights. It can define protection for threatened ecosystems, but also some rights and duties (for example, fishing and hunting rights).
Law regarding species is more recent. It defines species that must be protected because they may be threatened by extinction. The U.S. Endangered Species Act is an example of an attempt to address the "law and species" issue.
Laws regarding gene pools are only about a century old. Domestication and plant breeding methods are not new, but advances in genetic engineering have led to tighter laws covering distribution of genetically modified organisms, gene patents and process patents. Governments struggle to decide whether to focus on for example, genes, genomes, or organisms and species.
Uniform approval for use of biodiversity as a legal standard has not been achieved, however. Bosselman argues that biodiversity should not be used as a legal standard, claiming that the remaining areas of scientific uncertainty cause unacceptable administrative waste and increase litigation without promoting preservation goals.
India passed the Biological Diversity Act in 2002 for the conservation of biological diversity in India. The Act also provides mechanisms for equitable sharing of benefits from the use of traditional biological resources and knowledge.
History of the term
1916 – The term biological diversity was used first by J. Arthur Harris in "The Variable Desert", Scientific American: "The bare statement that the region contains a flora rich in genera and species and of diverse geographic origin or affinity is entirely inadequate as a description of its real biological diversity."
1967 – Raymond F. Dasmann used the term biological diversity in reference to the richness of living nature that conservationists should protect in his book A Different Kind of Country.
1974 – The term natural diversity was introduced by John Terborgh.
1980 – Thomas Lovejoy introduced the term biological diversity to the scientific community in a book. It rapidly became commonly used.
1985 – According to Edward O. Wilson, the contracted form biodiversity was coined by W. G. Rosen: "The National Forum on BioDiversity ... was conceived by Walter G. Rosen ... Dr. Rosen represented the NRC/NAS throughout the planning stages of the project. Furthermore, he introduced the term biodiversity".
1985 – The term "biodiversity" appears in the article, "A New Plan to Conserve the Earth's Biota" by Laura Tangley.
1988 – The term biodiversity first appeared in a publication.
1988 to present – The United Nations Environment Programme (UNEP) Ad Hoc Working Group of Experts on Biological Diversity began working in November 1988, leading to the publication of the draft Convention on Biological Diversity in May 1992. Since then, there have been 16 Conferences of the Parties (COPs) to discuss potential global political responses to biodiversity loss, most recently COP 16 in Cali, Colombia, in 2024.
| Biology and health sciences | Biology | null |
45116 | https://en.wikipedia.org/wiki/Lunokhod%201 | Lunokhod 1 | Lunokhod 1 (Russian: Луноход-1 "Moonwalker 1"), also known as Аппарат 8ЕЛ № 203 ("Device 8EL No. 203"), was the first robotic rover on the Moon and the first to freely move across the surface of an astronomical object beyond the Earth. Sent by the Soviet Union, it was part of the Lunokhod program of robotic rovers. The Luna 17 spacecraft carried Lunokhod 1 to the Moon in 1970. Lunokhod 0 (No. 201), the previous and first attempt to land a rover, launched in February 1969 but failed to reach Earth orbit.
Although only designed for a lifetime of three lunar days (approximately three Earth months), Lunokhod 1 operated on the lunar surface for eleven lunar days (321 Earth days) and traversed a total distance of 10.54 km.
Rover description
Lunokhod 1 was a lunar vehicle formed of a tub-like compartment with a large convex lid on eight independently powered wheels. The rover stood high and had a mass of . It was about long and wide.
Lunokhod 1 was equipped with a cone-shaped antenna, a highly directional helical antenna, four television cameras, and special extendable devices to test the lunar soil for soil density and mechanical properties.
An X-ray spectrometer, an X-ray telescope, cosmic ray detectors, and a laser retro-reflector (supplied by France) were also included.
The vehicle was powered by batteries which were recharged during the lunar day by a solar cell array mounted on the underside of the lid. To be able to work in a vacuum, special fluoride-based lubricant was used for the mechanical parts, and the electric motors (one in each wheel hub) were enclosed in pressurized containers.
During the lunar nights, the lid was closed, and a polonium-210 radioisotope heater unit kept the internal components at operating temperature.
Lunokhod 1 was intended to operate through three lunar days (approximately three Earth months), but actually operated for eleven lunar days.
Launch and lunar orbit
Luna 17 was launched on November 10, 1970, at 14:44:01 UTC. After reaching Earth parking orbit, the final stage of Luna 17's launch rocket fired at 14:54 UTC the same day to place it into a trajectory towards the Moon. After two course correction maneuvers (on November 12 and 14), it entered lunar orbit on November 15, 1970, at 22:00 UTC.
Landing and surface operations
The spacecraft soft-landed on the Moon in the Mare Imbrium (Sea of Rains) on November 17 at 03:47 UTC. It landed in western Mare Imbrium, about 60 km south of the Promontorium Heraclides. The lander had dual ramps from which the payload, Lunokhod 1, could descend to the lunar surface.
At 06:28 UTC the rover moved onto the Moon's surface. The rover would run during the lunar day, stopping occasionally to recharge its batteries via the solar panels. At night the rover hibernated until the next sunrise, heated by the radioactive source.
Small craters along its traverse were named unofficially during the mission. The names were officially approved by the IAU in 2012. They are called Albert, Leonid, Kolya, Valera, Borya, Gena, Vitya, Kostya, Igor, Slava, Nikolya, and Vasya.
Operations during 1970:
November 17–22: The rover drove 197 m, returned 14 close-up pictures of the Moon and 12 panoramic views, during 10 communication sessions. It also conducted analyses of the lunar soil.
December 9–22: 1,522 m
Operations during 1971:
January 8–20: 1,936 m
February 8–19: 1,573 m
March 9–20: 2,004 m
April 8–20: 1,029 m
May 7–20: 197 m
June 5–18: 1,559 m
July 4–17: 220 m
August 3–16: 215 m
August 31 – September 14: 88 m
Location
The final location of Lunokhod 1 was uncertain until 2010, as lunar laser ranging experiments had failed to detect a return signal from it since 1971. On March 17, 2010, Albert Abdrakhimov found both the lander and the rover in Lunar Reconnaissance Orbiter image M114185541RC (Line 21977, Sample 3189). In April 2010, the Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) team from the University of California at San Diego used the LRO images to locate the rover closely enough for laser range (distance) measurements. On April 22, 2010, and on the days following, the team successfully measured the distance several times. The intersection of the spheres described by the measured distances then pinpointed the location of Lunokhod 1 to within 1 meter. APOLLO is now using Lunokhod 1's reflector for experiments, as the team discovered, to their surprise, that it was returning much more light than other reflectors on the Moon. According to a NASA press release, APOLLO researcher Tom Murphy said, "We got about 2,000 photons from Lunokhod 1 on our first try. After almost 40 years of silence, this rover still has a lot to say."
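The "intersection of the spheres" step is ordinary trilateration. The following C sketch shows the textbook closed-form solution for three sphere centers, with invented coordinates; the actual Lunokhod analysis combines many range measurements and a least-squares fit, so this is only an illustration of the geometric principle.

#include <stdio.h>
#include <math.h>

/* Closed-form trilateration: find a point at distances r1, r2, r3 from
 * known centers p1, p2, p3. Coordinates here are invented; in reality
 * the "centers" are telescope positions at different measurement epochs. */
typedef struct { double x, y, z; } Vec;

static Vec sub(Vec a, Vec b) { return (Vec){a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec add(Vec a, Vec b) { return (Vec){a.x+b.x, a.y+b.y, a.z+b.z}; }
static Vec scl(Vec a, double k) { return (Vec){a.x*k, a.y*k, a.z*k}; }
static double dot(Vec a, Vec b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec cross(Vec a, Vec b) {
    return (Vec){a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x};
}
static Vec unitv(Vec a) { return scl(a, 1.0/sqrt(dot(a,a))); }

int main(void) {
    Vec p1 = {0,0,0}, p2 = {10,0,0}, p3 = {5,8,0};  /* hypothetical centers */
    Vec target = {4.0, 3.0, 6.0};                   /* point to recover     */
    double r1 = sqrt(dot(sub(target,p1), sub(target,p1)));
    double r2 = sqrt(dot(sub(target,p2), sub(target,p2)));
    double r3 = sqrt(dot(sub(target,p3), sub(target,p3)));

    /* Build an orthonormal frame with p1 at the origin and p2 on the x axis. */
    Vec ex = unitv(sub(p2,p1));
    double i = dot(ex, sub(p3,p1));
    Vec ey = unitv(sub(sub(p3,p1), scl(ex,i)));
    Vec ez = cross(ex,ey);
    double d = sqrt(dot(sub(p2,p1), sub(p2,p1)));
    double j = dot(ey, sub(p3,p1));

    double x = (r1*r1 - r2*r2 + d*d) / (2*d);
    double y = (r1*r1 - r3*r3 + i*i + j*j) / (2*j) - (i/j)*x;
    double z = sqrt(r1*r1 - x*x - y*y);   /* two mirror solutions: +/- z */

    Vec s = add(p1, add(scl(ex,x), add(scl(ey,y), scl(ez,z))));
    printf("recovered point: (%.3f, %.3f, %.3f)\n", s.x, s.y, s.z);
    return 0;
}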
By November 2010, the location of the rover had been determined to within about a centimeter. The location near the limb of the Moon, combined with the ability to range the rover even when it is in sunlight, promises to be particularly useful for determining aspects of the Earth–Moon system.
In a report released in May 2013, French scientists at the Côte d'Azur Observatory led by Jean-Marie Torre reported replicating the 2010 laser ranging experiments by American scientists after research using images from the NASA Lunar Reconnaissance Orbiter. In both cases, laser pulses were returned from the Lunokhod 1 retroreflector.
End of mission and results
Controllers finished the last communications session with Lunokhod 1 at 13:05 UT on September 14, 1971. Attempts to re-establish contact were finally discontinued and the operations of Lunokhod 1 officially ceased on October 4, 1971, the anniversary of Sputnik 1. During its 322 Earth days of operations, Lunokhod 1 travelled 10,540 metres (6.55 miles) and returned more than 20,000 TV images and 206 high-resolution panoramas. In addition, it performed 25 lunar soil analyses with its RIFMA X-ray fluorescence spectrometer and used its penetrometer at 500 different locations.
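The monthly traverse figures in the operations log above sum exactly to this total, which a few lines of C can confirm:

#include <stdio.h>

int main(void) {
    /* Monthly traverse distances in metres, from the mission log above. */
    int legs[] = {197, 1522, 1936, 1573, 2004, 1029, 197, 1559, 220, 215, 88};
    int total = 0;
    for (int i = 0; i < (int)(sizeof legs / sizeof legs[0]); i++)
        total += legs[i];
    printf("total traverse: %d m\n", total);   /* prints 10540 */
    return 0;
}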
Gallery
| Technology | Rovers | null |
45145 | https://en.wikipedia.org/wiki/Hubble%20sequence | Hubble sequence | The Hubble sequence is a morphological classification scheme for galaxies published by Edwin Hubble in 1926. It is often colloquially known as the Hubble tuning-fork diagram because the shape in which it is traditionally represented resembles a tuning fork.
The tuning-fork diagram itself, however, was invented by John Henry Reynolds and Sir James Jeans.
The tuning fork scheme divided regular galaxies into three broad classes – ellipticals, lenticulars and spirals – based on their visual appearance (originally on photographic plates). A fourth class contains galaxies with an irregular appearance. The Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy.
Classes of galaxies
Ellipticals
On the left (in the sense that the sequence is usually drawn) lie the ellipticals. Elliptical galaxies have relatively smooth, featureless light distributions and appear as ellipses in photographic images. They are denoted by the letter E, followed by an integer n representing their degree of ellipticity in the sky. By convention, n is ten times the ellipticity of the galaxy, rounded to the nearest integer, where the ellipticity is defined as e = 1 − b/a for an ellipse with semi-major axis length a and semi-minor axis length b. The ellipticity increases from left to right on the Hubble diagram, with near-circular (E0) galaxies situated on the very left of the diagram. It is important to note that the ellipticity of a galaxy on the sky is only indirectly related to the true 3-dimensional shape (for example, a flattened, discus-shaped galaxy can appear almost round if viewed face-on or highly elliptical if viewed edge-on). Observationally, the most flattened "elliptical" galaxies have ellipticities e = 0.7 (denoted E7). However, from studying the light profiles and the ellipticity profiles, rather than just looking at the images, it was realised in the 1960s that the E5–E7 galaxies are probably misclassified lenticular galaxies with large-scale disks seen at various inclinations to our line-of-sight. Observations of the kinematics of early-type galaxies further confirmed this.
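In code form, the convention reduces to a one-line formula; the sketch below (axis lengths are illustrative) maps an apparent axis ratio to the En subtype:

#include <stdio.h>
#include <math.h>

/* Hubble elliptical subtype: En with n = 10 * (1 - b/a), rounded.
 * a = apparent semi-major axis, b = apparent semi-minor axis. */
static int hubble_e_subtype(double a, double b) {
    return (int)lround(10.0 * (1.0 - b / a));
}

int main(void) {
    printf("b/a = 1.0 -> E%d\n", hubble_e_subtype(1.0, 1.0));  /* E0 */
    printf("b/a = 0.3 -> E%d\n", hubble_e_subtype(1.0, 0.3));  /* E7 */
    return 0;
}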
Examples of elliptical galaxies: M49, M59, M60, M87, NGC 4125.
Lenticulars
At the centre of the Hubble tuning fork, where the two spiral-galaxy branches and the elliptical branch join, lies an intermediate class of galaxies known as lenticulars and given the symbol S0. These galaxies consist of a bright central bulge, similar in appearance to an elliptical galaxy, surrounded by an extended, disk-like structure. Unlike spiral galaxies, the disks of lenticular galaxies have no visible spiral structure and are not actively forming stars in any significant quantity.
When simply looking at a galaxy's image, lenticular galaxies with relatively face-on disks are difficult to distinguish from ellipticals of type E0–E3, making the classification of many such galaxies uncertain. When viewed edge-on, the disk becomes more apparent and prominent dust-lanes are sometimes visible in absorption at optical wavelengths.
At the time of the initial publication of Hubble's galaxy classification scheme, the existence of lenticular galaxies was purely hypothetical. Hubble believed that they were necessary as an intermediate stage between the highly flattened "ellipticals" and spirals. Later observations (by Hubble himself, among others) showed Hubble's belief to be correct and the S0 class was included in the definitive exposition of the Hubble sequence by Allan Sandage. Missing from the Hubble sequence are the early-type galaxies with intermediate-scale disks, in between the E0 and S0 types; Martha Liller denoted them ES galaxies in 1966.
Lenticular and spiral galaxies, taken together, are often referred to as disk galaxies. The bulge-to-disk flux ratio in lenticular galaxies can take on a range of values, just as it does for each of the spiral galaxy morphological types (Sa, Sb, etc.).
Examples of lenticular galaxies: M85, M86, NGC 1316, NGC 2787, NGC 5866, Centaurus A.
Spirals
On the right of the Hubble sequence diagram are two parallel branches encompassing the spiral galaxies. A spiral galaxy consists of a flattened disk, with stars forming a (usually two-armed) spiral structure, and a central concentration of stars known as the bulge. Roughly half of all spirals are also observed to have a bar-like structure, with the bar extending from the central bulge and the arms beginning at the ends of the bar. In the tuning-fork diagram, the regular spirals occupy the upper branch and are denoted by the letter S, while the lower branch contains the barred spirals, given the symbol SB. Both types of spirals are further subdivided according to the detailed appearance of their spiral structures. Membership of one of these subdivisions is indicated by adding a lower-case letter to the morphological type, as follows:
Sa (SBa) – tightly wound, smooth arms; large, bright central bulge
Sb (SBb) – less tightly wound spiral arms than Sa (SBa); somewhat fainter bulge
Sc (SBc) – loosely wound spiral arms, clearly resolved into individual stellar clusters and nebulae; smaller, fainter bulge
Hubble originally described three classes of spiral galaxy. This was extended by Gérard de Vaucouleurs to include a fourth class:
Sd (SBd) – very loosely wound, fragmentary arms; most of the luminosity is in the arms and not the bulge
Although strictly part of the de Vaucouleurs system of classification, the Sd class is often included in the Hubble sequence. The basic spiral types can be extended to enable finer distinctions of appearance. For example, spiral galaxies whose appearance is intermediate between two of the above classes are often identified by appending two lower-case letters to the main galaxy type (for example, Sbc for a galaxy that is intermediate between an Sb and an Sc).
Our own Milky Way is generally classed as SBc, making it a barred spiral with well-defined arms.
Examples of regular spiral galaxies: (visually) M31 (Andromeda Galaxy), M74, M81, M104 (Sombrero Galaxy), M51a (Whirlpool Galaxy), NGC 300, NGC 772.
Examples of barred spiral galaxies: M91, M95, NGC 1097, NGC 1300, NGC1672, NGC 2536, NGC 2903.
Irregulars
Galaxies that do not fit into the Hubble sequence, because they have no regular structure (either disk-like or ellipsoidal), are termed irregular galaxies. Hubble defined two classes of irregular galaxy:
Irr I galaxies have asymmetric profiles and lack a central bulge or obvious spiral structure; instead they contain many individual clusters of young stars
Irr II galaxies have smoother, asymmetric appearances and are not clearly resolved into individual stars or stellar clusters
In his extension to the Hubble sequence, de Vaucouleurs called the Irr I galaxies 'Magellanic irregulars', after the Magellanic Clouds – two satellites of the Milky Way which Hubble classified as Irr I. The discovery of a faint spiral structure in the Large Magellanic Cloud led de Vaucouleurs to further divide the irregular galaxies into those that, like the LMC, show some evidence for spiral structure (these are given the symbol Sm) and those that have no obvious structure, such as the Small Magellanic Cloud (denoted Im). In the extended Hubble sequence, the Magellanic irregulars are usually placed at the end of the spiral branch of the Hubble tuning fork.
Examples of irregular galaxies: M82, NGC 1427A, Large Magellanic Cloud, Small Magellanic Cloud.
Physical significance
Elliptical and lenticular galaxies are commonly referred to together as "early-type" galaxies, while spirals and irregular galaxies are referred to as "late types". This nomenclature is the source of the common, but erroneous, belief that the Hubble sequence was intended to reflect a supposed evolutionary sequence, from elliptical galaxies through lenticulars to either barred or regular spirals. In fact, Hubble was clear from the beginning that no such interpretation was implied:
The nomenclature, it is emphasized, refers to position in the sequence, and temporal connotations are made at one's peril. The entire classification is purely empirical and without prejudice to theories of evolution...
The evolutionary picture appears to be lent weight by the fact that the disks of spiral galaxies are observed to be home to many young stars and regions of active star formation, while elliptical galaxies are composed of predominantly old stellar populations. In fact, current evidence suggests the opposite: the early Universe appears to be dominated by spiral and irregular galaxies. In the currently favored picture of galaxy formation, present-day ellipticals formed as a result of mergers between these earlier building blocks; while some lenticular galaxies may have formed this way, others may have accreted their disks around pre-existing spheroids. Some lenticular galaxies may also be evolved spiral galaxies, whose gas has been stripped away leaving no fuel for continued star formation, although the galaxy LEDA 2108986 opens the debate on this.
Shortcomings
A common criticism of the Hubble scheme is that the criteria for assigning galaxies to classes are subjective, leading to different observers assigning galaxies to different classes (although experienced observers usually agree to within less than a single Hubble type). Although not really a shortcoming, since the 1961 Hubble Atlas of Galaxies the primary criterion used to assign the morphological type (a, b, c, etc.) has been the nature of the spiral arms, rather than the bulge-to-disk flux ratio, and thus a range of flux ratios exists for each morphological type, as with the lenticular galaxies.
Another criticism of the Hubble classification scheme is that, being based on the appearance of a galaxy in a two-dimensional image, the classes are only indirectly related to the true physical properties of galaxies. In particular, problems arise because of orientation effects. The same galaxy would look very different, if viewed edge-on, as opposed to a face-on or 'broadside' viewpoint. As such, the early-type sequence is poorly represented: The ES galaxies are missing from the Hubble sequence, and the E5–E7 galaxies are actually S0 galaxies. Furthermore, the barred ES and barred S0 galaxies are also absent.
Visual classifications are also less reliable for faint or distant galaxies, and the appearance of galaxies can change depending on the wavelength of light in which they are observed.
Nonetheless, the Hubble sequence is still commonly used in the field of extragalactic astronomy and Hubble types are known to correlate with many physically relevant properties of galaxies, such as luminosities, colours, masses (of stars and gas) and star formation rates.
In June 2019, citizen scientists in the Galaxy Zoo project argued that the usual Hubble classification, particularly concerning spiral galaxies, may not be supported by evidence. Consequently, the scheme may need revision.
| Physical sciences | Galaxy classification | Astronomy |
45148 | https://en.wikipedia.org/wiki/8-bit%20computing | 8-bit computing | In computer architecture, 8-bit integers or other data units are those that are 8 bits wide (1 octet). Also, 8-bit central processing unit (CPU) and arithmetic logic unit (ALU) architectures are those that are based on registers or data buses of that size. Memory addresses (and thus address buses) for 8-bit CPUs are generally larger than 8-bit, usually 16-bit. 8-bit microcomputers are microcomputers that use 8-bit microprocessors.
The term '8-bit' is also applied to the character sets that could be used on computers with 8-bit bytes, the best known being various forms of extended ASCII, including the ISO/IEC 8859 series of national character sets especially Latin 1 for English and Western European languages.
The IBM System/360 introduced byte-addressable memory with 8-bit bytes, as opposed to bit-addressable, decimal digit-addressable or word-addressable memory, although its general-purpose registers were 32 bits wide, and addresses were contained in the lower 24 bits of those registers. Different models of System/360 had different internal data path widths; the IBM System/360 Model 30 (1965) implemented the 32-bit System/360 architecture, but had an 8-bit native path width, and performed 32-bit arithmetic 8 bits at a time.
The first widely adopted 8-bit microprocessor was the Intel 8080, being used in many hobbyist computers of the late 1970s and early 1980s, often running the CP/M operating system; it had 8-bit data words and 16-bit addresses. The Zilog Z80 (compatible with the 8080) and the Motorola 6800 were also used in similar computers. The Z80 and the MOS Technology 6502 8-bit CPUs were widely used in home computers and second- and third-generation game consoles of the 1970s and 1980s. Many 8-bit CPUs or microcontrollers are the basis of today's ubiquitous embedded systems.
Historical context
8-bit microprocessors were the first widely used microprocessors in the computing industry, marking a major shift from mainframes and minicomputers to smaller, more affordable systems. The introduction of 8-bit processors in the 1970s enabled the production of personal computers, leading to the popularization of computing and setting the foundation for the modern computing landscape.
The 1976 Zilog Z80, one of the most popular 8-bit CPUs (though with 4-bit ALU, at least in the original), was discontinued in 2024 (its product line Z84C00), with Last Time Buy (LTB) orders by June 14, 2024.
Details
An 8-bit register can store 2^8 = 256 different values. The range of integer values that can be stored in 8 bits depends on the integer representation used. With the two most common representations, the range is 0 through 255 for representation as an (unsigned) binary number, and −128 through 127 for representation as two's complement.
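A short C illustration of the two representations: the same 8-bit pattern yields different values depending on whether it is read as unsigned binary or as two's complement (the signed reinterpretation assumes a two's-complement machine, which virtually all are):

#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* The same 8-bit pattern, read as unsigned binary and as two's complement. */
    uint8_t u = 0xFF;
    int8_t  s = (int8_t)0xFF;   /* -1 on a two's-complement machine */
    printf("0xFF as unsigned: %d\n", u);             /* 255        */
    printf("0xFF as two's complement: %d\n", s);     /* -1         */
    printf("unsigned range: 0..%d\n", UINT8_MAX);            /* 0..255    */
    printf("signed range: %d..%d\n", INT8_MIN, INT8_MAX);    /* -128..127 */
    return 0;
}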
8-bit CPUs use an 8-bit data bus and can therefore access 8 bits of data in a single machine instruction. The address bus is typically a double octet (16 bits) wide, due to practical and economical considerations. This implies a direct address space of 64 KB (65,536 bytes) on most 8-bit processors.
Most home computers from the 8-bit era fully exploited the address space, such as the BBC Micro (Model B) with 32 KB of RAM plus 32 KB of ROM. Others, like the very popular Commodore 64, had a full 64 KB of RAM plus 20 KB of ROM, so that with 16-bit addressing not all of the RAM could be used by default (for example by the BASIC interpreter included in ROM) without bank switching, which allows the 64 KB limit to be exceeded. Other computers had as little as 1 KB of RAM (plus 4 KB of ROM), such as the Sinclair ZX80 (the later, very popular ZX Spectrum had more memory), or even only 128 bytes of RAM (plus storage from a ROM cartridge), as in the early Atari 2600 game console, for which 8-bit addressing would have sufficed for the RAM had it not also needed to cover the ROM. Still other 8-bit systems with 16-bit addressing could use more than 64 KB: the Commodore 128 had 128 KB of RAM, and the BBC Master's RAM was expandable to 512 KB.
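A minimal sketch of the bank-switching idea: the CPU still emits only a 16-bit address, but an external latch decides whether a read in a given window is served by RAM or by a ROM overlay. The memory map and latch here are invented, loosely modeled on machines of the Commodore 64 type:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical machine: 64 KB of RAM, with an 8 KB ROM that can be banked
 * in over addresses 0xA000-0xBFFF. The CPU only ever emits a 16-bit
 * address; the banking latch resolves what that address reaches. */
static uint8_t ram[65536];
static uint8_t rom[8192];
static int rom_banked_in = 1;   /* state of the (hypothetical) latch */

static uint8_t mem_read(uint16_t addr) {
    if (rom_banked_in && addr >= 0xA000 && addr <= 0xBFFF)
        return rom[addr - 0xA000];   /* ROM shadows the RAM underneath */
    return ram[addr];
}

int main(void) {
    ram[0xA000] = 0x11;
    rom[0x0000] = 0x22;
    printf("read 0xA000 with ROM in:  0x%02X\n", mem_read(0xA000)); /* 0x22 */
    rom_banked_in = 0;               /* flip the latch: expose the RAM */
    printf("read 0xA000 with ROM out: 0x%02X\n", mem_read(0xA000)); /* 0x11 */
    return 0;
}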
While 8-bit CPUs in general use 16-bit addressing, some architectures use both, such as the MOS Technology 6502, where the zero page is used extensively, saving one byte in instructions accessing that page, alongside 16-bit addressing instructions that take two bytes for the address plus one for the opcode.
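The byte saving is visible in the instruction encodings themselves; for example, the 6502's LDA instruction takes two bytes in zero-page mode versus three in absolute mode:

#include <stdio.h>

int main(void) {
    /* 6502 "LDA" encodings: zero-page mode saves one byte per access. */
    unsigned char lda_zp[]  = {0xA5, 0x10};        /* LDA $10   : 2 bytes */
    unsigned char lda_abs[] = {0xAD, 0x10, 0x02};  /* LDA $0210 : 3 bytes */
    printf("LDA zero page: %zu bytes\n", sizeof lda_zp);
    printf("LDA absolute:  %zu bytes\n", sizeof lda_abs);
    return 0;
}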
Some index registers, such as the two in the 6502, are 8-bit. This limits the size of the arrays addressed using indexed addressing instructions to objects of up to 256 bytes without requiring more complicated code. Other 8-bit CPUs, such as the Motorola 6800 and Intel 8080, have 16-bit index registers.
Notable 8-bit CPUs
The first commercial 8-bit processor was the Intel 8008 (1972) which was originally intended for the Datapoint 2200 intelligent terminal. Most competitors to Intel started off with such character oriented 8-bit microprocessors. Modernized variants of these 8-bit machines are still one of the most common types of processor in embedded systems.
The MOS Technology 6502, and variants of it, were used in personal computers, such as the Apple I, Apple II, Atari 8-bit computers, BBC Micro, PET, VIC-20, and in home video game consoles such as the Atari 2600 and the Nintendo Entertainment System.
Use for training, prototyping, and general hardware education
8-bit processors continue to be designed for general education about computer hardware, as well as for hobbyists' interests. One such CPU was designed and implemented using 7400-series integrated circuits on a breadboard. Designing 8-bit CPUs and their respective assemblers is a common training exercise for engineering students, engineers, and hobbyists. FPGAs are also used for this purpose.
| Technology | Computer architecture concepts | null |
45159 | https://en.wikipedia.org/wiki/Olivine | Olivine | The mineral olivine () is a magnesium iron silicate with the chemical formula . It is a type of nesosilicate or orthosilicate. The primary component of the Earth's upper mantle, it is a common mineral in Earth's subsurface, but weathers quickly on the surface. Olivine has many uses, such as the gemstone peridot (or chrysolite), as well as industrial applications like metalworking processes.
The ratio of magnesium to iron varies between the two endmembers of the solid solution series: forsterite (Mg-endmember: Mg2SiO4) and fayalite (Fe-endmember: Fe2SiO4). Compositions of olivine are commonly expressed as molar percentages of forsterite (Fo) and/or fayalite (Fa) (e.g., Fo70Fa30, or just Fo70 with Fa30 implied). Forsterite's melting temperature is unusually high at atmospheric pressure, almost 1,900 °C, while fayalite's is much lower, about 1,200 °C. Melting temperature varies smoothly between the two endmembers, as do other properties. Olivine incorporates only minor amounts of elements other than oxygen (O), silicon (Si), magnesium (Mg) and iron (Fe). Manganese (Mn) and nickel (Ni) commonly are the additional elements present in highest concentrations.
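Given molar amounts of magnesium and iron, the forsterite percentage is simply 100·Mg/(Mg+Fe), with fayalite as the complement; a small sketch:

#include <stdio.h>

/* Forsterite number: molar percentage of the Mg endmember.
 * Fa is the complement, so Fo70 implies Fa30. */
static double fo_number(double mol_mg, double mol_fe) {
    return 100.0 * mol_mg / (mol_mg + mol_fe);
}

int main(void) {
    double fo = fo_number(7.0, 3.0);             /* e.g. 7 mol Mg per 3 mol Fe */
    printf("Fo%.0f Fa%.0f\n", fo, 100.0 - fo);   /* prints "Fo70 Fa30"         */
    return 0;
}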
Olivine gives its name to the group of minerals with a related structure (the olivine group) – which includes tephroite (Mn2SiO4), monticellite (CaMgSiO4), larnite (Ca2SiO4) and kirschsteinite (CaFeSiO4) (commonly also spelled kirschteinite).
Olivine's crystal structure incorporates aspects of the orthorhombic P Bravais lattice, which arise from each silica (SiO4) unit being joined by metal divalent cations, with each oxygen in SiO4 bound to three metal ions. It has a spinel-like structure similar to magnetite but uses one quadrivalent and two divalent cations, (M2+)2(M4+)O4, instead of two trivalent and one divalent cation.
Identification and paragenesis
Olivine is named for its typically olive-green color, though it may alter to a reddish color from the oxidation of iron.
Translucent olivine is sometimes used as a gemstone called peridot (péridot, the French word for olivine). It is also called chrysolite (or chrysolithe, from the Greek words for gold and stone), though this name is now rarely used in the English language. Some of the finest gem-quality olivine has been obtained from a body of mantle rocks on Zabargad Island in the Red Sea.
Olivine occurs in both mafic and ultramafic igneous rocks and as a primary mineral in certain metamorphic rocks. Mg-rich olivine crystallizes from magma that is rich in magnesium and low in silica. That magma crystallizes to mafic rocks such as gabbro and basalt. Ultramafic rocks usually contain substantial olivine, and those with an olivine content of over 40% are described as peridotites. Dunite has an olivine content of over 90% and is likely a cumulate formed by olivine crystallizing and settling out of magma, or a vein mineral lining magma conduits. Olivine and its high-pressure structural variants constitute over 50% of the Earth's upper mantle, and olivine is one of the Earth's most common minerals by volume. The metamorphism of impure dolomite or other sedimentary rocks with high magnesium and low silica content also produces Mg-rich olivine, or forsterite.
Fe-rich olivine fayalite is relatively much less common, but it occurs in igneous rocks in small amounts in rare granites and rhyolites, and extremely Fe-rich olivine can exist stably with quartz and tridymite. In contrast, Mg-rich olivine does not occur stably with silica minerals, as it would react with them to form orthopyroxene ().
Mg-rich olivine is stable to pressures equivalent to a depth of about 410 km within Earth. Because it is thought to be the most abundant mineral in Earth's mantle at shallower depths, the properties of olivine have a dominant influence upon the rheology of that part of Earth and hence upon the solid flow that drives plate tectonics. Experiments have documented that olivine at high pressures (12 GPa, the pressure at depths of about 360 km) can contain at least as much as about 8900 parts per million (by weight) of water, and that such water content drastically reduces the resistance of olivine to solid flow. Moreover, because olivine is so abundant, more water may be dissolved in olivine of the mantle than is contained in Earth's oceans.
Olivine pine forest (a plant community) is unique to Norway. It is rare and found on dry olivine ridges in the fjord districts of Sunnmøre and Nordfjord.
Extraterrestrial occurrences
Mg-rich olivine has also been discovered in meteorites, on the Moon and Mars, falling into infant stars, as well as on asteroid 25143 Itokawa. Such meteorites include chondrites, collections of debris from the early Solar System; and pallasites, mixes of iron-nickel and olivine. The rare A-type asteroids are suspected to have a surface dominated by olivine.
The spectral signature of olivine has been seen in the dust disks around young stars. The tails of comets (which formed from the dust disk around the young Sun) often have the spectral signature of olivine, and the presence of olivine was verified in samples of a comet from the Stardust spacecraft in 2006. Comet-like (magnesium-rich) olivine has also been detected in the planetesimal belt around the star Beta Pictoris.
Crystal structure
Minerals in the olivine group crystallize in the orthorhombic system (space group Pbnm) with isolated silicate tetrahedra, meaning that olivine is a nesosilicate. The structure can be described as a hexagonal, close-packed array of oxygen ions with half of the octahedral sites occupied with magnesium or iron ions and one-eighth of the tetrahedral sites occupied by silicon ions.
There are three distinct oxygen sites (marked O1, O2 and O3 in figure 1), two distinct metal sites (M1 and M2) and only one distinct silicon site. O1, O2, M2 and Si all lie on mirror planes, while M1 exists on an inversion center. O3 lies in a general position.
High-pressure polymorphs
At the high temperatures and pressures found at depth within the Earth, the olivine structure is no longer stable. Below depths of about 410 km, olivine undergoes an exothermic phase transition to the sorosilicate wadsleyite and, at about 520 km depth, wadsleyite transforms exothermically into ringwoodite, which has the spinel structure. At a depth of about 660 km, ringwoodite decomposes into silicate perovskite ((Mg,Fe)SiO3) and ferropericlase ((Mg,Fe)O) in an endothermic reaction. These phase transitions lead to a discontinuous increase in the density of the Earth's mantle that can be observed by seismic methods. They are also thought to influence the dynamics of mantle convection in that the exothermic transitions reinforce flow across the phase boundary, whereas the endothermic reaction hampers it.
The pressure at which these phase transitions occur depends on temperature and iron content. At , the pure magnesium end member, forsterite, transforms to wadsleyite at and to ringwoodite at pressures above . Increasing the iron content decreases the pressure of the phase transition and narrows the wadsleyite stability field. At about 0.8 mole fraction fayalite, olivine transforms directly to ringwoodite over the pressure range . Fayalite transforms to spinel at pressures below . Increasing the temperature increases the pressure of these phase transitions.
Weathering
Olivine is one of the less stable common minerals on the surface according to the Goldich dissolution series. It alters into iddingsite (a combination of clay minerals, iron oxides and ferrihydrite) readily in the presence of water. Artificially increasing the weathering rate of olivine, e.g. by dispersing fine-grained olivine on beaches, has been proposed as a cheap way to sequester CO2. The presence of iddingsite on Mars would suggest that liquid water once existed there, and might enable scientists to determine when there was last liquid water on the planet.
Because of its rapid weathering, olivine is rarely found in sedimentary rock.
Mining
Norway
Norway is the main source of olivine in Europe, particularly in an area stretching from Åheim to Tafjord, and from Hornindal to Flemsøy in the Sunnmøre district. There is also olivine in Eid municipality. About 50% of the world's olivine for industrial use is produced in Norway. At Svarthammaren in Norddal olivine was mined from around 1920 to 1979, with a daily output up to 600 metric tons. Olivine was also obtained from the construction site of the hydro power stations in Tafjord. At Robbervika in Norddal municipality an open-pit mine has been in operation since 1984. The characteristic red color is reflected in several local names with "red" such as Raudbergvik (Red rock bay) or Raudnakken (Red ridge).
Hans Strøm in 1766 described the olivine's typical red color on the surface and the blue color within. Strøm wrote that in Norddal district large quantities of olivine were broken from the bedrock and used as sharpening stones.
Kallskaret near Tafjord is a nature reserve with olivine.
Applications
Olivine is used as a substitute for dolomite in steel works.
The aluminium foundry industry uses olivine sand to cast objects in aluminium. Olivine sand requires less water than silica sands while still holding the mold together during handling and pouring of the metal. Less water means less gas (steam) to vent from the mold as the metal is poured in.
In Finland, olivine is marketed as an ideal rock for sauna stoves because of its comparatively high density and resistance to weathering under repeated heating and cooling.
Gem-quality olivine is used as a gemstone called peridot.
Experimental uses
Removal of atmospheric CO2 via reaction with crushed olivine has been considered. The end-products of the very slow reaction are silicon dioxide, magnesium carbonate, and iron oxides. A nonprofit, Project Vesta, is investigating this approach on beaches, where wave action increases the agitation and surface area of the crushed olivine.
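As a back-of-the-envelope check on the chemistry, the end-products named above imply the idealized carbonation reaction Mg2SiO4 + 2 CO2 → 2 MgCO3 + SiO2 for pure forsterite, i.e. two moles of CO2 bound per mole of olivine. The figure this yields (about 0.63 tonnes of CO2 per tonne of forsterite) is an idealization from that stoichiometry alone, not a published project estimate:

#include <stdio.h>

/* Idealized carbonation of pure forsterite:
 *   Mg2SiO4 + 2 CO2 -> 2 MgCO3 + SiO2
 * i.e. 2 mol of CO2 bound per mol of olivine. */
int main(void) {
    const double M_MG = 24.305, M_SI = 28.086, M_O = 15.999, M_C = 12.011;
    double m_forsterite = 2*M_MG + M_SI + 4*M_O;   /* ~140.7 g/mol */
    double m_co2 = M_C + 2*M_O;                    /* ~44.0 g/mol  */
    double t_co2_per_t = 2.0 * m_co2 / m_forsterite;
    printf("CO2 bound per tonne of forsterite: %.2f t\n", t_co2_per_t);
    return 0;
}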
| Physical sciences | Silicate minerals | Earth science |
45162 | https://en.wikipedia.org/wiki/Peridot | Peridot | Peridot ( ), sometimes called chrysolite, is a yellow-green transparent variety of olivine. Peridot is one of the few gemstones that occur in only one color.
Peridot can be found in mafic and ultramafic rocks occurring in lava and peridotite xenoliths of the mantle. The gem occurs in silica-deficient rocks such as volcanic basalt and pallasitic meteorites. Along with diamonds, peridot is one of only two gems observed to be formed not in Earth's crust, but in the molten rock of the upper mantle. Gem-quality peridot is rare on Earth's surface due to its susceptibility to alteration during its movement from deep within the mantle and weathering at the surface. Peridot has a chemical formula of (Mg, Fe)2SiO4.
Peridot is one of the birthstones for the month of August.
Etymology
The origin of the name peridot is uncertain. The Oxford English Dictionary suggests an alteration of an Anglo–Norman word (from a classical Latin term) for a kind of opal, rather than a derivation from the Arabic word faridat, meaning "gemstone".
The Middle English Dictionary's entry on peridot includes several spelling variations; other variants substitute y for the letter i used here.
The earliest use of the word in English is possibly in the 1705 register of St. Albans Abbey: the entry is in Latin, with its English translation listed as peridot. It records that on his death in 1245, Bishop John bequeathed various items, including peridot gems, to the Abbey.
Appearance
Peridot is one of the few gemstones that occur in only one color: an olive-green. The intensity and tint of the green, however, depends on the percentage of iron in the crystal structure, so the color of individual peridot gems can vary from yellow, to olive, to brownish-green. In rare cases, peridot may have a medium-dark toned, pure green with no secondary yellow hue or brown mask. Lighter-colored gems are due to lower iron concentrations.
Mineral properties
Crystal structure
The molecular structure of peridot consists of isomorphic olivine, silicate, magnesium and iron in an orthorhombic crystal system. In an alternative view, the atomic structure can be described as a hexagonal, close-packed array of oxygen ions with half of the octahedral sites occupied by magnesium or iron ions and one-eighth of the tetrahedral sites occupied by silicon ions.
Surface property
Oxidation of peridot does not occur at natural surface temperature and pressure, but begins to occur slowly at elevated temperatures, with rates increasing with temperature. The oxidation of the olivine occurs by an initial breakdown of the fayalite component, and subsequent reaction with the forsterite component, to give magnetite and orthopyroxene.
Occurrence
Geologically
Olivine, of which peridot is a type, is a common mineral in mafic and ultramafic rocks, often found in lava and in peridotite xenoliths of the mantle, which lava carries to the surface; however, gem-quality peridot occurs in only a fraction of these settings. Peridots can also be found in meteorites.
Peridots can be differentiated by size and composition. A peridot formed as a result of volcanic activity tends to contain higher concentrations of lithium, nickel and zinc than those found in meteorites.
Olivine is an abundant mineral, but gem-quality peridot is rather rare due to its chemical instability on Earth's surface. Olivine is usually found as small grains and tends to exist in a heavily weathered state, unsuitable for decorative use. Large crystals of forsterite, the variety most often used to cut peridot gems, are rare; as a result, peridot is considered to be precious.
In the ancient world, the mining of peridot (then called topazios) on St. John's Island in the Red Sea began about 300 BC.
The principal source of peridot olivine today is the San Carlos Apache Indian Reservation in Arizona.
It is also mined at another location in Arizona, and in Arkansas, Hawaii, Nevada, and New Mexico at Kilbourne Hole, in the US; and in Australia, Brazil, China, Egypt, Kenya, Mexico, Myanmar (Burma), Norway, Pakistan, Saudi Arabia, South Africa, Sri Lanka, and Tanzania.
In meteorites
Peridot crystals have been collected from some pallasite meteorites. The most commonly studied pallasitic peridot belongs to the Indonesian Jepara meteorite, but others exist, such as the Brenham, Esquel, Fukang, and Imilac meteorites.
Pallasitic (extraterrestrial) peridot differs chemically from its earthbound counterpart, in that pallasitic peridot lacks nickel.
Gemology
Orthorhombic minerals, like peridot, have biaxial birefringence defined by three principal refractive indices: α, β and γ. Refractive index readings of faceted gems can range around α = 1.651, β = 1.668, and γ = 1.689, with a biaxial positive birefringence of 0.037–0.038. With decreasing magnesium and increasing iron concentration, the specific gravity, color darkness and refractive indices increase, and the β index shifts toward the γ index. Increasing iron concentration ultimately forms the iron-rich end-member of the olivine solid solution series, fayalite.
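From the three principal indices, the birefringence is γ − α, and the optic sign is positive when β lies closer to α than to γ; a quick check against the values quoted above:

#include <stdio.h>

/* Biaxial optics from the three principal refractive indices. */
int main(void) {
    double alpha = 1.651, beta = 1.668, gamma = 1.689;  /* values quoted above */
    double biref = gamma - alpha;
    const char *sign = (beta - alpha < gamma - beta) ? "positive" : "negative";
    printf("birefringence = %.3f, optic sign: %s\n", biref, sign);
    return 0;
}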
A study of Chinese peridot gem samples determined the hydrostatic specific gravity to be 3.36. The visible-light spectroscopy of the same Chinese peridot samples showed absorption bands between 481.0 and 493.0 nm, with the strongest absorption at 492.0 nm.
The largest cut peridot olivine is a specimen in the gem collection of the Smithsonian Museum in Washington, D.C.
Inclusions are common in peridot crystals but their presence depends on the location where it was found and the geological conditions that led to its crystallization.
Primary negative crystals – rounded gas bubbles – form in situ with peridot, and are common in Hawaiian peridots.
Secondary negative crystals form in peridot fractures.
"Lily pad" cleavages are often seen in San Carlos peridots, and are a type of secondary negative crystal. They can easily be seen under reflected light as circular discs surrounding a negative crystal.
Silky and rod-like inclusions are common in Pakistani peridots.
The most common mineral inclusion in peridot is the chromium-rich mineral chromite.
Magnesium-rich minerals also can exist in the form of pyrope and magnesiochromite. These two types of mineral inclusions are typically surrounded by "lily-pad" cleavages.
Biotite flakes appear flat, brown, translucent, and tabular.
Cultural history
Peridot has been prized since the earliest civilizations for its claimed protective powers to drive away fears and nightmares, according to superstitions. There is a superstition that it carries the gift of "inner radiance", sharpening the mind and opening it to new levels of awareness and growth, helping one to recognize and realize one's destiny and spiritual purpose. (There is no scientific evidence for any such claims.)
Peridot olivine is the birthstone for the month of August.
Peridot has often been mistaken for emerald beryl and other green gems. Noted gemologist G.F. Kunz discussed the confusion between beryl and peridot in many church treasures, most notably the "Three Magi treasure" in the Dom of Cologne, Germany.
Gallery
| Physical sciences | Silicate minerals | Earth science |
45165 | https://en.wikipedia.org/wiki/Orthoclase | Orthoclase | Orthoclase, or orthoclase feldspar (endmember formula KAlSi3O8), is an important tectosilicate mineral which forms igneous rock. The name is from the Ancient Greek for "straight fracture", because its two cleavage planes are at right angles to each other. It is a type of potassium feldspar, also known as K-feldspar. The gem known as moonstone (see below) is largely composed of orthoclase.
Formation and subtypes
Orthoclase is a common constituent of most granites and other felsic igneous rocks and often forms huge crystals and masses in pegmatite.
Typically, the pure potassium endmember of orthoclase forms a solid solution with albite, the sodium endmember (NaAlSi3O8) of plagioclase. While slowly cooling within the Earth, sodium-rich albite lamellae form by exsolution, enriching the remaining orthoclase with potassium. The resulting intergrowth of the two feldspars is called perthite.
The higher-temperature polymorph of KAlSi3O8 is sanidine. Sanidine is common in rapidly cooled volcanic rocks such as obsidian and felsic pyroclastic rocks, and is notably found in trachytes of the Drachenfels, Germany. The lower-temperature polymorph of KAlSi3O8 is microcline.
Adularia is a low temperature form of either microcline or orthoclase originally reported from the low temperature hydrothermal deposits in the Adula Alps of Switzerland. It was first described by Ermenegildo Pini in 1781. The optical effect of adularescence in moonstone is typically due to adularia.
The largest documented single crystal of orthoclase was found in the Ural Mountains in Russia. It measured around and weighed around .
Applications
Together with the other potassium feldspars, orthoclase is a common raw material for the manufacture of some glasses and some ceramics such as porcelain, and as a constituent of scouring powder.
Some intergrowths of orthoclase and albite have an attractive pale luster and are called moonstone when used in jewelry. Most moonstones are translucent and white, although grey and peach-colored varieties also occur. In gemology, their luster is called adularescence and is typically described as creamy or silvery white with a "billowy" quality. It is the state gem of Florida.
The gemstone commonly called rainbow moonstone is more properly a colorless form of labradorite and can be distinguished from "true" moonstone by its greater transparency and play of color, although their value and durability do not greatly differ.
Orthoclase is one of the ten defining minerals of the Mohs scale of mineral hardness, on which it is listed as having a hardness of 6.
The discovery by NASA's Curiosity rover of high levels of orthoclase in Martian sandstones suggested that some Martian rocks may have experienced complex geological processing, such as repeated melting.
| Physical sciences | Silicate minerals | Earth science |
45166 | https://en.wikipedia.org/wiki/Microcline | Microcline | Microcline (KAlSi3O8) is an important igneous rock-forming tectosilicate mineral. It is a potassium-rich alkali feldspar. Microcline typically contains minor amounts of sodium. It is common in granite and pegmatites. Microcline forms during slow cooling of orthoclase; it is more stable at lower temperatures than orthoclase. Sanidine is a polymorph of alkali feldspar stable at yet higher temperature. Microcline may be clear, white, pale-yellow, brick-red, or green; it is generally characterized by cross-hatch twinning that forms as a result of the transformation of monoclinic orthoclase into triclinic microcline.
The chemical compound name is potassium aluminium silicate, and it is known as E number reference E555.
Geology
Microcline may be chemically the same as monoclinic orthoclase, but because it belongs to the triclinic crystal system, the prism angle is slightly less than right angles; hence the name "microcline" from the Greek "small slope". It is a fully ordered triclinic modification of potassium feldspar and is dimorphous with orthoclase. Microcline is identical to orthoclase in many physical properties, and can be distinguished by x-ray or optical examination. When viewed under a polarizing microscope, microcline exhibits a minute multiple twinning which forms a grating-like structure that is unmistakable.
Perthite is either microcline or orthoclase with thin lamellae of exsolved albite.
Amazon stone, or amazonite, is a green variety of microcline. It is not found anywhere in the Amazon Basin, however. The Spanish explorers who named it apparently confused it with another green mineral from that region.
The largest documented single crystals of microcline were found in Devil's Hole Beryl Mine, Colorado, US and measured ~50 × 36 × 14 m. This could be one of the largest crystals of any material found so far.
Microcline is an exceptionally active ice-nucleating agent in the atmosphere. Recent work has made it possible to understand how water binds to the microcline surface.
As food additive
It was the subject in 2018 of a call for technical and toxicological data from the EFSA. In 2008, it (along with other aluminium compounds) was the subject of a Scientific Opinion of the Panel on Food Additives, Flavourings, Processing Aids and Food Contact Materials of the EFSA.
| Physical sciences | Silicate minerals | Earth science |
45168 | https://en.wikipedia.org/wiki/Plagioclase | Plagioclase | Plagioclase is a series of tectosilicate (framework silicate) minerals within the feldspar group. Rather than referring to a particular mineral with a specific chemical composition, plagioclase is a continuous solid solution series, more properly known as the plagioclase feldspar series. This was first shown by the German mineralogist Johann Friedrich Christian Hessel (1796–1872) in 1826. The series ranges from albite to anorthite endmembers (with respective compositions NaAlSi3O8 to CaAl2Si2O8), where sodium and calcium atoms can substitute for each other in the mineral's crystal lattice structure. Plagioclase in hand samples is often identified by its polysynthetic crystal twinning or "record-groove" effect.
Plagioclase is a major constituent mineral in Earth's crust and is consequently an important diagnostic tool in petrology for identifying the composition, origin and evolution of igneous rocks. Plagioclase is also a major constituent of rock in the highlands of the Moon. Analysis of thermal emission spectra from the surface of Mars suggests that plagioclase is the most abundant mineral in the crust of Mars.
Its name comes from the Ancient Greek for "oblique fracture", in reference to its two cleavage angles.
Properties
Plagioclase is the most common and abundant mineral group in the Earth's crust. Part of the feldspar family of minerals, it is abundant in igneous and metamorphic rock, and it is also common as a detrital mineral in sedimentary rock. It is not a single mineral, but is a solid solution of two end members, albite or sodium feldspar () and anorthite or calcium feldspar (). These can be present in plagioclase in any proportion from pure anorthite to pure albite. The composition of plagioclase can thus be written as where x ranges from 0 for pure albite to 1 for pure anorthite. This solid solution series is known as the plagioclase series. The composition of a particular sample of plagioclase is customarily expressed as the mol% of anorthite in the sample. For example, plagioclase that is 40 mol% anorthite would be described as An40 plagioclase.
The ability of albite and anorthite to form solid solutions in any proportions at elevated temperature reflects the ease with which calcium and aluminium can substitute for sodium and silicon in the plagioclase crystal structure. Although a calcium ion has a charge of +2, versus +1 for a sodium ion, the two ions have very nearly the same effective radius. The difference in charge is accommodated by the coupled substitution of aluminium (charge +3) for silicon (charge +4), both of which can occupy tetrahedral sites (surrounded by four oxygen ions). This contrasts with potassium, which has the same charge as sodium, but is a significantly larger ion. As a result of the size and charge difference between potassium and calcium, there is a very wide miscibility gap between anorthite and potassium feldspar, (), the third common rock-forming feldspar end member. Potassium feldspar does form a solid solution series with albite, due to the identical charges of sodium and potassium ions, which is known as the alkali feldspar series. Thus, almost all feldspar found on Earth is either plagioclase or alkali feldspar, with the two series overlapping for pure albite. When a plagioclase composition is described by its anorthite mol% (such as An40 in the previous example) it is assumed that the remainder is albite, with only a minor component of potassium feldspar.
Plagioclase of any composition shares many basic physical characteristics, while other characteristics vary smoothly with composition. The Mohs hardness of all plagioclase species is 6 to 6.5, and cleavage is perfect on [001] and good on [010], with the cleavage planes meeting at an angle of 93 to 94 degrees. It is from this slightly oblique cleavage angle that plagioclase gets its name, from Ancient Greek plagios ("oblique") + klasis ("fracture"). The name was introduced by August Breithaupt in 1847. There is also a poor cleavage on [110], rarely seen in hand samples.
The luster is vitreous to pearly and the diaphaneity is transparent to translucent. The tenacity is brittle, and the fracture is uneven or conchoidal, but the fracture is rarely observed due to the strong tendency of the mineral to cleave instead. At low temperature, the crystal structure belongs to the triclinic system. Well-formed crystals are rare and are most commonly sodic in composition; well-shaped samples are instead typically cleavage fragments. Well-formed crystals are typically bladed or tabular parallel to [010].
Plagioclase is usually white to greyish-white in color, with a slight tendency for more calcium-rich samples to be darker. Impurities can infrequently tint the mineral greenish, yellowish, or flesh-red. Ferric iron (Fe3+) gives a pale yellow color in plagioclase feldspar from Lake County, Oregon. The specific gravity increases smoothly with calcium content, from 2.62 for pure albite to 2.76 for pure anorthite, and this can provide a useful estimate of composition if measured accurately. The index of refraction likewise varies smoothly from 1.53 to 1.58, and, if measured carefully, this also gives a useful composition estimate.
Plagioclase almost universally shows a characteristic polysynthetic twinning that produces twinning striations on [010]. These striations allow plagioclase to be distinguished from alkali feldspar. Plagioclase often also displays Carlsbad, Baveno, and Manebach Law twinning.
Plagioclase series members
The composition of a plagioclase feldspar is typically denoted by its overall fraction of anorthite (%An) or albite (%Ab). There are several named plagioclase feldspars that fall between albite and anorthite in the series. The following table shows their compositions in terms of constituent anorthite and albite percentages.
The distinction between these minerals cannot easily be made in the field. The composition can be roughly determined by specific gravity, but accurate measurement requires chemical or optical tests. The composition in a crushed grain mount can be obtained by the Tsuboi method, which yields an accurate measurement of the minimum refractive index that in turn gives an accurate composition. In thin section, the composition can be determined by either the Michel Lévy or Carlsbad-albite methods. The former relies on accurate measure of minimum index of refraction, while the latter relies on measuring the extinction angle under a polarizing microscope. The extinction angle is an optical characteristic and varies with the albite fraction (%Ab).
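As a rough illustration of the specific-gravity method mentioned above, the Python sketch below linearly interpolates between the albite (2.62) and anorthite (about 2.76) endmember values given earlier; the function name and the simple linear model are illustrative assumptions, not a calibrated petrological procedure.

ALBITE_SG, ANORTHITE_SG = 2.62, 2.76  # endmember specific gravities from the text

def estimate_anorthite_mol_percent(specific_gravity):
    # Assume specific gravity varies linearly across the solid-solution series.
    fraction = (specific_gravity - ALBITE_SG) / (ANORTHITE_SG - ALBITE_SG)
    return 100.0 * max(0.0, min(1.0, fraction))

print(estimate_anorthite_mol_percent(2.68))  # ~43, i.e. roughly An43 (andesine)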
Endmembers
Anorthite was named by Gustav Rose in 1823 from Greek an- ("not") + orthos ("straight"), literally "oblique", referring to its triclinic crystallization. Anorthite is a comparatively rare mineral but occurs in the basic plutonic rocks of some orogenic calc-alkaline suites.
Albite is named from the Latin albus ("white"), in reference to its unusually pure white color. The name was first applied by Johan Gottlieb Gahn and Jöns Jacob Berzelius in 1815. It is a relatively common and important rock-making mineral associated with the more silica-rich rock types, in hydrothermal veins, with greenschist facies metamorphic rocks, and in pegmatite dikes, often as the variety cleavelandite and associated with rarer minerals like tourmaline and beryl.
Intermediate members
The intermediate members of the plagioclase group are very similar to each other and normally cannot be distinguished except by their optical properties. The specific gravity increases by roughly 0.02 per 10% increase in anorthite content, from albite (2.62) toward anorthite (about 2.76).
Bytownite, named after Bytown, the former name of Ottawa, Ontario, Canada, is a rare mineral occasionally found in more basic rocks.
Labradorite is the characteristic feldspar of the more basic rock types such as gabbro or basalt. Labradorite frequently shows an iridescent display of colors due to light refracting within the lamellae of the crystal. It is named after Labrador, where it is a constituent of the intrusive igneous rock anorthosite which is composed almost entirely of plagioclase. A variety of labradorite known as spectrolite is found in Finland.
Andesine is a characteristic mineral of rocks such as diorite which contain a moderate amount of silica and related volcanics such as andesite.
Oligoclase is common in granite and monzonite. The name oligoclase is derived from the Greek oligos ("small, slight") + klasis ("fracture"), in reference to the fact that its cleavage angle differs significantly from 90°. The term was first used by Breithaupt in 1826. Sunstone is mainly oligoclase (sometimes albite) with flakes of hematite.
Petrogenesis
Plagioclase is the primary aluminium-bearing mineral in mafic rocks formed at low pressure. It is normally the first and most abundant feldspar to crystallize from a cooling primitive magma. Anorthite has a much higher melting point than albite, and, as a result, calcium-rich plagioclase is the first to crystallize. The plagioclase becomes more enriched in sodium as the temperature drops, forming Bowen's continuous reaction series. However, the composition with which plagioclase crystallizes also depends on the other components of the melt, so it is not by itself a reliable thermometer.
The liquidus of plagioclase (the temperature at which the plagioclase first begins to crystallize) is about for olivine basalt, with a composition of 50.5 wt% silica; in andesite with a silica content of 60.7 wt%; and in dacite with a silica content of 69.9 wt%. These values are for dry magma. The liquidus is greatly lowered by the addition of water, and much more for plagioclase than for mafic minerals. The eutectic (minimum melting mixture) for a mixture of anorthite and diopside shifts from 40 wt% anorthite to 78 wt% anorthite as the water vapor pressure goes from 1 bar to 10 kbar. The presence of water also shifts the composition of the crystallizing plagioclase towards anorthite. The eutectic for this wet mixture drops to about .
Crystallizing plagioclase is always richer in anorthite than the melt from which it crystallizes. This plagioclase effect causes the residual melt to be enriched in sodium and silicon and depleted in aluminium and calcium. However, the simultaneous crystallization of mafic minerals not containing aluminium can partially offset the depletion in aluminium. In volcanic rock, the crystallized plagioclase incorporates most of the potassium in the melt as a trace element.
New plagioclase crystals nucleate only with difficulty, and diffusion is very slow within the solid crystals. As a result, as a magma cools, increasingly sodium-rich plagioclase is usually crystallized onto the rims of existing plagioclase crystals, which retain their more calcium-rich cores. This results in compositional zoning of plagioclase in igneous rocks. In rare cases, plagioclase shows reverse zoning, with a more calcium-rich rim on a more sodium-rich core. Plagioclase also sometimes shows oscillatory zoning, with the zones fluctuating between sodium-rich and calcium-rich compositions, though this is usually superimposed on an overall normal zoning trend.
Classification of igneous rocks
Plagioclase is very important for the classification of crystalline igneous rocks. Generally, the more silica is present in the rock, the fewer the mafic minerals, and the more sodium-rich the plagioclase. Alkali feldspar appears as the silica content becomes high. Under the QAPF classification, plagioclase is one of the three key minerals, along with quartz and alkali feldspar, used to make the initial classification of the rock type. Low-silica igneous rocks are further divided into dioritic rocks having sodium-rich plagioclase (An<50) and gabbroic rocks having calcium-rich plagioclase (An>50). Anorthosite is an intrusive rock composed of at least 90% plagioclase.
Albite is an end member of both the alkali and plagioclase series. However, it is included in the alkali feldspar fraction of the rock in the QAPF classification.
In metamorphic rocks
Plagioclase is also common in metamorphic rock. Plagioclase tends to be albite in low-grade metamorphic rock, while oligoclase to andesine are more common in medium- to high-grade metamorphic rock. Metacarbonate rock sometimes contains fairly pure anorthite.
In sedimentary rocks
Feldspar makes up between 10 and 20 percent of the framework grains in typical sandstones. Alkali feldspar is usually more abundant than plagioclase in sandstone because alkali feldspars are more resistant to chemical weathering and thus more stable, but sandstone derived from volcanic rock contains more plagioclase. Plagioclase weathers relatively rapidly to clay minerals such as smectite.
At the Mohorovičić discontinuity
The Mohorovičić discontinuity, which defines the boundary between the Earth's crust and the upper mantle, is thought to be the depth where feldspar disappears from the rock. While plagioclase is the most important aluminium-bearing mineral in the crust, it breaks down at the high pressure of the upper mantle, with the aluminium tending to be incorporated into clinopyroxene as Tschermak's molecule () or in jadeite . At still higher pressure, the aluminium is incorporated into garnet.
Exsolution
At very high temperatures, plagioclase forms a solid solution with potassium feldspar, but this becomes highly unstable on cooling. The plagioclase separates from the potassium feldspar, a process called exsolution. The resulting rock, in which fine streaks of plagioclase (lamellae) are present in potassium feldspar, is called perthite.
The solid solution between anorthite and albite remains stable to lower temperatures, but ultimately becomes unstable as the rock approaches ambient surface temperatures. The resulting exsolution produces very fine lamellar and other intergrowths, normally detected only by sophisticated means. However, exsolution in the andesine to labradorite compositional range sometimes produces lamellae with thicknesses comparable to the wavelength of visible light. These act like a diffraction grating, causing the labradorite to show the beautiful play of colors known as labradorescence.
Uses
In addition to its importance to geologists in classifying igneous rocks, plagioclase finds practical use as construction aggregate, as dimension stone, and in powdered form as a filler in paint, plastics, and rubber. Sodium-rich plagioclase finds use in the manufacture of glass and ceramics.
Anorthosite could someday be important as a source of aluminium.
| Physical sciences | Silicate minerals | Earth science |
45178 | https://en.wikipedia.org/wiki/Process%20%28computing%29 | Process (computing) | In computing, a process is the instance of a computer program that is being executed by one or many threads. There are many different process models, some of which are light weight, but almost all processes (even entire virtual machines) are rooted in an operating system (OS) process which comprises the program code, assigned system resources, physical and logical access permissions, and data structures to initiate, control and coordinate execution activity. Depending on the OS, a process may be made up of multiple threads of execution that execute instructions concurrently.
While a computer program is a passive collection of instructions typically stored in a file on disk, a process is the execution of those instructions after being loaded from the disk into memory. Several processes may be associated with the same program; for example, opening up several instances of the same program often results in more than one process being executed.
Multitasking is a method to allow multiple processes to share processors (CPUs) and other system resources. Each CPU (core) executes a single process at a time. However, multitasking allows each processor to switch between tasks that are being executed without having to wait for each task to finish (preemption). Depending on the operating system implementation, switches could be performed when tasks initiate and wait for completion of input/output operations, when a task voluntarily yields the CPU, on hardware interrupts, and when the operating system scheduler decides that a process has expired its fair share of CPU time (e.g., by the Completely Fair Scheduler of the Linux kernel).
A common form of multitasking is provided by a CPU's time-sharing, a method for interleaving the execution of users' processes and threads, and even of independent kernel tasks, although the latter feature is feasible only in preemptive kernels such as Linux. Preemption has an important side effect for interactive processes: they are given higher priority than CPU-bound processes, so users are assigned computing resources immediately upon pressing a key or moving the mouse. Furthermore, applications like video and music playback are given some kind of real-time priority, preempting any other lower-priority process. In time-sharing systems, context switches are performed rapidly, which makes it seem as if multiple processes were being executed simultaneously on the same processor. This seemingly simultaneous execution of multiple processes is called concurrency.
For security and reliability, most modern operating systems prevent direct communication between independent processes, providing strictly mediated and controlled inter-process communication.
Representation
In general, a computer system process consists of (or is said to own) the following resources:
An image of the executable machine code associated with a program.
Memory (typically some region of virtual memory); which includes the executable code, process-specific data (input and output), a call stack (to keep track of active subroutines and/or other events), and a heap to hold intermediate computation data generated during run time.
Operating system descriptors of resources that are allocated to the process, such as file descriptors (Unix terminology) or handles (Windows), and data sources and sinks.
Security attributes, such as the process owner and the process' set of permissions (allowable operations).
Processor state (context), such as the content of registers and physical memory addressing. The state is typically stored in computer registers when the process is executing, and in memory otherwise.
The operating system holds most of this information about active processes in data structures called process control blocks. Any subset of the resources, typically at least the processor state, may be associated with each of the process' threads in operating systems that support threads or child processes.
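As a rough illustration of such a process control block, the following Python sketch models a handful of the fields described above; the field names are invented for exposition, and a real kernel structure (for example, Linux's task_struct) holds far more state.

from dataclasses import dataclass, field

@dataclass
class ProcessControlBlock:
    # Illustrative subset of per-process bookkeeping; not any real kernel's layout.
    pid: int                                         # process identifier
    state: str = "created"                           # created/waiting/running/blocked/terminated
    registers: dict = field(default_factory=dict)    # saved processor context
    open_files: list = field(default_factory=list)   # descriptors (Unix) or handles (Windows)
    owner: str = ""                                  # security attribute: process owner

pcb = ProcessControlBlock(pid=42, owner="alice")
pcb.state = "waiting"   # in a real system the scheduler drives these transitions
print(pcb)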
The operating system keeps its processes separate and allocates the resources they need, so that they are less likely to interfere with each other and cause system failures (e.g., deadlock or thrashing). The operating system may also provide mechanisms for inter-process communication to enable processes to interact in safe and predictable ways.
Multitasking and process management
A multitasking operating system may just switch between processes to give the appearance of many processes executing simultaneously (that is, in parallel), though in fact only one process can be executing at any one time on a single CPU (unless the CPU has multiple cores, in which case multithreading or other similar technologies can be used).
It is usual to associate a single process with a main program, and child processes with any spin-off, parallel processes, which behave like asynchronous subroutines. A process is said to own resources, of which an image of its program (in memory) is one such resource. However, in multiprocessing systems many processes may run off of, or share, the same reentrant program at the same location in memory, but each process is said to own its own image of the program.
Processes are often called "tasks" in embedded operating systems. The sense of "process" (or task) is "something that takes up time", as opposed to "memory", which is "something that takes up space".
The above description applies to both processes managed by an operating system, and processes as defined by process calculi.
If a process requests something for which it must wait, it will be blocked. When the process is in the blocked state, it is eligible for swapping to disk, but this is transparent in a virtual memory system, where regions of a process's memory may reside on disk rather than in main memory at any time. Even portions of active processes/tasks (executing programs) are eligible for swapping to disk, if they have not been used recently. Not all parts of an executing program and its data have to be in physical memory for the associated process to be active.
Process states
An operating system kernel that allows multitasking needs processes to have certain states. Names for these states are not standardised, but they have similar functionality.
First, the process is "created" by being loaded from a secondary storage device (hard disk drive, CD-ROM, etc.) into main memory. After that the process scheduler assigns it the "waiting" state.
While the process is "waiting", it waits for the scheduler to do a so-called context switch. The context switch loads the process into the processor and changes the state to "running" while the previously "running" process is stored in a "waiting" state.
If a process in the "running" state needs to wait for a resource (for user input or a file to open, for example), it is assigned the "blocked" state. The process state is changed back to "waiting" when it no longer needs to wait.
Once the process finishes execution, or is terminated by the operating system, it is no longer needed. The process is removed instantly or is moved to the "terminated" state; once in that state, it merely waits to be removed from main memory.
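These transitions can be observed, coarsely, from user space. The sketch below is a minimal illustration using Python's standard multiprocessing module: the child is created, becomes schedulable after start(), and is terminated once join() returns.

import multiprocessing
import time

def worker():
    time.sleep(0.1)  # stands in for real work; the process runs, may block, then exits

if __name__ == "__main__":
    p = multiprocessing.Process(target=worker)  # "created": not yet schedulable
    p.start()                                   # handed to the scheduler ("waiting"/"running")
    print(p.is_alive())                         # True while the child has not terminated
    p.join()                                    # parent blocks until the child terminates
    print(p.exitcode)                           # 0 once the child reached "terminated"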
Inter-process communication
When processes need to communicate with each other they must share parts of their address spaces or use other forms of inter-process communication (IPC).
For instance in a shell pipeline, the output of the first process needs to pass to the second one, and so on. Another example is a task that has been decomposed into cooperating but partially independent processes which can run simultaneously (i.e., using concurrency, or true parallelism – the latter model is a particular case of concurrent execution and is feasible whenever multiple CPU cores are available for the processes that are ready to run).
It is even possible for two or more processes to be running on different machines that may run different operating systems (OS), so some mechanisms for communication and synchronization (called communications protocols for distributed computing) are needed (e.g., the Message Passing Interface (MPI)).
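As a concrete, simplified illustration of IPC on one machine, the sketch below connects a parent and child process with an anonymous pipe, much as a shell pipeline connects the output of one process to the input of the next; only Python's standard multiprocessing module is used, and the message text is arbitrary.

import multiprocessing

def producer(conn):
    conn.send("hello from the child process")  # write end of the pipe
    conn.close()

if __name__ == "__main__":
    parent_end, child_end = multiprocessing.Pipe()
    p = multiprocessing.Process(target=producer, args=(child_end,))
    p.start()
    print(parent_end.recv())  # blocks until the child's message arrives
    p.join()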
History
By the early 1960s, computer control software had evolved from monitor control software, for example IBSYS, to executive control software. Over time, computers got faster while computer time was still neither cheap nor fully utilized; such an environment made multiprogramming possible and necessary. Multiprogramming means that several programs run concurrently. At first, more than one program ran on a single processor, as a result of underlying uniprocessor computer architecture, and they shared scarce and limited hardware resources; consequently, the concurrency was of a serial nature. On later systems with multiple processors, multiple programs may run concurrently in parallel.
Programs consist of sequences of instructions for processors. A single processor can run only one instruction at a time, making it impossible to run multiple programs simultaneously. A program might need some resource, such as an input device, which has a large delay, or a program might start some slow operation, such as sending output to a printer. This would leave the processor "idle" (unused). To keep the processor busy at all times, the execution of such a program is halted and the operating system switches the processor to run another program. To the user, it will appear that the programs run at the same time (hence the term "parallel").
Shortly thereafter, the notion of a "program" was expanded to the notion of an "executing program and its context". The concept of a process was born, which also became necessary with the invention of re-entrant code. Threads came somewhat later. However, with the advent of concepts such as time-sharing, computer networks, and multiple-CPU shared memory computers, the old "multiprogramming" gave way to true multitasking, multiprocessing and, later, multithreading.
| Technology | Operating systems | null |
45194 | https://en.wikipedia.org/wiki/Lp%20space | Lp space | In mathematics, the $L^p$ spaces are function spaces defined using a natural generalization of the $p$-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.
$L^p$ spaces form an important class of Banach spaces in functional analysis, and of topological vector spaces. Because of their key role in the mathematical analysis of measure and probability spaces, Lebesgue spaces are used also in the theoretical discussion of problems in physics, statistics, economics, finance, engineering, and other disciplines.
Preliminaries
The $p$-norm in finite dimensions
The Euclidean length of a vector $x = (x_1, x_2, \dots, x_n)$ in the $n$-dimensional real vector space $\mathbb{R}^n$ is given by the Euclidean norm: $\|x\|_2 = \left(x_1^2 + x_2^2 + \cdots + x_n^2\right)^{1/2}.$
The Euclidean distance between two points $x$ and $y$ is the length $\|x - y\|_2$ of the straight line between the two points. In many situations, the Euclidean distance is appropriate for capturing the actual distances in a given space. In contrast, consider taxi drivers in a grid street plan who should measure distance not in terms of the length of the straight line to their destination, but in terms of the rectilinear distance, which takes into account that streets are either orthogonal or parallel to each other. The class of $p$-norms generalizes these two examples and has an abundance of applications in many parts of mathematics, physics, and computer science.
For a real number $p \geq 1$, the $p$-norm or $L^p$-norm of $x$ is defined by $\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}.$
The absolute value bars can be dropped when $p$ is a rational number with an even numerator in its reduced form, and $x$ is drawn from the set of real numbers, or one of its subsets.
The Euclidean norm from above falls into this class and is the $2$-norm, and the $1$-norm is the norm that corresponds to the rectilinear distance.
The $L^\infty$-norm or maximum norm (or uniform norm) is the limit of the $L^p$-norms for $p \to \infty$, given by: $\|x\|_\infty = \max\left\{|x_1|, |x_2|, \dots, |x_n|\right\}.$
For all $p \geq 1$, the $p$-norms and maximum norm satisfy the properties of a "length function" (or norm); that is:
only the zero vector has zero length,
the length of the vector is positive homogeneous with respect to multiplication by a scalar (positive homogeneity), and
the length of the sum of two vectors is no larger than the sum of lengths of the vectors (triangle inequality).
Abstractly speaking, this means that $\mathbb{R}^n$ together with the $p$-norm is a normed vector space. Moreover, it turns out that this space is complete, thus making it a Banach space.
Relations between $p$-norms
The grid distance or rectilinear distance (sometimes called the "Manhattan distance") between two points is never shorter than the length of the line segment between them (the Euclidean or "as the crow flies" distance). Formally, this means that the Euclidean norm of any vector is bounded by its 1-norm: $\|x\|_2 \leq \|x\|_1.$
This fact generalizes to $p$-norms in that the $p$-norm $\|x\|_p$ of any given vector $x$ does not grow with $p$: $\|x\|_{p+a} \leq \|x\|_p$ for any vector $x$ and real numbers $p \geq 1$ and $a \geq 0.$
For the opposite direction, the following relation between the $1$-norm and the $2$-norm is known: $\|x\|_1 \leq \sqrt{n}\,\|x\|_2.$
This inequality depends on the dimension $n$ of the underlying vector space and follows directly from the Cauchy–Schwarz inequality.
In general, for vectors in $\mathbb{C}^n$ where $0 < r < p$: $\|x\|_p \leq \|x\|_r \leq n^{1/r - 1/p}\,\|x\|_p.$
This is a consequence of Hölder's inequality.
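These orderings are easy to check numerically; a minimal sketch using NumPy (the sample vector is arbitrary):

import numpy as np

x = np.array([3.0, -4.0, 1.0])
n = x.size
for p in (1, 2, 4, np.inf):
    print(p, np.linalg.norm(x, ord=p))   # the p-norm shrinks as p grows

# ||x||_2 <= ||x||_1 and, in the opposite direction, ||x||_1 <= sqrt(n) * ||x||_2
assert np.linalg.norm(x, 2) <= np.linalg.norm(x, 1)
assert np.linalg.norm(x, 1) <= np.sqrt(n) * np.linalg.norm(x, 2)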
When $0 < p < 1$
In $\mathbb{R}^n$ for $n > 1$, the formula $\|x\|_p = \left(|x_1|^p + |x_2|^p + \cdots + |x_n|^p\right)^{1/p}$
defines an absolutely homogeneous function for $0 < p < 1$; however, the resulting function does not define a norm, because it is not subadditive. On the other hand, the formula $|x_1|^p + |x_2|^p + \cdots + |x_n|^p$
defines a subadditive function at the cost of losing absolute homogeneity. It does define an F-norm, though, which is homogeneous of degree $p$.
Hence, the function $d_p(x, y) = \sum_{i=1}^{n} |x_i - y_i|^p$
defines a metric. The metric space $(\mathbb{R}^n, d_p)$ is denoted by $\ell_n^p$.
Although the $p$-unit ball $B_n^p$ around the origin in this metric is "concave", the topology defined on $\mathbb{R}^n$ by the metric $d_p$ is the usual vector space topology of $\mathbb{R}^n$, hence $\ell_n^p$ is a locally convex topological vector space. Beyond this qualitative statement, a quantitative way to measure the lack of convexity of $\ell_n^p$ is to denote by $C_p(n)$ the smallest constant $C$ such that the scalar multiple $C\,B_n^p$ of the $p$-unit ball contains the convex hull of $B_n^p$, which is equal to $B_n^1$. The fact that for fixed $p < 1$ we have $C_p(n) = n^{1/p - 1} \to \infty$ as $n \to \infty$
shows that the infinite-dimensional sequence space $\ell^p$ defined below is no longer locally convex.
When $p = 0$
There is one $\ell_0$ norm and another function called the $\ell_0$ "norm" (with quotation marks).
The mathematical definition of the $\ell_0$ norm was established by Banach's Theory of Linear Operations. The space of sequences has a complete metric topology provided by the F-norm $(x_n) \mapsto \sum_n 2^{-n}\,\frac{|x_n|}{1 + |x_n|}$, which comes from the product metric.
The $\ell_0$-normed space is studied in functional analysis, probability theory, and harmonic analysis.
Another function, called the $\ell_0$ "norm" by David Donoho (whose quotation marks warn that this function is not a proper norm), is the number of non-zero entries of the vector $x$. Many authors abuse terminology by omitting the quotation marks. Defining $0^0 = 0$, the zero "norm" of $x$ is equal to $|x_1|^0 + |x_2|^0 + \cdots + |x_n|^0.$
This is not a norm because it is not homogeneous. For example, scaling the vector by a positive constant does not change the "norm". Despite these defects as a mathematical norm, the non-zero counting "norm" has uses in scientific computing, information theory, and statistics–notably in compressed sensing in signal processing and computational harmonic analysis. Despite not being a norm, the associated metric, known as Hamming distance, is a valid distance, since homogeneity is not required for distances.
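Both the counting "norm" and the Hamming distance are trivial to compute; a minimal Python sketch (the example vectors are arbitrary):

def zero_norm(v):
    # number of non-zero entries; not homogeneous (scaling v does not change it),
    # hence not a true norm
    return sum(1 for x in v if x != 0)

def hamming_distance(u, v):
    # a valid metric even though zero_norm is not a norm
    return sum(1 for a, b in zip(u, v) if a != b)

print(zero_norm([0, 3, 0, -2]))                # 2
print(zero_norm([0, 30, 0, -20]))              # still 2: scaling changes nothing
print(hamming_distance([0, 1, 1], [1, 1, 0]))  # 2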
$\ell^p$ spaces and sequence spaces
The $p$-norm can be extended to vectors that have an infinite number of components (sequences), which yields the space $\ell^p$. This contains as special cases:
the space $\ell^1$ of sequences whose series are absolutely convergent,
the space $\ell^2$ of square-summable sequences, which is a Hilbert space, and
the space $\ell^\infty$ of bounded sequences.
The space of sequences has a natural vector space structure by applying addition and scalar multiplication coordinate by coordinate. Explicitly, the vector sum and the scalar action for infinite sequences of real (or complex) numbers are given by: $(x_1, x_2, \dots) + (y_1, y_2, \dots) = (x_1 + y_1, x_2 + y_2, \dots)$ and $\lambda\,(x_1, x_2, \dots) = (\lambda x_1, \lambda x_2, \dots).$
Define the $p$-norm: $\|x\|_p = \left(|x_1|^p + |x_2|^p + |x_3|^p + \cdots\right)^{1/p}.$
Here, a complication arises, namely that the series on the right is not always convergent: for example, the sequence made up of only ones, $(1, 1, 1, \dots)$, has an infinite $p$-norm for every finite $p$. The space $\ell^p$ is then defined as the set of all infinite sequences of real (or complex) numbers such that the $p$-norm is finite.
One can check that as $p$ increases, the set $\ell^p$ grows larger. For example, the sequence of reciprocals $\left(1, \tfrac{1}{2}, \tfrac{1}{3}, \dots\right)$
is not in $\ell^1$, but it is in $\ell^p$ for $p > 1$, as the series $1 + \tfrac{1}{2^p} + \tfrac{1}{3^p} + \cdots$
diverges for $p = 1$ (the harmonic series), but is convergent for $p > 1$.
One also defines the $\infty$-norm using the supremum: $\|x\|_\infty = \sup\left(|x_1|, |x_2|, \dots\right)$
and the corresponding space $\ell^\infty$ of all bounded sequences. It turns out that $\|x\|_\infty = \lim_{p \to \infty} \|x\|_p$
if the right-hand side is finite, or the left-hand side is infinite. Thus, we will consider $\ell^p$ spaces for $1 \leq p \leq \infty$.
The $p$-norm thus defined on $\ell^p$ is indeed a norm, and $\ell^p$ together with this norm is a Banach space.
General ℓp-space
In complete analogy to the preceding definition one can define the space $\ell^p(I)$ over a general index set $I$ (and $1 \leq p < \infty$) as $\ell^p(I) = \left\{ (x_i)_{i \in I} \in \mathbb{K}^I : \sum_{i \in I} |x_i|^p < +\infty \right\},$
where convergence on the right means that only countably many summands are nonzero (see also Unconditional convergence).
With the norm $\|x\|_p = \left(\sum_{i \in I} |x_i|^p\right)^{1/p}$
the space $\ell^p(I)$ becomes a Banach space.
In the case where $I$ is finite with $n$ elements, this construction yields $\mathbb{R}^n$ with the $p$-norm defined above.
If $I$ is countably infinite, this is exactly the sequence space $\ell^p$ defined above.
For uncountable sets $I$ this is a non-separable Banach space which can be seen as the locally convex direct limit of $\ell^p$-sequence spaces.
For $p = 2$, the $\|\cdot\|_2$-norm is even induced by a canonical inner product, called the Euclidean inner product, which means that $\|x\|_2 = \sqrt{\langle x, x \rangle}$ holds for all vectors $x$. This inner product can be expressed in terms of the norm by using the polarization identity.
On $\ell^2$ it can be defined by $\langle x, y \rangle = \sum_i x_i\,\overline{y_i}.$
Now consider the case $p = \infty$. Define $\ell^\infty(I) = \left\{ x \in \mathbb{K}^I : \sup_{i \in I} |x_i| < \infty \right\},$
where for all $x$, $\|x\|_\infty = \sup_{i \in I} |x_i|.$
The index set can be turned into a measure space by giving it the discrete σ-algebra and the counting measure. Then the space is just a special case of the more general -space (defined below).
Lp spaces and Lebesgue integrals
An $L^p$ space may be defined as a space of measurable functions for which the $p$-th power of the absolute value is Lebesgue integrable, where functions which agree almost everywhere are identified. More generally, let $(S, \Sigma, \mu)$ be a measure space and $1 \leq p \leq \infty$.
When $p \neq \infty$, consider the set of all measurable functions $f$ from $S$ to $\mathbb{C}$ or $\mathbb{R}$ whose absolute value raised to the $p$-th power has a finite integral, or in symbols: $\|f\|_p = \left(\int_S |f|^p \, d\mu\right)^{1/p} < \infty.$
To define the set for $p = \infty$, recall that two functions $f$ and $g$ defined on $S$ are said to be equal almost everywhere, written $f = g$ a.e., if the set $\{s \in S : f(s) \neq g(s)\}$ is measurable and has measure zero.
Similarly, a measurable function $f$ (and its absolute value) is bounded (or dominated) almost everywhere by a real number $C$, written $|f| \leq C$ a.e., if the (necessarily) measurable set $\{s \in S : |f(s)| > C\}$ has measure zero.
The space $L^\infty(S, \mu)$ is the set of all measurable functions $f$ that are bounded almost everywhere (by some real $C$), and $\|f\|_\infty$ is defined as the infimum of these bounds: $\|f\|_\infty = \inf\{C \geq 0 : |f(s)| \leq C \text{ a.e.}\}.$
When $\mu(S) \neq 0$, this is the same as the essential supremum of the absolute value of $f$: $\|f\|_\infty = \operatorname{ess\,sup} |f|.$
For example, if $f$ is a measurable function that is equal to $0$ almost everywhere, then $\|f\|_p = 0$ for every $p$, and thus $f$ belongs to $L^p(S, \mu)$ for all $p$.
For every positive the value under of a measurable function and its absolute value are always the same (that is, for all ) and so a measurable function belongs to if and only if its absolute value does. Because of this, many formulas involving -norms are stated only for non-negative real-valued functions. Consider for example the identity which holds whenever is measurable, is real, and (here when ). The non-negativity requirement can be removed by substituting in for which gives
Note in particular that when is finite then the formula relates the -norm to the -norm.
Seminormed space of -th power integrable functions
Each set of functions forms a vector space when addition and scalar multiplication are defined pointwise.
That the sum of two $p$-th power integrable functions $f$ and $g$ is again $p$-th power integrable follows from the inequality $\|f + g\|_p^p \leq 2^{p-1}\left(\|f\|_p^p + \|g\|_p^p\right),$
although it is also a consequence of Minkowski's inequality $\|f + g\|_p \leq \|f\|_p + \|g\|_p,$
which establishes that $\|\cdot\|_p$ satisfies the triangle inequality for $1 \leq p \leq \infty$ (the triangle inequality does not hold for $0 < p < 1$).
That is closed under scalar multiplication is due to being absolutely homogeneous, which means that for every scalar and every function
Absolute homogeneity, the triangle inequality, and non-negativity are the defining properties of a seminorm.
Thus is a seminorm and the set of -th power integrable functions together with the function defines a seminormed vector space. In general, the seminorm is not a norm because there might exist measurable functions that satisfy but are not equal to ( is a norm if and only if no such exists).
Zero sets of -seminorms
If is measurable and equals a.e. then for all positive
On the other hand, if is a measurable function for which there exists some such that then almost everywhere. When is finite then this follows from the case and the formula mentioned above.
Thus if is positive and is any measurable function, then if and only if almost everywhere. Since the right hand side ( a.e.) does not mention it follows that all have the same zero set (it does not depend on ). So denote this common set by
This set is a vector subspace of for every positive
Quotient vector space
Like every seminorm, the seminorm induces a norm (defined shortly) on the canonical quotient vector space of by its vector subspace
This normed quotient space is called and it is the subject of this article. We begin by defining the quotient vector space.
Given any the coset consists of all measurable functions that are equal to almost everywhere.
The set of all cosets, typically denoted by
forms a vector space with origin when vector addition and scalar multiplication are defined by and
This particular quotient vector space will be denoted by
Two cosets are equal if and only if (or equivalently, ), which happens if and only if almost everywhere; if this is the case then and are identified in the quotient space. Hence, strictly speaking consists of equivalence classes of functions.
Given any the value of the seminorm on the coset is constant and equal to , that is:
The map is a norm on called the .
The value of a coset is independent of the particular function that was chosen to represent the coset, meaning that if is any coset then for every (since for every ).
The Lebesgue space
The normed vector space $\left(L^p(S, \mu), \|\cdot\|_p\right)$ is called $L^p(S, \mu)$ or the Lebesgue space of $p$-th power integrable functions, and it is a Banach space for every $1 \leq p \leq \infty$ (meaning that it is a complete metric space, a result that is sometimes called the Riesz–Fischer theorem).
When the underlying measure space is understood then is often abbreviated or even just
Depending on the author, the subscript notation might denote either or
If the seminorm on happens to be a norm (which happens if and only if ) then the normed space will be linearly isometrically isomorphic to the normed quotient space via the canonical map (since ); in other words, they will be, up to a linear isometry, the same normed space and so they may both be called " space".
The above definitions generalize to Bochner spaces.
In general, this process cannot be reversed: there is no consistent way to define a "canonical" representative of each coset of in For however, there is a theory of lifts enabling such recovery.
Special cases
For $1 \leq p \leq \infty$, the $\ell^p$ spaces are a special case of $L^p$ spaces: when $S = \mathbb{N}$ is the set of natural numbers and $\mu$ is the counting measure. More generally, if one considers any set $S$ with the counting measure, the resulting $L^p$ space is denoted $\ell^p(S)$. For example, $\ell^p(\mathbb{Z})$ is the space of all sequences indexed by the integers, and when defining the $p$-norm on such a space, one sums over all the integers. The space $\ell^p(n)$, where $n$ is the set with $n$ elements, is $\mathbb{R}^n$ with its $p$-norm as defined above.
Similar to the $\ell^p$ spaces, $L^2$ is the only Hilbert space among the $L^p$ spaces. In the complex case, the inner product on $L^2$ is defined by $\langle f, g \rangle = \int_S f(x)\,\overline{g(x)} \, d\mu(x).$
Functions in $L^2$ are sometimes called square-integrable functions, quadratically integrable functions or square-summable functions, but sometimes these terms are reserved for functions that are square-integrable in some other sense, such as in the sense of a Riemann integral.
Like any Hilbert space, every $L^2$ space is linearly isometric to a suitable $\ell^2(I)$, where the cardinality of the set $I$ is the cardinality of an arbitrary basis for this particular $L^2$.
If we use complex-valued functions, the space is a commutative C*-algebra with pointwise multiplication and conjugation. For many measure spaces, including all sigma-finite ones, it is in fact a commutative von Neumann algebra. An element of defines a bounded operator on any space by multiplication.
When $0 < p < 1$
If $0 < p < 1$, then $L^p(\mu)$ can be defined as above, that is, as the set of measurable functions $f$ for which $\int_S |f|^p \, d\mu < \infty.$
In this case, however, the $p$-norm $\|f\|_p = \left(\int_S |f|^p \, d\mu\right)^{1/p}$ does not satisfy the triangle inequality and defines only a quasi-norm. The inequality $(a + b)^p \leq a^p + b^p$, valid for $a, b \geq 0$, implies that $\|f + g\|_p^p \leq \|f\|_p^p + \|g\|_p^p$
and so the function $d_p(f, g) = \|f - g\|_p^p$
is a metric on $L^p(\mu)$. The resulting metric space is complete.
In this setting $L^p$ satisfies a reverse Minkowski inequality: for non-negative $u, v \in L^p(\mu)$, $\|u + v\|_p \geq \|u\|_p + \|v\|_p.$
This result may be used to prove Clarkson's inequalities, which are in turn used to establish the uniform convexity of the spaces $L^p$ for $1 < p < \infty$.
The space $L^p$ for $0 < p < 1$ is an F-space: it admits a complete translation-invariant metric with respect to which the vector space operations are continuous. It is the prototypical example of an F-space that, for most reasonable measure spaces, is not locally convex: in $\ell^p$ or $L^p([0, 1])$, every open convex set containing the function $0$ is unbounded for the $p$-quasi-norm; therefore, the vector $0$ does not possess a fundamental system of convex neighborhoods. Specifically, this is true if the measure space contains an infinite family of disjoint measurable sets of finite positive measure.
The only nonempty convex open set in $L^p([0, 1])$ is the entire space. Consequently, there are no nonzero continuous linear functionals on it: the continuous dual space is the zero space. In the case of the counting measure on the natural numbers (producing the sequence space $\ell^p$), the bounded linear functionals on $\ell^p$ are exactly those that are bounded on $\ell^1$, i.e., those given by sequences in $\ell^\infty$. Although $\ell^p$ does contain non-trivial convex open sets, it fails to have enough of them to give a base for the topology.
Having no linear functionals is highly undesirable for the purposes of doing analysis. In the case of the Lebesgue measure on $\mathbb{R}^n$, rather than work with $L^p$ for $0 < p < 1$, it is common to work with the Hardy space $H^p$ whenever possible, as this has quite a few linear functionals: enough to distinguish points from one another. However, the Hahn–Banach theorem still fails in $H^p$ for $p < 1$.
Properties
Hölder's inequality
Suppose $p, q, r \in [1, \infty]$ satisfy $\tfrac{1}{p} + \tfrac{1}{q} = \tfrac{1}{r}$. If $f \in L^p(S, \mu)$ and $g \in L^q(S, \mu)$, then $fg \in L^r(S, \mu)$ and $\|fg\|_r \leq \|f\|_p\,\|g\|_q.$
This inequality, called Hölder's inequality, is in some sense optimal: if $r = 1$ and $f$ is a measurable function such that $\sup_g \int_S |fg| \, d\mu < \infty,$
where the supremum is taken over the closed unit ball of $L^q(S, \mu)$, then $f \in L^p(S, \mu)$ and this supremum equals $\|f\|_p.$
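As a worked instance, taking $p = q = 2$ (so $r = 1$) recovers the Cauchy–Schwarz inequality; the concrete functions below on $S = [0, 1]$ with Lebesgue measure are chosen purely for illustration.

% Hölder with p = q = 2, r = 1 on S = [0,1] with Lebesgue measure:
\[
  \|fg\|_1 = \int_0^1 |fg| \, d\mu
  \;\le\; \Big(\int_0^1 |f|^2 \, d\mu\Big)^{1/2} \Big(\int_0^1 |g|^2 \, d\mu\Big)^{1/2}
  = \|f\|_2 \, \|g\|_2 .
\]
% For f(x) = 1 and g(x) = x this reads:
\[
  \int_0^1 x \, dx = \tfrac{1}{2}
  \;\le\; \Big(\int_0^1 1 \, dx\Big)^{1/2} \Big(\int_0^1 x^2 \, dx\Big)^{1/2}
  = \sqrt{\tfrac{1}{3}} \approx 0.577 .
\]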
Atomic decomposition
If then every non-negative has an , meaning that there exist a sequence of non-negative real numbers and a sequence of non-negative functions called , whose supports are pairwise disjoint sets of measure such that
and for every integer
and
and where moreover, the sequence of functions depends only on (it is independent of ).
These inequalities guarantee that for all integers while the supports of being pairwise disjoint implies
Dual spaces
The dual space of $L^p(\mu)$ for $1 < p < \infty$ has a natural isomorphism with $L^q(\mu)$, where $q$ is such that $\tfrac{1}{p} + \tfrac{1}{q} = 1$. This isomorphism associates $g \in L^q(\mu)$ with the functional $\kappa_p(g)$ defined by $f \mapsto \kappa_p(g)(f) = \int_S f g \, d\mu$
for every $f \in L^p(\mu).$
The map $\kappa_p$ is a well defined continuous linear mapping which is an isometry by the extremal case of Hölder's inequality. If $(S, \Sigma, \mu)$ is a $\sigma$-finite measure space one can use the Radon–Nikodym theorem to show that any element of the dual of $L^p(\mu)$ can be expressed this way, i.e., $\kappa_p$ is an isometric isomorphism of Banach spaces. Hence, it is usual to say simply that $L^q(\mu)$ is the continuous dual space of $L^p(\mu).$
For the space is reflexive. Let be as above and let be the corresponding linear isometry. Consider the map from to obtained by composing with the transpose (or adjoint) of the inverse of
This map coincides with the canonical embedding of into its bidual. Moreover, the map is onto, as composition of two onto isometries, and this proves reflexivity.
If the measure $\mu$ on $S$ is sigma-finite, then the dual of $L^1(\mu)$ is isometrically isomorphic to $L^\infty(\mu)$ (more precisely, the map $\kappa_1$ corresponding to $p = 1$ is an isometry from $L^\infty(\mu)$ onto the dual of $L^1(\mu)$).
The dual of is subtler. Elements of can be identified with bounded signed finitely additive measures on that are absolutely continuous with respect to See ba space for more details. If we assume the axiom of choice, this space is much bigger than except in some trivial cases. However, Saharon Shelah proved that there are relatively consistent extensions of Zermelo–Fraenkel set theory (ZF + DC + "Every subset of the real numbers has the Baire property") in which the dual of is
Embeddings
Colloquially, if $1 \leq p < q \leq \infty$, then $L^p(S, \mu)$ contains functions that are more locally singular, while elements of $L^q(S, \mu)$ can be more spread out. Consider the Lebesgue measure on the half line $(0, \infty)$. A continuous function in $L^1$ might blow up near $0$ but must decay sufficiently fast toward infinity. On the other hand, continuous functions in $L^\infty$ need not decay at all but no blow-up is allowed. More formally, suppose that $1 \leq p < q \leq \infty$; then:
$L^q(S, \mu) \subseteq L^p(S, \mu)$ if and only if $S$ does not contain sets of finite but arbitrarily large measure (e.g. any finite measure).
$L^p(S, \mu) \subseteq L^q(S, \mu)$ if and only if $S$ does not contain sets of non-zero but arbitrarily small measure (e.g. the counting measure).
Neither condition holds for the Lebesgue measure on the real line, while both conditions hold for the counting measure on any finite set. As a consequence of the closed graph theorem, the embedding is continuous, i.e., the identity operator is a bounded linear map from $L^q$ to $L^p$ in the first case and from $L^p$ to $L^q$ in the second. Indeed, if the domain $S$ has finite measure, one can make the following explicit calculation using Hölder's inequality: $\|f\|_p^p = \int_S |f|^p \cdot 1 \, d\mu \leq \left\| |f|^p \right\|_{q/p} \left\| 1 \right\|_{q/(q-p)} = \|f\|_q^p \, \mu(S)^{1 - p/q},$
leading to $\|f\|_p \leq \mu(S)^{\frac{1}{p} - \frac{1}{q}}\,\|f\|_q.$
The constant $\mu(S)^{\frac{1}{p} - \frac{1}{q}}$ appearing in the above inequality is optimal, in the sense that the operator norm of the identity $I : L^q(S, \mu) \to L^p(S, \mu)$ is precisely $\mu(S)^{\frac{1}{p} - \frac{1}{q}},$
the case of equality being achieved exactly when $|f|$ is constant $\mu$-almost everywhere.
Dense subspaces
Let and be a measure space and consider an integrable simple function on given by
where are scalars, has finite measure and is the indicator function of the set for By construction of the integral, the vector space of integrable simple functions is dense in
More can be said when is a normal topological space and its Borel –algebra.
Suppose is an open set with Then for every Borel set contained in there exist a closed set and an open set such that
for every . Subsequently, there exists a Urysohn function on that is on and on with
If can be covered by an increasing sequence of open sets that have finite measure, then the space of –integrable continuous functions is dense in More precisely, one can use bounded continuous functions that vanish outside one of the open sets
This applies in particular when and when is the Lebesgue measure. For example, the space of continuous and compactly supported functions as well as the space of integrable step functions are dense in .
Closed subspaces
Suppose $0 < p < \infty$. If $(S, \Sigma, \mu)$ is a probability space and $V$ is a closed subspace of $L^p(\mu)$ that is contained in $L^\infty(\mu)$, then $V$ is finite-dimensional.
It is crucial that the vector space be a subset of since it is possible to construct an infinite-dimensional closed vector subspace of which lies in ; taking the Lebesgue measure on the circle group divided by as the probability measure.
Applications
Statistics
In statistics, measures of central tendency and statistical dispersion, such as the mean, median, and standard deviation, can be defined in terms of metrics, and measures of central tendency can be characterized as solutions to variational problems.
In penalized regression, "L1 penalty" and "L2 penalty" refer to penalizing either the $L^1$ norm of a solution's vector of parameter values (i.e. the sum of its absolute values), or its squared $L^2$ norm (its Euclidean length). Techniques which use an L1 penalty, like LASSO, encourage sparse solutions (where many parameters are zero). Elastic net regularization uses a penalty term that is a combination of the $L^1$ norm and the squared $L^2$ norm of the parameter vector.
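In code, the two penalties are simply different norms of the same parameter vector; a minimal NumPy sketch (the coefficients and the regularization weight are arbitrary illustrative values):

import numpy as np

beta = np.array([0.0, 1.5, -2.0, 0.0, 0.3])   # fitted parameter vector
lam = 0.1                                     # regularization weight (illustrative)

l1_penalty = lam * np.sum(np.abs(beta))       # LASSO-style: lam * ||beta||_1
l2_penalty = lam * np.sum(beta ** 2)          # ridge-style: lam * ||beta||_2^2
elastic_net = 0.5 * l1_penalty + 0.5 * l2_penalty  # a convex combination of the two

print(l1_penalty, l2_penalty, elastic_net)    # 0.38 0.634 0.507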
Hausdorff–Young inequality
The Fourier transform for the real line (or, for periodic functions, see Fourier series), maps to (or to ) respectively, where and This is a consequence of the Riesz–Thorin interpolation theorem, and is made precise with the Hausdorff–Young inequality.
By contrast, if the Fourier transform does not map into
Hilbert spaces
Hilbert spaces are central to many applications, from quantum mechanics to stochastic calculus. The spaces and are both Hilbert spaces. In fact, by choosing a Hilbert basis i.e., a maximal orthonormal subset of or any Hilbert space, one sees that every Hilbert space is isometrically isomorphic to (same as above), i.e., a Hilbert space of type
Generalizations and extensions
Weak
Let be a measure space, and a measurable function with real or complex values on The distribution function of is defined for by
If is in for some with then by Markov's inequality,
A function is said to be in the space weak , or if there is a constant such that, for all
The best constant for this inequality is the -norm of and is denoted by
The weak coincide with the Lorentz spaces so this notation is also used to denote them.
The -norm is not a true norm, since the triangle inequality fails to hold. Nevertheless, for in
and in particular
In fact, one has
and raising to power and taking the supremum in one has
Under the convention that two functions are equal if they are equal almost everywhere, the weak $L^p$ spaces are complete.
For any the expression
is comparable to the $L^{p,w}$-norm. Further, in the case $p > 1$, this expression defines a norm. Hence for $p > 1$ the weak $L^p$ spaces are Banach spaces.
A major result that uses the -spaces is the Marcinkiewicz interpolation theorem, which has broad applications to harmonic analysis and the study of singular integrals.
Weighted spaces
As before, consider a measure space Let be a measurable function. The -weighted space is defined as where means the measure defined by
or, in terms of the Radon–Nikodym derivative, the norm for is explicitly
As -spaces, the weighted spaces have nothing special, since is equal to But they are the natural framework for several results in harmonic analysis ; they appear for example in the Muckenhoupt theorem: for the classical Hilbert transform is defined on where denotes the unit circle and the Lebesgue measure; the (nonlinear) Hardy–Littlewood maximal operator is bounded on Muckenhoupt's theorem describes weights such that the Hilbert transform remains bounded on and the maximal operator on
spaces on manifolds
One may also define spaces on a manifold, called the intrinsic spaces of the manifold, using densities.
Vector-valued spaces
Given a measure space and a locally convex space (here assumed to be complete), it is possible to define spaces of -integrable -valued functions on in a number of ways. One way is to define the spaces of Bochner integrable and Pettis integrable functions, and then endow them with locally convex TVS-topologies that are (each in their own way) a natural generalization of the usual topology. Another way involves topological tensor products of with Element of the vector space are finite sums of simple tensors where each simple tensor may be identified with the function that sends This tensor product is then endowed with a locally convex topology that turns it into a topological tensor product, the most common of which are the projective tensor product, denoted by and the injective tensor product, denoted by In general, neither of these space are complete so their completions are constructed, which are respectively denoted by and (this is analogous to how the space of scalar-valued simple functions on when seminormed by any is not complete so a completion is constructed which, after being quotiented by is isometrically isomorphic to the Banach space ). Alexander Grothendieck showed that when is a nuclear space (a concept he introduced), then these two constructions are, respectively, canonically TVS-isomorphic with the spaces of Bochner and Pettis integral functions mentioned earlier; in short, they are indistinguishable.
space of measurable functions
The vector space of (equivalence classes of) measurable functions on is denoted . By definition, it contains all the and is equipped with the topology of convergence in measure. When is a probability measure (i.e., ), this mode of convergence is named convergence in probability. The space is always a topological abelian group but is only a topological vector space if This is because scalar multiplication is continuous if and only if If is -finite then the weaker topology of local convergence in measure is an F-space, i.e. a completely metrizable topological vector space. Moreover, this topology is isometric to global convergence in measure for a suitable choice of probability measure
The description is easier when is finite. If is a finite measure on the function admits for the convergence in measure the following fundamental system of neighborhoods
The topology can be defined by any metric of the form
where is bounded continuous concave and non-decreasing on with and when (for example, Such a metric is called Lévy-metric for Under this metric the space is complete. However, as mentioned above, scalar multiplication is continuous with respect to this metric only if . To see this, consider the Lebesgue measurable function defined by . Then clearly . The space is in general not locally bounded, and not locally convex.
For the infinite Lebesgue measure on the definition of the fundamental system of neighborhoods could be modified as follows
The resulting space , with the topology of local convergence in measure, is isomorphic to the space for any positive –integrable density
| Mathematics | Mathematical analysis | null |
45196 | https://en.wikipedia.org/wiki/Injective%20function | Injective function | In mathematics, an injective function (also known as injection, or one-to-one function) is a function $f$ that maps distinct elements of its domain to distinct elements of its codomain; that is, $x_1 \neq x_2$ implies $f(x_1) \neq f(x_2)$ (equivalently by contraposition, $f(x_1) = f(x_2)$ implies $x_1 = x_2$). In other words, every element of the function's codomain is the image of at most one element of its domain. The term one-to-one function must not be confused with one-to-one correspondence, which refers to bijective functions: functions such that each element in the codomain is an image of exactly one element in the domain.
A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and, in particular, for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism. It is thus a theorem that they are equivalent for algebraic structures; see Monomorphism for more details.
A function that is not injective is sometimes called many-to-one.
Definition
Let be a function whose domain is a set The function is said to be injective provided that for all and in if then ; that is, implies Equivalently, if then in the contrapositive statement.
Symbolically,

$$\forall a, b \in X, \quad f(a) = f(b) \Rightarrow a = b,$$

which is logically equivalent to the contrapositive,

$$\forall a, b \in X, \quad a \neq b \Rightarrow f(a) \neq f(b).$$

An injective function (or, more generally, a monomorphism) is often denoted by using the specialized arrows ↣ or ↪ (for example, $f : A \rightarrowtail B$ or $f : A \hookrightarrow B$), although some authors specifically reserve ↪ for an inclusion map.
Examples
For visual examples, readers are directed to the gallery section.
For any set $X$ and any subset $S \subseteq X$, the inclusion map $S \to X$ (which sends any element $s \in S$ to itself) is injective. In particular, the identity function $X \to X$ is always injective (and in fact bijective).
If the domain of a function is the empty set, then the function is the empty function, which is injective.
If the domain of a function has one element (that is, it is a singleton set), then the function is always injective.
The function $f : \mathbb{R} \to \mathbb{R}$ defined by $f(x) = 2x + 1$ is injective.
The function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^2$ is not injective, because (for example) $g(1) = 1 = g(-1)$. However, if $g$ is redefined so that its domain is the non-negative real numbers $[0, +\infty)$, then $g$ is injective.
The exponential function $\exp : \mathbb{R} \to \mathbb{R}$ defined by $\exp(x) = e^x$ is injective (but not surjective, as no real value maps to a negative number).
The natural logarithm function $\ln : (0, \infty) \to \mathbb{R}$ defined by $x \mapsto \ln x$ is injective.
The function $g : \mathbb{R} \to \mathbb{R}$ defined by $g(x) = x^n - x$ is not injective, since, for example, $g(0) = g(1) = 0$.
More generally, when $X$ and $Y$ are both the real line $\mathbb{R}$, then an injective function $f : \mathbb{R} \to \mathbb{R}$ is one whose graph is never intersected by any horizontal line more than once. This principle is referred to as the horizontal line test.
Injections can be undone
Functions with left inverses are always injections. That is, given $f : X \to Y$, if there is a function $g : Y \to X$ such that for every $x \in X$, $g(f(x)) = x$, then $f$ is injective. In this case, $g$ is called a retraction of $f$. Conversely, $f$ is called a section of $g$.
Conversely, every injection $f$ with a non-empty domain has a left inverse $g$. It can be defined by choosing an element $a$ in the domain of $f$ and setting $g(y)$ to the unique element of the pre-image $f^{-1}[y]$ (if it is non-empty) or to $a$ (otherwise).
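A minimal sketch of this construction for a function on a finite domain (the function and parameter names here are illustrative, not from the source):

```python
def left_inverse(f, domain, a):
    """Build a left inverse g for an injective f on a finite domain.

    g sends each value f(x) back to x, and every other value to the
    chosen element a of the domain; then g(f(x)) == x for all x.
    """
    preimage = {f(x): x for x in domain}  # well-defined because f is injective
    return lambda y: preimage.get(y, a)

f = lambda x: 2 * x + 3            # injective on the integers
g = left_inverse(f, domain=range(10), a=0)
assert all(g(f(x)) == x for x in range(10))
assert g(999) == 0                 # values outside the image go to a
```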
The left inverse $g$ is not necessarily an inverse of $f$, because the composition in the other order, $f \circ g$, may differ from the identity on $Y$. In other words, an injective function can be "reversed" by a left inverse, but is not necessarily invertible, which requires that the function is bijective.
Injections may be made invertible
In fact, to turn an injective function $f : X \to Y$ into a bijective (hence invertible) function, it suffices to replace its codomain $Y$ by its actual image $J = f(X)$. That is, let $g : X \to J$ such that $g(x) = f(x)$ for all $x \in X$; then $g$ is bijective. Indeed, $f$ can be factored as $\operatorname{In}_{J,Y} \circ g$, where $\operatorname{In}_{J,Y}$ is the inclusion function from $J$ into $Y$.
More generally, injective partial functions are called partial bijections.
Other properties
If $f$ and $g$ are both injective, then $f \circ g$ is injective.
If $g \circ f$ is injective, then $f$ is injective (but $g$ need not be).
$f : X \to Y$ is injective if and only if, given any functions $g, h : W \to X$, whenever $f \circ g = f \circ h$, then $g = h$. In other words, injective functions are precisely the monomorphisms in the category Set of sets.
If $f : X \to Y$ is injective and $A$ is a subset of $X$, then $f^{-1}(f(A)) = A$. Thus, $A$ can be recovered from its image $f(A)$.
If $f : X \to Y$ is injective and $A$ and $B$ are both subsets of $X$, then $f(A \cap B) = f(A) \cap f(B)$.
Every function $h : W \to Y$ can be decomposed as $h = f \circ g$ for a suitable injection $f$ and surjection $g$. This decomposition is unique up to isomorphism, and $f$ may be thought of as the inclusion function of the range $h(W)$ of $h$ as a subset of the codomain $Y$ of $h$.
If $f : X \to Y$ is an injective function, then $Y$ has at least as many elements as $X$, in the sense of cardinal numbers. In particular, if, in addition, there is an injection from $Y$ to $X$, then $X$ and $Y$ have the same cardinal number. (This is known as the Cantor–Bernstein–Schroeder theorem.)
If both $X$ and $Y$ are finite with the same number of elements, then $f : X \to Y$ is injective if and only if $f$ is surjective (in which case $f$ is bijective).
An injective function which is a homomorphism between two algebraic structures is an embedding.
Unlike surjectivity, which is a relation between the graph of a function and its codomain, injectivity is a property of the graph of the function alone; that is, whether a function $f$ is injective can be decided by only considering the graph (and not the codomain) of $f$.
Proving that functions are injective
A proof that a function is injective depends on how the function is presented and what properties the function holds.
For functions that are given by some formula there is a basic idea.
We use the definition of injectivity, namely that if $f(x) = f(y)$, then $x = y$.
Here is an example: $f(x) = 2x + 3$.
Proof: Let $f : X \to Y$. Suppose $f(x) = f(y)$. So $2x + 3 = 2y + 3$ implies $2x = 2y$, which implies $x = y$. Therefore, it follows from the definition that $f$ is injective.
There are multiple other methods of proving that a function is injective. For example, in calculus, if $f$ is a differentiable function defined on some interval, then it is sufficient to show that the derivative is always positive or always negative on that interval. In linear algebra, if $f$ is a linear transformation, it is sufficient to show that the kernel of $f$ contains only the zero vector. If $f$ is a function with finite domain, it is sufficient to look through the list of images of each domain element and check that no image occurs twice on the list (as in the sketch below).
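A minimal sketch of the finite-domain check just described (names are illustrative):

```python
def is_injective(f, domain):
    """Return True if no image occurs twice as f ranges over the finite domain."""
    seen = set()
    for x in domain:
        y = f(x)
        if y in seen:   # same image reached from two different inputs
            return False
        seen.add(y)
    return True

assert is_injective(lambda x: 2 * x + 3, range(100))        # injective
assert not is_injective(lambda x: x * x, range(-5, 6))      # (-1)**2 == 1**2
```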
A graphical approach for a real-valued function $f$ of a real variable $x$ is the horizontal line test. If every horizontal line intersects the curve of $f(x)$ in at most one point, then $f$ is injective or one-to-one.
Gallery
| Mathematics | Functions: General | null |
45206 | https://en.wikipedia.org/wiki/Submarine%20communications%20cable | Submarine communications cable | A submarine communications cable is a cable laid on the seabed between land-based stations to carry telecommunication signals across stretches of ocean and sea. The first submarine communications cables were laid beginning in the 1850s and carried telegraphy traffic, establishing the first instant telecommunications links between continents, such as the first transatlantic telegraph cable which became operational on 16 August 1858.
Submarine cables first connected all the world's continents (except Antarctica) when Java was connected to Darwin, Northern Territory, Australia, in 1871 in anticipation of the completion of the Australian Overland Telegraph Line in 1872 connecting to Adelaide, South Australia and thence to the rest of Australia.
Subsequent generations of cables carried telephone traffic, then data communications traffic. These early cables used copper wires in their cores, but modern cables use optical fiber technology to carry digital data, which includes telephone, Internet and private data traffic. Modern cables are typically about 25 mm (1 in) in diameter and weigh around 1.4 kg per metre (0.94 lb/ft) for the deep-sea sections which comprise the majority of the run, although larger and heavier cables are used for shallow-water sections near shore.
Early history: telegraph and coaxial cables
First successful trials
After William Cooke and Charles Wheatstone had introduced their working telegraph in 1839, the idea of a submarine line across the Atlantic Ocean began to be thought of as a possible triumph of the future. Samuel Morse proclaimed his faith in it as early as 1840, and in 1842, he submerged a wire, insulated with tarred hemp and India rubber, in the water of New York Harbor, and telegraphed through it. The following autumn, Wheatstone performed a similar experiment in Swansea Bay. A good insulator to cover the wire and prevent the electric current from leaking into the water was necessary for the success of a long submarine line. India rubber had been tried by Moritz von Jacobi, the Prussian electrical engineer, as far back as the early 19th century.
Another insulating gum which could be melted by heat and readily applied to wire made its appearance in 1842. Gutta-percha, the adhesive juice of the Palaquium gutta tree, was introduced to Europe by William Montgomerie, a Scottish surgeon in the service of the British East India Company. Twenty years earlier, Montgomerie had seen whips made of gutta-percha in Singapore, and he believed that it would be useful in the fabrication of surgical apparatus. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. In 1847 William Siemens, then an officer in the army of Prussia, laid the first successful underwater cable using gutta-percha insulation, across the Rhine between Deutz and Cologne. In 1849, Charles Vincent Walker, electrician to the South Eastern Railway, submerged 2 miles (3.2 km) of wire coated with gutta-percha off the coast from Folkestone, which was tested successfully.
First commercial cables
In August 1850, having earlier obtained a concession from the French government, John Watkins Brett's English Channel Submarine Telegraph Company laid the first line across the English Channel, using the converted tugboat Goliath. It was simply a copper wire coated with gutta-percha, without any other protection, and was not successful. However, the experiment served to secure renewal of the concession, and in September 1851, a protected core, or true, cable was laid by the reconstituted Submarine Telegraph Company from a government hulk, Blazer, which was towed across the Channel.
In 1853, more successful cables were laid, linking Great Britain with Ireland, Belgium, and the Netherlands, and crossing The Belts in Denmark. The British & Irish Magnetic Telegraph Company completed the first successful Irish link on May 23 between Portpatrick and Donaghadee using the collier William Hutt. The same ship was used for the link from Dover to Ostend in Belgium, by the Submarine Telegraph Company. Meanwhile, the Electric & International Telegraph Company completed two cables across the North Sea, from Orford Ness to Scheveningen, the Netherlands. These cables were laid by Monarch, a paddle steamer which later became the first vessel with permanent cable-laying equipment.
In 1858, the steamship Elba was used to lay a telegraph cable from Jersey to Guernsey, on to Alderney and then to Weymouth, the cable being completed successfully in September of that year. Problems soon developed with eleven breaks occurring by 1860 due to storms, tidal and sand movements, and wear on rocks. A report to the Institution of Civil Engineers in 1860 set out the problems to assist in future cable-laying operations.
Crimean War (1853–1856)
The Crimean War was the first conflict in which various forms of telegraphy played a major role. At the start of the campaign a telegraph link at Bucharest was connected to London. In the winter of 1854 the French extended the telegraph link to the Black Sea coast, and in April 1855 the British laid an underwater cable from Varna to the Crimean peninsula, so that news of the war could reach London in a handful of hours.
Transatlantic telegraph cable
The first attempt at laying a transatlantic telegraph cable was promoted by Cyrus West Field, who persuaded British industrialists to fund and lay one in 1858. However, the technology of the day was not capable of supporting the project; it was plagued with problems from the outset, and was in operation for only a month. Subsequent attempts in 1865 and 1866 with the world's largest steamship, the SS Great Eastern, used a more advanced technology and produced the first successful transatlantic cable. Great Eastern later went on to lay the first cable reaching to India from Aden, Yemen, in 1870.
British dominance of early cable
From the 1850s until 1911, British submarine cable systems dominated the most important market, the North Atlantic Ocean. The British had both supply side and demand side advantages. In terms of supply, Britain had entrepreneurs willing to put forth enormous amounts of capital necessary to build, lay and maintain these cables. In terms of demand, Britain's vast colonial empire led to business for the cable companies from news agencies, trading and shipping companies, and the British government. Many of Britain's colonies had significant populations of European settlers, making news about them of interest to the general public in the home country.
British officials believed that depending on telegraph lines that passed through non-British territory posed a security risk, as lines could be cut and messages could be interrupted during wartime. They sought the creation of a worldwide network within the empire, which became known as the All Red Line, and conversely prepared strategies to quickly interrupt enemy communications. Britain's very first action after declaring war on Germany in World War I was to have the cable ship Alert (not the CS Telconia as frequently reported) cut the five cables linking Germany with France, Spain and the Azores, and through them, North America. Thereafter, the only way Germany could communicate was by wireless, and that meant that Room 40 could listen in.
The submarine cables were an economic benefit to trading companies, because owners of ships could communicate with captains when they reached their destination and give directions as to where to go next to pick up cargo based on reported pricing and supply information. The British government had obvious uses for the cables in maintaining administrative communications with governors throughout its empire, as well as in engaging other nations diplomatically and communicating with its military units in wartime. The geographic location of British territory was also an advantage as it included both Ireland on the east side of the Atlantic Ocean and Newfoundland in North America on the west side, making for the shortest route across the ocean, which reduced costs significantly.
A few facts put this dominance of the industry in perspective. In 1896, there were 30 cable-laying ships in the world, 24 of which were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide.
Cable to India, Singapore, East Asia and Australia
Throughout the 1860s and 1870s, British cable expanded eastward, into the Mediterranean Sea and the Indian Ocean. An 1863 cable to Bombay (now Mumbai), India, provided a crucial link to Saudi Arabia. In 1870, Bombay was linked to London via submarine cable in a combined operation by four cable companies, at the behest of the British Government. In 1872, these four companies were combined to form the mammoth globe-spanning Eastern Telegraph Company, owned by John Pender. A spin-off from Eastern Telegraph Company was a second sister company, the Eastern Extension, China and Australasia Telegraph Company, commonly known simply as "the Extension." In 1872, Australia was linked by cable to Bombay via Singapore and China and in 1876, the cable linked the British Empire from London to New Zealand.
Submarine cables across the Pacific, 1902-1991
The first trans-Pacific cables providing telegraph service were completed in 1902 and 1903, linking the US mainland to Hawaii in 1902 and Guam to the Philippines in 1903. Canada, Australia, New Zealand and Fiji were also linked in 1902 with the trans-Pacific segment of the All Red Line. Japan was connected into the system in 1906. Service beyond Midway Atoll was abandoned in 1941 due to World War II, but the remainder stayed in operation until 1951 when the FCC gave permission to cease operations.
The first trans-Pacific telephone cable was laid from Hawaii to Japan in 1964, with an extension from Guam to The Philippines. Also in 1964, the Commonwealth Pacific Cable System (COMPAC), with 80 telephone channel capacity, opened for traffic from Sydney to Vancouver, and in 1967, the South East Asia Commonwealth (SEACOM) system, with 160 telephone channel capacity, opened for traffic. This system used microwave radio from Sydney to Cairns (Queensland), cable running from Cairns to Madang (Papua New Guinea), Guam, Hong Kong, Kota Kinabalu (capital of Sabah, Malaysia), Singapore, then overland by microwave radio to Kuala Lumpur. In 1991, the North Pacific Cable system was the first regenerative system (i.e., with repeaters) to completely cross the Pacific from the US mainland to Japan. The US portion of NPC was manufactured in Portland, Oregon, from 1989 to 1991 at STC Submarine Systems, and later Alcatel Submarine Networks. The system was laid by Cable & Wireless Marine on the CS Cable Venture.
Construction, 19th–20th centuries
Transatlantic cables of the 19th century consisted of an outer layer of iron and later steel wire, wrapping India rubber, wrapping gutta-percha, which surrounded a multi-stranded copper wire at the core. The portions closest to each shore landing had additional protective armour wires. Gutta-percha, a natural polymer similar to rubber, had nearly ideal properties for insulating submarine cables, with the exception of a rather high dielectric constant which made cable capacitance high. William Thomas Henley had developed a machine in 1837 for covering wires with silk or cotton thread that he developed into a wire wrapping capability for submarine cable with a factory in 1857 that became W.T. Henley's Telegraph Works Co., Ltd. The India Rubber, Gutta Percha and Telegraph Works Company, established by the Silver family and giving that name to a section of London, furnished cores to Henley's as well as eventually making and laying finished cable. In 1870 William Hooper established Hooper's Telegraph Works to manufacture his patented vulcanized rubber core, at first to furnish other makers of finished cable, that began to compete with the gutta-percha cores. The company later expanded into complete cable manufacture and cable laying, including the building of the first cable ship specifically designed to lay transatlantic cables.
Gutta-percha and rubber were not replaced as a cable insulation until polyethylene was introduced in the 1930s. Even then, the material was only available to the military and the first submarine cable using it was not laid until 1945 during World War II across the English Channel. In the 1920s, the American military experimented with rubber-insulated cables as an alternative to gutta-percha, since American interests controlled significant supplies of rubber but did not have easy access to gutta-percha manufacturers. The 1926 development by John T. Blake of deproteinized rubber improved the impermeability of cables to water.
Many early cables suffered from attack by sea life. The insulation could be eaten, for instance, by species of Teredo (shipworm) and Xylophaga. Hemp laid between the steel wire armouring gave pests a route to eat their way in. Damaged armouring, which was not uncommon, also provided an entrance. Cases of sharks biting cables and attacks by sawfish have been recorded. In one case in 1873, a whale damaged the Persian Gulf Cable between Karachi and Gwadar. The whale was apparently attempting to use the cable to clean off barnacles at a point where the cable descended over a steep drop. The unfortunate whale got its tail entangled in loops of cable and drowned. The cable repair ship Amber Witch was only able to winch up the cable with difficulty, weighed down as it was with the dead whale's body.
Bandwidth problems
Early long-distance submarine telegraph cables exhibited formidable electrical problems. Unlike modern cables, the technology of the 19th century did not allow for in-line repeater amplifiers in the cable. Large voltages were used to attempt to overcome the electrical resistance of their tremendous length but the cables' distributed capacitance and inductance combined to distort the telegraph pulses in the line, reducing the cable's bandwidth, severely limiting the data rate for telegraph operation to 10–12 words per minute.
As early as 1816, Francis Ronalds had observed that electric signals were slowed in passing through an insulated wire or core laid underground, and outlined the cause to be induction, using the analogy of a long Leyden jar. The same effect was noticed by Latimer Clark (1853) on cores immersed in water, and particularly on the lengthy cable between England and The Hague. Michael Faraday showed that the effect was caused by capacitance between the wire and the earth (or water) surrounding it. Faraday had noticed that when a wire is charged from a battery (for example when pressing a telegraph key), the electric charge in the wire induces an opposite charge in the water as it travels along. In 1831, Faraday described this effect in what is now referred to as Faraday's law of induction. As the two charges attract each other, the exciting charge is retarded. The core acts as a capacitor distributed along the length of the cable which, coupled with the resistance and inductance of the cable, limits the speed at which a signal travels through the conductor of the cable.
Early cable designs failed to analyse these effects correctly. Famously, E.O.W. Whitehouse had dismissed the problems and insisted that a transatlantic cable was feasible. When he subsequently became chief electrician of the Atlantic Telegraph Company, he became involved in a public dispute with William Thomson. Whitehouse believed that, with enough voltage, any cable could be driven. Thomson believed that his law of squares showed that retardation could not be overcome by a higher voltage. His recommendation was a larger cable. Because of the excessive voltages recommended by Whitehouse, Cyrus West Field's first transatlantic cable never worked reliably, and eventually short circuited to the ocean when Whitehouse increased the voltage beyond the cable design limit.
Thomson designed a complex electric-field generator that minimized current by resonating the cable, and a sensitive light-beam mirror galvanometer for detecting the faint telegraph signals. Thomson became wealthy on the royalties of these, and several related inventions. Thomson was elevated to Lord Kelvin for his contributions in this area, chiefly an accurate mathematical model of the cable, which permitted design of the equipment for accurate telegraphy. The effects of atmospheric electricity and the geomagnetic field on submarine cables also motivated many of the early polar expeditions.
Thomson had produced a mathematical analysis of propagation of electrical signals into telegraph cables based on their capacitance and resistance, but since long submarine cables operated at slow rates, he did not include the effects of inductance. By the 1890s, Oliver Heaviside had produced the modern general form of the telegrapher's equations, which included the effects of inductance and which were essential to extending the theory of transmission lines to the higher frequencies required for high-speed data and voice.
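For reference, the modern form of the telegrapher's equations mentioned above can be written with $R$, $L$, $G$ and $C$ denoting the cable's series resistance, series inductance, shunt conductance and shunt capacitance per unit length:

$$\frac{\partial V}{\partial x} = -R\,I - L\,\frac{\partial I}{\partial t}, \qquad \frac{\partial I}{\partial x} = -G\,V - C\,\frac{\partial V}{\partial t}.$$

Setting $L = G = 0$ recovers Thomson's earlier model, the diffusion equation $\partial^2 V / \partial x^2 = RC\,\partial V / \partial t$, in which the time for a pulse to traverse the cable grows with the square of its length (the law of squares).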
Transatlantic telephony
While laying a transatlantic telephone cable was seriously considered from the 1920s, the technology required for economically feasible telecommunications was not developed until the 1940s. A first attempt to lay a "pupinized" telephone cable—one with loading coils added at regular intervals—failed in the early 1930s due to the Great Depression.
TAT-1 (Transatlantic No. 1) was the first transatlantic telephone cable system. Between 1955 and 1956, cable was laid between Gallanach Bay, near Oban, Scotland and Clarenville, Newfoundland and Labrador, in Canada. It was inaugurated on September 25, 1956, initially carrying 36 telephone channels.
In the 1960s, transoceanic cables were coaxial cables that transmitted frequency-multiplexed voiceband signals. A high-voltage direct current on the inner conductor powered repeaters (two-way amplifiers placed at intervals along the cable). The first-generation repeaters remain among the most reliable vacuum tube amplifiers ever designed. Later ones were transistorized. Many of these cables are still usable, but have been abandoned because their capacity is too small to be commercially viable. Some have been used as scientific instruments to measure earthquake waves and other geomagnetic events.
Other uses
In 1942, Siemens Brothers of New Charlton, London, in conjunction with the United Kingdom National Physical Laboratory, adapted submarine communications cable technology to create the world's first submarine oil pipeline in Operation Pluto during World War II.
Active fiber-optic cables may be useful in detecting seismic events which alter cable polarization.
Modern history
Optical telecommunications cables
In the 1980s, fiber-optic cables were developed. The first transatlantic telephone cable to use optical fiber was TAT-8, which went into operation in 1988. A fiber-optic cable comprises multiple pairs of fibers. Each pair has one fiber in each direction. TAT-8 had two operational pairs and one backup pair. Except for very short lines, fiber-optic submarine cables include repeaters at regular intervals.
Modern optical fiber repeaters use a solid-state optical amplifier, usually an erbium-doped fiber amplifier (EDFA). Each repeater contains separate equipment for each fiber. These comprise signal reforming, error measurement and controls. A solid-state laser dispatches the signal into the next length of fiber. The solid-state laser excites a short length of doped fiber that itself acts as a laser amplifier. As the light passes through the fiber, it is amplified. This system also permits wavelength-division multiplexing, which dramatically increases the capacity of the fiber. EDFA amplifiers were first used in submarine cables in 1995.
Repeaters are powered by a constant direct current passed down the conductor near the centre of the cable, so all repeaters in a cable are in series. Power feed equipment (PFE) is installed at the terminal stations. Typically both ends share the current generation with one end providing a positive voltage and the other a negative voltage. A virtual earth point exists roughly halfway along the cable under normal operation. The amplifiers or repeaters derive their power from the potential difference across them. The voltage passed down the cable is often anywhere from 3,000 to 15,000 VDC at a current of up to 1,100 mA, with the current increasing with decreasing voltage; the current at 10,000 VDC is up to 1,650 mA. Hence the total amount of power sent into the cable is often up to 16.5 kW.
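These power-feed figures are mutually consistent, since the power delivered at the feed is the product of voltage and current:

$$P = V I = 15{,}000\ \text{V} \times 1.1\ \text{A} = 10{,}000\ \text{V} \times 1.65\ \text{A} = 16.5\ \text{kW}.$$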
The optic fiber used in undersea cables is chosen for its exceptional clarity, permitting runs of more than 100 kilometres (62 mi) between repeaters to minimize the number of amplifiers and the distortion they cause. Unrepeated cables are cheaper than repeated cables and their maximum transmission distance is limited, although this has increased over the years; in 2014 unrepeated cables of up to 380 kilometres (240 mi) in length were in service; however these require unpowered repeaters to be positioned every 100 km.
The rising demand for these fiber-optic cables outpaced the capacity of providers such as AT&T. Having to shift traffic to satellites resulted in lower-quality signals. To address this issue, AT&T had to improve its cable-laying abilities. It invested $100 million in producing two specialized fiber-optic cable laying vessels. These included laboratories in the ships for splicing cable and testing its electrical properties. Such field monitoring is important because the glass of fiber-optic cable is less malleable than the copper cable that had been formerly used. The ships are equipped with thrusters that increase maneuverability. This capability is important because fiber-optic cable must be laid straight from the stern, which was another factor that copper-cable-laying ships did not have to contend with.
Originally, submarine cables were simple point-to-point connections. With the development of submarine branching units (SBUs), more than one destination could be served by a single cable system. Modern cable systems now usually have their fibers arranged in a self-healing ring to increase their redundancy, with the submarine sections following different paths on the ocean floor. One reason for this development was that the capacity of cable systems had become so large that it was not possible to completely back up a cable system with satellite capacity, so it became necessary to provide sufficient terrestrial backup capability. Not all telecommunications organizations wish to take advantage of this capability, so modern cable systems may have dual landing points in some countries (where back-up capability is required) and only single landing points in other countries where back-up capability is either not required, the capacity to the country is small enough to be backed up by other means, or having backup is regarded as too expensive.
A further redundant-path development over and above the self-healing rings approach is the mesh network whereby fast switching equipment is used to transfer services between network paths with little to no effect on higher-level protocols if a path becomes inoperable. As more paths become available to use between two points, it is less likely that one or two simultaneous failures will prevent end-to-end service.
As of 2012, operators had "successfully demonstrated long-term, error-free transmission at 100 Gbps across Atlantic Ocean" routes of up to 6,000 km (3,700 mi), meaning a typical cable can move tens of terabits per second overseas. Speeds improved rapidly in the previous few years, with 40 Gbit/s having been offered on that route only three years earlier in August 2009.
Switching and all-by-sea routing commonly increase the distance and thus the round trip latency by more than 50%. For example, the round trip delay (RTD) or latency of the fastest transatlantic connections is under 60 ms, close to the theoretical optimum for an all-sea route. While in theory, a great circle route (GCP) between London and New York City is only 5,600 km (3,500 mi), this requires several land masses (Ireland, Newfoundland, Prince Edward Island and the isthmus connecting New Brunswick to Nova Scotia) to be traversed, as well as the extremely tidal Bay of Fundy and a land route along Massachusetts' north shore from Gloucester to Boston and through fairly built up areas to Manhattan itself. In theory, using this partial land route could result in round trip times below 40 ms (the speed-of-light minimum time), not counting switching. Along routes with less land in the way, round trip times can approach speed-of-light minimums in the long term.
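As a rough check on these figures (taking the 5,600 km great-circle distance above, $c \approx 3.0 \times 10^5$ km/s, and an assumed typical fibre group index of about 1.47):

$$t_{\text{vacuum}} = \frac{2 \times 5{,}600\ \text{km}}{3.0 \times 10^5\ \text{km/s}} \approx 37\ \text{ms}, \qquad t_{\text{fibre}} \approx 1.47 \times 37\ \text{ms} \approx 55\ \text{ms},$$

consistent with sub-60 ms round trips on the fastest all-sea routes and with roughly 40 ms as the free-space lower bound for a partly overland path.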
The type of optical fiber used in unrepeated and very long cables is often PCSF (pure silica core) due to its low loss of 0.172 dB per kilometer when carrying 1550 nm wavelength laser light. The large chromatic dispersion of PCSF means that its use requires transmission and receiving equipment designed with this in mind; this property can also be used to reduce interference when transmitting multiple channels through a single fiber using wavelength division multiplexing (WDM), which allows for multiple optical carrier channels to be transmitted through a single fiber, each carrying its own information. WDM is limited by the optical bandwidth of the amplifiers used to transmit data through the cable and by the spacing between the frequencies of the optical carriers, which is itself limited, often to a minimum of 50 GHz (0.4 nm). The use of WDM can reduce the maximum length of the cable, although this can be overcome by designing equipment with this in mind.
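As a worked example of the quoted attenuation, the loss over a nominal 100 km unamplified span at 1550 nm would be

$$0.172\ \tfrac{\text{dB}}{\text{km}} \times 100\ \text{km} = 17.2\ \text{dB},$$

i.e. the launched power is reduced by a factor of $10^{17.2/10} \approx 52$ before the next amplification stage.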
Optical post-amplifiers, used to increase the strength of the signal generated by the optical transmitter, often use a diode-pumped erbium-doped fiber laser. The pump diode is often a high-power 980 or 1480 nm laser diode. This setup allows for an amplification of up to +24 dBm in an affordable manner. Using an erbium-ytterbium doped fiber instead allows for a gain of +33 dBm; however, again the amount of power that can be fed into the fiber is limited. In single-carrier configurations the dominating limitation is self-phase modulation induced by the Kerr effect, which limits the amplification to +18 dBm per fiber. In WDM configurations the limitation due to cross-phase modulation becomes predominant instead. Optical pre-amplifiers are often used to negate the thermal noise of the receiver. Pumping the pre-amplifier with a 980 nm laser leads to a noise of at most 3.5 dB, with a noise of 5 dB usually obtained with a 1480 nm laser. The noise has to be filtered out using optical filters.
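The dBm figures quoted here are decibels referenced to 1 mW, converting as $P_{\text{mW}} = 10^{P_{\text{dBm}}/10}$; for example,

$$+24\ \text{dBm} = 10^{2.4}\ \text{mW} \approx 250\ \text{mW}, \qquad +33\ \text{dBm} = 10^{3.3}\ \text{mW} \approx 2\ \text{W}.$$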
Raman amplification can be used to extend the reach or the capacity of an unrepeatered cable, by launching 2 frequencies into a single fiber; one carrying data signals at 1550 nm, and the other pumping them at 1450 nm. Launching a pump frequency (pump laser light) at a power of just one watt leads to an increase in reach of 45 km or a 6-fold increase in capacity.
Another way to increase the reach of a cable is by using unpowered repeaters called remote optical pre-amplifiers (ROPAs); these still make a cable count as unrepeatered since the repeaters do not require electrical power but they do require a pump laser light to be transmitted alongside the data carried by the cable; the pump light and the data are often transmitted in physically separate fibers. The ROPA contains a doped fiber that uses the pump light (often a 1480 nm laser light) to amplify the data signals carried on the rest of the fibers.
WDM or wavelength division multiplexing was first implemented in submarine fiber optic cables from the 1990s to the 2000s, followed by DWDM or dense wavelength division multiplexing around 2007. Each fiber can carry 30 wavelengths at a time. SDM or spatial division multiplexing submarine cables have at least 12 fiber pairs, an increase from the maximum of 8 pairs found in conventional submarine cables, and submarine cables with up to 24 fiber pairs have been deployed. The type of modulation employed in a submarine cable can have a major impact on its capacity. SDM is combined with DWDM to improve capacity.
Transponders are used to send data through the cable. The open cable concept allows for the design of a submarine cable independently of the transponders that will be used to transmit data through the cable. SLTE (Submarine Line Terminal Equipment) has transponders and a ROADM (Reconfigurable optical add-drop multiplexer) used for handling the signals in the cable via software control. The ROADM is used to improve the reliability of the cable by allowing it to operate even if it has faults. This equipment is located inside a cable landing station (CLS). C-OTDR (Coherent Optical Time Domain Reflectometry) is used in submarine cables to detect the location of cable faults. The wet plant of a submarine cable comprises the cable itself, branching units, repeaters and possibly OADMs (Optical add-drop multiplexers).
Investment and finances
A typical multi-terabit, transoceanic submarine cable system costs several hundred million dollars to construct. Almost all fiber-optic cables from TAT-8 in 1988 until approximately 1997 were constructed by consortia of operators. For example, TAT-8 counted 35 participants including most major international carriers at the time such as AT&T Corporation. Two privately financed, non-consortium cables were constructed in the late 1990s, which preceded a massive, speculative rush to construct privately financed cables that peaked in more than $22 billion worth of investment between 1999 and 2001. This was followed by the bankruptcy and reorganization of cable operators such as Global Crossing, 360networks, FLAG, Worldcom, and Asia Global Crossing. Tata Communications' Global Network (TGN) is the only wholly owned fiber network circling the planet.
Most cables in the 20th century crossed the Atlantic Ocean, to connect the United States and Europe. However, capacity in the Pacific Ocean was much expanded starting in the 1990s. For example, between 1998 and 2003, approximately 70% of undersea fiber-optic cable was laid in the Pacific. This is in part a response to the emerging significance of Asian markets in the global economy.
After decades of heavy investment in already developed markets such as the transatlantic and transpacific routes, efforts increased in the 21st century to expand the submarine cable network to serve the developing world. For instance, in July 2009, an underwater fiber-optic cable line plugged East Africa into the broader Internet. The company that provided this new cable was SEACOM, which is 75% owned by East African and South African investors. The project was delayed by a month due to increased piracy along the coast.
Investments in cables present a commercial risk because a cable may cover 6,200 km of ocean floor and cross submarine mountain ranges and rifts. Because of this, most companies only purchase capacity after the cable is finished.
Antarctica
Antarctica is the only continent not yet reached by a submarine telecommunications cable. Phone, video, and e-mail traffic must be relayed to the rest of the world via satellite links that have limited availability and capacity. Bases on the continent itself are able to communicate with one another via radio, but this is only a local network. To be a viable alternative, a fiber-optic cable would have to be able to withstand temperatures of −80 °C (−112 °F) as well as massive strain from ice flowing up to 10 metres (33 ft) per year. Thus, plugging into the larger Internet backbone with the high bandwidth afforded by fiber-optic cable is still an as-yet infeasible economic and technical challenge in the Antarctic.
Arctic
The climate change induced melting of Arctic ice has provided the opportunity to lay new cable networks, linking continents and remote regions. Several projects are underway in the Arctic including 12,650 km "Polar Express" and 14,500 km Far North Fiber. However, scholars have raised environmental concerns about the laying of submarine cables in the region and the general lack of a nuanced regulatory framework. Environmental concerns pertain both to ice-related hazards damaging the cables, and cable installation disturbing the seabed or electromagnetic fields and thermal radiation of the cables impacting sensitive organisms.
Importance of submarine cables
Submarine cables, while often perceived as ‘insignificant’ parts of communication infrastructure as they lie “hidden” in the seabed, are an essential infrastructure in the digital era, carrying 99% of the data traffic across the oceans. This data includes all internet traffic, military transmissions, and financial transactions.
The total carrying capacity of a submarine cable is in the terabits per second, while a satellite typically offers only about 1 gigabit per second, a ratio of more than 1,000 to 1. Satellites handle less than 5% (by some estimates as little as 0.5%) of global data transmission, and are less efficient, slower, and more expensive. Therefore, satellites are often considered only for remote areas with challenging conditions for laying submarine cables. Submarine cables are thus the essential technical infrastructure for all internet communication.
National security
As a result of these cables' cost and usefulness, they are highly valued not only by the corporations building and operating them for profit, but also by national governments. For instance, the Australian government considers its submarine cable systems to be "vital to the national economy". Accordingly, the Australian Communications and Media Authority (ACMA) has created protection zones that restrict activities that could potentially damage cables linking Australia to the rest of the world. The ACMA also regulates all projects to install new submarine cables.
Due to their critical role, disruptions to these cables can lead to communication blackouts and, thus, extensive economic losses. The impact of such disruptions is often exemplified by the 2022 Tonga volcanic eruption that severed the island's only submarine cable and thus connectivity to the rest of the world for several days. The cable break was declared a “national crisis,” and repairs took several weeks, leaving Tonga largely isolated during a crucial period for disaster response.
Submarine cable infrastructure may even have additional technical advantages, such as carrying SMART environmental sensors supporting national disaster early warning systems. Furthermore, the cables are predicted to become even more critical with growing demands from 5G networks, the ‘Internet of Things’ (IoT), and artificial intelligence on large data transfers.
International security
Submarine communication cables are a critical infrastructure within the context of international security. Transmitting massive amounts of sensitive data every day, they are essential for both state operations and private enterprises. One of the catalysts for the amount and sensitivity of data flowing through these cables has been the global rise of cloud computing.
The U.S. military, for example, uses the submarine cable network for data transfer from conflict zones to command staff in the United States (U.S.). Interruption of the cable network during intense operations could have direct consequences for the military on the ground.
The criticality of cable services makes their geopolitical influence profound. Scholars argue that state dominance in cable networks can exert political pressure, or shape global internet governance.
An example of such state dominance in the global cable infrastructure is China’s ‘Digital Silk Road’ strategy, which funds the expansion of Chinese cable networks; the Chinese company HMN Technologies, often criticised for providing networks for other states, holds up to 10% of the global market share. Some critics argue that Chinese investment in critical cable infrastructure, amounting to involvement in approximately 25% of global submarine cables, such as the PEACE cable linking East Africa and Europe, may enable China to reroute data traffic through its own networks and thus apply political pressure. The strategy is countered by the U.S., which supports alternative projects.
Vulnerabilities of submarine cables to organized crime
Submarine cables are exposed to a variety of potential threats. Many of these threats are accidental, such as by fishing trawlers, ship anchors, earthquakes, turbidity currents, and even shark bites.
Based on surveying breaks in the Atlantic Ocean and the Caribbean Sea, it was found that between 1959 and 1996, fewer than 9% were due to natural events. In response to this threat to the communications network, the practice of cable burial has developed. The average incidence of cable faults was 3.7 per 1,000 km per year from 1959 to 1979. That rate was reduced to 0.44 faults per 1,000 km per year after 1985, due to widespread burial of cable starting in 1980.
Still, cable breaks are by no means a thing of the past, with more than 50 repairs a year in the Atlantic Ocean alone, and significant breaks in 2006, 2008, 2009 and 2011.
Several vulnerabilities of submarine communication cables make them attractive targets for organized crime. The following sections explore these vulnerabilities and currently proposed countermeasures from different perspectives.
Technical perspective
Technical vulnerabilities
The remoteness of these cables in international waters poses significant challenges for continuous monitoring and increases their attractiveness as targets of physical tampering, data theft, and service disruptions.
The cables' vulnerability is further compounded by technological advancements, such as the development of Unmanned Underwater Vehicles (UUVs), which enable covert cable damage while avoiding detection. However, even low-tech attacks can impact the cable's security significantly, as demonstrated in 2013, when three divers were arrested for severing the main cable linking Egypt with Europe, drastically lowering Egypt's internet speed.
Even in shallow waters, cables remain exposed to risks, as illustrated in the context of the Korea Strait. Such sea passages are often marked as ‘maritime choke points’ where several nations have conflicting interests, increasing the risk of harm from shipping activities and disputes.
Further, most cable locations are publicly available, making them an easy target for criminal acts such as disrupting services or stealing cable materials, which can potentially lead to substantial communication blackouts. The theft of submarine cable has been reported in Vietnam, where more than 11 km of cable went missing in 2007; according to media reports, it was later presumed to have been found on fishing boats, the theft attributed to the incentive to sell the cable materials.
Technical countermeasures
Typically, cables are buried in waters with a depth of less than 2,000 meters, but increasingly, they are buried in deeper seabed as a means of protection against high-seas fishing and bottom trawling. This may also be advantageous against physical attacks from organized crime.
Further technical solutions include advanced protective casings and monitoring the cables with, e.g., UUVs. Such technical solutions, however, can be challenging to implement and are limited in the remote areas of the high sea. Other proposed solutions include spatial modelling through protective or safety zones and penalties, increasing resources for surveillance, and a more collaborative approach between states and the private sector. However, how to implement and enforce these solutions remains to be determined. The cables' remoteness thus complicates both physical attacks and their protection.
Cable repair
Shore stations can locate a break in a cable by electrical measurements, such as through spread-spectrum time-domain reflectometry (SSTDR), a type of time-domain reflectometry that can be used in live environments very quickly. Presently, SSTDR can collect a complete data set in 20 ms. Spread-spectrum signals are sent down the wire and then the reflected signal is observed. It is then correlated with the copy of the sent signal and algorithms are applied to the shape and timing of the signals to locate the break.
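A minimal illustration of the correlation step in Python; the probe signal, sample rate, propagation speed and all names are assumptions for this toy model, not an actual SSTDR implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1e6                                      # assumed sample rate, Hz
probe = rng.choice([-1.0, 1.0], size=512)     # pseudo-random, spread-spectrum-like probe
true_delay = 700                              # samples until the echo returns

# Simulate the observed line signal: an attenuated, delayed echo plus noise.
received = np.zeros(4096)
received[true_delay:true_delay + probe.size] += 0.3 * probe
received += 0.05 * rng.standard_normal(received.size)

# Correlate with the stored copy of the sent signal; the peak marks the echo delay.
corr = np.correlate(received, probe, mode="valid")
delay = int(np.argmax(corr))

v = 2.0e8                                     # assumed propagation speed in the cable, m/s
distance_m = v * (delay / fs) / 2             # halved: the echo travels out and back
print(f"delay: {delay} samples, estimated distance to fault: {distance_m / 1000:.1f} km")
```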
A cable repair ship will be sent to the location to drop a marker buoy near the break. Several types of grapples are used depending on the situation. If the sea bed in question is sandy, a grapple with rigid prongs is used to plough under the surface and catch the cable. If the cable is on a rocky sea surface, the grapple is more flexible, with hooks along its length so that it can adjust to the changing surface. In especially deep water, the cable may not be strong enough to lift as a single unit, so a special grapple that cuts the cable soon after it has been hooked is used and only one length of cable is brought to the surface at a time, whereupon a new section is spliced in. The repaired cable is longer than the original, so the excess is deliberately laid in a "U" shape on the seabed. A submersible can be used to repair cables that lie in shallower waters.
A number of ports near important cable routes became homes to specialized cable repair ships. Halifax, Nova Scotia, was home to a half dozen such vessels for most of the 20th century including long-lived vessels such as the CS Cyrus West Field, CS Minia and CS Mackay-Bennett. The latter two were contracted to recover victims from the sinking of the RMS Titanic. The crews of these vessels developed many new techniques and devices to repair and improve cable laying, such as the "plough".
Cybersecurity perspective
Cyber vulnerabilities
Increasingly sophisticated cyber-attacks threaten the data traffic on the cables, with motives ranging from financial gain to espionage or extortion by either state or non-state actors. Further, hybrid warfare tactics can interfere with or even weaponize the data transferred by the cables. For example, low-intensity cyber-attacks can be employed for ransomware, data manipulation and theft, opening up a new opportunity for the use of cybercrime and grey-zone tactics in interstate disputes.
The lack of binding international cybersecurity standards may create a gap in dealing with cyber-enabled sabotage that can be exploited by organized crime. However, attributing an incident to a specific actor, or to that actor's motivation, can be challenging, especially in cyberspace.
Cyber espionage and Intelligence-gathering
The rising sophistication of cyberattacks underscores the vulnerability of submarine cables to cyberespionage, ultimately complicating their security. Techniques like cable tapping, hacking into network management systems, and targeting cable landing stations enable covert data access by intelligence agencies, with Russia, the U.S., and the United Kingdom (U.K.) noted as primary players.
These activities are driven by both strategic and economic motives, with advancements in technology making interception and data manipulation more effective and difficult to detect. Recent technological advancements increasing the vulnerability include the use of remote access portals and remote network management systems centralizing control over components, enabling attackers to monitor traffic and potentially disrupt data flows.
Intelligence-gathering techniques have been deployed since the late 19th century. Frequently at the beginning of wars, nations have cut the cables of the other side to redirect the information flow into cables that were being monitored. The most ambitious efforts occurred in World War I, when British and German forces systematically attempted to destroy each other's worldwide communications systems by cutting their cables with surface ships or submarines.
During the Cold War, the United States Navy and National Security Agency (NSA) succeeded in placing wire taps on Soviet underwater communication lines in Operation Ivy Bells.
These historical intelligence-gathering techniques were eventually countered with technological advancements like the widespread use of end-to-end encryption minimizing the threat of wire tapping.
Cybersecurity countermeasures
Cybersecurity strategies for submarine cables, such as encryption, access controls, and continuous monitoring, primarily focus on preventing unauthorized data access but do not adequately address the physical protection of cables in vulnerable, remote, high-sea areas as stated above.
As a result, while cybersecurity protocols are effective near coastal landing points, their enforcement across vast stretches of the open ocean becomes a challenge. To address these limitations, experts suggest a broader, multi-layered approach that integrates physical security measures with international cooperation and legal frameworks, especially given the jurisdictional ambiguities in international waters.
Multilateral agreements to establish cybersecurity standards specific to submarine cables are highlighted as critical. These agreements can help bridge the jurisdictional ambiguities and often resulting enforcement gaps in international waters, which ultimately hinder effective protection and are frequently exploited by organized crime.
Some scholars advocate for heightened European Union (E.U.) coordination, recommending improvements in surveillance and response capabilities across various agencies, such as the Coast Guard and specific telecommunication regulators. Given the central role of private companies in cable ownership, some experts also underscore the need for stronger collaboration between governments and tech firms to pool resources and develop more innovative security measures tailored to this critical infrastructure.
Geopolitical perspective
Geopolitical vulnerabilities
Fishing vessels are the leading cause of accidental damage to submarine communication cables. However, academic discussion and recent incidents point to geopolitical tactics influencing cable security more than previously expected. Such tactics exploit the ease with which fishing vessels can blend into regular maritime traffic while carrying out attacks.
The propensity for fishing trawler nets to cause cable faults may well have been exploited during the Cold War. For example, in February 1959, a series of 12 breaks occurred in five American trans-Atlantic communications cables. In response, a U.S. naval vessel, the USS Roy O. Hale, detained and investigated the Soviet trawler Novorosiysk. A review of the ship's log indicated it had been in the region of each of the cables when they broke. Broken sections of cable were also found on the deck of the Novorosiysk. It appeared that the cables had been dragged along by the ship's nets, and then cut once they were pulled up onto the deck to release the nets. The Soviet Union's stance on the investigation was that it was unjustified, but the U.S. cited the Convention for the Protection of Submarine Telegraph Cables of 1884 to which Russia had signed (prior to the formation of the Soviet Union) as evidence of violation of international protocol.
Several media outlets and organizations indicate that Russian fishing vessels, particularly in 2022, passed over a damaged submarine cable up to 20 times, suggesting potential political motives and the possibility of hybrid warfare tactics on Russia's part. Russian naval activity near submarine cables is often linked to hybrid warfare strategies in which sabotage is argued to serve as a tool to disrupt communication networks during conflict and destabilise adversaries.
These tactics elevate cable security to a significant geopolitical issue. Criminal actors may further target cables as a means of economic warfare, aiming to destabilize economies or convey political messages. The disruption of submarine communication cables in highly politicised maritime areas thus has a significant political component that is receiving increased attention.
After two cable breaks in the Baltic Sea in November 2024, one between Lithuania and Sweden and the other between Finland and Germany, Defence Minister Boris Pistorius argued:
“No one believes that these cables were cut accidentally. I also don't want to believe in versions that these were ship anchors that accidentally caused the damage. Therefore, we have to state, without knowing specifically who it came from, that it is a 'hybrid' action. And we also have to assume, without knowing it yet, that it is sabotage."
This statement underlines the current discourse recognizing cable disruptions as threats to national security, which ultimately leads to their securitization in the international context.
Geopolitical risks and countermeasures
Submarine cables are inherently vulnerable to transnational threats like organized crime. International collaboration to address these threats tends to fall to existing organizations with a cable-specific focus, such as the International Cable Protection Committee (ICPC), which represent key submarine stakeholders and play a vital role in promoting cooperation and information sharing among stakeholders. Such organizations are argued to be crucial for developing and implementing a comprehensive and coordinated global strategy for cable security.
As of 2025, a tense U.S.-China relationship complicates this task, especially in the South China Sea, where there are territorial disputes. China has increasing control and influence over global cable networks, while both it and the U.S. financially support allied-owned cable projects and exert diplomatic pressure and regulatory action, e.g. against Vietnam.
In light of the sabotage of the Nord Stream pipelines in the Baltic Sea, where subsea infrastructure vital to Germany and Russia was physically destroyed, and of other incidents there, NATO has increased patrols and monitoring operations.
Legal perspective
Legal vulnerabilities
Submarine cables are internationally regulated within the framework of the United Nations Convention on the Law of the Sea (UNCLOS), in particular through the provisions of Articles 87, 112 and 115, which mandate the freedom to lay cables in international waters and beyond the continental shelf and reward measures taken to protect cables against shipping accidents.
However, submarine cables face significant legal challenges: UNCLOS lacks specific legal protections and enforcement mechanisms against emerging threats, particularly in international waters. This is further complicated by the non-ratification of the treaty by key states such as the U.S. and Turkey. Many countries lack explicit legal provisions to criminalize the destruction or theft of undersea cables, creating jurisdictional ambiguities that organized crime can exploit. Other legal frameworks, such as the 1884 Convention for the Protection of Submarine Telegraph Cables, are outdated and fail to address modern threats like cyberattacks and hybrid warfare tactics. The unclear jurisdiction and weak enforcement mechanisms demonstrate the difficulty of protecting submarine cables from organized crime.
The Arctic Ocean in particular exemplifies the challenges associated with surveillance and enforcement in vast and remote areas, leaving a legal vacuum that criminals may exploit. In the Arctic, the absence of a central international authority to oversee submarine cable protection and the reliance on military organizations like NATO hinder coordinated global responses.
Organizations such as the ICPC thus highlight the need for updated and more comprehensive legal frameworks to ensure the security of submarine cables.
Legal countermeasures
The legal challenges of protecting submarine cables from organized crime have resulted in recommendations ranging from treaty amendments to domestic law reforms and multi-level governance models.
Some scholars argue that UNCLOS should be updated to protect cables more extensively, including through cooperative monitoring and enforcement protocols. Additionally, principles from the law of the sea, state responsibility, and the laws on the use of force could be creatively applied to strengthen protections for cables. Enforcement issues could be tackled by aligning domestic laws with UNCLOS, implementing national response protocols, and creating streamlined points of contact for cable incidents. Given the increased involvement of organizations like NATO, others recommend clarifying the roles of military and non-military actors in cable security and enhancing multi-level governance models.
While these proposed legal solutions seem promising, their practical implementation remains a challenge due to the complexity of international treaties, the need for international cooperation, the lack of domestic criminalization of cable damage, and the evolving nature of technological threats. Additionally, while UNCLOS's ambiguous jurisdiction in international waters hinders effective enforcement, limited political interest seems to hamper treaty development.
Environmental impact
The presence of cables in the oceans can be a danger to marine life. With the proliferation of cable installations and the increasing interconnectivity that today's society demands, the environmental impact is growing.
Submarine cables can impact marine life in a number of ways.
Alteration of the seabed
Seabed ecosystems can be disturbed by the installation and maintenance of cables. The effects of cable installation are generally limited to specific areas. The intensity of disturbance depends on the installation method.
Cables are often laid in the so-called benthic zone of the seabed. The benthic zone is the ecological region at the bottom of the sea where benthos such as clams and crabs live, and where the surface sediments, which are deposits of matter and particles in the water that provide a habitat for marine species, are located.
Sediments can be disturbed by cable installation when trenches are dug with water jets or ploughs. This can lead to reworking of the sediments, altering the substrate of which they are composed.
According to several studies, the biota of the benthic zone is only slightly affected by the presence of cables. However, cables can trigger behavioral disturbances in living organisms. The main observation is that cables provide a hard substrate for anemone attachment. These organisms are found in large numbers around cables that run through soft sediments, which are not normally suitable for them. This is also the case for flatfish. Although little observed, the presence of cables can also change the water temperature and therefore disturb the surrounding natural habitat.
However, these disturbances are not very persistent over time, and can stabilize within a few days. Cable operators are trying to implement measures to route cables in such a way as to avoid areas with sensitive and vulnerable ecosystems.
Entanglement
Entanglement of marine animals in cables is one of the main causes of cable damage. Whales, particularly sperm whales, are the main animals that become entangled in cables and damage them. The encounter between these animals and cables can cause injury and sometimes death. Studies carried out between 1877 and 1955 reported 16 cable ruptures caused by whale entanglement, 13 of them by sperm whales. Between 1907 and 2006, 39 such events were recorded. Cable burial techniques are gradually being introduced to prevent such incidents.
The risk of fishing
Although submarine cables are located on the seabed, fishing activity can damage them. Fishermen using techniques that involve scraping the seabed, or dragging equipment such as trawls or cages, can damage the cables, resulting in leakage of the fluids and the chemical and toxic materials that the cables contain.
Areas with a high density of submarine cables, however, tend to be safer from fishing: thanks to limitations and bans on fishing around cables, marine fauna is better protected in these maritime regions, even at the expense of benthic and sedimentary zones. Studies have shown a positive effect on the fauna surrounding cable installation zones.
Pollution
Submarine cables are made of copper or optical fibers, surrounded by several protective layers of plastic, wire or synthetic materials. Cables can also be composed of dielectric fluids or hydrocarbon fluids, which act as electrical insulators. These substances can be harmful to marine life.
Fishing, aging cables and marine species that collide with or become entangled in cables can damage cables and spread toxic and harmful substances into the sea. However, the impact of submarine cables is limited compared with other sources of ocean pollution.
There is also a risk of releasing pollutants buried in sediments. When sediments are re-suspended due to the installation of cables, toxic substances such as hydrocarbons may be released.
Preliminary analyses can assess the level of sediment toxicity and inform the selection of a cable route that avoids the remobilization and dispersion of sediment pollutants. New, more modern techniques will also make it possible to use less polluting materials for cable construction.
Sound waves and electromagnetic waves
The installation and maintenance of cables requires machinery and equipment that can generate sound waves or electromagnetic waves, which can disturb animals that rely on such waves to orient themselves or to communicate. Underwater sound levels depend on the equipment used, the characteristics of the seabed area where the cables are located, and the relief of the area.
Underwater noise and waves can modify the behavior of certain underwater species, for example by disrupting migration, communication or reproduction. Available information indicates that underwater noise generated by submarine cable engineering operations has a limited acoustic footprint and limited duration.
| Technology | Telecommunications | null |
45207 | https://en.wikipedia.org/wiki/Communications%20satellite | Communications satellite | A communications satellite is an artificial satellite that relays and amplifies radio telecommunication signals via a transponder; it creates a communication channel between a source transmitter and a receiver at different locations on Earth. Communications satellites are used for television, telephone, radio, internet, and military applications. Many communications satellites are in geostationary orbit above the equator, so that the satellite appears stationary at the same point in the sky; therefore the satellite dish antennas of ground stations can be aimed permanently at that spot and do not have to move to track the satellite. Others form satellite constellations in low Earth orbit, where antennas on the ground have to follow the position of the satellites and switch between satellites frequently.
The radio waves used for telecommunications links travel by line of sight and so are obstructed by the curve of the Earth. The purpose of communications satellites is to relay the signal around the curve of the Earth allowing communication between widely separated geographical points. Communications satellites use a wide range of radio and microwave frequencies. To avoid signal interference, international organizations have regulations for which frequency ranges or "bands" certain organizations are allowed to use. This allocation of bands minimizes the risk of signal interference.
History
Origins
In October 1945, Arthur C. Clarke published an article titled "Extraterrestrial Relays" in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits to relay radio signals. Because of this, Arthur C. Clarke is often quoted as being the inventor of the concept of the communications satellite, and the term 'Clarke Belt' is employed as a description of the orbit.
The first artificial Earth satellite was Sputnik 1, which was put into orbit by the Soviet Union on 4 October 1957. It was developed by Mikhail Tikhonravov and Sergey Korolev, building on work by Konstantin Tsiolkovsky. Sputnik 1 was equipped with an on-board radio transmitter that worked on two frequencies of 20.005 and 40.002 MHz, or 7 and 15 meters wavelength. The satellite was not placed in orbit to send data from one point on Earth to another, but the radio transmitter was meant to study the properties of radio wave distribution throughout the ionosphere. The launch of Sputnik 1 was a major step in the exploration of space and rocket development, and marks the beginning of the Space Age.
Early active and passive satellite experiments
There are two major classes of communications satellites, passive and active. Passive satellites only reflect the signal coming from the source, toward the direction of the receiver. With passive satellites, the reflected signal is not amplified at the satellite, and only a small amount of the transmitted energy actually reaches the receiver. Since the satellite is so far above Earth, the radio signal is attenuated due to free-space path loss, so the signal received on Earth is very weak. Active satellites, on the other hand, amplify the received signal before retransmitting it to the receiver on the ground. Passive satellites were the first communications satellites, but are little used now.
Work that was begun in the field of electrical intelligence gathering at the United States Naval Research Laboratory in 1951 led to a project named Communication Moon Relay. Military planners had long shown considerable interest in secure and reliable communications lines as a tactical necessity, and the ultimate goal of this project was the creation of the longest communications circuit in human history, with the Moon, Earth's natural satellite, acting as a passive relay. After achieving the first transoceanic communication between Washington, D.C., and Hawaii on 23 January 1956, this system was publicly inaugurated and put into formal production in January 1960.
The first satellite purpose-built to actively relay communications was Project SCORE, led by Advanced Research Projects Agency (ARPA) and launched on 18 December 1958, which used a tape recorder to carry a stored voice message, as well as to receive, store, and retransmit messages. It was used to send a Christmas greeting to the world from U.S. President Dwight D. Eisenhower. The satellite also executed several realtime transmissions before the non-rechargeable batteries failed on 30 December 1958 after eight hours of actual operation.
The direct successor to SCORE was another ARPA-led project called Courier. Courier 1B was launched on 4 October 1960 to explore whether it would be possible to establish a global military communications network by using "delayed repeater" satellites, which receive and store information until commanded to rebroadcast them. After 17 days, a command system failure ended communications from the satellite.
NASA's satellite applications program launched the first artificial satellite used for passive relay communications in Echo 1 on 12 August 1960. Echo 1 was an aluminized balloon satellite acting as a passive reflector of microwave signals. Communication signals were bounced off the satellite from one point on Earth to another. This experiment sought to establish the feasibility of worldwide broadcasts of telephone, radio, and television signals.
More firsts and further experiments
Telstar was the first active, direct relay communications commercial satellite and marked the first transatlantic transmission of television signals. Belonging to AT&T as part of a multi-national agreement between AT&T, Bell Telephone Laboratories, NASA, the British General Post Office, and the French National PTT (Post Office) to develop satellite communications, it was launched by NASA from Cape Canaveral on 10 July 1962, in the first privately sponsored space launch.
Another passive relay experiment primarily intended for military communications purposes was Project West Ford, which was led by Massachusetts Institute of Technology's Lincoln Laboratory. After an initial failure in 1961, a launch on 9 May 1963 dispersed 350 million copper needle dipoles to create a passive reflecting belt. Even though only about half of the dipoles properly separated from each other, the project was able to successfully experiment and communicate using frequencies in the SHF X band spectrum.
An immediate antecedent of the geostationary satellites was the Hughes Aircraft Company's Syncom 2, launched on 26 July 1963. Syncom 2 was the first communications satellite in a geosynchronous orbit. It revolved around the Earth once per day at constant speed, but because it still had north–south motion, special equipment was needed to track it. Its successor, Syncom 3, launched on 19 August 1964, was the first geostationary communications satellite. Syncom 3 obtained a geosynchronous orbit, without a north–south motion, making it appear from the ground as a stationary object in the sky.
A direct extension of the passive experiments of Project West Ford was the Lincoln Experimental Satellite program, also conducted by the Lincoln Laboratory on behalf of the United States Department of Defense. The LES-1 active communications satellite was launched on 11 February 1965 to explore the feasibility of active solid-state X band long-range military communications. A total of nine satellites were launched between 1965 and 1976 as part of this series.
International commercial satellite projects
In the United States, 1962 saw the creation of the Communications Satellite Corporation (COMSAT) private corporation, which was subject to instruction by the US Government on matters of national policy. Over the next two years, international negotiations led to the Intelsat Agreements, which in turn led to the launch of Intelsat 1, also known as Early Bird, on 6 April 1965, and which was the first commercial communications satellite to be placed in geosynchronous orbit. Subsequent Intelsat launches in the 1960s provided multi-destination service and video, audio, and data service to ships at sea (Intelsat 2 in 1966–67), and the completion of a fully global network with Intelsat 3 in 1969–70. By the 1980s, with significant expansions in commercial satellite capacity, Intelsat was on its way to becoming part of the competitive private telecommunications industry, and had started to get competition from the likes of PanAmSat in the United States, which, ironically, was then bought by its archrival in 2005.
When Intelsat was launched, the United States was the only launch source outside of the Soviet Union, which did not participate in the Intelsat agreements. The Soviet Union launched its first communications satellite on 23 April 1965 as part of the Molniya program. This program was also unique at the time for its use of what then became known as the Molniya orbit, which describes a highly elliptical orbit, with two high apogees daily over the northern hemisphere. This orbit provides a long dwell time over Russian territory as well as over Canada at higher latitudes than geostationary orbits over the equator.
In the 2020s, the popularity of low Earth orbit satellite internet constellations providing relatively low-cost internet services reduced demand for new geostationary orbit communications satellites.
Satellite orbits
Communications satellites usually have one of three primary types of orbit, while other orbital classifications are used to further specify orbital details. MEO and LEO are non-geostationary orbits (NGSO).
Geostationary satellites have a geostationary orbit (GEO), which is approximately 35,786 km (22,236 mi) from Earth's surface. This orbit has the special characteristic that the apparent position of the satellite in the sky when viewed by a ground observer does not change: the satellite appears to "stand still" in the sky. This is because the satellite's orbital period is the same as the rotation rate of the Earth. The advantage of this orbit is that ground antennas do not have to track the satellite across the sky; they can be fixed to point at the location in the sky where the satellite appears.
Medium Earth orbit (MEO) satellites are closer to Earth. Orbital altitudes range from about 2,000 to 35,786 km (1,243 to 22,236 mi) above Earth.
The region below medium orbits is referred to as low Earth orbit (LEO), and extends up to about 2,000 km (1,243 mi) above Earth.
As satellites in MEO and LEO orbit the Earth faster, they do not remain visible in the sky to a fixed point on Earth continually like a geostationary satellite, but appear to a ground observer to cross the sky and "set" when they go behind the Earth beyond the visible horizon. Therefore, to provide continuous communications capability with these lower orbits requires a larger number of satellites, so that one of these satellites will always be visible in the sky for transmission of communication signals. However, due to their closer distance to the Earth, LEO or MEO satellites can communicate to ground with reduced latency and at lower power than would be required from a geosynchronous orbit.
Low Earth orbit (LEO)
A low Earth orbit (LEO) typically is a circular orbit about 160 to 2,000 km (99 to 1,243 mi) above the Earth's surface and, correspondingly, has a period (time to revolve around the Earth) of about 90 minutes.
Because of their low altitude, these satellites are only visible from within a limited radius around the sub-satellite point. In addition, satellites in low Earth orbit change their position relative to the ground position quickly. So even for local applications, many satellites are needed if the mission requires uninterrupted connectivity.
Low-Earth-orbiting satellites are less expensive to launch into orbit than geostationary satellites and, due to their proximity to the ground, do not require as high a signal strength (signal strength falls off as the square of the distance from the source, so the effect is considerable). Thus there is a trade-off between the number of satellites and their cost.
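As a rough illustration of these trade-offs, the following sketch compares free-space attenuation and one-way latency for a LEO satellite at an assumed 550 km altitude (an illustrative figure typical of modern broadband constellations, not one taken from this article) against a geostationary one:

```python
# Compare one-way latency and inverse-square signal attenuation for an
# assumed 550 km LEO altitude versus the ~35,786 km geostationary altitude.
C = 299_792_458.0  # speed of light, m/s

def one_way_latency_ms(altitude_km: float) -> float:
    """One-way propagation delay for a satellite directly overhead."""
    return altitude_km * 1_000.0 / C * 1_000.0

leo_km, geo_km = 550.0, 35_786.0
# Signal strength falls off as the square of the distance, so for equal
# transmit power the LEO link is (geo/leo)^2 times stronger.
power_advantage = (geo_km / leo_km) ** 2

print(f"LEO one-way latency: {one_way_latency_ms(leo_km):6.2f} ms")
print(f"GEO one-way latency: {one_way_latency_ms(geo_km):6.2f} ms")
print(f"LEO inverse-square power advantage: ~{power_advantage:,.0f}x")
```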
In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions.
Satellite constellation
A group of satellites working in concert is known as a satellite constellation. Two such constellations, intended to provide satellite phone and low-speed data services, primarily to remote areas, are the Iridium and Globalstar systems. The Iridium system has 66 satellites, whose 86.4° orbital inclination and inter-satellite links provide service availability over the entire surface of the Earth. Starlink is a satellite internet constellation operated by SpaceX that aims to provide global satellite Internet access.
It is also possible to offer discontinuous coverage using a low-Earth-orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This will be the case with the CASCADE system of Canada's CASSIOPE communications satellite. Another system using this store and forward method is Orbcomm.
Medium Earth orbit (MEO)
A medium Earth orbit (MEO) satellite orbits somewhere between about 2,000 and 35,786 km (1,243 and 22,236 mi) above the Earth's surface. MEO satellites are similar to LEO satellites in functionality. MEO satellites are visible for much longer periods of time than LEO satellites, usually between 2 and 8 hours. MEO satellites have a larger coverage area than LEO satellites. A MEO satellite's longer duration of visibility and wider footprint means fewer satellites are needed in a MEO network than a LEO network. One disadvantage is that a MEO satellite's distance gives it a longer time delay and weaker signal than a LEO satellite, although these limitations are not as severe as those of a GEO satellite.
Like LEOs, these satellites do not maintain a stationary distance from the Earth. This is in contrast to the geostationary orbit, where satellites are always approximately 35,786 km (22,236 mi) from Earth.
Typically the orbit of a medium Earth orbit satellite is about 16,000 km (10,000 mi) above Earth. In various patterns, these satellites make the trip around Earth in anywhere from 2 to 8 hours.
Examples of MEO
In 1962, the communications satellite Telstar was launched. It was a medium Earth orbit satellite designed to help facilitate high-speed telephone signals. Although it was the first practical way to transmit signals over the horizon, its major drawback was soon realised. Because its orbital period of about 2.5 hours did not match the Earth's rotational period of 24 hours, continuous coverage was impossible. It was apparent that multiple MEOs needed to be used in order to provide continuous coverage.
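This period mismatch can be checked with Kepler's third law; a minimal sketch, using rounded published figures for Telstar 1's roughly 950 × 5,900 km elliptical orbit (illustrative numbers, not from this article):

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
R_E = 6_371_000.0     # mean Earth radius, m

def period_hours(semi_major_axis_m: float) -> float:
    """Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    return 2 * math.pi * math.sqrt(semi_major_axis_m ** 3 / MU) / 3600

# Semi-major axis from rounded perigee/apogee altitudes of ~950/5,900 km:
a_telstar = R_E + (950_000.0 + 5_900_000.0) / 2
print(f"Telstar-like period: {period_hours(a_telstar):.1f} h")
# ~2.7 h, close to the ~2.5 h quoted above and far short of a 24 h day.
```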
In 2013, the first four of a constellation of 20 MEO satellites were launched. The O3b satellites provide broadband internet services, in particular to remote locations and for maritime and in-flight use, and orbit at an altitude of about 8,000 km (5,000 mi).
Geostationary orbit (GEO)
To an observer on Earth, a satellite in a geostationary orbit appears motionless, in a fixed position in the sky. This is because it revolves around the Earth at Earth's own angular velocity (one revolution per sidereal day, in an equatorial orbit).
A geostationary orbit is useful for communications because ground antennas can be aimed at the satellite without their having to track the satellite's motion. This is relatively inexpensive.
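The geostationary altitude itself follows from inverting Kepler's third law for a period of one sidereal day; a minimal sketch:

```python
import math

MU = 3.986004418e14       # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86_164.1   # s
R_EQ = 6_378_000.0        # Earth's equatorial radius, m

# r = (mu * T^2 / (4*pi^2))^(1/3) for a circular orbit of period T
r = (MU * SIDEREAL_DAY ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
print(f"GEO orbital radius: {r / 1000:,.0f} km")           # ~42,164 km
print(f"GEO altitude:       {(r - R_EQ) / 1000:,.0f} km")  # ~35,786 km
```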
In applications that require many ground antennas, such as DirecTV distribution, the savings in ground equipment can more than outweigh the cost and complexity of placing a satellite into orbit.
Examples of GEO
The first geostationary satellite was Syncom 3, launched on 19 August 1964, and used for communication across the Pacific starting with television coverage of the 1964 Summer Olympics. Shortly after Syncom 3, Intelsat I, aka Early Bird, was launched on 6 April 1965 and placed in orbit at 28° west longitude. It was the first geostationary satellite for telecommunications over the Atlantic Ocean.
On 9 November 1972, Canada's first geostationary satellite serving the continent, Anik A1, was launched by Telesat Canada, with the United States following suit with the launch of Westar 1 by Western Union on 13 April 1974.
On 30 May 1974, the first geostationary communications satellite in the world to be three-axis stabilized was launched: the experimental satellite ATS-6 built for NASA.
After the launches of the Telstar through Westar 1 satellites, RCA Americom (later GE Americom, now SES) launched Satcom 1 in 1975. It was Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS), HBO, CBN (now Freeform) and The Weather Channel become successful, because these channels distributed their programming to all of the local cable TV headends using the satellite. Additionally, it was the first satellite used by broadcast television networks in the United States, like ABC, NBC, and CBS, to distribute programming to their local affiliate stations. Satcom 1 was widely used because it had twice the communications capacity of the competing Westar 1 in America (24 transponders as opposed to the 12 of Westar 1), resulting in lower transponder-usage costs. Satellites in later decades tended to have even higher transponder numbers.
By 2000, Hughes Space and Communications (now Boeing Satellite Development Center) had built nearly 40 percent of the more than one hundred satellites in service worldwide. Other major satellite manufacturers include Space Systems/Loral, Orbital Sciences Corporation with the Star Bus series, Indian Space Research Organisation, Lockheed Martin (owns the former RCA Astro Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space, now Thales Alenia Space, with the Spacebus series, and Astrium.
Molniya orbit
Geostationary satellites must operate above the equator and therefore appear lower on the horizon as the receiver gets farther from the equator. This causes problems at extreme northerly latitudes, affecting connectivity and causing multipath interference (caused by signals reflecting off the ground and into the ground antenna).
Thus, for areas close to the North (and South) Pole, a geostationary satellite may appear below the horizon. Therefore, Molniya orbit satellites have been launched, mainly in Russia, to alleviate this problem.
Molniya orbits can be an appealing alternative in such cases. The Molniya orbit is highly inclined, guaranteeing good elevation over selected positions during the northern portion of the orbit. (Elevation is the extent of the satellite's position above the horizon. Thus, a satellite at the horizon has zero elevation and a satellite directly overhead has elevation of 90 degrees.)
The Molniya orbit is designed so that the satellite spends the great majority of its time over the far northern latitudes, during which its ground footprint moves only slightly. Its period is one half day, so that the satellite is available for operation over the targeted region for six to nine hours every second revolution. In this way a constellation of three Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage.
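The half-day period fixes the orbit's semi-major axis via Kepler's third law; a small sketch, with the eccentricity assumed at a typical Molniya value (an assumption, not a figure from this article):

```python
import math

MU = 3.986004418e14    # m^3/s^2
R_E = 6_371_000.0      # mean Earth radius, m
T = 86_164.1 / 2       # half a sidereal day, s

a = (MU * (T / (2 * math.pi)) ** 2) ** (1 / 3)  # semi-major axis
e = 0.74                                        # assumed typical eccentricity
apogee_km = (a * (1 + e) - R_E) / 1000
perigee_km = (a * (1 - e) - R_E) / 1000
print(f"semi-major axis: {a / 1000:,.0f} km")   # ~26,562 km
print(f"apogee ~{apogee_km:,.0f} km, perigee ~{perigee_km:,.0f} km")
# The high apogee dwells over the far northern latitudes for hours at a time.
```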
The first satellite of the Molniya series was launched on 23 April 1965 and was used for experimental transmission of TV signals from a Moscow uplink station to downlink stations located in Siberia and the Russian Far East, in Norilsk, Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created Orbita, a unique national satellite-television network based on Molniya satellites.
Polar orbit
In the United States, the National Polar-orbiting Operational Environmental Satellite System (NPOESS) was established in 1994 to consolidate the polar satellite operations of NASA (National Aeronautics and Space Administration) and NOAA (National Oceanic and Atmospheric Administration). NPOESS manages a number of satellites for various purposes; for example, METSAT for meteorological satellites, EUMETSAT for the European branch of the program, and METOP for meteorological operations.
These orbits are Sun synchronous, meaning that they cross the equator at the same local time each day. For example, the satellites in the NPOESS (civilian) orbit will cross the equator, going from south to north, at times 1:30 P.M., 5:30 P.M., and 9:30 P.M.
Beyond geostationary orbit
There are plans and initiatives to bring dedicated communications satellites beyond geostationary orbit.
NASA proposed LunaNet as a data network aiming to provide a "Lunar Internet" for cis-lunar spacecraft and installations.
The Moonlight Initiative is an equivalent ESA project, stated to be compatible with LunaNet and to provide navigation services for the lunar surface. Both programmes envisage constellations of several satellites in various orbits around the Moon.
Other orbits are also planned to be used. Positions at the Earth–Moon libration points have been proposed for communication satellites covering the Moon, much as communication satellites in geosynchronous orbit cover the Earth. Dedicated communication satellites in orbit around Mars, supporting missions on the surface and in other orbits, have also been considered, such as the Mars Telecommunications Orbiter.
Structure
Communications satellites are usually composed of the following subsystems:
Communication Payload, normally composed of transponders, antennas, amplifiers and switching systems
Engines used to bring the satellite to its desired orbit
A station-keeping, tracking and stabilization subsystem used to keep the satellite in the right orbit, with its antennas pointed in the right direction, and its power system pointed towards the Sun
Power subsystem, used to power the satellite systems, normally composed of solar cells, and batteries that maintain power during solar eclipse
Command and Control subsystem, which maintains communications with ground control stations. The ground control Earth stations monitor the satellite performance and control its functionality during various phases of its life-cycle.
The bandwidth available from a satellite depends upon the number of transponders provided by the satellite. Each service (TV, voice, Internet, radio) requires a different amount of bandwidth for transmission. This is typically known as link budgeting, and a network simulator can be used to arrive at the exact value.
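A minimal link-budget sketch, with entirely illustrative EIRP and antenna-gain figures (a real budget also adds atmospheric, pointing and implementation losses, plus required margins):

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

eirp_dbw = 52.0                        # transponder EIRP (assumed figure)
rx_gain_db = 35.0                      # ground antenna gain (assumed figure)
loss_db = fspl_db(35_786_000.0, 12e9)  # GEO range, 12 GHz Ku-band downlink

rx_power_dbw = eirp_dbw + rx_gain_db - loss_db
print(f"path loss:      {loss_db:.1f} dB")        # ~205 dB
print(f"received power: {rx_power_dbw:.1f} dBW")  # ~-118 dBW
```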
Frequency allocation for satellite systems
Allocating frequencies to satellite services is a complicated process which requires international coordination and planning. This is carried out under the auspices of the International Telecommunication Union (ITU).
To facilitate frequency planning, the world is divided into three regions:
Region 1: Europe, Africa, the Middle East, what was formerly the Soviet Union, and Mongolia
Region 2: North and South America and Greenland
Region 3: Asia (excluding region 1 areas), Australia, and the southwest Pacific
Within these regions, frequency bands are allocated to various satellite services, although a given service may be allocated different frequency bands in different regions. Some of the services provided by satellites are:
Fixed satellite service (FSS)
Broadcasting satellite service (BSS)
Mobile-satellite service
Radionavigation-satellite service
Meteorological-satellite service
Applications
Telephony
The first and historically most important application for communication satellites was in intercontinental long distance telephony. The fixed Public Switched Telephone Network relays telephone calls from land line telephones to an Earth station, where they are then transmitted to a geostationary satellite. The downlink follows an analogous path. Improvements in submarine communications cables through the use of fiber-optics caused some decline in the use of satellites for fixed telephony in the late 20th century.
Satellite communications are still used in many applications today. Remote islands such as Ascension Island, Saint Helena, Diego Garcia, and Easter Island, where no submarine cables are in service, need satellite telephones. There are also regions of some continents and countries where landline telecommunications are rare to non-existent, for example large regions of South America, Africa, Canada, China, Russia, and Australia. Satellite communications also provide connection to the edges of Antarctica and Greenland. Other uses for satellite phones include rigs at sea, backup for hospitals, military use, and recreation. Ships at sea, as well as planes, often use satellite phones.
Satellite telephony can be accomplished by a number of means. On a large scale, often there will be a local telephone system in an isolated area with a link to the telephone system in a mainland area. There are also services that will patch a radio signal to a telephone system. In this example, almost any type of satellite can be used. Satellite phones connect directly to a constellation of either geostationary or low-Earth-orbit satellites. Calls are then forwarded to a satellite teleport connected to the Public Switched Telephone Network.
Television
As television became the main market, its demand for simultaneous delivery of relatively few signals of large bandwidth to many receivers proved a more precise match for the capabilities of geosynchronous comsats. Two satellite types are used for North American television and radio: Direct broadcast satellite (DBS) and Fixed Service Satellite (FSS).
The definitions of FSS and DBS satellites outside of North America, especially in Europe, are a bit more ambiguous. Most satellites used for direct-to-home television in Europe have the same high power output as DBS-class satellites in North America, but use the same linear polarization as FSS-class satellites. Examples of these are the Astra, Eutelsat, and Hotbird spacecraft in orbit over the European continent. Because of this, the terms FSS and DBS are used mostly in North America and are uncommon in Europe.
Fixed Service Satellites use the C band, and the lower portions of the Ku band. They are normally used for broadcast feeds to and from television networks and local affiliate stations (such as program feeds for network and syndicated programming, live shots, and backhauls), as well as being used for distance learning by schools and universities, business television (BTV), Videoconferencing, and general commercial telecommunications. FSS satellites are also used to distribute national cable channels to cable television headends.
Free-to-air satellite TV channels are also usually distributed on FSS satellites in the Ku band. The Intelsat Americas 5, Galaxy 10R and AMC 3 satellites over North America provide a large number of FTA channels on their Ku band transponders.
The American Dish Network DBS service has also used FSS technology for their programming packages requiring their SuperDish antenna, due to Dish Network needing more capacity to carry local television stations per the FCC's "must-carry" regulations, and for more bandwidth to carry HDTV channels.
A direct broadcast satellite is a communications satellite that transmits to small DBS satellite dishes (usually 18 to 24 inches or 45 to 60 cm in diameter). Direct broadcast satellites generally operate in the upper portion of the microwave Ku band. DBS technology is used for DTH-oriented (Direct-To-Home) satellite TV services, such as DirecTV, DISH Network and Orby TV in the United States, Bell Satellite TV and Shaw Direct in Canada, Freesat and Sky in the UK, Ireland, and New Zealand and DSTV in South Africa.
Operating at lower frequency and lower power than DBS, FSS satellites require a much larger dish for reception (3 to 8 feet (1 to 2.5 m) in diameter for Ku band, and 12 feet (3.6 m) or larger for C band). They use linear polarization for each of the transponders' RF input and output (as opposed to circular polarization used by DBS satellites), but this is a minor technical difference that users do not notice. FSS satellite technology was also originally used for DTH satellite TV from the late 1970s to the early 1990s in the United States in the form of TVRO (Television Receive Only) receivers and dishes. It was also used in its Ku band form for the now-defunct Primestar satellite TV service.
Some satellites have been launched that have transponders in the Ka band, such as DirecTV's SPACEWAY-1 satellite, and Anik F2. NASA and ISRO have also launched experimental satellites carrying Ka band beacons recently.
Some manufacturers have also introduced special antennas for mobile reception of DBS television. Using Global Positioning System (GPS) technology as a reference, these antennas automatically re-aim to the satellite no matter where or how the vehicle (on which the antenna is mounted) is situated. These mobile satellite antennas are popular with some recreational vehicle owners. Such mobile DBS antennas are also used by JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which passengers can view on-board on LCD screens mounted in the seats.
Radio broadcasting
Satellite radio offers audio broadcast services in some countries, notably the United States. Mobile services allow listeners to roam a continent, listening to the same audio programming anywhere.
A satellite radio or subscription radio (SR) is a digital radio signal that is broadcast by a communications satellite, which covers a much wider geographical range than terrestrial radio signals.
Amateur radio
Amateur radio operators have access to amateur satellites, which have been designed specifically to carry amateur radio traffic. Most such satellites operate as spaceborne repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio equipment and highly directional antennas such as Yagis or dish antennas. Due to launch costs, most current amateur satellites are launched into fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts at any given time. Some satellites also provide data-forwarding services using the X.25 or similar protocols.
Internet access
Since the 1990s, satellite communication technology has been used as a means of connecting to the Internet via broadband data connections. This can be very useful for users who are located in remote areas and cannot access a broadband connection, or who require high availability of services.
Military
Communications satellites are used for military communications applications, such as Global Command and Control Systems. Examples of military systems that use communication satellites are the MILSTAR, the DSCS, and the FLTSATCOM of the United States, NATO satellites, United Kingdom satellites (for instance Skynet), and satellites of the former Soviet Union. India has launched its first military communication satellite, GSAT-7; its transponders operate in the UHF, S, C and Ku bands. Typically military satellites operate in the UHF, SHF (also known as X-band) or EHF (also known as Ka band) frequency bands.
Data collection
Near-ground in situ environmental monitoring equipment (such as tide gauges, weather stations, weather buoys, and radiosondes), may use satellites for one-way data transmission or two-way telemetry and telecontrol. It may be based on a secondary payload of a weather satellite (as in the case of GOES and METEOSAT and others in the Argos system) or in dedicated satellites (such as SCD). The data rate is typically much lower than in satellite Internet access.
| Technology | Media and communication | null |
45240 | https://en.wikipedia.org/wiki/Kernel%20%28algebra%29 | Kernel (algebra) | In algebra, the kernel of a homomorphism (function that preserves the structure) is generally the inverse image of 0 (except for groups whose operation is denoted multiplicatively, where the kernel is the inverse image of 1). An important special case is the kernel of a linear map. The kernel of a matrix, also called the null space, is the kernel of the linear map defined by the matrix.
The kernel of a homomorphism is reduced to 0 (or 1) if and only if the homomorphism is injective, that is if the inverse image of every element consists of a single element. This means that the kernel can be viewed as a measure of the degree to which the homomorphism fails to be injective.
For some types of structure, such as abelian groups and vector spaces, the possible kernels are exactly the substructures of the same type. This is not always the case, and, sometimes, the possible kernels have received a special name, such as normal subgroup for groups and two-sided ideals for rings.
Kernels allow defining quotient objects (also called quotient algebras in universal algebra, and cokernels in category theory). For many types of algebraic structure, the fundamental theorem on homomorphisms (or first isomorphism theorem) states that the image of a homomorphism is isomorphic to the quotient by the kernel.
The concept of a kernel has been extended to structures such that the inverse image of a single element is not sufficient for deciding whether a homomorphism is injective. In these cases, the kernel is a congruence relation.
This article is a survey of some important types of kernels in algebraic structures.
Survey of examples
Linear maps
Let V and W be vector spaces over a field (or more generally, modules over a ring) and let T be a linear map from V to W. If 0W is the zero vector of W, then the kernel of T is the preimage of the zero subspace {0W}; that is, the subset of V consisting of all those elements of V that are mapped by T to the element 0W. The kernel is usually denoted ker T, or some variation thereof: ker T = {v ∈ V : T(v) = 0W}.
Since a linear map preserves zero vectors, the zero vector 0V of V must belong to the kernel. The transformation T is injective if and only if its kernel is reduced to the zero subspace.
The kernel ker T is always a linear subspace of V. Thus, it makes sense to speak of the quotient space V / ker T. The first isomorphism theorem for vector spaces states that this quotient space is naturally isomorphic to the image of T (which is a subspace of W). As a consequence, the dimension of V equals the dimension of the kernel plus the dimension of the image.
If V and W are finite-dimensional and bases have been chosen, then T can be described by a matrix M, and the kernel can be computed by solving the homogeneous system of linear equations Mv = 0. In this case, the kernel of T may be identified with the kernel of the matrix M, also called the "null space" of M. The dimension of the null space, called the nullity of M, is given by the number of columns of M minus the rank of M, as a consequence of the rank–nullity theorem.
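A short sketch of this computation using SymPy; the matrix is an arbitrary example chosen to have rank 2:

```python
from sympy import Matrix

# A 3x4 matrix of rank 2 (the third row is the sum of the first two),
# so the rank-nullity theorem predicts a kernel of dimension 4 - 2 = 2.
M = Matrix([[1, 2, 0, 1],
            [0, 1, 1, 0],
            [1, 3, 1, 1]])

basis = M.nullspace()          # basis of the null space ker M
print("rank:", M.rank())       # 2
print("nullity:", len(basis))  # 2
for v in basis:
    assert M * v == Matrix([0, 0, 0])  # each basis vector solves Mv = 0
```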
Solving homogeneous differential equations often amounts to computing the kernel of certain differential operators.
For instance, in order to find all twice-differentiable functions f from the real line to itself such that
x f″(x) = f(x),
let V be the space of all twice differentiable functions, let W be the space of all functions, and define a linear operator T from V to W by
(Tf)(x) = x f″(x) − f(x)
for f in V and x an arbitrary real number.
Then all solutions to the differential equation are in ker T.
One can define kernels for homomorphisms between modules over a ring in an analogous manner. This includes kernels for homomorphisms between abelian groups as a special case. This example captures the essence of kernels in general abelian categories; see Kernel (category theory).
Group homomorphisms
Let G and H be groups and let f be a group homomorphism from G to H. If eH is the identity element of H, then the kernel of f is the preimage of the singleton set {eH}; that is, the subset of G consisting of all those elements of G that are mapped by f to the element eH.
The kernel is usually denoted ker f (or a variation). In symbols: ker f = {g ∈ G : f(g) = eH}.
Since a group homomorphism preserves identity elements, the identity element eG of G must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {eG}. If f were not injective, then the non-injective elements can form a distinct element of its kernel: there would exist a, b ∈ G such that a ≠ b and f(a) = f(b). Thus f(a)f(b)⁻¹ = eH. f is a group homomorphism, so inverses and group operations are preserved, giving f(ab⁻¹) = eH; in other words, ab⁻¹ ∈ ker f, and ker f would not be the singleton. Conversely, distinct elements of the kernel violate injectivity directly: if there existed an element g ≠ eG with g ∈ ker f, then f(g) = f(eG) = eH, thus f would not be injective.
ker f is a subgroup of G and further it is a normal subgroup. Thus, there is a corresponding quotient group G / ker f. This is isomorphic to f(G), the image of G under f (which is a subgroup of H also), by the first isomorphism theorem for groups.
In the special case of abelian groups, there is no deviation from the previous section.
Example
Let G be the cyclic group on 6 elements {0, 1, 2, 3, 4, 5} with modular addition, H be the cyclic group on 2 elements {0, 1} with modular addition, and f the homomorphism that maps each element g in G to the element g modulo 2 in H. Then ker f = {0, 2, 4}, since all these elements are mapped to 0H. The quotient group G / ker f has two elements: {0, 2, 4} and {1, 3, 5}. It is indeed isomorphic to H.
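A brute-force check of this example in Python, representing both groups by integers under modular addition:

```python
# f : Z6 -> Z2, f(g) = g mod 2
G = range(6)
f = lambda g: g % 2

kernel = [g for g in G if f(g) == 0]
print("ker f =", kernel)  # [0, 2, 4]

# The cosets of the kernel are the elements of the quotient group G/ker f.
cosets = {tuple(sorted((g + k) % 6 for k in kernel)) for g in G}
print("G/ker f =", cosets)  # {(0, 2, 4), (1, 3, 5)} -- two elements, like H
```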
Ring homomorphisms
Let R and S be rings (assumed unital) and let f be a ring homomorphism from R to S.
If 0S is the zero element of S, then the kernel of f is its kernel as a linear map over the integers, or, equivalently, as additive groups. It is the preimage of the zero ideal {0S}; that is, the subset of R consisting of all those elements of R that are mapped by f to the element 0S.
The kernel is usually denoted ker f (or a variation).
In symbols: ker f = {r ∈ R : f(r) = 0S}.
Since a ring homomorphism preserves zero elements, the zero element 0R of R must belong to the kernel.
The homomorphism f is injective if and only if its kernel is only the singleton set {0R}.
This is always the case if R is a field, and S is not the zero ring.
Since ker f contains the multiplicative identity only when S is the zero ring, it turns out that the kernel is generally not a subring of R. The kernel is a subrng, and, more precisely, a two-sided ideal of R.
Thus, it makes sense to speak of the quotient ring R / ker f.
The first isomorphism theorem for rings states that this quotient ring is naturally isomorphic to the image of f (which is a subring of S). (Note that rings need not be unital for the kernel definition).
To some extent, this can be thought of as a special case of the situation for modules, since these are all bimodules over a ring R:
R itself;
any two-sided ideal of R (such as ker f);
any quotient ring of R (such as R / ker f); and
the codomain of any ring homomorphism whose domain is R (such as S, the codomain of f).
However, the isomorphism theorem gives a stronger result, because ring isomorphisms preserve multiplication while module isomorphisms (even between rings) in general do not.
This example captures the essence of kernels in general Mal'cev algebras.
Monoid homomorphisms
Let M and N be monoids and let f be a monoid homomorphism from M to N. Then the kernel of f is the subset of the direct product M × M consisting of all those ordered pairs of elements of M whose components are both mapped by f to the same element in N. The kernel is usually denoted ker f (or a variation thereof). In symbols: ker f = {(m, m′) ∈ M × M : f(m) = f(m′)}.
Since f is a function, the elements of the form (m, m) must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the diagonal set {(m, m) : m ∈ M}.
It turns out that ker f is an equivalence relation on M, and in fact a congruence relation. Thus, it makes sense to speak of the quotient monoid M / (ker f). The first isomorphism theorem for monoids states that this quotient monoid is naturally isomorphic to the image of f (which is a submonoid of N).
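A small enumeration of such a kernel-as-a-relation, for the illustrative homomorphism n ↦ n mod 4 from Z12 to Z4 (a homomorphism of additive monoids because 4 divides 12):

```python
# f : Z12 -> Z4, f(n) = n mod 4
M = range(12)
f = lambda n: n % 4

# The kernel is a set of ordered pairs, not a subset of M.
ker = {(a, b) for a in M for b in M if f(a) == f(b)}
print((1, 5) in ker, (1, 2) in ker)  # True False

# Its congruence classes are the elements of the quotient monoid M/(ker f).
classes = {frozenset(b for b in M if (a, b) in ker) for a in M}
print(sorted(sorted(c) for c in classes))
# [[0, 4, 8], [1, 5, 9], [2, 6, 10], [3, 7, 11]] -- four classes, like im f
```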
This is very different in flavour from the above examples. In particular, the preimage of the identity element of N is not enough to determine the kernel of f.
Universal algebra
All the above cases may be unified and generalized in universal algebra.
General case
Let A and B be algebraic structures of a given type and let f be a homomorphism of that type from A to B.
Then the kernel of f is the subset of the direct product A × A consisting of all those ordered pairs of elements of A whose components are both mapped by f to the same element in B.
The kernel is usually denoted ker f (or a variation).
In symbols: ker f = {(a, a′) ∈ A × A : f(a) = f(a′)}.
Since f is a function, the elements of the form (a, a) must belong to the kernel.
The homomorphism f is injective if and only if its kernel is exactly the diagonal set {(a, a) : a ∈ A}.
It is easy to see that ker f is an equivalence relation on A, and in fact a congruence relation.
Thus, it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem in general universal algebra states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
Note that the definition of kernel here (as in the monoid example) doesn't depend on the algebraic structure; it is a purely set-theoretic concept.
For more on this general concept, outside of abstract algebra, see kernel of a function.
Malcev algebras
In the case of Malcev algebras, this construction can be simplified. Every Malcev algebra has a special neutral element (the zero vector in the case of vector spaces, the identity element in the case of commutative groups, and the zero element in the case of rings or modules). The characteristic feature of a Malcev algebra is that we can recover the entire equivalence relation ker f from the equivalence class of the neutral element.
To be specific, let A and B be Malcev algebraic structures of a given type and let f be a homomorphism of that type from A to B. If eB is the neutral element of B, then the kernel of f is the preimage of the singleton set {eB}; that is, the subset of A consisting of all those elements of A that are mapped by f to the element eB.
The kernel is usually denoted ker f (or a variation). In symbols: ker f = {a ∈ A : f(a) = eB}.
Since a Malcev algebra homomorphism preserves neutral elements, the identity element eA of A must belong to the kernel. The homomorphism f is injective if and only if its kernel is only the singleton set {eA}.
The notion of ideal generalises to any Malcev algebra (as linear subspace in the case of vector spaces, normal subgroup in the case of groups, two-sided ideals in the case of rings, and submodule in the case of modules).
It turns out that ker f is not a subalgebra of A, but it is an ideal.
Then it makes sense to speak of the quotient algebra A / (ker f).
The first isomorphism theorem for Malcev algebras states that this quotient algebra is naturally isomorphic to the image of f (which is a subalgebra of B).
The connection between this and the congruence relation for more general types of algebras is as follows.
First, the kernel-as-an-ideal is the equivalence class of the neutral element eA under the kernel-as-a-congruence. For the converse direction, we need the notion of quotient in the Mal'cev algebra (which is division on either side for groups and subtraction for vector spaces, modules, and rings).
Using this, elements a and b of A are equivalent under the kernel-as-a-congruence if and only if their quotient a/b is an element of the kernel-as-an-ideal.
Algebras with nonalgebraic structure
Sometimes algebras are equipped with a nonalgebraic structure in addition to their algebraic operations.
For example, one may consider topological groups or topological vector spaces, which are equipped with a topology.
In this case, we would expect the homomorphism f to preserve this additional structure; in the topological examples, we would want f to be a continuous map.
The process may run into a snag with the quotient algebras, which may not be well-behaved.
In the topological examples, we can avoid problems by requiring that topological algebraic structures be Hausdorff (as is usually done); then the kernel (however it is constructed) will be a closed set and the quotient space will work fine (and also be Hausdorff).
Kernels in category theory
The notion of kernel in category theory is a generalisation of the kernels of abelian algebras; see Kernel (category theory).
The categorical generalisation of the kernel as a congruence relation is the kernel pair.
(There is also the notion of difference kernel, or binary equaliser.)
| Mathematics | Abstract algebra | null |
45241 | https://en.wikipedia.org/wiki/Isomorphism%20theorems | Isomorphism theorems | In mathematics, specifically abstract algebra, the isomorphism theorems (also known as Noether's isomorphism theorems) are theorems that describe the relationship among quotients, homomorphisms, and subobjects. Versions of the theorems exist for groups, rings, vector spaces, modules, Lie algebras, and other algebraic structures. In universal algebra, the isomorphism theorems can be generalized to the context of algebras and congruences.
History
The isomorphism theorems were formulated in some generality for homomorphisms of modules by Emmy Noether in her paper Abstrakter Aufbau der Idealtheorie in algebraischen Zahl- und Funktionenkörpern, which was published in 1927 in Mathematische Annalen. Less general versions of these theorems can be found in work of Richard Dedekind and previous papers by Noether.
Three years later, B.L. van der Waerden published his influential Moderne Algebra, the first abstract algebra textbook that took the groups-rings-fields approach to the subject. Van der Waerden credited lectures by Noether on group theory and Emil Artin on algebra, as well as a seminar conducted by Artin, Wilhelm Blaschke, Otto Schreier, and van der Waerden himself on ideals as the main references. The three isomorphism theorems, called homomorphism theorem, and two laws of isomorphism when applied to groups, appear explicitly.
Groups
We first present the isomorphism theorems for groups.
Theorem A (groups)
Let G and H be groups, and let f : G → H be a homomorphism. Then:
The kernel of f is a normal subgroup of G,
The image of f is a subgroup of H, and
The image of f is isomorphic to the quotient group G / ker(f).
In particular, if f is surjective then H is isomorphic to G / ker(f).
This theorem is usually called the first isomorphism theorem.
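A numeric illustration of the theorem for the homomorphism x ↦ 3x mod 12 from Z12 to itself (an arbitrary example, not from the source):

```python
# f : Z12 -> Z12, f(x) = 3x mod 12
G = range(12)
f = lambda x: (3 * x) % 12

kernel = {x for x in G if f(x) == 0}  # {0, 4, 8}, a (normal) subgroup
image = {f(x) for x in G}             # {0, 3, 6, 9}, a subgroup of Z12
cosets = {frozenset((x + k) % 12 for k in kernel) for x in G}

# |G / ker f| = |im f|, as the first isomorphism theorem predicts:
print(len(cosets) == len(image) == 4)  # True
```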
Theorem B (groups)
Let G be a group. Let S be a subgroup of G, and let N be a normal subgroup of G. Then the following hold:
The product SN is a subgroup of G,
The subgroup N is a normal subgroup of SN,
The intersection S ∩ N is a normal subgroup of S, and
The quotient groups (SN) / N and S / (S ∩ N) are isomorphic.
Technically, it is not necessary for N to be a normal subgroup, as long as S is a subgroup of the normalizer of N in G. In this case, N is not a normal subgroup of G, but N is still a normal subgroup of the product SN.
This theorem is sometimes called the second isomorphism theorem, diamond theorem or the parallelogram theorem.
An application of the second isomorphism theorem identifies projective linear groups: for example, the group on the complex projective line starts with setting G = GL2(C), the group of invertible 2 × 2 complex matrices, S = SL2(C), the subgroup of determinant 1 matrices, and N the normal subgroup of scalar matrices C^× I = {λI : λ ∈ C^×}, where I is the identity matrix. Then SN = GL2(C) and S ∩ N = {±I}. The second isomorphism theorem then states that: PGL2(C) := GL2(C) / (C^× I) ≅ SL2(C) / {±I} =: PSL2(C).
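A simpler numeric check of Theorem B inside the abelian group Z12, where the "product" SN is the sum S + N (the subgroups chosen here are arbitrary examples):

```python
# In Z12 (written additively): S = <4> = {0, 4, 8}, N = <6> = {0, 6}.
S = {0, 4, 8}
N = {0, 6}

SN = {(s + n) % 12 for s in S for n in N}  # S + N = {0, 2, 4, 6, 8, 10}
S_cap_N = S & N                            # {0}

# Theorem B: (SN)/N and S/(S ∩ N) are isomorphic, so their orders agree.
print(len(SN) // len(N), len(S) // len(S_cap_N))  # 3 3
```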
Theorem C (groups)
Let G be a group, and N a normal subgroup of G.
Then
If H is a subgroup of G such that N ⊆ H ⊆ G, then G/N has a subgroup isomorphic to H/N.
Every subgroup of G/N is of the form H/N for some subgroup H of G such that N ⊆ H ⊆ G.
If H is a normal subgroup of G such that N ⊆ H ⊆ G, then G/N has a normal subgroup isomorphic to H/N.
Every normal subgroup of G/N is of the form H/N for some normal subgroup H of G such that N ⊆ H ⊆ G.
If H is a normal subgroup of G such that N ⊆ H ⊆ G, then the quotient group (G/N)/(H/N) is isomorphic to G/H.
The last statement is sometimes referred to as the third isomorphism theorem. The first four statements are often subsumed under Theorem D below, and referred to as the lattice theorem, correspondence theorem, or fourth isomorphism theorem.
Theorem D (groups)
Let G be a group, and N a normal subgroup of G.
The canonical projection homomorphism G → G/N defines a bijective correspondence
between the set of subgroups of G containing N and the set of (all) subgroups of G/N. Under this correspondence normal subgroups correspond to normal subgroups.
This theorem is sometimes called the correspondence theorem, the lattice theorem, and the fourth isomorphism theorem.
The Zassenhaus lemma (also known as the butterfly lemma) is sometimes called the fourth isomorphism theorem.
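A small sketch of the correspondence for G = Z12 and N = {0, 6}, using the fact that every subgroup of a finite cyclic group Z/n is generated by a divisor of n:

```python
def subgroups_of_Zn(n: int):
    """All subgroups of Z/n: one generated by each divisor d of n."""
    return [frozenset(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

N = frozenset({0, 6})
containing_N = [H for H in subgroups_of_Zn(12) if N <= H]

# Subgroups of Z12 containing N correspond to subgroups of Z12/N ~ Z6:
print(len(containing_N), len(subgroups_of_Zn(6)))  # 4 4
```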
Discussion
The first isomorphism theorem can be expressed in category theoretical language by saying that the category of groups is (normal epi, mono)-factorizable; in other words, the normal epimorphisms and the monomorphisms form a factorization system for the category. This is captured in the commutative diagram in the margin, which shows the objects and morphisms whose existence can be deduced from the morphism f : G → H. The diagram shows that every morphism in the category of groups has a kernel in the category theoretical sense; the arbitrary morphism f factors into ι ∘ π, where ι is a monomorphism and π is an epimorphism (in a conormal category, all epimorphisms are normal). This is represented in the diagram by an object ker f and a monomorphism κ : ker f → G (kernels are always monomorphisms), which complete the short exact sequence running from the lower left to the upper right of the diagram. The use of the exact sequence convention saves us from having to draw the zero morphisms from ker f to H and G / ker f.
If the sequence is right split (i.e., there is a morphism σ : H → G that maps H to a π-preimage of itself), then G is the semidirect product of the normal subgroup im κ and the subgroup im σ. If it is left split (i.e., there exists some ρ : G → ker f such that ρ ∘ κ = id), then it must also be right split, and im κ × im σ is a direct product decomposition of G. In general, the existence of a right split does not imply the existence of a left split; but in an abelian category (such as that of abelian groups), left splits and right splits are equivalent by the splitting lemma, and a right split is sufficient to produce a direct sum decomposition im κ ⊕ im σ. In an abelian category, all monomorphisms are also normal, and the diagram may be extended by a second short exact sequence 0 → G/ker f → G → ker f → 0.
In the second isomorphism theorem, the product SN is the join of S and N in the lattice of subgroups of G, while the intersection S ∩ N is the meet.
The third isomorphism theorem is generalized by the nine lemma to abelian categories and more general maps between objects.
Note on numbers and names
Below we present four theorems, labelled A, B, C and D. They are often numbered as "First isomorphism theorem", "Second..." and so on; however, there is no universal agreement on the numbering. Here we give some examples of the group isomorphism theorems in the literature. Notice that these theorems have analogs for rings and modules.
It is less common to include Theorem D, usually known as the lattice theorem or the correspondence theorem, as one of the isomorphism theorems, but when it is included, it is the last one.
Rings
The statements of the theorems for rings are similar, with the notion of a normal subgroup replaced by the notion of an ideal.
Theorem A (rings)
Let R and S be rings, and let φ : R → S be a ring homomorphism. Then:
The kernel of φ is an ideal of R,
The image of φ is a subring of S, and
The image of φ is isomorphic to the quotient ring R / ker(φ).
In particular, if φ is surjective then S is isomorphic to R / ker(φ).
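For example, the evaluation homomorphism φ : ℝ[x] → ℂ sending a real polynomial p(x) to p(i) is surjective, and its kernel is the ideal (x² + 1) of polynomials divisible by x² + 1; Theorem A therefore yields the classical isomorphism ℝ[x]/(x² + 1) ≅ ℂ.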
Theorem B (rings)
Let R be a ring. Let S be a subring of R, and let I be an ideal of R. Then:
The sum S + I = {s + i | s ∈ S, i ∈ I } is a subring of R,
The intersection S ∩ I is an ideal of S, and
The quotient rings (S + I) / I and S / (S ∩ I) are isomorphic.
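For example, inside R = ℤ take the subring S = 6ℤ and the ideal I = 4ℤ. Then S + I = 2ℤ, since gcd(6, 4) = 2, and S ∩ I = 12ℤ, since lcm(6, 4) = 12; both quotients (S + I)/I = 2ℤ/4ℤ and S/(S ∩ I) = 6ℤ/12ℤ are two-element rings with trivial multiplication, and they are isomorphic, as Theorem B predicts.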
Theorem C (rings)
Let R be a ring, and I an ideal of R. Then
If A is a subring of R such that I ⊆ A, then A/I is a subring of R/I.
Every subring of R/I is of the form A/I for some subring A of R such that I ⊆ A.
If J is an ideal of R such that I ⊆ J, then J/I is an ideal of R/I.
Every ideal of R/I is of the form J/I for some ideal J of R such that I ⊆ J.
If J is an ideal of R such that I ⊆ J, then the quotient ring (R/I)/(J/I) is isomorphic to R/J.
Theorem D (rings)
Let I be an ideal of R. The correspondence A ↔ A/I is an inclusion-preserving bijection between the set of subrings A of R that contain I and the set of subrings of R/I. Furthermore, A (a subring containing I) is an ideal of R if and only if A/I is an ideal of R/I.
Modules
The statements of the isomorphism theorems for modules are particularly simple, since it is possible to form a quotient module from any submodule. The isomorphism theorems for vector spaces (modules over a field) and abelian groups (modules over ℤ) are special cases of these. For finite-dimensional vector spaces, all of these theorems follow from the rank–nullity theorem.
In the following, "module" will mean "R-module" for some fixed ring R.
Theorem A (modules)
Let M and N be modules, and let φ : M → N be a module homomorphism. Then:
The kernel of φ is a submodule of M,
The image of φ is a submodule of N, and
The image of φ is isomorphic to the quotient module M / ker(φ).
In particular, if φ is surjective then N is isomorphic to M / ker(φ).
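For example, let φ : ℝ³ → ℝ² be the projection φ(x, y, z) = (x, y). Its kernel is the z-axis {(0, 0, z) : z ∈ ℝ} and its image is all of ℝ², so Theorem A gives ℝ³/ker(φ) ≅ ℝ², in agreement with the rank–nullity count 3 − 1 = 2.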
Theorem B (modules)
Let M be a module, and let S and T be submodules of M. Then:
The sum S + T = {s + t | s ∈ S, t ∈ T} is a submodule of M,
The intersection S ∩ T is a submodule of M, and
The quotient modules (S + T) / T and S / (S ∩ T) are isomorphic.
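For example, take M = ℝ² and let S and T be two distinct lines through the origin. Then S + T = ℝ² and S ∩ T = {0}, so Theorem B reduces to ℝ²/T ≅ S/{0} ≅ S: quotienting the plane by one line leaves a module isomorphic to the other line.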
Theorem C (modules)
Let M be a module, T a submodule of M.
If S is a submodule of M such that T ⊆ S ⊆ M, then S/T is a submodule of M/T.
Every submodule of M/T is of the form S/T for some submodule S of M such that T ⊆ S ⊆ M.
If S is a submodule of M such that T ⊆ S ⊆ M, then the quotient module (M/T)/(S/T) is isomorphic to M/S.
Theorem D (modules)
Let M be a module, N a submodule of M. There is a bijection between the submodules of M that contain N and the submodules of M/N. The correspondence is given by A ↔ A/N for all A ⊇ N. This correspondence commutes with the processes of taking sums and intersections (i.e., it is a lattice isomorphism between the lattice of submodules of M/N and the lattice of submodules of M that contain N).
Universal algebra
To generalise this to universal algebra, normal subgroups need to be replaced by congruence relations.
A congruence on an algebra A is an equivalence relation Φ ⊆ A × A that forms a subalgebra of A × A considered as an algebra with componentwise operations. One can make the set of equivalence classes A/Φ into an algebra of the same type by defining the operations via representatives; this will be well-defined since Φ is a subalgebra of A × A. The resulting structure is the quotient algebra.
Theorem A (universal algebra)
Let f : A → B be an algebra homomorphism. Then the image of f is a subalgebra of B, the relation Φ given by f(x) = f(y) (i.e. the kernel of f) is a congruence on A, and the algebras A/Φ and im f are isomorphic. (Note that in the case of a group, f(x) = f(y) iff f(xy⁻¹) = 1, so one recovers the notion of kernel used in group theory in this case.)
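For example, let f : (ℕ, +) → (ℤ/2ℤ, +) be the monoid homomorphism sending each natural number to its parity. The kernel congruence Φ relates m and n exactly when m ≡ n (mod 2), and ℕ/Φ is a two-element algebra isomorphic to im f = ℤ/2ℤ. Since a monoid has no analogue of a normal subgroup, the congruence formulation is what makes the theorem applicable here.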
Theorem B (universal algebra)
Given an algebra A, a subalgebra B of A, and a congruence Φ on A, let Φ_B = Φ ∩ (B × B) be the trace of Φ in B and [B]^Φ = {K ∈ A/Φ : K ∩ B ≠ ∅} the collection of equivalence classes that intersect B. Then
Φ_B is a congruence on B,
[B]^Φ is a subalgebra of A/Φ, and
the algebra [B]^Φ is isomorphic to the algebra B/Φ_B.
Theorem C (universal algebra)
Let A be an algebra and Φ, Ψ two congruence relations on A such that Ψ ⊆ Φ. Then Φ/Ψ = {([a]_Ψ, [b]_Ψ) : (a, b) ∈ Φ} is a congruence on A/Ψ, and A/Φ is isomorphic to (A/Ψ)/(Φ/Ψ).
Theorem D (universal algebra)
Let A be an algebra and denote by Con A the set of all congruences on A. The set
Con A is a complete lattice ordered by inclusion.
If Φ ∈ Con A is a congruence and we denote by [Φ, A × A] ⊆ Con A the set of all congruences that contain Φ (i.e. [Φ, A × A] is a principal filter in Con A, moreover it is a sublattice), then
the map α : [Φ, A × A] → Con(A/Φ), Ψ ↦ Ψ/Φ, is a lattice isomorphism.
| Mathematics | Abstract algebra | null |
45249 | https://en.wikipedia.org/wiki/User%20interface | User interface | In the industrial design field of human–computer interaction, a user interface (UI) is the space where interactions between humans and machines occur. The goal of this interaction is to allow effective operation and control of the machine from the human end, while the machine simultaneously feeds back information that aids the operators' decision-making process. Examples of this broad concept of user interfaces include the interactive aspects of computer operating systems, hand tools, heavy machinery operator controls and process controls. The design considerations applicable when creating user interfaces are related to, or involve such disciplines as, ergonomics and psychology.
Generally, the goal of user interface design is to produce a user interface that makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the way which produces the desired result (i.e. maximum usability). This generally means that the operator needs to provide minimal input to achieve the desired output, and also that the machine minimizes undesired outputs to the user.
User interfaces are composed of one or more layers, including a human–machine interface (HMI) that typically interfaces machines with physical input hardware (such as keyboards, mice, or game pads) and output hardware (such as computer monitors, speakers, and printers). A device that implements an HMI is called a human interface device (HID). User interfaces that dispense with the physical movement of body parts as an intermediary step between the brain and the machine use no input or output devices other than electrodes; they are called brain–computer interfaces (BCIs) or brain–machine interfaces (BMIs).
Other terms for human–machine interfaces are man–machine interface (MMI) and, when the machine in question is a computer, human–computer interface. Additional UI layers may interact with one or more human senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory UI (smell), equilibria UI (balance), and gustatory UI (taste).
Composite user interfaces (CUIs) are UIs that interact with two or more senses. The most common CUI is a graphical user interface (GUI), which is composed of a tactile UI and a visual UI capable of displaying graphics. When sound is added to a GUI, it becomes a multimedia user interface (MUI). There are three broad categories of CUI: standard, virtual and augmented. Standard CUI use standard human interface devices like keyboards, mice, and computer monitors. When the CUI blocks out the real world to create a virtual reality, the CUI is virtual and uses a virtual reality interface. When the CUI does not block out the real world and creates augmented reality, the CUI is augmented and uses an augmented reality interface. When a UI interacts with all human senses, it is called a qualia interface, named after the theory of qualia. CUI may also be classified by how many senses they interact with as either an X-sense virtual reality interface or X-sense augmented reality interface, where X is the number of senses interfaced with. For example, a Smell-O-Vision is a 3-sense (3S) Standard CUI with visual display, sound and smells; when virtual reality interfaces interface with smells and touch it is said to be a 4-sense (4S) virtual reality interface; and when augmented reality interfaces interface with smells and touch it is said to be a 4-sense (4S) augmented reality interface.
Overview
The user interface or human–machine interface is the part of the machine that handles the human–machine interaction. Membrane switches, rubber keypads and touchscreens are examples of the physical part of the Human Machine Interface which we can see and touch.
In complex systems, the human–machine interface is typically computerized. The term human–computer interface refers to this kind of system. In the context of computing, the term typically extends as well to the software dedicated to control the physical elements used for human–computer interaction.
The engineering of human–machine interfaces is enhanced by considering ergonomics (human factors). The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE) which is part of systems engineering.
Tools used for incorporating human factors in interface design draw on knowledge from computer science, such as computer graphics, operating systems, and programming languages. Nowadays, the expression graphical user interface is used for the human–machine interface on computers, as nearly all of them now use graphics.
Multimodal interfaces allow users to interact using more than one modality of user input.
Terminology
There is a difference between a user interface and an operator interface or a human–machine interface (HMI).
The term "user interface" is often used in the context of (personal) computer systems and electronic devices.
Where a network of equipment or computers are interlinked through an MES (Manufacturing Execution System)-or Host to display information.
A human–machine interface (HMI) is typically local to one machine or piece of equipment, and is the interface method between the human and the equipment/machine. An operator interface is the interface method by which multiple pieces of equipment, linked by a host control system, are accessed or controlled.
The system may expose several user interfaces to serve different kinds of users. For example, a computerized library database might provide two user interfaces, one for library patrons (limited set of functions, optimized for ease of use) and the other for library personnel (wide set of functions, optimized for efficiency).
The user interface of a mechanical system, a vehicle or an industrial installation is sometimes referred to as the human–machine interface (HMI). HMI is a modification of the original term MMI (man–machine interface). In practice, the abbreviation MMI is still frequently used although some may claim that MMI stands for something different now. Another abbreviation is HCI, but is more commonly used for human–computer interaction. Other terms used are operator interface console (OIC) and operator interface terminal (OIT). However it is abbreviated, the terms refer to the 'layer' that separates a human that is operating a machine from the machine itself. Without a clean and usable interface, humans would not be able to interact with information systems.
In science fiction, HMI is sometimes used to refer to what is better described as a direct neural interface. However, this latter usage is seeing increasing application in the real-life use of (medical) prostheses—the artificial extension that replaces a missing body part (e.g., cochlear implants).
In some circumstances, computers might observe the user and react according to their actions without specific commands. A means of tracking parts of the body is required, and sensors noting the position of the head, direction of gaze and so on have been used experimentally. This is particularly relevant to immersive interfaces.
History
The history of user interfaces can be divided into the following phases according to the dominant type of user interface:
1945–1968: Batch interface
In the batch era, computing power was extremely scarce and expensive. User interfaces were rudimentary. Users had to accommodate computers rather than the other way around; user interfaces were considered overhead, and software was designed to keep the processor at maximum utilization with as little overhead as possible.
The input side of the user interfaces for batch machines was mainly punched cards or equivalent media like paper tape. The output side added line printers to these media. With the limited exception of the system operator's console, human beings did not interact with batch machines in real time at all.
Submitting a job to a batch machine involved first preparing a deck of punched cards that described a program and its dataset. The program cards were not punched on the computer itself but on keypunches, specialized, typewriter-like machines that were notoriously bulky, unforgiving, and prone to mechanical failure. The software interface was similarly unforgiving, with very strict syntaxes designed to be parsed by the smallest possible compilers and interpreters.
Once the cards were punched, one would drop them in a job queue and wait. Eventually, operators would feed the deck to the computer, perhaps mounting magnetic tapes to supply another dataset or helper software. The job would generate a printout, containing final results or an abort notice with an attached error log. Successful runs might also write a result on magnetic tape or generate some data cards to be used in a later computation.
The turnaround time for a single job often spanned entire days. If one was very lucky, it might be hours; there was no real-time response. But there were worse fates than the card queue; some computers required an even more tedious and error-prone process of toggling in programs in binary code using console switches. The very earliest machines had to be partly rewired to incorporate program logic into themselves, using devices known as plugboards.
Early batch systems gave the currently running job the entire computer; program decks and tapes had to include what we would now think of as operating system code to talk to I/O devices and do whatever other housekeeping was needed. Midway through the batch period, after 1957, various groups began to experiment with so-called "load-and-go" systems. These used a monitor program which was always resident on the computer. Programs could call the monitor for services. Another function of the monitor was to do better error checking on submitted jobs, catching errors earlier and more intelligently and generating more useful feedback to the users. Thus, monitors represented the first step towards both operating systems and explicitly designed user interfaces.
1969–present: Command-line user interface
Command-line interfaces (CLIs) evolved from batch monitors connected to the system console. Their interaction model was a series of request-response transactions, with requests expressed as textual commands in a specialized vocabulary. Latency was far lower than for batch systems, dropping from days or hours to seconds. Accordingly, command-line systems allowed the user to change their mind about later stages of the transaction in response to real-time or near-real-time feedback on earlier results. Software could be exploratory and interactive in ways not possible before. But these interfaces still placed a relatively heavy mnemonic load on the user, requiring a serious investment of effort and learning time to master.
The earliest command-line systems combined teleprinters with computers, adapting a mature technology that had proven effective for mediating the transfer of information over wires between human beings. Teleprinters had originally been invented as devices for automatic telegraph transmission and reception; they had a history going back to 1902 and had already become well-established in newsrooms and elsewhere by 1920. In reusing them, economy was certainly a consideration, but psychology and the rule of least surprise mattered as well; teleprinters provided a point of interface with the system that was familiar to many engineers and users.
The widespread adoption of video-display terminals (VDTs) in the mid-1970s ushered in the second phase of command-line systems. These cut latency further, because characters could be thrown on the phosphor dots of a screen more quickly than a printer head or carriage could move. They helped quell conservative resistance to interactive programming by cutting ink and paper consumables out of the cost picture, and were to the first TV generation of the late 1950s and 60s even more iconic and comfortable than teleprinters had been to the computer pioneers of the 1940s.
Just as importantly, the existence of an accessible screen—a two-dimensional display of text that could be rapidly and reversibly modified—made it economical for software designers to deploy interfaces that could be described as visual rather than textual. The pioneering applications of this kind were computer games and text editors; close descendants of some of the earliest specimens, such as rogue(6), and vi(1), are still a live part of Unix tradition.
1985: SAA user interface or text-based user interface
In 1985, with the beginning of Microsoft Windows and other graphical user interfaces, IBM created what is called the Systems Application Architecture (SAA) standard, which includes the Common User Access (CUA) derivative. CUA successfully created what we know and use today in Windows, and most of the more recent DOS or Windows console applications use that standard as well.
This defined that a pulldown menu system should be at the top of the screen and a status bar at the bottom, and that shortcut keys should stay the same for all common functionality (F2 to Open, for example, would work in all applications that followed the SAA standard). This greatly helped the speed at which users could learn an application, so it caught on quickly and became an industry standard.
1968–present: Graphical user interface
1968 – Douglas Engelbart demonstrated NLS, a system which uses a mouse, pointers, hypertext, and multiple windows.
1970 – Researchers at Xerox Palo Alto Research Center (many from SRI) develop WIMP paradigm (Windows, Icons, Menus, Pointers)
1973 – Xerox Alto: commercial failure due to expense, poor user interface, and lack of programs
1979 – Steve Jobs and other Apple engineers visit Xerox PARC. Though Pirates of Silicon Valley dramatizes the events, Apple had already been working on developing a GUI, such as the Macintosh and Lisa projects, before the visit.
1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold) due to cost ($16K each), performance (minutes to save a file, couple of hours to recover from crash), and poor marketing
1982 – Rob Pike and others at Bell Labs designed Blit, which was released in 1984 by AT&T and Teletype as DMD 5620 terminal.
1984 – Apple Macintosh popularizes the GUI. Super Bowl commercial shown twice, was the most expensive commercial ever made at that time
1984 – MIT's X Window System: hardware-independent platform and networking protocol for developing GUIs on UNIX-like systems
1985 – Windows 1.0 – provided GUI interface to MS-DOS. No overlapping windows (tiled instead).
1985 – Microsoft and IBM start work on OS/2 meant to eventually replace MS-DOS and Windows
1986 – Apple threatens to sue Digital Research because their GUI desktop looked too much like Apple's Mac.
1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and mouse enhancements
1987 – Macintosh II: first full-color Mac
1988 – OS/2 1.10 Standard Edition (SE) has GUI written by Microsoft, looks a lot like Windows 2
Interface design
Primary methods used in the interface design include prototyping and simulation.
Typical human–machine interface design consists of the following stages: interaction specification, interface software specification and prototyping:
Common practices for interaction specification include user-centered design, persona, activity-oriented design, scenario-based design, and resiliency design.
Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
Common practices for prototyping are based on libraries of interface elements (controls, decoration, etc.).
Principles of quality
In broad terms, interfaces generally regarded as user friendly, efficient, intuitive, etc. are typified by one or more particular qualities. For the purpose of example, a non-exhaustive list of such characteristics follows:
Clarity: The interface avoids ambiguity by making everything clear through language, flow, hierarchy and metaphors for visual elements.
Concision: Paradoxically, over-clarification of information, for instance by labelling most or all of the items displayed on-screen at once regardless of whether the user actually needs a visual indicator to identify a given item, tends to obscure the very information it is meant to convey. A clear interface is therefore also a concise one.
Familiarity: Even if someone uses an interface for the first time, certain elements can still be familiar. Real-life metaphors can be used to communicate meaning.
Responsiveness: A good interface should not feel sluggish. This means that the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
Consistency: Keeping your interface consistent across your application is important because it allows users to recognize usage patterns.
Aesthetics: While you do not need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
Efficiency: Time is money, and a great interface should make the user more productive through shortcuts and good design.
Forgiveness: A good interface should not punish users for their mistakes but should instead provide the means to remedy them.
Principle of least astonishment
The principle of least astonishment (POLA) is a general principle in the design of all kinds of interfaces. It is based on the idea that human beings can only pay full attention to one thing at one time, leading to the conclusion that novelty should be minimized.
Principle of habit formation
If an interface is used persistently, the user will unavoidably develop habits for using the interface. The designer's role can thus be characterized as ensuring the user forms good habits. If the designer is experienced with other interfaces, they will similarly develop habits, and often make unconscious assumptions regarding how the user will interact with the interface.
A model of design criteria: User Experience Honeycomb
Peter Morville designed the User Experience Honeycomb framework in 2004 when leading operations in user interface design. The framework was created to guide user interface design and would act as a guideline for many web development students for a decade.
Usable: Is the design of the system easy and simple to use? The application should feel familiar, and it should be easy to use.
Useful: Does the application fulfill a need? A business's product or service needs to be useful.
Desirable: Is the design of the application sleek and to the point? The aesthetics of the system should be attractive, and easy to translate.
Findable: Are users able to quickly find the information they are looking for? Information needs to be findable and simple to navigate. A user should never have to hunt for your product or information.
Accessible: Does the application support enlarged text without breaking the framework? An application should be accessible to those with disabilities.
Credible: Does the application exhibit trustworthy security and company details? An application should be transparent, secure, and honest.
Valuable: Does the end-user think it's valuable? If all 6 criteria are met, the end-user will find value and trust in the application.
Types
Attentive user interfaces manage the user's attention, deciding when to interrupt the user, the kind of warnings, and the level of detail of the messages presented to the user.
Batch interfaces are non-interactive user interfaces, where the user specifies all the details of the batch job in advance to batch processing, and receives the output when all the processing is done. The computer does not prompt for further input after the processing has started.
Command line interfaces (CLIs) prompt the user to provide input by typing a command string with the computer keyboard and respond by outputting text to the computer monitor. Used by programmers and system administrators, in engineering and scientific environments, and by technically advanced personal computer users.
Conversational interfaces enable users to command the computer in plain English (e.g., via text messages or chatbots) or with voice commands, instead of through graphic elements. These interfaces often emulate human-to-human conversations.
Conversational interface agents attempt to personify the computer interface in the form of an animated person, robot, or other character (such as Microsoft's Clippy the paperclip), and present interactions in a conversational form.
Crossing-based interfaces are graphical user interfaces in which the primary task consists in crossing boundaries instead of pointing.
Direct manipulation interface is a general class of user interfaces that allow users to manipulate objects presented to them, using actions that correspond to the physical world, at least loosely.
Gesture interfaces are graphical user interfaces which accept input in a form of hand gestures, or mouse gestures sketched with a computer mouse or a stylus.
Graphical user interfaces (GUI) accept input via devices such as a computer keyboard and mouse and provide articulated graphical output on the computer monitor. There are at least two different principles widely used in GUI design: Object-oriented user interfaces (OOUIs) and application-oriented interfaces.
Hardware interfaces are the physical, spatial interfaces found on products in the real world from toasters, to car dashboards, to airplane cockpits. They are generally a mixture of knobs, buttons, sliders, switches, and touchscreens.
Holographic user interfaces provide input to electronic or electro-mechanical devices by passing a finger through reproduced holographic images of what would otherwise be tactile controls of those devices, floating freely in the air, detected by a wave source and without tactile interaction.
Intelligent user interfaces are human–machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human–machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).
Motion tracking interfaces monitor the user's body motions and translate them into commands, currently being developed by Apple.
Multi-screen interfaces employ multiple displays to provide a more flexible interaction. This is often employed in computer game interaction in both the commercial arcades and more recently the handheld markets.
Natural-language interfaces are used for search engines and on webpages. The user types in a question and waits for a response.
Non-command user interfaces, which observe the user to infer their needs and intentions, without requiring that they formulate explicit commands.
Object-oriented user interfaces (OOUI) are based on object-oriented programming metaphors, allowing users to manipulate simulated objects and their properties.
Permission-driven user interfaces show or conceal menu options or functions depending on the user's level of permissions. The system is intended to improve the user experience by removing items that are unavailable to the user. A user who sees functions that are unavailable for use may become frustrated. It also provides an enhancement to security by hiding functional items from unauthorized persons.
Reflexive user interfaces where the users control and redefine the entire system via the user interface alone, for instance to change its command verbs. Typically, this is only possible with very rich graphic user interfaces.
Search interface is how the search box of a site is displayed, as well as the visual representation of the search results.
Tangible user interfaces, which place a greater emphasis on touch and the physical environment or its elements.
Task-focused interfaces are user interfaces which address the information overload problem of the desktop metaphor by making tasks, not files, the primary unit of interaction.
Text-based user interfaces (TUIs) are user interfaces which interact via text. TUIs include command-line interfaces and text-based WIMP environments.
Touchscreens are displays that accept input by touch of fingers or a stylus. Used in a growing number of mobile devices and many types of point-of-sale systems, industrial processes and machines, and self-service machines.
Touch user interfaces are graphical user interfaces using a touchpad or touchscreen display as a combined input and output device. They supplement or replace other forms of output with haptic feedback methods. Used in computerized simulators, etc.
Voice user interfaces, which accept input and provide output by generating voice prompts. The user input is made by pressing keys or buttons, or responding verbally to the interface.
Web-based user interfaces or web user interfaces (WUI) that accept input and provide output by generating web pages viewed by the user using a web browser program.
Zero-input interfaces get inputs from a set of sensors instead of querying the user with input dialogs.
Zooming user interfaces are graphical user interfaces in which information objects are represented at different levels of scale and detail, and where the user can change the scale of the viewed area in order to show more detail.
| Technology | User interface | null |
45286 | https://en.wikipedia.org/wiki/Bonobo | Bonobo | The bonobo (; Pan paniscus), also historically called the pygmy chimpanzee (less often the dwarf chimpanzee or gracile chimpanzee), is an endangered great ape and one of the two species making up the genus Pan (the other being the common chimpanzee, Pan troglodytes). While bonobos are, today, recognized as a distinct species in their own right, they were initially thought to be a subspecies of Pan troglodytes, because of the physical similarities between the two species. Taxonomically, members of the chimpanzee/bonobo subtribe Panina—composed entirely by the genus Pan—are collectively termed panins.
Bonobos are distinguished from common chimpanzees by relatively long limbs, pinker lips, a darker face, a tail-tuft through adulthood, and parted, longer hair on their heads. Some individuals have sparser, thin hair over parts of their bodies. The bonobo is found in an area within the Congo Basin of the Democratic Republic of the Congo (DRC), Central Africa. It is predominantly frugivorous, compared to the often highly omnivorous diets and hunting of small monkeys, duiker and other antelope exhibited by common chimpanzees. Bonobos inhabit primary and secondary forest, including seasonally inundated swamp forest. Because of political instability in the region, and the general timidity of bonobos, there has been relatively little field work done observing the species in its natural habitat.
According to studies published in 2017 by researchers at The George Washington University, the ancestors of the genus Pan split from the human line about 8 million years ago; moreover, bonobos split from the common chimpanzee line about 2 million years ago.
Along with the common chimpanzee, the bonobo is the closest extant relative to humans. As the two species are not proficient swimmers, the natural formation of the Congo River (around 1.5–2 million years ago) possibly led to the isolation and speciation of the bonobo. Bonobos live south of the river, and thereby were separated from the ancestors of the common chimpanzee, which live north of the river. There are no concrete figures regarding population, but the estimate is between 29,500 and 50,000 individuals. The species is listed as Endangered on the IUCN Red List and is most threatened by habitat destruction, human population growth and movement (as well as ongoing civil unrest and political infighting), with commercial poaching being, by far, the most prominent threat. Bonobos typically live 40 years in captivity; their lifespan in the wild is unknown, but it is almost certainly much shorter.
Etymology
Formerly the bonobo was known as the "pygmy chimpanzee", despite the bonobo having a similar body size to the common chimpanzee. The name "pygmy" was given by the German zoologist Ernst Schwarz in 1929, who classified the species on the basis of a previously mislabeled bonobo cranium, noting its diminutive size compared to chimpanzee skulls.
The name "bonobo" first appeared in 1954, when Austrian zoologist Eduard Paul Tratz and German biologist Heinz Heck proposed it as a new and separate generic term for pygmy chimpanzees. The name is thought to derive from a misspelling on a shipping crate from the town of Bolobo on the Congo River near the location from which the first bonobo specimens were collected in the 1920s.
Taxonomy
The bonobo was first recognised as a distinct taxon in 1928 by German anatomist Ernst Schwarz, based on a skull in the Tervuren Museum in Belgium which had previously been classified as a juvenile chimpanzee (Pan troglodytes). Schwarz published his findings in 1929, classifying the bonobo as a subspecies of chimpanzee, Pan satyrus paniscus. In 1933, American anatomist Harold Coolidge elevated it to species status. Major behavioural differences between bonobos and chimpanzees were first discussed in detail by Tratz and Heck in the early 1950s. Unaware of any taxonomic distinction with the common chimpanzee, American psychologist and primatologist Robert Yerkes had already noticed an unexpected major behavioural difference in the 1920s.
Bonobos and chimpanzees are the two species which make up the genus Pan, and are the closest living relatives to humans (Homo sapiens).
According to studies published in 2017 by researchers at The George Washington University, bonobos, along with common chimpanzees, split from the human line about 8 million years ago; moreover, bonobos split from the common chimpanzee line about 2 million years ago.
Nonetheless, the exact timing of the Pan–Homo last common ancestor is contentious, but DNA comparison suggests continual interbreeding between ancestral Pan and Homo groups, post-divergence, until about 4 million years ago. DNA evidence suggests the bonobo and common chimpanzee species diverged approximately 890,000–860,000 years ago following separation of these two populations possibly because of acidification and the spread of savannas at this time. Currently, these two species are separated by the Congo River, which had existed well before the divergence date, though ancestral Pan may have dispersed across the river using corridors which no longer exist. The first Pan fossils were reported in 2005 from the Middle Pleistocene (after the bonobo–chimpanzee split) of Kenya, alongside early Homo fossils.
According to A. Zihlman, bonobo body proportions closely resemble those of Australopithecus, leading evolutionary biologist Jeremy Griffith to suggest that bonobos may be a living example of our distant human ancestors. According to Australian anthropologists Gary Clark and Maciej Henneberg, human ancestors went through a bonobo-like phase featuring reduced aggression and associated anatomical changes, exemplified in Ardipithecus ramidus.
The first official publication of the sequencing and assembly of the bonobo genome was released in June 2012. The genome of a female bonobo from Leipzig Zoo was deposited with the International Nucleotide Sequence Database Collaboration (DDBJ/EMBL/GenBank) under the EMBL accession number AJFE01000000 after a previous analysis by the National Human Genome Research Institute confirmed that the bonobo genome is about 0.4% divergent from the chimpanzee genome.
Genetics and genomics
Relationships of bonobos to humans and other apes can be determined by comparing their genes or whole genomes. While the first bonobo genome was published in 2012, a high-quality reference genome became available only in 2021. The overall nucleotide divergence between chimpanzee and bonobo based on the latter is 0.421 ± 0.086% for autosomes and 0.311 ± 0.060% for the X chromosome. The reference genome predicts 22,366 full-length protein-coding genes and 9,066 noncoding genes, although cDNA sequencing confirmed only 20,478 protein-coding and 36,880 noncoding bonobo genes, similar to the number of genes annotated in the human genome. Overall, 206 protein-coding genes belong to gene families that contracted, and 1,576 to families that expanded, in the bonobo genome compared to the human genome; that is, these genes were lost or gained in the bonobo lineage.
Hybrids
Researchers have found that both central (Pan troglodytes troglodytes) and eastern chimpanzees (Pan troglodytes schweinfurthii) share more genetic material with bonobos than other chimpanzee subspecies do. It is believed that genetic admixture occurred at least twice within the past 550,000 years. In modern times, hybridization between bonobos and chimpanzees in the wild is prevented, as the populations are allopatric, isolated on opposite sides of the Congo River.
In captivity, hybrids between bonobos and chimpanzees have been recorded. Between 1990 and 1992, five pregnancies involving a male bonobo and two female chimpanzees were studied. The two initial pregnancies were aborted because of environmental stressors; the following three, however, led to the birth of three hybrid offspring.
A bonobo and chimpanzee hybrid called Tiby was also featured in the 2017 Swedish film The Square.
Description
The bonobo is commonly considered to be more gracile than the common chimpanzee. Although large male chimpanzees can exceed any bonobo in bulk and weight, the two species broadly overlap in body size. Adult female bonobos are somewhat smaller than adult males. Body mass ranges from with an average weight of in males against an average of in females. The total length of bonobos (from the nose to the rump while on all fours) is . Male bonobos average when standing upright, compared to in females. The bonobo's head is relatively smaller than that of the common chimpanzee with less prominent brow ridges above the eyes. It has a black face with pink lips, small ears, wide nostrils, and long hair on its head that forms a parting. Females have slightly more prominent breasts, in contrast to the flat breasts of other female apes, although not so prominent as those of humans. The bonobo also has a slim upper body, narrow shoulders, thin neck, and long legs when compared to the common chimpanzee.
Bonobos are both terrestrial and arboreal. Most ground locomotion is characterized by quadrupedal knuckle-walking. Bipedal walking has been recorded as less than 1% of terrestrial locomotion in the wild, a figure that decreased with habituation, while in captivity there is a wide variation. Bipedal walking in captivity, as a percentage of bipedal plus quadrupedal locomotion bouts, has been observed from 3.9% for spontaneous bouts to nearly 19% when abundant food is provided. These physical characteristics and its posture give the bonobo an appearance more closely resembling that of humans than the common chimpanzee does. The bonobo also has highly individuated facial features, as humans do, so that one individual may look significantly different from another, a characteristic adapted for visual facial recognition in social interaction.
Multivariate analysis has shown bonobos are more neotenized than the common chimpanzee, taking into account such features as the proportionately long torso length of the bonobo. Other researchers challenged this conclusion.
Behavior
Primatologist Frans de Waal states bonobos are capable of altruism, compassion, empathy, kindness, patience, and sensitivity, and described "bonobo society" as a "gynecocracy". Primatologists who have studied bonobos in the wild have documented a wide range of behaviors, including aggressive behavior and more cyclic sexual behavior similar to chimpanzees, even though bonobos show more sexual behavior in a greater variety of relationships. An analysis of female bonding among wild bonobos by Takeshi Furuichi stresses female sexuality and shows how female bonobos spend much more time in estrus than female chimpanzees.
Some primatologists have argued that de Waal's data reflect only the behavior of captive bonobos, suggesting that wild bonobos show levels of aggression closer to what is found among chimpanzees. De Waal has responded that the contrast in temperament between bonobos and chimpanzees observed in captivity is meaningful, because it controls for the influence of environment. The two species behave quite differently even if kept under identical conditions. A 2014 study also found bonobos to be less aggressive than chimpanzees, particularly eastern chimpanzees. The authors argued that the relative peacefulness of western chimpanzees and bonobos was primarily due to ecological factors. Bonobos warn each other of danger less efficiently than chimpanzees in the same situation.
Nonetheless, on 12 April 2024, biologists reported that bonobos behave more aggressively than thought earlier.
Social behavior
Bonobos are unusual among apes for their matriarchal social structure (extensive overlap between the male and female hierarchies leads some to refer to them as gender-balanced in their power structure). Bonobos do not have a defined territory and communities will travel over a wide range. Because of the nomadic nature of the females and evenly distributed food in their environment, males do not gain any obvious advantages by forming alliances with other males, or by defending a home range, as chimpanzees do. Female bonobos possess sharper canines than female chimpanzees, further fueling their status in the group. Although a male bonobo is dominant to a female in a dyadic interaction, depending on the community, socially-bonded females may be co-dominant with males or dominant over them, even to the extent that females can coerce reluctant males into mating with them.
At the top of the hierarchy is a coalition of high-ranking females and males typically headed by an old, experienced matriarch who acts as the decision-maker and leader of the group. Female bonobos typically earn their rank through experience, age, and ability to forge alliances with other females in their group, rather than through physical intimidation, and top-ranking females will protect immigrant females from male harassment. While bonobos are often called matriarchal, and while every community is dominated by a female, some males will still obtain a high rank and act as coalitionary partners to the alpha female, often taking initiative in coordinating the group's movements. These males may outrank not only the other males in the group, but also many females. Certain males alert the group to any possible threats, protecting the group from predators such as pythons and leopards.
Aggressive encounters between males and females are rare, and males are tolerant of infants and juveniles. A male derives his status from the status of his mother. The mother–son bond often stays strong and continues throughout life. While social hierarchies do exist, and although the son of a high ranking female may outrank a lower female, rank plays a less prominent role than in other primate societies. Relationships between different communities are often positive and affiliative, and bonobos are not a territorial species. Bonobos will also share food with others, even unrelated strangers. Bonobos exhibit paedomorphism (retaining infantile physical characteristics and behaviours), which greatly inhibits aggression and enables unfamiliar bonobos to freely mingle and cooperate with each other.
Males engage in lengthy friendships with females and, in turn, female bonobos prefer to associate with and mate with males who are respectful and easygoing around them. Because female bonobos can use alliances to rebuff coercive and domineering males and select males at their own leisure, they show preference for males who are not aggressive towards them. Aging bonobos lose their playful streak and become noticeably more irritable in old age. Both sexes have a similar level of aggressiveness. Bonobos live in a male philopatric society where the females immigrate to new communities while males remain in their natal troop. However, it is not entirely unheard of for males to occasionally transfer into new groups. Additionally, females with powerful mothers may remain in their natal clan.
Alliances between males are poorly developed in most bonobo communities, while females will form alliances with each other and alliances between males and females occur, including multisex hunting parties. There is a confirmed case of a grown male bonobo adopting his orphaned infant brother. A mother bonobo will also support her grown son in conflicts with other males and help him secure better ties with other females, enhancing her chance of gaining grandchildren from him. She will even take measures such as physical intervention to prevent other males from breeding with certain females she wants her son to mate with. Although mothers play a role in aiding their sons, and the hierarchy among males is largely reflected by their mother's social status, some motherless males will still successfully dominate some males who do have mothers.
Female bonobos have also been observed fostering infants from outside their established community. Bonobos are not known to kill each other, and are generally less violent than chimpanzees, yet aggression still manifests itself in this species. Although female bonobos dominate males and selectively mate with males who do not exhibit aggression toward them, competition between the males themselves is intense and high-ranking males secure more matings than low-ranking ones. Indeed, the size difference between males and females is more pronounced in bonobos than it is in chimpanzees, as male bonobos do not form alliances and therefore have little incentive to hold back when fighting for access to females. Male bonobos are known to attack each other and inflict serious injuries such as missing digits, damaged eyes and torn ears. Some of these injuries may also occur when a male threatens the high ranking females and is injured by them, as the larger male is swarmed and outnumbered by a female mob.
Because of the promiscuous mating behavior of female bonobos, a male cannot be sure which offspring are his. As a result, the entirety of parental care in bonobos is assumed by the mothers. However, bonobos are not as promiscuous as chimpanzees and slightly polygamous tendencies occur, with high-ranking males enjoying greater reproductive success than low-ranking males. Unlike chimpanzees, where any male can coerce a female into mating with him, female bonobos enjoy greater sexual preferences and can rebuff undesirable males, an advantage of female-female bonding, and actively seek out higher-ranking males.
Bonobo party size tends to vary because the groups exhibit a fission–fusion pattern. A community of approximately 100 will split into small groups during the day while looking for food, and then will come back together to sleep. They sleep in nests that they construct in trees. Female bonobos more often than not secure feeding privileges and feed before males do, and although they are rarely successful in one-on-one confrontations with males, a female bonobo with several allies supporting her has extremely high success in monopolizing food sources. Different communities favour different prey. In some communities females exclusively hunt and have a preference for rodents, in others both sexes hunt, and will target monkeys. In captive settings, females exhibit extreme food-based aggression towards males, and forge coalitions against them to monopolize specific food items, often going as far as to mutilate any males who fail to heed their warning. In wild settings, however, female bonobos will quietly ask males for food if they had gotten it first, instead of forcibly confiscating it, suggesting sex-based hierarchy roles are less rigid than in captive colonies. Female bonobos are known to lead hunts on duikers and successfully defend their bounty from marauding males in the wild. They are more tolerant of younger males pestering them yet exhibit heightened aggression towards older males.
In a study published in November 2023, scientists reported, for the first time, evidence that groups of primates, particularly bonobos, are capable of cooperating with each other. Researchers observed unprecedented cooperation between two distinct bonobo groups in the Congo's Kokolopori Bonobo Reserve, Ekalakala and Kokoalongo, challenging traditional notions of ape societies. Over two years of observation, researchers witnessed 95 encounters between the groups. Contrary to expectations, these interactions resembled those within a single group. During these encounters, the bonobos engaged in behaviors such as grooming, food sharing, and collective defense against threats like snakes. Notably, the two groups, while displaying cooperative tendencies, maintained distinct identities, and there was no evidence of interbreeding or a blending of cultures. The cooperation observed was not arbitrary but evolved through individual bonds formed by exchanging favors and gifts. Some bonobos even formed alliances to target a third individual, demonstrating a nuanced social dynamic within the groups.
Sociosexual behaviour
Sexual activity generally plays a major role in bonobo society, being used as what some scientists perceive as a greeting, a means of forming social bonds, a means of conflict resolution, and postconflict reconciliation. Bonobos are the only non-human animal to have been observed engaging in tongue kissing. Bonobos and humans are the only primates to typically engage in face-to-face genital sex, although a pair of western gorillas has also been photographed in this position.
Bonobos do not form permanent monogamous sexual relationships with individual partners. They also do not seem to discriminate in their sexual behavior by sex or age, with the possible exception of abstaining from sexual activity between mothers and their adult sons. When bonobos come upon a new food source or feeding ground, the increased excitement will usually lead to communal sexual activity, presumably decreasing tension and encouraging peaceful feeding.
More often than the males, female bonobos engage in mutual genital-rubbing behavior, possibly to bond socially with each other, thus forming a female nucleus of bonobo society. The bonding among females enables them to dominate most of the males. Adolescent females often leave their native community to join another community. This migration mixes the bonobo gene pools, providing genetic diversity. Sexual bonding with other females establishes these new females as members of the group.
Bonobo clitorises are larger and more externalized than in most mammals; while the weight of a young adolescent female bonobo "is maybe half" that of a human teenager, she has a clitoris that is "three times bigger than the human equivalent, and visible enough to waggle unmistakably as she walks". In scientific literature, the female–female behavior of bonobos pressing vulvas together is often referred to as genito-genital (GG) rubbing. This sexual activity happens within the immediate female bonobo community and sometimes outside of it. Ethologist Jonathan Balcombe stated that female bonobos rub their clitorises together rapidly for ten to twenty seconds, and this behavior, "which may be repeated in rapid succession, is usually accompanied by grinding, shrieking, and clitoral engorgement"; he added that it is estimated that they engage in this practice "about once every two hours" on average. As bonobos occasionally copulate face-to-face, "evolutionary biologist Marlene Zuk has suggested that the position of the clitoris in bonobos and some other primates has evolved to maximize stimulation during sexual intercourse". The position of the clitoris may alternatively permit GG-rubbings, which has been hypothesized to function as a means for female bonobos to evaluate their intrasocial relationships.
Bonobo males engage in various forms of male–male genital behavior. The most common form of male–male mounting is similar to that of a heterosexual mounting: one of the males sits "passively on his back [with] the other male thrusting on him", with the penises rubbing together because of both males' erections. In another, rarer form of genital rubbing, two bonobo males hang from a tree limb face-to-face while penis fencing. This also may occur when two males rub their penises together while in face-to-face position. Another form of genital interaction (rump rubbing) often occurs to express reconciliation between two males after a conflict, when they stand back-to-back and rub their scrotal sacs together, but such behavior also occurs outside agonistic contexts: Kitamura (1989) observed rump–rump contacts between adult males following sexual solicitation behaviors similar to those between female bonobos prior to GG-rubbing. Takayoshi Kano observed similar practices among bonobos in the natural habitat. Tongue kissing, oral sex, and genital massaging have also been recorded among male bonobos.
Wild females give birth for the first time at 13 or 14 years of age. Bonobo reproductive rates are no higher than those of the common chimpanzee. However, female bonobo oestrus periods are longer. During oestrus, females undergo a swelling of the perineal tissue lasting 10 to 20 days. The gestation period is on average 240 days. Postpartum amenorrhea (absence of menstruation) lasts less than one year and a female may resume external signs of oestrus within a year of giving birth, though the female is probably not fertile at this point. Female bonobos carry and nurse their young for four years and give birth on average every 4.6 years. Compared to common chimpanzees, bonobo females resume the genital swelling cycle much sooner after giving birth, enabling them to rejoin the sexual activities of their society. Also, bonobo females which are sterile or too young to reproduce still engage in sexual activity. Mothers will help their sons get more matings from females in oestrus.
Adult male bonobos have sex with infants, although without penetration. Adult females also have sex with infants, but less frequently. Infants are not passive participants. They quite often initiate contacts with both adult males and females, as well as with peers. They have also been shown to be sexually active even in the absence of any stimulation or learning from adults.
Infanticide, while well documented in chimpanzees, is apparently absent in bonobo society. Although infanticide has not been directly observed, there have been documented cases of both female and male bonobos kidnapping infants, sometimes resulting in infants dying from dehydration. Although male bonobos have not yet been seen to practice infanticide, there is a documented incident in captivity involving a dominant female abducting an infant from a lower-ranking female, treating the infant roughly and denying it the chance to suckle. During the kidnapping, the infant's mother was clearly distressed and tried to retrieve her infant. Had the zookeepers not intervened, the infant almost certainly would have died from dehydration. This suggests female bonobos can have hostile rivalries with each other and a propensity to carry out infanticide.
The highly sexual nature of bonobo society and the fact that there is little competition over mates means that many males and females are mating with each other, in contrast to the one dominant male chimpanzee that fathers most of the offspring in a group. The strategy of bonobo females mating with many males may be a counterstrategy to infanticide because it confuses paternity. If male bonobos cannot distinguish their own offspring from others, the incentive for infanticide essentially disappears. This is a reproductive strategy that seems specific to bonobos; infanticide is observed in all other great apes except orangutans. Bonobos engage in sexual activity numerous times a day.
It is unknown how the bonobo avoids simian immunodeficiency virus (SIV) and its effects.
Peacefulness
Observations in the wild indicate that the males among the related common chimpanzee communities are hostile to males from outside the community. Parties of males 'patrol' for the neighboring males that might be traveling alone, and attack those single males, often killing them. This does not appear to be the behavior of bonobo males or females, which seem to prefer sexual contact over violent confrontation with outsiders.
While bonobos are more peaceful than chimpanzees, it is not true that they are unaggressive. In the wild, male bonobos commit aggressive acts about three times as often as male chimpanzees. Male chimpanzees, however, are more likely than male bonobos to be aggressive to a lethal degree, whereas male bonobos engage in more frequent but less intense squabbling. Female-to-male aggression is also more common among bonobos than among chimpanzees, and female bonobos are, in general, more aggressive than female chimpanzees. Both bonobos and chimpanzees exhibit physical aggression more than 100 times as often as humans do.
Although often described as peaceful, bonobos do not restrict their aggression to their own species: bonobos have attacked humans and inflicted serious, albeit non-fatal, injuries.
Bonobos are nonetheless far less violent than chimpanzees: lethal aggression is essentially nonexistent among bonobos, while it is not infrequent among chimpanzees.
It has been hypothesized that bonobos are able to live a more peaceful lifestyle in part because of an abundance of nutritious vegetation in their natural habitat, allowing them to travel and forage in large parties.
Recent studies show that there are significant brain differences between bonobos and chimpanzees. Bonobos have more grey matter volume in the right anterior insula, right dorsal amygdala, hypothalamus, and right dorsomedial prefrontal cortex, all of which are regions assumed to be vital for feeling empathy, sensing distress in others and feeling anxiety. They also have a thick connection between the amygdala, an important area that can spark aggression, and the ventral anterior cingulate cortex, which has been shown to help control impulses in humans. This thicker connection may make them better at regulating their emotional impulses and behavior.
Bonobo society is dominated by females, and severing the lifelong alliance between mothers and their male offspring may make males vulnerable to female aggression. De Waal has warned of the danger of romanticizing bonobos: "All animals are competitive by nature and cooperative only under specific circumstances" and that "when first writing about their behaviour, I spoke of 'sex for peace' precisely because bonobos had plenty of conflicts. There would obviously be no need for peacemaking if they lived in perfect harmony."
Surbeck and Hohmann showed in 2008 that bonobos sometimes do hunt monkey species. Five incidents were observed in a group of bonobos in Salonga National Park, which seemed to reflect deliberate cooperative hunting. On three occasions, the hunt was successful, and infant monkeys were captured and eaten.
There is one inferred intraspecies killing in the wild, and a confirmed lethal attack in captivity. In both cases, the attackers were female and the victims were male.
Diet
The bonobo is an omnivorous frugivore; 57% of its diet is fruit, but this is supplemented with leaves, honey, eggs, meat from small vertebrates such as anomalures, flying squirrels and duikers, and invertebrates. The truffle species Hysterangium bonobo is also eaten by bonobos. In some instances, bonobos have been shown to consume lower-order primates. Some claim bonobos have also been known to practice cannibalism in captivity, a claim disputed by others. However, at least one confirmed case of cannibalism in the wild, involving a dead infant, was described in 2008. A 2016 paper reported two more instances of infant cannibalism, although it was not confirmed whether infanticide was involved.
Cognitive comparisons to chimpanzees
In 2020, the first whole-genome comparison between chimpanzees and bonobos was published and showed genomic aspects that may underlie or have resulted from their divergence and behavioral differences, including selection for genes related to diet and hormones. A 2010 study found that "female bonobos displayed a larger range of tool use behaviours than males, a pattern previously described for chimpanzees but not for other great apes". This finding was affirmed by the results of another 2010 study which also found that "bonobos were more skilled at solving tasks related to theory of mind or an understanding of social causality, while chimpanzees were more skilled at tasks requiring the use of tools and an understanding of physical causality". Bonobos have been found to be more risk-averse compared to chimpanzees, preferring immediate rather than delayed rewards when it comes to foraging. Bonobos also have a weaker spatial memory compared to chimpanzees, with adult bonobos performing comparably to juvenile chimpanzees.
Similarity to humans
Bonobos are capable of passing the mirror-recognition test for self-awareness, as are all great apes. They communicate primarily through vocal means, although the meanings of their vocalizations are not currently known. However, most humans do understand their facial expressions and some of their natural hand gestures, such as their invitation to play. The communication system of wild bonobos includes a characteristic that was earlier only known in humans: bonobos use the same call to mean different things in different situations, and the other bonobos have to take the context into account when determining the meaning.
Two bonobos at the Great Ape Trust, Kanzi and Panbanisha, have been taught how to communicate using a keyboard labeled with lexigrams (geometric symbols) and they can respond to spoken sentences. Kanzi's vocabulary consists of more than 500 English words, and he has comprehension of around 3,000 spoken English words.
Kanzi is also known for learning by observation: simply by watching researchers try to teach his mother, Kanzi began performing the tasks himself, including some that his mother had failed to learn. Some, such as philosopher and bioethicist Peter Singer, argue that these results qualify them for "rights to survival and life", rights which humans theoretically accord to all persons (see great ape personhood).
In the 1990s, Kanzi was taught to make and use simple stone tools. This resulted from a study undertaken by researchers Kathy Schick and Nicholas Toth, and later Gary Garufi. The researchers wanted to know if Kanzi possessed the cognitive and biomechanical abilities required to make and use stone tools. Though Kanzi was able to form flakes, he did not create them in the same way as humans, who hold the core in one hand and knap it with the other; Kanzi threw the cobble against a hard surface or against another cobble. This allowed him to produce a larger force to initiate a fracture as opposed to knapping it in his hands.
As in other great apes and humans, third party affiliation toward the victim—the affinitive contact made toward the recipient of an aggression by a group member other than the aggressor—is present in bonobos. A 2013 study found that both the affiliation spontaneously offered by a bystander to the victim and the affiliation requested by the victim (solicited affiliation) can reduce the probability of further aggression by group members on the victim (this fact supporting the Victim-Protection Hypothesis). Yet, only spontaneous affiliation reduced victim anxiety—measured via self-scratching rates—thus suggesting not only that non-solicited affiliation has a consolatory function but also that the spontaneous gesture—more than the protection itself—works in calming the distressed subject. The authors hypothesize that the victim may perceive the motivational autonomy of the bystander, who does not require an invitation to provide post-conflict affinitive contact. Moreover, spontaneous—but not solicited—third party affiliation was affected by the bond between consoler and victim (this supporting the Consolation Hypothesis). Importantly, spontaneous affiliation followed the empathic gradient described for humans, being mostly offered to kin, then friends, then acquaintances (these categories having been determined using affiliation rates between individuals). Hence, consolation in the bonobo may be an empathy-based phenomenon.
Instances in which bonobos have expressed joy have been reported. One study analyzed and recorded sounds made by human infants and bonobos when they were tickled. Although the bonobos' laugh was at a higher frequency, the laugh was found to follow a spectrographic pattern similar to that of human babies.
Distribution and habitat
Bonobos are found only south of the Congo River and north of the Kasai River (a tributary of the Congo), in the humid forests of the Democratic Republic of Congo. Ernst Schwarz's 1927 paper "Le Chimpanzé de la Rive Gauche du Congo", announcing his discovery, has been read as associating the Parisian Left Bank with the left bank of the Congo River: the bohemian culture of Paris, and an unconventional ape in the Congo.
The ranges of bonobos and chimpanzees are separated by the Congo River, with bonobos living to its south and chimpanzees to the north.
Ecological role
In the Congo tropical rainforest, the vast majority of plants need animals to reproduce and disperse their seeds. Bonobos are the second largest frugivorous animals in this region, after elephants. It is estimated that during its life, each bonobo will ingest and disperse nine tons of seeds, from more than 91 species of lianas, grasses, trees and shrubs. These seeds travel for about 24 hours in the bonobo digestive tract, which can carry them several kilometers (mean 1.3 km; maximum 4.5 km) from the parent plant, before being deposited intact in the feces. These dispersed seeds remain viable, germinating better and more quickly than unpassed seeds. For those seeds, diplochory with dung beetles (Scarabaeidae) improves post-dispersal survival.
Certain plants such as Dialium may even depend on bonobos to activate the germination of their seeds, which are characterized by tegumentary dormancy. Initial estimates of the effectiveness of seed dispersal by bonobos are now available. The behavior of bonobos could affect the population structure of the plants whose seeds they disperse. The majority of these zoochorous plants cannot recruit without dispersal, and the homogeneous spatial structure of the trees suggests a direct link with their dispersal agent. Few species could replace bonobos in terms of seed dispersal services, just as bonobos could not replace elephants. There is little functional redundancy among the frugivorous mammals of the Congo, which face severe human hunting pressure and local extinction. The defaunation of the forests, leading to empty forest syndrome, is a critical issue in conservation biology. The disappearance of bonobos, which disperse the seeds of 40% of the tree species in these forests, or 11.6 million individual seeds during the life of each bonobo, would have serious consequences for the conservation of the Congo rainforest.
Conservation status
The IUCN Red List classifies bonobos as an endangered species, with conservative population estimates ranging from 29,500 to 50,000 individuals. Major threats to bonobo populations include habitat loss and hunting for bushmeat, the latter activity having increased dramatically during the first and second Congo Wars in the Democratic Republic of Congo, due to the presence of heavily armed militias (even in remote, "protected" areas such as Salonga National Park). This is part of a more general trend of ape extinction.
As the bonobos' habitat is shared with many people, the ultimate success of conservation efforts still relies on local and community involvement. The issue of parks versus people is salient in the Cuvette Centrale, within the bonobos' range. There is strong local, and broad-based Congolese, resistance to establishing national parks, as indigenous communities have previously been driven from their forest homes by the forming of parks. In Salonga National Park (the only national park in bonobo habitat), there is no local involvement, and surveys undertaken since 2000 indicate the bonobo, the African forest elephant, the okapi, and other rare species have been devastated by poachers and the thriving bushmeat trade. In contrast, areas do exist where the bonobo and ecological biodiversity still thrive without any established park borders, because of the indigenous beliefs/taboos against killing bonobos and other animals.
During the wars in the 1990s, researchers and international non-governmental organizations (NGOs) were driven out of the bonobo habitat. In 2002, the Bonobo Conservation Initiative launched the Bonobo Peace Forest Project (supported by the Global Conservation Fund of Conservation International) in cooperation with national institutions, local NGOs, and local communities; the Peace Forest Project works with local communities to establish a linked constellation of community-based reserves managed by local and indigenous people. This model, implemented mainly through DRC organizations and local communities, has helped bring about agreements to protect large areas of the bonobo habitat. According to Amy Parish, the Bonobo Peace Forest "is going to be a model for conservation in the 21st century".
The port town of Basankusu is situated on the Lulonga River, at the confluence of the Lopori and Maringa Rivers, in the north of the country, making it well placed to receive and transport local goods to the cities of Mbandaka and Kinshasa. With Basankusu being the last port of substance before the wilderness of the Lopori Basin and the Lomako River—the bonobo heartland—conservation efforts for the bonobo use the town as a base.
In 1995, concern over declining numbers of bonobos in the wild led the Zoological Society of Milwaukee (ZSM), in Milwaukee, Wisconsin, with contributions from bonobo scientists around the world, to publish the Action Plan for Pan paniscus: A Report on Free Ranging Populations and Proposals for their Preservation. The Action Plan compiles population data on bonobos from 20 years of research conducted at various sites throughout the bonobo's range. The plan identifies priority actions for bonobo conservation and serves as a reference for developing conservation programs for researchers, government officials, and donor agencies.
Acting on Action Plan recommendations, the ZSM developed the Bonobo and Congo Biodiversity Initiative. This program includes habitat and rain-forest preservation, training for Congolese nationals and conservation institutions, wildlife population assessment and monitoring, and education. The ZSM has conducted regional surveys within the range of the bonobo in conjunction with training Congolese researchers in survey methodology and biodiversity monitoring. The ZSM's initial goal was to survey Salonga National Park to determine the conservation status of the bonobo within the park and to provide financial and technical assistance to strengthen park protection. As the project has developed, the ZSM has become more involved in helping the Congolese living in bonobo habitat. They have built schools, hired teachers, provided some medicines, and started an agriculture project to help the Congolese learn to grow crops and depend less on hunting wild animals.
With grants from the United Nations, USAID, the U.S. Embassy, the World Wildlife Fund, and many other groups and individuals, the ZSM also has been working to:
Survey the bonobo population and its habitat to find ways to help protect these apes
Develop antipoaching measures to help save apes, forest elephants, and other endangered animals in Congo's Salonga National Park, a UN World Heritage Site
Provide training, literacy education, agricultural techniques, schools, equipment, and jobs for Congolese living near bonobo habitats so that they will have a vested interest in protecting the great apes
Model small-scale conservation methods that can be used throughout Congo
Starting in 2003, the U.S. government allocated $54 million to the Congo Basin Forest Partnership. This significant investment has triggered the involvement of international NGOs to establish bases in the region and work to develop bonobo conservation programs. This initiative should improve the likelihood of bonobo survival, but its success still may depend upon building greater involvement and capability in local and indigenous communities.
The bonobo population is believed to have declined sharply in the last 30 years, though surveys have been hard to carry out in war-ravaged central Congo. Estimates range from fewer than 50,000 to 60,000 individuals, according to the World Wildlife Fund.
In addition, concerned parties have addressed the crisis on several science and ecological websites. Organizations such as the World Wide Fund for Nature, the African Wildlife Foundation, and others, are trying to focus attention on the extreme risk to the species. Some have suggested that a reserve be established in a more stable part of Africa, or on an island in a place such as Indonesia. Awareness is ever increasing, and even nonscientific or ecological sites have created various groups to collect donations to help with the conservation of this species.
Bonobos in human culture
World Bonobo Day is February 14 (Valentine's Day). This was established in 2017 by the African Wildlife Foundation.
| Biology and health sciences | Primates | null |
45303 | https://en.wikipedia.org/wiki/Thread%20%28computing%29 | Thread (computing) | In computer science, a thread of execution is the smallest sequence of programmed instructions that can be managed independently by a scheduler, which is typically a part of the operating system. In many cases, a thread is a component of a process.
The multiple threads of a given process may be executed concurrently (via multithreading capabilities), sharing resources such as memory, while different processes do not share these resources. In particular, the threads of a process share its executable code and the values of its dynamically allocated variables and non-thread-local global variables at any given time.
The implementation of threads and processes differs between operating systems.
History
Threads made an early appearance under the name of "tasks" in IBM's batch processing operating system, OS/360, in 1967. It provided users with three available configurations of the OS/360 control system, of which Multiprogramming with a Variable Number of Tasks (MVT) was one. Saltzer (1966) credits Victor A. Vyssotsky with the term "thread".
The use of threads in software applications became more common in the early 2000s as CPUs began to incorporate multiple cores. Applications wishing to exploit multiple cores for performance gains were required to employ concurrency.
Related concepts
Scheduling can be done at the kernel level or user level, and multitasking can be done preemptively or cooperatively. This yields a variety of related concepts.
Processes
At the kernel level, a process contains one or more kernel threads, which share the process's resources, such as memory and file handles – a process is a unit of resources, while a thread is a unit of scheduling and execution. Kernel scheduling is typically uniformly done preemptively or, less commonly, cooperatively. At the user level a process such as a runtime system can itself schedule multiple threads of execution. If these do not share data, as in Erlang, they are usually analogously called processes, while if they share data they are usually called (user) threads, particularly if preemptively scheduled. Cooperatively scheduled user threads are known as fibers; different processes may schedule user threads differently. User threads may be executed by kernel threads in various ways (one-to-one, many-to-one, many-to-many). The term "light-weight process" variously refers to user threads or to kernel mechanisms for scheduling user threads onto kernel threads.
A process is a "heavyweight" unit of kernel scheduling, as creating, destroying, and switching processes is relatively expensive. Processes own resources allocated by the operating system. Resources include memory (for both code and data), file handles, sockets, device handles, windows, and a process control block. Processes are isolated by process isolation, and do not share address spaces or file resources except through explicit methods such as inheriting file handles or shared memory segments, or mapping the same file in a shared way – see interprocess communication. Creating or destroying a process is relatively expensive, as resources must be acquired or released. Processes are typically preemptively multitasked, and process switching is relatively expensive, beyond basic cost of context switching, due to issues such as cache flushing (in particular, process switching changes virtual memory addressing, causing invalidation and thus flushing of an untagged translation lookaside buffer (TLB), notably on x86).
Kernel threads
A kernel thread is a "lightweight" unit of kernel scheduling. At least one kernel thread exists within each process. If multiple kernel threads exist within a process, then they share the same memory and file resources. Kernel threads are preemptively multitasked if the operating system's process scheduler is preemptive. Kernel threads do not own resources except for a stack, a copy of the registers including the program counter, and thread-local storage (if any), and are thus relatively cheap to create and destroy. Thread switching is also relatively cheap: it requires a context switch (saving and restoring registers and stack pointer), but does not change virtual memory and is thus cache-friendly (leaving the TLB valid). The kernel can assign one or more software threads to each core in a CPU (a core that supports hardware multithreading can itself run multiple software threads concurrently), and can swap out threads that get blocked. However, kernel threads take much longer than user threads to be swapped.
User threads
Threads are sometimes implemented in userspace libraries, thus called user threads. The kernel is unaware of them, so they are managed and scheduled in userspace. Some implementations base their user threads on top of several kernel threads, to benefit from multi-processor machines (M:N model). User threads as implemented by virtual machines are also called green threads.
As user thread implementations are typically entirely in userspace, context switching between user threads within the same process is extremely efficient because it does not require any interaction with the kernel at all: a context switch can be performed by locally saving the CPU registers used by the currently executing user thread or fiber and then loading the registers required by the user thread or fiber to be executed. Since scheduling occurs in userspace, the scheduling policy can be more easily tailored to the requirements of the program's workload.
However, the use of blocking system calls in user threads (as opposed to kernel threads) can be problematic. If a user thread or a fiber performs a system call that blocks, the other user threads and fibers in the process are unable to run until the system call returns. A typical example of this problem is when performing I/O: most programs are written to perform I/O synchronously. When an I/O operation is initiated, a system call is made, and does not return until the I/O operation has been completed. In the intervening period, the entire process is "blocked" by the kernel and cannot run, which starves other user threads and fibers in the same process from executing.
A common solution to this problem (used, in particular, by many green threads implementations) is providing an I/O API that implements an interface that blocks the calling thread, rather than the entire process, by using non-blocking I/O internally, and scheduling another user thread or fiber while the I/O operation is in progress. Similar solutions can be provided for other blocking system calls. Alternatively, the program can be written to avoid the use of synchronous I/O or other blocking system calls (in particular, using non-blocking I/O, including lambda continuations and/or async/await primitives).
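The async/await approach mentioned above can be sketched with Python's standard asyncio library; the task names and delays below are arbitrary, and asyncio.sleep stands in for a real non-blocking I/O operation:

    import asyncio

    async def fetch(name, delay):
        # Stand-in for a blocking I/O call; awaiting hands control back
        # to the event loop so other tasks can run in the meantime.
        await asyncio.sleep(delay)
        return f"{name} finished after {delay}s"

    async def main():
        # Both "I/O operations" proceed concurrently, even though everything
        # runs on a single operating-system thread.
        results = await asyncio.gather(fetch("task-a", 0.2), fetch("task-b", 0.1))
        for line in results:
            print(line)

    asyncio.run(main())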
Fibers
Fibers are an even lighter unit of scheduling which are cooperatively scheduled: a running fiber must explicitly "yield" to allow another fiber to run, which makes their implementation much easier than kernel or user threads. A fiber can be scheduled to run in any thread in the same process. This permits applications to gain performance improvements by managing scheduling themselves, instead of relying on the kernel scheduler (which may not be tuned for the application). Some research implementations of the OpenMP parallel programming model implement their tasks through fibers. Closely related to fibers are coroutines, with the distinction being that coroutines are a language-level construct, while fibers are a system-level construct.
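Cooperative scheduling of this kind can be imitated in Python with generators, whose explicit yield plays the role of the fiber's yield; this is a toy round-robin scheduler for illustration only, not a real fiber implementation:

    from collections import deque

    def worker(name, steps):
        # Each generator acts as a "fiber": it runs until it explicitly
        # yields, at which point the scheduler may resume another one.
        for i in range(steps):
            print(f"{name}: step {i}")
            yield  # explicit, cooperative yield point

    ready = deque([worker("fiber-1", 2), worker("fiber-2", 3)])
    while ready:
        fiber = ready.popleft()
        try:
            next(fiber)          # resume the fiber until its next yield
            ready.append(fiber)  # it yielded, so put it back in the queue
        except StopIteration:
            pass                 # this fiber ran to completion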
Threads vs processes
Threads differ from traditional multitasking operating-system processes in several ways:
processes are typically independent, while threads exist as subsets of a process
processes carry considerably more state information than threads, whereas multiple threads within a process share process state as well as memory and other resources
processes have separate address spaces, whereas threads share their address space
processes interact only through system-provided inter-process communication mechanisms
context switching between threads in the same process typically occurs faster than context switching between processes
Systems such as Windows NT and OS/2 are said to have cheap threads and expensive processes; in other operating systems there is not so great a difference except in the cost of an address-space switch, which on some architectures (notably x86) results in a translation lookaside buffer (TLB) flush.
Advantages and disadvantages of threads vs processes include:
Lower resource consumption of threads: using threads, an application can operate using fewer resources than it would need when using multiple processes.
Simplified sharing and communication of threads: unlike processes, which require a message passing or shared memory mechanism to perform inter-process communication (IPC), threads can communicate through data, code and files they already share (see the sketch after this list).
Thread crashes a process: due to threads sharing the same address space, an illegal operation performed by a thread can crash the entire process; therefore, one misbehaving thread can disrupt the processing of all the other threads in the application.
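A minimal Python sketch of the sharing point above: a thread's write to a module-level variable is visible in the parent, while a child process, having its own address space, leaves the parent's copy untouched (the variable name and values are illustrative):

    import threading
    import multiprocessing

    counter = 0

    def increment():
        global counter
        counter += 1

    if __name__ == "__main__":
        # A thread shares the parent's address space, so its write is visible.
        t = threading.Thread(target=increment)
        t.start()
        t.join()
        print("after thread:", counter)   # prints 1

        # A process has its own address space; its write stays in the child.
        p = multiprocessing.Process(target=increment)
        p.start()
        p.join()
        print("after process:", counter)  # still prints 1 in the parent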
Scheduling
Preemptive vs cooperative scheduling
Operating systems schedule threads either preemptively or cooperatively. Multi-user operating systems generally favor preemptive multithreading for its finer-grained control over execution time via context switching. However, preemptive scheduling may context-switch threads at moments unanticipated by programmers, thus causing lock convoy, priority inversion, or other side-effects. In contrast, cooperative multithreading relies on threads to relinquish control of execution, thus ensuring that threads run to completion. This can cause problems if a cooperatively multitasked thread blocks by waiting on a resource or if it starves other threads by not yielding control of execution during intensive computation.
Single- vs multi-processor systems
Until the early 2000s, most desktop computers had only one single-core CPU, with no support for hardware threads, although threads were still used on such computers because switching between threads was generally still quicker than full-process context switches. In 2002, Intel added support for simultaneous multithreading to the Pentium 4 processor, under the name hyper-threading; in 2005, they introduced the dual-core Pentium D processor and AMD introduced the dual-core Athlon 64 X2 processor.
Systems with a single processor generally implement multithreading by time slicing: the central processing unit (CPU) switches between different software threads. This context switching usually occurs frequently enough that users perceive the threads or tasks as running in parallel (for popular server/desktop operating systems, maximum time slice of a thread, when other threads are waiting, is often limited to 100–200ms). On a multiprocessor or multi-core system, multiple threads can execute in parallel, with every processor or core executing a separate thread simultaneously; on a processor or core with hardware threads, separate software threads can also be executed concurrently by separate hardware threads.
Threading models
1:1 (kernel-level threading)
Threads created by the user in a 1:1 correspondence with schedulable entities in the kernel are the simplest possible threading implementation. OS/2 and Win32 used this approach from the start, while on Linux the GNU C Library implements this approach (via the NPTL or older LinuxThreads). This approach is also used by Solaris, NetBSD, FreeBSD, macOS, and iOS.
M:1 (user-level threading)
An M:1 model implies that all application-level threads map to one kernel-level scheduled entity; the kernel has no knowledge of the application threads. With this approach, context switching can be done very quickly and, in addition, it can be implemented even on simple kernels which do not support threading. One of the major drawbacks, however, is that it cannot benefit from the hardware acceleration on multithreaded processors or multi-processor computers: there is never more than one thread being scheduled at the same time. For example: if one of the threads needs to execute an I/O request, the whole process is blocked and the threading advantage cannot be used. GNU Portable Threads uses user-level threading, as does State Threads.
M:N (hybrid threading)
M:N maps some number M of application threads onto some number N of kernel entities, or "virtual processors". This is a compromise between kernel-level ("1:1") and user-level ("M:1") threading. In general, "M:N" threading systems are more complex to implement than either kernel or user threads, because changes to both kernel and user-space code are required. In the M:N implementation, the threading library is responsible for scheduling user threads on the available schedulable entities; this makes context switching of threads very fast, as it avoids system calls. However, this increases complexity and the likelihood of priority inversion, as well as suboptimal scheduling without extensive (and expensive) coordination between the userland scheduler and the kernel scheduler.
Hybrid implementation examples
Scheduler activations used by older versions of the NetBSD native POSIX threads library implementation (an M:N model as opposed to a 1:1 kernel or userspace implementation model)
Light-weight processes used by older versions of the Solaris operating system
Marcel from the PM2 project.
The OS for the Tera-Cray MTA-2
The Glasgow Haskell Compiler (GHC) for the language Haskell uses lightweight threads which are scheduled on operating system threads.
History of threading models in Unix systems
SunOS 4.x implemented light-weight processes or LWPs. NetBSD 2.x+ and DragonFly BSD implement LWPs as kernel threads (1:1 model). SunOS 5.2 through SunOS 5.8, as well as NetBSD 2 to NetBSD 4, implemented a two-level model, multiplexing one or more user-level threads on each kernel thread (M:N model). SunOS 5.9 and later, as well as NetBSD 5, eliminated user-thread support, returning to a 1:1 model. FreeBSD 5 implemented the M:N model. FreeBSD 6 supported both 1:1 and M:N; users could choose which should be used with a given program via /etc/libmap.conf. Starting with FreeBSD 7, 1:1 became the default; FreeBSD 8 no longer supports the M:N model.
Single-threaded vs multithreaded programs
In computer programming, single-threading is the processing of one instruction at a time. In the formal analysis of the variables' semantics and process state, the term single threading can be used differently to mean "backtracking within a single thread", which is common in the functional programming community.
Multithreading is mainly found in multitasking operating systems. Multithreading is a widespread programming and execution model that allows multiple threads to exist within the context of one process. These threads share the process's resources, but are able to execute independently. The threaded programming model provides developers with a useful abstraction of concurrent execution. Multithreading can also be applied to one process to enable parallel execution on a multiprocessing system.
Multithreading libraries tend to provide a function call to create a new thread, which takes a function as a parameter. A concurrent thread is then created which starts running the passed function and ends when the function returns. The thread libraries also offer data synchronization functions.
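Python's standard threading module follows this pattern exactly; the worker function and its argument below are placeholders:

    import threading

    def worker(task_id):
        # The new thread starts running this function and ends when it returns.
        print(f"worker {task_id} running in {threading.current_thread().name}")

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
    for t in threads:
        t.start()  # begin concurrent execution of the passed function
    for t in threads:
        t.join()   # wait for each thread to finish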
Threads and data synchronization
Threads in the same process share the same address space. This allows concurrently running code to couple tightly and conveniently exchange data without the overhead or complexity of an IPC. When shared between threads, however, even simple data structures become prone to race conditions if they require more than one CPU instruction to update: two threads may end up attempting to update the data structure at the same time and find it unexpectedly changing underfoot. Bugs caused by race conditions can be very difficult to reproduce and isolate.
To prevent this, threading application programming interfaces (APIs) offer synchronization primitives such as mutexes to lock data structures against concurrent access. On uniprocessor systems, a thread running into a locked mutex must sleep and hence trigger a context switch. On multi-processor systems, the thread may instead poll the mutex in a spinlock. Both of these may sap performance and force processors in symmetric multiprocessing (SMP) systems to contend for the memory bus, especially if the granularity of the locking is too fine.
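Both halves of this story can be sketched in Python: an unsynchronized read-modify-write on a shared counter that can lose updates, and the same update protected by a mutex (threading.Lock); the thread and iteration counts are arbitrary:

    import threading

    counter = 0
    lock = threading.Lock()

    def unsafe_increment(n):
        global counter
        for _ in range(n):
            counter += 1      # read-modify-write: not atomic, so racy

    def safe_increment(n):
        global counter
        for _ in range(n):
            with lock:        # the mutex admits one updater at a time
                counter += 1

    def run(target):
        global counter
        counter = 0
        threads = [threading.Thread(target=target, args=(100_000,))
                   for _ in range(4)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        return counter

    print("unsynchronized:", run(unsafe_increment))  # may be < 400000
    print("with mutex:", run(safe_increment))        # always 400000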
Other synchronization APIs include condition variables, critical sections, semaphores, and monitors.
Thread pools
A popular programming pattern involving threads is that of thread pools, where a set number of threads are created at startup and then wait for tasks to be assigned. When a new task arrives, a waiting thread wakes up, completes the task, and returns to waiting. This avoids the relatively expensive thread creation and destruction functions for every task performed and takes thread management out of the application developer's hands, leaving it to a library or the operating system that is better suited to optimize thread management.
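Python's concurrent.futures module provides this pattern directly; in the sketch below a fixed pool of four threads is created once and reused for all ten tasks (the pool size and workload are illustrative):

    from concurrent.futures import ThreadPoolExecutor

    def handle(task_id):
        # In a real application this would be the unit of work for one task.
        return f"task {task_id} done"

    # The pool's threads are created once at startup and then reused,
    # avoiding a thread creation/destruction cycle per task.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for result in pool.map(handle, range(10)):
            print(result)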
Multithreaded programs vs single-threaded programs pros and cons
Multithreaded applications have the following advantages vs single-threaded ones:
Responsiveness: multithreading can allow an application to remain responsive to input. In a one-thread program, if the main execution thread blocks on a long-running task, the entire application can appear to freeze. By moving such long-running tasks to a worker thread that runs concurrently with the main execution thread, it is possible for the application to remain responsive to user input while executing tasks in the background. On the other hand, in most cases multithreading is not the only way to keep a program responsive, with non-blocking I/O and/or Unix signals being available for obtaining similar results.
Parallelization: applications looking to use multicore or multi-CPU systems can use multithreading to split data and tasks into parallel subtasks and let the underlying architecture manage how the threads run, either concurrently on one core or in parallel on multiple cores. GPU computing environments like CUDA and OpenCL use the multithreading model where dozens to hundreds of threads run in parallel across data on a large number of cores. This, in turn, enables better system utilization, and (provided that synchronization costs don't eat the benefits up), can provide faster program execution.
Multithreaded applications have the following drawbacks:
Synchronization complexity and related bugs: when using shared resources typical for threaded programs, the programmer must be careful to avoid race conditions and other non-intuitive behaviors. In order for data to be correctly manipulated, threads will often need to rendezvous in time in order to process the data in the correct order. Threads may also require mutually exclusive operations (often implemented using mutexes) to prevent common data from being read or overwritten in one thread while being modified by another. Careless use of such primitives can lead to deadlocks, livelocks or races over resources. As Edward A. Lee has written: "Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism. Threads, as a model of computation, are wildly non-deterministic, and the job of the programmer becomes one of pruning that nondeterminism."
Being untestable. In general, multithreaded programs are non-deterministic, and as a result, are untestable. In other words, a multithreaded program can easily have bugs which never manifest on a test system, manifesting only in production. This can be alleviated by restricting inter-thread communications to certain well-defined patterns (such as message-passing).
Synchronization costs. As a thread context switch on modern CPUs can cost up to 1 million CPU cycles, writing efficient multithreaded programs is difficult. In particular, special attention has to be paid to keeping inter-thread synchronization from being too frequent.
Programming language support
Many programming languages support threading in some capacity.
IBM PL/I(F) included support for multithreading (called multitasking) as early as the late 1960s, and this was continued in the Optimizing Compiler and later versions. The IBM Enterprise PL/I compiler introduced a new model "thread" API. Neither version was part of the PL/I standard.
Many implementations of C and C++ support threading, and provide access to the native threading APIs of the operating system. A standardized interface for thread implementation is POSIX Threads (Pthreads), which is a set of C-function library calls. OS vendors are free to implement the interface as desired, but the application developer should be able to use the same interface across multiple platforms. Most Unix platforms, including Linux, support Pthreads. Microsoft Windows has its own set of thread functions in the process.h interface for multithreading, like beginthread.
Some higher level (and usually cross-platform) programming languages, such as Java, Python, and .NET Framework languages, expose threading to developers while abstracting the platform specific differences in threading implementations in the runtime. Several other programming languages and language extensions also try to abstract the concept of concurrency and threading from the developer fully (Cilk, OpenMP, Message Passing Interface (MPI)). Some languages are designed for sequential parallelism instead (especially using GPUs), without requiring concurrency or threads (Ateji PX, CUDA).
A few interpreted programming languages have implementations (e.g., Ruby MRI for Ruby, CPython for Python) which support threading and concurrency but not parallel execution of threads, due to a global interpreter lock (GIL). The GIL is a mutual exclusion lock held by the interpreter that prevents the interpreter from interpreting the application's code on two or more threads at once. This effectively limits parallelism on multiple-core systems. It also limits performance for processor-bound threads (which require the processor), but doesn't affect I/O-bound or network-bound ones as much. Other implementations of interpreted programming languages, such as Tcl using the Thread extension, avoid the GIL limit by using an apartment model where data and code must be explicitly "shared" between threads. In Tcl, each thread has one or more interpreters.
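The effect can be observed with a small CPython-specific experiment: a CPU-bound loop gains nothing from being split across two threads, because the GIL lets only one thread execute bytecode at a time (exact timings vary by machine; the expected ordering, not the numbers, is the point):

    import threading
    import time

    def count_down(n):
        while n > 0:
            n -= 1  # pure bytecode work, serialized by the GIL

    N = 10_000_000

    start = time.perf_counter()
    count_down(N)
    single = time.perf_counter() - start

    start = time.perf_counter()
    threads = [threading.Thread(target=count_down, args=(N // 2,))
               for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    threaded = time.perf_counter() - start

    # Under the GIL the two-thread run is typically no faster than the
    # single-threaded run, and is often slower due to lock contention.
    print(f"single: {single:.2f}s  two threads: {threaded:.2f}s")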
In programming models such as CUDA designed for data-parallel computation, an array of threads runs the same code in parallel, with each thread using only its ID to find its data in memory. In essence, the application must be designed so that each thread performs the same operation on different segments of memory, so that the threads can operate in parallel and exploit the GPU architecture.
Hardware description languages such as Verilog have a different threading model that supports extremely large numbers of threads (for modeling hardware).
| Technology | Operating systems | null |
45337 | https://en.wikipedia.org/wiki/Nash%20equilibrium | Nash equilibrium | In game theory, the Nash equilibrium is the most commonly-used solution concept for non-cooperative games. A Nash equilibrium is a situation where no player could gain by changing their own strategy (holding all other players' strategies fixed). The idea of Nash equilibrium dates back to the time of Cournot, who in 1838 applied it to his model of competition in an oligopoly.
If each player has chosen a strategy (an action plan based on what has happened so far in the game) and no one can increase their own expected payoff by changing their strategy while the other players keep theirs unchanged, then the current set of strategy choices constitutes a Nash equilibrium.
If two players Alice and Bob choose strategies A and B, (A, B) is a Nash equilibrium if Alice has no other strategy available that does better than A at maximizing her payoff in response to Bob choosing B, and Bob has no other strategy available that does better than B at maximizing his payoff in response to Alice choosing A. In a game in which Carol and Dan are also players, (A, B, C, D) is a Nash equilibrium if A is Alice's best response to (B, C, D), B is Bob's best response to (A, C, D), and so forth.
Nash showed that there is a Nash equilibrium, possibly in mixed strategies, for every finite game.
Applications
Game theorists use Nash equilibrium to analyze the outcome of the strategic interaction of several decision makers. In a strategic interaction, the outcome for each decision-maker depends on the decisions of the others as well as their own. The simple insight underlying Nash's idea is that one cannot predict the choices of multiple decision makers if one analyzes those decisions in isolation. Instead, one must ask what each player would do taking into account what that player expects the others to do. Nash equilibrium requires that the players' choices be consistent: no player wishes to undo their decision given what the others are deciding.
The concept has been used to analyze hostile situations such as wars and arms races (see prisoner's dilemma), and also how conflict may be mitigated by repeated interaction (see tit-for-tat). It has also been used to study to what extent people with different preferences can cooperate (see battle of the sexes), and whether they will take risks to achieve a cooperative outcome (see stag hunt). It has been used to study the adoption of technical standards, and also the occurrence of bank runs and currency crises (see coordination game). Other applications include traffic flow (see Wardrop's principle), how to organize auctions (see auction theory), the outcome of efforts exerted by multiple parties in the education process, regulatory legislation such as environmental regulations (see tragedy of the commons), natural resource management, analysing strategies in marketing, penalty kicks in football (see matching pennies), robot navigation in crowds, energy systems, transportation systems, evacuation problems and wireless communications.
History
Nash equilibrium is named after American mathematician John Forbes Nash Jr. The same idea was used in a particular application in 1838 by Antoine Augustin Cournot in his theory of oligopoly. In Cournot's theory, each of several firms chooses how much output to produce to maximize its profit. The best output for one firm depends on the outputs of the others. A Cournot equilibrium occurs when each firm's output maximizes its profits given the output of the other firms, which is a pure-strategy Nash equilibrium. Cournot also introduced the concept of best response dynamics in his analysis of the stability of equilibrium. Cournot did not use the idea in any other applications, however, or define it generally.
The modern concept of Nash equilibrium is instead defined in terms of mixed strategies, where players choose a probability distribution over possible pure strategies (which might put 100% of the probability on one pure strategy; such pure strategies are a subset of mixed strategies). The concept of a mixed-strategy equilibrium was introduced by John von Neumann and Oskar Morgenstern in their 1944 book The Theory of Games and Economic Behavior, but their analysis was restricted to the special case of zero-sum games. They showed that a mixed-strategy Nash equilibrium will exist for any zero-sum game with a finite set of actions. The contribution of Nash in his 1951 article "Non-Cooperative Games" was to define a mixed-strategy Nash equilibrium for any game with a finite set of actions and prove that at least one (mixed-strategy) Nash equilibrium must exist in such a game. The key to Nash's ability to prove existence far more generally than von Neumann lay in his definition of equilibrium. According to Nash, "an equilibrium point is an n-tuple such that each player's mixed strategy maximizes [their] payoff if the strategies of the others are held fixed. Thus each player's strategy is optimal against those of the others." Putting the problem in this framework allowed Nash to employ the Kakutani fixed-point theorem in his 1950 paper to prove existence of equilibria. His 1951 paper used the simpler Brouwer fixed-point theorem for the same purpose.
Game theorists have discovered that in some circumstances Nash equilibrium makes invalid predictions or fails to make a unique prediction. They have proposed many solution concepts ('refinements' of Nash equilibria) designed to rule out implausible Nash equilibria. One particularly important issue is that some Nash equilibria may be based on threats that are not 'credible'. In 1965 Reinhard Selten proposed subgame perfect equilibrium as a refinement that eliminates equilibria which depend on non-credible threats. Other extensions of the Nash equilibrium concept have addressed what happens if a game is repeated, or what happens if a game is played in the absence of complete information. However, subsequent refinements and extensions of Nash equilibrium share the main insight on which Nash's concept rests: the equilibrium is a set of strategies such that each player's strategy is optimal given the choices of the others.
Definitions
Nash equilibrium
A strategy profile is a set of strategies, one for each player. Informally, a strategy profile is a Nash equilibrium if no player can do better by unilaterally changing their strategy. To see what this means, imagine that each player is told the strategies of the others. Suppose then that each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, can I benefit by changing my strategy?"
For instance if a player prefers "Yes", then that set of strategies is not a Nash equilibrium. But if every player prefers not to switch (or is indifferent between switching and not) then the strategy profile is a Nash equilibrium. Thus, each strategy in a Nash equilibrium is a best response to the other players' strategies in that equilibrium.
Formally, let $S_i$ be the set of all possible strategies for player $i$, where $i = 1, \ldots, N$. Let $s^* = (s_i^*, s_{-i}^*)$ be a strategy profile, a set consisting of one strategy for each player, where $s_{-i}^*$ denotes the $N - 1$ strategies of all the players except $i$. Let $u_i(s_i, s_{-i}^*)$ be player $i$'s payoff as a function of the strategies. The strategy profile $s^*$ is a Nash equilibrium if

$$u_i(s_i^*, s_{-i}^*) \geq u_i(s_i, s_{-i}^*) \quad \text{for all } i \text{ and all } s_i \in S_i.$$
A game can have more than one Nash equilibrium. Even if the equilibrium is unique, it might be weak: a player might be indifferent among several strategies given the other players' choices. The equilibrium is unique and called a strict Nash equilibrium if the inequality is strict for every alternative strategy, so that each equilibrium strategy is the unique best response:

$$u_i(s_i^*, s_{-i}^*) > u_i(s_i, s_{-i}^*) \quad \text{for all } i \text{ and all } s_i \in S_i \text{ with } s_i \neq s_i^*.$$
The strategy set $S_i$ can be different for different players, and its elements can be a variety of mathematical objects. Most simply, a player might choose between two strategies, e.g. $S_i = \{\text{Yes}, \text{No}\}$. Or the strategy set might be a finite set of conditional strategies responding to other players, e.g. $S_i = \{\text{Yes if Player 1 plays Yes; otherwise No}\}$. Or it might be an infinite set, a continuum or unbounded, e.g. $S_i = \{\text{Price}\}$ such that $\text{Price}$ is a non-negative real number. Nash's existence proofs assume a finite strategy set, but the concept of Nash equilibrium does not require it.
Variants
Pure/mixed equilibrium
A game can have a pure-strategy or a mixed-strategy Nash equilibrium. In the latter, not every player always plays the same strategy. Instead, there is a probability distribution over different strategies.
Strict/non-strict equilibrium
Suppose that in the Nash equilibrium, each player asks themselves: "Knowing the strategies of the other players, and treating the strategies of the other players as set in stone, would I suffer a loss by changing my strategy?"
If every player's answer is "Yes", then the equilibrium is classified as a strict Nash equilibrium.
If instead, for some player, there is exact equality between the strategy in Nash equilibrium and some other strategy that gives exactly the same payout (i.e. the player is indifferent between switching and not), then the equilibrium is classified as a weak or non-strict Nash equilibrium.
Equilibria for coalitions
The Nash equilibrium defines stability only in terms of individual player deviations. In cooperative games such a concept is not convincing enough. Strong Nash equilibrium allows for deviations by every conceivable coalition. Formally, a strong Nash equilibrium is a Nash equilibrium in which no coalition, taking the actions of its complements as given, can cooperatively deviate in a way that benefits all of its members. However, the strong Nash concept is sometimes perceived as too "strong" in that the environment allows for unlimited private communication. In fact, strong Nash equilibrium has to be Pareto efficient. As a result of these requirements, strong Nash is too rare to be useful in many branches of game theory. However, in games such as elections with many more players than possible outcomes, it can be more common than a stable equilibrium.
A refined Nash equilibrium known as coalition-proof Nash equilibrium (CPNE) occurs when players cannot do better even if they are allowed to communicate and make "self-enforcing" agreement to deviate. Every correlated strategy supported by iterated strict dominance and on the Pareto frontier is a CPNE. Further, it is possible for a game to have a Nash equilibrium that is resilient against coalitions less than a specified size, k. CPNE is related to the theory of the core.
Existence
Nash's existence theorem
Nash proved that if mixed strategies (where a player chooses probabilities of using various pure strategies) are allowed, then every game with a finite number of players in which each player can choose from finitely many pure strategies has at least one Nash equilibrium, which might be a pure strategy for each player or might be a probability distribution over strategies for each player.
Nash equilibria need not exist if the set of choices is infinite and non-compact. For example:
A game where two players simultaneously name a number and the player naming the larger number wins does not have a Nash equilibrium, as the set of choices is not compact because it is unbounded.
Each of two players chooses a real number strictly less than 5 and the winner is whoever has the biggest number; no biggest number strictly less than 5 exists (if the number could equal 5, the Nash equilibrium would have both players choosing 5 and tying the game). Here, the set of choices is not compact because it is not closed.
However, a Nash equilibrium exists if the set of choices is compact with each player's payoff continuous in the strategies of all the players.
Rosen's existence theorem
Rosen extended Nash's existence theorem in several ways. He considers an n-player game, in which the strategy of each player $i$ is a vector $s_i$ in the Euclidean space $\mathbb{R}^{m_i}$. Denote $m := m_1 + \cdots + m_n$; so a strategy-tuple is a vector in $\mathbb{R}^m$. Part of the definition of a game is a subset $S$ of $\mathbb{R}^m$ such that the strategy-tuple must be in $S$. This means that the actions of players may potentially be constrained based on actions of other players. A common special case of the model is when $S$ is a Cartesian product of convex sets $S_1, \ldots, S_n$, such that the strategy of player $i$ must be in $S_i$. This represents the case that the actions of each player $i$ are constrained independently of other players' actions. If the following conditions hold:
$S$ is convex, closed and bounded;
Each payoff function $u_i$ is continuous in the strategies of all players, and concave in $s_i$ for every fixed value of $s_{-i}$.
Then a Nash equilibrium exists. The proof uses the Kakutani fixed-point theorem. Rosen also proves that, under certain technical conditions which include strict concavity, the equilibrium is unique.
Nash's result refers to the special case in which each $S_i$ is a simplex (representing all possible mixtures of pure strategies), and the payoff functions of all players are bilinear functions of the strategies.
Rationality
The Nash equilibrium may sometimes appear non-rational in a third-person perspective. This is because a Nash equilibrium is not necessarily Pareto optimal.
Nash equilibrium may also have non-rational consequences in sequential games because players may "threaten" each other with threats they would not actually carry out. For such games the subgame perfect Nash equilibrium may be more meaningful as a tool of analysis.
Examples
Coordination game
The coordination game is a classic two-player, two-strategy game, as shown in the example payoff matrix below. There are two pure-strategy equilibria, (A,A) with payoff 4 for each player and (B,B) with payoff 2 for each. The combination (B,B) is a Nash equilibrium because if either player unilaterally changes their strategy from B to A, their payoff will fall from 2 to 1.
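The original matrix is not reproduced here; the following table is one assignment consistent with the payoffs just described (the off-diagonal payoff of 1 for each player is an assumption that matches the deviation payoff named in the text):

                   Player 2: A    Player 2: B
    Player 1: A       4, 4           1, 1
    Player 1: B       1, 1           2, 2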
A famous example of a coordination game is the stag hunt. Two players may choose to hunt a stag or a rabbit, the stag providing more meat (4 utility units, 2 for each player) than the rabbit (1 utility unit). The caveat is that the stag must be cooperatively hunted, so if one player attempts to hunt the stag, while the other hunts the rabbit, the stag hunter will totally fail, for a payoff of 0, whereas the rabbit hunter will succeed, for a payoff of 1. The game has two equilibria, (stag, stag) and (rabbit, rabbit), because a player's optimal strategy depends on their expectation on what the other player will do. If one hunter trusts that the other will hunt the stag, they should hunt the stag; however if they think the other will hunt the rabbit, they too will hunt the rabbit. This game is used as an analogy for social cooperation, since much of the benefit that people gain in society depends upon people cooperating and implicitly trusting one another to act in a manner corresponding with cooperation.
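These payoffs can be laid out in a matrix directly from the description above; both (Stag, Stag) and (Rabbit, Rabbit) are Nash equilibria, since a unilateral deviation from either cell lowers the deviator's payoff:

                        Hunter 2: Stag    Hunter 2: Rabbit
    Hunter 1: Stag          2, 2               0, 1
    Hunter 1: Rabbit        1, 0               1, 1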
Driving on a road against an oncoming car, and having to choose either to swerve on the left or to swerve on the right of the road, is also a coordination game. For example, with payoffs 10 meaning no crash and 0 meaning a crash, the coordination game can be defined with the following payoff matrix:

                          Driver 2: Left    Driver 2: Right
    Driver 1: Left            10, 10             0, 0
    Driver 1: Right            0, 0             10, 10
In this case there are two pure-strategy Nash equilibria, when both choose to either drive on the left or on the right. If we admit mixed strategies (where a pure strategy is chosen at random, subject to some fixed probability), then there are three Nash equilibria for the same case: two we have seen from the pure-strategy form, where the probabilities are (0%, 100%) for player one, (0%, 100%) for player two; and (100%, 0%) for player one, (100%, 0%) for player two respectively. We add another where the probabilities for each player are (50%, 50%).
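The mixed equilibrium can be verified with a short indifference calculation. If the other driver swerves left with probability $p$, a driver's expected payoff is $10p$ for swerving left and $10(1-p)$ for swerving right; randomizing is optimal only when these coincide:

$$10p = 10(1-p) \quad\Longrightarrow\quad p = \tfrac{1}{2}.$$

At this mixed equilibrium each driver's expected payoff is 5, lower than the 10 obtained in either pure-strategy equilibrium.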
Network traffic
An application of Nash equilibria is in determining the expected flow of traffic in a network. Consider the graph on the right. If we assume that there are "cars" traveling from the start junction to the destination junction, what is the expected distribution of traffic in the network?
This situation can be modeled as a "game" where every traveler has a choice of 3 strategies, each strategy being one of the three routes from the start to the destination. The "payoff" of each strategy is the travel time of the corresponding route, which depends on the number of cars traveling on each of its edges. Thus, payoffs for any given strategy depend on the choices of the other players, as is usual. However, the goal, in this case, is to minimize travel time, not maximize it. Equilibrium will occur when the time on all paths is exactly the same. When that happens, no single driver has any incentive to switch routes, since it can only add to their travel time. If, for example, 100 cars are travelling from the start to the destination, then equilibrium will occur when 25 drivers travel via the first route, 50 via the second, and 25 via the third. Every driver now has a total travel time of 3.75 (to see this, note that a total of 75 cars take each of the two congested edges).
Notice that this distribution is not, actually, socially optimal. If the 100 cars agreed to split evenly between the first and third routes, then travel time for any single car would actually be 3.5, which is less than 3.75. This is also the Nash equilibrium if the connecting edge between the two routes is removed, which means that adding another possible route can decrease the efficiency of the system, a phenomenon known as Braess's paradox.
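The figure's edge costs are not reproduced in the text, so the Python sketch below uses hypothetical per-route cost functions; it only illustrates the mechanism: drivers switch, one at a time, to a faster route until travel times equalize, at which point no driver can improve and the flows are a Nash equilibrium.

# Hypothetical travel-time functions of the number of cars n on each route
# (illustrative stand-ins, not the values from the missing figure).
cost = [
    lambda n: 1 + n / 25,   # route 0
    lambda n: 2 + n / 50,   # route 1
    lambda n: 1 + n / 25,   # route 2
]
flows = [100, 0, 0]         # start with every car on route 0

# Best-response dynamics: each step, one driver on the slowest used route
# moves to the route that would be fastest with one more car.
while True:
    times = [cost[r](flows[r]) for r in range(3)]
    worst = max((r for r in range(3) if flows[r] > 0), key=lambda r: times[r])
    best = min(range(3), key=lambda r: cost[r](flows[r] + 1))
    if cost[best](flows[best] + 1) >= times[worst]:
        break               # no driver can lower their own time: equilibrium
    flows[worst] -= 1
    flows[best] += 1

print("equilibrium flows:", flows)
print("route times:", [round(cost[r](flows[r]), 2) for r in range(3)])

Because each move strictly lowers the mover's travel time, the process cannot cycle (congestion games have a potential function), so the loop always terminates.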
Competition game
This can be illustrated by a two-player game in which both players simultaneously choose an integer from 0 to 3 and they both win the smaller of the two numbers in points. In addition, if one player chooses a larger number than the other, then they have to give up two points to the other.
This game has a unique pure-strategy Nash equilibrium: both players choosing 0 (highlighted in light red). Any other strategy can be improved by a player switching their number to one less than that of the other player. In the adjacent table, if the game begins at the green square, it is in player 1's interest to move to the purple square and it is in player 2's interest to move to the blue square. Although it would not fit the definition of a competition game, if the game is modified so that the two players win the named amount if they both choose the same number, and otherwise win nothing, then there are 4 Nash equilibria: (0,0), (1,1), (2,2), and (3,3).
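Because the game is completely specified by the two rules above, its equilibria can be found by brute force. A Python sketch:

# Competition game: each player picks an integer 0..3; both receive the
# smaller number, and whoever chose the larger number pays two points to
# the other.
def payoff(a, b):
    """Return (player 1's points, player 2's points) for choices a and b."""
    base = min(a, b)
    if a > b:
        return base - 2, base + 2
    if b > a:
        return base + 2, base - 2
    return base, base

equilibria = []
for a in range(4):
    for b in range(4):
        u1, u2 = payoff(a, b)
        # (a, b) is a Nash equilibrium if no unilateral deviation helps.
        if all(payoff(x, b)[0] <= u1 for x in range(4)) and \
           all(payoff(a, y)[1] <= u2 for y in range(4)):
            equilibria.append((a, b))

print(equilibria)  # [(0, 0)]: the unique pure-strategy equilibrium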
Nash equilibria in a payoff matrix
There is an easy numerical way to identify Nash equilibria on a payoff matrix. It is especially helpful in two-person games where players have more than two strategies. In this case formal analysis may become too long. This rule does not apply to the case where mixed (stochastic) strategies are of interest. The rule goes as follows: if the first payoff number, in the payoff pair of the cell, is the maximum of the column of the cell and if the second number is the maximum of the row of the cell then the cell represents a Nash equilibrium.
We can apply this rule to a 3×3 matrix:
Using the rule, we can very quickly (much faster than with formal analysis) see that the Nash equilibria cells are (B,A), (A,B), and (C,C). Indeed, for cell (B,A), 40 is the maximum of the first column and 25 is the maximum of the second row. For (A,B), 25 is the maximum of the second column and 40 is the maximum of the first row; the same applies for cell (C,C). For other cells, either one or both of the duplet members are not the maximum of the corresponding rows and columns.
This said, the actual mechanics of finding equilibrium cells is straightforward: find a cell whose first payoff is the maximum of its column, then check whether that cell's second payoff is the maximum of its row. If both conditions are met, the cell represents a Nash equilibrium. Check all columns this way to find all NE cells. An N×N matrix may have between 0 and N×N pure-strategy Nash equilibria.
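A sketch of this rule in Python. The article's 3×3 example is not reproduced in full, so the bimatrix below is an illustrative stand-in, constructed so that, as above, (B,A), (A,B) and (C,C) come out as the equilibrium cells.

# game[i][j] holds the (row player, column player) payoff pair; rows and
# columns are indexed 0..2 for strategies A..C. Hypothetical payoffs.
game = [
    [(0, 0), (25, 40), (5, 10)],
    [(40, 25), (0, 0), (5, 15)],
    [(10, 5), (15, 5), (10, 10)],
]
labels = "ABC"

rows, cols = len(game), len(game[0])
for i in range(rows):
    for j in range(cols):
        u1, u2 = game[i][j]
        col_max = max(game[k][j][0] for k in range(rows))  # best first payoff in column
        row_max = max(game[i][k][1] for k in range(cols))  # best second payoff in row
        if u1 == col_max and u2 == row_max:
            print("Nash equilibrium at (%s,%s) with payoffs %s"
                  % (labels[i], labels[j], (u1, u2)))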
Stability
The concept of stability, useful in the analysis of many kinds of equilibria, can also be applied to Nash equilibria.
A Nash equilibrium for a mixed-strategy game is stable if a small change (specifically, an infinitesimal change) in probabilities for one player leads to a situation where two conditions hold:
the player who did not change has no better strategy in the new circumstance
the player who did change is now playing with a strictly worse strategy.
If both conditions are met, then a player with the small change in their mixed strategy will return immediately to the Nash equilibrium. The equilibrium is said to be stable. If condition one does not hold then the equilibrium is unstable. If only condition one holds then there are likely to be an infinite number of optimal strategies for the player who changed.
In the "driving game" example above there are both stable and unstable equilibria. The equilibria involving mixed strategies with 100% probabilities are stable. If either player changes their probabilities slightly, they will be both at a disadvantage, and their opponent will have no reason to change their strategy in turn. The (50%,50%) equilibrium is unstable. If either player changes their probabilities (which would neither benefit or damage the expectation of the player who did the change, if the other player's mixed strategy is still (50%,50%)), then the other player immediately has a better strategy at either (0%, 100%) or (100%, 0%).
Stability is crucial in practical applications of Nash equilibria, since the mixed strategy of each player is not perfectly known, but has to be inferred from statistical distribution of their actions in the game. In this case unstable equilibria are very unlikely to arise in practice, since any minute change in the proportions of each strategy seen will lead to a change in strategy and the breakdown of the equilibrium.
Finally, in the eighties, building with great depth on such ideas, Mertens-stable equilibria were introduced as a solution concept. Mertens stable equilibria satisfy both forward induction and backward induction. In a game theory context stable equilibria now usually refer to Mertens stable equilibria.
Occurrence
If a game has a unique Nash equilibrium and is played among players under certain conditions, then the NE strategy set will be adopted. Sufficient conditions to guarantee that the Nash equilibrium is played are:
The players all will do their utmost to maximize their expected payoff as described by the game.
The players are flawless in execution.
The players have sufficient intelligence to deduce the solution.
The players know the planned equilibrium strategy of all of the other players.
The players believe that a deviation in their own strategy will not cause deviations by any other players.
There is common knowledge that all players meet these conditions, including this one. So, not only must each player know the other players meet the conditions, but also they must know that they all know that they meet them, and know that they know that they know that they meet them, and so on.
Where the conditions are not met
Examples of game theory problems in which these conditions are not met:
The first condition is not met if the game does not correctly describe the quantities a player wishes to maximize. In this case there is no particular reason for that player to adopt an equilibrium strategy. For instance, the prisoner's dilemma is not a dilemma if either player is happy to be jailed indefinitely.
Intentional or accidental imperfection in execution. For example, a computer capable of flawless logical play facing a second flawless computer will result in equilibrium. Introduction of imperfection will lead to its disruption either through loss to the player who makes the mistake, or through negation of the common knowledge criterion leading to possible victory for the player. (An example would be a player suddenly putting the car into reverse in the game of chicken, ensuring a no-loss no-win scenario).
In many cases, the third condition is not met because, even though the equilibrium must exist, it is unknown due to the complexity of the game, for instance in Chinese chess. Or, if known, it may not be known to all players, as when playing tic-tac-toe with a small child who desperately wants to win (meeting the other criteria).
The criterion of common knowledge may not be met even if all players do, in fact, meet all the other criteria. Players wrongly distrusting each other's rationality may adopt counter-strategies to expected irrational play on their opponents’ behalf. This is a major consideration in "chicken" or an arms race, for example.
Where the conditions are met
In his Ph.D. dissertation, John Nash proposed two interpretations of his equilibrium concept, with the objective of showing how equilibrium points can be connected with observable phenomena.
This idea was formalized by R. Aumann and A. Brandenburger (1995, "Epistemic Conditions for Nash Equilibrium", Econometrica, 63, 1161–1180), who interpreted each player's mixed strategy as a conjecture about the behaviour of the other players and showed that if the game and the rationality of the players are mutually known and these conjectures are commonly known, then the conjectures must form a Nash equilibrium (a common prior assumption is needed for this result in general, but not in the case of two players; in this case, the conjectures need only be mutually known).
A second interpretation, which Nash referred to as the mass-action interpretation, is less demanding on players.
For a formal result along these lines, see Kuhn, H., et al., 1996, "The Work of John Nash in Game Theory", Journal of Economic Theory, 69, 153–185.
Due to the limited conditions in which NE can actually be observed, they are rarely treated as a guide to day-to-day behaviour, or observed in practice in human negotiations. However, as a theoretical concept in economics and evolutionary biology, the NE has explanatory power. The payoff in economics is utility (or sometimes money), and in evolutionary biology it is gene transmission; both are the fundamental bottom line of survival. Researchers who apply game theory in these fields claim that strategies failing to maximize these for whatever reason will be competed out of the market or environment, which are ascribed the ability to test all strategies. This conclusion is drawn from the "stability" theory above. In these situations the assumption that the strategy observed is actually a NE has often been borne out by research.
NE and non-credible threats
The Nash equilibrium is a superset of the subgame perfect Nash equilibrium. The subgame perfect equilibrium, in addition to the Nash equilibrium, requires that the strategy also be a Nash equilibrium in every subgame of that game. This eliminates all non-credible threats, that is, strategies that contain non-rational moves in order to make the counter-player change their strategy.
The image to the right shows a simple sequential game that illustrates the issue with subgame imperfect Nash equilibria. In this game player one chooses left (L) or right (R), which is followed by player two being called upon to be kind (K) or unkind (U) to player one. However, player two only stands to gain from being unkind if player one goes left. If player one goes right, the rational player two would de facto be kind to them in that subgame. However, the non-credible threat of being unkind at 2(2) is still part of the blue (L, (U,U)) Nash equilibrium. Therefore, if rational behavior can be expected by both parties, the subgame perfect Nash equilibrium may be a more meaningful solution concept when such dynamic inconsistencies arise.
Proof of existence
Proof using the Kakutani fixed-point theorem
Nash's original proof (in his thesis) used Brouwer's fixed-point theorem (e.g., see below for a variant). This section presents a simpler proof via the Kakutani fixed-point theorem, following Nash's 1950 paper (he credits David Gale with the observation that such a simplification is possible).
To prove the existence of a Nash equilibrium, let r_i(σ_{-i}) be the best response of player i to the strategies of all other players:

r_i(σ_{-i}) = argmax_{σ_i} u_i(σ_i, σ_{-i}).

Here, σ = (σ_i, σ_{-i}) ∈ Σ, where Σ = Σ_1 × ... × Σ_n, is a mixed-strategy profile in the set of all mixed strategies, and u_i is the payoff function for player i. Define a set-valued function r: Σ → 2^Σ such that r(σ) = r_1(σ_{-1}) × ... × r_n(σ_{-n}). The existence of a Nash equilibrium is equivalent to r having a fixed point.
Kakutani's fixed point theorem guarantees the existence of a fixed point if the following four conditions are satisfied.
1. Σ is compact, convex, and nonempty.
2. r(σ) is nonempty.
3. r(σ) is upper hemicontinuous.
4. r(σ) is convex.
Condition 1 is satisfied from the fact that Σ is a product of simplices and thus compact. Convexity follows from players' ability to mix strategies. Σ is nonempty as long as players have strategies.
Conditions 2 and 3 are satisfied by way of Berge's maximum theorem. Because u_i is continuous and Σ is compact, r(σ) is non-empty and upper hemicontinuous.
Condition 4 is satisfied as a result of mixed strategies. Suppose σ_i, σ_i′ ∈ r_i(σ_{-i}); then λσ_i + (1 − λ)σ_i′ ∈ r_i(σ_{-i}) for every λ ∈ [0, 1]. That is, if two strategies maximize payoffs, then a mix between the two strategies will yield the same payoff.
Therefore, there exists a fixed point in r and a Nash equilibrium.
When Nash made this point to John von Neumann in 1949, von Neumann famously dismissed it with the words, "That's trivial, you know. That's just a fixed-point theorem." (See Nasar, 1998, p. 94.)
Alternate proof using the Brouwer fixed-point theorem
We have a game G = (N, A, u), where N is the number of players and A = A_1 × ... × A_N is the set of action profiles. All of the action sets A_i are finite. Let Δ = Δ_1 × ... × Δ_N denote the set of mixed strategies for the players. The finiteness of the A_i ensures the compactness of Δ.
We can now define the gain functions. For a mixed strategy σ ∈ Δ, we let the gain for player i on action a ∈ A_i be

Gain_i(σ, a) = max(0, u_i(a, σ_{-i}) − u_i(σ)).

The gain function represents the benefit a player gets by unilaterally changing their strategy. We now define g = (g_1, ..., g_N), where

g_i(σ)(a) = σ_i(a) + Gain_i(σ, a)

for σ ∈ Δ and a ∈ A_i. We see that

Σ_{a ∈ A_i} g_i(σ)(a) = 1 + Σ_{a ∈ A_i} Gain_i(σ, a) > 0.

Next we define f = (f_1, ..., f_N): Δ → Δ, where

f_i(σ)(a) = g_i(σ)(a) / Σ_{b ∈ A_i} g_i(σ)(b).

It is easy to see that each f_i(σ) is a valid mixed strategy in Δ_i. It is also easy to check that each f_i(σ)(a) is a continuous function of σ, and hence f is a continuous function. As the cross product of a finite number of compact convex sets, Δ is also compact and convex. Applying the Brouwer fixed point theorem to f and Δ, we conclude that f has a fixed point in Δ, call it σ*. We claim that σ* is a Nash equilibrium in G. For this purpose, it suffices to show that

Gain_i(σ*, a) = 0 for all players i and all actions a ∈ A_i.
This simply states that each player gains no benefit by unilaterally changing their strategy, which is exactly the necessary condition for a Nash equilibrium.
Now assume that the gains are not all zero. Therefore, there exist a player i and an action a ∈ A_i such that Gain_i(σ*, a) > 0. Then

Σ_{b ∈ A_i} Gain_i(σ*, b) > 0.

So let

C = 1 + Σ_{b ∈ A_i} Gain_i(σ*, b) > 1.

Also we shall denote by Gain_i(σ*, ·) the gain vector indexed by actions in A_i. Since σ* is the fixed point we have

σ*_i(a) = f_i(σ*)(a) = (σ*_i(a) + Gain_i(σ*, a)) / C,

hence (C − 1) σ*_i(a) = Gain_i(σ*, a). Since C > 1 we have that σ*_i is some positive scaling of the vector Gain_i(σ*, ·). Now we claim that

σ*_i(b) (u_i(b, σ*_{-i}) − u_i(σ*)) = σ*_i(b) Gain_i(σ*, b) for every b ∈ A_i.

To see this, first if Gain_i(σ*, b) > 0 then this is true by definition of the gain function. Now assume that Gain_i(σ*, b) = 0. By our previous statements we have that

σ*_i(b) = (1 / (C − 1)) Gain_i(σ*, b) = 0,

and so the left term is zero, giving us that the entire expression is 0 as needed.
So we finally have that

0 = u_i(σ*) − u_i(σ*)
  = Σ_{b ∈ A_i} σ*_i(b) u_i(b, σ*_{-i}) − u_i(σ*)
  = Σ_{b ∈ A_i} σ*_i(b) (u_i(b, σ*_{-i}) − u_i(σ*))
  = Σ_{b ∈ A_i} σ*_i(b) Gain_i(σ*, b)            (by the claim above)
  = (1 / (C − 1)) Σ_{b ∈ A_i} Gain_i(σ*, b)^2 > 0,

where the last inequality follows since Gain_i(σ*, ·) is a non-zero vector. But this is a clear contradiction, so all the gains must indeed be zero. Therefore, σ* is a Nash equilibrium for G as needed.
Computing Nash equilibria
If a player A has a dominant strategy s_A, then there exists a Nash equilibrium in which A plays s_A. In the case of two players A and B, there exists a Nash equilibrium in which A plays s_A and B plays a best response to s_A. If s_A is a strictly dominant strategy, A plays s_A in all Nash equilibria. If both A and B have strictly dominant strategies, there exists a unique Nash equilibrium in which each plays their strictly dominant strategy.
In games with mixed-strategy Nash equilibria, the probability of a player choosing any particular (so pure) strategy can be computed by assigning a variable to each strategy that represents a fixed probability for choosing that strategy. In order for a player to be willing to randomize, their expected payoff for each (pure) strategy should be the same. In addition, the sum of the probabilities for each strategy of a particular player should be 1. This creates a system of equations from which the probabilities of choosing each strategy can be derived.
Examples
In the matching pennies game, player A loses a point to B if A and B play the same strategy and wins a point from B if they play different strategies. To compute the mixed-strategy Nash equilibrium, assign A the probability p of playing H and (1 − p) of playing T, and assign B the probability q of playing H and (1 − q) of playing T. For A to be willing to randomize, q must make A's two expected payoffs equal:

E[A plays H] = (−1)q + (+1)(1 − q) = 1 − 2q
E[A plays T] = (+1)q + (−1)(1 − q) = 2q − 1,

so 1 − 2q = 2q − 1 and q = 1/2. The symmetric argument applied to B gives p = 1/2.
Thus, a mixed-strategy Nash equilibrium in this game is for each player to randomly choose H or T with p = 1/2 and q = 1/2.
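The indifference computation can be checked numerically; a Python sketch with the payoffs just described:

# Matching pennies: A gets -1 when the two strategies match, +1 when they
# differ; q is B's probability of playing H.
def payoff_A(move_A, q):
    """A's expected payoff against B playing H with probability q."""
    if move_A == "H":
        return -1 * q + 1 * (1 - q)   # matching on H loses, mismatch wins
    return 1 * q + -1 * (1 - q)       # mismatch wins, matching on T loses

q = 0.5                                # the indifference solution
print(payoff_A("H", q), payoff_A("T", q))  # both 0.0: A is indifferent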
Oddness of equilibrium points
In 1971, Robert Wilson came up with the "oddness theorem", which says that "almost all" finite games have a finite and odd number of Nash equilibria. In 1993, Harsanyi published an alternative proof of the result. "Almost all" here means that any game with an infinite or even number of equilibria is very special in the sense that if its payoffs were even slightly randomly perturbed, with probability one it would have an odd number of equilibria instead.
The prisoner's dilemma, for example, has one equilibrium, while the battle of the sexes has three—two pure and one mixed, and this remains true even if the payoffs change slightly. The free money game is an example of a "special" game with an even number of equilibria. In it, two players have to both vote "yes" rather than "no" to get a reward and the votes are simultaneous. There are two pure-strategy Nash equilibria, (yes, yes) and (no, no), and no mixed strategy equilibria, because the strategy "yes" weakly dominates "no". "Yes" is as good as "no" regardless of the other player's action, but if there is any chance the other player chooses "yes" then "yes" is the best reply. Under a small random perturbation of the payoffs, however, the probability that any two payoffs would remain tied, whether at 0 or some other number, is vanishingly small, and the game would have either one or three equilibria instead.
Control flow
In computer science, control flow (or flow of control) is the order in which individual statements, instructions or function calls of an imperative program are executed or evaluated. The emphasis on explicit control flow distinguishes an imperative programming language from a declarative programming language.
Within an imperative programming language, a control flow statement is a statement that results in a choice being made as to which of two or more paths to follow. For non-strict functional languages, functions and language constructs exist to achieve the same result, but they are usually not termed control flow statements.
A set of statements is in turn generally structured as a block, which in addition to grouping, also defines a lexical scope.
Interrupts and signals are low-level mechanisms that can alter the flow of control in a way similar to a subroutine, but usually occur as a response to some external stimulus or event (that can occur asynchronously), rather than execution of an in-line control flow statement.
At the level of machine language or assembly language, control flow instructions usually work by altering the program counter. For some central processing units (CPUs), the only control flow instructions available are conditional or unconditional branch instructions, also termed jumps.
Categories
The kinds of control flow statements supported by different languages vary, but can be categorized by their effect:
Continuation at a different statement (unconditional branch or jump)
Executing a set of statements only if some condition is met (choice - i.e., conditional branch)
Executing a set of statements zero or more times, until some condition is met (i.e., loop - the same as conditional branch)
Executing a set of distant statements, after which the flow of control usually returns (subroutines, coroutines, and continuations)
Stopping the program, preventing any further execution (unconditional halt)
Primitives
Labels
A label is an explicit name or number assigned to a fixed position within the source code, and which may be referenced by control flow statements appearing elsewhere in the source code. A label marks a position within source code and has no other effect.
Line numbers are an alternative to a named label used in some languages (such as BASIC). They are whole numbers placed at the start of each line of text in the source code. Languages which use these often impose the constraint that the line numbers must increase in value in each following line, but may not require that they be consecutive. For example, in BASIC:
10 LET X = 3
20 PRINT X
In other languages such as C and Ada, a label is an identifier, usually appearing at the start of a line and immediately followed by a colon. For example, in C:
Success: printf("The operation was successful.\n");
The language ALGOL 60 allowed both whole numbers and identifiers as labels (both linked by colons to the following statement), but few if any other ALGOL variants allowed whole numbers. Early Fortran compilers only allowed whole numbers as labels. Beginning with Fortran 90, alphanumeric labels have also been allowed.
Goto
The goto statement (a combination of the English words go and to, and pronounced accordingly) is the most basic form of unconditional transfer of control.
Although the keyword may either be in upper or lower case depending on the language, it is usually written as:
goto label
The effect of a goto statement is to cause the next statement to be executed to be the statement appearing at (or immediately after) the indicated label.
Goto statements have been considered harmful by many computer scientists, notably Dijkstra.
Subroutines
The terminology for subroutines varies; they may alternatively be known as routines, procedures, functions (especially if they return results) or methods (especially if they belong to classes or type classes).
In the 1950s, computer memories were very small by current standards so subroutines were used mainly to reduce program size. A piece of code was written once and then used many times from various other places in a program.
Today, subroutines are more often used to help make a program more structured, e.g., by isolating some algorithm or hiding some data access method. If many programmers are working on one program, subroutines are one kind of modularity that can help divide the work.
Sequence
In structured programming, the ordered sequencing of successive commands is considered one of the basic control structures, which is used as a building block for programs alongside iteration, recursion and choice.
Minimal structured control flow
In May 1966, Böhm and Jacopini published an article in Communications of the ACM which showed that any program with gotos could be transformed into a goto-free form involving only choice (IF THEN ELSE) and loops (WHILE condition DO xxx), possibly with duplicated code and/or the addition of Boolean variables (true/false flags). Later authors showed that choice can be replaced by loops (and yet more Boolean variables).
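As an illustration of the flavor of this construction, here is a Python sketch (the labels and steps are hypothetical): a flow written with gotos between labeled statements can be recast as a single loop plus a choice over a "program counter" variable.

# One WHILE loop and one chain of choices replace arbitrary gotos; the
# variable `state` plays the role of the Boolean flags / program counter.
state = "start"
total = 0
while state != "done":                 # the single loop
    if state == "start":               # choice replaces the gotos
        total, state = 0, "accumulate"
    elif state == "accumulate":
        total += 1
        state = "accumulate" if total < 5 else "report"
    elif state == "report":
        print("total =", total)
        state = "done"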
That such minimalism is possible does not mean that it is necessarily desirable; computers theoretically need only one machine instruction (subtract one number from another and branch if the result is negative), but practical computers have dozens or even hundreds of machine instructions.
Other research showed that control structures with one entry and one exit were much easier to understand than any other form, mainly because they could be used anywhere as a statement without disrupting the control flow. In other words, they were composable. (Later developments, such as non-strict programming languages – and more recently, composable software transactions – have continued this strategy, making components of programs even more freely composable.)
Some academics took a purist approach to the Böhm–Jacopini result and argued that even instructions like break and return from the middle of loops are bad practice as they are not needed in the Böhm–Jacopini proof, and thus they advocated that all loops should have a single exit point. This purist approach is embodied in the language Pascal (designed in 1968–1969), which up to the mid-1990s was the preferred tool for teaching introductory programming in academia. The direct application of the Böhm–Jacopini theorem may result in additional local variables being introduced in the structured chart, and may also result in some code duplication. Pascal is affected by both of these problems and according to empirical studies cited by Eric S. Roberts, student programmers had difficulty formulating correct solutions in Pascal for several simple problems, including writing a function for searching an element in an array. A 1980 study by Henry Shapiro cited by Roberts found that using only the Pascal-provided control structures, the correct solution was given by only 20% of the subjects, while no subject wrote incorrect code for this problem if allowed to write a return from the middle of a loop.
Control structures in practice
Most programming languages with control structures have an initial keyword which indicates the type of control structure involved. Languages then divide as to whether or not control structures have a final keyword.
No final keyword: ALGOL 60, C, C++, Go, Haskell, Java, Pascal, Perl, PHP, PL/I, Python, PowerShell. Such languages need some way of grouping statements together:
ALGOL 60 and Pascal: begin ... end
C, C++, Go, Java, Perl, PHP, and PowerShell: curly brackets { ... }
PL/I: DO ... END
Python: uses indent level (see Off-side rule)
Haskell: either indent level or curly brackets can be used, and they can be freely mixed
Lua: uses do ... end
Final keyword: Ada, APL, ALGOL 68, Modula-2, Fortran 77, Mythryl, Visual Basic. The forms of the final keyword vary:
Ada: final keyword is end + space + initial keyword e.g., if ... end if, loop ... end loop
APL: final keyword is :End optionally + initial keyword, e.g., :If ... :End or :If ... :EndIf, :Select ... :End or :Select ... :EndSelect; however, if adding an end condition, the end keyword becomes :Until
ALGOL 68, Mythryl: initial keyword spelled backwards e.g., if ... fi, case ... esac
Fortran 77: final keyword is END + initial keyword e.g., IF ... ENDIF, DO ... ENDDO
Modula-2: same final keyword END for everything
Visual Basic: every control structure has its own keyword. If ... End If; For ... Next; Do ... Loop; While ... Wend
Choice
If-then-(else) statements
Conditional expressions and conditional constructs are features of a programming language that perform different computations or actions depending on whether a programmer-specified Boolean condition evaluates to true or false.
IF..GOTO. A form found in unstructured languages, mimicking a typical machine code instruction, would jump to (GOTO) a label or line number when the condition was met.
IF..THEN..(ENDIF). Rather than being restricted to a jump, any simple statement, or nested block, could follow the THEN keyword. This is a structured form.
IF..THEN..ELSE..(ENDIF). As above, but with a second action to be performed if the condition is false. This is one of the most common forms, with many variations. Some require a terminal ENDIF, others do not. C and related languages do not require a terminal keyword, or a 'then', but do require parentheses around the condition.
Conditional statements can be and often are nested inside other conditional statements. Some languages allow ELSE and IF to be combined into ELSEIF, avoiding the need to have a series of ENDIF or other final statements at the end of a compound statement.
Less common variations include:
Some languages, such as early Fortran, have a three-way or arithmetic if, testing whether a numeric value is negative, zero, or positive.
Some languages have a functional form of an if statement, for instance Lisp's cond.
Some languages have an operator form of an if statement, such as C's ternary operator.
Perl supplements a C-style if with when and unless.
Smalltalk uses ifTrue and ifFalse messages to implement conditionals, rather than any fundamental language construct.
Case and switch statements
Switch statements (or case statements, or multiway branches) compare a given value with specified constants and take action according to the first constant to match. There is usually a provision for a default action ("else", "otherwise") to be taken if no match succeeds. Switch statements can allow compiler optimizations, such as lookup tables. In dynamic languages, the cases may not be limited to constant expressions, and might extend to pattern matching, as in the shell script example on the right, where the *) implements the default case as a glob matching any string. Case logic can also be implemented in functional form, as in SQL's decode statement.
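The shell-script example referred to above is not reproduced here; as an illustration, Python's match statement (available from Python 3.10) has the same multiway shape, with case _ playing the role of the shell's *) default:

# A multiway branch; `case _:` is the default taken when nothing matches.
def describe(value):
    match value:
        case 0:
            return "zero"
        case 1 | 2:
            return "one or two"
        case _:
            return "something else"    # default case

print(describe(0), describe(2), describe(9))  # zero one or two something else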
Loops
A loop is a sequence of statements which is specified once but which may be carried out several times in succession. The code "inside" the loop (the body of the loop, shown below as xxx) is obeyed a specified number of times, or once for each of a collection of items, or until some condition is met, or indefinitely. When one of those items is itself also a loop, it is called a "nested loop".
In functional programming languages, such as Haskell and Scheme, both recursive and iterative processes are expressed with tail recursive procedures instead of looping constructs that are syntactic.
Count-controlled loops
Most programming languages have constructions for repeating a loop a certain number of times.
In most cases counting can go downwards instead of upwards and step sizes other than 1 can be used.
In these examples, if N < 1 then the body of the loop may execute once (with I having value 1) or not at all, depending on the programming language.
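The examples referred to above are not reproduced here; a close Python analogue follows (Python is one of the languages in which the body is skipped, rather than run once, when N < 1):

n = 0
for i in range(1, n + 1):   # counts i = 1, 2, ..., n
    print(i)                # body never runs here, since n < 1
for i in range(5, 0, -1):   # counting can also go downwards
    print(i)                # 5 4 3 2 1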
In many programming languages, only integers can be reliably used in a count-controlled loop. Floating-point numbers are represented imprecisely due to hardware constraints, so a loop such as
for X := 0.1 step 0.1 to 1.0 do
might be repeated 9 or 10 times, depending on rounding errors and/or the hardware and/or the compiler version. Furthermore, if the increment of X occurs by repeated addition, accumulated rounding errors may mean that the value of X in each iteration can differ quite significantly from the expected sequence 0.1, 0.2, 0.3, ..., 1.0.
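The drift is easy to demonstrate. A close Python analogue of the loop above (using an exclusive upper bound) runs eleven times rather than the intended ten:

x, steps = 0.0, 0
while x < 1.0:        # intended to run ten times
    x += 0.1
    steps += 1
print(steps, x)       # 11 1.0999999999999999: ten additions of 0.1 give
                      # 0.9999999999999999, which is still less than 1.0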
Condition-controlled loops
Most programming languages have constructions for repeating a loop until some condition changes. Some variations test the condition at the start of the loop; others test it at the end. If the test is at the start, the body may be skipped completely; if it is at the end, the body is always executed at least once.
A control break is a value change detection method used within ordinary loops to trigger processing for groups of values. Values are monitored within the loop and a change diverts program flow to the handling of the group event associated with them.
DO UNTIL (End-of-File)
    IF new-zipcode <> current-zipcode
        display_tally(current-zipcode, zipcount)
        current-zipcode = new-zipcode
        zipcount = 0
    ENDIF
    zipcount++
LOOP
Collection-controlled loops
Several programming languages (e.g., Ada, D, C++11, Smalltalk, PHP, Perl, Object Pascal, Java, C#, MATLAB, Visual Basic, Ruby, Python, JavaScript, Fortran 95 and later) have special constructs which allow implicit looping through all elements of an array, or all members of a set or collection.
Smalltalk: someCollection do: [:eachElement |xxx].
Object Pascal: for Item in Collection do begin xxx end;
D: foreach (item; myCollection) { xxx }
Perl: foreach someArray { xxx }
PHP: foreach ($someArray as $k => $v) { xxx }
Java: Collection<String> coll; for (String s : coll) {}
C#: foreach (string s in myStringCollection) { xxx }
PowerShell: someCollection | ForEach-Object { $_ }
Fortran: forall ( index = first:last:step... )
Scala has for-expressions, which generalise collection-controlled loops, and also support other uses, such as asynchronous programming. Haskell has do-expressions and comprehensions, which together provide similar function to for-expressions in Scala.
General iteration
General iteration constructs such as C's for statement and Common Lisp's do form can be used to express any of the above sorts of loops, and others, such as looping over some number of collections in parallel. Where a more specific looping construct can be used, it is usually preferred over the general iteration construct, since it often makes the purpose of the expression clearer.
Infinite loops
Infinite loops are used to assure a program segment loops forever or until an exceptional condition arises, such as an error. For instance, an event-driven program (such as a server) should loop forever, handling events as they occur, only stopping when the process is terminated by an operator.
Infinite loops can be implemented using other control flow constructs. Most commonly, in unstructured programming this is jump back up (goto), while in structured programming this is an indefinite loop (while loop) set to never end, either by omitting the condition or explicitly setting it to true, as while (true) .... Some languages have special constructs for infinite loops, typically by omitting the condition from an indefinite loop. Examples include Ada (loop ... end loop), Fortran (DO ... END DO), Go (for { ... }), and Ruby (loop do ... end).
Often, an infinite loop is unintentionally created by a programming error in a condition-controlled loop, wherein the loop condition uses variables that never change within the loop.
Continuation with next iteration
Sometimes within the body of a loop there is a desire to skip the remainder of the loop body and continue with the next iteration of the loop. Some languages provide a statement such as continue (most languages), skip, cycle (Fortran), or next (Perl and Ruby), which will do this. The effect is to prematurely terminate the innermost loop body and then resume as normal with the next iteration. If the iteration is the last one in the loop, the effect is to terminate the entire loop early.
Redo current iteration
Some languages, like Perl and Ruby, have a redo statement that restarts the current iteration from the start.
Restart loop
Ruby has a retry statement that restarts the entire loop from the initial iteration.
Early exit from loops
When using a count-controlled loop to search through a table, it might be desirable to stop searching as soon as the required item is found. Some programming languages provide a statement such as break (most languages), Exit (Visual Basic), or last (Perl), whose effect is to terminate the current loop immediately and transfer control to the statement immediately after that loop. Another term for early-exit loops is loop-and-a-half.
The following example is done in Ada which supports both early exit from loops and loops with test in the middle. Both features are very similar and comparing both code snippets will show the difference: early exit must be combined with an if statement while a condition in the middle is a self-contained construct.
with Ada.Text_IO;
with Ada.Integer_Text_IO;
procedure Print_Squares is
    X : Integer;
begin
    Read_Data : loop
        Ada.Integer_Text_IO.Get(X);
        exit Read_Data when X = 0;
        Ada.Text_IO.Put (X * X);
        Ada.Text_IO.New_Line;
    end loop Read_Data;
end Print_Squares;
Python supports conditional execution of code depending on whether a loop was exited early (with a break statement) or not by using an else-clause with the loop. For example,
for n in set_of_numbers:
    if isprime(n):
        print("Set contains a prime number")
        break
else:
    print("Set did not contain any prime numbers")
The else clause in the above example is linked to the for statement, and not the inner if statement. Both Python's for and while loops support such an else clause, which is executed only if early exit of the loop has not occurred.
Some languages support breaking out of nested loops; in theory circles, these are called multi-level breaks. One common use example is searching a multi-dimensional table. This can be done either via multilevel breaks (break out of N levels), as in bash and PHP, or via labeled breaks (break out and continue at given label), as in Go, Java and Perl. Alternatives to multilevel breaks include single breaks, together with a state variable which is tested to break out another level; exceptions, which are caught at the level being broken out to; placing the nested loops in a function and using return to effect termination of the entire nested loop; or using a label and a goto statement. C does not include a multilevel break, and the usual alternative is to use a goto to implement a labeled break. Python does not have a multilevel break or continue – this was proposed in PEP 3136, and rejected on the basis that the added complexity was not worth the rare legitimate use.
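Python, for example, has no multilevel break; two of the workarounds listed above, sketched on a hypothetical two-dimensional search:

table = [[3, 7], [9, 4]]    # hypothetical table
target = 4

# 1. Wrap the nested loops in a function and use return.
def find(table, target):
    for i, row in enumerate(table):
        for j, cell in enumerate(row):
            if cell == target:
                return (i, j)      # terminates both loops at once
    return None

print(find(table, target))         # (1, 1)

# 2. Single-level breaks plus a state variable.
found = False
for row in table:
    for cell in row:
        if cell == target:
            found = True
            break                  # leaves only the inner loop
    if found:
        break                      # the flag breaks the outer level too
print("found" if found else "missing")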
The notion of multi-level breaks is of some interest in theoretical computer science, because it gives rise to what is today called the Kosaraju hierarchy. In 1973 S. Rao Kosaraju refined the structured program theorem by proving that it is possible to avoid adding additional variables in structured programming, as long as arbitrary-depth, multi-level breaks from loops are allowed. Furthermore, Kosaraju proved that a strict hierarchy of programs exists: for every integer n, there exists a program containing a multi-level break of depth n that cannot be rewritten as a program with multi-level breaks of depth less than n without introducing added variables.
One can also return out of a subroutine executing the looped statements, breaking out of both the nested loop and the subroutine. There are other proposed control structures for multiple breaks, but these are generally implemented as exceptions instead.
In his 2004 textbook, David Watt uses Tennent's notion of sequencer to explain the similarity between multi-level breaks and return statements. Watt notes that a class of sequencers known as escape sequencers, defined as "sequencer that terminates execution of a textually enclosing command or procedure", encompasses both breaks from loops (including multi-level breaks) and return statements. As commonly implemented, however, return sequencers may also carry a (return) value, whereas the break sequencer as implemented in contemporary languages usually cannot.
Loop variants and invariants
Loop variants and loop invariants are used to express correctness of loops.
In practical terms, a loop variant is an integer expression which has an initial non-negative value. The variant's value must decrease during each loop iteration but must never become negative during the correct execution of the loop. Loop variants are used to guarantee that loops will terminate.
A loop invariant is an assertion which must be true before the first loop iteration and remain true after each iteration. This implies that when a loop terminates correctly, both the exit condition and the loop invariant are satisfied. Loop invariants are used to monitor specific properties of a loop during successive iterations.
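Both notions can be written as executable assertions. A Python sketch, computing the quotient and remainder of 23 by 5 through repeated subtraction (the numbers are arbitrary):

a, b = 23, 5
q, r = 0, a
while r >= b:
    assert a == q * b + r and r >= 0   # invariant: holds before each iteration
    old_r = r
    q, r = q + 1, r - b
    assert 0 <= r < old_r              # variant: r decreases, never negative
assert a == q * b + r and 0 <= r < b   # invariant plus exit condition
print(q, r)                            # 4 3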
Some programming languages, such as Eiffel, contain native support for loop variants and invariants. In other cases, support is an add-on, such as the Java Modeling Language's specification for loop statements in Java.
Loop sublanguage
Some Lisp dialects provide an extensive sublanguage for describing loops. An early example can be found in Conversational Lisp of Interlisp. Common Lisp provides a Loop macro which implements such a sublanguage.
Loop system cross-reference table
while (true) does not count as an infinite loop for this purpose, because it is not a dedicated language structure.
C's for (init; test; increment) loop is a general loop construct, not specifically a counting one, although it is often used for that.
Deep breaks may be accomplished in APL, C, C++ and C# through the use of labels and gotos.
Iteration over objects was added in PHP 5.
A counting loop can be simulated by iterating over an incrementing list or generator, for instance, Python's range().
Deep breaks may be accomplished through the use of exception handling.
There is no special construct, since the while function can be used for this.
There is no special construct, but users can define general loop functions.
The C++11 standard introduced the range-based for. In the STL, there is a std::for_each template function which can iterate on STL containers and call a unary function for each element. The functionality also can be constructed as macro on these containers.
Count-controlled looping is effected by iteration across an integer interval; early exit by including an additional condition for exit.
Eiffel supports a reserved word retry, however it is used in exception handling, not loop control.
Requires Java Modeling Language (JML) behavioral interface specification language.
Requires loop variants to be integers; transfinite variants are not supported.
D supports infinite collections, and the ability to iterate over those collections. This does not require any special construct.
Deep breaks can be achieved using GO TO and procedures.
Common Lisp predates the concept of generic collection type.
Structured non-local control flow
Many programming languages, especially those favoring more dynamic styles of programming, offer constructs for non-local control flow. These cause the flow of execution to jump out of a given context and resume at some predeclared point. Conditions, exceptions and continuations are three common sorts of non-local control constructs; more exotic ones also exist, such as generators, coroutines and the async keyword.
Conditions
The earliest Fortran compilers had statements for testing exceptional conditions. These included the IF ACCUMULATOR OVERFLOW, IF QUOTIENT OVERFLOW, and IF DIVIDE CHECK statements. In the interest of machine independence, they were not included in FORTRAN IV and the Fortran 66 Standard. However since Fortran 2003 it is possible to test for numerical issues via calls to functions in the IEEE_EXCEPTIONS module.
PL/I has some 22 standard conditions (e.g., ZERODIVIDE, SUBSCRIPTRANGE, ENDFILE) which can be raised and which can be intercepted by: ON condition action; Programmers can also define and use their own named conditions.
Like the unstructured if, only one statement can be specified so in many cases a GOTO is needed to decide where flow of control should resume.
Unfortunately, some implementations had a substantial overhead in both space and time (especially SUBSCRIPTRANGE), so many programmers tried to avoid using conditions.
Common Syntax examples:
ON condition GOTO label
Exceptions
Modern languages have a specialized structured construct for exception handling which does not rely on the use of GOTO or (multi-level) breaks or returns. For example, in C++ one can write:
try {
    xxx1 // Somewhere in here
    xxx2 // use: throw someValue;
    xxx3
} catch (someClass& someId) { // catch value of someClass
    actionForSomeClass
} catch (someType& anotherId) { // catch value of someType
    actionForSomeType
} catch (...) { // catch anything not already caught
    actionForAnythingElse
}
Any number and variety of catch clauses can be used above. If there is no catch matching a particular throw, control percolates back through subroutine calls and/or nested blocks until a matching catch is found or until the end of the main program is reached, at which point the program is forcibly stopped with a suitable error message.
Via C++'s influence, catch is the keyword reserved for declaring a pattern-matching exception handler in other languages popular today, like Java or C#. Some other languages like Ada use the keyword exception to introduce an exception handler and then may even employ a different keyword (when in Ada) for the pattern matching. A few languages like AppleScript incorporate placeholders in the exception handler syntax to automatically extract several pieces of information when the exception occurs. This approach is exemplified below by the on error construct from AppleScript:
try
    set myNumber to myNumber / 0
on error e number n from f to t partial result pr
    if ( e = "Can't divide by zero" ) then display dialog "You must not do that"
end try
David Watt's 2004 textbook also analyzes exception handling in the framework of sequencers (introduced in this article in the section on early exits from loops). Watt notes that an abnormal situation, generally exemplified with arithmetic overflows or input/output failures like file not found, is a kind of error that "is detected in some low-level program unit, but [for which] a handler is more naturally located in a high-level program unit". For example, a program might contain several calls to read files, but the action to perform when a file is not found depends on the meaning (purpose) of the file in question to the program and thus a handling routine for this abnormal situation cannot be located in low-level system code. Watt further notes that introducing status-flag testing in the caller, as single-exit structured programming or even (multi-exit) return sequencers would entail, results in a situation where "the application code tends to get cluttered by tests of status flags" and that "the programmer might forgetfully or lazily omit to test a status flag. In fact, abnormal situations represented by status flags are by default ignored!" Watt notes that in contrast to status-flag testing, exceptions have the opposite default behavior, causing the program to terminate unless the program deals with the exception explicitly in some way, possibly by adding explicit code to ignore it. Based on these arguments, Watt concludes that jump sequencers or escape sequencers are less suitable than a dedicated exception sequencer with the semantics discussed above.
In Object Pascal, D, Java, C#, and Python a finally clause can be added to the try construct. No matter how control leaves the try the code inside the finally clause is guaranteed to execute. This is useful when writing code that must relinquish an expensive resource (such as an opened file or a database connection) when finished processing:
FileStream stm = null; // C# example
try
{
    stm = new FileStream("logfile.txt", FileMode.Create);
    return ProcessStuff(stm); // may throw an exception
}
finally
{
    if (stm != null)
        stm.Close();
}
Since this pattern is fairly common, C# has a special syntax:
using (var stm = new FileStream("logfile.txt", FileMode.Create))
{
    return ProcessStuff(stm); // may throw an exception
}
Upon leaving the using-block, the compiler guarantees that the stm object is released, effectively binding the variable to the file stream while abstracting from the side effects of initializing and releasing the file. Python's with statement and Ruby's block argument to File.open are used to similar effect.
All the languages mentioned above define standard exceptions and the circumstances under which they are thrown. Users can throw exceptions of their own; C++ allows users to throw and catch almost any type, including basic types like int, whereas other languages like Java are less permissive.
Continuations
Async
C# 5.0 introduced the async keyword for supporting asynchronous I/O in a "direct style".
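The C# code itself is not shown here; as an illustration of the same "direct style" in Python, asyncio lets asynchronous waits be written sequentially rather than as callbacks (the task names and delays below are arbitrary):

import asyncio

async def fetch(name, delay):
    await asyncio.sleep(delay)    # suspends without blocking other tasks
    return name + " done"

async def main():
    # Both "requests" run concurrently; the code still reads top to bottom.
    results = await asyncio.gather(fetch("a", 0.2), fetch("b", 0.1))
    print(results)                # ['a done', 'b done']

asyncio.run(main())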
Generators
Generators, also known as semicoroutines, allow control to be yielded to a consumer method temporarily, typically via a yield keyword or statement. Like the async keyword, this supports programming in a "direct style".
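A Python generator as a concrete example: yield suspends the function and hands a value back to the consumer, which resumes the function on its next request.

def countdown(n):
    while n > 0:
        yield n          # control returns to the caller here
        n -= 1

for value in countdown(3):
    print(value)         # 3 2 1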
Coroutines
Coroutines are functions that can yield control to each other - a form of co-operative multitasking without threads.
Coroutines can be implemented as a library if the programming language provides either continuations or generators - so the distinction between coroutines and generators in practice is a technical detail.
Non-local control flow cross reference
Proposed control structures
In a spoof Datamation article in 1973, R. Lawrence Clark suggested that the GOTO statement could be replaced by the COMEFROM statement, and provides some entertaining examples. COMEFROM was implemented in one esoteric programming language named INTERCAL.
Donald Knuth's 1974 article "Structured Programming with go to Statements" identifies two situations which were not covered by the control structures listed above, and gives examples of control structures which could handle these situations. Despite their utility, these constructs have not yet found their way into mainstream programming languages.
Loop with test in the middle
The following was proposed by Dahl in 1972:
loop
    xxx1
while test;
    xxx2
repeat;

and, as a concrete example, copying a character stream:

loop
    read(char);
while not atEndOfFile;
    write(char);
repeat;
If xxx1 is omitted, we get a loop with the test at the top (a traditional while loop). If xxx2 is omitted, we get a loop with the test at the bottom, equivalent to a do while loop in many languages. If while is omitted, we get an infinite loop. The construction here can be thought of as a do loop with the while check in the middle. Hence this single construction can replace several constructions in most programming languages.
Languages lacking this construct generally emulate it using an equivalent infinite-loop-with-break idiom:
while (true) {
    xxx1
    if (not test)
        break
    xxx2
}
A possible variant is to allow more than one while test; within the loop, but the use of exitwhen (see next section) appears to cover this case better.
In Ada, the above loop construct (loop-while-repeat) can be represented using a standard infinite loop (loop - end loop) that has an exit when clause in the middle (not to be confused with the exitwhen statement in the following section).
with Ada.Text_IO;
with Ada.Integer_Text_IO;
procedure Print_Squares is
    X : Integer;
begin
    Read_Data : loop
        Ada.Integer_Text_IO.Get(X);
        exit Read_Data when X = 0;
        Ada.Text_IO.Put (X * X);
        Ada.Text_IO.New_Line;
    end loop Read_Data;
end Print_Squares;
Naming a loop (like Read_Data in this example) is optional but permits leaving the outer loop of several nested loops.
Multiple early exit/exit from nested loops
This construct was proposed by Zahn in 1974. A modified version is presented here.
exitwhen EventA or EventB or EventC;
    xxx
exits
    EventA: actionA
    EventB: actionB
    EventC: actionC
endexit;
exitwhen is used to specify the events which may occur within xxx; their occurrence is indicated by using the name of the event as a statement. When some event does occur, the relevant action is carried out, and then control passes just after endexit. This construction provides a very clear separation between determining that some situation applies, and the action to be taken for that situation.
exitwhen is conceptually similar to exception handling, and exceptions or similar constructs are used for this purpose in many languages.
The following simple example involves searching a two-dimensional table for a particular item.
exitwhen found or missing;
    for I := 1 to N do
        for J := 1 to M do
            if table[I,J] = target then found;
    missing;
exits
    found: print ("item is in table");
    missing: print ("item is not in table");
endexit;
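No mainstream language offers exitwhen itself; a Python sketch of the same search, using one exception class per event as the comparison with exception handling above suggests (the table contents are hypothetical):

class Found(Exception): pass
class Missing(Exception): pass

table = [[1, 2], [3, 4]]   # hypothetical two-dimensional table
target = 3

try:
    for row in table:
        for cell in row:
            if cell == target:
                raise Found()    # the event's name, used as a statement
    raise Missing()
except Found:
    print("item is in table")
except Missing:
    print("item is not in table")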
Security
One way to attack a piece of software is to redirect the flow of execution of a program. A variety of control-flow integrity techniques, including stack canaries, buffer overflow protection, shadow stacks, and vtable pointer verification, are used to defend against these attacks.
Coelacanth
Coelacanths (order Coelacanthiformes) are an ancient group of lobe-finned fish (Sarcopterygii) in the class Actinistia. As sarcopterygians, they are more closely related to lungfish and tetrapods (which includes amphibians, reptiles, birds and mammals) than to ray-finned fish.
Well-represented in both freshwater and marine fossils since the Devonian, they are now represented by only two extant marine species in the genus Latimeria: the West Indian Ocean coelacanth (Latimeria chalumnae), primarily found near the Comoro Islands off the east coast of Africa, and the Indonesian coelacanth (Latimeria menadoensis). The name coelacanth originates from the Permian genus Coelacanthus, which was the first scientifically named coelacanth.
The oldest known coelacanth fossils date back more than 410 million years. Coelacanths were thought to have become extinct in the Late Cretaceous, around 66 million years ago, but were discovered living off the coast of South Africa in 1938.
The coelacanth was long considered a "living fossil" because scientists thought it was the sole remaining member of a taxon otherwise known only from fossils, with no close relations alive, and that it evolved into roughly its current form approximately 400 million years ago. However, several more recent studies have shown that coelacanth body shapes are much more diverse than previously thought.
Etymology
The word Coelacanth is an adaptation of the Modern Latin Coelacanthus ('hollow spine'), from the Greek koilos ('hollow') and akantha ('spine'), referring to the hollow caudal fin rays of the first fossil specimen described and named by Louis Agassiz in 1839, belonging to the genus Coelacanthus. The genus name Latimeria commemorates Marjorie Courtenay-Latimer, who discovered the first specimen.
Discovery
The earliest fossils of coelacanths were discovered in the 19th century. Coelacanths, which are related to lungfishes and tetrapods, were believed to have become extinct at the end of the Cretaceous period. More closely related to tetrapods than to the ray-finned fish, coelacanths were considered transitional species between fish and tetrapods. On 22 December 1938, the first Latimeria specimen was found off the east coast of South Africa, off the Chalumna River (now Tyolomnqa). Museum curator Marjorie Courtenay-Latimer discovered the fish among the catch of a local fisherman. Courtenay-Latimer contacted a Rhodes University ichthyologist, J. L. B. Smith, sending him drawings of the fish, and he confirmed the fish's importance with a famous cable: "Most Important Preserve Skeleton and Gills = Fish Described."
Its discovery 66 million years after its supposed extinction makes the coelacanth the best-known example of a Lazarus taxon, an evolutionary line that seems to have disappeared from the fossil record only to reappear much later. Since 1938, West Indian Ocean coelacanths have been found in the Comoros, Kenya, Tanzania, Mozambique, Madagascar, in iSimangaliso Wetland Park, and off the south coast of KwaZulu-Natal in South Africa.
The Comoro Islands specimen was discovered in December 1952. Between 1938 and 1975, 84 specimens were caught and recorded.
The second extant species, the Indonesian coelacanth, was described from Manado, North Sulawesi, Indonesia, in 1999 by Pouyaud et al. based on a specimen discovered by Mark V. Erdmann in 1998 and deposited at the Indonesian Institute of Sciences (LIPI). Erdmann and his wife Arnaz Mehta first encountered a specimen at a local market in September 1997, but took only a few photographs of the first specimen of this species before it was sold. After confirming that it was a unique discovery, Erdmann returned to Sulawesi in November 1997 to interview fishermen and look for further examples. A second specimen was caught by a fisherman in July 1998 and was then handed to Erdmann.
Description
Latimeria chalumnae and L. menadoensis are the only two known living coelacanth species. Coelacanths are large, plump, lobe-finned fish that can grow to more than 2 m (6.6 ft) and weigh around 90 kg (200 lb). They are estimated to live up to 100 years, based on analysis of annual growth marks on scales, and reach maturity around the age of 55; the oldest known specimen was 84 years old at the time of its capture in 1960.
Even though their estimated lifetime is similar to that of humans, gestation can last 5 years, which is 1.5 years more than that of the deep-sea frilled shark, the previous record holder.
They are nocturnal piscivorous drift-hunters.
The body is covered in ctenoid elasmoid scales that act as armor. Coelacanths have eight fins – two dorsal fins, two pectoral fins, two pelvic fins, one anal fin and one caudal fin. The tail is very nearly equally proportioned and is split by a terminal tuft of fin rays that make up its caudal lobe. The eyes of the coelacanth are very large, while the mouth is very small. The eye is acclimatized to seeing in poor light by rods that absorb mostly short wavelengths. Coelacanth vision has evolved to a mainly blue-shifted color capacity. Pseudomaxillary folds surround the mouth and replace the maxilla, a structure absent in coelacanths. Two nostrils, along with four other external openings, appear between the premaxilla and lateral rostral bones. The nasal sacs resemble those of many other fish and do not contain an internal nostril. The coelacanth's rostral organ, contained within the ethmoid region of the braincase, has three unguarded openings into the environment and is used as a part of the coelacanth's laterosensory system. The coelacanth's auditory reception is mediated by its inner ear, which is very similar to that of tetrapods and is classified as being a basilar papilla.
Coelacanths are a part of the clade Sarcopterygii, or the lobe-finned fishes. They share membership in this clade with lungfish and tetrapods. Externally, several characteristics distinguish coelacanths from other lobe-finned fish. They possess a three-lobed caudal fin, also called a trilobate fin or a diphycercal tail. A secondary tail extending past the primary tail separates the upper and lower halves of the coelacanth. Ctenoid elasmoid scales act as thick armor to protect the coelacanth's exterior. Several internal traits also aid in differentiating coelacanths from other lobe-finned fish. At the back of the skull, the coelacanth possesses a hinge, the intracranial joint, which allows it to open its mouth extremely wide. Coelacanths also retain an oil-filled notochord, a hollow, pressurized tube which is replaced by a vertebral column early in embryonic development in most other vertebrates. The coelacanth's heart is shaped differently from that of most modern fish, with its chambers arranged in a straight tube. The coelacanth's braincase is 98.5% filled with fat; only 1.5% of the braincase contains brain tissue. The cheeks of the coelacanth are unique because the opercular bone is very small and holds a large soft-tissue opercular flap. A spiracular chamber is present, but the spiracle is closed and never opens during development. Also unique to extant coelacanths is the presence of a "fatty lung", a fat-filled, single-lobed vestigial lung homologous to other fishes' swim bladders. The parallel development of a fatty organ for buoyancy control suggests a unique specialization for deep-water habitats. Small, hard but flexible plates surround the vestigial lung in adult specimens, though not the fatty organ; these plates most likely regulate the volume of the lung. Due to the size of the fatty organ, researchers assume that it is responsible for the unusual relocation of the kidneys. The two kidneys, which are fused into one, are located ventrally within the abdominal cavity, posterior to the cloaca.
DNA
In 2013, a research group published the genome sequence of the coelacanth in the scientific journal Nature.
Due to their lobed fins and other features, it was once hypothesized that the coelacanth might be the most recently diverged non-tetrapod sarcopterygian, i.e. the closest living fish relative of tetrapods. But after the full genome of the coelacanth was sequenced, it was discovered that the lungfish is instead more closely related to tetrapods. The coelacanths and the rhipidistians (the clade containing the last common ancestor of lungfish and tetrapods and its descendants) had already diverged from each other before the lungfish made the transition to land.
Another important discovery from the genome sequencing is that coelacanths are still evolving today. The phenotypic similarity between extant and extinct coelacanths suggests there is limited evolutionary pressure on these organisms to undergo morphological divergence, yet they are undergoing measurable genetic divergence. Prior studies showed that protein-coding regions are evolving at a substitution rate much lower than in other sarcopterygians, consistent with the phenotypic stasis observed between extant and fossil members of the taxon; nevertheless, the non-coding regions, which are subject to higher transposable element activity, show marked divergence even between the two extant coelacanth species. This has been facilitated in part by a coelacanth-specific endogenous retrovirus of the epsilonretrovirus family.
Taxonomy
Cladogram showing the relationships of coelacanth genera after Torino, Soto and Perea, 2021.
Fossil record
According to the fossil record, the divergence of coelacanths, lungfish, and tetrapods is thought to have occurred during the Silurian. Over 100 fossil species of coelacanth have been described. The oldest identified coelacanth fossils are around 420–410 million years old, dating to the early Devonian. Coelacanths were never a diverse group in comparison to other groups of fish, and reached their peak diversity during the Early Triassic (252–247 million years ago), coinciding with a burst of diversification between the Late Permian and Middle Triassic. Most Mesozoic coelacanths belong to the order Latimerioidei, which contains two major subdivisions: the marine Latimeriidae, which includes the modern coelacanths, and the extinct Mawsoniidae, which inhabited brackish and freshwater as well as marine environments.
Paleozoic coelacanths are generally small, while Mesozoic forms were larger. Several specimens belonging to the Jurassic and Cretaceous mawsoniid coelacanth genera Trachymetopon and Mawsonia likely reached or exceeded 5 m (16 ft) in length, making them amongst the largest known fishes of the Mesozoic, and amongst the largest bony fishes of all time.
The most recent fossil latimeriid is Megalocoelacanthus dobiei, whose disarticulated remains are found in late Santonian to middle Campanian, and possibly earliest Maastrichtian, marine strata of the Eastern and Central United States. The most recent mawsoniids are Axelrodichthys megadromos, from early Campanian to early Maastrichtian freshwater continental deposits of France, and an indeterminate marine mawsoniid from Morocco dating to the late Maastrichtian. A small bone fragment from the European Paleocene has been considered the only plausible post-Cretaceous record, but this identification is based on comparative bone histology methods of doubtful reliability.
Living coelacanths have been considered "living fossils" based on their supposedly conservative morphology relative to fossil species; however, recent studies have expressed the view that coelacanth morphologic conservatism is a belief not based on data. Fossils suggest that coelacanths were most morphologically diverse during the Devonian and Carboniferous, while Mesozoic species are generally morphologically similar to each other.
Timeline of genera
Distribution and habitat
The current coelacanth range is primarily along the eastern African coast, although Latimeria menadoensis was discovered off Indonesia. Coelacanths have been found in the waters of Kenya, Tanzania, Mozambique, South Africa, Madagascar, the Comoros and Indonesia. Most Latimeria chalumnae specimens that have been caught were captured around the islands of Grande Comore and Anjouan in the Comoros Archipelago (Indian Ocean). Though there are cases of L. chalumnae caught elsewhere, amino acid sequencing has shown no significant difference between these exceptions and those found around Grande Comore and Anjouan. Even though these few may be considered strays, there are several reports of coelacanths being caught off the coast of Madagascar. This leads scientists to believe that the endemic range of Latimeria chalumnae stretches along the eastern coast of Africa from the Comoros Islands, past the western coast of Madagascar, to the South African coastline. Mitochondrial DNA sequencing of coelacanths caught off the coast of southern Tanzania suggests a divergence of the two populations some 200,000 years ago. This could refute the theory that the Comoros population is the main population while others represent recent offshoots. A live specimen was seen and recorded on video in November 2019 off the village of Umzumbe on the south coast of KwaZulu-Natal, south of the iSimangaliso Wetland Park. This is the farthest south a coelacanth has been recorded since the original discovery, and the second-shallowest record after one in the Diepgat Canyon. These sightings suggest that coelacanths may live shallower than previously thought, at least at the southern end of their range, where colder, better-oxygenated water is available at shallower depths.
The geographical range of the Indonesian coelacanth, Latimeria menadoensis, is believed to be off the coast of Manado Tua Island, Sulawesi, Indonesia, in the Celebes Sea. Key factors confining coelacanths to these areas are food and temperature restrictions, as well as ecological requirements such as caves and crevices that are well suited for drift feeding. Teams of researchers using submersibles have recorded live sightings of the fish in the Sulawesi Sea as well as in the waters of Biak in Papua.
Anjouan Island and Grande Comore provide ideal underwater cave habitats for coelacanths. The islands' underwater volcanic slopes, steeply eroded and covered in sand, house a system of caves and crevices that give coelacanths resting places during daylight hours. These islands support a large benthic fish population that helps to sustain coelacanth populations.
During the daytime, coelacanths rest in caves; others migrate to deeper waters. The cooler waters reduce their metabolic costs. Drifting toward reefs and feeding at night saves vital energy. Resting in caves during the day also saves energy that would otherwise be expended fighting currents.
Behavior
Coelacanth locomotion is unique. To move around they most commonly take advantage of up- or down-wellings of current and drift. Their paired fins stabilize movement through the water. While on the ocean floor, they do not use the paired fins for any kind of movement. Coelacanths generate thrust with their caudal fins for quick starts. Due to the abundance of its fins, the coelacanth has high maneuverability and can orient its body in almost any direction in the water. They have been seen doing headstands as well as swimming belly up. It is thought that the rostral organ helps give the coelacanth electroreception, which aids in movement around obstacles.
Coelacanths are fairly peaceful when encountering others of their kind. They do avoid body contact, however, withdrawing immediately if contact occurs. When approached by potential predators (e.g. a submersible), they show panic flight reactions, suggesting that coelacanths are most likely prey to large deepwater predators. Shark bite marks have been seen on coelacanths, and sharks are common in areas inhabited by coelacanths. Electrophoresis testing of 14 coelacanth enzymes shows little genetic diversity between coelacanth populations. Among the fish that have been caught were about equal numbers of males and females. Population estimates range from 210 to 500 individuals per population. Coelacanths have individual color markings, and scientists also think that they recognize other coelacanths via electric communication.
Feeding
Coelacanths are nocturnal piscivores that feed mainly on smaller benthic fish and various cephalopods. They are "passive drift feeders", slowly drifting along currents with only minimal self-propulsion and eating whatever prey they encounter. Coelacanths also use the electroreceptive rostral organ to detect nearby prey in low-light settings.
Life history
Coelacanths are ovoviviparous, meaning that the female retains the fertilized eggs within her body while the embryos develop during a gestation period of five years. Typically, females are larger than the males; their scales and the skin folds around the cloaca differ. The male coelacanth has no distinct copulatory organs, just a cloaca, which has a urogenital papilla surrounded by erectile caruncles. It is hypothesized that the cloaca everts to serve as a copulatory organ.
Coelacanth eggs are large, with only a thin layer of membrane to protect them. Embryos hatch within the female and eventually are born alive, which is a rarity in fish. This was only discovered when the American Museum of Natural History dissected its first coelacanth specimen in 1975 and found it pregnant with five embryos. Young coelacanths resemble the adult, the main differences being an external yolk sac, larger eyes relative to body size and a more pronounced downward slope of the body. The juvenile coelacanth's broad yolk sac hangs below the pelvic fins. The scales and fins of the juvenile are completely matured; however, it does lack odontodes, which it gains during maturation.
A study that assessed the paternity of the embryos inside two coelacanth females indicated that each clutch was sired by a single male. This could mean that females mate monandrously, i.e. with one male only. Polyandry, female mating with multiple males, is common in both plants and animals and can be advantageous (e.g. insurance against mating with an infertile or incompatible mate), but also confers costs (increased risk of infection, danger of falling prey to predators, increased energy input when searching for new males).
Conservation
Because little is known about the coelacanth, its conservation status is difficult to characterize. According to Fricke et al. (1995), it is important to conserve the species. From 1988 to 1994, Fricke counted some 60 individuals of L. chalumnae on each dive. In 1995 that number dropped to 40. Even though this could be a result of natural population fluctuation, it also could be a result of overfishing. The IUCN currently classifies L. chalumnae as "critically endangered", with a total population size of 500 or fewer individuals. L. menadoensis is considered Vulnerable, with a significantly larger population size (fewer than 10,000 individuals).
The major threat towards the coelacanth is the accidental capture by fishing operations, especially commercial deep-sea trawling. Coelacanths usually are caught when local fishermen are fishing for oilfish. Fishermen sometimes snag a coelacanth instead of an oilfish because they traditionally fish at night, when oilfish (and coelacanths) feed.
Before scientists became interested in coelacanths, they were thrown back into the water if caught. Now that they are recognized as important, fishermen trade them to scientists or other officials. Before the 1980s, this demand was a problem for coelacanth populations. In the 1980s, international aid supplied fiberglass boats to the local fishermen, which moved fishing beyond coelacanth territories into more productive waters. Since then, however, most of the motors on the boats have failed, forcing the fishermen back into coelacanth territory and putting the species at risk again.
Methods to minimize the number of coelacanths caught include moving fishers away from the shore, using different laxatives and malarial salves to reduce the demand for oilfish, using coelacanth models to simulate live specimens, and increasing awareness of the need for conservation. In 1987 the Coelacanth Conservation Council advocated the conservation of coelacanths. The CCC has branches located in Comoros, South Africa, Canada, the United Kingdom, the U.S., Japan, and Germany. The agencies were established to help protect and encourage population growth of coelacanths.
A "deep release kit" was developed in 2014 and distributed by private initiative, consisting of a weighted hook assembly that allows a fisherman to return an accidentally caught coelacanth to deep waters where the hook can be detached once it hits the seafloor. Conclusive reports about the effectiveness of this method are still pending.
In 2002, the South African Coelacanth Conservation and Genome Resource Programme was launched to help further the studies and conservation of the coelacanth. This program focuses on biodiversity conservation, evolutionary biology, capacity building, and public understanding. The South African government committed to spending R10 million on the program. In 2011, a plan was made for a Tanga Coelacanth Marine Park to conserve biodiversity for marine animals including the coelacanth. The park was designed to reduce habitat destruction and improve prey availability for endangered species.
Human consumption
Coelacanths are considered a poor source of food for humans and likely most other fish-eating animals. Coelacanth flesh has large amounts of oil, urea, wax esters, and other compounds that give it a distinctly unpleasant flavor, make it difficult to digest, and can cause diarrhea. The scales themselves secrete mucus, which, combined with the excessive oil their bodies produce, makes coelacanths a slimy food. Where the coelacanth is more common, local fishermen avoid it because of its potential to sicken consumers. As a result, the coelacanth has no real commercial value apart from being coveted by museums and private collectors.
Cultural significance
Because of the surprising nature of the coelacanth's discovery, they have been a frequent source of inspiration in modern artwork, craftsmanship, and literature. At least 22 countries have depicted them on their postage stamps, particularly the Comoros, which has issued 12 different sets of coelacanth stamps. The coelacanth is also depicted on the 1000 Comorian franc banknote, as well as the 5 CF coin.
In the Pokémon media franchise, the Pokémon known as Relicanth is based on the coelacanth.
In the video game series Animal Crossing, the coelacanth is a rare fish that can be caught by the player by fishing in the ocean.
| Biology and health sciences | Fishes | null |
45511 | https://en.wikipedia.org/wiki/Treeshrew | Treeshrew | The treeshrews (also called tree shrews or banxrings) are small mammals native to the tropical forests of South and Southeast Asia. They make up the entire order Scandentia (from Latin scandere, "to climb"), which is split into two families: the Tupaiidae (19 species, the "ordinary" treeshrews) and the Ptilocercidae (one species, the pen-tailed treeshrew).
Though called 'treeshrews', and despite having previously been classified in Insectivora, they are not true shrews, and not all species live in trees. They are omnivores; among other things, treeshrews eat fruit. As fellow members of Euarchonta, treeshrews are closely related to primates, and have been used as an alternative to primates in experimental studies of myopia, psychosocial stress, and hepatitis.
Description
Treeshrews are slender animals with long tails and soft, greyish to reddish-brown fur. The terrestrial species tend to be larger than the arboreal forms, and to have larger claws, which they use for digging up insect prey. They have poorly developed canine teeth and unspecialised molars, with an overall dental formula of 2.1.3.3/3.1.3.3. They have a higher brain-to-body-mass ratio than any other mammal, including humans, though high ratios are not uncommon for small animals.
Treeshrews have good vision, which is binocular in the case of the more arboreal species.
Reproduction
Female treeshrews have a gestation period of 45–50 days and give birth to up to three young in nests lined with dry leaves inside tree hollows. The young are born blind and hairless, but are able to leave the nest after about a month. During this period, the mother provides relatively little maternal care, visiting her young only for a few minutes every other day to suckle them.
Treeshrews reach sexual maturity after around four months, and breed for much of the year, with no clear breeding season in most species.
Behavior
Treeshrews live in small family groups, which defend their territory from intruders. Most are diurnal, although the pen-tailed treeshrew is nocturnal.
They mark their territories using various scent glands or urine, depending on the particular species.
Diet
Treeshrews are omnivorous, feeding on insects, small vertebrates, fruit, and seeds. Among other things, treeshrews eat Rafflesia fruit.
The pen-tailed treeshrew in Malaysia is able to consume large amounts of naturally fermented nectar from flower buds of the bertam palm Eugeissona tristis (with up to 3.8% alcohol content) the entire year without it having any effects on behaviour.
Treeshrews have also been observed intentionally eating foods high in capsaicin, a behavior unique among mammals other than humans. A single TRPV1 mutation reduces their pain response to capsaicinoids, which scientists believe is an evolutionary adaptation to be able to consume spicy foods in their natural habitats.
Pitcher plants such as Nepenthes lowii supplement their carnivorous diet with treeshrew droppings.
Taxonomy
Treeshrews were moved from the order Insectivora into the order Primates because of certain internal similarities to primates (for example, similarities in brain anatomy highlighted by Sir Wilfrid Le Gros Clark), and were classified as "primitive prosimians"; however, they were soon split from the primates and moved into their own clade. Taxonomists continue to refine the treeshrews' relations to primates and to other closely related clades.
Molecular phylogenetic studies have suggested that the treeshrews should be given the same rank (order) as the primates and, with the primates and the flying lemurs (colugos), belong to the grandorder Euarchonta. According to this classification, the Euarchonta are sister to the Glires (lagomorphs and rodents), and the two groups are combined into the superorder Euarchontoglires. However, the alternative placement of treeshrews as sister to both Glires and Primatomorpha cannot be ruled out. Some studies place Scandentia as sister of the Glires, which would invalidate Euarchonta; it is this arrangement that is shown in the tree diagram below.
Several other arrangements of these orders have been proposed in the past, and the above tree is only a well-favored proposal. Although it is known that Scandentia is one of the most basal Euarchontoglires clades, its exact phylogenetic position is not yet considered resolved: it may be a sister of Glires, Primatomorpha, or Dermoptera, or separate from and sister to all other Euarchontoglires. Shared short interspersed nuclear elements (SINEs) offer strong evidence for Scandentia belonging to the Euarchonta group.
Order Scandentia
The 23 species are placed in four genera, which are divided into two families. The majority are in the "ordinary" treeshrew family, Tupaiidae, but one species, the pen-tailed treeshrew, is different enough to warrant placement in its own family, Ptilocercidae; the two families are thought to have separated 60 million years ago. The former Tupaiidae genus Urogale was disbanded in 2011 when the Mindanao treeshrew was moved to Tupaia based on a molecular phylogeny.
Family Tupaiidae
Genus Anathana
Madras treeshrew, A. ellioti
Genus Dendrogale
Bornean smooth-tailed treeshrew, D. melanura
Northern smooth-tailed treeshrew, D. murina
Genus Tupaia
Northern treeshrew, T. belangeri
Golden-bellied treeshrew, T. chrysogaster
Bangka Island treeshrew, T. discolor
Striped treeshrew, T. dorsalis
Mindanao treeshrew, T. everetti
Sumatran treeshrew, T. ferruginea
Common treeshrew, T. glis
Slender treeshrew, T. gracilis
Javan treeshrew, T. hypochrysa
Horsfield's treeshrew, T. javanica
Long-footed treeshrew, T. longipes
Pygmy treeshrew, T. minor
Mountain treeshrew, T. montana
Nicobar treeshrew, T. nicobarica
Palawan treeshrew, T. palawanensis
Painted treeshrew, T. picta
Kalimantan treeshrew, T. salatana
Ruddy treeshrew, T. splendidula
Large treeshrew, T. tana
Family Ptilocercidae
Genus Ptilocercus
Pen-tailed treeshrew, P. lowii
Fossil record
The fossil record of treeshrews is poor. The oldest putative treeshrew, Eodendrogale parva, is from the Middle Eocene of Henan, China, but the identity of this animal is uncertain. Other fossils have come from the Miocene of Thailand, Pakistan, India, and Yunnan, China, as well as the Pliocene of India. Most belong to the family Tupaiidae; one fossil species described from the Oligocene of Yunnan is thought to be closer to the pen-tailed treeshrew.
Named fossil species include Prodendrogale yunnanica, Prodendrogale engesseri, and Tupaia storchi from Yunnan, Tupaia miocenica from Thailand, Palaeotupaia sivalicus from India and Ptilocercus kylin from Yunnan.
| Biology and health sciences | Mammals: General | Animals |
45598 | https://en.wikipedia.org/wiki/Strepsiptera | Strepsiptera | The Strepsiptera () are an order of insects with eleven extant families that include about 600 described species. They are endoparasites of other insects, such as bees, wasps, leafhoppers, silverfish, and cockroaches. Females of most species never emerge from the host after entering its body, finally dying inside it. The early-stage larvae do emerge because they must find an unoccupied living host, and the short-lived males must emerge to seek a receptive female in her host. They are believed to be most closely related to beetles, from which they diverged 300–350 million years ago, but do not appear in the fossil record until the mid-Cretaceous around 100 million years ago.
The order is not well known to non-specialists, and the nearest they have to a common name is stylops, in reference to the genus Stylops. The name of the order translates to "twisted wing", giving rise to other common names used for the order, twisted-wing insects and twisted-winged parasites.
Adult males are rarely observed, although specimens may be lured using cages containing virgin females. Nocturnal specimens can also be collected at light traps.
Biology
Appearance and structure
Males
Males of the Strepsiptera have wings, legs, eyes, and antennae, though their mouthparts cannot be used for feeding. Many have mouthparts modified into sensory structures. The males bear a superficial resemblance to flies. The forewings are modified into small club-shaped structures called halteres, which sense gyroscopic information. A similar organ exists in flies, though in that group the hindwings are modified instead, and the two groups are thought to have independently evolved the structures. The hindwings are generally fan-shaped, and have strongly reduced venation. The antennae are flabellate, and are covered in specialised chemoreceptors, likely to detect females over long distances.
Adult male Strepsiptera have eyes unlike those of any other insect, resembling the eyes found in the trilobite group Phacopina. Instead of a compound eye consisting of hundreds to thousands of ommatidia, that each produce a pixel of the entire image, the strepsipteran eyes consist of only a few dozen "eyelets" that each produce a complete image. These eyelets are separated by cuticle and/or setae, giving the cluster eye as a whole a blackberry-like appearance.
Females
The females of Stylopidia, which includes 97% of all described strepsipteran species and all modern strepsipteran families except Mengenillidae and Bahiaxenidae, are not known to leave their hosts and are neotenic in form, lacking wings, legs, and eyes, but have a well sclerotised cephalothorax (fused head and thorax). Adult female mengenillids are wingless but are free living and somewhat mobile with legs and small eyes. This is probably also true for the bahiaxenids, though this has not been observed.
Larvae
Newly hatched primary (first instar) larvae are minute, smaller than many single-celled organisms. They are highly mobile, with well-developed stemmata that are able to distinguish color. The underside of the body is covered in minute hair-like structures (microtrichia), which allow the larvae to stick to wet surfaces via capillary action. At the back of the body are large, well-developed bristle-like cerci, attached to muscles, which allow the larvae to jump. The tarsal segments of their legs have structures which allow them to cling to their hosts. Later larval instars, which develop inside the host, are completely immobile.
Life cycle
Virgin females release a pheromone which the males use to locate them. Mating in at least some species is polyandrous, where the female mates with more than one male.
In the Stylopidia, the female's anterior region protrudes out between the segments of the host's abdomen. In all strepsipterans the male mates by rupturing the female's cuticle (in the case of Stylopidia, this is in a deep narrow fissure of the cephalothorax near the birth canal). Sperm passes through the opening directly into the body in a process called traumatic insemination, which has independently evolved in some other insects like bed bugs.
Strepsiptera eggs hatch inside the female, and the planidium larvae can move around freely within the female's haemocoel; this behavior is unique to these insects. The offspring consume their mother from the inside in a process known as haemocoelous viviparity. Each female produces many thousands of planidium larvae. The larvae emerge from the brood opening/canal on the female's head, which protrudes outside the host body.
Larvae have legs and actively seek out new hosts. Their legs are partly vestigial in that they lack a trochanter, the leg segment that forms the articulation between the basal coxa and the femur. The larvae are very active as they only have a limited amount of time to find a host before they exhaust their energy reserves. These first-instar larvae have stemmata (simple, single-lens eyes). When the larvae latch onto a host, they enter it by secreting enzymes that soften the cuticle, usually in the abdominal region of the host. Some species have been reported to enter the eggs of hosts. Larvae of Stichotrema dallatorreanum Hofeneder from Papua New Guinea were found to enter their orthopteran host's tarsus (foot).
Once inside the host, they undergo hypermetamorphosis and transform into a less-mobile, legless larval form. They induce the host to produce a bag-like structure inside which they feed and grow. This structure, made from host tissue, protects them from the immune defences of the host. Larvae go through four more instars, and in each moult the older cuticle separates but is not discarded ("apolysis without ecdysis"), so multiple layers form around the larvae. Male larvae pupate after the last moult, but females directly become neotenous adults. The colour and shape of the host's abdomen may be changed and the host usually becomes sterile. The parasites then undergo pupation to become adults. Adult males emerge from the host bodies, while females stay inside. Females may occupy up to 90% of the abdominal volume of their hosts. Adult males are very short-lived, usually surviving less than five hours, and do not feed.
Parasitism
Strepsiptera of various species have been documented to attack hosts in many orders, including members of the orders Zygentoma (silverfish and allies), Orthoptera (grasshoppers, crickets), Blattodea (cockroaches), Mantodea (praying mantis), Heteroptera (bugs), Hymenoptera (wasps, ants and bees), and Diptera (flies). In the strepsipteran family Myrmecolacidae, the males parasitize ants, while the females parasitize Orthoptera. Members of Mengenillidae target Zygentoma exclusively, while Stylopidia targets only winged insects, with a large number of stylopidians targeting wasps and bees, while the largest family of strepsipterans, the Stylopidae, with over 27% of all described strepsipterans, targets bees exclusively.
Very rarely, multiple females may live within a single stylopized host; multiple males within a single host are somewhat more common.
Strepsiptera of the family Myrmecolacidae can influence their host's behaviour, causing their ant hosts to linger on the tips of grass leaves, increasing the chance of being found by strepsipteran males (in the case of females) and putting them in a good position for male emergence (in the case of males).
Taxonomy
The order, named by William Kirby in 1813, is named for the hindwings, which are held at a twisted angle when at rest (from Greek στρέψις (strepsis), a twisting, and πτερόν (pteron), wing). The forewings are reduced to halteres.
Strepsiptera were once believed to be the sister group to the beetle families Meloidae and Ripiphoridae, which have similar parasitic development and forewing reduction. Early molecular research suggested their inclusion as a sister group to the flies, in a clade called Halteria, whose members have one pair of wings modified into halteres, and failed to support their relationship to the beetles. Further molecular studies, however, suggested they are outside the clade Mecopterida (containing the Diptera and Lepidoptera), but found no strong evidence for affinity with any other extant group. Study of their evolutionary position has been problematic due to difficulties in phylogenetic analysis arising from long branch attraction. Most modern molecular studies find strepsipterans as the sister group of beetles (Coleoptera), with both groups together forming the clade Coleopterida. The most basal strepsipteran is the fossil Protoxenos janzeni, discovered in Eocene-aged Baltic amber, while the most basal living strepsipteran is Bahiaxenos relictus, the sole member of the family Bahiaxenidae. The earliest known strepsipteran fossils are those of Cretostylops engeli (Cretostylopidae) and of Kinzelbachilla ellenbergeri, Phthanoxenos nervosus and Heterobathmilla kakopoios (Phthanoxenidae), discovered in middle Cretaceous Burmese amber from Myanmar, around 99 million years old; all of these lie outside the crown group, but are more closely related to modern strepsipterans than Protoxenos is. The finding of a parasitic first instar in the same deposit indicates that the parasitic lifestyle of the group has likely existed nearly unchanged for 100 million years, though the group's evolutionary history prior to this remains a mystery. The idea that the mengenillids' targeting of zygentomans represents the ancestral ecology of the group as a whole has been considered questionable.
Families
The vast majority of living strepsipterans are placed within the grouping Stylopidia, which includes the families Corioxenidae, Halictophagidae, Callipharixenidae, Bohartillidae, Elenchidae, Myrmecolacidae, Stylopidae, Protelencholacidae (extinct) and Xenidae. All Stylopidia have endoparasitic females with multiple genital openings. Two living families, Mengenillidae and Bahiaxenidae, are placed outside of this group, along with several extinct families.
The Stylopidae have four-segmented tarsi and four- to six-segmented antennae, with the third segment having a lateral process. The family Stylopidae may be paraphyletic. The Elenchidae have two-segmented tarsi and four-segmented antennae, with the third segment having a lateral process. The Halictophagidae have three-segmented tarsi and seven-segmented antennae, with lateral processes from the third and fourth segments.
The Stylopidae mostly parasitize wasps and bees, the Elenchidae are known to parasitize Fulgoroidea, while the Halictophagidae are found on leafhoppers, treehoppers, and mole cricket hosts.
Strepsipteran insects in the genus Xenos parasitize Polistes carnifex, a species of social wasps. These obligate parasites infect the developing wasp larvae in the nest and are present within the abdomens of female wasps when they hatch out. Here they remain until they thrust through the cuticle and pupate (males) or release infective first-instar larvae onto flowers (females). These larvae are transported back to their nests by foraging wasps.
Cladogram
Relationship with humans
Some insects which have been considered pests may have strepsipteran endoparasites. Inoculation of a pest population with the corresponding parasitoid may sometimes aid in reducing the impact of such pests, although no strepsipterans have ever been tested for use in this capacity, let alone being available for such purposes, either commercially or experimentally.
| Biology and health sciences | Insects: General | Animals |
45599 | https://en.wikipedia.org/wiki/Surgery | Surgery | Surgery is a medical specialty that uses manual and instrumental techniques to diagnose or treat pathological conditions (e.g., trauma, disease, injury, malignancy), to alter bodily functions (e.g., malabsorption created by bariatric surgery such as gastric bypass), to reconstruct or alter aesthetics and appearance (cosmetic surgery), or to remove unwanted tissues (body fat, glands, scars or skin tags) or foreign bodies.
The act of performing surgery may be called a surgical procedure or surgical operation, or simply "surgery" or "operation". In this context, the verb "operate" means to perform surgery. The adjective surgical means pertaining to surgery; e.g. surgical instruments, surgical facility or surgical nurse. Most surgical procedures are performed by a pair of operators: a surgeon who is the main operator performing the surgery, and a surgical assistant who provides in-procedure manual assistance during surgery. Modern surgical operations typically require a surgical team consisting of the surgeon, the surgical assistant, an anaesthetist (often complemented by an anaesthetic nurse), a scrub nurse (who handles sterile equipment), a circulating nurse and a surgical technologist, while procedures that mandate cardiopulmonary bypass also include a perfusionist. All surgical procedures are considered invasive and often require a period of postoperative care (sometimes intensive care) for the patient to recover from the iatrogenic trauma inflicted by the procedure. The duration of surgery can span from several minutes to tens of hours depending on the specialty, the nature of the condition, the target body parts involved and the circumstance of each procedure, but most surgeries are designed to be one-off interventions that are typically not intended as an ongoing or repeated type of treatment.
In British colloquialism, the term "surgery" can also refer to the facility where surgery is performed, or simply the office/clinic of a physician, dentist or veterinarian.
Definitions
As a general rule, a procedure is considered surgical when it involves cutting of a person's tissues or closure of a previously sustained wound. Other procedures that do not necessarily fall under this rubric, such as angioplasty or endoscopy, may be considered surgery if they involve "common" surgical procedures or settings, such as use of antiseptic measures and sterile fields, sedation/anesthesia, proactive hemostasis, typical surgical instruments, and suturing or stapling. All forms of surgery are considered invasive procedures; so-called "noninvasive surgery" would more appropriately be called minimally invasive procedures, a term that usually refers to procedures that utilize natural orifices (e.g. most urological procedures), do not penetrate the structure being excised (e.g. endoscopic polyp excision, rubber band ligation, laser eye surgery), are percutaneous (e.g. arthroscopy, catheter ablation, angioplasty and valvuloplasty), or are radiosurgical (e.g. irradiation of a tumor).
Types of surgery
Surgical procedures are commonly categorized by urgency, type of procedure, body system involved, the degree of invasiveness, and special instrumentation.
Based on timing:
Elective surgery is done to correct a non-life-threatening condition, and is carried out at the person's convenience, subject to the surgeon's and the surgical facility's availability.
Semi-elective surgery is one that is better done early to avoid complications or potential deterioration of the patient's condition, but such risks are sufficiently low that the procedure can be postponed for a short period of time.
Emergency surgery is surgery which must be done without any delay to prevent death or serious disabilities or loss of limbs and functions.
Based on purpose:
Exploratory surgery is performed to establish or aid a diagnosis.
Therapeutic surgery is performed to treat a previously diagnosed condition.
Curative surgery is a therapeutic procedure done to permanently remove a pathology.
Plastic surgery is done to improve a body part's function or appearance.
Reconstructive plastic surgery is done to improve the function or subjective appearance of a damaged or malformed body part.
Cosmetic surgery is done to subjectively improve the appearance of an otherwise normal body part.
Bariatric surgery is done to assist weight loss when dietary and pharmaceutical methods alone have failed.
Non-survival surgery, or terminal surgery, is surgery in which euthanasia is performed while the subject is under anesthesia, so that the subject will not regain conscious pain perception. This type of surgery is usually done in animal testing experiments.
By type of procedure:
Amputation involves removing an entire body part, usually a limb or digit; castration is the amputation of testes; circumcision is the removal of prepuce from the penis or clitoral hood from the clitoris (see female circumcision). Replantation involves reattaching a severed body part.
Resection is the removal of all or part of an internal organ and/or connective tissue. A segmental resection specifically removes an independent vascular region of an organ such as a hepatic segment, a bronchopulmonary segment or a renal lobe. Excision is the resection of only part of an organ, tissue or other body part (e.g. skin) without discriminating specific vascular territories. Exenteration is the complete removal of all organs and soft tissue content (especially lymphoid tissues) within a body cavity.
Extirpation is the complete excision or surgical destruction of a body part.
Ablation is destruction of tissue through the use of energy-transmitting devices such as electrocautery/fulguration, laser, focused ultrasound or freezing.
Repair involves the direct closure or restoration of an injured, mutilated or deformed organ or body part, usually by suturing or internal fixation. Reconstruction is an extensive repair of a complex body part (such as joints), often with some degree of structural/functional replacement, and commonly involves grafting and/or use of implants.
Grafting is the relocation and establishment of a tissue from one part of the body to another. A flap is the relocation of a tissue without complete separation of its original attachment, and a free flap is a completely detached flap that carries an intact neurovascular structure ready for grafting onto a new location.
Bypass involves the relocation/grafting of a tubular structure onto another in order to reroute the content flow of that target structure from a specific segment directly to a more distal ("downstream") segment.
Implantation is insertion of artificial medical devices to replace or augment existing tissue.
Transplantation is the replacement of an organ or body part by insertion of another from a different human (or animal) into the person undergoing surgery. Harvesting is the resection of an organ or body part from a live human or animal (known as the donor) for transplantation into another patient (known as the recipient).
By organ system: Surgical specialties are traditionally and academically categorized by the organ, organ system or body region involved. Examples include:
Cardiac surgery — the heart and mediastinal great vessels;
Thoracic surgery — the thoracic cavity including the lungs;
Gastrointestinal surgery — the digestive tract and its accessory organs;
Vascular surgery — the extra-mediastinal great vessels and peripheral circulatory system;
Urological surgery — the genitourinary system;
ENT surgery — ear, nose and throat, also known as head and neck surgery when including the neck region;
Oral and maxillofacial surgery — the oral cavity, jaws, and face;
Neurosurgery — the central nervous system; and
Orthopedic surgery — the musculoskeletal system.
By degree of invasiveness of surgical procedures:
Conventional open surgery (such as a laparotomy) requires a large incision to access the area of interest, and directly exposes the internal body cavity to the outside.
Minimally-invasive surgery involves much smaller surface incisions or even natural orifices (nostril, mouth, anus or urethra) to insert miniaturized instruments within a body cavity or structure, as in laparoscopic surgery or angioplasty.
Hybrid surgery uses a combination of open and minimally-invasive techniques, and may include hand ports or larger incisions to assist with performance of elements of the procedure.
By equipment used:
Laser surgery involves use of laser ablation to divide tissue instead of a scalpel, scissors or similar sharp-edged instruments.
Cryosurgery uses low-temperature cryoablation to freeze and destroy a target tissue.
Electrosurgery involves use of electrocautery to cut and coagulate tissue.
Microsurgery involves the use of an operating microscope for the surgeon to see and manipulate small structures.
Endoscopic surgery uses optical instruments to relay the image from inside an enclosed body cavity to the outside, and the surgeon performs the procedure using specialized handheld instruments inserted through trocars placed through the body wall. Most modern endoscopic procedures are video-assisted, meaning the images are viewed on a display screen rather than through the eyepiece on the endoscope.
Robotic surgery makes use of robotics such as the Da Vinci or the ZEUS robotic surgical systems, to remotely control endoscopic or minimally-invasive instruments.
Terminology
Resection and excisional procedures start with a prefix for the target organ to be excised (cut out) and end in the suffix -ectomy. For example, removal of part of the stomach would be called a subtotal gastrectomy.
Procedures involving cutting into an organ or tissue end in -otomy. A surgical procedure cutting through the abdominal wall to gain access to the abdominal cavity is a laparotomy.
Minimally invasive procedures, involving small incisions through which an endoscope is inserted, end in -oscopy. For example, such surgery in the abdominal cavity is called laparoscopy.
Procedures for formation of a permanent or semi-permanent opening called a stoma in the body end in -ostomy, such as creation of a colostomy, a connection of the colon to the abdominal wall. This suffix is also used for a connection between two viscera, such as how an esophagojejunostomy refers to a connection created between the esophagus and the jejunum.
Plastic and reconstruction procedures start with the name for the body part to be reconstructed and end in -plasty. For example, rhino- is a prefix meaning "nose", therefore a rhinoplasty is a reconstructive or cosmetic surgery for the nose. A pyloroplasty refers to a type of reconstruction of the gastric pylorus.
Procedures that involve cutting the muscular layers of an organ end in -myotomy. A pyloromyotomy refers to cutting the muscular layers of the gastric pylorus.
Repair of a damaged or abnormal structure ends in -orrhaphy. This includes herniorrhaphy, another name for a hernia repair.
Reoperation, revision, or "redo" procedures refer to a planned or unplanned return to the operating theater after a surgery is performed to re-address an aspect of patient care. Unplanned reasons for reoperation include postoperative complications such as bleeding or hematoma formation, development of a seroma or abscess, anastomotic leak, tissue necrosis requiring debridement or excision, or in the case of malignancy, close or involved resection margins that may require re-excision to avoid local recurrence. Reoperation can be performed in the acute phase, or it can be also performed months to years later if the surgery failed to solve the indicated problem. Reoperation can also be planned as a staged operation where components of the procedure are performed or reversed under separate anesthesia.
Description of surgical procedure
Setting
Inpatient surgery is performed in a hospital, and the person undergoing surgery stays at least one night in the hospital after the surgery. Outpatient surgery occurs in a hospital outpatient department or freestanding ambulatory surgery center, and the person who had surgery is discharged the same working day. Office-based surgery occurs in a physician's office, and the person is discharged the same day.
At a hospital, modern surgery is often performed in an operating theater using surgical instruments, an operating table, and other equipment. Among United States hospitalizations for non-maternal and non-neonatal conditions in 2012, more than one-fourth of stays and half of hospital costs involved stays that included operating room (OR) procedures. The environment and procedures used in surgery are governed by the principles of aseptic technique: the strict separation of "sterile" (free of microorganisms) things from "unsterile" or "contaminated" things. All surgical instruments must be sterilized, and an instrument must be replaced or re-sterilized if it becomes contaminated (i.e. handled in an unsterile manner, or allowed to touch an unsterile surface). Operating room staff must wear sterile attire (scrubs, a scrub cap, a sterile surgical gown, sterile latex or non-latex polymer gloves and a surgical mask), and they must scrub hands and arms with an approved disinfectant agent before each procedure.
Preoperative care
Prior to surgery, the person is given a medical examination, receives certain pre-operative tests, and their physical status is rated according to the ASA physical status classification system. If these results are satisfactory, the person requiring surgery signs a consent form and is given a surgical clearance. If the procedure is expected to result in significant blood loss, an autologous blood donation may be made some weeks prior to surgery. If the surgery involves the digestive system, the person requiring surgery may be instructed to perform a bowel prep by drinking a solution of polyethylene glycol the night before the procedure. People preparing for surgery are also instructed to abstain from food or drink (an NPO order after midnight on the night before the procedure), to minimize the effect of stomach contents on pre-operative medications and reduce the risk of aspiration if the person vomits during or after the procedure.
Some medical systems have a practice of routinely performing chest x-rays before surgery. The premise behind this practice is that the physician might discover some unknown medical condition which would complicate the surgery, and that upon discovering this with the chest x-ray, the physician would adapt the surgery practice accordingly. However, medical specialty professional organizations recommend against routine pre-operative chest x-rays for people who have an unremarkable medical history and presented with a physical exam which did not indicate a chest x-ray. Routine x-ray examination is more likely to result in problems like misdiagnosis, overtreatment, or other negative outcomes than it is to result in a benefit to the person. Likewise, other tests including complete blood count, prothrombin time, partial thromboplastin time, basic metabolic panel, and urinalysis should not be done unless the results of these tests can help evaluate surgical risk.
Preparing for surgery
A surgical team may include a surgeon, an anesthetist, a circulating nurse, and a "scrub tech" (surgical technician), as well as other assistants who provide equipment and supplies as required. While informed consent discussions may be performed in a clinic or acute care setting, the pre-operative holding area is where documentation is reviewed and where family members can also meet the surgical team. Nurses in the preoperative holding area confirm orders and answer additional questions from the patient's family members prior to surgery. In the pre-operative holding area, the person preparing for surgery changes out of their street clothes and is asked to confirm the details of the surgery as previously discussed during the process of informed consent. A set of vital signs is recorded, a peripheral IV line is placed, and pre-operative medications (antibiotics, sedatives, etc.) are given.
When the patient enters the operating room and is appropriately anesthetized, the team positions the patient in an appropriate surgical position. If hair is present at the surgical site, it is clipped rather than shaved. The skin surface within the operating field is cleansed and prepared by applying an antiseptic (typically chlorhexidine gluconate in alcohol, as this is twice as effective as povidone-iodine at reducing the risk of infection). Sterile drapes are then used to cover the borders of the operating field. Depending on the type of procedure, the cephalad drapes are secured to a pair of poles near the head of the bed to form an "ether screen", which separates the anesthetist/anesthesiologist's working area (unsterile) from the surgical site (sterile).
Anesthesia is administered to prevent pain from the trauma of cutting, tissue manipulation, application of thermal energy, and suturing. Depending on the type of operation, anesthesia may be provided locally, regionally, or as general anesthesia. Spinal anesthesia may be used when the surgical site is too large or deep for a local block, but general anesthesia may not be desirable. With local and spinal anesthesia, the surgical site is anesthetized, but the person can remain conscious or minimally sedated. In contrast, general anesthesia may render the person unconscious and paralyzed during surgery. The person is typically intubated to protect their airway and placed on a mechanical ventilator, and anesthesia is produced by a combination of injected and inhaled agents. The choice of surgical method and anesthetic technique aims to solve the indicated problem, minimize the risk of complications, optimize the time needed for recovery, and limit the surgical stress response.
Intraoperative phase
The intraoperative phase begins when the surgery subject is received in the surgical area (such as the operating theater or surgical department), and lasts until the subject is transferred to a recovery area (such as a post-anesthesia care unit).
An incision is made to access the surgical site. Blood vessels may be clamped or cauterized to prevent bleeding, and retractors may be used to expose the site or keep the incision open. The approach to the surgical site may involve several layers of incision and dissection, as in abdominal surgery, where the incision must traverse skin, subcutaneous tissue, three layers of muscle and then the peritoneum. In certain cases, bone may be cut to further access the interior of the body; for example, cutting the skull for brain surgery or cutting the sternum for thoracic (chest) surgery to open up the rib cage. During surgery, aseptic technique is used to prevent infection or further spread of the disease. The surgeons' and assistants' hands, wrists and forearms are washed thoroughly for at least 4 minutes to prevent germs getting into the operative field, then sterile gloves are placed onto their hands. An antiseptic solution is applied to the area of the person's body that will be operated on. Sterile drapes are placed around the operative site. Surgical masks are worn by the surgical team to prevent germs on droplets of liquid from their mouths and noses from contaminating the operative site.
Work to correct the problem in the body then proceeds. This work may involve:
excision – cutting out an organ, tumor, or other tissue.
resection – partial removal of an organ or other bodily structure.
reconnection of organs, tissues, etc., particularly if severed. Resection of organs such as intestines involves reconnection. Internal suturing or stapling may be used. Surgical connection between blood vessels or other tubular or hollow structures such as loops of intestine is called anastomosis.
reduction – the movement or realignment of a body part to its normal position. e.g. Reduction of a broken nose involves the physical manipulation of the bone or cartilage from their displaced state back to their original position to restore normal airflow and aesthetics.
ligation – tying off blood vessels, ducts, or "tubes".
grafts – may be severed pieces of tissue cut from the same (or different) body or flaps of tissue still partly connected to the body but resewn for rearranging or restructuring of the area of the body in question. Although grafting is often used in cosmetic surgery, it is also used in other surgery. Grafts may be taken from one area of the person's body and inserted to another area of the body. An example is bypass surgery, where clogged blood vessels are bypassed with a graft from another part of the body. Alternatively, grafts may be from other persons, cadavers, or animals.
insertion of prosthetic parts when needed. Pins or screws to set and hold bones may be used. Sections of bone may be replaced with prosthetic rods or other parts. Sometimes a plate is inserted to replace a damaged area of skull. Artificial hip replacement has become more common. Heart pacemakers or valves may be inserted. Many other types of prostheses are used.
creation of a stoma, a permanent or semi-permanent opening in the body
in transplant surgery, the donor organ (taken out of the donor's body) is inserted into the recipient's body and reconnected to the recipient in all necessary ways (blood vessels, ducts, etc.).
arthrodesis – surgical connection of adjacent bones so the bones can grow together into one. Spinal fusion is an example of adjacent vertebrae connected allowing them to grow together into one piece.
modifying the digestive tract in bariatric surgery for weight loss.
repair of a fistula, hernia, or prolapse.
repair according to the ICD-10-PCS, in the Medical and Surgical Section 0, root operation Q, means restoring, to the extent possible, a body part to its normal anatomic structure and function. This definition, repair, is used only when the method used to accomplish the repair is not one of the other root operations. Examples would be colostomy takedown, herniorrhaphy of a hernia, and the surgical suture of a laceration.
other procedures, including:
clearing clogged ducts, blood or other vessels
removal of calculi (stones)
draining of accumulated fluids
debridement – removal of dead, damaged, or diseased tissue
Blood or blood expanders may be administered to compensate for blood lost during surgery. Once the procedure is complete, sutures or staples are used to close the incision. Once the incision is closed, the anesthetic agents are stopped or reversed, and the person is taken off ventilation and extubated (if general anesthesia was administered).
Postoperative care
After completion of surgery, the person is transferred to the post-anesthesia care unit and closely monitored. When the person is judged to have recovered from the anesthesia, they are either transferred to a surgical ward elsewhere in the hospital or discharged home. During the post-operative period, the person's general function is assessed, the outcome of the procedure is assessed, and the surgical site is checked for signs of infection. There are several risk factors associated with postoperative complications, such as immune deficiency and obesity. Obesity has long been considered a risk factor for adverse post-surgical outcomes; it has been linked to many disorders, such as obesity hypoventilation syndrome, atelectasis, pulmonary embolism, adverse cardiovascular effects, and wound healing complications. If removable skin closures are used, they are removed 7 to 10 days after surgery, or once healing of the incision is well under way.
It is not uncommon for surgical drains to be required to remove blood or fluid from the surgical wound during recovery. These drains usually stay in until the volume of drainage tapers off, after which they are removed. If a drain becomes clogged, an abscess can develop.
Postoperative therapy may include adjuvant treatment such as chemotherapy, radiation therapy, or administration of medication such as anti-rejection medication for transplants. For postoperative nausea and vomiting (PONV), solutions like saline, water, controlled-breathing placebo and aromatherapy can be used in addition to medication. Other follow-up studies or rehabilitation may be prescribed during and after the recovery period. A recent post-operative care philosophy has been early ambulation: getting the patient moving as early as possible, whether simply sitting up or walking around. Early ambulation has been found to shorten the patient's length of stay, the amount of time a patient spends in the hospital after surgery before discharge. In a recent study of lumbar decompressions, the patients' length of stay was decreased by 1–3 days.
The use of topical antibiotics on surgical wounds to reduce infection rates has been questioned. Antibiotic ointments are likely to irritate the skin, slow healing, and could increase the risk of developing contact dermatitis and antibiotic resistance. It has also been suggested that topical antibiotics should only be used when a person shows signs of infection and not as a preventative. A systematic review published by Cochrane in 2016, though, concluded that topical antibiotics applied over certain types of surgical wounds reduce the risk of surgical site infections, when compared to no treatment or use of antiseptics. The review also did not find conclusive evidence to suggest that topical antibiotics increased the risk of local skin reactions or antibiotic resistance.
Through a retrospective analysis of national administrative data, the association between mortality and the day of elective surgery suggests a higher risk for procedures carried out later in the working week and on weekends. The odds of death were 44% higher for procedures performed on a Friday, and 82% higher for weekend procedures, relative to those performed on a Monday. This "weekday effect" has been attributed to several factors, including poorer availability of services on a weekend and decreased staff numbers and experience levels over a weekend.
Postoperative pain affects an estimated 80% of people who undergo surgery. While pain is expected after surgery, there is growing evidence that pain may be inadequately treated in many people in the acute period immediately after surgery. It has been reported that the incidence of inadequately controlled pain after surgery ranged from 25.1% to 78.4% across all surgical disciplines. There is insufficient evidence to determine whether giving opioid pain medication pre-emptively (before surgery) reduces postoperative pain or the amount of medication needed after surgery.
Postoperative recovery has been defined as an energy‐requiring process to decrease physical symptoms, reach a level of emotional well‐being, regain functions, and re‐establish activities. Moreover, it has been identified that patients who have undergone surgery are often not fully recovered on discharge.
Epidemiology
United States
In 2011, of the 38.6 million hospital stays in U.S. hospitals, 29% included at least one operating room procedure. These stays accounted for 48% of the total $387 billion in hospital costs.
The overall number of procedures remained stable from 2001 to 2011. In 2011, over 15 million operating room procedures were performed in U.S. hospitals.
Data from 2003 to 2011 showed that U.S. hospital costs were highest for the surgical service line; the surgical service line costs were $17,600 in 2003 and were projected to be $22,500 in 2013. For hospital stays in 2012 in the United States, private insurance had the highest percentage of surgical expenditure, and mean hospital costs were highest for surgical stays.
Special populations
Elderly people
Older adults have widely varying physical health. Frail elderly people are at significant risk of post-surgical complications and the need for extended care. Assessment of older people before elective surgery can accurately predict the person's recovery trajectories. One frailty scale uses five items: unintentional weight loss, muscle weakness, exhaustion, low physical activity, and slowed walking speed. A healthy person scores 0; a very frail person scores 5. Compared to non-frail elderly people, people with intermediate frailty scores (2 or 3) are twice as likely to have post-surgical complications, spend 50% more time in the hospital, and are three times as likely to be discharged to a skilled nursing facility instead of to their own homes. People who are frail and elderly (score of 4 or 5) have even worse outcomes, with the risk of being discharged to a nursing home rising to twenty times the rate for non-frail elderly people.
Children
Surgery on children requires considerations that are not common in adult surgery. Children and adolescents are still developing physically and mentally, making it difficult for them to make informed decisions and give consent for surgical treatments. Bariatric surgery in youth is among the controversial topics related to surgery in children.
Vulnerable populations
Doctors perform surgery with the consent of the person undergoing surgery. Some people are able to give better informed consent than others. Populations such as incarcerated persons, people living with dementia, the mentally incompetent, persons subject to coercion, and other people who are not able to make decisions with the same authority as others, have special needs when making decisions about their personal healthcare, including surgery.
Global surgery
Global surgery has been defined as "the multidisciplinary enterprise of providing improved and equitable surgical care to the world's population, with its core belief as the issues of need, access and quality". Halfdan T. Mahler, the 3rd Director-General of the World Health Organization (WHO), first brought attention to the disparities in surgery and surgical care in 1980 when he stated in his address to the World Congress of the International College of Surgeons: "the vast majority of the world's population has no access whatsoever to skilled surgical care and little is being done to find a solution." As such, surgical care globally has been described as the "neglected stepchild of global health", a term coined by Paul Farmer to highlight the urgent need for further work in this area. Furthermore, Jim Yong Kim, the former President of the World Bank, proclaimed in 2014 that "surgery is an indivisible, indispensable part of health care and of progress towards universal health coverage."
In 2015, the Lancet Commission on Global Surgery (LCoGS) published the landmark report titled "Global Surgery 2030: evidence and solutions for achieving health, welfare, and economic development", describing the large, pre-existing burden of surgical diseases in low- and middle-income countries (LMICs) and future directions for increasing universal access to safe surgery by the year 2030. The Commission highlighted that about 5 billion people lack access to safe and affordable surgical and anesthesia care and that 143 million additional procedures are needed every year to prevent further morbidity and mortality from treatable surgical conditions, as well as a $12.3 trillion loss in economic productivity by the year 2030. This was especially true in the poorest countries, which account for over one-third of the world's population but only 3.5% of all surgeries that occur worldwide. It emphasized the need to significantly improve the capacity for Bellwether procedures (laparotomy, caesarean section, and open fracture care), which are considered the minimum level of care that first-level hospitals should be able to provide in order to capture the most basic emergency surgical care. In terms of the financial impact on patients, the lack of adequate surgical and anesthesia care has resulted in 33 million individuals every year facing catastrophic health expenditure: out-of-pocket healthcare costs exceeding 40% of a given household's income.
In alignment with the LCoGS call for action, the World Health Assembly adopted the resolution WHA68.15 in 2015 that stated, "Strengthening emergency and essential surgical care and anesthesia as a component of universal health coverage." This not only mandated the WHO to prioritize strengthening the surgical and anesthesia care globally, but also led to governments of the member states recognizing the urgent need for increasing capacity in surgery and anesthesia. Additionally, the third edition of Disease Control Priorities (DCP3), published in 2015 by the World Bank, declared surgery as essential and featured an entire volume dedicated to building surgical capacity.
Data from WHO and the World Bank indicate that scaling up infrastructure to enable access to surgical care in regions where it is currently limited or is non-existent is a low-cost measure relative to the significant morbidity and mortality caused by lack of surgical treatment. In fact, a systematic review found that the cost-effectiveness ratio – dollars spent per DALYs averted – for surgical interventions is on par or exceeds those of major public health interventions such as oral rehydration therapy, breastfeeding promotion, and even HIV/AIDS antiretroviral therapy. This finding challenged the common misconception that surgical care is financially prohibitive endeavor not worth pursuing in LMICs.
A key policy framework that arose from this renewed global commitment to surgical care worldwide is the National Surgical, Obstetric and Anesthesia Plan (NSOAP). NSOAP focuses on policy-to-action capacity building for surgical care with tangible steps as follows: (1) analysis of baseline indicators, (2) partnership with local champions, (3) broad stakeholder engagement, (4) consensus building and synthesis of ideas, (5) language refinement, (6) costing, (7) dissemination, and (8) implementation. This approach has been widely adopted, and its steps have served as guiding principles for collaboration between international partners and local institutions and governments. Successful implementations have allowed for sustainability in terms of long-term monitoring, quality improvement, and continued political and financial support.
Human rights
Access to surgical care is increasingly recognized as an integral aspect of healthcare and is therefore evolving into a normative derivation of the human right to health. Articles 12.1 and 12.2 of the ICESCR define the human right to health as "the right of everyone to the enjoyment of the highest attainable standard of physical and mental health". In August 2000, the UN Committee on Economic, Social and Cultural Rights (CESCR) interpreted this to mean a "right to the enjoyment of a variety of facilities, goods, services, and conditions necessary for the realization of the highest attainable health". Surgical care can thereby be viewed as a positive right: an entitlement to protective healthcare.
Woven through the international human and health rights literature is the right to be free from surgical disease. Article 12.2a of the 1966 ICESCR described the need for "provision for the reduction of the stillbirth-rate and of infant mortality and for the healthy development of the child", which was subsequently interpreted to mean "requiring measures to improve… emergency obstetric services". Article 12.2d of the ICESCR stipulates the need for "the creation of conditions which would assure to all medical service and medical attention in the event of sickness", and is interpreted in the 2000 comment to include timely access to "basic preventative, curative services… for appropriate treatment of injury and disability". Obstetric care shares close ties with reproductive rights, which include access to reproductive health.
Surgeons and public health advocates, such as Kelly McQueen, have described surgery as "integral to the right to health". This is reflected in the establishment of the WHO Global Initiative for Emergency and Essential Surgical Care in 2005, the 2013 formation of the Lancet Commission for Global Surgery, the 2015 World Bank publication of Volume 1 of its Disease Control Priorities Project, "Essential Surgery", and the 2015 World Health Assembly passing of Resolution 68.15 for Strengthening Emergency and Essential Surgical Care and Anesthesia as a Component of Universal Health Coverage. The Lancet Commission for Global Surgery outlined the need for access to "available, affordable, timely and safe" surgical and anesthesia care, dimensions paralleled in ICESCR General Comment No. 14, which similarly outlines the need for available, accessible, affordable and timely healthcare.
History
Trepanation
Surgical treatments date back to the prehistoric era. The oldest for which there is evidence is trepanation, in which a hole is drilled or scraped into the skull, thus exposing the dura mater in order to treat health problems related to intracranial pressure.
Ancient Egypt
Prehistoric surgical techniques are seen in Ancient Egypt, where a mandible dated to approximately 2650 BC shows two perforations just below the root of the first molar, indicating the draining of an abscessed tooth. Surgical texts from ancient Egypt date back about 3,500 years. Surgical operations were performed by priests who specialized in medical treatment, and sutures were used to close wounds. Infections were treated with honey.
India
9,000-year-old skeletal remains of a prehistoric individual from the Indus River valley show evidence of teeth having been drilled. Sushruta Samhita is one of the oldest known surgical texts and its period is usually placed in the first millennium BCE. It describes in detail the examination, diagnosis, treatment, and prognosis of numerous ailments, as well as procedures for various forms of cosmetic surgery, plastic surgery and rhinoplasty.
Sri Lanka
In 1982, the excavation of the ancient site of Alahana Pirivena in Polonnaruwa uncovered significant evidence: the ruins of an ancient hospital. The hospital building measured 147.5 feet by 109.2 feet. Among the items discovered were instruments used for complex surgeries, including forceps, scissors, probes, lancets, and scalpels, which may be dated to the 11th century AD.
Ancient and Medieval Greece
In ancient Greece, temples dedicated to the healer-god Asclepius, known as Asclepieia (singular: Asclepieion, Ασκληπιείον), functioned as centers of medical advice, prognosis, and healing. In the Asclepieion of Epidaurus, some of the surgical cures listed, such as the opening of an abdominal abscess or the removal of traumatic foreign material, are realistic enough to have taken place. The Greek physician Galen was one of the greatest surgeons of the ancient world and performed many audacious operations, including brain and eye surgery, that were not tried again for almost two millennia. Hippocrates stated in the Hippocratic Oath: "I will not use the knife, even upon those suffering from stones, but I will leave this to those who are trained in this craft."
Researchers from Adelphi University discovered at Paliokastro on Thasos the skeletal remains of ten individuals, four women and six men, who were buried between the fourth and seventh centuries AD. Their bones illuminated their physical activities, traumas, and even a complex form of brain surgery. According to the researchers: "The very serious trauma cases sustained by both males and females had been treated surgically or orthopedically by a very experienced physician/surgeon with great training in trauma care. We believe it to have been a military physician". The researchers were impressed by the complexity of the brain surgical operation.
In 1991, at the Polystylon fort in Greece, researchers discovered the head of a Byzantine warrior of the 14th century. Analysis of the lower jaw revealed that surgery had been performed while the warrior was alive: the badly fractured jaw had been tied back together until it healed.
Islamic world
During the Islamic Golden Age, largely based upon Paul of Aegina's Pragmateia, the writings of Albucasis (Abu al-Qasim Khalaf ibn al-Abbas Al-Zahrawi), an Andalusian-Arab physician and scientist who practiced in the Zahra suburb of Córdoba, were influential. Al-Zahrawi specialized in curing disease by cauterization. He invented several surgical instruments for purposes such as inspection of the interior of the urethra and for removing foreign bodies from the throat, the ear, and other body organs. He was also the first to illustrate the various cannulae and to treat warts with an iron tube and caustic metal as a boring instrument. He describes what is thought to be the first attempt at reduction mammaplasty for the management of gynaecomastia and the first mastectomy to treat breast cancer. He is credited with the performance of the first thyroidectomy. Al-Zahrawi pioneered techniques of neurosurgery and neurological diagnosis, treating head injuries, skull fractures, spinal injuries, hydrocephalus, subdural effusions and headache. The first clinical description of an operative procedure for hydrocephalus was given by Al-Zahrawi, who clearly describes the evacuation of superficial intracranial fluid in hydrocephalic children.
Early modern Europe
In Europe, the demand grew for surgeons to formally study for many years before practicing; universities such as Montpellier, Padua and Bologna were particularly renowned. In the 12th century, Rogerius Salernitanus composed his Chirurgia, laying the foundation for modern Western surgical manuals. Barber-surgeons generally had a bad reputation that was not to improve until the development of academic surgery as a specialty of medicine, rather than an accessory field. Basic surgical principles for asepsis and related practices are known as Halsted's principles.
There were some important advances to the art of surgery during this period. The professor of anatomy at the University of Padua, Andreas Vesalius, was a pivotal figure in the Renaissance transition from classical medicine and anatomy based on the works of Galen to an empirical approach of 'hands-on' dissection. In his anatomical treatise De humani corporis fabrica, he exposed the many anatomical errors in Galen's work and advocated that all surgeons should train by engaging in practical dissections themselves.
The second figure of importance in this era was Ambroise Paré (sometimes spelled "Ambrose"), a French army surgeon from the 1530s until his death in 1590. The practice for cauterizing gunshot wounds on the battlefield had been to use boiling oil; an extremely dangerous and painful procedure. Paré began to employ a less irritating emollient, made of egg yolk, rose oil and turpentine. He also described more efficient techniques for the effective ligation of the blood vessels during an amputation.
Modern surgery
The discipline of surgery was put on a sound, scientific footing during the Age of Enlightenment in Europe. An important figure in this regard was the Scottish surgical scientist, John Hunter, generally regarded as the father of modern scientific surgery. He brought an empirical and experimental approach to the science and was renowned around Europe for the quality of his research and his written works. Hunter reconstructed surgical knowledge from scratch; refusing to rely on the testimonies of others, he conducted his own surgical experiments to determine the truth of the matter. To aid comparative analysis, he built up a collection of over 13,000 specimens of separate organ systems, from the simplest plants and animals to humans.
He greatly advanced knowledge of venereal disease and introduced many new techniques of surgery, including new methods for repairing damage to the Achilles tendon and a more effective method for applying ligature of the arteries in case of an aneurysm. He was also one of the first to understand the importance of pathology, the danger of the spread of infection and how the problem of inflammation of the wound, bone lesions and even tuberculosis often undid any benefit that was gained from the intervention. He consequently adopted the position that all surgical procedures should be used only as a last resort.
Other important 18th- and early 19th-century surgeons included Percival Pott (1713–1788), who described tuberculosis of the spine and first demonstrated that a cancer may be caused by an environmental carcinogen (he noticed a connection between chimney sweeps' exposure to soot and their high incidence of scrotal cancer). Astley Paston Cooper (1768–1841) first performed a successful ligation of the abdominal aorta, and James Syme (1799–1870) pioneered the Syme amputation at the ankle joint and successfully carried out the first hip disarticulation.
Modern pain control through anesthesia was discovered in the mid-19th century. Before the advent of anesthesia, surgery was a traumatically painful procedure and surgeons were encouraged to be as swift as possible to minimize patient suffering. This also meant that operations were largely restricted to amputations and external growth removals. Beginning in the 1840s, surgery began to change dramatically in character with the discovery of effective and practical anaesthetic chemicals such as ether, first used by the American surgeon Crawford Long, and chloroform, discovered by Scottish obstetrician James Young Simpson and later pioneered by John Snow, physician to Queen Victoria. In addition to relieving patient suffering, anaesthesia allowed more intricate operations in the internal regions of the human body. In addition, the discovery of muscle relaxants such as curare allowed for safer applications.
Infection and antisepsis
The introduction of anesthetics encouraged more surgery, which inadvertently caused more dangerous patient post-operative infections. The concept of infection was unknown until relatively modern times. The first progress in combating infection was made in 1847 by the Hungarian doctor Ignaz Semmelweis, who noticed that medical students fresh from the dissecting room were causing higher rates of maternal death than midwives. Semmelweis, despite ridicule and opposition, introduced compulsory handwashing for everyone entering the maternal wards and was rewarded with a plunge in maternal and fetal deaths; however, the Royal Society dismissed his advice.
Until the pioneering work of British surgeon Joseph Lister in the 1860s, most medical men believed that chemical damage from exposures to bad air (see "miasma") was responsible for infections in wounds, and facilities for washing hands or a patient's wounds were not available. Lister became aware of the work of French chemist Louis Pasteur, who showed that rotting and fermentation could occur under anaerobic conditions if micro-organisms were present. Pasteur suggested three methods to eliminate the micro-organisms responsible for gangrene: filtration, exposure to heat, or exposure to chemical solutions. Lister confirmed Pasteur's conclusions with his own experiments and decided to use his findings to develop antiseptic techniques for wounds. As the first two methods suggested by Pasteur were inappropriate for the treatment of human tissue, Lister experimented with the third, spraying carbolic acid on his instruments. He found that this remarkably reduced the incidence of gangrene and he published his results in The Lancet. Later, on 9 August 1867, he read a paper before the British Medical Association in Dublin, on the Antiseptic Principle of the Practice of Surgery, which was reprinted in the British Medical Journal. His work was groundbreaking and laid the foundations for a rapid advance in infection control that saw modern antiseptic operating theatres widely used within 50 years.
Lister continued to develop improved methods of antisepsis and asepsis when he realised that infection could be better avoided by preventing bacteria from getting into wounds in the first place. This led to the rise of sterile surgery. Lister introduced the Steam Steriliser to sterilize equipment, instituted rigorous hand washing and later implemented the wearing of rubber gloves. These three crucial advances – the adoption of a scientific methodology toward surgical operations, the use of anaesthetic and the introduction of sterilised equipment – laid the groundwork for the modern invasive surgical techniques of today.
The use of X-rays as an important medical diagnostic tool began with their discovery in 1895 by German physicist Wilhelm Röntgen. He noticed that these rays could penetrate the skin, allowing the skeletal structure to be captured on a specially treated photographic plate.
Surgical specialties
General surgery
Breast
Cardiothoracic
Colorectal
Craniofacial surgery
Dental surgery
Endocrine
Gynaecology
Neurosurgery
Ophthalmology
Oncological
Oral and maxillofacial surgery
Transplant
Orthopaedic surgery
Hand surgery
Otolaryngology
Paediatric (Pediatric)
Periodontal surgery
Plastic
Podiatric surgery
Skin
Trauma
Urology
Vascular
Learned societies
World Federation of Neurosurgical Societies
American College of Surgeons
American College of Osteopathic Surgeons
American Academy of Orthopedic Surgeons
American College of Foot and Ankle Surgeons
Royal Australasian College of Surgeons
Royal Australasian College of Dental Surgeons
Royal College of Physicians and Surgeons of Canada
Royal College of Surgeons in Ireland
Royal College of Surgeons of Edinburgh
Royal College of Physicians and Surgeons of Glasgow
Royal College of Surgeons of England
| Biology and health sciences | Health, fitness, and medicine | null |
45600 | https://en.wikipedia.org/wiki/Reed%E2%80%93Solomon%20error%20correction | Reed–Solomon error correction | In information theory and coding theory, Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960.
They have many applications, including consumer technologies such as MiniDiscs, CDs, DVDs, Blu-ray discs, QR codes, Data Matrix, data transmission technologies such as DSL and WiMAX, broadcast systems such as satellite communications, DVB and ATSC, and storage systems such as RAID 6.
Reed–Solomon codes operate on a block of data treated as a set of finite-field elements called symbols. Reed–Solomon codes are able to detect and correct multiple symbol errors. By adding t = n − k check symbols to the data, a Reed–Solomon code can detect (but not correct) any combination of up to t erroneous symbols, or locate and correct up to ⌊t/2⌋ erroneous symbols at unknown locations. As an erasure code, it can correct up to t erasures at locations that are known and provided to the algorithm, or it can detect and correct combinations of errors and erasures. Reed–Solomon codes are also suitable as multiple-burst bit-error correcting codes, since a sequence of b + 1 consecutive bit errors can affect at most two symbols of size b. The choice of t is up to the designer of the code and may be selected within wide limits.
There are two basic types of Reed–Solomon codes, the original view and the BCH view, with the BCH view being the most common, as BCH view decoders are faster and require less working storage than original view decoders.
History
Reed–Solomon codes were developed in 1960 by Irving S. Reed and Gustave Solomon, who were then staff members of MIT Lincoln Laboratory. Their seminal article was titled "Polynomial Codes over Certain Finite Fields". The original encoding scheme described in the Reed and Solomon article used a variable polynomial based on the message to be encoded, with only a fixed set of values (evaluation points) known to both encoder and decoder. The original theoretical decoder generated potential polynomials based on subsets of k (unencoded message length) out of n (encoded message length) values of a received message, choosing the most popular polynomial as the correct one, which was impractical for all but the simplest of cases. This was initially resolved by changing the original scheme to a BCH-code-like scheme based on a fixed polynomial known to both encoder and decoder, but later, practical decoders based on the original scheme were developed, although slower than the BCH schemes. The result of this is that there are two main types of Reed–Solomon codes: ones that use the original encoding scheme and ones that use the BCH encoding scheme.
Also in 1960, a practical fixed polynomial decoder for BCH codes developed by Daniel Gorenstein and Neal Zierler was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in an article in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in a book "Error-Correcting Codes" by W. Wesley Peterson (1961). By 1963 (or possibly earlier), J. J. Stone (and others) recognized that Reed–Solomon codes could use the BCH scheme of using a fixed generator polynomial, making such codes a special class of BCH codes, but Reed–Solomon codes based on the original encoding scheme are not a class of BCH codes, and depending on the set of evaluation points, they are not even cyclic codes.
In 1969, an improved BCH scheme decoder was developed by Elwyn Berlekamp and James Massey and has since been known as the Berlekamp–Massey decoding algorithm.
In 1975, another improved BCH scheme decoder was developed by Yasuo Sugiyama, based on the extended Euclidean algorithm.
In 1977, Reed–Solomon codes were implemented in the Voyager program in the form of concatenated error correction codes. The first commercial application in mass-produced consumer products appeared in 1982 with the compact disc, where two interleaved Reed–Solomon codes are used. Today, Reed–Solomon codes are widely implemented in digital storage devices and digital communication standards, though they are being slowly replaced by Bose–Chaudhuri–Hocquenghem (BCH) codes. For example, Reed–Solomon codes are used in the Digital Video Broadcasting (DVB) standard DVB-S, in conjunction with a convolutional inner code, but BCH codes are used with LDPC in its successor, DVB-S2.
In 1986, an original scheme decoder known as the Berlekamp–Welch algorithm was developed.
In 1996, variations of original scheme decoders called list decoders or soft decoders were developed by Madhu Sudan and others, and work continues on these types of decoders (see Guruswami–Sudan list decoding algorithm).
In 2002, another original scheme decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.
Applications
Data storage
Reed–Solomon coding is very widely used in mass storage systems to correct
the burst errors associated with media defects.
Reed–Solomon coding is a key component of the compact disc. It was the first use of strong error correction coding in a mass-produced consumer product, and DAT and DVD use similar schemes. In the CD, two layers of Reed–Solomon coding separated by a 28-way convolutional interleaver yield a scheme called Cross-Interleaved Reed–Solomon Coding (CIRC). The first element of a CIRC decoder is a relatively weak inner (32,28) Reed–Solomon code, shortened from a (255,251) code with 8-bit symbols. This code can correct up to 2 byte errors per 32-byte block. More importantly, it flags as erasures any uncorrectable blocks, i.e., blocks with more than 2 byte errors. The decoded 28-byte blocks, with erasure indications, are then spread by the deinterleaver to different blocks of the (28,24) outer code. Thanks to the deinterleaving, an erased 28-byte block from the inner code becomes a single erased byte in each of 28 outer code blocks. The outer code easily corrects this, since it can handle up to 4 such erasures per block.
The result is a CIRC that can completely correct error bursts up to 4000 bits, or about 2.5 mm on the disc surface. This code is so strong that most CD playback errors are almost certainly caused by tracking errors that cause the laser to jump track, not by uncorrectable error bursts.
DVDs use a similar scheme, but with much larger blocks, a (208,192) inner code, and a (182,172) outer code.
Reed–Solomon error correction is also used in parchive files which are commonly posted accompanying multimedia files on USENET. The distributed online storage service Wuala (discontinued in 2015) also used Reed–Solomon when breaking up files.
Bar code
Almost all two-dimensional bar codes such as PDF-417, MaxiCode, Datamatrix, QR Code, Aztec Code and Han Xin code use Reed–Solomon error correction to allow correct reading even if a portion of the bar code is damaged. When the bar code scanner cannot recognize a bar code symbol, it will treat it as an erasure.
Reed–Solomon coding is less common in one-dimensional bar codes, but is used by the PostBar symbology.
Data transmission
Specialized forms of Reed–Solomon codes, specifically Cauchy-RS and Vandermonde-RS, can be used to overcome the unreliable nature of data transmission over erasure channels. The encoding process assumes a code of RS(N, K), which generates N codewords of length N symbols, each storing K symbols of data; these are then sent over an erasure channel.
Any combination of K codewords received at the other end is enough to reconstruct all of the N codewords. The code rate is generally set to 1/2 unless the channel's erasure likelihood can be adequately modelled and is seen to be lower. In practice, N is usually 2K, meaning that at least half of all the codewords sent must be received in order to reconstruct all of the codewords sent.
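To make the any-K-of-N reconstruction concrete, here is a minimal Python sketch (assumptions ours: the prime field GF(7), the evaluation points, and all names are illustrative, not from any particular library). Each share is one symbol of a codeword formed by evaluating a message polynomial at N points, as in the original-view construction described later in this article; any K surviving (point, value) pairs recover the polynomial by Lagrange interpolation:

q = 7                                   # illustrative prime field GF(7)
K, N = 3, 6                             # any K of the N shares suffice
points = [1, 3, 2, 6, 4, 5]             # N distinct evaluation points (powers of 3 mod 7)

def lagrange_eval(xs, ys, x):
    # Evaluate at x the unique degree-<K polynomial through the pairs (xs[i], ys[i]).
    total = 0
    for i in range(len(xs)):
        num, den = 1, 1
        for j in range(len(xs)):
            if j != i:
                num = num * (x - xs[j]) % q
                den = den * (xs[i] - xs[j]) % q
        total = (total + ys[i] * num * pow(den, q - 2, q)) % q  # den^-1 by Fermat
    return total

# Shares 1, 3 and 5 of the codeword (0, 5, 1, 0, 5, 1) for p(x) = 2 + 5x^2 survive:
survivors = [(points[1], 5), (points[3], 0), (points[5], 1)]
xs, ys = zip(*survivors)
print([lagrange_eval(xs, ys, x) for x in points])   # [0, 5, 1, 0, 5, 1]

Any other choice of three surviving shares would reconstruct the same codeword.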
Reed–Solomon codes are also used in xDSL systems and CCSDS's Space Communications Protocol Specifications as a form of forward error correction.
Space transmission
One significant application of Reed–Solomon coding was to encode the digital pictures sent back by the Voyager program.
Voyager introduced Reed–Solomon coding concatenated with convolutional codes, a practice that has since become very widespread in deep space and satellite (e.g., direct digital broadcasting) communications.
Viterbi decoders tend to produce errors in short bursts. Correcting these burst errors is a job best done by short or simplified Reed–Solomon codes.
Modern versions of concatenated Reed–Solomon/Viterbi-decoded convolutional coding were and are used on the Mars Pathfinder, Galileo, Mars Exploration Rover and Cassini missions, where they perform within about 1–1.5 dB of the ultimate limit, the Shannon capacity.
These concatenated codes are now being replaced by more powerful turbo codes.
Constructions (encoding)
The Reed–Solomon code is actually a family of codes, where every code is characterised by three parameters: an alphabet size q, a block length n, and a message length k, with k < n ≤ q. The set of alphabet symbols is interpreted as the finite field F of order q, and thus, q must be a prime power. In the most useful parameterizations of the Reed–Solomon code, the block length is usually some constant multiple of the message length, that is, the rate R = k/n is some constant, and furthermore, the block length is either equal to the alphabet size or one less than it, i.e., n = q or n = q − 1.
Reed & Solomon's original view: The codeword as a sequence of values
There are different encoding procedures for the Reed–Solomon code, and thus, there are different ways to describe the set of all codewords.
In the original view of Reed & Solomon (1960), every codeword of the Reed–Solomon code is a sequence of function values of a polynomial of degree less than k. In order to obtain a codeword of the Reed–Solomon code, the message symbols (each within the q-sized alphabet) are treated as the coefficients of a polynomial p of degree less than k, over the finite field F with q elements.
In turn, the polynomial p is evaluated at n ≤ q distinct points a_1, ..., a_n of the field F, and the sequence of values (p(a_1), ..., p(a_n)) is the corresponding codeword. Common choices for a set of evaluation points include {0, 1, 2, ..., n − 1}, {0, 1, α, α², ..., α^(n−2)}, or for n < q, {1, α, α², ..., α^(n−1)}, where α is a primitive element of F.
Formally, the set C of codewords of the Reed–Solomon code is defined as follows:
C = { (p(a_1), p(a_2), ..., p(a_n)) | p is a polynomial over F of degree < k }
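As a minimal sketch (assuming, for illustration only, the field GF(7) with primitive element 3 and n = 6; none of these values come from the article), original-view encoding is plain polynomial evaluation:

q, n, k = 7, 6, 3                       # illustrative parameters
alpha = 3                               # a primitive element of GF(7)
points = [pow(alpha, i, q) for i in range(n)]   # evaluation points 1, 3, 2, 6, 4, 5

def encode_original_view(msg):
    # msg holds the k coefficients of p(x), lowest degree first.
    return [sum(c * pow(x, i, q) for i, c in enumerate(msg)) % q for x in points]

print(encode_original_view([2, 0, 5]))  # p(x) = 2 + 5x^2 -> [0, 5, 1, 0, 5, 1]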
Since any two distinct polynomials of degree less than k agree in at most k − 1 points, this means that any two codewords of the Reed–Solomon code disagree in at least n − (k − 1) = n − k + 1 positions. Furthermore, there are two polynomials that do agree in k − 1 points but are not equal, and thus, the distance of the Reed–Solomon code is exactly d = n − k + 1. Then the relative distance is δ = d/n = 1 − k/n + 1/n = 1 − R + 1/n ≈ 1 − R, where R = k/n is the rate. This trade-off between the relative distance and the rate is asymptotically optimal since, by the Singleton bound, every code satisfies δ + R ≤ 1 + 1/n.
Being a code that achieves this optimal trade-off, the Reed–Solomon code belongs to the class of maximum distance separable codes.
While the number of different polynomials of degree less than k and the number of different messages are both equal to q^k, and thus every message can be uniquely mapped to such a polynomial, there are different ways of doing this encoding. The original construction of Reed & Solomon (1960) interprets the message x as the coefficients of the polynomial p, whereas subsequent constructions interpret the message as the values of the polynomial at the first k points a_1, ..., a_k and obtain the polynomial p by interpolating these values with a polynomial of degree less than k. The latter encoding procedure, while being slightly less efficient, has the advantage that it gives rise to a systematic code, that is, the original message is always contained as a subsequence of the codeword.
Simple encoding procedure: The message as a sequence of coefficients
In the original construction of Reed & Solomon (1960), the message x = (x_1, ..., x_k) is mapped to the polynomial p_x with
p_x(a) = x_1 + x_2·a + x_3·a² + ⋯ + x_k·a^(k−1)
The codeword of x is obtained by evaluating p_x at n different points a_1, ..., a_n of the field F. Thus the classical encoding function C : F^k → F^n for the Reed–Solomon code is defined as follows:
C(x) = (p_x(a_1), ..., p_x(a_n))
This function C is a linear mapping, that is, it satisfies C(x) = x·A for the following k × n matrix A with elements from F:
A =
[ 1          1          ⋯   1          ]
[ a_1        a_2        ⋯   a_n        ]
[ a_1²       a_2²       ⋯   a_n²       ]
[ ⋮                          ⋮         ]
[ a_1^(k−1)  a_2^(k−1)  ⋯   a_n^(k−1)  ]
This matrix A is a Vandermonde matrix over F. In other words, the Reed–Solomon code is a linear code, and in the classical encoding procedure, its generator matrix is A.
Systematic encoding procedure: The message as an initial sequence of values
There are alternative encoding procedures that produce a systematic Reed–Solomon code. One method uses Lagrange interpolation to compute the polynomial p_x such that p_x(a_i) = x_i for all i in {1, ..., k}. Then p_x is evaluated at the other points a_(k+1), ..., a_n.
This function C is a linear mapping, that is, it satisfies C(x) = x·G for the k × n matrix G with elements from F obtained by multiplying the Vandermonde matrix A by the inverse of A's left square (k × k) submatrix; the left square submatrix of G is then the identity matrix, so the first k symbols of each codeword are the message symbols themselves.
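A sketch of this systematic variant under the same illustrative GF(7) assumptions as the encoding sketch above (the interpolation helper is repeated so the sketch stands alone); the first k codeword symbols are the message itself:

q, n, k = 7, 6, 3
points = [1, 3, 2, 6, 4, 5]             # same evaluation points as in the sketch above

def lagrange_eval(xs, ys, x):
    # Evaluate at x the unique degree-<k polynomial through the pairs (xs[i], ys[i]).
    total = 0
    for i in range(len(xs)):
        num, den = 1, 1
        for j in range(len(xs)):
            if j != i:
                num = num * (x - xs[j]) % q
                den = den * (xs[i] - xs[j]) % q
        total = (total + ys[i] * num * pow(den, q - 2, q)) % q
    return total

def encode_systematic(msg):
    # Interpolate p through (points[i], msg[i]) for i < k, then evaluate elsewhere.
    return list(msg) + [lagrange_eval(points[:k], msg, x) for x in points[k:]]

print(encode_systematic([2, 0, 5]))     # [2, 0, 5, 0, 1, 1]: message appears verbatim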
Discrete Fourier transform and its inverse
A discrete Fourier transform is essentially the same as the encoding procedure; it uses the message polynomial to map the set of evaluation points into the sequence of codeword values, as shown above.
The inverse Fourier transform could be used to convert an error-free set of n < q message values back into the encoding polynomial of k coefficients, with the constraint that in order for this to work, the set of evaluation points used to encode the message must be a set of increasing powers of α: a_i = α^(i−1).
However, Lagrange interpolation performs the same conversion without the constraint on the set of evaluation points or the requirement of an error free set of message values and is used for systematic encoding, and in one of the steps of the Gao decoder.
The BCH view: The codeword as a sequence of coefficients
In this view, the message is interpreted as the coefficients of a polynomial p(x). The sender computes a related polynomial s(x) of degree n − 1, where n ≤ q − 1, and sends the polynomial s(x). The polynomial s(x) is constructed by multiplying the message polynomial p(x), which has degree at most k − 1, by a generator polynomial g(x) of degree t = n − k that is known to both the sender and the receiver. The generator polynomial g(x) is defined as the polynomial whose roots are sequential powers of the Galois field primitive element α:
g(x) = (x − α^j)(x − α^(j+1)) ⋯ (x − α^(j+n−k−1))
For a "narrow sense code", j = 1.
Systematic encoding procedure
The encoding procedure for the BCH view of Reed–Solomon codes can be modified to yield a systematic encoding procedure, in which each codeword contains the message as a prefix, and simply appends error correcting symbols as a suffix. Here, instead of sending s(x) = p(x)·g(x), the encoder constructs the transmitted polynomial s(x) such that the coefficients of the k largest monomials are equal to the corresponding coefficients of p(x), and the lower-order coefficients of s(x) are chosen exactly in such a way that s(x) becomes divisible by g(x). Then the coefficients of p(x) are a subsequence of the coefficients of s(x). To get a code that is overall systematic, we construct the message polynomial p(x) by interpreting the message as the sequence of its coefficients.
Formally, the construction is done by multiplying p(x) by x^t to make room for the t = n − k check symbols, dividing that product by g(x) to find the remainder, and then compensating for that remainder by subtracting it. The t check symbols are created by computing the remainder s_r(x):
s_r(x) = p(x)·x^t mod g(x)
The remainder has degree at most t − 1, whereas the coefficients of x^(t−1), x^(t−2), ..., x, 1 in the polynomial p(x)·x^t are zero. Therefore, the following definition of the codeword s(x) has the property that the first k coefficients are identical to the coefficients of p(x):
s(x) = p(x)·x^t − s_r(x)
As a result, the codewords s(x) are indeed elements of the ideal generated by g(x), that is, they are divisible by the generator polynomial g(x):
s(x) ≡ p(x)·x^t − s_r(x) ≡ s_r(x) − s_r(x) ≡ 0 (mod g(x))
This function s is a linear mapping. The corresponding systematic encoding matrix G is the k × n matrix with elements from F whose left square submatrix is the identity matrix; each row is obtained by systematically encoding the corresponding unit message. Ignoring leading zeroes, the last row consists of the coefficients of g(x), since encoding the message polynomial p(x) = 1 yields s(x) = x^t − (x^t mod g(x)) = g(x).
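A sketch of this systematic construction (Python, ours), for the same GF(929) RS(7,3) code; it produces the codeword that the worked example below starts from:

q, alpha, n, k = 929, 3, 7, 3
t = n - k

def poly_mul(a, b):                     # coefficient lists, lowest degree first
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def poly_mod(a, m):                     # remainder of a(x) divided by monic m(x)
    a = a[:]
    for i in range(len(a) - 1, len(m) - 2, -1):
        c = a[i]
        if c:
            for j in range(len(m)):
                a[i - len(m) + 1 + j] = (a[i - len(m) + 1 + j] - c * m[j]) % q
    return a[:len(m) - 1]

g = [1]
for j in range(1, t + 1):
    g = poly_mul(g, [(-pow(alpha, j, q)) % q, 1])

def encode_bch_systematic(msg):         # msg: k coefficients of p(x), lowest first
    shifted = [0] * t + list(msg)       # p(x) * x^t
    s_r = poly_mod(shifted, g)          # s_r(x) = p(x) x^t mod g(x)
    return [(-c) % q for c in s_r] + list(msg)   # s(x) = p(x) x^t - s_r(x)

print(encode_bch_systematic([1, 2, 3]))
# [474, 487, 191, 382, 1, 2, 3]: s(x) = 3x^6 + 2x^5 + x^4 + 382x^3 + 191x^2 + 487x + 474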
Properties
The Reed–Solomon code is an [n, k, n − k + 1] code; in other words, it is a linear block code of length n (over F) with dimension k and minimum Hamming distance d = n − k + 1. The Reed–Solomon code is optimal in the sense that the minimum distance has the maximum value possible for a linear code of size (n, k); this is known as the Singleton bound. Such a code is also called a maximum distance separable (MDS) code.
The error-correcting ability of a Reed–Solomon code is determined by its minimum distance, or equivalently, by n − k, the measure of redundancy in the block. If the locations of the error symbols are not known in advance, then a Reed–Solomon code can correct up to ⌊(n − k)/2⌋ erroneous symbols, i.e., it can correct half as many errors as there are redundant symbols added to the block. Sometimes error locations are known in advance (e.g., "side information" in demodulator signal-to-noise ratios); these are called erasures. A Reed–Solomon code (like any MDS code) is able to correct twice as many erasures as errors, and any combination of errors and erasures can be corrected as long as the relation 2E + S ≤ n − k is satisfied, where E is the number of errors and S is the number of erasures in the block.
The theoretical error bound can be described in closed form for the AWGN channel, with one expression for FSK and another for other modulation schemes, as a function of the symbol error rate in the uncoded AWGN case and the modulation order.
For practical uses of Reed–Solomon codes, it is common to use a finite field F with 2^m elements. In this case, each symbol can be represented as an m-bit value.
The sender sends the data points as encoded blocks, and the number of symbols in the encoded block is n = 2^m − 1. Thus a Reed–Solomon code operating on 8-bit symbols has n = 2^8 − 1 = 255 symbols per block. (This is a very popular value because of the prevalence of byte-oriented computer systems.) The number k, with k < n, of data symbols in the block is a design parameter. A commonly used code encodes k = 223 eight-bit data symbols plus 32 eight-bit parity symbols in an n = 255-symbol block; this is denoted as a (n, k) = (255, 223) code, and is capable of correcting up to 16 symbol errors per block.
The Reed–Solomon code properties discussed above make them especially well-suited to applications where errors occur in bursts. This is because it does not matter to the code how many bits in a symbol are in error — if multiple bits in a symbol are corrupted it only counts as a single error. Conversely, if a data stream is not characterized by error bursts or drop-outs but by random single bit errors, a Reed–Solomon code is usually a poor choice compared to a binary code.
The Reed–Solomon code, like the convolutional code, is a transparent code. This means that if the channel symbols have been inverted somewhere along the line, the decoders will still operate. The result will be the inversion of the original data. However, the Reed–Solomon code loses its transparency when the code is shortened (see 'Remarks' at the end of this section). The "missing" bits in a shortened code need to be filled by either zeros or ones, depending on whether the data is complemented or not. (To put it another way, if the symbols are inverted, then the zero-fill needs to be inverted to a one-fill.) For this reason it is mandatory that the sense of the data (i.e., true or complemented) be resolved before Reed–Solomon decoding.
Whether the Reed–Solomon code is cyclic or not depends on subtle details of the construction. In the original view of Reed and Solomon, where the codewords are the values of a polynomial, one can choose the sequence of evaluation points in such a way as to make the code cyclic. In particular, if α is a primitive root of the field F, then by definition all non-zero elements of F take the form α^i for i ∈ {1, 2, ..., q − 1}, where q = |F|. Each polynomial p over F gives rise to a codeword (p(α¹), ..., p(α^(q−1))). Since the function a ↦ p(α·a) is also a polynomial of the same degree, this function gives rise to a codeword (p(α²), ..., p(α^q)); since α^q = α¹ holds, this codeword is the cyclic left-shift of the original codeword derived from p. So choosing a sequence of primitive root powers as the evaluation points makes the original view Reed–Solomon code cyclic. Reed–Solomon codes in the BCH view are always cyclic because BCH codes are cyclic.
Remarks
Designers are not required to use the "natural" sizes of Reed–Solomon code blocks. A technique known as "shortening" can produce a smaller code of any desired size from a larger code. For example, the widely used (255,223) code can be converted to a (160,128) code by padding the unused portion of the source block with 95 binary zeroes and not transmitting them. At the decoder, the same portion of the block is loaded locally with binary zeroes.
The QR code, Ver 3 (29×29), uses interleaved blocks. The message has 26 data bytes and is encoded using two Reed–Solomon code blocks. Each block is a (255,233) Reed–Solomon code shortened to a (35,13) code.
The Delsarte–Goethals–Seidel theorem illustrates an example of an application of shortened Reed–Solomon codes. In parallel to shortening, a technique known as puncturing allows omitting some of the encoded parity symbols.
BCH view decoders
The decoders described in this section use the BCH view of a codeword as a sequence of coefficients. They use a fixed generator polynomial known to both encoder and decoder.
Peterson–Gorenstein–Zierler decoder
Daniel Gorenstein and Neal Zierler developed a decoder that was described in an MIT Lincoln Laboratory report by Zierler in January 1960 and later in a paper in June 1961. The Gorenstein–Zierler decoder and the related work on BCH codes are described in the book "Error-Correcting Codes" by W. Wesley Peterson (1961).
Formulation
The transmitted message, (c_0, ..., c_(n−1)), is viewed as the coefficients of a polynomial s(x):
s(x) = c_(n−1)·x^(n−1) + ⋯ + c_1·x + c_0
As a result of the Reed–Solomon encoding procedure, s(x) is divisible by the generator polynomial
g(x) = (x − α)(x − α²) ⋯ (x − α^(n−k)),
where α is a primitive element.
Since s(x) is a multiple of the generator g(x), it follows that it "inherits" all its roots:
s(α^j) = 0, for j = 1, 2, ..., n − k
The transmitted polynomial is corrupted in transit by an error polynomial
e(x) = e_(n−1)·x^(n−1) + ⋯ + e_1·x + e_0
to produce the received polynomial
r(x) = s(x) + e(x)
Coefficient e_i will be zero if there is no error at that power of x, and nonzero if there is an error. If there are ν errors at distinct powers i_k of x, then
e(x) = e_(i_1)·x^(i_1) + e_(i_2)·x^(i_2) + ⋯ + e_(i_ν)·x^(i_ν)
The goal of the decoder is to find the number of errors (ν), the positions of the errors (i_k), and the error values at those positions (e_(i_k)). From those, e(x) can be calculated and subtracted from r(x) to get the originally sent message s(x).
Syndrome decoding
The decoder starts by evaluating the polynomial as received at the points α¹, α², ..., α^(n−k). We call the results of that evaluation the "syndromes" S_j. They are defined as
S_j = r(α^j), for j = 1, 2, ..., n − k
Note that S_j = e(α^j), because r(α^j) = s(α^j) + e(α^j) and s(x) has roots at α^j, as shown in the previous section.
The advantage of looking at the syndromes is that the message polynomial drops out. In other words, the syndromes only relate to the error and are unaffected by the actual contents of the message being transmitted. If the syndromes are all zero, the algorithm stops here and reports that the message was not corrupted in transit.
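As a quick illustration (a Python sketch under our own assumptions, using the GF(929) RS(7,3) code from the encoder sketch earlier), the syndromes of an uncorrupted codeword are all zero, while a corrupted one yields nonzero syndromes:

q, alpha = 929, 3
s = [474, 487, 191, 382, 1, 2, 3]       # systematic codeword from the encoder sketch

def syndromes(word, count):
    # S_j = word evaluated at alpha^j, for j = 1 .. count
    return [sum(c * pow(alpha, j * i, q) for i, c in enumerate(word)) % q
            for j in range(1, count + 1)]

print(syndromes(s, 4))                                  # [0, 0, 0, 0] -> no corruption
print(syndromes([474, 487, 191, 456, 123, 2, 3], 4))    # [732, 637, 762, 925]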
Error locators and error values
For convenience, define the error locators X_k and error values Y_k as
X_k = α^(i_k),  Y_k = e_(i_k)
Then the syndromes can be written in terms of these error locators and error values as
S_j = Y_1·X_1^j + Y_2·X_2^j + ⋯ + Y_ν·X_ν^j
This definition of the syndrome values is equivalent to the previous since (α^j)^(i_k) = α^(j·i_k) = (α^(i_k))^j = X_k^j.
The syndromes give a system of n − k ≥ 2ν equations in 2ν unknowns, but that system of equations is nonlinear in the X_k and does not have an obvious solution. However, if the X_k were known (see below), then the syndrome equations provide a linear system of equations
X_1·Y_1 + X_2·Y_2 + ⋯ + X_ν·Y_ν = S_1
X_1²·Y_1 + X_2²·Y_2 + ⋯ + X_ν²·Y_ν = S_2
⋮
X_1^(n−k)·Y_1 + X_2^(n−k)·Y_2 + ⋯ + X_ν^(n−k)·Y_ν = S_(n−k)
which can easily be solved for the Y_k error values.
Consequently, the problem is finding the X_k, because then the leftmost matrix would be known, and both sides of the equation could be multiplied by its inverse, yielding the Y_k.
In the variant of this algorithm where the locations of the errors are already known (when it is being used as an erasure code), this is the end. The error locations (X_k) are already known by some other method (for example, in an FM transmission, the sections where the bitstream was unclear or overcome with interference are probabilistically determinable from frequency analysis). In this scenario, up to n − k errors can be corrected.
The rest of the algorithm serves to locate the errors and will require syndrome values up to S_(2ν), instead of just the ν used thus far. This is why twice as many error-correcting symbols need to be added as errors that can be corrected without knowing their locations.
Error locator polynomial
There is a linear recurrence relation that gives rise to a system of linear equations. Solving that system identifies the error locations X_k.
Define the error locator polynomial Λ(x) as
Λ(x) = (1 − x·X_1)(1 − x·X_2) ⋯ (1 − x·X_ν) = 1 + Λ_1·x + Λ_2·x² + ⋯ + Λ_ν·x^ν
The zeros of Λ(x) are the reciprocals X_k^(−1). This follows from the above product notation construction, since if x = X_k^(−1), then one of the multiplied terms will be zero, (1 − X_k^(−1)·X_k) = 1 − 1 = 0, making the whole polynomial evaluate to zero:
Λ(X_k^(−1)) = 0
Let j be any integer such that 1 ≤ j ≤ ν. Multiply both sides by Y_k·X_k^(j+ν), and it will still be zero:
Y_k·X_k^(j+ν)·Λ(X_k^(−1)) = 0
Sum for k = 1 to ν, and it will still be zero:
Σ_(k=1..ν) Y_k·X_k^(j+ν)·Λ(X_k^(−1)) = 0
Collect each term into its own sum:
Σ_(k=1..ν) Y_k·X_k^(j+ν) + Σ_(k=1..ν) Λ_1·Y_k·X_k^(j+ν−1) + ⋯ + Σ_(k=1..ν) Λ_ν·Y_k·X_k^j = 0
Extract the constant values of Λ that are unaffected by the summation:
Σ_(k=1..ν) Y_k·X_k^(j+ν) + Λ_1·Σ_(k=1..ν) Y_k·X_k^(j+ν−1) + ⋯ + Λ_ν·Σ_(k=1..ν) Y_k·X_k^j = 0
These summations are now equivalent to the syndrome values, which we know and can substitute in. This therefore reduces to
S_(j+ν) + Λ_1·S_(j+ν−1) + ⋯ + Λ_ν·S_j = 0
Subtracting S_(j+ν) from both sides yields
Λ_1·S_(j+ν−1) + Λ_2·S_(j+ν−2) + ⋯ + Λ_ν·S_j = −S_(j+ν)
Recall that j was chosen to be any integer between 1 and ν inclusive, and this equivalence is true for all such values. Therefore, we have ν linear equations, not just one. This system of linear equations can therefore be solved for the coefficients Λ_i of the error location polynomial:
Λ_1·S_ν + Λ_2·S_(ν−1) + ⋯ + Λ_ν·S_1 = −S_(ν+1)
Λ_1·S_(ν+1) + Λ_2·S_ν + ⋯ + Λ_ν·S_2 = −S_(ν+2)
⋮
Λ_1·S_(2ν−1) + Λ_2·S_(2ν−2) + ⋯ + Λ_ν·S_ν = −S_(2ν)
The above assumes that the decoder knows the number of errors ν, but that number has not been determined yet. The PGZ decoder does not determine ν directly but rather searches for it by trying successive values. The decoder first assumes the largest value for a trial ν and sets up the linear system for that value. If the equations can be solved (i.e., the matrix determinant is nonzero), then that trial value is the number of errors. If the linear system cannot be solved, then the trial ν is reduced by one and the next smaller system is examined.
Find the roots of the error locator polynomial
Use the coefficients Λ_i found in the last step to build the error locator polynomial. The roots of the error locator polynomial can be found by exhaustive search. The error locators X_k are the reciprocals of those roots. The order of coefficients of the error locator polynomial can be reversed, in which case the roots of that reversed polynomial are the error locators X_k (not their reciprocals X_k^(−1)). Chien search is an efficient implementation of this step.
Calculate the error values
Once the error locators X_k are known, the error values can be determined. This can be done by direct solution for Y_k in the error equations matrix given above, or using the Forney algorithm.
Calculate the error locations
Calculate i_k by taking the log base α of X_k. This is generally done using a precomputed lookup table.
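For example, a sketch of such a lookup table (Python, ours) for the GF(929), α = 3 field used in the example below:

q, alpha = 929, 3
log_table = {pow(alpha, i, q): i for i in range(q - 1)}   # discrete logs base alpha
print(log_table[27], log_table[81])   # 3 4: locators 3^3 and 3^4 -> locations 3 and 4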
Fix the errors
Finally, e(x) is generated from i_k and e_(i_k) and then is subtracted from r(x) to get the originally sent message s(x), with errors corrected.
Example
Consider the Reed–Solomon code defined in GF(929) with α = 3 and t = n − k = 4 (this is used in PDF417 barcodes) for a RS(7,3) code. The generator polynomial is
g(x) = (x − 3)(x − 3²)(x − 3³)(x − 3⁴) = x⁴ + 809x³ + 723x² + 568x + 522
If the message polynomial is p(x) = 3x² + 2x + 1, then a systematic codeword is encoded as follows:
s_r(x) = p(x)·x^t mod g(x) = 547x³ + 738x² + 442x + 455
s(x) = p(x)·x^t − s_r(x) = 3x⁶ + 2x⁵ + 1x⁴ + 382x³ + 191x² + 487x + 474
Errors in transmission might cause this to be received instead:
r(x) = s(x) + e(x) = 3x⁶ + 2x⁵ + 123x⁴ + 456x³ + 191x² + 487x + 474
The syndromes are calculated by evaluating r at powers of α:
S_1 = r(3) = 732, S_2 = r(3²) = 637, S_3 = r(3³) = 762, S_4 = r(3⁴) = 925,
yielding the system
Λ_1·S_2 + Λ_2·S_1 = −S_3 = 167
Λ_1·S_3 + Λ_2·S_2 = −S_4 = 4
Using Gaussian elimination,
Λ(x) = 329x² + 821x + 1,
with roots x_1 = 757 = 3^(−3) and x_2 = 562 = 3^(−4).
The coefficients can be reversed:
R(x) = x² + 821x + 329
to produce roots 27 = 3³ and 81 = 3⁴ with positive exponents, but typically this isn't used. The logarithm of the inverted roots corresponds to the error locations (right to left, location 0 is the last term in the codeword).
To calculate the error values, apply the Forney algorithm:
Ω(x) = S(x)·Λ(x) mod x⁴ = 546x + 732
Λ'(x) = 658x + 821
Y_1 = −Ω(X_1^(−1)) / Λ'(X_1^(−1)) = 74
Y_2 = −Ω(X_2^(−1)) / Λ'(X_2^(−1)) = 122
Subtracting e(x) = 122x⁴ + 74x³ from the received polynomial r(x) reproduces the original codeword s.
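The full chain of the example can be checked with a compact sketch (Python, ours; a brute-force root search stands in for the Chien search, and the error values are obtained from the syndrome equations rather than the Forney algorithm):

q, alpha = 929, 3
r = [474, 487, 191, 456, 123, 2, 3]     # received polynomial, lowest degree first
inv = lambda a: pow(a, q - 2, q)        # inverses via Fermat's little theorem

S = [sum(c * pow(alpha, j * i, q) for i, c in enumerate(r)) % q for j in range(1, 5)]
assert S == [732, 637, 762, 925]        # the syndromes computed above

# Trial nu = 2: solve  S1*L2 + S2*L1 = -S3  and  S2*L2 + S3*L1 = -S4  (Cramer's rule)
det = (S[0] * S[2] - S[1] * S[1]) % q
L1 = (S[1] * S[2] - S[0] * S[3]) * inv(det) % q
L2 = (S[1] * S[3] - S[2] * S[2]) * inv(det) % q
assert (L1, L2) == (821, 329)           # Lambda(x) = 329x^2 + 821x + 1

# Error locators are reciprocals of Lambda's roots (brute-force search):
roots = [x for x in range(1, q) if (1 + L1 * x + L2 * x * x) % q == 0]
X = sorted(inv(x) for x in roots)
assert X == [27, 81]                    # 3^3 and 3^4 -> errors at positions 3 and 4

# Error values from  Y1*X1^j + Y2*X2^j = S_j  for j = 1, 2:
d = X[0] * X[1] * (X[1] - X[0]) % q
Y1 = X[1] * (S[0] * X[1] - S[1]) * inv(d) % q
Y2 = X[0] * (S[1] - S[0] * X[0]) * inv(d) % q
assert (Y1, Y2) == (74, 122)            # e(x) = 122x^4 + 74x^3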
Berlekamp–Massey decoder
The Berlekamp–Massey algorithm is an alternate iterative procedure for finding the error locator polynomial. During each iteration, it calculates a discrepancy based on a current instance of Λ(x) with an assumed number of errors e:
Δ = S_i + Λ_1·S_(i−1) + ⋯ + Λ_e·S_(i−e)
and then adjusts Λ(x) and e so that a recalculated Δ would be zero. The article Berlekamp–Massey algorithm has a detailed description of the procedure. In the following example, C(x) is used to represent Λ(x).
Example
Using the same data as the Peterson–Gorenstein–Zierler example above, the algorithm iterates over the four syndromes, adjusting C(x) whenever the discrepancy is nonzero. The final value of C is the error locator polynomial, Λ(x).
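A minimal field-arithmetic sketch of the procedure (Python; the function name and structure are ours, following the description above), run on the same four syndromes:

def berlekamp_massey(S, q):
    # Field variant: returns the connection polynomial C(x), lowest degree first,
    # and L, the assumed number of errors.
    C, B = [1], [1]                     # current and shadow connection polynomials
    L, m, b = 0, 1, 1                   # b is the last nonzero discrepancy
    for n in range(len(S)):
        d = S[n]                        # discrepancy for syndrome S[n]
        for i in range(1, L + 1):
            d = (d + C[i] * S[n - i]) % q
        if d == 0:
            m += 1
            continue
        T = C[:]                        # keep a copy before adjusting C
        coef = d * pow(b, q - 2, q) % q
        if len(B) + m > len(C):
            C += [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):      # C(x) -= (d/b) x^m B(x)
            C[i + m] = (C[i + m] - coef * Bi) % q
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

print(berlekamp_massey([732, 637, 762, 925], 929))
# ([1, 821, 329], 2): Lambda(x) = 329x^2 + 821x + 1, matching the PGZ result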
Euclidean decoder
Another iterative method for calculating both the error locator polynomial and the error value polynomial is based on Sugiyama's adaptation of the extended Euclidean algorithm.
Define S(x), Λ(x), and Ω(x) for t syndromes and e errors:
S(x) = S_1 + S_2·x + S_3·x² + ⋯ + S_t·x^(t−1)
Λ(x) = 1 + Λ_1·x + ⋯ + Λ_e·x^e
Ω(x) = the error evaluator polynomial, of degree less than e
The key equation is:
Λ(x)·S(x) = Q(x)·x^t + Ω(x)
For t = 6 and e = 3, the middle terms of the product Λ(x)·S(x), the coefficients of x³, x⁴ and x⁵, are zero due to the linear relationship between Λ and the syndromes; the low-order terms form Ω(x) and the high-order terms form Q(x)·x^t.
The extended Euclidean algorithm can find a series of polynomials of the form
Ai(x)·S(x) + Bi(x)·x^t = Ri(x),
where the degree of R decreases as i increases. Once the degree of Ri(x) < t/2, then
Λ(x) = Ai(x) and Ω(x) = Ri(x).
B(x) and Q(x) don't need to be saved, so the algorithm becomes:
R−1 := xt
R0 := S(x)
A−1 := 0
A0 := 1
i := 0
while degree of Ri ≥ t/2
i := i + 1
Q := Ri-2 / Ri-1
Ri := Ri-2 - Q Ri-1
Ai := Ai-2 - Q Ai-1
To set the low-order term of Λ(x) to 1, divide Λ(x) and Ω(x) by Ai(0):
Λ(x) = Ai(x) / Ai(0)
Ω(x) = Ri(x) / Ai(0)
Ai(0) is the constant (low order) term of Ai.
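The pseudocode translates almost line for line into a runnable sketch, shown here in Python over the prime field GF(929) of the running example, with polynomials stored high-order first (all names illustrative):

p = 929

def poly_trim(a):                        # drop leading zero coefficients
    i = 0
    while i < len(a) - 1 and a[i] == 0:
        i += 1
    return a[i:]

def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_sub(a, b):
    L = max(len(a), len(b))
    a, b = [0] * (L - len(a)) + a, [0] * (L - len(b)) + b
    return poly_trim([(x - y) % p for x, y in zip(a, b)])

def poly_divmod(a, b):                   # quotient and remainder of a / b
    a, q = a[:], [0] * max(1, len(a) - len(b) + 1)
    inv = pow(b[0], -1, p)
    for i in range(len(a) - len(b) + 1):
        q[i] = f = a[i] * inv % p
        for j, bj in enumerate(b):
            a[i + j] = (a[i + j] - f * bj) % p
    return poly_trim(q), poly_trim(a)

def sugiyama(synd):
    # synd = [S_1, ..., S_t]; S(x) = S_1 + S_2 x + ... + S_t x^(t-1)
    t = len(synd)
    r_prev, r = [1] + [0] * t, poly_trim(list(reversed(synd)))   # x^t and S(x)
    a_prev, a = [0], [1]                 # only A_i is tracked, as noted above
    while len(r) - 1 >= t / 2:           # while degree of R_i >= t/2
        q, rem = poly_divmod(r_prev, r)
        r_prev, r = r, rem
        a_prev, a = a, poly_sub(a_prev, poly_mul(q, a))
    inv = pow(a[-1], -1, p)              # A_i(0), the low-order term
    lam = [c * inv % p for c in reversed(a)]     # Lambda, low-order first
    omega = [c * inv % p for c in reversed(r)]   # Omega, low-order first
    return lam, omega

For the syndromes [732, 637, 762, 925] of the running example, sugiyama returns Λ = [1, 821, 329] and Ω = [732, 546], i.e. Λ(x) = 329x^2 + 821x + 1 and Ω(x) = 546x + 732, matching the other decoders.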
Example
Using the same data as the Peterson–Gorenstein–Zierler example above:
Decoder using discrete Fourier transform
A discrete Fourier transform can be used for decoding. To avoid conflict with syndrome names, let c(x) = s(x) be the encoded codeword. r(x) and e(x) are the same as above. Define C(x), E(x), and R(x) as the discrete Fourier transforms of c(x), e(x), and r(x). Since r(x) = c(x) + e(x), and since a discrete Fourier transform is a linear operator, R(x) = C(x) + E(x).
Transform r(x) to R(x) using the discrete Fourier transform. Since the calculation for a discrete Fourier transform is the same as the calculation for syndromes, t coefficients of R(x) and E(x) are the same as the syndromes:
R_j = E_j = S_j,  for j = 1, …, t
Use R1 through Rt as syndromes (they're the same) and generate the error locator polynomial using the methods from any of the above decoders.
Let v = number of errors. Generate E(x) using the known coefficients E1 to Et, the error locator polynomial, and these formulas:
E_j = −(Λ1 E_{j−1} + Λ2 E_{j−2} + ⋯ + Λv E_{j−v}),  for j = t+1, …, n−1
E_0 = −(E_v + Λ1 E_{v−1} + ⋯ + Λ_{v−1} E_1) / Λv
Then calculate C(x) = R(x) − E(x) and take the inverse transform (polynomial interpolation) of C(x) to produce c(x).
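The coefficient-extension step just described can be sketched as follows (prime field, illustrative names; assumes v ≥ 1 errors, syndromes E1..Et, and Λ stored low-order first as [1, Λ1, ..., Λv]):

def extend_error_transform(synd, lam, n, p=929):
    # Returns [E_0, E_1, ..., E_{n-1}] from the t known coefficients.
    v = len(lam) - 1
    E = [None] + list(synd)
    for j in range(len(synd) + 1, n):     # forward recurrence for E_{t+1}..E_{n-1}
        E.append((-sum(lam[i] * E[j - i] for i in range(1, v + 1))) % p)
    # E_0 from the same recurrence written at j = v and solved for E_0
    E[0] = (-(E[v] + sum(lam[i] * E[v - i] for i in range(1, v)))
            * pow(lam[v], -1, p)) % p
    return E

The recurrence E_j = −(Λ1 E_{j−1} + ⋯ + Λv E_{j−v}) holds because Λ(x) has the reciprocals of the error locators as its roots.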
Decoding beyond the error-correction bound
The Singleton bound states that the minimum distance d of a linear block code of size (n,k) is upper-bounded by n − k + 1. The distance d was usually understood to limit the error-correction capability to ⌊(d−1)/2⌋ errors. The Reed–Solomon code achieves this bound with equality, and can thus correct up to ⌊(n−k)/2⌋ errors. However, this error-correction bound is not exact.
In 1999, Madhu Sudan and Venkatesan Guruswami at MIT published "Improved Decoding of Reed–Solomon and Algebraic-Geometry Codes", introducing an algorithm that allowed for the correction of errors beyond half the minimum distance of the code. It applies to Reed–Solomon codes and more generally to algebraic geometric codes. This algorithm produces a list of codewords (it is a list-decoding algorithm) and is based on interpolation and factorization of polynomials over GF(2^m) and its extensions.
In 2023, building on three earlier works, coding theorists showed that Reed–Solomon codes defined over random evaluation points can actually achieve list decoding capacity (up to n − k errors) over linear-size alphabets with high probability. However, this result is combinatorial rather than algorithmic.
Soft-decoding
The algebraic decoding methods described above are hard-decision methods, which means that for every symbol a hard decision is made about its value. In contrast, a soft-decision decoder could associate with each symbol an additional value corresponding to the channel demodulator's confidence in the correctness of the symbol. The advent of LDPC and turbo codes, which employ iterated soft-decision belief propagation decoding methods to achieve error-correction performance close to the theoretical limit, has spurred interest in applying soft-decision decoding to conventional algebraic codes. In 2003, Ralf Koetter and Alexander Vardy presented a polynomial-time soft-decision algebraic list-decoding algorithm for Reed–Solomon codes, which was based upon the work by Sudan and Guruswami.
In 2016, Steven J. Franke and Joseph H. Taylor published a novel soft-decision decoder.
MATLAB example
Encoder
Here we present a simple MATLAB implementation for an encoder.
function encoded = rsEncoder(msg, m, prim_poly, n, k)
% RSENCODER Encode message with the Reed-Solomon algorithm
% m is the number of bits per symbol
% prim_poly: Primitive polynomial p(x); e.g., for Data Matrix it is 301
% k is the size of the message
% n is the total size (k+redundant)
% Example: msg = uint8('Test')
% enc_msg = rsEncoder(msg, 8, 301, 12, numel(msg));
% Get the alpha
alpha = gf(2, m, prim_poly);
% Get the Reed-Solomon generating polynomial g(x)
g_x = genpoly(k, n, alpha);
% Multiply the information by X^(n-k), or just pad with zeros at the end to
% get space to add the redundant information
msg_padded = gf([msg zeros(1, n - k)], m, prim_poly);
% Get the remainder of the division of the extended message by the
% Reed-Solomon generating polynomial g(x)
[~, remainder] = deconv(msg_padded, g_x);
% Now return the message with the redundant information
encoded = msg_padded - remainder;
end
% Find the Reed-Solomon generating polynomial g(x); this is equivalent
% to the rsgenpoly function in MATLAB
function g = genpoly(k, n, alpha)
g = 1;
% Multiplication over the Galois field is a convolution of coefficient vectors
for i = 1 : n - k
g = conv(g, [1 alpha .^ i]);
end
end
Decoder
Now the decoding part:
function [decoded, error_pos, error_mag, g, S] = rsDecoder(encoded, m, prim_poly, n, k)
% RSDECODER Decode a Reed-Solomon encoded message
% Example:
% [dec, ~, ~, ~, ~] = rsDecoder(enc_msg, 8, 301, 12, numel(msg))
max_errors = floor((n - k) / 2);
orig_vals = encoded.x;
% Initialize the error vector
errors = zeros(1, n);
g = [];
S = [];
% Get the alpha
alpha = gf(2, m, prim_poly);
% Find the syndromes (if the received polynomial is divisible by the
% generator polynomial, all syndromes are zero)
Synd = polyval(encoded, alpha .^ (1:n - k));
Syndromes = trim(Synd);
% If all syndromes are zeros (perfectly divisible) there are no errors
if isempty(Syndromes.x)
decoded = orig_vals(1:k);
error_pos = [];
error_mag = [];
g = [];
S = Synd;
return;
end
% Prepare for the euclidean algorithm (Used to find the error locating
% polynomials)
r0 = [1, zeros(1, 2 * max_errors)]; r0 = gf(r0, m, prim_poly); r0 = trim(r0);
size_r0 = length(r0);
r1 = Syndromes;
f0 = gf([zeros(1, size_r0 - 1) 1], m, prim_poly);
f1 = gf(zeros(1, size_r0), m, prim_poly);
g0 = f1; g1 = f0;
% Do the euclidean algorithm on the polynomials r0(x) and Syndromes(x) in
% order to find the error locating polynomial
while true
% Do a long division
[quotient, remainder] = deconv(r0, r1);
% Add some zeros
quotient = pad(quotient, length(g1));
% Find quotient*g1 and pad
c = conv(quotient, g1);
c = trim(c);
c = pad(c, length(g0));
% Update g as g0-quotient*g1
g = g0 - c;
% Check if the degree of remainder(x) is less than max_errors
if all(remainder(1:end - max_errors) == 0)
break;
end
% Update r0, r1, g0, g1 and remove leading zeros
r0 = trim(r1); r1 = trim(remainder);
g0 = g1; g1 = g;
end
% Remove leading zeros
g = trim(g);
% Find the zeros of the error polynomial on this galois field
evalPoly = polyval(g, alpha .^ (n - 1 : - 1 : 0));
error_pos = gf(find(evalPoly == 0), m);
% If no error position is found, there is nothing more we can do;
% return the received message unchanged
if isempty(error_pos)
decoded = orig_vals(1:k);
error_mag = [];
return;
end
% Prepare a linear system to solve the error polynomial and find the error
% magnitudes
size_error = length(error_pos);
Syndrome_Vals = Syndromes.x;
b(:, 1) = Syndrome_Vals(1:size_error);
for idx = 1 : size_error
e = alpha .^ (idx * (n - error_pos.x));
err = e.x;
er(idx, :) = err;
end
% Solve the linear system
error_mag = (gf(er, m, prim_poly) \ gf(b, m, prim_poly))';
% Put the error magnitude on the error vector
errors(error_pos.x) = error_mag.x;
% Bring this vector to the galois field
errors_gf = gf(errors, m, prim_poly);
% Now to fix the errors just add with the encoded code
decoded_gf = encoded(1:k) + errors_gf(1:k);
decoded = decoded_gf.x;
end
% Remove leading zeros from Galois array
function gt = trim(g)
gx = g.x;
gt = gf(gx(find(gx, 1) : end), g.m, g.prim_poly);
end
% Add leading zeros
function xpad = pad(x, k)
len = length(x);
xpad = x; % if no padding is needed, return the input unchanged
if len < k
xpad = [zeros(1, k - len) x];
end
end
Reed–Solomon original view decoders
The decoders described in this section use the Reed–Solomon original view of a codeword as a sequence of polynomial values where the polynomial is based on the message to be encoded. The same set of fixed values is used by the encoder and decoder, and the decoder recovers the encoding polynomial (and optionally an error locating polynomial) from the received message.
Theoretical decoder
Reed and Solomon (1960) described a theoretical decoder that corrected errors by finding the most popular message polynomial. The decoder only knows the set of values a1 to an and which encoding method was used to generate the codeword's sequence of values. The original message, the polynomial, and any errors are unknown. A decoding procedure could use a method like Lagrange interpolation on various subsets of n codeword values taken k at a time to repeatedly produce potential polynomials, until a sufficient number of matching polynomials are produced to reasonably eliminate any errors in the received codeword. Once a polynomial is determined, then any errors in the codeword can be corrected, by recalculating the corresponding codeword values. Unfortunately, in all but the simplest of cases, there are too many subsets, so the algorithm is impractical. The number of subsets is the binomial coefficient C(n, k), which is infeasibly large for even modest codes. For a (255,249) code that can correct 3 errors, the naïve theoretical decoder would examine 359 billion subsets.
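That count is easy to check with a one-liner (math.comb is in the Python standard library):

from math import comb
print(comb(255, 249))   # 359895314625, about 3.6 * 10^11 subsets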
Berlekamp–Welch decoder
In 1986, a decoder known as the Berlekamp–Welch algorithm was developed as a decoder that is able to recover the original message polynomial as well as an error "locator" polynomial that produces zeroes for the input values that correspond to errors, with time complexity O(n^3), where n is the number of values in a message. The recovered polynomial is then used to recover (recalculate as needed) the original message.
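In its usual formulation (stated here as a hedged sketch of the standard setup rather than a quotation of this article's elided equations), the decoder treats the coefficients of E(x) and of Q(x) = p(x)E(x) as unknowns and imposes one linear condition per received value bi:

bi E(ai) = Q(ai),  for i = 1, …, n, with deg E(x) = e and deg Q(x) ≤ e + k − 1

Solving this linear system over the field (for example by Gaussian elimination) and then dividing, p(x) = Q(x)/E(x), recovers the message polynomial whenever at most e values are in error.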
Example
Using RS(7,3), GF(929), and the set of evaluation points ai = i − 1
If the message polynomial is p(x) = 3x^2 + 2x + 1, the codeword is the sequence of its values at a1, …, a7 = 0, 1, …, 6:
c = (1, 6, 17, 34, 57, 86, 121)
Errors in transmission might cause this to be received instead.
The key equations are:
Assume maximum number of errors: e = 2. The key equations become:
Using Gaussian elimination:
Recalculate the message polynomial values at the points where E(x) = 0, resulting in the corrected codeword:
Gao decoder
In 2002, an improved decoder was developed by Shuhong Gao, based on the extended Euclidean algorithm.
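The decoder fits in a short sketch (Python over the prime field GF(929) of the example; the helper names are illustrative, and poly_trim, poly_sub, poly_mul and poly_divmod are the same helpers as in the Euclidean-decoder sketch above):

def lagrange(xs, ys):                    # interpolating polynomial, high-order first
    total = [0]
    for xi, yi in zip(xs, ys):
        term, denom = [1], 1
        for xj in xs:
            if xj != xi:
                term = poly_mul(term, [1, -xj % p])
                denom = denom * (xi - xj) % p
        coef = yi * pow(denom, -1, p) % p
        total = poly_sub(total, [(-c * coef) % p for c in term])   # total += coef*term
    return total

def gao_decode(xs, ys, k):
    n = len(xs)
    g0 = [1]
    for xi in xs:                        # g0(x) = product of (x - a_i)
        g0 = poly_mul(g0, [1, -xi % p])
    g1 = lagrange(xs, ys)                # step 1: interpolate the received values
    r0, r1, v0, v1 = g0, g1, [0], [1]    # partial extended Euclidean algorithm
    while len(r1) - 1 >= (n + k) / 2:    # step 2: stop once deg r < (n + k)/2
        q, rem = poly_divmod(r0, r1)
        r0, r1 = r1, rem
        v0, v1 = v1, poly_sub(v0, poly_mul(q, v1))
    f, rem = poly_divmod(r1, v1)         # step 3: divide; v1 acts as the error locator
    return f if rem == [0] else None     # the message polynomial, or failure

With no errors, g1 already has degree below (n + k)/2, so the loop never runs and the function returns g1 itself.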
Example
Using the same data as the Berlekamp–Welch example above:
Lagrange interpolation of the points (ai, bi) for i = 1 to n
Divide Q(x) and E(x) by the most significant coefficient of E(x) = 708. (Optional)
Recalculate the message polynomial values at the points where E(x) = 0, resulting in the corrected codeword:
| Mathematics | Information theory | null |
45609 | https://en.wikipedia.org/wiki/Cheetah | Cheetah | The cheetah (Acinonyx jubatus) is a large cat and the fastest land animal. It has a tawny to creamy white or pale buff fur that is marked with evenly spaced, solid black spots. The head is small and rounded, with a short snout and black tear-like facial streaks. It reaches 67–94 cm at the shoulder, and the head-and-body length is between 1.1 and 1.5 m. Adults weigh between 21 and 72 kg. The cheetah is capable of running at 93 to 104 km/h; it has evolved specialized adaptations for speed, including a light build, long thin legs and a long tail.
The cheetah was first described in the late 18th century. Four subspecies are recognised today that are native to Africa and central Iran. An African subspecies was introduced to India in 2022. It is now distributed mainly in small, fragmented populations in northwestern, eastern and southern Africa and central Iran. It lives in a variety of habitats such as savannahs in the Serengeti, arid mountain ranges in the Sahara, and hilly desert terrain.
The cheetah lives in three main social groups: females and their cubs, male "coalitions", and solitary males. While females lead a nomadic life searching for prey in large home ranges, males are more sedentary and instead establish much smaller territories in areas with plentiful prey and access to females. The cheetah is active during the day, with peaks during dawn and dusk. It feeds on small- to medium-sized prey, mostly weighing under 40 kg, and prefers medium-sized ungulates such as impala, springbok and Thomson's gazelles. The cheetah typically stalks its prey to within 60–100 m before charging towards it, trips it during the chase and bites its throat to suffocate it to death. It breeds throughout the year. After a gestation of nearly three months, females give birth to a litter of three or four cubs. Cheetah cubs are highly vulnerable to predation by other large carnivores. They are weaned at around four months and are independent by around 20 months of age.
The cheetah is threatened by habitat loss, conflict with humans, poaching and high susceptibility to diseases. The global cheetah population was estimated in 2021 at 6,517; it is listed as Vulnerable on the IUCN Red List. It has been widely depicted in art, literature, advertising, and animation. It was tamed in ancient Egypt and trained for hunting ungulates in the Arabian Peninsula and India. It has been kept in zoos since the early 19th century.
Etymology
The vernacular name "cheetah" is derived from Hindustani cītā. This in turn comes from Sanskrit citra, meaning 'variegated', 'adorned' or 'painted'. In the past, the cheetah was often called "hunting leopard" because they could be tamed and used for coursing. The generic name Acinonyx probably derives from the combination of two Greek words: akinetos, meaning 'unmoved' or 'motionless', and onyx, meaning 'nail' or 'hoof'. A rough translation is "immobile nails", a reference to the cheetah's limited ability to retract its claws. A similar meaning can be obtained by the combination of the Greek prefix a– (implying a lack of) and kineo, meaning 'to move' or 'to set in motion'. The specific name jubatus is Latin for 'crested, having a mane'.
A few old generic names such as Cynailurus and Cynofelis allude to the similarities between the cheetah and canids.
Taxonomy
In 1777, Johann Christian Daniel von Schreber described the cheetah based on a skin from the Cape of Good Hope and gave it the scientific name Felis jubatus. Joshua Brookes proposed the generic name Acinonyx in 1828. In 1917, Reginald Innes Pocock placed the cheetah in a subfamily of its own, Acinonychinae, given its striking morphological resemblance to the greyhound and significant deviation from typical felid features; the cheetah was classified in Felinae in later taxonomic revisions.
In the 19th and 20th centuries, several cheetah zoological specimens were described; some were proposed as subspecies.
A South African specimen with notably dense fur was described as Felis lanea by Philip Sclater in 1877 and became known as the "woolly cheetah". Its classification as a species was mostly disputed. There has been considerable confusion in the nomenclature of the cheetah and leopard (Panthera pardus) as authors often confused the two; some considered "hunting leopards" an independent species, or equal to the leopard.
Subspecies
In 1975, five cheetah subspecies were considered valid taxa: A. j. hecki, A. j. jubatus, A. j. raineyi, A. j. soemmeringii and A. j. venaticus. In 2011, a phylogeographic study found minimal genetic variation between A. j. jubatus and A. j. raineyi; only four subspecies were identified. In 2017, the Cat Classification Task Force of the IUCN Cat Specialist Group revised felid taxonomy and recognised these four subspecies as valid: A. j. jubatus (Southeast African cheetah), A. j. soemmeringii (Northeast African cheetah), A. j. hecki (Northwest African cheetah) and A. j. venaticus (Asiatic cheetah).
Phylogeny and evolution
The cheetah's closest relatives are the cougar (Puma concolor) and the jaguarundi (Herpailurus yagouaroundi). Together, these three species form the Puma lineage, one of the eight lineages of the extant felids; the Puma lineage diverged from the rest 6.7 mya. The sister group of the Puma lineage is a clade of smaller Old World cats that includes the genera Felis, Otocolobus and Prionailurus.
The oldest cheetah fossils, excavated in eastern and southern Africa, date to 3.5–3 mya; the earliest known specimen from South Africa is from the lowermost deposits of the Silberberg Grotto (Sterkfontein). Though incomplete, these fossils indicate forms larger but less cursorial than the modern cheetah. The first occurrence of the modern species A. jubatus in Africa may come from Cooper's D, a site in South Africa dating back to 1.5 to 1.4 Ma, during the Calabrian stage. Fossil remains from Europe are limited to a few Middle Pleistocene specimens from Hundsheim (Austria) and Mosbach Sands (Germany). Cheetah-like cats are known from as late as 10,000 years ago from the Old World. The giant cheetah (A. pardinensis), significantly larger and slower compared to the modern cheetah, occurred in Eurasia and eastern and southern Africa in the Villafranchian period roughly 3.8–1.9 mya. In the Middle Pleistocene a smaller cheetah, A. intermedius, ranged from Europe to China. The modern cheetah appeared in Africa around 1.9 mya; its fossil record is restricted to Africa.
Extinct North American cheetah-like cats had historically been classified in Felis, Puma or Acinonyx; two such species, F. studeri and F. trumani, were considered to be closer to the puma than the cheetah, despite their close similarities to the latter. Noting this, palaeontologist Daniel Adams proposed Miracinonyx, a new subgenus under Acinonyx, in 1979 for the North American cheetah-like cats; this was later elevated to genus rank. Adams pointed out that North American and Old World cheetah-like cats may have had a common ancestor, and Acinonyx might have originated in North America instead of Eurasia. However, subsequent research has shown that Miracinonyx is phylogenetically closer to the cougar than the cheetah; the similarities to cheetahs have been attributed to parallel evolution.
The three species of the Puma lineage may have had a common ancestor during the Miocene (roughly 8.25 mya). Some suggest that North American cheetahs possibly migrated to Asia via the Bering Strait, then dispersed southward to Africa through Eurasia at least 100,000 years ago; some authors have expressed doubt over the occurrence of cheetah-like cats in North America, and instead suppose the modern cheetah to have evolved from Asian populations that eventually spread to Africa. The cheetah is thought to have experienced two population bottlenecks that greatly decreased the genetic variability in populations; one occurred about 100,000 years ago that has been correlated to migration from North America to Asia, and the second 10,000–12,000 years ago in Africa, possibly as part of the Late Pleistocene extinction event.
Genetics
The diploid number of chromosomes in the cheetah is 38, the same as in most other felids. The cheetah was the first felid observed to have unusually low genetic variability among individuals, which has led to poor breeding in captivity, increased spermatozoal defects, high juvenile mortality and increased susceptibility to diseases and infections. A prominent instance was the deadly feline coronavirus outbreak in a cheetah breeding facility of Oregon in 1983 which had a mortality rate of 60%, higher than that recorded for previous epizootics of feline infectious peritonitis in any felid. The remarkable homogeneity in cheetah genes has been demonstrated by experiments involving the major histocompatibility complex (MHC); unless the MHC genes are highly homogeneous in a population, skin grafts exchanged between a pair of unrelated individuals would be rejected. Skin grafts exchanged between unrelated cheetahs are accepted well and heal, as if their genetic makeup were the same.
The low genetic diversity is thought to have been created by two population bottlenecks from about 100,000 years and about 12,000 years ago, respectively. The resultant level of genetic variation is around 0.1–4% of average living species, lower than that of Tasmanian devils, Virunga gorillas, Amur tigers, and even highly inbred domestic cats and dogs.
Selective retention of duplicated gene variants has been found in 10 candidate genes that may explain the energetics and anabolism related to muscle specialization in cheetahs:
•Regulation of muscle contraction (Five genes: ADORA1, ADRA1B, CACNA1C, RGS2, SCN5A).
•Physiological stress response (Two genes: ADORA1, TAOK2).
•Negative regulation of catabolic process (Four genes: APOC3, SUFU, DDIT4, PPARA).
Potentially harmful mutations have been found in a gene related to spermatogenesis (AKAP4). This could explain the high proportion of abnormal sperm in male cheetahs and the poor reproductive success of the species.
King cheetah
The king cheetah is a variety of cheetah with a rare mutation for cream-coloured fur marked with large, blotchy spots and three dark, wide stripes extending from the neck to the tail. In Manicaland, Zimbabwe, it was known as nsuifisi and thought to be a cross between a leopard and a hyena. In 1926, Major A. Cooper wrote about a cheetah-like animal he had shot near modern-day Harare, with fur as thick as that of a snow leopard and spots that merged to form stripes. He suggested it could be a cross between a leopard and a cheetah. As more such individuals were observed it was seen that they had non-retractable claws like the cheetah.
In 1927, Pocock described these individuals as a new species by the name of Acinonyx rex ("king cheetah"). However, in the absence of proof to support his claim, he withdrew his proposal in 1939. Abel Chapman considered it a colour morph of the normally spotted cheetah. Since 1927, the king cheetah has been reported five more times in the wild in Zimbabwe, Botswana and northern Transvaal; one was photographed in 1975.
In 1981, two female cheetahs that had mated with a wild male from Transvaal at the De Wildt Cheetah and Wildlife Centre (South Africa) gave birth to one king cheetah each; subsequently, more king cheetahs were born at the centre. In 2012, the cause of this coat pattern was found to be a mutation in the gene for transmembrane aminopeptidase (Taqpep), the same gene responsible for the striped "mackerel" versus blotchy "classic" pattern seen in tabby cats. The appearance is caused by reinforcement of a recessive allele; hence if two mating cheetahs are heterozygous carriers of the mutated allele, a quarter of their offspring can be expected to be king cheetahs.
Characteristics
The cheetah is a lightly built, spotted cat characterised by a small rounded head, a short snout, black tear-like facial streaks, a deep chest, long thin legs and a long tail. Its slender, canine-like form is highly adapted for speed, and contrasts sharply with the robust build of the genus Panthera. Cheetahs typically reach 67–94 cm at the shoulder and the head-and-body length is between 1.1 and 1.5 m. The weight can vary with age, health, location, sex and subspecies; adults typically range between 21 and 72 kg. Cubs born in the wild weigh 150–300 g at birth, while those born in captivity tend to be larger. The cheetah is sexually dimorphic, with males larger and heavier than females, but not to the extent seen in other large cats; females have a much lower body mass index than males. Studies differ significantly on morphological variations among the subspecies.
The coat is typically tawny to creamy white or pale buff (darker in the mid-back portion). The chin, throat and underparts of the legs and the belly are white and devoid of markings. The rest of the body is covered with around 2,000 evenly spaced, oval or round solid black spots, each measuring roughly . Each cheetah has a distinct pattern of spots which can be used to identify unique individuals. Besides the clearly visible spots, there are other faint, irregular black marks on the coat. Newly born cubs are covered in fur with an unclear pattern of spots that gives them a dark appearance—pale white above and nearly black on the underside. The hair is mostly short and often coarse, but the chest and the belly are covered in soft fur; the fur of king cheetahs has been reported to be silky. There is a short, rough mane, covering at least along the neck and the shoulders; this feature is more prominent in males. The mane starts out as a cape of long, loose blue to grey hair in juveniles. Melanistic cheetahs are rare and have been seen in Zambia and Zimbabwe. In 1877–1878, Sclater described two partially albino specimens from South Africa.
The head is small and more rounded compared to other big cats. Saharan cheetahs have canine-like slim faces. The ears are small, short and rounded; they are tawny at the base and on the edges and marked with black patches on the back. The eyes are set high and have round pupils. The whiskers, shorter and fewer than those of other felids, are fine and inconspicuous. The pronounced tear streaks (or malar stripes), unique to the cheetah, originate from the corners of the eyes and run down the nose to the mouth. The role of these streaks is not well understood: they may protect the eyes from the sun's glare (a helpful feature as the cheetah hunts mainly during the day), or they could be used to define facial expressions. The exceptionally long and muscular tail, with a bushy white tuft at the end, measures 60–80 cm. While the first two-thirds of the tail are covered in spots, the final third is marked with four to six dark rings or stripes.
The cheetah is superficially similar to the leopard, which has a larger head, fully retractable claws, rosettes instead of spots, lacks tear streaks and is more muscular. Moreover, the cheetah is taller than the leopard. The serval also resembles the cheetah in physical build, but is significantly smaller, has a shorter tail and its spots fuse to form stripes on the back. The cheetah appears to have evolved convergently with canids in morphology and behaviour; it has canine-like features such as a relatively long snout, long legs, a deep chest, tough paw pads and blunt, semi-retractable claws. The cheetah has often been likened to the greyhound, as both have similar morphology and the ability to reach tremendous speeds in a shorter time than other mammals, but the cheetah can attain much higher maximum speeds.
Internal anatomy
Sharply contrasting with the other big cats in its morphology, the cheetah shows several specialized adaptations for prolonged chases to catch prey at some of the fastest speeds reached by land animals. Its light, streamlined body makes it well-suited to short, explosive bursts of speed, rapid acceleration, and an ability to execute extreme changes in direction while moving at high speed. The large nasal passages, accommodated well due to the smaller size of the canine teeth, ensure fast flow of sufficient air, and the enlarged heart and lungs allow the enrichment of blood with oxygen in a short time. This allows cheetahs to rapidly regain their stamina after a chase. During a typical chase, their respiratory rate increases from 60 to 150 breaths per minute. The cheetah has a fast heart rate, averaging 126–173 beats per minute at rest, without arrhythmia. Moreover, the reduced viscosity of the blood at higher temperatures (common in frequently moving muscles) could ease blood flow and increase oxygen transport. While running, in addition to having good traction due to their semi-retractable claws, cheetahs use their tail as a rudder-like means of steering that enables them to make sharp turns, necessary to outflank antelopes which often change direction to escape during a chase. The protracted claws increase grip over the ground, while the rough paw pads make sprinting easier over rough ground. The limbs of the cheetah are longer than what is typical for other cats its size; the thigh muscles are large, and the tibia and fibula are held close together, making the lower legs less likely to rotate. This reduces the risk of losing balance during runs, but compromises the cat's ability to climb trees. The highly reduced clavicle is connected through ligaments to the scapula, whose pendulum-like motion increases the stride length and assists in shock absorption. The extension of the vertebral column can add as much as 76 mm to the stride length.
Muscle tissue analysis in cheetahs has found few differences between the sexes in the concentration of type IIx muscle fibers, in anaerobic LDH enzyme activity, and in glycogen concentration, in contrast to humans, where women have much lower LDH activity than men although type IIx muscle fiber concentrations are similar. Male cheetahs had muscle fibers with a larger cross-sectional area.
The cheetah resembles the smaller cats in cranial features, and in having a long and flexible spine, as opposed to the stiff and short one in other large felids. The roughly triangular skull has light, narrow bones and the sagittal crest is poorly developed, possibly to reduce weight and enhance speed. The mouth cannot be opened as widely as in other cats given the shorter length of muscles between the jaw and the skull. A study suggested that the limited retraction of the cheetah's claws may result from the earlier truncation of the development of the middle phalanx bone.
The cheetah has a total of 30 teeth; the dental formula is 3.1.3.1 / 3.1.2.1. The small, flat canines are used to bite the throat and suffocate the prey. A study gave the bite force quotient (BFQ) of the cheetah as 119, close to that for the lion (112), suggesting that adaptations for a lighter skull may not have reduced the power of the cheetah's bite. Unlike other cats, the cheetah's canines have no gap or diastema behind them when the jaws close, as the top and bottom cheek teeth show extensive overlap. Cheetahs have relatively elongated, blade-like carnassial teeth with reduced lingual cusps; this may be an adaptation for quickly consuming the flesh of a kill before more heavily built predators arrive to take it from them. The slightly curved claws, shorter and straighter than those of other cats, lack a protective sheath and are partly retractable. The claws are blunt due to lack of protection, but the large and strongly curved dewclaw is remarkably sharp. Cheetahs have a high concentration of nerve cells arranged in a band in the centre of the eyes, a visual streak, the most efficient among felids. This significantly sharpens the vision and enables the cheetah to swiftly locate prey against the horizon. The cheetah is unable to roar due to the presence of a sharp-edged vocal fold within the larynx.
In stressful situations, the cheetah has a lower cortisol level than the leopard, indicating a better stress response; it also has lower immunoglobulin G and serum amyloid A levels but a higher lysozyme level and a higher bacterial killing capacity than the leopard, indicating poorer adaptive and induced innate immune systems but a better constitutive innate immune system; its constitutive innate immune system compensates for its low variation of the major histocompatibility complex and poorer immune adaptability.
Speed and acceleration
The cheetah is the world's fastest land animal. Estimates of the maximum speed attained range from 80 to 128 km/h. A commonly quoted value is 114 km/h, recorded in 1957, but this measurement is disputed. In 2012, an 11-year-old cheetah from the Cincinnati Zoo set a world record by running 100 m in 5.95 seconds over a set run, recording a maximum speed of 98 km/h.
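A quick sanity check on the record run (using the figures above; the arithmetic is exact, the interpretation standard) shows why the two numbers differ:

average speed = 100 m / 5.95 s ≈ 16.8 m/s ≈ 60 km/h

so the 98 km/h reading is a peak instantaneous speed, roughly 1.6 times the average over the whole run, reached partway through the sprint rather than sustained over it.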
Cheetahs equipped with GPS collars hunted at speeds during most of the chase much lower than the highest recorded speed; their run was interspersed with a few short bursts of a few seconds when they attained peak speeds. The average of the peak speeds recorded during these hunts was around 53 km/h, and the highest recorded value was 93 km/h. A hunt consists of two phases, an initial fast acceleration phase when the cheetah tries to catch up with the prey, followed by slowing down as it closes in on it, the deceleration varying by the prey in question. The initial linear acceleration observed was 13 m/s², more than twice the 6 m/s² of horses and greater than the 10 m/s² of greyhounds. Cheetahs can increase their speed by up to 3 m/s (10.8 km/h) and decrease it by up to 4 m/s (14.4 km/h) in a single stride. Speed and acceleration values for a hunting cheetah may be different from those for a non-hunter because while engaged in the chase, the cheetah is more likely to be twisting and turning and may be running through vegetation. The speeds attained by the cheetah may be only slightly greater than those achieved by the pronghorn and the springbok, but the cheetah additionally has an exceptional acceleration.
One stride of a galloping cheetah measures 4 to 7 m; the stride length and the number of jumps increases with speed. During more than half the duration of the sprint, the cheetah has all four limbs in the air, increasing the stride length. Running cheetahs can retain up to 90% of the heat generated during the chase. A 1973 study suggested the length of the sprint is limited by excessive build-up of body heat when the body temperature reaches 40–41 °C. However, a 2013 study recorded the average temperature of cheetahs after hunts to be 38.6 °C, suggesting high temperatures need not cause hunts to be abandoned.
The running speed of 114 km/h attributed to the cheetah was obtained from a single run of one individual by dividing the distance traveled by the time taken. The run lasted 2.25 seconds, and the course was later found to have been shorter than claimed. The measurement was therefore discredited as a faulty method.
Cheetahs have subsequently been measured at 104 km/h, the fastest of three timed runs (including one in the opposite direction) by a single individual over a marked course, with the cheetah starting the run already in motion behind the start line. Dividing distance by time here determines the maximum sustained speed; the three runs were completed in 7.0, 6.9 and 7.2 seconds. Being a more accurate method of measurement, this test was made in 1965 but published only in 1997.
Subsequently, with GPS-IMU collars, running speed was measured for wild cheetahs during hunts with turns and maneuvers; the maximum speed recorded was 93 km/h, sustained for 1–2 seconds. The speed was obtained by dividing the length of a stride by the time between footfalls. Cheetahs can go from 0 to 97 km/h in less than 3 seconds.
There are indirect ways to measure how fast a cheetah can run. One case is known of a cheetah that overtook a young male pronghorn. Cheetahs can overtake a running antelope with a head start. Both animals were clocked by speedometer reading while running alongside a vehicle at full speed. Cheetahs can easily capture gazelles galloping at full speed.
The physiological reasons for speed in cheetahs are:
Small head and long lumbar region of the spine, 36.8% of the presacral vertebral column.
A tibia and radius longer than the femur and humerus, with a femorotibial index of 101.9–105 and a humeroradial index of 100.1–103.3.
Elongated and slender long bones of the limbs, especially the femur, tibia, humerus and radius, and the pelvis, particularly the ischium.
A cool nose and enlarged respiratory passages that allow it to inhale and exhale more air with each breath, which helps dissipate body heat.
A higher concentration of glycolytic fast twitch muscle fibers (Type IIx) than other cats and animals in general. A very high LDH activity is indicative of this principally anaerobic muscle metabolism.
Most of the locomotor muscle mass is concentrated proximally close to the body in shoulders, thighs and spine, and is reduced in shins and forearms. Long tendons finish off the distal locomotor muscles.
Muscular hindlimbs form 19.8% of the body mass, whereas the forelimbs form 15.1%. The hamstrings, quadriceps, adductor muscles of the hip and psoas major muscles are especially large.
Enlarged Betz cells in the motor cortex M1 and innervating muscle fibers, with longer dendrites and more numerous dendritic segments to fit predominant type IIx muscle fibers.
Ecology and behaviour
Cheetahs are active mainly during the day, whereas other carnivores such as leopards and lions are active mainly at night. These larger carnivores can kill cheetahs and steal their kills; hence, the diurnal tendency of cheetahs helps them avoid larger predators in areas where they are sympatric, such as the Okavango Delta. In areas where the cheetah is the major predator (such as farmlands in Botswana and Namibia), activity tends to increase at night. This may also happen in highly arid regions such as the Sahara, where daytime temperatures can reach 43 °C. The lunar cycle can also influence the cheetah's routine: activity might increase on moonlit nights as prey can be sighted easily, though this comes with the danger of encountering larger predators. Hunting is the major activity throughout the day, with peaks during dawn and dusk. Groups rest in grassy clearings after dusk. Cheetahs often inspect their vicinity at observation points such as elevations to check for prey or larger carnivores; even while resting, they take turns at keeping a lookout.
Social organisation
Cheetahs have a flexible and complex social structure and tend to be more gregarious than several other cats (except the lion). Individuals typically avoid one another but are generally amicable; males may fight over territories or access to females in oestrus, and on rare occasions such fights can result in severe injury and death. Females are not social and have minimal interaction with other individuals, barring the interaction with males when they enter their territories or during the mating season. Some females, generally mother and offspring or siblings, may rest beside one another during the day. Females tend to lead a solitary life or live with offspring in undefended home ranges; young females often stay close to their mothers for life but young males leave their mother's range to live elsewhere.
Some males are territorial, and group together for life, forming coalitions that collectively defend a territory which ensures maximum access to females—this is unlike the behaviour of the male lion who mates with a particular group (pride) of females. In most cases, a coalition will consist of brothers born in the same litter who stayed together after weaning, but biologically unrelated males are often allowed into the group; in the Serengeti, 30% of members in coalitions are unrelated males. If a cub is the only male in a litter, he will typically join an existing group, or form a small group of solitary males with two or three other lone males who may or may not be territorial. In the Kalahari Desert around 40% of the males live in solitude.
Males in a coalition are affectionate toward each other, grooming mutually and calling out if any member is lost; unrelated males may face some aversion in their initial days in the group. All males in the coalition typically have equal access to kills when the group hunts together, and possibly also to females who may enter their territory. A coalition generally has a greater chance of encountering and acquiring females for mating; however, its large membership demands greater resources than do solitary males. A 1987 study showed that solitary and grouped males have a nearly equal chance of coming across females, but the males in coalitions are notably healthier and have better chances of survival than their solitary counterparts.
Home ranges and territories
Unlike many other felids, among cheetahs, females tend to occupy larger areas compared to males. Females typically disperse over large areas in pursuit of prey, but they are less nomadic and roam in a smaller area if prey availability in the area is high. As such, the size of their home range depends on the distribution of prey in a region. In central Namibia, where most prey species are sparsely distributed, home ranges average , whereas in the woodlands of the Phinda Game Reserve (South Africa), which have plentiful prey, home ranges are in size. Cheetahs can travel long stretches overland in search of food; a study in the Kalahari Desert recorded an average displacement of nearly every day and walking speeds ranged between .
Males are generally less nomadic than females; often males in coalitions (and sometimes solitary males staying far from coalitions) establish territories. Whether males settle in territories or disperse over large areas forming home ranges depends primarily on the movements of females. Territoriality is preferred only if females tend to be more sedentary, which is more feasible in areas with plenty of prey. Some males, called floaters, switch between territoriality and nomadism depending on the availability of females. A 1987 study showed territoriality depended on the size and age of males and the membership of the coalition. The ranges of floaters averaged in the Serengeti to in central Namibia. In the Kruger National Park (South Africa) territories were much smaller. A coalition of three males occupied a territory measuring , and the territory of a solitary male measured . When a female enters a territory, the males will surround her; if she tries to escape, the males will bite or snap at her. Generally, the female can not escape on her own; the males themselves leave after they lose interest in her. They may smell the spot she was sitting or lying on to determine if she was in oestrus.
Communication
The cheetah is a vocal felid with a broad repertoire of calls and sounds; the acoustic features and the use of many of these have been studied in detail. The vocal characteristics, such as the way they are produced, are often different from those of other cats. For instance, a study showed that exhalation is louder than inhalation in cheetahs, while no such distinction was observed in the domestic cat. Listed below are some commonly recorded vocalisations observed in cheetahs:
Chirping: A chirp (or a "stutter-bark") is an intense bird-like call and lasts less than a second. Cheetahs chirp when they are excited, for instance, when gathered around a kill. Other uses include summoning concealed or lost cubs by the mother, or as a greeting or courtship between adults. The cheetah's chirp is similar to the soft roar of the lion, and its churr to the latter's loud roar. A similar but louder call ('yelp') can be heard from up to 2 km away; this call is typically used by mothers to locate lost cubs, or by cubs to find their mothers and siblings.
Churring (or churtling): A churr is a shrill, staccato call that can last up to two seconds. Churring and chirping have been noted for their similarity to the soft and loud roars of the lion. It is produced in a similar context to chirping, but a study of feeding cheetahs found chirping to be much more common.
Purring: Similar to purring in domestic cats but much louder, it is produced when the cheetah is content, and as a form of greeting or when licking one another. It involves continuous sound production alternating between egressive and ingressive airstreams.
Agonistic sounds: These include bleating, coughing, growling, hissing, meowing and moaning (or yowling). A bleat indicates distress, for instance when a cheetah confronts a predator that has stolen its kill. Growls, hisses and moans are accompanied by multiple, strong hits on the ground with the front paw, during which the cheetah may retreat by a few metres. A meow, though a versatile call, is typically associated with discomfort or irritation.
Other vocalisations: Individuals can make a gurgling noise as part of a close, amicable interaction. A "nyam nyam" sound may be produced while eating. Apart from chirping, mothers can use a repeated "ihn ihn" to gather cubs, and a "prr prr" to guide them on a journey. A low-pitched alarm call is used to warn the cubs to stand still. Bickering cubs can let out a "whirr"; the pitch rises with the intensity of the quarrel and ends on a harsh note.
Another major means of communication is by scent: the male will often raise his tail and spray urine on elevated landmarks such as tree trunks, stumps or rocks; other cheetahs will sniff these landmarks and repeat the ritual. Females may also show marking behaviour but less prominently than males do. Females in oestrus will show maximum urine-marking, and their excrement can attract males from far off. In Botswana, cheetahs are frequently captured by ranchers to protect livestock by setting up traps in traditional marking spots; the calls of the trapped cheetah can attract more cheetahs to the place.
Touch and visual cues are other ways of signalling in cheetahs. Social meetings involve mutual sniffing of the mouth, anus and genitals. Individuals will groom one another, lick each other's faces and rub cheeks. However, they seldom lean on or rub their flanks against each other. The tear streaks on the face can sharply define expressions at close range. Mothers probably use the alternate light and dark rings on the tail to signal their cubs to follow them.
Diet and hunting
The cheetah is a carnivore that hunts small to medium-sized prey weighing 20 to 60 kg, but mostly less than 40 kg. Its primary prey are medium-sized ungulates. They are the major component of the diet in certain areas, such as Dama and Dorcas gazelles in the Sahara, impala in the eastern and southern African woodlands, springbok in the arid savannas to the south and Thomson's gazelle in the Serengeti. Smaller antelopes like the common duiker are frequent prey in the southern Kalahari. Larger ungulates are typically avoided, though nyala, whose males weigh around 120 kg, were found to be the major prey in a study in the Phinda Game Reserve. In Namibia cheetahs are the major predators of livestock. The diet of the Asiatic cheetah consists of chinkara, desert hare, goitered gazelle, urial, wild goats, and livestock; in India cheetahs used to prey mostly on blackbuck.
Prey preferences and hunting success vary with the age, sex and number of cheetahs involved in the hunt and on the vigilance of the prey. Generally, only groups of cheetahs (coalitions or mother and cubs) will try to kill larger prey; mothers with cubs especially look out for larger prey and tend to be more successful than females without cubs. Individuals on the periphery of the prey herd are common targets; vigilant prey which would react quickly on seeing the cheetah are not preferred.
Cheetahs are one of the most iconic pursuit predators, hunting primarily throughout the day, sometimes with peaks at dawn and dusk; they tend to avoid larger predators like the primarily nocturnal lion. Cheetahs in the Sahara and Maasai Mara in Kenya hunt after sunset to escape the high temperatures of the day. Cheetahs use their vision to hunt instead of their sense of smell; they keep a lookout for prey from resting sites or low branches. The cheetah will stalk its prey, trying to conceal itself in cover, and approach as close as possible, often within 60 to 100 m of the prey (or even closer for less alert prey). Alternatively the cheetah can lie hidden in cover and wait for the prey to come nearer. A stalking cheetah assumes a partially crouched posture, with the head lower than the shoulders; it will move slowly and be still at times. In areas of minimal cover, the cheetah will approach within 200 m of the prey and start the chase. The chase typically lasts a minute; in a 2013 study, the length of chases averaged 173 m, and the longest run measured 559 m. The cheetah can give up the chase if it is detected by the prey early or if it cannot make a kill quickly. Being lightly built, cheetahs lack the raw strength to tackle the prey, and instead catch the prey by performing a kind of foot sweep by hitting the prey's leg or rump with the forepaw or using the strong dewclaw to knock the prey off its balance. Such a fall during a high-speed chase may cause the prey to collapse hard enough to break some of its limbs, and allow the cheetah to then pounce on the fallen and vulnerable prey.
Cheetahs can decelerate dramatically towards the end of the hunt, slowing down from 93 km/h to 23 km/h in just three strides, and can easily follow any twists and turns the prey makes as it tries to flee. To kill medium- to large-sized prey, the cheetah bites the prey's throat to strangle it, maintaining the bite for around five minutes, within which the prey succumbs to asphyxiation and stops struggling. A bite on the nape of the neck or the snout (and sometimes on the skull) suffices to kill smaller prey. Cheetahs have an average hunting success rate of 25–40%, higher for smaller and more vulnerable prey.
Once the hunt is over, the prey is taken near a bush or under a tree; the cheetah, highly exhausted after the chase, rests beside the kill and pants heavily for five to 55 minutes. Meanwhile, cheetahs nearby, who did not take part in the hunt, might feed on the kill immediately. Groups of cheetah consume the kill peacefully, though minor noises and snapping may be observed. Cheetahs can consume large quantities of food; a cheetah at the Etosha National Park (Namibia) was found to consume as much as 10 kg within two hours. However, on a daily basis, a cheetah feeds on around 4 kg of meat. Cheetahs, especially mothers with cubs, remain cautious even as they eat, pausing to look around for vultures and predators who may steal the kill.
Cheetahs move their heads from side to side so the sharp carnassial teeth tear the flesh, which can then be swallowed without chewing. They typically begin with the hindquarters where the tissue is the softest, and then progress toward the abdomen and the spine. Ribs are chewed on at the ends, and the limbs are not generally torn apart while eating. Unless the prey is very small, the skeleton is left almost intact after feeding on the meat. Cheetahs might lose up to 13–14% of their kills to larger and stronger carnivores. To defend itself or its prey, a cheetah will hold its body low to the ground and snarl with its mouth wide open, the eyes staring threateningly ahead and the ears folded backward. This may be accompanied by moans, hisses and growls, and hitting the ground with the forepaws. Cheetahs have rarely been observed scavenging kills; this may be due to vultures and spotted hyena adroitly capturing and consuming heavy carcasses within a short time.
Cheetahs appear to have a comparatively higher hunting success rate than other predators. Their success rate for hunting Thomson gazelles is 70%, whereas the success rate of African wild dogs is 57%, of spotted hyenas 33%, and of lions 26%. Their success rate for hunting impalas is 26%, but of African wild dogs only 15.5%.
Reproduction and life cycle
The cheetah breeds throughout the year; females are polyestrous and induced ovulators with an estrous cycle of 12 days on average that can vary from three days to a month. They have their first litter at two to three years of age and can conceive again after 17 to 20 months from giving birth, or even sooner if a whole litter is lost. Males can breed at less than two years of age in captivity, but this may be delayed in the wild until the male acquires a territory. A 2007 study showed that females who gave birth to more litters early in their life often died younger, indicating a trade-off between longevity and yearly reproductive success.
Urine-marking in males can become more pronounced when a female in their vicinity comes into estrus. Males, sometimes even those in coalitions, fight among one another to secure access to the female. Often one male will eventually win dominance over the others and mate with the female, though a female can mate with different males. Mating begins with the male approaching the female, who lies down on the ground; individuals often chirp, purr or yelp at this time. No courtship behaviour is observed; the male immediately secures hold of the female's nape, and copulation takes place. The pair then ignore each other, but meet and copulate three to five times a day for the next two to three days before finally parting ways.
After a gestation of nearly three months, a litter of one to eight cubs is born (though those of three to four cubs are more common). Births take place at 20–25 minute intervals in a sheltered place such as thick vegetation. The eyes are shut at birth, and open in four to 11 days. Newborn cubs might spit a lot and make soft churring noises; they start walking by two weeks. Their nape, shoulders and back are thickly covered with long bluish-grey hair, called a mantle, which gives them a mohawk-type appearance; this fur is shed as the cheetah grows older. A study suggested that this mane gives a cheetah cub the appearance of a honey badger, and could act as camouflage from attacks by these badgers or predators that tend to avoid them.
Compared to other felids, cheetah cubs are highly vulnerable to several predators during the first few weeks of their life. Mothers keep their cubs hidden in dense vegetation for the first two months and nurse in the early morning. The mother is extremely vigilant at this stage; she stays within 1 km of the lair, frequently visits her cubs, moves them every five to six days, and remains with them after dark. Despite trying to make minimal noise, she cannot generally defend her litter from predators. Predation is the leading cause of mortality in cheetah cubs; a study showed that in areas with a low density of predators (such as Namibian farmlands) around 70% of the cubs make it beyond the age of 14 months, whereas in areas like the Serengeti National Park, where several large carnivores exist, the survival rate was just 17%. Deaths also occur from starvation if their mothers abandon them, fires, or pneumonia because of exposure to bad weather. Generation length of the cheetah is six years.
Cubs start coming out of the lair at two months of age, trailing after their mother wherever she goes. At this point the mother nurses less and brings solid food to the cubs; they retreat away from the carcass in fear initially, but gradually start eating it. The cubs might purr as the mother licks them clean after the meal. Weaning occurs at four to six months. To train her cubs in hunting, the mother will catch and let go of live prey in front of her cubs. Cubs' play behaviour includes chasing, crouching, pouncing and wrestling; there is plenty of agility, and attacks are seldom lethal. Playing can improve catching skills in cubs, though the ability to crouch and hide may not develop remarkably.
Cubs as young as six months try to capture small prey like hares and young gazelles. However, they may have to wait until as long as 15 months of age to make a successful kill on their own. At around 20 months, offspring become independent; mothers might have conceived again by then. Siblings may remain together for a few more months before parting ways. While females stay close to their mothers, males move farther off. The lifespan of wild cheetahs is 14 to 15 years for females, and their reproductive cycle typically ends by 12 years of age; males generally live as long as ten years.
Distribution and habitat
In eastern and southern Africa, the cheetah occurs mostly in savannas like the Kalahari and Serengeti. In central, northern and western Africa, it inhabits arid mountain ranges and valleys; in the harsh climate of the Sahara, it prefers high mountains, which receive more rainfall than the surrounding desert. The vegetation and water resources in these mountains support antelopes. In Iran, it occurs in hilly terrain of deserts at elevations up to 2,000–3,000 m, where annual precipitation is generally below 100 mm; the primary vegetation in these areas is thinly distributed shrubs, less than 1 m tall.
The cheetah inhabits a variety of ecosystems and appears to be less selective in habitat choice than other felids; it prefers areas with greater availability of prey, good visibility and minimal chances of encountering larger predators. It seldom occurs in tropical forests. It has been reported at elevations as high as 4,000 m. An open area with some cover, such as diffused bushes, is probably ideal for the cheetah because it needs to stalk and pursue its prey over a distance. This also minimises the risk of encountering larger carnivores. The cheetah tends to occur in low densities, typically between 0.3 and 3.0 adults per 100 km²; these values are 10–30% of those reported for leopards and lions.
Historical range
In prehistoric times, the cheetah was distributed throughout Africa, Asia and Europe. It gradually fell to extinction in Europe, possibly because of competition with the lion. Today the cheetah has been extirpated in most of its historical range; the numbers of the Asiatic cheetah had begun plummeting since the late 1800s, long before the other subspecies started their decline. As of 2017, cheetahs occur in just nine per cent of their erstwhile range in Africa, mostly in unprotected areas.
Until the mid-20th century, the cheetah ranged across vast stretches in Asia, from the Arabian Peninsula in the west to the Indian subcontinent in the east, and as far north as the Aral and Caspian Seas. A few centuries ago the cheetah was abundant in India, and its range coincided with the distribution of major prey like the blackbuck. However, its numbers in India plummeted from the 19th century onward; Divyabhanusinh of the Bombay Natural History Society notes that the last three individuals in the wild were killed by Maharaja Ramanuj Pratap Singh of Surguja in 1947. The last confirmed sighting in India was of a cheetah that drowned in a well near Hyderabad in 1957. In Iran there were around 400 cheetahs before World War II, distributed across deserts and steppes to the east and the borderlands with Iraq to the west; the numbers were falling because of a decline in prey. In Iraq, cheetahs were reported from Basra in the 1920s. Conservation efforts in the 1950s stabilised the population, but prey species declined again in the wake of the Iranian Revolution (1979) and the Iran–Iraq War (1980–1988), leading to a significant contraction of the cheetah's historical range in the region.
In 1975, the cheetah population was estimated at 15,000 individuals throughout Sub-Saharan Africa, following the first survey in this region by Norman Myers. The range covered most of eastern and southern Africa, except for the desert region on the western coast of modern-day Angola and Namibia. In the following years, cheetah populations across the region have become smaller and more fragmented as their natural habitat has been modified dramatically.
Present distribution
The cheetah occurs mostly in eastern and southern Africa; its presence in Asia is limited to the central deserts of Iran, though there have been unconfirmed reports of sightings in Afghanistan, Iraq and Pakistan in the last few decades. The global population of cheetahs was estimated at nearly 7,100 mature individuals in 2016. The Iranian population appears to have decreased from 60–100 individuals in 2007 to 43 in 2016, distributed in three subpopulations over less than in Iran's central plateau. The largest population of nearly 4,000 individuals is sparsely distributed over Angola, Botswana, Mozambique, Namibia, South Africa and Zambia. Another population in Kenya and Tanzania comprises about 1,000 individuals. All other cheetahs occur in small, fragmented groups of fewer than 100 individuals each. Populations are thought to be declining.
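The subpopulation figures quoted above can be checked against the 2016 global estimate with simple arithmetic; the Python sketch below is illustrative only, with the residual standing in for the many small fragmented groups:

```python
# Sanity check of the 2016 cheetah population figures quoted above.
global_estimate = 7100     # mature individuals, 2016 estimate
southern_africa = 4000     # Angola, Botswana, Mozambique, Namibia, South Africa, Zambia
kenya_tanzania = 1000
iran = 43

accounted = southern_africa + kenya_tanzania + iran
residual = global_estimate - accounted
print(f"Accounted for: {accounted}; remaining in groups of <100 each: ~{residual}")
# -> roughly 2,000 individuals spread across small fragmented subpopulations
```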
Threats
The cheetah is threatened by several factors, such as habitat loss and the fragmentation of populations. Habitat loss is caused mainly by the introduction of commercial land uses such as agriculture and industry, and is further aggravated by ecological degradation, like the woody plant encroachment common in southern Africa. Moreover, the species apparently requires a sizeable area to live in, as indicated by its low population densities. Shortage of prey and conflict with humans and with other large carnivores are further major threats. The cheetah appears to be less capable of coexisting with humans than the leopard. With 76% of its range consisting of unprotected land, the cheetah is often targeted by farmers and pastoralists attempting to protect their livestock, especially in Namibia. Illegal wildlife trade and trafficking is another problem in some places (like Ethiopia). Some tribes, like the Maasai people in Tanzania, have been reported to use cheetah skins in ceremonies. Roadkill is another threat, especially in areas where roads have been constructed near natural habitat or protected areas; cases of roadkill involving cheetahs have been reported from Kalmand, Touran National Park, and Bafq in Iran. Reduced genetic variability makes cheetahs more vulnerable to diseases, although the threat posed by infectious diseases may be minor, given the low population densities and hence a reduced chance of infection.
Conservation
The cheetah has been classified as Vulnerable by the IUCN; it is listed under Appendix I of the CMS and Appendix I of CITES. The US Endangered Species Act lists the cheetah as Endangered.
In Africa
Until the 1970s, cheetahs and other carnivores were frequently killed to protect livestock in Africa. Gradually the understanding of cheetah ecology increased and their falling numbers became a matter of concern. The De Wildt Cheetah and Wildlife Centre was set up in 1971 in South Africa to provide care for wild cheetahs regularly trapped or injured by Namibian farmers. By 1987, the first major research project to outline cheetah conservation strategies was underway. The Cheetah Conservation Fund (CCF), founded in 1990 in Namibia, has put effort into field research and global education about cheetahs. The CCF runs a cheetah genetics laboratory, the only one of its kind, in Otjiwarongo (Namibia); its "Bushblok" initiative restores habitat systematically through targeted bush thinning and biomass utilisation. Several more cheetah-specific conservation programmes have since been established, like Cheetah Outreach in South Africa.
The Global Cheetah Action Plan Workshop in 2002 emphasised the need for a range-wide survey of wild cheetahs to demarcate areas for conservation efforts, and the need to create awareness through training programmes. The Range Wide Conservation Program for Cheetah and African Wild Dogs (RWCP) began in 2007 as a joint initiative of the IUCN Cat and Canid Specialist Groups, the Wildlife Conservation Society and the Zoological Society of London. National conservation plans have been developed successfully for several African countries. In 2014, the CITES Standing Committee recognised the cheetah as a "species of priority" in its strategies in northeastern Africa to counter wildlife trafficking. In December 2016 the results of an extensive survey detailing the distribution and demography of cheetahs throughout the range were published; the researchers recommended listing the cheetah as Endangered on the IUCN Red List.
The cheetah was reintroduced in Malawi in 2017.
In Asia
In 2001, the Iranian government collaborated with the CCF, the IUCN, Panthera Corporation, UNDP and the Wildlife Conservation Society on the Conservation of Asiatic Cheetah Project (CACP) to protect the natural habitat of the Asiatic cheetah and its prey. In 2004, the Iranian Centre for Sustainable Development (CENESTA) conducted an international workshop to discuss conservation plans with local stakeholders. Iran declared 31 August as National Cheetah Day in 2006. The Iranian Cheetah Strategic Planning meeting in 2010 formulated a five-year conservation plan for Asiatic cheetahs. The CACP Phase II was implemented in 2009, and the third phase was drafted in 2018.
During the early 2000s, scientists from the Centre for Cellular and Molecular Biology (Hyderabad) proposed a plan to clone Asiatic cheetahs from Iran for reintroduction in India, but Iran denied the proposal. In September 2009, the Minister of Environment and Forests tasked the Wildlife Trust of India and the Wildlife Institute of India with examining the potential of importing African cheetahs to India. Kuno Wildlife Sanctuary and Nauradehi Wildlife Sanctuary were suggested as reintroduction sites for the cheetah because of their high prey density. However, plans for reintroduction were stalled in May 2012 by the Supreme Court of India because of a political dispute and concerns over introducing a non-native species to the country; opponents stated the plan was "not a case of intentional movement of an organism into a part of its native range". On 28 January 2020, the Supreme Court allowed the central government to introduce cheetahs to a suitable habitat in India on an experimental basis, to see if they could adapt. In 2020, India signed a memorandum of understanding with Namibia as part of Project Cheetah. In July 2022, it was announced that eight cheetahs would be transferred from Namibia to India in August; they were released into Kuno on 17 September 2022 by Prime Minister Narendra Modi. Since their introduction, 17 cubs have been born in India, but as of September 2024, eight adult cheetahs and four cubs have died.
Interaction with humans
Taming
The cheetah shows little aggression toward humans and can be tamed easily, as it has been since antiquity. The earliest known depictions of the cheetah are from the Chauvet Cave in France, dating back to 32,000–26,000 BC. According to historians such as Heinz Friederichs and Burchard Brentjes, the cheetah was first tamed in Sumer, and the practice gradually spread to central and northern Africa, from where it reached India. The evidence for this is mainly pictorial; for instance, a Sumerian seal dating back to , featuring a long-legged leashed animal, has fuelled speculation that the cheetah was first tamed in Sumer. However, Thomas Allsen argues that the depicted animal might be a large dog. Other historians, such as Frederick Zeuner, have opined that the ancient Egyptians were the first to tame the cheetah, from where it gradually spread into central Asia, Iran and India.
In comparison, the case for the cheetah's taming in Egypt is stronger, and includes timelines proposed on this basis. Mafdet, one of the ancient Egyptian deities worshipped during the First Dynasty (3100–2900 BC), was sometimes depicted as a cheetah. Ancient Egyptians believed the spirits of deceased pharaohs were taken away by cheetahs. Reliefs in the Deir el-Bahari temple complex tell of an expedition by Egyptians to the Land of Punt during the reign of Hatshepsut (1507–1458 BC) that fetched, among other things, animals called "panthers". During the New Kingdom (16th to 11th centuries BC), cheetahs were common pets for royalty, who adorned them with ornate collars and leashes. Rock carvings depicting cheetahs, dating back 2,000–6,000 years, have been found in Twyfelfontein; little else has been discovered in connection with the taming of cheetahs (or other cats) in southern Africa.
Hunting cheetahs are known from pre-Islamic Arabic art from Yemen. Hunting with cheetahs became more prevalent toward the seventh century AD. In the Middle East, the cheetah would accompany the nobility to hunts in a special seat on the back of the saddle. Taming was an elaborate process and could take a year to complete. The Romans may have referred to the cheetah by names implying a hybrid between a leopard and a lion, because of the mantle seen in cheetah cubs and the difficulty of breeding them in captivity. A Roman hunting cheetah is depicted in a 4th-century mosaic from Lod, Israel. Cheetahs continued to be used into the Byzantine period of the Roman empire, with "hunting leopards" being mentioned in the Cynegetica (283/284 AD).
In eastern Asia, records are confusing, as regional names for the leopard and the cheetah may be used interchangeably. The earliest depiction of cheetahs from eastern Asia dates back to the Tang dynasty (7th to 10th centuries AD); paintings depict tethered cheetahs and cheetahs mounted on horses. Chinese emperors would use cheetahs and caracals as gifts. In the 13th and 14th centuries, the Yuan rulers bought numerous cheetahs from the western parts of the empire and from Muslim merchants; the subsequent Ming dynasty (14th to 17th centuries) reportedly continued this practice. Tomb figurines from the Mongol empire, dating back to the reign of Kublai Khan (1260–1294 AD), represent cheetahs on horseback. The Mughal ruler Akbar the Great (1556–1605 AD) is said to have kept as many as 1,000 khasa (imperial) cheetahs. His son Jahangir wrote in his memoirs, Tuzk-e-Jahangiri, that only one of them gave birth. Mughal rulers trained cheetahs and caracals in a similar way to the western Asians, and used them to hunt game, especially blackbuck. The rampant hunting severely affected the populations of wild animals in India; by 1927, cheetahs had to be imported from Africa.
In captivity
The first cheetah brought into captivity in a zoo was held at the Zoological Society of London in 1829. Early captive cheetahs showed a high mortality rate, with an average lifespan of 3–4 years. After trade in wild cheetahs was restricted by the enforcement of CITES in 1975, more effort was put into captive breeding; in 2014 the number of captive cheetahs worldwide was estimated at 1,730 individuals, 87% of them born in captivity.
Mortality in captivity is generally high; in 2014, 23% of the captive cheetahs worldwide died under one year of age, mostly within a month of birth. Deaths result from several causes, including stillbirth, birth defects, cannibalism, hypothermia, maternal neglect, and infectious disease. Compared with other felids, cheetahs need specialised care because of their higher vulnerability to stress-induced diseases; this has been attributed to their low genetic variability and to the conditions of captive life. Common diseases of cheetahs include feline herpesvirus, feline infectious peritonitis, gastroenteritis, glomerulosclerosis, leukoencephalopathy, myelopathy, nephrosclerosis and veno-occlusive disease. High densities of cheetahs in one place, proximity to other large carnivores in enclosures, improper handling, exposure to the public and frequent movement between zoos can all be sources of stress. Recommended management practices include spacious enclosures with ample access to the outdoors, minimising stress through exercise and limited handling, and following proper hand-rearing protocols (especially for pregnant females).
Wild cheetahs are far more successful breeders than captive cheetahs; this has also been linked to increased stress levels in captive individuals. In a study in the Serengeti, females were found to have a 95% success rate in breeding, compared to 20% recorded for North American captive cheetahs in another study. On 26 November 2017, a female cheetah gave birth to eight cubs in the Saint Louis Zoo, setting a record for the most births recorded by the Association of Zoos and Aquariums. Chances of successful mating in captive males can be improved by replicating social groups such as coalitions observed in the wild.
Attacks on humans
There are no documented records of lethal attacks on humans by wild cheetahs. However, there have been instances of people being fatally mauled by captive cheetahs. In 2007, a 37-year-old woman from Antwerp was killed by a cheetah in a Belgian zoo after sneaking into its cage outside of visiting hours. In 2017, a three-year-old child was attacked by a captive cheetah on a farm in Philippolis, South Africa. Despite being airlifted to a hospital in Bloemfontein, the boy died from his injuries.
In culture
The cheetah has been widely portrayed in a variety of artistic works. In Bacchus and Ariadne, an oil painting by the 16th-century Italian painter Titian, the chariot of the Greek god Dionysus (Bacchus) is depicted as being drawn by two cheetahs. The cheetahs in the painting were previously considered to be leopards. In 1764, English painter George Stubbs commemorated the gifting of a cheetah to George III by the English Governor of Madras, Sir George Pigot, in his painting Cheetah with Two Indian Attendants and a Stag. The painting depicts a cheetah, hooded and collared by two Indian servants, along with a stag it was supposed to prey upon. The 1896 painting The Caress by the 19th-century Belgian symbolist painter Fernand Khnopff is a representation of the myth of Oedipus and the Sphinx and portrays a creature with a woman's head and a cheetah's body.
Two cheetahs are depicted standing upright and supporting a crown in the coat of arms of the Free State (South Africa).
In 1969, Joy Adamson, of Born Free fame, wrote The Spotted Sphinx, a biography of her pet cheetah Pippa. Hussein, An Entertainment, a novel by Patrick O'Brian set in the British Raj period in India, illustrates the practice of royalty keeping and training cheetahs to hunt antelopes. The book How It Was with Dooms tells the true story of a family raising an orphaned cheetah cub named Dooms in Kenya. The 2005 film Duma was based loosely on this book. The animated series ThunderCats had a character named "Cheetara", an anthropomorphic cheetah, voiced by Lynne Lipton. Comic book heroine Wonder Woman's chief adversary is Barbara Ann Minerva alias The Cheetah.
The Bill Thomas Cheetah, an American racing car and Chevrolet-based coupe first designed and driven in 1963, was an attempt to challenge Carroll Shelby's Shelby Cobra in American sports car competition of the 1960s. Because only two dozen or fewer chassis were built, with only a dozen complete cars, the Cheetah was never homologated for competition beyond prototype status; its production ended in 1966. In 1986, Frito-Lay introduced Chester Cheetah, an anthropomorphic cheetah, as the mascot for their snack food Cheetos. Mac OS X 10.0 was code-named "Cheetah".
| Biology and health sciences | Carnivora | null |
45648 | https://en.wikipedia.org/wiki/Nightjar | Nightjar | Nightjars are medium-sized nocturnal or crepuscular birds in the family Caprimulgidae and order Caprimulgiformes, characterised by long wings, short legs, and very short bills. They are sometimes called bugeaters, their primary source of food being insects. Some New World species are called nighthawks. The English word nightjar originally referred to the European nightjar.
Nightjars are found all around the world, with the exception of Antarctica, and certain island groups such as the Seychelles. They can be found in a variety of habitats, most commonly the open country with some vegetation. They usually nest on the ground, with a habit of resting and roosting on roads.
The subfamilies of nightjars have similar characteristics, including small feet, of little use for walking, and long, pointed wings. Typical nightjars have rictal bristles, longer bills, and softer plumage. The colour of their plumage and their unusual perching habits help conceal them during the day.
Systematics
Caprimulgiformes
Previously, all members of the orders Apodiformes, Aegotheliformes, Nyctibiiformes, Podargiformes, and Steatornithiformes were lumped alongside nightjars in the Caprimulgiformes. In 2021, the International Ornithological Congress redefined the Caprimulgiformes as only applying to nightjars, with potoos, frogmouths, oilbirds, and owlet-nightjars all being reclassified into their own orders. See Strisores for more info about the disputes over the taxonomy of Caprimulgiformes. A phylogenetic analysis found that the extinct family Archaeotrogonidae, known from the Eocene and Oligocene of Europe, are the closest known relatives of nightjars.
Caprimulgidae
Traditionally, nightjars have been divided into two subfamilies: the Caprimulginae, or typical nightjars with 79 known species, and the Chordeilinae, or nighthawks of the New World, with 10 known species. The groups are similar in most respects, but the typical nightjars have rictal bristles, longer bills, and softer plumage. The underside of the claw of the middle toe is comb-like with serrations. Their soft plumage is cryptically coloured to resemble bark or leaves, and some species, unusually for birds, perch along a branch rather than across it, helping to conceal them during the day.
The common poorwill, Phalaenoptilus nuttallii, is unique as a bird that undergoes a form of hibernation, becoming torpid and with a much reduced body temperature for weeks or months, although other nightjars can enter a state of torpor for shorter periods.
In their pioneering DNA–DNA hybridisation work, Charles Sibley and Jon E. Ahlquist found that the genetic difference between the eared nightjars and the typical nightjars was, in fact, greater than that between the typical nightjars and the nighthawks of the New World. Accordingly, they placed the eared nightjars in a separate family, the Eurostopodidae (9 known species), but the family has not yet been widely adopted.
Subsequent work, both morphological and genetic, has provided support for the separation of the typical and the eared nightjars, and some authorities have adopted this Sibley–Ahlquist recommendation, and also the more far-reaching one to group all the owls (traditionally Strigiformes) together in the Caprimulgiformes. The listing below retains a more orthodox arrangement, but recognises the eared nightjars as a separate group. For more detail and an alternative classification scheme, see Caprimulgiformes and Sibley–Ahlquist taxonomy.
†Ventivorus Mourer-Chauviré 1988
Subfamily Eurostopodinae
Genus Eurostopodus (7 species)
Genus Lyncornis (2 species)
Subfamily Caprimulginae (typical nightjars)
Genus Gactornis – collared nightjar
Genus Nyctipolus (2 species)
Genus Nyctidromus (2 species)
Genus Hydropsalis (4 species)
Genus Siphonorhis (2 species)
Genus Nyctiphrynus (4 species)
Genus Phalaenoptilus – common poorwill
Genus Antrostomus (12 species)
Genus Caprimulgus (40 species, including the European nightjar)
Genus Setopagis (4 species)
Genus Uropsalis (2 species)
Genus Macropsalis – long-trained nightjar
Genus Eleothreptus (2 species)
Genus Systellura (2 species)
Subfamily Chordeilinae (nighthawks)
Genus Chordeiles (6 species; includes Podager)
Genus Nyctiprogne (2 species)
Genus Lurocalis (2 species)
Also see a list of nightjars, sortable by common and binomial names.
Distribution and habitat
Nightjars inhabit all continents other than Antarctica, as well as some island groups such as Madagascar, the Seychelles, New Caledonia and the islands of the Caribbean. They are not known to live in extremely arid desert regions. Nightjars can occupy all elevations from sea level to , and a number of species are montane specialists. Nightjars occupy a wide range of habitats, from deserts to rainforests, but are most common in open country with some vegetation.
The nighthawks are confined to the New World, and the eared nightjars to Asia and Australia.
A number of species undertake migrations, although the secretive nature of the family may account for the incomplete understanding of their migratory habits. Species that live in the far north, such as the European nightjar or the common nighthawk, migrate southward with the onset of winter. Geolocators placed on European nightjars in southern England found they wintered in the south of the Democratic Republic of the Congo. Other species make shorter migrations.
Conservation and status
Some species of nightjars are threatened with extinction. Road-kills are thought to be a major cause of mortality for many members of the family because of their habit of resting and roosting on roads.
They also usually nest on the ground, laying one or two patterned eggs directly onto bare ground. It has been suggested that nightjars will move their eggs and chicks from the nesting site in the event of danger by carrying them in their mouths. This suggestion has been repeated many times in ornithology books, but surveys of nightjar research have found very little evidence to support the idea.
Developing conservation strategies for some species presents a particular challenge in that scientists do not have enough data to determine whether or not a species is endangered, owing to the difficulty of locating, identifying, and categorizing the limited number (e.g. 10,000) known to exist. A good example is Vaurie's nightjar in China's south-western Xinjiang Province, known from only a single individual examined in-hand. Surveys in the 1970s and 1990s failed to find the species, implying that it has become extinct, is endangered, or persists only in a few small areas.
In history and popular culture
Nighthawk as a name has been applied to numerous places, characters, and objects throughout history.
Nebraska's state nickname was once the "Bugeater State", and its people were sometimes called "bugeaters" (presumably after the common nighthawk). The Nebraska Cornhuskers college athletic teams were also briefly known as the Bugeaters before adopting their current name, which the state as a whole later adopted as well. A semi-professional soccer team in Nebraska now uses the Bugeaters moniker.
Nightjars feature prominently in the lyrics of the Elton John/Bernie Taupin song "Come Down in Time": "While a cluster of nightjars sang some songs out of tune". Sting, in an interview about this song and about Elton John, said, "It's a very beautiful song. ... I love Bernie's lyrics ... It is one of those songs you wish you had written...."
They are also featured prominently in the lyrics of Joanna Newsom's bird-heavy fourth album Divers; the opening track "Anecdotes" name-checks four different varieties (Rufous, Whip-poor-will, Star-Spotted and Sickle-Winged) and the final track ends with a repeated radio transmission to the fictional soldier Rufous Nightjar.
The Welsh name for the nightjar is "Troellwr Mawr", meaning "big spinner", referring to its whirling sound (the grass warbler is named "Troellwr Bach").
| Biology and health sciences | Caprimulgiformes | null |
45712 | https://en.wikipedia.org/wiki/Eggplant | Eggplant | Eggplant (US, CA, AU, NZ, PH), aubergine (UK, IE), brinjal (IN, SG, MY, ZA), or baigan (IN, GY) is a plant species in the nightshade family Solanaceae. Solanum melongena is grown worldwide for its edible fruit.
Most commonly purple, the spongy, absorbent fruit is used in several cuisines. Typically used as a vegetable in cooking, it is a berry by botanical definition. As a member of the genus Solanum, it is related to the tomato, chili pepper, and potato, although those are of the New World while the eggplant is of the Old World. Like the tomato, its skin and seeds can be eaten, but it is usually eaten cooked. Eggplant is nutritionally low in macronutrient and micronutrient content, but the capability of the fruit to absorb oils and flavors into its flesh through cooking expands its use in the culinary arts.
It was originally domesticated from the wild nightshade species thorn or bitter apple, S. incanum, probably with two independent domestications: one in South Asia, and one in East Asia. In 2021, world production of eggplants was 59 million tonnes, with China and India combined accounting for 86% of the total.
Description
The eggplant is a delicate, tropical perennial plant often cultivated as a tender or half-hardy annual in temperate climates. The stem is often spiny. The flowers are white to purple in color, with a five-lobed corolla and yellow stamens. Some common cultivars have fruit that is egg-shaped, glossy, and purple with white flesh and a spongy, "meaty" texture. Some other cultivars are white and longer in shape. The cut surface of the flesh rapidly turns brown when the fruit is cut open (oxidation).
Eggplant grows tall, with large, coarsely lobed leaves that are long and broad. Semiwild types can grow much larger, to , with large leaves over long and broad. On wild plants, the fruit is less than in diameter.
Botanically classified as a berry, the fruit contains numerous small, soft, edible seeds that taste bitter because they contain or are covered in nicotinoid alkaloids, like the related tobacco.
The eggplant genome has 12 chromosomes.
History
There is no consensus about the place of origin of eggplant; the plant species has been described as native to South Asia, where it continues to grow wild, or Africa. It has been cultivated in southern and eastern Asia since prehistory. The first known written record of the plant is found in Qimin Yaoshu, an ancient Chinese agricultural treatise completed in 544 CE.
Eggplant was introduced to Europe through the Iberian Peninsula, where it became a staple among Muslim and Jewish communities. The presence of numerous Arabic and North African names for the vegetable, coupled with the absence of ancient Greek and Roman names, suggests that it was cultivated in the Mediterranean area by Arabs during the early Middle Ages, arriving in Spain in the 8th century. A book on agriculture by Ibn Al-Awwam in 12th-century Muslim Spain described how to grow aubergines. Records exist from later medieval Catalan and Spanish, as well as from 14th-century Italy. Unlike its popularity in Spain and limited presence in southern Italy, the eggplant remained relatively obscure in other regions of Europe until the 17th century.
The aubergine is unrecorded in England until the 16th century. An English botany book of 1597 described it as the "madde or raging Apple".
The Europeans brought it to the Americas.
Because of the plant's relationship with various other nightshades, the fruit was at one time believed to be extremely poisonous. The flowers and leaves can be poisonous if consumed in large quantities due to the presence of solanine.
The eggplant has a special place in folklore. In 13th-century Italian traditional folklore, the eggplant can cause insanity. In 19th-century Egypt, insanity was said to be "more common and more violent" when the eggplant is in season in the summer.
Etymology and regional names
The plant and fruit have a profusion of English names.
Eggplant-type names
The name eggplant is usual in North American English and Australian English. First recorded in 1763, the word "eggplant" was originally applied to white cultivars, which look very much like hen's eggs (see image). Similar names are widespread in other languages, such as the Icelandic term eggaldin or the Welsh planhigyn ŵy.
The white, egg-shaped varieties of the eggplant's fruits are also known as garden eggs, a term first attested in 1811. The Oxford English Dictionary records that between 1797 and 1888, the name vegetable egg was also used.
Aubergine-type names
Whereas eggplant was coined in English, most of the diverse other European names for the plant derive from the Arabic bāḏinjān. Bāḏinjān is itself a loan-word in Arabic, whose earliest traceable origins lie in the Dravidian languages. The Hobson-Jobson dictionary comments that "probably there is no word of the kind which has undergone such extraordinary variety of modifications, whilst retaining the same meaning, as this".
In English usage, modern names deriving from Arabic bāḏinjān include:
Aubergine, usual in British English (as well as German, French and Dutch).
Brinjal or brinjaul, usual in South Asia and South African English.
Solanum melongena, the Linnaean name.
From Dravidian to Arabic
All the aubergine-type names have the same origin, in the Dravidian languages. Modern descendants of this ancient Dravidian word include Malayalam vaṟutina and Tamil vaṟutuṇai.
The Dravidian word was borrowed into the Indo-Aryan languages, giving ancient forms such as Sanskrit and Pali vātiṅ-gaṇa (alongside Sanskrit vātigama) and Prakrit vāiṃaṇa. According to the entry brinjal in the Oxford English Dictionary, the Sanskrit word vātin-gāna denoted 'the class (that removes) the wind-disorder (windy humour)': that is, vātin-gāna came to be the name for eggplants because they were thought to cure flatulence. The modern Hindustani words descending directly from the Sanskrit name are baingan and began.
The Indic word vātiṅ-gaṇa was then borrowed into Persian as bādingān. Persian bādingān was borrowed in turn into Arabic as bāḏinjān (or, with the definite article, al-bāḏinjān). From Arabic, the word was borrowed into European languages.
From Arabic into Iberia and beyond
In al-Andalus, the Arabic word (al-)bāḏinjān was borrowed into the Romance languages in forms beginning with b- or, with the definite article included, alb-:
Portuguese beringela, berinjela (and other historical spellings).
Spanish berenjena, alberenjena.
The Spanish word was then borrowed into French, giving aubergine (along with various French dialectal forms). The French name was then borrowed into British English, appearing there first in the late eighteenth century.
Through the colonial expansion of Portugal, the Portuguese form was borrowed into a variety of other languages:
Indian, Malaysian, Singaporean and South African English brinjal, brinjaul (first attested in the seventeenth century).
West Indian English brinjalle and (through folk-etymology) brown-jolly.
French bringelle in La Réunion.
Thus although Indian English brinjal ultimately originates in languages of the Indian Subcontinent, it actually came into Indian English via Portuguese.
From Arabic into Greek and beyond
The Arabic word bāḏinjān was borrowed into Greek by the eleventh century CE. The Greek loans took a variety of forms, but crucially they began with m-, partly because Greek lacked the initial b- sound and partly through folk-etymological association with the Greek word μέλας (melas), 'black'. Attested Greek forms include ματιζάνιον (matizanion, eleventh-century), μελιντζάνα (melintzana, fourteenth-century), and μελιντζάνιον (melintzanion, seventeenth-century).
From Greek, the word was borrowed into Italian and medieval Latin, and onwards into French. Early forms include:
Melanzāna, recorded in Sicilian in the twelfth century.
Melongena, recorded in Latin in the thirteenth century.
Melongiana, recorded in Veronese in the fourteenth century.
Melanjan, recorded in Old French.
From these forms came the botanical Latin melongēna. This was used by Tournefort as a genus name in 1700, then by Linnaeus as a species name in 1753. It remains in scientific use.
These forms also gave rise to the Caribbean English melongene.
The Italian melanzana, through folk-etymology, was adapted to mela insana ('mad apple'): already by the thirteenth century, this name had given rise to a tradition that eggplants could cause insanity. Translated into English as 'mad-apple', 'rage-apple', or 'raging apple', this name for eggplants is attested from 1578 and the form 'mad-apple' may still be found in Southern American English.
Other English names
The plant is also known as guinea squash in Southern American English. The term guinea in the name originally denoted the fact that the fruits were associated with West Africa, specifically the region that is now the modern day country Guinea.
It has been known as 'Jew's apple', apparently in relation to a belief that the fruit was first imported to the West Indies by Jewish people.
Cultivars
Different cultivars of the plant produce fruit of different size, shape, and color, though typically purple. The less common white varieties of eggplant are also known as Easter white eggplants, garden eggs, Casper or white eggplant. The most widely cultivated varieties—cultivars—in Europe and North America today are elongated ovoid, long and broad with a dark purple skin.
A much wider range of shapes, sizes, and colors is grown in India and elsewhere in Asia. Larger cultivars weighing up to a kilogram (2.2 pounds) grow in the region between the Ganges and Yamuna Rivers, while smaller ones are found elsewhere. Colors vary from white to yellow or green, as well as reddish-purple and dark purple. Some cultivars have a color gradient—white at the stem, to bright pink, deep purple or even black. Green or purple cultivars with white striping also exist. Chinese cultivars are commonly shaped like a narrower, slightly pendulous cucumber. Also, Asian cultivars of Japanese breeding are grown.
Oval or elongated oval-shaped and black-skinned cultivars include 'Harris Special Hibush', 'Burpee Hybrid', 'Bringal Bloom', 'Black Magic', 'Classic', 'Dusky', and 'Black Beauty'.
Slim cultivars in purple-black skin include 'Little Fingers', 'Ichiban', 'Pingtung Long', and 'Tycoon'.
In green skin, 'Louisiana Long Green' and 'Thai (Long) Green'.
In white skin, 'Dourga'.
Traditional, white-skinned, egg-shaped cultivars include 'Casper' and 'Easter Egg'.
Bicolored cultivars with color gradient include 'Rosa Bianca', 'Violetta di Firenze', 'Bianca Sfumata di Rosa' (heirloom), and 'Prosperosa' (heirloom).
Bicolored cultivars with striping include 'Listada de Gandia' and 'Udumalapet'.
In some parts of India, miniature cultivars, most commonly called baigan, are popular.
Varieties
S. m. var. esculentum – common aubergine, including white varieties, with many cultivars
S. m. var. depressum – dwarf aubergine
S. m. var. serpentium – snake aubergine
Genetically engineered eggplant
Bt brinjal is a transgenic eggplant that contains a gene from the soil bacterium Bacillus thuringiensis. This variety was designed to give the plant resistance to lepidopteran insects such as the brinjal fruit and shoot borer (Leucinodes orbonalis) and fruit borer (Helicoverpa armigera).
On 9 February 2010, the Environment Ministry of India imposed a moratorium on the cultivation of Bt brinjal after protests against regulatory approval of cultivated Bt brinjal in 2009, stating the moratorium would last "for as long as it is needed to establish public trust and confidence". This decision was deemed controversial, as it deviated from previous practices with other genetically modified crops in India. Bt brinjal was approved for commercial cultivation in Bangladesh in 2013.
Uses
Culinary
Raw eggplant can have a bitter taste, with an astringent quality, but it becomes tender when cooked and develops a rich, complex flavor. Rinsing, draining, and salting the sliced fruit before cooking may remove the bitterness. The fruit is capable of absorbing cooking fats and sauces, which may enhance the flavor of eggplant dishes.
Eggplant is used in the cuisines of many countries. Due to its texture and bulk, it is sometimes used as a meat substitute in vegan and vegetarian cuisines. Eggplant flesh is smooth. Its numerous seeds are small, soft and edible, along with the rest of the fruit, and do not have to be removed. Its thin skin is also edible, and so it does not have to be peeled. However, the green part at the top, the calyx, does have to be removed when preparing an eggplant for cooking.
Eggplant can be steamed, stir-fried, pan fried, deep fried, barbecued, roasted, stewed, curried, or pickled. Many eggplant dishes are sauces made by mashing the cooked fruit. It can be stuffed. It is frequently, but not always, cooked with oil or fat.
East Asia
Korean and Japanese eggplant varieties are typically thin-skinned.
In Chinese cuisine, eggplants are known as qiézi (茄子). They are often deep fried and made into dishes such as yúxiāng-qiézi ("fish fragrance eggplant") or dì sān xiān ("three earthen treasures"). Elsewhere in China, such as in Yunnan cuisine (in particular the cuisine of the Dai people), they are barbecued or roasted, then split and either eaten directly with garlic, chilli, oil and coriander, or the flesh is removed and pounded to a mash (typically with a wooden pestle and mortar) before being eaten with rice or other dishes.
In Japanese cuisine, eggplants are known as nasu or nasubi, written with the same characters as the Chinese (茄子). An example of its use is the dish hasamiyaki, in which slices of eggplant are grilled and filled with a meat stuffing. Eggplants also feature in several Japanese expressions and proverbs, such as the saying that autumn eggplants should not be fed to a daughter-in-law (because their lack of seeds will reduce her fertility).
In Korean cuisine, eggplants are known as gaji (가지). They are steamed, stir-fried, or pan-fried and eaten as banchan (side dishes), such as namul, bokkeum, and jeon.
Southeast Asia
In the Philippines, eggplants are of the long and slender purple variety. Known as talong, they are widely used in many stew and soup dishes, like pinakbet. The most popular eggplant dish, however, is tortang talong, an omelette made by grilling an eggplant, dipping it into beaten eggs, and pan-frying the mixture. The dish is characteristically served with the stalk attached. It has several variants, including rellenong talong, which is stuffed with meat and vegetables. Eggplant can also be grilled, skinned and eaten as a salad called ensaladang talong. Another popular dish is adobong talong, diced eggplant prepared with vinegar, soy sauce, and garlic as an adobo.
South Asia
Eggplant is widely used in its native India, for example in sambar (a tamarind lentil stew), dalma (a dal preparation with vegetables, native to Odisha), chutney, curry (vankai), and achaar (a pickled dish). Owing to its versatile nature and wide use in both everyday and festive Indian food, it is often described as the "king of vegetables". Roasting, skinning, and mashing the eggplant, mixing it with onions, tomatoes, and spices, and then slow-cooking it gives the South Asian dish baingan bharta or gojju, similar to salată de vinete in Romania. Another version of the dish, begun-pora (eggplant charred or burnt), is very popular in Bangladesh and the east Indian states of Odisha and West Bengal, where the pulp of the vegetable is mixed with raw chopped shallot, green chilies, salt, fresh coriander, and mustard oil. Sometimes fried tomatoes and deep-fried potatoes are also added, creating a dish called begun bhorta. In a dish from Maharashtra, small brinjals are stuffed with ground coconut, peanuts, onions, tamarind, jaggery and masala spices, and then cooked in oil. Maharashtra and the adjacent state of Karnataka also have an eggplant-based vegetarian pilaf called 'vangi bhat'.
Middle East and the Mediterranean
Eggplant is often stewed, as in the French ratatouille, or deep-fried as in the Italian parmigiana di melanzane, the Turkish karnıyarık, or Turkish, Greek, and Levantine musakka/moussaka, and Middle Eastern and South Asian dishes. Eggplants can also be battered before deep-frying and served with a sauce made of tahini and tamarind. In Iranian cuisine, it is blended with whey as kashk e bademjan, tomatoes as mirza ghassemi, or made into stew as khoresht-e-bademjan. It can be sliced and deep-fried, then served with plain yogurt (optionally topped with a tomato and garlic sauce), such as in the Turkish dish patlıcan kızartması (meaning fried aubergines), or without yogurt, as in patlıcan şakşuka. Perhaps the best-known Turkish eggplant dishes are imam bayıldı (vegetarian) and karnıyarık (with minced meat). It may also be roasted in its skin until charred, so the pulp can be removed and blended with other ingredients, such as lemon, tahini, and garlic, as in the Levantine baba ghanoush, Greek melitzanosalata, Moroccan zaalouk and Romanian salată de vinete. A mix of roasted eggplant, roasted red peppers, chopped onions, tomatoes, mushrooms, carrots, celery, and spices is called zacuscă in Romania, and ajvar or pinjur in the Balkans.
A Spanish dish called escalivada in Catalonia calls for strips of roasted aubergine, sweet pepper, onion, and tomato. In Andalusia, eggplant is mostly cooked thinly sliced, deep-fried in olive oil and served hot with honey (berenjenas a la Cordobesa). In the La Mancha region of central Spain, a small eggplant is pickled in vinegar, paprika, olive oil, and red peppers. The result is berenjena of Almagro, Ciudad Real. A Levantine specialty is makdous, another pickling of eggplants, stuffed with red peppers and walnuts in olive oil. Eggplant can be hollowed out and stuffed with meat, rice, or other fillings, and then baked. In Georgia, for example, it is fried and stuffed with walnut paste to make nigvziani badrijani.
In medieval Spain, eggplant, along with ingredients such as Swiss chard and chickpeas, was closely associated with Jewish cuisine. The Kitāb al-Ṭabikh, a 13th-century Andalusian cookbook, features eggplant as the main ingredient in fifteen out of its nineteen vegetable dishes, indicating its significance in the local cuisine at the time. Jewish communities in Spain prepared eggplant in various ways, including in dishes like almodrote, a casserole of eggplant and cheese. This dish and others became identifiers for Jews during their expulsion from Spain and the Inquisition, and they were carried by the expelled Jews to their new homes in the Ottoman Empire. The classic Judaeo-Spanish song "Siete modos de gizar la berendgena" lists various methods of preparing eggplant that persisted among Jews in the Ottoman Empire. Today, eggplant remains a defining ingredient of Sephardic Jewish cuisine.
Iran
In Iranian cuisine, eggplant (called bādenjān or bādemjān in Persian) can be used in both appetizers and main courses. It can also be pickled in vinegar. The ideal eggplant in Iranian cuisine is long, straight, firm, and black. Based on how al-Razi uses the color of eggplant as a shorthand for purpleness in his Kitab al-hawi, it can be assumed that the dark purple kind of eggplant was the widely grown variety in Iran at his time (9th century). Its importance in Iran is alluded to in the Ain-i-Akbari of Abu'l-Fazl ibn Mubarak, which says "this vegetable is on sale in the markets in Iran all the year round and in such abundance that it is sold for 1.5 dams per seer" (which was a cheap price at that time).
In Iran, unlike places like Greece, Turkey, and North Africa, eggplant is cooked peeled and usually seasoned with cinnamon or especially turmeric. Most eggplant dishes are classified as nankhoreshi (eaten with bread), and they are commonly served as snacks alongside alcoholic beverages.
The 14th-century poet Boshaq At'ema refers to an early eggplant dish called burani-e badenjan: chopped eggplant sautéed with onions and turmeric, then slowly cooked, and finally mixed with yogurt. The combination of eggplant and kashk (condensed whey) is popular in Iranian cuisine; it is found in dishes like kashk o badenjan as well as ash-e kashk o badenjan (involving layers of sautéed eggplant, grilled onions, and red beans topped by kashk seasoned with turmeric). Another eggplant dish is mast o badenjan, also known as nazkhatun in Tehran, which involves eggplant, yogurt, and dried mint. Eggplant can also be cooked in stews (khoreshes), either with lamb (khoresh-e badenjan) or with chicken and either unripe grapes or pomegranate juice (mosamma-ye badenjan). Variants of ab-gusht, eshkana, fesenjan, and kuku also make use of eggplant. Some regional dishes involving eggplant include badenjan-polow, a dish mainly from Fars and Kerman that combines white rice with a paste of chopped sautéed eggplant, chopped meat, and spices; as well as the northern Iranian badenjan-e qasemi, a casserole using grilled eggplant, garlic, tomatoes, and eggs.
Eggplants are traditionally among the foods that get preserved and stored for winter in Iran. They are selected in the last month of summer, when they are most readily available, then peeled, and finally preserved in one of two ways. In the first way, the peeled eggplants are cut, salted, and left to "sweat" (to make them less bilious); then they are sun-dried by hanging them on a line. The dried eggplants are then rehydrated 24 hours before being cooked. In the second way, the peeled eggplants are cooked in oil, put in a copper pot, and finally covered with plenty of hot oil, "which congeals to seal them".
Medieval Iranian writers such as al-Razi and al-Biruni cautioned that eggplant contains harmful qualities, and it must be ripe and cooked before eating to neutralize them. They wrote that it could cause heat and dryness and an excess of black bile, contributing to a wide range of health problems. If the "salt" in it was removed, or it was cooked in oil or vinegar, then they wrote that eggplant gained healthy attributes. Present-day Iranian attitudes to the eggplant reflect this medical tradition's influence: the eggplant is "considered rather dangerous... a cook in Tehran will say that the poison must be taken out". People also use eggplant seeds as an expectorant to relieve asthma and catarrh.
Nutrition
Raw eggplant is 92% water, 6% carbohydrates, 1% protein, and has negligible fat (table). It provides low amounts of essential nutrients, with only manganese having a moderate percentage (10%) of the Daily Value. Minor changes in nutrient composition occur with season, environment of cultivation (open field or greenhouse), and genotype.
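Because the composition is given per 100 g, an approximate energy value follows from the standard Atwater factors; the sketch below is a rough estimate under that convention, not a figure from the source:

```python
# Hedged estimate of raw eggplant energy per 100 g, using the composition
# quoted above and standard Atwater factors (4 kcal/g for carbohydrate and
# protein, 9 kcal/g for fat).
carbs_g, protein_g, fat_g = 6.0, 1.0, 0.0   # grams per 100 g; fat is negligible
kcal = 4 * carbs_g + 4 * protein_g + 9 * fat_g
print(f"~{kcal:.0f} kcal per 100 g")  # ~28 kcal; published values are around 25 kcal
```

The slight overestimate versus published values is expected, since part of the carbohydrate fraction is fibre, which yields less energy.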
Cultivation and pests
In tropical and subtropical climates, eggplant can be sown in the garden. Eggplant grown in temperate climates fares better when transplanted into the garden after all danger of frost has passed. Eggplant prefers hot weather, and when grown in cold climates or in areas with low humidity, the plants languish or fail to set and produce mature fruit. Seeds are typically started eight to 10 weeks prior to the anticipated frost-free date. S. melongena is included on a list of low flammability plants, indicating that it is suitable for growing within a building protection zone.
Spacing should be between plants, depending on cultivar, and between rows, depending on the type of cultivation equipment being used. Mulching helps conserve moisture and prevent weeds and fungal diseases and the plants benefit from some shade during the hottest part of the day. Hand pollination by shaking the flowers improves the set of the first blossoms. Growers typically cut fruits from the vine just above the calyx owing to the somewhat woody stems. Flowers are complete, containing both female and male structures, and may be self- or cross-pollinated.
Many of the pests and diseases that afflict other solanaceous plants, such as tomato, capsicum, and potato, are also troublesome to eggplants. For this reason, it should generally not be planted in areas previously occupied by its close relatives. However, since eggplants can be particularly susceptible to pests such as whiteflies, they are sometimes grown with slightly less susceptible plants, such as chili pepper, as a sacrificial trap crop. Four years should separate successive crops of eggplants to reduce pest pressure.
Common North American pests include the potato beetles, flea beetles, aphids, whiteflies, and spider mites. Good sanitation and crop rotation practices are extremely important for controlling fungal disease, the most serious of which is Verticillium.
The potato tuber moth (Phthorimaea operculella) is an oligophagous insect that prefers to feed on plants of the family Solanaceae such as eggplants. Female P. operculella use the leaves to lay their eggs and the hatched larvae will eat away at the mesophyll of the leaf.
Several different Phytoplasmas cause little leaf of brinjal, which is agriculturally significant in South Asia. This is spread by the leafhopper Hishimonus phycitis.
Production
In 2022, world production was 59 million tonnes, led by China with 65% and India with 22% (table).
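Applying the quoted shares to the world total gives the absolute tonnages; a minimal Python sketch:

```python
# Converting the quoted 2022 production shares into absolute tonnages.
world_total = 59.0                      # million tonnes
shares = {"China": 0.65, "India": 0.22}

for country, share in shares.items():
    print(f"{country}: ~{world_total * share:.1f} million tonnes")
# China: ~38.4 Mt, India: ~13.0 Mt; together about 87% of world production
```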
Chemistry
The color of purple skin cultivars is due to the anthocyanin nasunin.
The browning of eggplant flesh results from the oxidation of polyphenols, such as the most abundant phenolic compound in the fruit, chlorogenic acid.
Allergies
Case reports of itchy skin or mouth, mild headache, and stomach upset after handling or eating eggplant have been reported anecdotally and published in medical journals (see also oral allergy syndrome). A 2021 review indicated that possibly four interacting mechanisms may elicit an allergic response from consuming eggplant: lipid transfer protein, profilin, polyphenol oxidase, and pollen reactions.
A 2008 study of a sample of 741 people in India, where eggplant is commonly consumed, found nearly 10% reported some allergic symptoms after consuming eggplant, with 1.4% showing symptoms within two hours. Contact dermatitis from eggplant leaves and allergy to eggplant flower pollen have also been reported.
Individuals who are atopic (genetically predisposed to developing certain allergic hypersensitivity reactions) are more likely to have a reaction to eggplant, which may be because eggplant is high in histamines. Cooking eggplant thoroughly seems to preclude reactions in some individuals, but some of the allergenic proteins may survive the cooking process.
Taxonomy
The eggplant is quite often featured in the older scientific literature under the junior synonyms S. ovigerum and S. trongum. Several other names that are now invalid have been uniquely applied to it:
Melongena ovata
Solanum album
Solanum insanum
Solanum longum
Solanum melanocarpum
Solanum melongenum
Solanum oviferum
A number of subspecies and varieties have been named, mainly by Dikii, Dunal, and (invalidly) by Sweet. Names for various eggplant types are not considered to refer to anything more than cultivar groups at best. However, Solanum incanum and cockroach berry (S. capsicoides), other eggplant-like nightshades described by Linnaeus and Allioni, respectively, were occasionally considered eggplant varieties, but this is not correct.
The eggplant has a long history of taxonomic confusion with the scarlet and Ethiopian eggplants (Solanum aethiopicum), known as gilo and nakati, respectively, and described by Linnaeus as S. aethiopicum. The eggplant was sometimes considered a variety violaceum of that species. S. violaceum of de Candolle applies to Linnaeus' S. aethiopicum. An actual S. violaceum, an unrelated plant described by Ortega, included Dunal's S. amblymerum and was often confused with the same author's S. brownii.
Like the potato and S. lichtensteinii, but unlike the tomato, which then was generally put in a different genus, the eggplant was also described as S. esculentum, in this case once more in the course of Dunal's work. He also recognized the varieties aculeatum, inerme, and subinerme at that time. Similarly, H.C.F. Schuhmacher and Peter Thonning named the eggplant as S. edule, which is also a junior synonym of sticky nightshade (S. sisymbriifolium). Scopoli's S. zeylanicum refers to the eggplant, and that of Blanco to S. lasiocarpum.
| Biology and health sciences | Solanales | null |
45714 | https://en.wikipedia.org/wiki/Horseradish | Horseradish | Horseradish (Armoracia rusticana, syn. Cochlearia armoracia) is a perennial plant of the family Brassicaceae (which also includes mustard, wasabi, broccoli, cabbage, and radish). It is a root vegetable, cultivated and used worldwide as a spice and as a condiment. The species is probably native to Southeastern Europe and Western Asia.
Description
Horseradish grows up to tall, with hairless bright green unlobed leaves up to long that may be mistaken for docks (Rumex). It is cultivated primarily for its large, white, tapered root. The white four-petalled flowers are scented and are borne in dense panicles. Established plants may form extensive patches and may become invasive unless carefully managed.
Intact horseradish root has little aroma. When cut or grated, enzymes from within the plant cells digest sinigrin (a glucosinolate) to produce allyl isothiocyanate (mustard oil), which irritates the mucous membranes of the sinuses and eyes. Once exposed to air or heat, horseradish loses its pungency, darkens in color, and develops a bitter flavor.
History
Horseradish has been cultivated since antiquity. Dioscorides listed horseradish equally as Persicon sinapi (Diosc. 2.186) or Sinapi persicum (Diosc. 2.168), which Pliny's Natural History reported as Persicon napy; Cato discusses the plant in his treatises on agriculture. A mural in Ostia Antica shows the plant. Horseradish is probably the plant mentioned by Pliny the Elder in his Natural History under the name of Amoracia, and recommended by him for its medicinal qualities, and possibly the wild radish, or raphanos agrios of the Greeks. The early Renaissance herbalists Pietro Andrea Mattioli and John Gerard showed it under Raphanus. Its modern Linnaean genus Armoracia was first applied to it by Heinrich Bernhard Ruppius, in his Flora Jenensis, 1745, but Linnaeus himself called it Cochlearia armoracia.
Both roots and leaves were used as a traditional medicine during the Middle Ages. The root was used as a condiment on meats in Germany, Scandinavia, and Britain. It was introduced to North America during European colonization; both George Washington and Thomas Jefferson mention horseradish in garden accounts. Native Americans used it to stimulate the glands, stave off scurvy, and as a diaphoretic treatment for the common cold.
William Turner mentions horseradish as Red Cole in his Herbal (1551–1568), but not as a condiment. In The Herball, or Generall Historie of Plantes (1597), John Gerard describes it under the name of raphanus rusticanus, stating that it occurs wild in several parts of England, and refers to its medicinal uses.
Etymology and common names
The word horseradish is attested in English from the 1590s. It combines the word horse (formerly used in a figurative sense to mean strong or coarse) and the word radish. Some sources say that the term originates from a mispronunciation of the German word "meerrettich" as "mareradish". However, this hypothesis has been disputed, as there is no historical evidence of this term being used.
In Slavic languages, the word for mustard derives from a root meaning fire or burning, often used metaphorically to refer to spicy or bitter foods. The Czech word for mustard, for example, is hořčice; hořký is the adjectival form, meaning hot (spicy) or bitter. Horseradish is a plant in the mustard family, and the first syllable horse- in English appears to be a cognate of, or borrowing from, the Slavic root. This likely derivation may frequently be neglected in Central and Eastern Europe, since the Slavic words for horseradish, chren, hren and ren (in various spellings like kren), are distinct in many Slavic languages from the words for mustard referenced above. Forms of kren are used in Austria, in parts of Germany (where the other German name Meerrettich is not used), in North-East Italy, and in Yiddish (כריין, transliterated as khreyn). It is common in Ukraine (as khrin), in Belarus (as chren), in Poland (as chrzan), in Czechia (as křen), in Slovakia (as chren), in Russia (as khren), in Hungary (as torma), in Romania (as hrean), in Lithuania (as krienas), and in Bulgaria (as khryan).
Cultivation
Horseradish is perennial in hardiness zones 2–9 and can be grown as an annual in other zones, although not as successfully as in zones with both a long growing season and winter temperatures cold enough to ensure plant dormancy. After the first frost in autumn kills the leaves, the root is dug and divided. The main root is harvested and one or more large offshoots of the main root are replanted to produce next year's crop. Horseradish left undisturbed in the garden spreads via underground shoots and can become invasive. Older roots left in the ground become woody, after which they are no longer culinarily useful, although older plants can be dug and re-divided to start new plants. The early-season leaves can be distinctively different, asymmetric and spiky, before the typical flat, broad mature leaves develop.
Pests and diseases
Introduced by accident, "cabbageworms", the larvae of Pieris rapae, are a common caterpillar pest in horseradish. Mature caterpillars chew large, ragged holes in the leaves leaving the large veins intact. Handpicking is an effective control strategy in home gardens. Another common pest of horseradish is the mustard leaf beetle (Phaedon cochleariae). These beetles are undeterred by the defense mechanisms produced by Brassicaceae plants like horseradish.
Production
In the United States, horseradish is grown in several areas such as Eau Claire, Wisconsin, and Tule Lake, California. The most concentrated growth occurs in the Collinsville, Illinois region.
European production of horseradish totals 30,000 metric tonnes annually, of which Hungary produces 12,000 tonnes, making it the biggest single producer.
Culinary uses
The distinctive pungent taste of horseradish comes from the compound allyl isothiocyanate. When the flesh of horseradish is crushed, the enzyme myrosinase is released and acts on the glucosinolates sinigrin and gluconasturtiin, which are precursors of allyl isothiocyanate. Allyl isothiocyanate serves the plant as a natural defense against herbivores. Since allyl isothiocyanate is harmful to the plant itself, it is stored in the harmless form of the glucosinolate, separate from the enzyme myrosinase; when an animal chews the plant, the allyl isothiocyanate is released, repelling the animal. Allyl isothiocyanate is an unstable compound, degrading over a period of days. Because of this instability, horseradish sauces lack the pungency of the freshly crushed root.
Cooks may use the terms "horseradish" or "prepared horseradish" to refer to the mashed or grated root of the horseradish plant mixed with vinegar. Prepared horseradish is white to creamy-beige in color. It can be stored for up to three months under refrigeration, but eventually darkens, indicating diminished flavor. The leaves of the plant are edible, either cooked or raw when young, with a flavor similar to, but weaker than, that of the root.
On Passover, many Ashkenazi Jews use grated horseradish as their choice of maror (bitter herb) at the Passover Seder.
Horseradish sauce
Horseradish sauce made from grated horseradish root and vinegar is a common condiment in the United Kingdom, in Denmark (with sugar added) and in Poland. In the UK, it is usually served with roast beef, often as part of a traditional Sunday roast, but can be used in a number of other dishes, including sandwiches or salads. A variation of horseradish sauce, which in some cases may replace the vinegar with other products like lemon juice or citric acid, is known in Germany as Tafelmeerrettich. Also available in the UK is Tewkesbury mustard, a blend of mustard and grated horseradish originating in medieval times and mentioned by Shakespeare (Falstaff says: "his wit's as thick as Tewkesbury Mustard" in Henry IV Part II). A similar mustard, called Krensenf or Meerrettichsenf, is common in Austria and parts of Germany. In France, sauce au raifort is used in Alsatian cuisine. In Russia, horseradish root is usually mixed with grated garlic and a small amount of tomatoes for color (Khrenovina sauce).
In the United States, the term "horseradish sauce" refers to grated horseradish combined with mayonnaise or salad dressing. In Denmark, horseradish is mixed with whipped cream and used to top traditional Danish open sandwiches with slices of boiled beef or steak. Prepared horseradish is a common ingredient in Bloody Mary cocktails and in cocktail sauce, and is used as a sauce or sandwich spread. Horseradish cream is a mixture of horseradish and sour cream, often served with prime rib au jus.
Vegetable
In Europe, there are two varieties of chrain: "red" chrain is mixed with red beetroot, and "white" chrain contains no beetroot. Chrain is a part of the Christian Easter and Jewish Passover traditions (as maror) in Eastern and Central Europe. In the Christian tradition, horseradish is eaten during Eastertide (Paschaltide) as "a reminder of the bitterness of Jesus' suffering" on Good Friday.
In parts of Southern Germany, Kren is a component of the traditional wedding dinner: it is served with cooked beef and a lingonberry dip to balance its slight hotness.
In Poland, a variety with red beetroot is called ćwikła z chrzanem, or simply ćwikła.
In Russia, horseradish is a very popular ingredient in pickled vegetables (cucumbers, tomatoes, mushrooms).
In Ashkenazi European Jewish cooking, beetroot horseradish is commonly served with gefilte fish.
In Transylvania and other Romanian regions, red beetroot with horseradish, called sfecla cu hrean, is used as a salad served with lamb dishes at Easter.
In Serbia, ren is an essential condiment with cooked meat and freshly roasted suckling pig.
In Croatia, freshly grated horseradish (Croatian: Hren) is often eaten with boiled ham or beef.
In Hungary, Slovenia, and the adjacent Italian regions of Friuli-Venezia Giulia and Veneto, horseradish (often grated and mixed with sour cream, vinegar, hard-boiled eggs, or apples) is also a traditional Easter dish.
In the Italian regions of Lombardy, Emilia-Romagna, and Piedmont, it is called barbaforte (strong beard) and is a traditional accompaniment to bollito misto; while in northeastern regions like Trentino-Alto Adige/Südtirol, Veneto and Friuli-Venezia Giulia, it is still called kren or cren. In the southern region of Basilicata it is known as rafano and used for the preparation of rafanata, a main course made of horseradish, eggs, cheese and sausage.
Horseradish is also used as a main ingredient for soups. In Poland, horseradish soup is a common Easter Day dish.
Relation to wasabi
Outside Japan, the Japanese condiment wasabi, although traditionally prepared from the true wasabi plant (Wasabia japonica), is now usually made with horseradish due to the scarcity of the wasabi plant. The Japanese botanical name for horseradish is seiyō wasabi (セイヨウワサビ), or "Western wasabi". Both plants are members of the family Brassicaceae.
Nutritional content
In a 100-gram reference amount, prepared horseradish provides 48 calories and is a rich source of vitamin C, with moderate contents of sodium, folate, and dietary fiber; other essential nutrients are negligible in content. In a typical serving of one tablespoon (15 grams), horseradish supplies no significant nutrient content.
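The serving-size caveat follows from simple proportional arithmetic: a 15-gram tablespoon contains only 15% of each per-100 g amount (for example, 48 kcal × 0.15 ≈ 7 kcal). The short Python sketch below illustrates the scaling; only the 48-calorie figure comes from the text above, and the dictionary key and helper function are illustrative placeholders rather than references to any real nutrition database.

# Scale per-100 g nutrient figures to a one-tablespoon (15 g) serving.
# Only the 48 kcal figure is from the text; other entries would need
# to be filled in from a food-composition database.
per_100g = {"energy_kcal": 48}

def per_serving(nutrients, serving_g=15.0):
    """Return nutrient amounts scaled from per-100 g values to serving_g grams."""
    factor = serving_g / 100.0
    return {name: round(value * factor, 1) for name, value in nutrients.items()}

print(per_serving(per_100g))  # {'energy_kcal': 7.2}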
Horseradish contains volatile oils, notably mustard oil.
Biomedical uses
The enzyme horseradish peroxidase (HRP), found in the plant, is used extensively in molecular biology and biochemistry, primarily for its ability to amplify a weak signal and increase the detectability of a target molecule. Over decades of research, HRP has been used to visualize under microscopy, and to assess non-quantitatively, the permeability of capillaries, particularly those of the brain.
| Biology and health sciences | Brassicales | null |
45715 | https://en.wikipedia.org/wiki/Arecaceae | Arecaceae | The Arecaceae is a family of perennial flowering plants in the monocot order Arecales. Their growth forms include climbers, shrubs, tree-like plants, and stemless plants, all commonly known as palms. Those having a tree-like form are colloquially called palm trees. Currently, 181 genera with around 2,600 species are known, most of which are restricted to tropical and subtropical climates. Most palms are distinguished by their large, compound, evergreen leaves, known as fronds, arranged at the top of an unbranched stem; the genus Hyphaene is an exception, having branched stems. However, palms exhibit an enormous diversity of physical characteristics and inhabit nearly every type of habitat within their range, from rainforests to deserts.
Palms are among the best known and most extensively cultivated plant families. They have been important to humans throughout much of history, especially in regions like the Middle East and North Africa. A wide range of common products and foods are derived from palms. In contemporary times, palms are also widely used in landscaping. In many historical cultures, because of their importance as food, palms were symbols for such ideas as victory, peace, and fertility.
Etymology
The word Arecaceae is derived from the word areca with the suffix "-aceae". Areca is derived from Portuguese, via Malayalam അടയ്ക്ക (aṭaykka), which is from Dravidian *aṭ-ay-kkāy ("areca nut"). The suffix -aceae is the feminine plural of the Latin -āceus ("resembling").
Palm originates from Latin palma, semantically overlapping with the sense of "palm of the hand" (due to the similar splayed shape), and derives ultimately from Proto-Indo-European *pl̥h₂meh₂; a direct descendant of the word once existed in Old English.
Morphology
Whether as shrubs, tree-like plants, or vines, palms have two methods of growth: solitary or clustered. The common representation is that of a solitary shoot ending in a crown of leaves. This monopodial character may be exhibited by prostrate, trunkless, and trunk-forming members. Some common palms restricted to solitary growth include Washingtonia and Roystonea. Palms may instead grow in sparse to dense clusters: the trunk develops an axillary bud at a leaf node, usually near the base, from which a new shoot emerges; the new shoot, in turn, produces an axillary bud, and a clustering habit results. Exclusively sympodial genera include many of the rattans, Guihaia, and Rhapis. Several palm genera have both solitary and clustering members, and palms which are usually solitary may grow in clusters, and vice versa.
Palms have large, evergreen leaves that are either palmately ('fan-leaved') or pinnately ('feather-leaved') compound and spirally arranged at the top of the stem. The leaves have a tubular sheath at the base that usually splits open on one side at maturity. The inflorescence is a spadix or spike surrounded by one or more bracts or spathes that become woody at maturity. The flowers are generally small and white, radially symmetric, and can be either uni- or bisexual. The sepals and petals usually number three each and may be distinct or joined at the base. The stamens generally number six, with filaments that may be separate, attached to each other, or attached to the pistil at the base. The fruit is usually a single-seeded drupe (sometimes berry-like) but some genera (e.g., Salacca) may contain two or more seeds in each fruit.
Like all monocots, palms do not have the ability to increase the width of a stem (secondary growth) via the same kind of vascular cambium found in non-monocot woody plants. This explains the cylindrical shape of the trunk (almost constant diameter) that is often seen in palms, unlike in ring-forming trees. However, many palms, like some other monocots, do have secondary growth, although because it does not arise from a single vascular cambium producing xylem inwards and phloem outwards, it is often called "anomalous secondary growth".
The Arecaceae are notable among monocots for their height and for the size of their seeds, leaves, and inflorescences. Ceroxylon quindiuense, Colombia's national "tree", is the tallest monocot in the world. The coco de mer (Lodoicea maldivica) has the largest seeds of any plant (coconuts are the second largest). Raffia palms (Raphia spp.) have the largest leaves of any plant, and the Corypha species have the largest inflorescence of any plant, containing millions of small flowers. Calamus stems can reach exceptional lengths.
Range and habitat
Most palms are native to tropical and subtropical climates. Palms thrive in moist and hot climates but can be found in a variety of different habitats. Their diversity is highest in wet, lowland forests. South America, the Caribbean, and areas of the South Pacific and southern Asia are regions of concentration. Colombia may have the highest number of palm species in one country. There are some palms that are also native to desert areas such as the Arabian Peninsula and parts of northwestern Mexico. Only about 130 palm species naturally grow entirely beyond the tropics, mostly in humid lowland subtropical climates, in highlands in southern Asia, and along the rim lands of the Mediterranean Sea. The northernmost native palm is Chamaerops humilis, which reaches 44°N latitude along the coast of Liguria, Italy. In the southern hemisphere, the southernmost palm is the Rhopalostylis sapida, which reaches 44°S on the Chatham Islands where an oceanic climate prevails. Cultivation of palms is possible north of subtropical climates, and some higher latitude locales such as Ireland, Scotland, England, and the Pacific Northwest feature a few palms in protected locations and microclimates. In the United States, there are at least 12 native palm species, mostly occurring in the states of the Deep South and Florida.
Palms inhabit a variety of ecosystems. More than two-thirds of palm species live in humid moist forests, where some species grow tall enough to form part of the canopy and shorter ones form part of the understory. Some species form pure stands in areas with poor drainage or regular flooding, including Raphia hookeri, which is common in coastal freshwater swamps in West Africa. Other palms live in high-elevation tropical mountain habitats, such as those in the genus Ceroxylon native to the Andes. Palms may also live in grasslands and scrublands, usually associated with a water source, and in desert oases, as in the case of the date palm. A few palms are adapted to extremely basic lime soils, while others are similarly adapted to extreme potassium deficiency and to the toxicity of heavy metals in serpentine soils.
Taxonomy
Palms are a monophyletic group of plants, meaning the group consists of a common ancestor and all its descendants. Extensive taxonomic research on palms began with botanist H.E. Moore, who organized palms into 15 major groups based mostly on general morphological characteristics. The following classification, proposed by N.W. Uhl and J. Dransfield in 1987, is a revision of Moore's classification that organizes palms into 6 subfamilies. A few general traits of each subfamily are listed below.
Subfamily Arecoideae is the largest subfamily, with 14 tribes containing over 100 genera. All tribes have pinnate or bipinnate leaves and flowers arranged in groups of three, with a central pistillate flower and two staminate flowers.
Subfamily Calamoideae includes the climbing palms, such as rattans. The leaves are usually pinnate; derived characters (synapomorphies) include spines on various organs, organs specialized for climbing, an extension of the main stem of the leaf bearing reflexed spines, and overlapping scales covering the fruit and ovary.
Subfamily Ceroxyloideae has small to medium-sized flowers, spirally arranged, with a gynoecium of three joined carpels.
Subfamily Coryphoideae is the second-largest subfamily, with 8 tribes. Most palms in this subfamily have palmately lobed leaves and solitary flowers with three, or sometimes four, carpels. The fruit normally develops from only one carpel.
Subfamily Nypoideae contains only one species, Nypa fruticans, which has large, pinnate leaves. The fruit is unusual in that it floats, and the stem is underground and dichotomously branched, which is also unusual in palms.
Subfamily Phytelephantoideae is the sixth subfamily of Arecaceae in N.W. Uhl and J. Dransfield's 1987 classification. Members of this group have distinct monopodial flower clusters. Other distinctive features include a gynoecium with five to ten joined carpels, and flowers with more than three parts per whorl. Fruits are multiple-seeded and have multiple parts. Under modern phylogenomic data, the Phytelephantoideae are treated as a tribe within the subfamily Ceroxyloideae.
Currently, few extensive phylogenetic studies of the Arecaceae exist. In 1997, Baker et al. explored subfamily and tribe relationships using chloroplast DNA from 60 genera from all subfamilies and tribes. The results strongly supported the monophyly of the Calamoideae and indicated that the Ceroxyloideae and Coryphoideae are paraphyletic. The relationships of the Arecoideae are uncertain, but they are possibly related to the Ceroxyloideae and Phytelephantoideae. Studies have suggested that the lack of a fully resolved hypothesis for the relationships within the family is due to a variety of factors, including difficulties in selecting appropriate outgroups, homoplasy in morphological character states, slow rates of molecular evolution that limit the usefulness of standard DNA markers, and character polarization. However, hybridization has been observed among Orbignya and Phoenix species, and the use of chloroplast DNA in cladistic studies may produce inaccurate results due to the maternal inheritance of chloroplast DNA. Chemical and molecular data from non-organelle DNA, for example, could be more effective for studying palm phylogeny.
Recently, nuclear genomes and transcriptomes have been used to reconstruct the phylogeny of palms. This has revealed, for example, that a whole-genome duplication event occurred early in the evolution of the Arecaceae lineage, that was not experienced by its sister clade, the Dasypogonaceae.
For a phylogenetic tree of the family, see the list of Arecaceae genera.
Selected genera
Archontophoenix—Bangalow palm
Areca—Betel palm
Astrocaryum
Attalea
Bactris—Pupunha
Beccariophoenix—Beccariophoenix alfredii
Bismarckia—Bismarck palm
Borassus—Palmyra palm, sugar palm, toddy palm
Butia
Calamus—Rattan palm
Ceroxylon
Cocos—Coconut
Coccothrinax
Copernicia—Carnauba wax palm
Corypha—Gebang palm, Buri palm or Talipot palm
Elaeis—Oil palm
Euterpe—Cabbage heart palm, açaí palm
Hyphaene—Doum palm
Jubaea—Chilean wine palm, Coquito palm
Latania—Latan palm
Licuala
Livistona—Cabbage palm
Mauritia—Moriche palm
Metroxylon—Sago palm
Nypa—Nipa palm
Parajubaea—Bolivian coconut palms
Phoenix—Date palm
Pritchardia
Raphia—Raffia palm
Rhapidophyllum
Rhapis
Roystonea—Royal palm
Sabal—Palmettos
Salacca—Salak
Syagrus—Queen palm
Thrinax
Trachycarpus—Windmill palm, Kumaon palm
Trithrinax
Veitchia—Manila palm, Joannis palm
Washingtonia—Fan palm
Evolution
The Arecaceae were the first modern family of monocots to appear in the fossil record, around 80 million years ago (Mya), during the late Cretaceous period. The first modern species, such as Nypa fruticans and Acrocomia aculeata, appeared 69 Mya, as evidenced by fossil Nypa pollen. Palms appear to have undergone an early period of adaptive radiation: by 60 Mya, many of the modern, specialized genera of palms had appeared and become widespread and common, much more widespread than their range today. Because palms diverged from the other monocots earlier than other families did, they developed greater intrafamilial specialization and diversity. By tracing these diverse characteristics of palms back to the basic structures of monocots, palms may be valuable in studying monocot evolution. Several species of palms have been identified from flowers preserved in amber, including Palaeoraphe dominicana and Roystonea palaea. Fossil evidence of palms can also be found in samples of petrified palmwood.
Uses
Evidence for cultivation of the date palm by Mesopotamians and other Middle Eastern peoples exists from more than 5,000 years ago, in the form of date wood, pits for storing dates, and other remains of the date palm in Mesopotamian sites. The date palm had a significant effect on the history of the Middle East and North Africa; in the text "Date Palm Products" (1993), W.H. Barreveld wrote that, had the date palm not existed, the expansion of the human race into the hot and barren parts of the "old" world would have been much more restricted.
An indication of the importance of palms in ancient times is that they are mentioned more than 30 times in the Bible, and at least 22 times in the Quran. The Torah also references the "70 date palm trees", which symbolize the 70 aspects of Torah that are revealed to those who "eat of its fruit."
Arecaceae have great economic importance, including coconut products, oils, dates, palm syrup, ivory nuts, carnauba wax, rattan cane, raffia, and palm wood. The family supplies a large amount of the human diet and serves several other human uses, both by absolute amount produced and by number of species domesticated; this is far higher than almost any other plant family, ranking sixth among domesticated crop families in the human diet and sharing first place in total economic value produced with the Poaceae and Fabaceae. These human uses have also spread many Arecaceae species around the world.
Along with dates mentioned above, members of the palm family with human uses are numerous:
The type member of Arecaceae is the areca palm (Areca catechu), the fruit of which, the areca nut, is chewed with the betel leaf for intoxicating effects.
Carnauba wax is harvested from the leaves of South American palms of the genus Copernicia.
Rattans, whose stems are used extensively in furniture and baskets, are in the genus Calamus.
Palm oil is an edible vegetable oil produced by the oil palms in the genus Elaeis.
Several species are harvested for heart of palm, a vegetable eaten in salads.
Sap of the nipa palm, Nypa fruticans, is used to make vinegar.
Palm sap is sometimes fermented to produce palm wine or toddy, an alcoholic beverage common in parts of Africa, India, and the Philippines. The sap may be drunk fresh, but fermentation is rapid, reaching up to 4% alcohol content within an hour, and turning vinegary in a day.
Palmyra and date palm sap is harvested in Bengal, India, to process into gur and jaggery.
Coconut is the partially edible seed of the fruit of the coconut palm (Cocos nucifera).
Coir is a coarse, water-resistant fiber extracted from the outer shell of coconuts, used in doormats, brushes, mattresses, and ropes.
Some indigenous groups living in palm-rich areas use palms to make many of their necessary items and food. Sago, for example, a starch made from the pith of the trunk of the sago palm Metroxylon sagu, is a major staple food for lowland peoples of New Guinea and the Moluccas.
Palm wine is made from Jubaea also called Chilean wine palm, or coquito palm.
Recently, the fruit of the açaí palm Euterpe has been used for its reputed health benefits.
Saw palmetto (Serenoa repens) is being investigated as a drug for treating enlarged prostates.
Palm leaves are also valuable to some peoples as a material for thatching, basketry, clothing, and in religious ceremonies (see "Symbolism" below).
Ornamental uses: Today, palms are valuable as ornamental plants and are often grown along streets in tropical and subtropical cities. Chamaedorea elegans is a popular houseplant, grown indoors for its low maintenance needs. Farther north, palms are a common feature of botanical gardens and are grown as indoor plants. Few palms tolerate severe cold, and the majority of the species are tropical or subtropical. The three most cold-tolerant species are Trachycarpus fortunei, native to eastern Asia, and Rhapidophyllum hystrix and Sabal minor, both native to the southeastern United States.
The southeastern U.S. state of South Carolina is nicknamed the Palmetto State after the sabal palmetto (cabbage palmetto), logs from which were used to build the fort at Fort Moultrie. During the American Revolutionary War, they were invaluable to those defending the fort, because their spongy wood absorbed or deflected the British cannonballs.
Singaporean politician Tan Cheng Bock used a palm tree-like symbol, similar to a Ravenala, to represent him in the 2011 Singaporean presidential election. The symbol of the party he founded, the Progress Singapore Party, was also based on a palm tree.
On Ash Wednesday, Catholics receive a cross on their forehead made of palm ashes as a reminder of the Catholic belief that everyone and everything eventually returns to where it came from, commonly expressed by the saying "ashes to ashes and dust to dust."
The Fujairah Research Centre has recently reported the use of date palm leaves to help restore coral reefs, merging ancient Emirati techniques with modern science.
Endangered species
Like many other plants, palms have been threatened by human intervention and exploitation. The greatest risk to palms is destruction of habitat, especially in the tropical forests, due to urbanization, wood-chipping, mining, and conversion to farmland. Palms rarely reproduce after such great changes in the habitat, and those with small habitat ranges are most vulnerable to them. The harvesting of heart of palm, a delicacy in salads, also poses a threat because it is derived from the palm's apical meristem, a vital part of the palm that cannot be regrown (except in domesticated varieties, e.g. of peach palm). The use of rattan palms in furniture has caused a major population decrease in these species that has negatively affected local and international markets, as well as biodiversity in the area. The sale of seeds to nurseries and collectors is another threat, as the seeds of popular palms are sometimes harvested directly from the wild. In 2006, at least 100 palm species were considered endangered, and nine species have been reported as recently extinct.
However, several factors make palm conservation more difficult. Palms live in almost every type of warm habitat and have tremendous morphological diversity. Most palm seeds lose viability quickly, and they cannot be preserved in low temperatures because the cold kills the embryo. Using botanical gardens for conservation also presents problems, since they can rarely house more than a few plants of any species or truly imitate the natural setting. There is also the risk that cross-pollination can lead to hybrid species.
The Palm Specialist Group of the World Conservation Union (IUCN) began in 1984, and has performed a series of three studies to find basic information on the status of palms in the wild, use of wild palms, and palms under cultivation. Two projects on palm conservation and use supported by the World Wildlife Fund took place from 1985 to 1990 and 1986–1991, in the American tropics and southeast Asia, respectively. Both studies produced copious new data and publications on palms. Preparation of a global action plan for palm conservation began in 1991, supported by the IUCN, and was published in 1996.
The rarest palm known is Hyophorbe amaricaulis. The only living individual remains at the Botanic Gardens of Curepipe in Mauritius.
Arthropod pests
Some pests are specialists to particular taxa. Pests that attack a variety of species of palms include:
Raoiella indica, the red palm mite
Caryobruchus gleditsiae, the palm seed beetle or palm seed weevil
Rhynchophorus ferrugineus, the red palm weevil, recently introduced to Europe
Symbolism
The palm branch was a symbol of triumph and victory in classical antiquity. The Romans rewarded champions of the games and celebrated military successes with palm branches. Early Christians used the palm branch to symbolize the victory of the faithful over enemies of the soul, as in the Palm Sunday festival celebrating the triumphal entry of Jesus Christ into Jerusalem. In Judaism, the palm represents peace and plenty, and is one of the Four Species of Sukkot; the palm may also symbolize the Tree of Life in Kabbalah.
The canopies of the Rathayatra carts which carry the deities of Krishna and his family members in the cart festival of Jagganath Puri in India are marked with the emblem of a palm tree. Specifically it is the symbol of Krishna's brother, Baladeva.
In 1840, the American geologist Edward Hitchcock (1793–1864) published the first tree-like paleontology chart in his Elementary Geology, with two separate trees of life for the plants and the animals. These are crowned (graphically) with the Palms and with Man.
Today, the palm, especially the coconut palm, remains a symbol of the tropical island paradise.
Palms appear on the flags and seals of several places where they are native, including those of Haiti, Guam, Saudi Arabia, Florida, and South Carolina.
Other plants
Some species commonly called palms, though they are not true palms, include:
Ailanthus altissima (Ghetto palm), a tree in the flowering plant family Simaroubaceae
Alocasia odora x gageana 'Calidora' (Persian palm), a flowering plant in the family Araceae
Aloe thraskii (Palm aloe), a flowering plant in the family Asphodelaceae
Amorphophallus konjac (Snake palm), a flowering plant in the family Araceae
Beaucarnea recurvata (Ponytail palm), a flowering plant in the family Asparagaceae
Begonia luxurians (Palm leaf begonia), a flowering plant in the family Begoniaceae
Biophytum umbraculum (South Pacific palm), a flowering plant in the family Oxalidaceae
Blechnum appendiculatum (Palm fern), a fern in the family Aspleniaceae
Brassica oleracea 'Lacinato kale' (Black Tuscan palm), a flowering plant in the family Brassicaceae
Brighamia insignis (Vulcan palm), a flowering plant in the family Campanulaceae
Carludovica palmata (Panama hat palm) and perhaps other members in the family Cyclanthaceae.
Cordyline australis (Cabbage palm, Torbay palm, ti palm) or palm lily (family Asparagaceae) and other representatives in the genus Cordyline.
Cyathea cunninghamii (Palm fern) and other tree ferns (families Cyatheaceae and Dicksoniaceae) that may be confused with palms.
Cycas revoluta (Sago palm) and the rest of the order Cycadales.
Cyperus alternifolius (Umbrella palm), a sedge in the family Cyperaceae
Dasylirion longissimum (Grass palm), a flowering plant in the family Asparagaceae and other plants in the genus Dasylirion
Dioon spinulosum (Gum palm), a cycad in the family Zamiaceae
Dracaena marginata (Dragon palm) a flowering plant in the family Asparagaceae
Eisenia arborea (Southern sea palm), a species of brown alga in the family Lessoniaceae
Fatsia japonica (Figleaf palm), a flowering plant in the family Araliaceae
Hypnodendron comosum (Palm tree moss or palm moss), a moss in the family Hypnodendraceae
Musa species (Banana palm), a flowering plant in the family Musaceae
Pachypodium lamerei (Madagascar palm), a flowering plant in the family Apocynaceae
Pandanus spiralis (Screw palm), a flowering plant in the family Pandanaceae and perhaps other Pandanus spp.
Ravenala (Traveller's palm), a flowering plant in the family Strelitziaceae
Setaria palmifolia (Palm grass), a grass in the family Poaceae
Yucca brevifolia (Yucca palm or palm tree yucca)
Yucca filamentosa (Needle palm) and Yucca filifera (St. Peter's palm), flowering plants in the family Asparagaceae
Zamia furfuracea (Cardboard palm), a cycad in the family Zamiaceae
Zamioculcas zamiifolia (Emerald palm or aroid palm), a flowering plant in the family Araceae
| Biology and health sciences | Monocots | null |
45729 | https://en.wikipedia.org/wiki/Panthera | Panthera | Panthera is a genus within the family Felidae, and one of two extant genera in the subfamily Pantherinae. It contains the largest living members of the cat family. There are five living species: the jaguar, leopard, lion, snow leopard and tiger. Numerous extinct species are also named, including the cave lion and American lion.
Etymology
The word panther derives from classical Latin panthēra, itself from the ancient Greek pánthēr (πάνθηρ).
Characteristics
In Panthera species, the dorsal profile of the skull is flattish or evenly convex. The frontal interorbital area is not noticeably elevated, and the area behind the elevation is less steeply sloped. The basicranial axis is nearly horizontal. The inner chamber of the bullae is large, the outer small. The partition between them is close to the external auditory meatus. The convexly rounded chin is sloping.
All Panthera species have an incompletely ossified hyoid bone and a specially adapted larynx with large vocal folds covered in a fibro-elastic pad; these characteristics enable them to roar. Only the snow leopard cannot roar, as it has shorter vocal folds that provide a lower resistance to airflow; it was therefore proposed to be retained in the genus Uncia.
Panthera species can prusten, which is a short, soft, snorting sound; it is used during contact between friendly individuals. The roar is an especially loud call with a distinctive pattern that depends on the species.
Evolution
The geographic origin of the genus Panthera is uncertain, though the earliest known definitive species, Panthera principialis, is from Tanzania. P. blytheae from northern Central Asia, originally described as the oldest known Panthera species, was suggested to be similar in skull features to the snow leopard, but subsequent studies have agreed that it is not a member of the snow leopard lineage or a species related to it, and that it belongs to a different genus, Palaeopanthera. The tiger, snow leopard, and clouded leopard genetic lineages likely dispersed in Southeast Asia during the Late Miocene.
Genetic studies indicate that the pantherine cats diverged from the subfamily Felinae between six and ten million years ago.
The genus Neofelis is sister to Panthera.
The clouded leopard appears to have diverged first, with Panthera subsequently diverging from other cat species and then evolving into the tiger, snow leopard, and leopard lineages. Mitochondrial sequence data from fossils suggest that the American lion (P. atrox) is a sister lineage to Panthera spelaea (the Eurasian cave or steppe lion), and that both P. atrox and P. spelaea are most closely related to lions among living Panthera species. The snow leopard is nested within Panthera and is the sister species of the tiger.
A 2016 study based on analysis of biparental nuclear genomes suggested revised relationships among the living Panthera species.
The extinct species Panthera gombaszoegensis was probably closely related to the modern jaguar. The first fossil remains were excavated at Olivola, in Italy.
Fossil remains that appear to belong within the Panthera lineage have also been found in South Africa.
Classification
Panthera was named and described by Lorenz Oken in 1816, who placed all the spotted cats in this group. During the 19th and 20th centuries, various explorers and staff of natural history museums suggested numerous subspecies, at times called "races", for all Panthera species. The taxonomist Reginald Innes Pocock reviewed skins and skulls in the zoological collection of the Natural History Museum, London, and grouped many of the described subspecies together, thus shortening the lists considerably. In 1916, Pocock revised the classification of this genus as comprising the tiger (P. tigris), lion (P. leo), jaguar (P. onca), and leopard (P. pardus), on the basis of common features of their skulls. Since the mid-1980s, several Panthera species have become subjects of genetic research, mostly using blood samples of captive individuals. Study results indicate that many of the lion and leopard subspecies are questionable because of insufficient genetic distinction between them. Subsequently, it was proposed to group all African leopard populations under P. p. pardus and to retain eight subspecific names for Asian leopard populations. Results of genetic analysis indicate that the snow leopard (formerly Uncia uncia) also belongs to the genus Panthera (P. uncia), a classification that was accepted by IUCN Red List assessors in 2008.
Based on genetic research, it was suggested to group all living sub-Saharan lion populations into P. l. leo.
Results of phylogeographic studies indicate that the Western and Central African lion populations are more closely related to those in India and form a different clade than lion populations in Southern and East Africa; southeastern Ethiopia is an admixture region between North African and East African lion populations.
Black panthers do not form a distinct species, but are melanistic specimens of the genus, most often encountered in the leopard and jaguar.
Contemporary species
The classification of the genus Panthera is based on the taxonomic assessment in Mammal Species of the World and reflects the taxonomy revised in 2017 by the Cat Classification Task Force of the Cat Specialist Group.
Extinct species and subspecies
Other, now invalid, species have also been described, such as Panthera crassidens from South Africa, which was later found to be based on a mixture of leopard and cheetah fossils.
Phylogeny
In 2018, results of a phylogenetic study on living and fossil cats were published. The study was based on the morphological diversity of the mandibles of saber-toothed cats and on their speciation and extinction rates.
| Biology and health sciences | Felines | Animals |
45748 | https://en.wikipedia.org/wiki/Chariot | Chariot | A chariot is a type of cart driven by a charioteer, usually using horses to provide rapid motive power. The oldest known chariots have been found in burials of the Sintashta culture in modern-day Chelyabinsk Oblast, Russia, dated to c. 1950–1880 BC and are depicted on cylinder seals from Central Anatolia in Kültepe dated to c. 1900 BC. The critical invention that allowed the construction of light, horse-drawn chariots was the spoked wheel.
The chariot was a fast, light, open, two-wheeled conveyance drawn by two or more equids (usually horses) that were hitched side by side, and was little more than a floor with a waist-high guard at the front and sides. It was initially used for ancient warfare during the Bronze and Iron Ages, but after its military capabilities had been superseded by light and heavy cavalries, chariots continued to be used for travel and transport, in processions, for games, and in races.
Etymology
The word "chariot" comes from the Latin term carrus, a loanword from Gaulish karros.
In ancient Rome, a biga was a chariot drawn by two horses, a triga by three, and a quadriga by four.
Origins
The wheel may have been invented at several places, with early evidence found in Ukraine, Poland, Germany, and Slovenia. Evidence of wheeled vehicles appears from the mid-4th millennium BC near-simultaneously in the Northern Caucasus (Maykop culture) and in Central Europe. These earliest vehicles may have been ox carts. A necessary precursor to the invention of the chariot was the domestication of animals, and specifically the domestication of horses, a major step in the development of civilization. Despite the large impact horse domestication has had on transport and communication, tracing its origins has been challenging. Evidence supports horses having been domesticated in the Eurasian Steppes, with studies suggesting that the Botai culture in modern-day Kazakhstan was the first to do so, about 3500 BC. Others say horses were domesticated earlier than 3500 BC, about 6,000 years ago, in Eastern Europe (modern Ukraine and western Kazakhstan).
The spread of spoke-wheeled chariots has been closely associated with early Indo-Iranian migrations. The earliest known chariots have been found in Sintashta culture burial sites, and the culture is considered a strong candidate for the origin of the technology, which spread throughout the Old World and played an important role in ancient warfare. The Sintashta culture is also strongly associated with the ancestors of modern domestic horses, the DOM2 population. DOM2 horses originated in the western Eurasian steppes, especially the lower Volga-Don region, but not in Anatolia, during the late fourth and early third millennia BC; their genes may show selection for easier domestication and stronger backs.
These Aryan peoples migrated southward into South Asia, ushering in the Vedic period around 1750 BC. Shortly after this, about 1700 BC, evidence of chariots appears in Asia Minor.
The earliest fully developed spoke-wheeled horse chariots are from the chariot burials of the Andronovo (Timber-Grave) sites of the Sintashta-Petrovka Proto-Indo-Iranian culture in modern Russia and Kazakhstan from around 2000 BC. This culture is at least partially derived from the earlier Yamna culture. It built heavily fortified settlements, engaged in bronze metallurgy on an industrial scale, and practiced complex burial rituals reminiscent of Hindu rituals known from the Rigveda and the Avesta. Over the next few centuries, the Andronovo culture spread across the steppes from the Urals to the Tien Shan, likely corresponding to the time of early Indo-Iranian cultures.
Not everyone agrees that the Sintashta culture vehicle finds are true chariots.
In 1996, Joost Crouwel and Mary Aiken Littauer argued that the Sintashta finds were not true chariots.
Peter Raulwing and Stefan Burmeister consider the Sintashta and Krivoe Ozero finds from the steppe to be carts rather than chariots.
Spread by Indo-Europeans
Chariots figure prominently in Indo-Iranian and early European mythology. Chariots are also an important part of both Hindu and Persian mythology, with most of the gods in their pantheons portrayed as riding them. The Sanskrit word for a chariot is rátha- (m.), which is cognate with Avestan raθa- (also m.), and in origin a substantivisation of a Proto-Indo-European adjective meaning "having wheels", with the characteristic accent shift found in Indo-Iranian substantivisations. This adjective is in turn derived from a collective noun meaning "wheels", continued in Latin rota, which belongs to a noun for "wheel" (from a root meaning "to run") that is also found in Germanic, Celtic, and Baltic (Old High German rad n., Old Irish roth m., Lithuanian rãtas m.). Nomadic tribes of the Pontic steppes, like Scythians such as the Hamaxobii, would travel in wagons, carts, and chariots during their migrations.
Hittites
The oldest testimony of chariot warfare in the ancient Near East is the Old Hittite Anitta text (18th century BC), which mentions 40 teams of horses (in the original cuneiform spelling: 40 ṢÍ-IM-TI ANŠE.KUR.RAḪI.A) at the siege of Salatiwara. Since the text mentions teams rather than chariots, the existence of chariots in the 18th century BC is uncertain. The first certain attestation of chariots in the Hittite empire dates to the late 17th century BC (Hattusili I). A Hittite horse-training text is attributed to Kikkuli the Mitanni (15th century BC).
The Hittites were renowned charioteers. They developed a new chariot design that had lighter wheels, with four spokes rather than eight, and that held three rather than two warriors. It could hold three warriors because the wheel was placed in the middle of the chariot and not at the back as in Egyptian chariots. Typically one Hittite warrior steered the chariot while the second man was usually the main archer; the third warrior would either wield a spear or sword when charging at enemies or hold up a large shield to protect himself and the others from enemy arrows.
Hittite prosperity largely depended on their control of trade routes and natural resources, specifically metals. As the Hittites gained dominion over Mesopotamia, tensions flared among the neighboring Assyrians, Hurrians, and Egyptians. Under Suppiluliuma I, the Hittites conquered Kadesh and, eventually, the whole of Syria. The Battle of Kadesh in 1274 BC is likely to have been the largest chariot battle ever fought, involving over 5,000 chariots.
Bronze Age Indian Subcontinent
Models of single-axled, solid-wheeled, ox-drawn vehicles have been found at several mature Indus Valley sites, such as Chanhudaro, Daimabad, Harappa, and Nausharo.
Spoke-wheeled, horse-drawn chariots, often carrying an armed passenger, are depicted in second-millennium BC Chalcolithic-period rock paintings; examples are known from Chibbar Nulla, Chhatur Bhoj Nath Nulla, and Kathotia. There are some depictions of chariots among the petroglyphs in the sandstone of the Vindhya range. Two depictions of chariots are found in Morhana Pahar, Mirzapur district. One depicts a biga and the head of the driver. The second depicts a quadriga, with six-spoked wheels, and a driver standing up in a large chariot box. This chariot is being attacked: one figure, armed with a shield and a mace, stands in the chariot's path, while another figure, armed with a bow and arrow, threatens the right flank. It has been speculated that the drawings record a story, most probably dating to the early centuries BC, of an incursion from some center in the area of the Ganges–Yamuna plain into the territory of still-Neolithic hunting tribes. The very realistic chariots carved into the Sanchi stupas are dated to roughly the 1st century.
Bronze Age solid-disk wheel carts were found in 2018 at Sinauli; some interpreted them as horse-pulled "chariots", predating the arrival of the horse-centered Indo-Aryans. Sanjay Manjul, director of the excavations, ascribed them to the Ochre Coloured Pottery culture (OCP)/Copper Hoard Culture, which was contemporaneous with the Late Harappan culture, and interpreted them as horse-pulled chariots. Manjul further noted that "the rituals relating to the Sanauli burials showed close affinity with Vedic rituals", and stated that "the dating of the Mahabharata is around 1750 BC." According to Asko Parpola, these finds were ox-pulled carts, indicating that the burials are related to an early Aryan migration of Proto-Indo-Iranian-speaking people into the Indian subcontinent, "forming then the ruling elite of a major Late Harappan settlement."
Horse-drawn chariots, as well as their cult and associated rituals, were spread by the Indo-Iranians, and horses and horse-drawn chariots were introduced in India by the Indo-Aryans.
In religion
In the Rigveda, Indra is described as strong-willed, armed with a thunderbolt, and riding a chariot. Among the Rigvedic deities, notably, the Vedic sun god Surya rides on a one-spoked chariot driven by his charioteer Aruṇa; Ushas (the dawn) also rides in a chariot, as does Agni in his function as a messenger between gods and men.
The Jain Bhagavati Sutra states that Indian troops used a chariot with a club or mace attached to it during the war against the Licchavis during the reign of Ajatashatru of Magadha.
Persia
The Persians succeeded Elam in the mid-1st millennium BC. They may have been the first to yoke four horses to their chariots. They also used scythed chariots; Cyrus the Younger employed these chariots in large numbers at the Battle of Cunaxa.
Herodotus mentions that the Ancient Libyan and the Ancient Indian (Sattagydia, Gandhara and Hindush) satrapies supplied cavalry and chariots to Xerxes the Great's army. However, by this time, cavalry was far more effective and agile than the chariot, and the defeat of Darius III at the Battle of Gaugamela (331 BC), where the army of Alexander simply opened their lines and let the chariots pass and attacked them from behind, marked the end of the era of chariot warfare (barring the Seleucid and Pontic powers, India, China, and the Celtic peoples).
Introduction in the Near East
Chariots were introduced in the Near East in the 18th/17th–16th centuries BC. Some scholars argue that the horse chariot was most likely a product of the ancient Near East early in the 2nd millennium BC. Archaeologist Joost Crouwel writes that "Chariots were not sudden inventions, but developed out of earlier vehicles that were mounted on disk or cross-bar wheels. This development can best be traced in the Near East, where spoke-wheeled and horse-drawn chariots are first attested in the earlier part of the second millennium BC...", and such chariots were illustrated on a Syrian cylinder seal dated to either the 18th or 17th century BC.
Early wheeled vehicles in the Near East
According to Christoph Baumer, the earliest discoveries of wheels in Mesopotamia come from the first half of the third millennium BC – more than half a millennium later than the first finds from the Kuban region. At the same time, in Mesopotamia, some intriguing early pictograms of a sled that rests on wooden rollers or wheels have been found. They date from about the same time as the early wheel discoveries in Europe and may indicate knowledge of the wheel.
The earliest depiction of vehicles in the context of warfare is on the Standard of Ur in southern Mesopotamia. These are more properly called wagons: they were double-axled and pulled by oxen or by a hybrid of a donkey and a female onager, known as a kunga, bred in the city of Nagar, which was famous for them. The hybrids were used by the Eblaite, early Sumerian, Akkadian, and Ur III armies. Although sometimes carrying a spearman along with the charioteer (driver), such heavy wagons, borne on solid wooden wheels and covered with skins, may have been part of the baggage train (e.g., during royal funeral processions) rather than vehicles of battle in themselves.
The Sumerians had a lighter, two-wheeled type of cart, pulled by four asses, and with solid wheels. The spoked wheel did not appear in Mesopotamia until the mid second millennium BC.
Egypt
Chariot use made its way into Egypt around 1650 BC during the Hyksos invasion of Egypt and their establishment of the Fifteenth Dynasty. In 1659 BC the Indo-European Hittites sacked Babylon, which demonstrated the superiority of chariots in antiquity.
The chariot and horse were used extensively in Egypt by the Hyksos invaders from the 16th century BC onwards, though discoveries announced in 2013 potentially place the earliest chariot use as early as Egypt's Old Kingdom (c. 2686–2181 BC). In the remains of Egyptian and Assyrian art, there are numerous representations of chariots, which display rich ornamentation. The chariots of the Egyptians and Assyrians, with whom the bow was the principal arm of attack, were richly mounted with quivers full of arrows. The Egyptians also invented the yoke saddle for their chariot horses. As a general rule, the Egyptians used chariots as mobile archery platforms; chariots always carried two men, the driver steering the chariot with his reins while the archer aimed his bow and arrow at any targets within range. The best-preserved examples of Egyptian chariots are the four specimens from the tomb of Tutankhamun. Chariots can be pulled by two or more horses.
Ancient Canaan and Israel
Chariots are frequently mentioned in the Hebrew Tanakh and the Greek Old Testament, respectively, particularly by the prophets, as instruments of war or as symbols of power or glory. They are first mentioned in the story of Joseph (Genesis 50:9); "iron chariots" are mentioned in Joshua (17:16, 18) and Judges (1:19, 4:3, 13) as weapons of the Canaanites and Israelites. 1 Samuel 13:5 mentions chariots of the Philistines, who are sometimes identified with the Sea Peoples or early Greeks.
Examples from The Jewish Study Bible of the Tanakh (Jewish Bible) include:
Isaiah 2:7 Their land is full of silver and gold, there is no limit to their treasures; their land is full of horses, there is no limit to their chariots.
Jeremiah 4:13 Lo, he [I.e., the invader of v. 7.] ascends like clouds, his chariots are like a whirlwind, his horses are swifter than eagles. Woe to us, we are ruined!
Ezekiel 26:10 From the cloud raised by his horses dust shall cover you; from the clatter of horsemen and wheels and chariots, your walls shall shake−when he enters your gates as men enter a breached city.
Psalms 20:8 They [call] on chariots, they [call] on horses, but we call on the name of the LORD our God.
Song of Songs 1:9 I have likened you, my darling, to a mare in Pharaoh's chariots.
Examples from the King James Version of the Christian Bible include:
2 Chronicles 1:14 And Solomon gathered chariots and horsemen: and he had a thousand and four hundred chariots, and twelve thousand horsemen, which he placed in the chariot cities, and with the king at Jerusalem.
Judges 1:19 And the LORD was with Judah; and he drave out the inhabitants of the mountain; but could not drive out the inhabitants of the valley, because they had chariots of iron.
Acts 8:37–38 Then Philip said, "If you believe with all your heart, you may." And he answered and said, "I believe that Jesus Christ is the Son of God." So he commanded the chariot to stand still. And both Philip and the eunuch went down into the water, and he baptized him.
Small domestic horses may have been present in the northern Negev before 3000 BC. The city of Jezreel has been identified as the chariot base of King Ahab, and a decorated bronze tablet, thought to be the head of a linchpin from a Canaanite chariot, was found at a site that may be Sisera's fortress, Harosheth Haggoyim.
Urartu
In Urartu (860–590 BC), the chariot was used by both the nobility and the military. In Erebuni (Yerevan), King Argishti of Urartu is depicted riding on a chariot which is pulled by two horses. The chariot has two wheels and each wheel has about eight spokes. This type of chariot was used around 800 BC.
Introduction in Bronze-Age Europe
As David W. Anthony writes in his book The Horse, the Wheel, and Language, in Eastern Europe, the earliest well-dated depiction of a wheeled vehicle (a wagon with two axles and four wheels) is on the Bronocice pot (). It is a clay pot excavated in a Funnelbeaker settlement in Swietokrzyskie Voivodeship in Poland. The oldest securely dated real wheel-axle combination in Eastern Europe is the Ljubljana Marshes Wheel ().
Greece
The later Greeks of the first millennium BC had a still not very effective cavalry arm (indeed, it has been argued that these early horseback-riding soldiers may have given rise to the development of the later, heavily armed foot soldiers known as hoplites), and the rocky terrain of the Greek mainland was unsuited for wheeled vehicles. The chariot was heavily used by the Mycenaean Greeks, most probably adopted from the Hittites, around 1600 BC. Linear B tablets from Mycenaean palaces record large inventories of chariots, sometimes with specific details as to how many chariots were assembled or not (i.e., stored in modular form). On a gravestone from the royal Shaft Grave V at Mycenae, dated to LH II (about 1500 BC), there is one of the earliest depictions of the chariot in Achaean art: the sculpture shows a single man driving a two-wheeled small box chariot. Later, the vehicles were used in games and processions, notably for races at the Olympic and Panathenaic Games and other public festivals in ancient Greece, in hippodromes and in contests called agons. They were also used in ceremonial functions, as when a paranymph, or friend of a bridegroom, went with him in a chariot to fetch the bride home.
Herodotus (Histories, 5.9) reports that chariots were widely used in the Pontic–Caspian steppe by the Sigynnae.
Greek chariots were made to be drawn by two horses attached to a central pole. If two additional horses were added, they were attached on each side of the main pair by a single bar or trace fastened to the front or prow of the chariot, as may be seen on two prize vases in the British Museum from the Panathenaic Games at Athens, Greece, in which the driver is seated with feet resting on a board hanging down in front close to the legs of the horses. The biga itself consists of a seat resting on the axle, with a rail at each side to protect the driver from the wheels. Greek chariots appear to have lacked any other attachment for the horses, which would have made turning difficult.
The body or basket of the chariot rested directly on the axle (called beam) connecting the two wheels. There was no suspension, making this an uncomfortable form of transport. At the front and sides of the basket was a semicircular guard about 3 ft (1 m) high, to give some protection from enemy attack. At the back the basket was open, making it easy to mount and dismount. There was no seat, and generally only enough room for the driver and one passenger.
The reins were mostly the same as those in use in the 19th century, and were made of leather and ornamented with studs of ivory or metal. The reins were passed through rings attached to the collar bands or yoke, and were long enough to be tied round the waist of the charioteer to allow for defense.
The wheels and basket of the chariot were usually of wood, strengthened in places with bronze or iron. The wheels had from four to eight spokes and tires of bronze or iron. Due to the widely spaced spokes, the rim of the chariot wheel was held in tension over comparatively large spans. Whilst this provided a small measure of shock absorption, it also necessitated the removal of the wheels when the chariot was not in use, to prevent warping from continued weight bearing. Most other nations of this time had chariots of similar design to the Greeks, the chief differences being the mountings.
According to Greek mythology, the chariot was invented by Erichthonius of Athens to conceal his feet, which were those of a dragon.
The most notable appearance of the chariot in Greek mythology occurs when Phaëton, the son of Helios, in an attempt to drive the chariot of the sun, managed to set the earth on fire. This story led to the archaic meaning of a phaeton as one who drives a chariot or coach, especially at a reckless or dangerous speed. Plato, in his Chariot Allegory, depicted a chariot drawn by two horses, one well behaved and the other troublesome, representing opposite impulses of human nature; the task of the charioteer, representing reason, was to stop the horses from going different ways and to guide them towards enlightenment.
The Greek word for chariot, ἅρμα, hárma, is also used nowadays to denote a tank, properly called άρμα μάχης, árma mákhēs, literally a "combat chariot".
Central and Northern Europe
The Trundholm sun chariot is dated to c. 1500–1300 BC (see Nordic Bronze Age). The horse drawing the solar disk runs on four wheels, and the Sun itself on two. All wheels have four spokes. The "chariot" comprises the solar disk, the axle, and the wheels, and it is unclear whether the sun is depicted as the chariot or as the passenger. Nevertheless, the presence of a model of a horse-drawn vehicle on two spoked wheels in Northern Europe at such an early time is astonishing.
In addition to the Trundholm chariot, there are numerous petroglyphs from the Nordic Bronze Age that depict chariots. One petroglyph, drawn on a stone slab in a double burial from c. 1000 BC, depicts a biga with two four-spoked wheels.
The use of the composite bow in chariot warfare is not attested in northern Europe.
Western Europe
The Celts were famous for their chariots, and modern English words like car, carriage and carry are ultimately derived from the native Brythonic language (Modern Welsh: Cerbyd). The word chariot itself is derived from the Norman French charriote and shares a Celtic root (Gaulish: karros). Some 20 Iron Age chariot burials have been excavated in Britain, roughly dating from between 500 and 100 BC. Virtually all of them were found in East Yorkshire; the exception was a find in 2001 in Newbridge, 10 km west of Edinburgh.
The Celtic chariot, which may have been called karbantos in Gaulish (compare Latin carpentum), was a biga.
British chariots were open in front. Julius Caesar provides the only significant eyewitness report of British chariot warfare.
Chariots play an important role in Irish mythology surrounding the hero Cú Chulainn.
Chariots could also be used for ceremonial purposes. According to Tacitus (Annals 14.35), Boudica, queen of the Iceni and a number of other tribes, addressed her troops from a chariot during a formidable uprising against the occupying Roman forces in AD 61:
"Boudicca curru filias prae se vehens, ut quamque nationem accesserat, solitum quidem Britannis feminarum ductu bellare testabatur"
Boudicca, with her daughters before her in a chariot, went up to tribe after tribe, protesting that it was indeed usual for Britons to fight under the leadership of women.
The last mention of chariot use in battle seems to be at the Battle of Mons Graupius, somewhere in modern Scotland, in 84 CE. From Tacitus (Agricola 1.35–36): "The plain between resounded with the noise and with the rapid movements of chariots and cavalry." The chariots did not win even their initial engagement with the Roman auxiliaries: "Meantime the enemy's cavalry had fled, and the charioteers had mingled in the engagement of the infantry."
In later centuries, the chariot's battlefield role was revived in the medieval "war wagon", a development used to attack rebel or enemy forces on the battlefield. The war wagon was given slits through which archers, and later hand-gunners after the advent of gunpowder, could shoot at enemy targets, and it was supported by infantry using pikes and flails; its side walls protected the crew against arrows, crossbow bolts, early gunpowder weapons and cannon fire.
The war wagon was especially useful during the Hussite Wars, c. 1420, when Hussite forces rebelling in Bohemia deployed it widely. Groups of wagons could form defensive works, but they were also used as hardpoints for Hussite formations or as firepower in pincer movements. This early use of gunpowder and innovative tactics helped a largely peasant infantry stave off attacks by the Holy Roman Empire's larger forces of mounted knights.
Etruria
The only intact Etruscan chariot dates to c. 530 BC and was uncovered as part of a chariot burial at Monteleone di Spoleto. Currently in the collection of the Metropolitan Museum of Art, it is faced with bronze plates decorated with detailed low-relief scenes, commonly interpreted as depicting episodes from the life of Achilles.
Rome
In the Roman Empire, chariots were not used for warfare, but for chariot racing, especially in circuses, or for triumphal processions, when they could be pulled by as many as ten horses or even by dogs, tigers, or ostriches. There were four divisions, or factiones, of charioteers, distinguished by the colour of their costumes: the red, blue, green and white teams. The main centre of chariot racing was the Circus Maximus, situated in the valley between the Palatine and Aventine Hills in Rome. The track could hold 12 chariots, and the two sides of the track were separated by a raised median termed the spina. Chariot races continued to enjoy great popularity in Byzantine times, in the Hippodrome of Constantinople, even after the Olympic Games had been disbanded, until their decline after the Nika riots in the 6th century. The starting gates were known as the Carceres.
An ancient Roman car or chariot pulled by four horses abreast, together with the horses pulling it, was called a quadriga, from the Latin quadriugi (of a team of four). The term sometimes meant instead the four horses without the chariot, or the chariot alone. A three-horse chariot, or the three-horse team pulling it, was a triga, from triugi (of a team of three). A two-horse chariot, or the two-horse team pulling it, was a biga, from biugi.
A popular legend that has been around since at least 1937 traces the origin of the standard 4 ft 8 1/2 in (1,435 mm) railroad gauge to Roman times, suggesting that it was based on the spacing of wheel ruts worn into roads by chariots of the Roman Empire. There is no evidence of this distance being in use in the millennium and a half between the departure of the Romans from Britain and the adoption of the gauge on the Stockton and Darlington Railway in 1825.
Introduction in Ancient China
The earliest archaeological evidence of chariots in China, a chariot burial site discovered in 1933 at Hougang, Anyang, in Henan province, dates to the rule of King Wu Ding of the Late Shang. Oracle bone inscriptions suggest that the western enemies of the Shang used limited numbers of chariots in battle, but the Shang themselves used them only as mobile command vehicles and in royal hunts.
During the Shang dynasty, members of the royal family were buried with a complete household and servants, including a chariot, horses, and a charioteer. A Shang chariot was often drawn by two horses, but four-horse variants are occasionally found in burials.
Jacques Gernet claims that the Zhou dynasty, which conquered the Shang ca. 1046 BC, made more use of the chariot than did the Shang and "invented a new kind of harness with four horses abreast". The crew consisted of an archer, a driver, and sometimes a third warrior who was armed with a spear or dagger-axe. From the 8th to 5th centuries BC the Chinese use of chariots reached its peak. Although chariots appeared in greater numbers, infantry often defeated charioteers in battle.
Massed-chariot warfare became all but obsolete after the Warring States period (476–221 BC). The main reasons were the increased use of the crossbow, the adoption of long halberds and pikes, the introduction of standard cavalry units, and the adaptation of mounted archery from nomadic cavalry, all of which were more effective. Chariots would continue to serve as command posts for officers during the Qin dynasty (221–206 BC) and the Han dynasty (206 BC–220 AD), while armored chariots were also used during the Han dynasty against the Xiongnu Confederation in the Han–Xiongnu War (133 BC to 89 AD), specifically at the Battle of Mobei (119 BC).
Before the Han dynasty, the power of Chinese states and dynasties was often measured by the number of chariots they were known to have. A country of a thousand chariots ranked as a medium country, and a country of ten thousand chariots ranked as a huge and powerful country.
| Technology | Military technology: General | null |
45756 | https://en.wikipedia.org/wiki/Pyrite | Pyrite | The mineral pyrite, or iron pyrite, also known as fool's gold, is an iron sulfide with the chemical formula FeS2 (iron(II) disulfide). Pyrite is the most abundant sulfide mineral.
Pyrite's metallic luster and pale brass-yellow hue give it a superficial resemblance to gold, hence the well-known nickname of fool's gold. The color has also led to the nicknames brass, brazzle, and brazil, primarily used to refer to pyrite found in coal.
The name pyrite is derived from the Greek πυρίτης λίθος (pyritēs lithos), 'stone or mineral which strikes fire', in turn from πῦρ (pyr), 'fire'. In ancient Roman times, this name was applied to several types of stone that would create sparks when struck against steel; Pliny the Elder described one of them as being brassy, almost certainly a reference to what is now called pyrite.
By Georgius Agricola's time, the term had become a generic term for all of the sulfide minerals.
Pyrite is usually found associated with other sulfides or oxides in quartz veins, sedimentary rock, and metamorphic rock, as well as in coal beds and as a replacement mineral in fossils, but has also been identified in the sclerites of scaly-foot gastropods. Despite being nicknamed "fool's gold", pyrite is sometimes found in association with small quantities of gold. A substantial proportion of the gold is "invisible gold" incorporated into the pyrite (see Carlin-type gold deposit). It has been suggested that the presence of both gold and arsenic is a case of coupled substitution but as of 1997 the chemical state of the gold remained controversial.
Uses
Pyrite gained a brief popularity in the 16th and 17th centuries as a source of ignition in early firearms, most notably the wheellock, where a sample of pyrite was placed against a circular file to strike the sparks needed to fire the gun.
Pyrite is used with flintstone and a form of tinder made of stringybark by the Kaurna people of South Australia, as a traditional method of starting fires.
Pyrite has been used since classical times to manufacture copperas (ferrous sulfate). Iron pyrite was heaped up and allowed to weather (an example of an early form of heap leaching). The acidic runoff from the heap was then boiled with iron to produce iron sulfate. In the 15th century, new methods of such leaching began to replace the burning of sulfur as a source of sulfuric acid. By the 19th century, it had become the dominant method.
Pyrite remains in commercial use for the production of sulfur dioxide, for use in such applications as the paper industry, and in the manufacture of sulfuric acid. At sufficiently high temperatures pyrite thermally decomposes into FeS (iron(II) sulfide) and elemental sulfur, with the equilibrium sulfur partial pressure (pS2) rising steeply as the temperature increases.
A newer commercial use for pyrite is as the cathode material in Energizer brand non-rechargeable lithium metal batteries (Energizer Ultimate Lithium).
Pyrite is a semiconductor material with a band gap of 0.95 eV. Pure pyrite is naturally n-type, in both crystal and thin-film forms, potentially due to sulfur vacancies in the pyrite crystal structure acting as n-dopants.
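As a rough illustration of what the 0.95 eV gap means optically, the sketch below converts the band gap into an absorption-edge wavelength via the standard relation λ = hc/E. Only the 0.95 eV value comes from the text; the constant and the helper name are supplied for the example.

```python
# Illustrative sketch: longest wavelength pyrite can absorb, from its band gap.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm (standard physical constant)

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Return the absorption-edge wavelength in nm for a band gap in eV."""
    return H_C_EV_NM / band_gap_ev

print(f"{absorption_edge_nm(0.95):.0f} nm")  # ~1305 nm, in the near infrared
```

An absorption edge in the near infrared means the entire visible spectrum is absorbed, consistent with pyrite's appeal for the photovoltaic applications discussed below.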
During the early years of the 20th century, pyrite was used as a mineral detector in radio receivers, and is still used by crystal radio hobbyists. Until the vacuum tube matured, the crystal detector was the most sensitive and dependable detector available—with considerable variation between mineral types and even individual samples within a particular type of mineral. Pyrite detectors occupied a midway point between galena detectors and the more mechanically complicated perikon mineral pairs. Pyrite detectors can be as sensitive as a modern 1N34A germanium diode detector.
Pyrite has been proposed as an abundant, non-toxic, inexpensive material in low-cost photovoltaic solar panels. Synthetic iron sulfide was used with copper sulfide to create the photovoltaic material. More recent efforts are working toward thin-film solar cells made entirely of pyrite.
Pyrite is used to make marcasite jewelry. Marcasite jewelry, using small faceted pieces of pyrite, often set in silver, has been made since ancient times and was popular in the Victorian era. At the time when the term became common in jewelry making, "marcasite" referred to all iron sulfides including pyrite, and not to the orthorhombic FeS2 mineral marcasite, which is lighter in color, brittle and chemically unstable, and thus not suitable for jewelry making. Marcasite jewelry does not actually contain the mineral marcasite. Good-quality pyrite crystals are also used in decoration and are very popular in mineral collecting. Among the sites that provide the best specimens are the Soria and La Rioja provinces of Spain.
In value terms, China ($47 million) constitutes the largest market for imported unroasted iron pyrites worldwide, making up 65% of global imports. China is also the fastest growing in terms of the unroasted iron pyrites imports, with a CAGR of +27.8% from 2007 to 2016.
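The quoted growth rate can be sanity-checked with the usual compound-growth arithmetic; the sketch below shows what a +27.8% CAGR implies over the 2007–2016 window. The formula is generic, and no trade data beyond the quoted rate is used.

```python
# What a +27.8% compound annual growth rate implies over 2007-2016.
# Overall growth factor = (1 + CAGR) ** number_of_years.
cagr = 0.278
years = 2016 - 2007          # 9 compounding periods

growth_factor = (1 + cagr) ** years
print(f"{growth_factor:.1f}x growth over {years} years")  # ~9.1x
```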
Research
In July 2020, scientists reported that they had observed a voltage-induced transformation of normally diamagnetic pyrite into a ferromagnetic material, which may lead to applications in devices such as solar cells or magnetic data storage.
Researchers at Trinity College Dublin, Ireland, have demonstrated that FeS2 can be exfoliated into few-layer sheets, just like other two-dimensional layered materials such as graphene, by a simple liquid-phase exfoliation route. This is the first study to demonstrate the production of non-layered 2D platelets from 3D bulk FeS2. Furthermore, they used these 2D platelets with 20% single-walled carbon nanotubes as an anode material in lithium-ion batteries, reaching a capacity of 1000 mAh/g, close to the theoretical capacity of FeS2.
In 2021, natural pyrite stone was crushed and pre-treated, followed by liquid-phase exfoliation into two-dimensional nanosheets, which showed capacities of 1200 mAh/g as an anode in lithium-ion batteries.
Formal oxidation states for pyrite, marcasite, molybdenite and arsenopyrite
From the perspective of classical inorganic chemistry, which assigns formal oxidation states to each atom, pyrite and marcasite are probably best described as Fe2+[S2]2−. This formalism recognizes that the sulfur atoms in pyrite occur in pairs with clear S–S bonds. These persulfide [–S–S–] units can be viewed as derived from hydrogen disulfide, H2S2. Thus pyrite would be more descriptively called iron persulfide, not iron disulfide. In contrast, molybdenite, MoS2, features isolated sulfide S2− centers and the oxidation state of molybdenum is Mo4+. The mineral arsenopyrite has the formula FeAsS. Whereas pyrite has [S2]2– units, arsenopyrite has [AsS]3– units, formally derived from deprotonation of arsenothiol (H2AsSH). Analysis of classical oxidation states would recommend the description of arsenopyrite as Fe3+[AsS]3−.
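The assignments above follow directly from charge neutrality; the lines below spell out that bookkeeping for the three formulas discussed, as a worked restatement of the text rather than new chemistry.

```latex
% Charge neutrality applied to the formulas discussed above.
\begin{align*}
\text{pyrite/marcasite: } & \mathrm{Fe}^{x}[\mathrm{S}_2]^{2-}
    \;\Rightarrow\; x - 2 = 0 \;\Rightarrow\; x = +2 \\
\text{molybdenite: }      & \mathrm{Mo}^{x}(\mathrm{S}^{2-})_2
    \;\Rightarrow\; x - 4 = 0 \;\Rightarrow\; x = +4 \\
\text{arsenopyrite: }     & \mathrm{Fe}^{x}[\mathrm{AsS}]^{3-}
    \;\Rightarrow\; x - 3 = 0 \;\Rightarrow\; x = +3
\end{align*}
```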
Crystallography
Iron-pyrite FeS2 represents the prototype compound of the crystallographic pyrite structure. The structure is cubic and was among the first crystal structures solved by X-ray diffraction. It belongs to the crystallographic space group Pa3̄ (No. 205) and is denoted by the Strukturbericht notation C2. Under thermodynamic standard conditions the lattice constant of stoichiometric iron pyrite FeS2 is about 5.42 Å. The unit cell is composed of an Fe face-centered cubic sublattice into which the S2 ions are embedded. (Note though that the iron atoms in the faces are not equivalent by translation alone to the iron atoms at the corners.) The pyrite structure is also seen in other MX2 compounds of transition metals M and chalcogens X = O, S, Se and Te. Certain dipnictides with X standing for P, As and Sb etc. are also known to adopt the pyrite structure.
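To make the geometry concrete, the sketch below computes the S–S bond length of one persulfide dumbbell from the cubic cell. The lattice constant and the sulfur positional parameter u are commonly quoted literature values supplied here as assumptions, not numbers taken from the text above.

```python
import numpy as np

a = 5.417   # lattice constant in angstroms (assumed literature value)
u = 0.385   # S fractional coordinate in the 8c Wyckoff site (assumed)

# The two sulfur atoms of one persulfide dumbbell sit at (u, u, u) and
# (1-u, 1-u, 1-u), straddling the body center along a [111] diagonal.
s1 = np.array([u, u, u])
s2 = np.array([1 - u, 1 - u, 1 - u])

d_ss = a * np.linalg.norm(s2 - s1)   # = a * sqrt(3) * (1 - 2u)
print(f"S-S bond length: {d_ss:.2f} angstroms")  # ~2.16, close to measured values
```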
The Fe atoms are bonded to six S atoms, giving a distorted octahedron. The material is a semiconductor. The Fe ions are usually considered to be in a low-spin divalent state (as shown by Mössbauer spectroscopy as well as XPS). The material as a whole behaves as a Van Vleck paramagnet, despite its low-spin divalency.
The sulfur centers occur in pairs, described as S22−. Reduction of pyrite with potassium gives potassium dithioferrate, KFeS2. This material features ferric ions and isolated sulfide (S2−) centers.
The S atoms are tetrahedral, being bonded to three Fe centers and one other S atom. The site symmetry at the Fe and S positions is accounted for by the point symmetry groups C3i and C3, respectively. The missing center of inversion at S lattice sites has important consequences for the crystallographic and physical properties of iron pyrite. These consequences derive from the crystal electric field active at the sulfur lattice site, which causes a polarization of S ions in the pyrite lattice. The polarization can be calculated on the basis of higher-order Madelung constants and has to be included in the calculation of the lattice energy by using a generalised Born–Haber cycle. This reflects the fact that the covalent bond in the sulfur pair is inadequately accounted for by a strictly ionic treatment.
Arsenopyrite has a related structure with heteroatomic As–S pairs rather than S–S pairs. Marcasite also possesses homoatomic anion pairs, but the arrangement of the metal and diatomic anions differs from that of pyrite. Despite its name, chalcopyrite (CuFeS2) does not contain dianion pairs, but single S2− sulfide anions.
Crystal habit
Pyrite usually forms cuboid crystals, sometimes occurring in close association as raspberry-shaped masses called framboids. However, under certain circumstances, it can form anastomosing filaments or T-shaped crystals.
Pyrite can also form shapes almost the same as a regular dodecahedron, known as pyritohedra, and this suggests an explanation for the artificial geometrical models found in Europe as early as the 5th century BC.
Varieties
Cattierite (CoS2), vaesite (NiS2) and hauerite (MnS2), as well as sperrylite (PtAs2) are similar in their structure and belong also to the pyrite group.
Bravoite is a nickel–cobalt-bearing variety of pyrite, with > 50% substitution of Ni2+ for Fe2+ within pyrite. Bravoite is not a formally recognised mineral, and is named after the Peruvian scientist Jose J. Bravo (1874–1928).
Distinguishing similar minerals
Pyrite is distinguishable from native gold by its hardness, brittleness and crystal form. Pyrite fractures are very uneven, sometimes conchoidal, because it does not cleave along a preferential plane. Native gold nuggets, or glitters, do not break but deform in a ductile way. Pyrite is brittle; gold is malleable.
Natural gold tends to be anhedral (irregularly shaped without well-defined faces), whereas pyrite comes as either cubes or multifaceted crystals with well-developed, sharp faces that are easy to recognise. Well-crystallised pyrite crystals are euhedral (i.e., with well-formed faces). Pyrite can often be distinguished by the striations which, in many cases, can be seen on its surface. Chalcopyrite (CuFeS2) is brighter yellow with a greenish hue when wet and is softer (3.5–4 on Mohs' scale). Arsenopyrite (FeAsS) is silver white and does not become more yellow when wet.
Hazards
Iron pyrite is unstable when exposed to the oxidizing conditions prevailing at the Earth's surface: iron pyrite in contact with atmospheric oxygen and water, or damp, ultimately decomposes into iron oxyhydroxides (ferrihydrite, FeO(OH)) and sulfuric acid (H2SO4). This process is accelerated by the action of Acidithiobacillus bacteria, which oxidize pyrite to first produce ferrous ions (Fe2+) and sulfate ions (SO42−) and to release protons (H+, i.e. H3O+ in water). In a second step, the ferrous ions (Fe2+) are oxidized by O2 into ferric ions (Fe3+), which hydrolyze, also releasing H+ ions and producing FeO(OH). These oxidation reactions occur more rapidly when pyrite is finely dispersed (framboidal crystals initially formed by sulfate-reducing bacteria (SRB) in argillaceous sediments, or dust from mining operations).
Pyrite oxidation and acid mine drainage
Pyrite oxidation by atmospheric oxygen (O2) in the presence of moisture (H2O) initially produces ferrous ions (Fe2+) and sulfuric acid, which dissociates into sulfate ions and protons, leading to acid mine drainage (AMD). An example of acid rock drainage caused by pyrite oxidation is the 2015 Gold King Mine waste water spill.
2 FeS2(s) + 7 O2(g) + 2 H2O(l) → 2 Fe2+(aq) + 4 SO42−(aq) + 4 H+(aq)
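As a rough quantitative illustration of this reaction's environmental load, the sketch below applies its stoichiometry to one kilogram of pyrite. The molar masses are standard values, and the calculation covers only this first oxidation step, not the subsequent ferric-iron chemistry.

```python
# Stoichiometry of 2 FeS2 + 7 O2 + 2 H2O -> 2 Fe2+ + 4 SO4^2- + 4 H+,
# applied to 1 kg of pyrite (molar masses are standard values).
M_FES2 = 119.98   # g/mol
M_O2 = 32.00      # g/mol

mol_fes2 = 1000 / M_FES2            # ~8.3 mol FeS2 per kg
mol_o2 = mol_fes2 * 7 / 2           # oxygen demand
mol_h = mol_fes2 * 4 / 2            # protons released in this step

print(f"O2 consumed: {mol_o2 * M_O2 / 1000:.2f} kg per kg pyrite")   # ~0.93 kg
print(f"Acidity released: {mol_h:.1f} mol H+ per kg pyrite")         # ~16.7 mol
```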
Dust explosions
Pyrite oxidation is sufficiently exothermic that underground coal mines in high-sulfur coal seams have occasionally had serious problems with spontaneous combustion. The solution is buffer blasting and the use of various sealing or cladding agents to hermetically seal the mined-out areas and exclude oxygen.
In modern coal mines, limestone dust is sprayed onto the exposed coal surfaces to reduce the hazard of dust explosions. This has the secondary benefit of neutralizing the acid released by pyrite oxidation and therefore slowing the oxidation cycle described above, thus reducing the likelihood of spontaneous combustion. In the long term, however, oxidation continues, and the hydrated sulfates formed may exert crystallization pressure that can expand cracks in the rock and lead eventually to roof fall.
Weakened building materials
Building stone containing pyrite tends to stain brown as the pyrite oxidizes. This problem appears to be significantly worse if any marcasite is present. The presence of pyrite in the aggregate used to make concrete can lead to severe deterioration as the pyrite oxidizes. In early 2009, problems with Chinese drywall imported into the United States after Hurricane Katrina were attributed to pyrite oxidation, followed by microbial sulfate reduction which released hydrogen sulfide gas (H2S). These problems included a foul odor and corrosion of copper wiring. In the United States, in Canada, and more recently in Ireland, where it was used as underfloor infill, pyrite contamination has caused major structural damage. Concrete exposed to sulfate ions, or to sulfuric acid, degrades by sulfate attack: the formation of expansive mineral phases, such as ettringite (small needle crystals exerting a huge crystallization pressure inside the concrete pores) and gypsum, creates internal tensile forces in the concrete matrix which destroy the hardened cement paste, form cracks and fissures in the concrete, and can lead to the ultimate ruin of the structure. Standardized tests for construction aggregate certify such materials as free of pyrite or marcasite.
Occurrence
Pyrite is the most common of sulfide minerals and is widespread in igneous, metamorphic, and sedimentary rocks. It is a common accessory mineral in igneous rocks, where it also occasionally occurs as larger masses arising from an immiscible sulfide phase in the original magma. It is found in metamorphic rocks as a product of contact metamorphism. It also forms as a high-temperature hydrothermal mineral, though it occasionally forms at lower temperatures.
Pyrite occurs both as a primary mineral, present in the original sediments, and as a secondary mineral, deposited during diagenesis. Pyrite and marcasite commonly occur as replacement pseudomorphs after fossils in black shale and other sedimentary rocks formed under reducing environmental conditions. Pyrite is common as an accessory mineral in shale, where it is formed by precipitation from anoxic seawater, and coal beds often contain significant pyrite.
Notable deposits are found as lenticular masses in Virginia, U.S., and in smaller quantities in many other locations. Large deposits are mined at Rio Tinto in Spain and elsewhere in the Iberian Peninsula.
Cultural beliefs
In the beliefs of the Thai people (especially those in the south), pyrite is known as Khao tok Phra Ruang, Khao khon bat Phra Ruang (ข้าวตอกพระร่วง, ข้าวก้นบาตรพระร่วง) or Phet na tang, Hin na tang (เพชรหน้าทั่ง, หินหน้าทั่ง). It is believed to be a sacred item that has the power to prevent evil, black magic or demons.
| Physical sciences | Minerals | Earth science |
45784 | https://en.wikipedia.org/wiki/Biomimetics | Biomimetics | Biomimetics or biomimicry is the emulation of the models, systems, and elements of nature for the purpose of solving complex human problems. The terms "biomimetics" and "biomimicry" are derived from βίος (bios), life, and μίμησις (mīmēsis), imitation, from μιμεῖσθαι (mīmeisthai), to imitate, from μῖμος (mimos), actor. A closely related field is bionics.
Nature has gone through evolution over the 3.8 billion years since life is estimated to have appeared on Earth. It has evolved species with high performance using commonly found materials. Surfaces of solids interact with other surfaces and with the environment, and from these interactions materials derive many of their properties. Biological materials are highly organized from the molecular to the nano-, micro-, and macroscales, often in a hierarchical manner with intricate nanoarchitecture that ultimately makes up a myriad of different functional elements. Properties of materials and surfaces result from a complex interplay between surface structure and morphology and physical and chemical properties. Many materials, surfaces, and objects in general provide multifunctionality.
Various materials, structures, and devices have been fabricated for commercial interest by engineers, material scientists, chemists, and biologists, and for beauty, structure, and design by artists and architects. Nature has solved engineering problems such as self-healing abilities, environmental exposure tolerance and resistance, hydrophobicity, self-assembly, and harnessing solar energy. Economic impact of bioinspired materials and surfaces is significant, on the order of several hundred billion dollars per year worldwide.
History
One of the early examples of biomimicry was the study of birds to enable human flight. Although never successful in creating a "flying machine", Leonardo da Vinci (1452–1519) was a keen observer of the anatomy and flight of birds, and made numerous notes and sketches on his observations as well as sketches of "flying machines". The Wright Brothers, who succeeded in flying the first heavier-than-air aircraft in 1903, allegedly derived inspiration from observations of pigeons in flight.
During the 1950s the American biophysicist and polymath Otto Schmitt developed the concept of "biomimetics". During his doctoral research he developed the Schmitt trigger by studying the nerves in squid, attempting to engineer a device that replicated the biological system of nerve propagation. He continued to focus on devices that mimic natural systems and by 1957 he had perceived a converse to the standard view of biophysics at that time, a view he would come to call biomimetics.
In 1960 Jack E. Steele coined a similar term, bionics, at Wright-Patterson Air Force Base in Dayton, Ohio, where Otto Schmitt also worked. Steele defined bionics as "the science of systems which have some function copied from nature, or which represent characteristics of natural systems or their analogues". During a later meeting in 1963 Schmitt stated,
In 1969, Schmitt used the term "biomimetic" in the title of one of his papers, and by 1974 it had found its way into Webster's Dictionary. Bionics entered the same dictionary earlier, in 1960, as "a science concerned with the application of data about the functioning of biological systems to the solution of engineering problems". Bionic took on a different connotation when Martin Caidin referenced Jack Steele and his work in the novel Cyborg, which later resulted in the 1974 television series The Six Million Dollar Man and its spin-offs. The term bionic then became associated with "the use of electronically operated artificial body parts" and "having ordinary human powers increased by or as if by the aid of such devices". Because the term bionic took on the implication of supernatural strength, the scientific community in English-speaking countries largely abandoned it.
The term biomimicry appeared as early as 1982. Biomimicry was popularized by scientist and author Janine Benyus in her 1997 book Biomimicry: Innovation Inspired by Nature. Biomimicry is defined in the book as a "new science that studies nature's models and then imitates or takes inspiration from these designs and processes to solve human problems". Benyus suggests looking to Nature as a "Model, Measure, and Mentor" and emphasizes sustainability as an objective of biomimicry.
A more recent example of biomimicry is "managemANT", coined by Johannes-Paul Fladerer and Ernst Kurzmann. The term, a combination of the words "management" and "ant", describes the application of the behavioural strategies of ants to economics and management. The potential long-term impacts of biomimicry were quantified in a 2013 Fermanian Business & Economic Institute report commissioned by the San Diego Zoo, whose findings demonstrated the potential economic and environmental benefits of such approaches.
Bio-inspired technologies
Biomimetics could in principle be applied in many fields. Because of the diversity and complexity of biological systems, the number of features that might be imitated is large. Biomimetic applications are at various stages of development, from technologies that might become commercially usable to prototypes. Murray's law, which in its conventional form determines the optimum diameter of blood vessels, has been re-derived to provide simple equations for the pipe or tube diameter that gives a minimum-mass engineering system.
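For reference, the conventional form of Murray's law can be written out together with its usual symmetric-bifurcation consequence; this is the standard relation that the re-derivation mentioned above starts from, not the minimum-mass result itself.

```latex
% Murray's law: at a branch point, the cubed radius of the parent vessel
% equals the sum of the cubed radii of the daughter vessels.
r_p^3 = \sum_i r_{d,i}^3
% For a symmetric bifurcation into two equal daughters:
r_d = \frac{r_p}{2^{1/3}} \approx 0.794\, r_p
```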
Locomotion
Aircraft wing design and flight techniques are being inspired by birds and bats. The aerodynamic, streamlined design of the improved Japanese high-speed train Shinkansen 500 Series was modelled after the beak of the kingfisher.
Biorobots based on the physiology and methods of locomotion of animals include BionicKangaroo, which moves like a kangaroo, saving energy from one jump and transferring it to its next jump; Kamigami Robots, a children's toy that mimics cockroach locomotion to run quickly and efficiently over indoor and outdoor surfaces; and Pleobot, a shrimp-inspired robot used to study metachronal swimming and the ecological impacts of this propulsive gait on the environment.
Biomimetic flying robots (BFRs)
BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate lift and thrust, or they can be propeller-actuated. BFRs with flapping wings have increased stroke efficiency, increased maneuverability, and reduced energy consumption in comparison to propeller-actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments.
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they are capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented.
Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal.
Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs is much higher than that of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. The prototype by Phan and Park took inspiration from the rhinoceros beetle, so it can successfully continue flight even after a collision by deforming its hindwings.
Biomimetic architecture
Living beings have adapted to a constantly changing environment during evolution through mutation, recombination, and selection. The core idea of the biomimetic philosophy is that nature's inhabitants including animals, plants, and microbes have the most experience in solving problems and have already found the most appropriate ways to last on planet Earth. Similarly, biomimetic architecture seeks solutions for building sustainability present in nature. While nature serves as a model, there are few examples of biomimetic architecture that aim to be nature positive.
The 21st century has seen a ubiquitous waste of energy due to inefficient building designs, in addition to the over-utilization of energy during the operational phase of its life cycle. In parallel, recent advancements in fabrication techniques, computational imaging, and simulation tools have opened up new possibilities to mimic nature across different architectural scales. As a result, there has been a rapid growth in devising innovative design approaches and solutions to counter energy problems. Biomimetic architecture is one of these multi-disciplinary approaches to sustainable design that follows a set of principles rather than stylistic codes, going beyond using nature as inspiration for the aesthetic components of built form but instead seeking to use nature to solve problems of the building's functioning and saving energy.
Characteristics
The term biomimetic architecture refers to the study and application of construction principles which are found in natural environments and species, and are translated into the design of sustainable solutions for architecture. Biomimetic architecture uses nature as a model, measure and mentor for providing architectural solutions across scales, which are inspired by natural organisms that have solved similar problems in nature. Using nature as a measure refers to using an ecological standard of measuring sustainability, and efficiency of man-made innovations, while the term mentor refers to learning from natural principles and using biology as an inspirational source.
Biomorphic architecture, also referred to as bio-decoration, on the other hand, refers to the use of formal and geometric elements found in nature, as a source of inspiration for aesthetic properties in designed architecture, and may not necessarily have non-physical, or economic functions. A historic example of biomorphic architecture dates back to Egyptian, Greek and Roman cultures, using tree and plant forms in the ornamentation of structural columns.
Procedures
Within biomimetic architecture, two basic procedures can be identified, namely, the bottom-up approach (biology push) and top-down approach (technology pull). The boundary between the two approaches is blurry with the possibility of transition between the two, depending on each individual case. Biomimetic architecture is typically carried out in interdisciplinary teams in which biologists and other natural scientists work in collaboration with engineers, material scientists, architects, designers, mathematicians and computer scientists.
In the bottom-up approach, the starting point is a new result from basic biological research promising for biomimetic implementation. For example, developing a biomimetic material system after the quantitative analysis of the mechanical, physical, and chemical properties of a biological system.
In the top-down approach, biomimetic innovations are sought for already existing developments that have been successfully established on the market. The cooperation focuses on the improvement or further development of an existing product.
Examples
Researchers studied the ability of termites to maintain virtually constant temperature and humidity in their mounds in Africa despite large swings in outside temperature. Researchers initially scanned a termite mound and created 3-D images of the mound structure, which revealed construction features that could influence human building design. The Eastgate Centre, a mid-rise office complex in Harare, Zimbabwe, stays cool via a passive cooling architecture that uses only 10% of the energy of a conventional building of the same size.
Researchers at the Sapienza University of Rome were inspired by the natural ventilation of termite mounds to design a double façade that significantly cuts down on over-lit areas in a building. Scientists imitated the porous nature of mound walls by designing a façade with double panels that reduced heat gained by radiation and increased heat loss by convection in the cavity between the two panels. The overall cooling load on the building's energy consumption was reduced by 15%.
A similar inspiration was drawn from the porous walls of termite mounds to design a naturally ventilated façade with a small ventilation gap. This façade design is able to induce air flow due to the Venturi effect and continuously circulates rising air in the ventilation slot. Significant transfer of heat between the building's external wall surface and the air flowing over it was observed. The design is coupled with greening of the façade: the green wall facilitates additional natural cooling via evaporation, respiration and transpiration in plants, and the damp plant substrate further supports the cooling effect.
Scientists at Shanghai University were able to replicate the complex microstructure of the clay conduit network in the mound to mimic its excellent humidity control. They proposed a porous humidity-control material (HCM) using sepiolite and calcium chloride, with a water-vapor adsorption-desorption content of 550 grams per square meter. Calcium chloride is a desiccant and improves the water-vapor adsorption-desorption property of the bio-HCM. The proposed bio-HCM has a regime of interfiber mesopores which act as a mini reservoir. The flexural strength of the proposed material was estimated to be 10.3 MPa using computational simulations.
In structural engineering, the Swiss Federal Institute of Technology (EPFL) has incorporated biomimetic characteristics in an adaptive deployable "tensegrity" bridge. The bridge can carry out self-diagnosis and self-repair. The arrangement of leaves on a plant has been adapted for better solar power collection.
Analysis of the elastic deformation happening when a pollinator lands on the sheath-like perch part of the flower Strelitzia reginae (known as bird-of-paradise flower) has inspired architects and scientists from the University of Freiburg and University of Stuttgart to create hingeless shading systems that can react to their environment. These bio-inspired products are sold under the name Flectofin.
Other hingeless bioinspired systems include Flectofold. Flectofold was inspired by the trapping system of the carnivorous plant Aldrovanda vesiculosa.
Structural materials
There is a great need for new structural materials that are light weight but offer exceptional combinations of stiffness, strength, and toughness.
Such materials would need to be manufactured into bulk materials with complex shapes at high volume and low cost and would serve a variety of fields such as construction, transportation, and energy storage and conversion. In a classic design problem, strength and toughness are more likely to be mutually exclusive, i.e., strong materials are brittle and tough materials are weak. However, natural materials with complex and hierarchical material gradients that span from nano- to macro-scales are both strong and tough. Generally, most natural materials utilize limited chemical components but complex material architectures that give rise to exceptional mechanical properties. Understanding the highly diverse and multifunctional biological materials and discovering approaches to replicate such structures will lead to advanced and more efficient technologies. Bone, nacre (abalone shell), teeth, the dactyl clubs of stomatopod shrimps and bamboo are great examples of damage-tolerant materials. The exceptional resistance to fracture of bone is due to complex deformation and toughening mechanisms that operate at different size scales, from the nanoscale structure of protein molecules to the macroscopic physiological scale. Nacre exhibits similar mechanical properties, however with a rather simpler structure. Nacre shows a brick-and-mortar-like structure with thick mineral layers (0.2–0.9 μm) of closely packed aragonite and a thin organic matrix (~20 nm). While thin films and micrometer-sized samples that mimic these structures have already been produced, successful production of bulk biomimetic structural materials is yet to be realized. However, numerous processing techniques have been proposed for producing nacre-like materials. Pavement cells, epidermal cells on the surface of plant leaves and petals, often form wavy interlocking patterns resembling jigsaw puzzle pieces and have been shown to enhance the fracture toughness of leaves, which is key to plant survival. Their pattern, replicated in laser-engraved poly(methyl methacrylate) samples, was also demonstrated to lead to increased fracture toughness. It is suggested that the arrangement and patterning of cells play a role in managing crack propagation in tissues.
Biomorphic mineralization is a technique that produces materials with morphologies and structures resembling those of natural living organisms by using bio-structures as templates for mineralization. Compared to other methods of material production, biomorphic mineralization is facile, environmentally benign and economic.
Freeze casting (ice templating), an inexpensive method to mimic natural layered structures, was employed by researchers at Lawrence Berkeley National Laboratory to create alumina-Al-Si and IT HAP-epoxy layered composites that match the mechanical properties of bone with an equivalent mineral/organic content. Various further studies also employed similar methods to produce high strength and high toughness composites involving a variety of constituent phases.
Recent studies demonstrated production of cohesive and self supporting macroscopic tissue constructs that mimic living tissues by printing tens of thousands of heterologous picoliter droplets in software-defined, 3D millimeter-scale geometries. Efforts are also taken up to mimic the design of nacre in artificial composite materials using fused deposition modelling and the helicoidal structures of stomatopod clubs in the fabrication of high performance carbon fiber-epoxy composites.
Various established and novel additive manufacturing technologies like PolyJet printing, direct ink writing, 3D magnetic printing, multi-material magnetically assisted 3D printing and magnetically assisted slip casting have also been utilized to mimic the complex micro-scale architectures of natural materials and provide huge scope for future research.
Spider silk is tougher than Kevlar used in bulletproof vests. Engineers could in principle use such a material, if it could be reengineered to have a long enough life, for parachute lines, suspension bridge cables, artificial ligaments for medicine, and other purposes. The self-sharpening teeth of many animals have been copied to make better cutting tools.
New ceramics that exhibit giant electret hysteresis have also been realized.
Neuronal computers
Neuromorphic computers and sensors are electrical devices that copy the structure and function of biological neurons in order to compute. One example is the event camera, in which only the pixels that receive a new signal update to a new state; all other pixels do not update until a signal is received.
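Because the defining behaviour is per-pixel, change-driven updating, a toy model makes it concrete. The sketch below is an illustrative simulation in plain Python/NumPy; the contrast threshold and function name are chosen for the example and are not taken from any particular sensor.

```python
import numpy as np

def events(prev_log, frame, threshold=0.2):
    """Emit (x, y, polarity) events where log-intensity changed enough."""
    log_i = np.log(frame + 1e-6)
    delta = log_i - prev_log
    ys, xs = np.nonzero(np.abs(delta) >= threshold)
    polarity = np.sign(delta[ys, xs]).astype(int)
    prev_log[ys, xs] = log_i[ys, xs]        # only firing pixels update state
    return list(zip(xs.tolist(), ys.tolist(), polarity.tolist()))

rng = np.random.default_rng(0)
f0 = rng.uniform(0.1, 1.0, (4, 4))          # reference frame
f1 = f0.copy()
f1[1, 2] *= 2.0                             # brighten a single pixel

print(events(np.log(f0 + 1e-6), f1))        # [(2, 1, 1)]: only that pixel fires
```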
Self-healing materials
In some biological systems, self-healing occurs via chemical releases at the site of fracture, which initiate a systemic response to transport repairing agents to the fracture site. This promotes autonomic healing. To demonstrate the use of micro-vascular networks for autonomic healing, researchers developed a microvascular coating–substrate architecture that mimics human skin. Bio-inspired self-healing structural color hydrogels that maintain the stability of an inverse opal structure and its resultant structural colors were developed. A self-repairing membrane inspired by rapid self-sealing processes in plants was developed for inflatable lightweight structures such as rubber boats or Tensairity constructions. The researchers applied a thin soft cellular polyurethane foam coating on the inside of a fabric substrate, which closes the crack if the membrane is punctured with a spike. Self-healing materials, polymers and composite materials capable of mending cracks have been produced based on biological materials.
The self-healing properties may also be achieved by the breaking and reforming of hydrogen bonds upon cyclical stress of the material.
Surfaces
Surfaces that recreate the properties of shark skin are intended to enable more efficient movement through water. Efforts have been made to produce fabric that emulates shark skin.
Surface tension biomimetics are being researched for technologies such as hydrophobic or hydrophilic coatings and microactuators.
Adhesion
Wet adhesion
Some amphibians, such as tree and torrent frogs and arboreal salamanders, are able to attach to and move over wet or even flooded environments without falling. These organisms have toe pads which are permanently wetted by mucus secreted from glands that open into channels between the epidermal cells. They attach to mating surfaces by wet adhesion and are capable of climbing on wet rocks even when water is flowing over the surface. Tire treads have also been inspired by the toe pads of tree frogs. 3D-printed hierarchical surface models, inspired by the toe pad design of tree and torrent frogs, have been observed to produce better wet traction than conventional tire designs.
Marine mussels can stick easily and efficiently to surfaces underwater under the harsh conditions of the ocean. Mussels use strong filaments to adhere to rocks in the inter-tidal zones of wave-swept beaches, preventing them from being swept away in strong sea currents. Mussel foot proteins attach the filaments to rocks, boats and practically any surface in nature including other mussels. These proteins contain a mix of amino acid residues which has been adapted specifically for adhesive purposes. Researchers from the University of California Santa Barbara borrowed and simplified chemistries that the mussel foot uses to overcome this engineering challenge of wet adhesion to create copolyampholytes, and one-component adhesive systems with potential for employment in nanofabrication protocols. Other research has proposed adhesive glue from mussels.
Dry adhesion
Leg attachment pads of several animals, including many insects (e.g., beetles and flies), spiders and lizards (e.g., geckos), are capable of attaching to a variety of surfaces and are used for locomotion, even on vertical walls or across ceilings. Attachment systems in these organisms have similar structures at their terminal elements of contact, known as setae. Such biological examples have offered inspiration in order to produce climbing robots, boots and tape. Synthetic setae have also been developed for the production of dry adhesives.
Liquid repellency
Superliquiphobicity refers to a remarkable surface property where a solid surface exhibits an extreme aversion to liquids, causing droplets to bead up and roll off almost instantaneously upon contact. This behavior arises from intricate surface textures and interactions at the nanoscale, effectively preventing liquids from wetting or adhering to the surface. The term "superliquiphobic" is derived from "superhydrophobic," which describes surfaces highly resistant to water. Superliquiphobic surfaces go beyond water repellency and display repellent characteristics towards a wide range of liquids, including those with very low surface tension or containing surfactants.
Superliquiphobicity emerges when a solid surface possesses minute roughness that alters how droplets wet the surface. This behavior hinges on the roughness factor (Rf), defined as the ratio of the actual solid–liquid contact area to its flat projection, which modifies the contact angle. On rough surfaces, non-wetting liquids give rise to composite solid–liquid–air interfaces, whose contact angles are determined by the distribution of wetted and air-pocket areas. Superliquiphobicity is achieved by increasing the fractional liquid–air contact area (fLA) and Rf, leading to surfaces that actively repel liquids.
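The quantities just named, Rf and fLA, enter the standard wetting relations. The equations below are the textbook Wenzel and composite-interface (Cassie–Baxter-type) forms written in those terms, included for orientation rather than quoted from the source.

```latex
% Wenzel: homogeneous wetting of a rough surface; theta_0 is the contact
% angle on the corresponding smooth surface.
\cos\theta = R_f \cos\theta_0
% Composite interface with trapped air pockets, f_{LA} being the
% fractional liquid--air contact area:
\cos\theta = R_f \cos\theta_0 - f_{LA}\,(R_f \cos\theta_0 + 1)
% f_{LA} = 0 recovers Wenzel; f_{LA} -> 1 drives theta toward 180 degrees.
```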
The inspiration for crafting such surfaces draws from nature's ingenuity, illustrated by the "lotus effect". Leaves of water-repellent plants, like the lotus, exhibit inherent hierarchical structures featuring nanoscale wax-coated formations. Other natural surfaces with these capabilities include beetle carapaces and cactus spines, which may exhibit rough features at multiple size scales. These structures lead to superhydrophobicity, where water droplets perch on trapped air bubbles, resulting in high contact angles and minimal contact angle hysteresis. This natural example guides the development of superliquiphobic surfaces, capitalizing on re-entrant geometries that can repel low-surface-tension liquids and achieve near-zero contact angle hysteresis.
Creating superliquiphobic surfaces involves pairing re-entrant geometries with low-surface-energy materials, such as fluorinated substances or liquid-like silicones. These geometries include overhangs that widen beneath the surface, enabling repellency even for liquids with very low intrinsic contact angles. These surfaces find utility in self-cleaning, anti-icing, anti-fogging, antifouling, enhanced condensation, and more, presenting innovative solutions to challenges in biomedicine, desalination, atmospheric water harvesting, and energy conversion.
In essence, superliquiphobicity, inspired by natural models like the lotus leaf, capitalizes on re-entrant geometries and surface properties to create interfaces that actively repel liquids. These surfaces hold immense promise across a range of applications, promising enhanced functionality and performance in various technological and industrial contexts.
Optics
Biomimetic materials are gaining increasing attention in the field of optics and photonics. There are still few known bioinspired or biomimetic products involving the photonic properties of plants or animals. However, understanding how nature designed such optical materials from biological resources is a current field of research.
Inspiration from fruits and plants
One source of biomimetic inspiration is plants. Plants have proven to be concept generators for the following functions: re(action)-coupling, self-adaptability, self-repair, and energy autonomy. As plants do not have a centralized decision-making unit (i.e. a brain), most plants have a decentralized autonomous system in their various organs and tissues. Therefore, they react to multiple stimuli such as light, heat, and humidity.
One example is the carnivorous plant species Dionaea muscipula (Venus flytrap). For the last 25 years, there has been research focus on the motion principles of the plant to develop AVFT (artificial Venus flytrap robots). Through the movement during prey capture, the plant inspired soft robotic motion systems. The fast snap buckling (within 100–300 ms) of the trap closure movement is initiated when prey triggers the hairs of the plant within a certain time (twice within 20 s). AVFT systems exist, in which the trap closure movements are actuated via magnetism, electricity, pressurized air, and temperature changes.
Another example of mimicking plants is Pollia condensata, also known as the marble berry. The chiral self-assembly of cellulose inspired by the Pollia condensata berry has been exploited to make optically active films. Such films are made from cellulose, which is a biodegradable and biobased resource obtained from wood or cotton. The structural colours can potentially be everlasting and more vibrant than those obtained from chemical absorption of light. Pollia condensata is not the only fruit showing a structurally coloured skin; iridescence is also found in berries of other species such as Margaritaria nobilis. These fruits show iridescent colours in the blue-green region of the visible spectrum, which gives the fruit a strong metallic and shiny visual appearance. The structural colours come from the organisation of cellulose chains in the fruit's epicarp, a part of the fruit skin. Each cell of the epicarp is made of a multilayered envelope that behaves like a Bragg reflector. However, the light reflected from the skin of these fruits is not polarised, unlike that arising from man-made replicas obtained from the self-assembly of cellulose nanocrystals into helicoids, which reflect only left-handed circularly polarised light.
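The Bragg-reflector behaviour mentioned above follows the usual thin-film interference condition. The relation below is the standard first-order result for a periodic two-layer stack at normal incidence, included for orientation rather than taken from the source.

```latex
% Constructive reflection from a periodic multilayer at normal incidence
% (first order); n_1, n_2 are the layer refractive indices and d_1, d_2
% the layer thicknesses.
\lambda_{\max} = 2\,(n_1 d_1 + n_2 d_2)
% Stacks whose optical period puts lambda_max in the blue-green region
% produce the metallic iridescence described for these fruits.
```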
The fruit of Elaeocarpus angustifolius also shows structural colour, which arises from the presence of specialised cells called iridosomes that have layered structures. Similar iridosomes have also been found in Delarbrea michieana fruits.
In plants, multilayer structures can be found either at the surface of the leaves (on top of the epidermis), such as in Selaginella willdenowii, or within specialized intracellular organelles, the so-called iridoplasts, which are located inside the cells of the upper epidermis. For instance, the rainforest plant Begonia pavonina has iridoplasts located inside its epidermal cells.
Structural colours have also been found in several algae, such as in the red alga Chondrus crispus (Irish Moss).
Inspiration from animals
Structural coloration produces the rainbow colours of soap bubbles, butterfly wings and many beetle scales. Phase-separation has been used to fabricate ultra-white scattering membranes from polymethylmethacrylate, mimicking the beetle Cyphochilus. LED lights can be designed to mimic the patterns of scales on fireflies' abdomens, improving their efficiency.
Morpho butterfly wings are structurally coloured to produce a vibrant blue that does not vary with angle. This effect can be mimicked by a variety of technologies. Lotus Cars claim to have developed a paint that mimics the Morpho butterfly's structural blue colour. In 2007, Qualcomm commercialised an interferometric modulator display technology, "Mirasol", using Morpho-like optical interference. In 2010, the dressmaker Donna Sgro made a dress from Teijin Fibers' Morphotex, an undyed fabric woven from structurally coloured fibres, mimicking the microstructure of Morpho butterfly wing scales.
Canon Inc.'s SubWavelength structure Coating uses wedge-shaped structures the size of the wavelength of visible light. The wedge-shaped structures cause a continuously changing refractive index as light travels through the coating, significantly reducing lens flare. This imitates the structure of a moth's eye. Notable figures such as the Wright Brothers and Leonardo da Vinci attempted to replicate the flight observed in birds. In an effort to reduce aircraft noise, researchers have looked to the leading edge of owl feathers, which have an array of small finlets or rachis adapted to disperse aerodynamic pressure and provide nearly silent flight to the bird.
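For a sense of the reflection losses that such graded-index, moth-eye-type structures suppress, the normal-incidence Fresnel formula is the relevant baseline; the numbers below are a generic illustration, not Canon's figures.

```latex
% Normal-incidence reflectance at an abrupt interface between media of
% refractive indices n_1 and n_2:
R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2}
% e.g. air (n_1 = 1) to glass (n_2 = 1.5): R = (0.5 / 2.5)^2 = 4\%.
% A continuously graded index removes the abrupt step, driving R toward zero.
```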
Agricultural systems
Holistic planned grazing, using fencing and/or herders, seeks to restore grasslands by carefully planning movements of large herds of livestock to mimic the vast herds found in nature. The natural system being mimicked and used as a template is grazing animals concentrated by pack predators that must move on after eating, trampling, and manuring an area, and returning only after it has fully recovered. Its founder Allan Savory and some others have claimed potential in building soil, increasing biodiversity, and reversing desertification. However, many researchers have disputed Savory's claim. Studies have often found that the method increases desertification instead of reducing it.
Other uses
Some air conditioning systems use biomimicry in their fans to increase airflow while reducing power consumption.
Technologists like Jas Johl have speculated that the functionality of vacuole cells could be used to design highly adaptable security systems. "The functionality of a vacuole, a biological structure that guards and promotes growth, illuminates the value of adaptability as a guiding principle for security." The functions and significance of vacuoles are fractal in nature; the organelle has no basic shape or size, and its structure varies according to the requirements of the cell. Vacuoles not only isolate threats, contain what's necessary, export waste, and maintain pressure; they also help the cell scale and grow. Johl argues these functions are necessary for any security system design. The 500 Series Shinkansen used biomimicry to reduce energy consumption and noise levels while increasing passenger comfort. With reference to space travel, NASA and other firms have sought to develop swarm-type space drones inspired by bee behavioural patterns, and octopod terrestrial drones designed with reference to desert spiders.
Other technologies
Protein folding has been used to control material formation for self-assembled functional nanostructures. Polar bear fur has inspired the design of thermal collectors and clothing. The light-refractive properties of the moth's eye have been studied to reduce the reflectivity of solar panels.
The Bombardier beetle's powerful repellent spray inspired a Swedish company to develop a "micro mist" spray technology, which is claimed to have a low carbon impact (compared to aerosol sprays). The beetle mixes chemicals and releases its spray via a steerable nozzle at the end of its abdomen, stinging and confusing the victim.
Most viruses have an outer capsule 20 to 300 nm in diameter. Virus capsules are remarkably robust and capable of withstanding temperatures as high as 60 °C; they are stable across the pH range 2–10. Viral capsules can be used to create nano device components such as nanowires, nanotubes, and quantum dots. Tubular virus particles such as the tobacco mosaic virus (TMV) can be used as templates to create nanofibers and nanotubes, since both the inner and outer layers of the virus are charged surfaces which can induce nucleation of crystal growth. This was demonstrated through the production of platinum and gold nanotubes using TMV as a template. Mineralized virus particles have been shown to withstand various pH values by mineralizing the viruses with different materials such as silicon, PbS, and CdS, and could therefore serve as useful carriers of material. A spherical plant virus called cowpea chlorotic mottle virus (CCMV) has interesting expanding properties when exposed to environments with a pH higher than 6.5. Above this pH, 60 independent pores with diameters of about 2 nm begin to exchange substances with the environment. The structural transition of the viral capsid can be utilized in biomorphic mineralization for selective uptake and deposition of minerals by controlling the solution pH. Possible applications include using the viral cage to produce uniformly shaped and sized quantum dot semiconductor nanoparticles through a series of pH washes. This is an alternative to the apoferritin cage technique currently used to synthesize uniform CdSe nanoparticles. Such materials could also be used for targeted drug delivery, since the particles release their contents upon exposure to specific pH levels.
| Technology | Biotechnology | null |
45798 | https://en.wikipedia.org/wiki/Clouded%20leopard | Clouded leopard | The clouded leopard (Neofelis nebulosa), also called mainland clouded leopard, is a wild cat inhabiting dense forests from the foothills of the Himalayas through Northeast India and Bhutan to mainland Southeast Asia into South China. It was first described in 1821 on the basis of a skin of an individual from China. The clouded leopard has large dusky-grey blotches and irregular spots and stripes reminiscent of clouds. Its head-and-body length ranges from with a long tail. It uses its tail for balancing when moving in trees and is able to climb down vertical tree trunks head first. It rests in trees during the day and hunts by night on the forest floor.
The clouded leopard is the sister taxon to other pantherine cats, having genetically diverged 9.32 to 4.47 million years ago. Today, the clouded leopard is locally extinct in Singapore, Taiwan, and possibly also in Hainan Island and Vietnam. The wild population is believed to be in decline with fewer than 10,000 adults and no more than 1,000 in each subpopulation. It has therefore been listed as Vulnerable on the IUCN Red List since 2008. The population is threatened by large-scale deforestation and commercial poaching for the wildlife trade. Its body parts are offered for decoration and clothing, though it is legally protected in most range countries.
The clouded leopard has been kept in zoological gardens since the early 20th century. Captive breeding programs were initiated in the 1980s. In captivity, the clouded leopard has an average lifespan of 11 years.
Taxonomy and phylogeny
Felis nebulosa was proposed by Edward Griffith in 1821 who first described a skin of a clouded leopard that was brought alive from Guangdong in China to the menagerie at Exeter Exchange in London.
Felis macrosceloides proposed by Brian Houghton Hodgson in 1841 was a clouded leopard specimen from Nepal.
Felis brachyura proposed by Robert Swinhoe in 1862 was a clouded leopard skin from Taiwan.
The generic name Neofelis was proposed by John Edward Gray in 1867 who subordinated all three to this genus.
At present, N. nebulosa is considered a monotypic species due to lack of evidence for subspeciation.
Felis diardi proposed by Georges Cuvier in 1823 was based on a clouded leopard skin from Java.
It was considered a clouded leopard subspecies by Reginald Innes Pocock in 1917. In 2006, it was identified as a distinct Neofelis species, the Sunda clouded leopard. Populations in Taiwan and Hainan Island are considered to belong to the mainland clouded leopard.
Phylogeny
Skulls of clouded leopard and Panthera species were analysed morphologically in the 1960s. Results indicate that the clouded leopard forms an evolutionary link between the Pantherinae and the Felinae.
Phylogenetic analysis of the nuclear DNA in tissue samples from all Felidae species revealed that the evolutionary radiation of the Felidae began in the Miocene around in Asia. Analysis of mitochondrial DNA of all Felidae species indicates a radiation at .
The clouded leopard is the sister taxon to all other members of the Pantherinae, diverging , based on analysis of their nuclear DNA.
The clouded leopard from mainland Asia reached Borneo and Sumatra via a now submerged land bridge probably during the Pleistocene, when populations became isolated during periods of global cooling and warming. Genetic analysis of hair samples of the clouded leopard and its sister species the Sunda clouded leopard (N. diardi) indicates that they diverged 2.0–0.93 million years ago.
Characteristics
The clouded leopard's fur is of a dark grey or ochreous ground colour, often largely obliterated by a black and dark dusky-grey blotched pattern. There are black spots on the head, and the ears are black. Partly fused or broken-up stripes run from the corner of the eyes over the cheek, from the corner of the mouth to the neck, and along the nape to the shoulders. Elongated blotches continue down the spine and form a single median stripe on the loins. Two large blotches of dark dusky-grey hair on the side of the shoulders are each emphasized posteriorly by a dark stripe, which passes on to the foreleg and breaks up into irregular spots. The flanks are marked by dark dusky-grey irregular blotches bordered behind by long, oblique, irregularly curved or looped stripes. These blotches, which yield the clouded pattern, suggest the cat's English name. The underparts and legs are spotted, and the tail is marked by large, irregular, paired spots. Its legs are short and stout, and its paws broad. Females are slightly smaller than males.
Its hyoid bone is ossified, making it possible to purr. Its pupils contract into vertical slits. Irises are brownish yellow to grayish green. Melanistic clouded leopards are uncommon. It has rather short limbs compared to the other big cats. Its hind limbs are longer than its front limbs to allow for increased jumping and leaping capabilities. Its ulnae and radii are not fused, which also contributes to a greater range of motion when climbing trees and stalking prey. Clouded leopards weigh between . Females vary in head-to-body length from , with a tail long. Males are larger at with a tail long.
Its shoulder height varies from .
Its skull is long and low with strong occipital and sagittal crests. The canine teeth are exceptionally long, the upper being about three times as long as the basal width of the socket. The first premolar is usually absent. The upper pair of canines measure or longer.
It has a bite force at the canine tip of 544.3 newtons and a bite force quotient at the canine tip of 122.4.
The clouded leopard is often referred to as a "modern-day sabre-tooth" because it has the largest canines in proportion to its body size.
Distribution and habitat
The clouded leopard occurs from the Himalayan foothills in Nepal, Bhutan and India to Myanmar, southeastern Bangladesh, Thailand, Peninsular Malaysia and to south of the Yangtze River in China. It is locally extinct in Singapore and Taiwan.
Clouded leopards were found in Nepal in 1987 and 1988, having previously been presumed to be extinct in the country. Since then, the clouded leopard has been recorded in Shivapuri Nagarjun National Park and in Annapurna Conservation Area. Between 2014 and 2015, it was also recorded in Langtang National Park at an elevation range of .
In India, it occurs in the states of Sikkim, northern West Bengal, Tripura, Mizoram, Manipur, Assam, Nagaland and Arunachal Pradesh, as well as in the Meghalaya subtropical forests. In Pakke Tiger Reserve, a clouded leopard was photographed in semi-evergreen forest at an elevation of . In Sikkim, clouded leopards were photographed by camera traps at elevations of between April 2008 and May 2010 in the Khangchendzonga Biosphere Reserve. In Manas National Park, 16 individuals were recorded during a survey in November 2010 to February 2011. Between January 2013 and March 2018, clouded leopards were also recorded in Dampa Tiger Reserve, Eaglenest Wildlife Sanctuary and Singchung-Bugun Village Community Reserve, in Meghalaya's Nongkhyllem National Park and Balpakram-Baghmara landscape.
In Bhutan, it was recorded in Royal Manas National Park, Jigme Singye Wangchuck National Park, Phibsoo Wildlife Sanctuary, Jigme Dorji National Park, Phrumsengla National Park, Bumdeling Wildlife Sanctuary and several non-protected areas. In Bangladesh, it was recorded in Sangu Matamuhari in the Chittagong Hill Tracts in 2016. In Myanmar, it was recorded by camera traps for the first time in the hill forests of Karen State in 2015.
In Thailand, it inhabits relatively open, dry tropical forest in Huai Kha Khaeng Wildlife Sanctuary and closed-forest habitats in Khao Yai National Park. In Laos, it was recorded in Nam Et-Phou Louey National Protected Area in dry evergreen and semi-evergreen forests. In Cambodia, it was recorded in deciduous dipterocarp forest in Phnom Prich Wildlife Sanctuary between 2008 and 2009, and in Central Cardamom Mountains National Park, Southern Cardamom National Park, Botum Sakor National Park and Phnom Samkos Wildlife Sanctuary between 2012 and 2016. In Peninsular Malaysia, it was recorded in Taman Negara National Park, Ulu Muda Forest, Pasoh Forest Reserve, Belum-Temengor, Temengor Forest Reserve and in a few linkages between 2009 and 2015.
The last confirmed record of a Formosan clouded leopard dates to 1989, when the skin of a young individual was found in the Taroko National Park. It was not recorded during an extensive camera trapping survey conducted from 1997 to 2012 in more than 1,450 sites inside and outside Taiwanese protected areas.
Behaviour and ecology
The clouded leopard is a solitary cat. Early accounts depict it as a rare, secretive, arboreal, and nocturnal inhabitant of dense primary forest.
It is one of the most talented climbers among the cats. Captive clouded leopards have been observed to climb down vertical tree trunks head first, and hang on to branches with their hind paws bent around branchings of tree limbs. They are capable of supination and can hang down from branches only by bending their hind paws and their tail around them. They can jump up to high.
They use trees as daytime rest sites, but also spend time on the ground when hunting at night. Captive clouded leopards have been observed to scent mark by spraying urine and rubbing their heads on prominent objects.
Their vocalisations include a short high-pitched meow call, a loud crying call, both emitted when a cat is trying to locate another one over a long or short distance; they prusten and raise their muzzle when meeting each other in a friendly manner; when aggressive, they growl with a low-pitched sound and hiss with exposed teeth and wrinkled nose.
Radio-collared clouded leopards were foremost active by night but also showed crepuscular activity peaks.
Clouded leopards recorded in northeast India were most active in the late evening after sunset.
Home ranges have only been estimated in Thailand:
Four individuals were radio-collared in Phu Khieo Wildlife Sanctuary from April 2000 to February 2003. Home ranges of two females were and , and of two males and .
Two individuals were radio-collared during a study from 1997 to 1999 in the Khao Yai National Park. The home range of one female was , of the one male . Both individuals had a core area of .
In 2016, clouded leopards were detected in the forest complex of Khlong Saeng Wildlife Sanctuary and Khao Sok National Park during camera trapping surveys; 15 individuals were identified in a core zone of with population density estimated at 5.06 individuals per ; but only 12 individuals were identified in an edge zone of , which is more disturbed by humans, with density estimated at 3.13 individuals per .
Hunting and diet
When hunting, the clouded leopard stalks its prey or waits for the prey to approach. After making and feeding on a kill, it usually retreats into trees to digest and rest. Its prey includes both arboreal and terrestrial vertebrates.
Pocock presumed that it is adapted for preying upon herbivorous mammals of considerable bulk because of its powerful build, long canines and the deep penetration of its bites. In Thailand, clouded leopards have been observed preying on southern pig-tailed macaque (Macaca nemestrina), Indian hog deer (Axis porcinus), Bengal slow loris (Nycticebus bengalensis), Asiatic brush-tailed porcupine (Atherurus macrourus), Malayan pangolin (Manis javanica) and Berdmore's ground squirrel (Menetes berdmorei). Known prey species in China include barking deer (Muntiacus sp.) and pheasants.
In northern Peninsular Malaysia, a male clouded leopard was photographed while carrying a binturong (Arctictis binturong) in its jaws.
Reproduction and life cycle
Both males and females average 26 months at first reproduction. The female is in estrus for about six days, with her estrous cycle lasting about 30 days. In the wild, mating usually occurs between December and March. The pair mates multiple times over the course of several days. The male grasps the female by the neck, and she responds with vocalization. Occasionally, he also bites her during courtship and is very aggressive during sexual encounters. Females can bear one litter each year. The male is not involved in raising the cubs.
The female gives birth to a litter of one to five, mostly three cubs, after a gestation period of 93 ± 6 days. Cubs are born with closed eyes and weigh from . Their spots are solid dark, rather than dark rings. Their eyes open after about 10 days. They are active within five weeks and fully weaned at around three months of age. They attain the adult coat pattern at around six months and become independent after around 10 months.
Captive clouded leopards have an average lifespan of 11 years.
One individual has lived to be almost 17 years old.
The generation length of the clouded leopard is about seven years.
Threats
Clouded leopards require larger areas of intact forest than are present in many parts of their range. They are threatened by habitat loss following large-scale deforestation, and by commercial poaching for the wildlife trade. In Myanmar, 301 body parts of at least 279 clouded leopards, mostly skins and skeletons, were observed in four markets surveyed between 1991 and 2006, despite the protected status of clouded leopards in Myanmar. Some markets are located near Myanmar's borders with China and Thailand and are used to facilitate cross-border smuggling.
In Nepal, 27 cases of clouded leopard body parts were discovered between November 1988 and March 2020 in nine districts of the country, comprising at least 51 individual clouded leopards. In 17 of these cases, the poachers and traders were arrested.
Conservation
The clouded leopard is listed in CITES Appendix I. Hunting is banned in Bangladesh, China, India, Malaysia, Myanmar, Nepal, Taiwan, Thailand and Vietnam. These bans, however, are poorly enforced in India, Malaysia and Thailand.
In the United States, the clouded leopard is listed as endangered under the Endangered Species Act, prohibiting trade in live animals or body parts.
International Clouded Leopard Day has been celebrated annually on 4 August since 2018 in zoos and conservation organizations all over the world.
In captivity
Clouded leopards have been kept in zoos since the early 20th century. The international studbook was initiated in the 1970s. Coordinated breeding programs were started in the 1980s and encompass the European Endangered Species Programme, the Species Survival Plan, and the Indian Conservation Breeding Programme. As of 2014, 64 institutions keep clouded leopards.
Early captive-breeding programs involving clouded leopards were not successful, largely due to ignorance of their courtship behaviour. Males have the reputation of being aggressive towards females. For breeding success, it has been deemed extremely important that male and female clouded leopards are compatible. Introducing pairs at a young age gives them opportunities to bond and breed successfully. Facilities breeding clouded leopards need to provide the female with a secluded, off-exhibit area. There has been some recent captive breeding success using artificial insemination with cubs successfully born in 1992, 2015 and 2017.
A study on morbidity and mortality rate of 271 captive clouded leopards across 44 zoos in Europe, Asia and Australia showed that 17% of them died because of respiratory disease, 12% due to maternal neglect and starvation, 10% from generalized infectious disease, 10% from digestive diseases, and 10% from trauma.
In March 2011, two breeding females at the Nashville Zoo at Grassmere gave birth to three cubs, which were raised by zookeepers. Each cub weighed . In June 2011, two cubs were born at the Point Defiance Zoo & Aquarium. The breeding pair was brought from the Khao Kheow Open Zoo in Thailand in an ongoing education and research exchange program. Four cubs were born at Nashville Zoo in 2012. In May 2015, four cubs were born in Point Defiance Zoo & Aquarium.
In culture
The clouded leopard is the state animal of the Indian state of Meghalaya. In the 1970s, the print of Rama Samaraweera's painting Clouded leopard was a best-seller in the US.
| Biology and health sciences | Felines | Animals |
45809 | https://en.wikipedia.org/wiki/Dijkstra%27s%20algorithm | Dijkstra's algorithm | Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, a road network. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.
Dijkstra's algorithm finds the shortest path from a given source node to every other node. It can be used to find the shortest path to a specific destination node, by terminating the algorithm after determining the shortest path to the destination node. For example, if the nodes of the graph represent cities, and the costs of edges represent the average distances between pairs of cities connected by a direct road, then Dijkstra's algorithm can be used to find the shortest route between one city and all other cities. A common application of shortest path algorithms is network routing protocols, most notably IS-IS (Intermediate System to Intermediate System) and OSPF (Open Shortest Path First). It is also employed as a subroutine in algorithms such as Johnson's algorithm.
The algorithm uses a min-priority queue data structure for selecting the shortest paths known so far. Before more advanced priority queue structures were discovered, Dijkstra's original algorithm ran in Θ(|V|²) time, where |V| is the number of nodes. Fredman & Tarjan (1984) proposed a Fibonacci heap priority queue to optimize the running time complexity to Θ(|E| + |V| log |V|). This is asymptotically the fastest known single-source shortest-path algorithm for arbitrary directed graphs with unbounded non-negative weights. However, specialized cases (such as bounded/integer weights, directed acyclic graphs, etc.) can be improved further. If preprocessing is allowed, algorithms such as contraction hierarchies can be up to seven orders of magnitude faster.
Dijkstra's algorithm is commonly used on graphs where the edge weights are positive integers or real numbers. It can be generalized to any graph where the edge weights are partially ordered, provided the subsequent labels (a subsequent label is produced when traversing an edge) are monotonically non-decreasing.
In many fields, particularly artificial intelligence, Dijkstra's algorithm or a variant offers a uniform cost search and is formulated as an instance of the more general idea of best-first search.
History
Dijkstra thought about the shortest path problem while working as a programmer at the Mathematical Center in Amsterdam in 1956. He wanted to demonstrate the capabilities of the new ARMAC computer. His objective was to choose a problem and a computer solution that non-computing people could understand. He designed the shortest path algorithm and later implemented it for ARMAC for a slightly simplified transportation map of 64 cities in the Netherlands (he limited it to 64, so that 6 bits would be sufficient to encode the city number). A year later, he came across another problem advanced by hardware engineers working on the institute's next computer: minimize the amount of wire needed to connect the pins on the machine's back panel. As a solution, he re-discovered Prim's minimal spanning tree algorithm (known earlier to Jarník, and also rediscovered by Prim). Dijkstra published the algorithm in 1959, two years after Prim and 29 years after Jarník.
Algorithm
The algorithm requires a starting node and computes, for each node N, the shortest distance between the starting node and N. Dijkstra's algorithm starts with infinite distances and tries to improve them step by step:
Create a set of all unvisited nodes: the unvisited set.
Assign to every node a distance from start value: for the starting node, it is zero, and for all other nodes, it is infinity, since initially no path is known to these nodes. During execution, the distance of a node N is the length of the shortest path discovered so far between the starting node and N.
From the unvisited set, select the current node to be the one with the smallest (finite) distance; initially, this is the starting node (distance zero). If the unvisited set is empty, or contains only nodes with infinite distance (which are unreachable), then the algorithm terminates by skipping to step 6. If the only concern is the path to a target node, the algorithm terminates once the current node is the target node. Otherwise, the algorithm continues.
For the current node, consider all of its unvisited neighbors and update their distances through the current node; compare the newly calculated distance to the one currently assigned to the neighbor and assign the smaller one to it. For example, if the current node A is marked with a distance of 6, and the edge connecting it with its neighbor B has length 2, then the distance to B through A is 6 + 2 = 8. If B was previously marked with a distance greater than 8, then update it to 8 (the path to B through A is shorter). Otherwise, keep its current distance (the path to B through A is not the shortest).
After considering all of the current node's unvisited neighbors, the current node is removed from the unvisited set. Thus a visited node is never rechecked, which is correct because the distance recorded on the current node is minimal (as ensured in step 3), and thus final. Repeat from step 3.
Once the loop exits (steps 3–5), every visited node contains its shortest distance from the starting node.
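The six steps above map almost line-for-line onto a short program. The following is a minimal Python sketch, assuming the graph is represented as a dictionary mapping each node to a dictionary of {neighbor: edge length}; the function and variable names are illustrative, not part of the original description.

import math

def dijkstra(graph, start):
    # graph: {node: {neighbor: edge_length}} (assumed representation)
    unvisited = set(graph)                     # step 1: the unvisited set
    dist = {node: math.inf for node in graph}  # step 2: infinity everywhere...
    dist[start] = 0                            # ...except zero at the start

    while unvisited:
        # step 3: current node = unvisited node with smallest distance
        current = min(unvisited, key=lambda n: dist[n])
        if dist[current] == math.inf:
            break                              # only unreachable nodes remain
        # step 4: relax all unvisited neighbors through the current node
        for neighbor, length in graph[current].items():
            if neighbor in unvisited:
                dist[neighbor] = min(dist[neighbor], dist[current] + length)
        unvisited.remove(current)              # step 5: never recheck a visited node

    return dist                                # step 6: final shortest distances

# Example: dijkstra({'a': {'b': 2, 'c': 5}, 'b': {'a': 2, 'c': 1},
#                    'c': {'a': 5, 'b': 1}}, 'a') returns {'a': 0, 'b': 2, 'c': 3}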
Description
The shortest path between two intersections on a city map can be found by this algorithm using pencil and paper. Every intersection is listed on a separate line: one is the starting point and is labeled (given a distance of) 0. Every other intersection is initially labeled with a distance of infinity. This is done to note that no path to these intersections has yet been established. At each iteration one intersection becomes the current intersection. For the first iteration, this is the starting point.
From the current intersection, the distance to every neighbor (directly-connected) intersection is assessed by summing the label (value) of the current intersection and the distance to the neighbor and then relabeling the neighbor with the lesser of that sum and the neighbor's existing label. I.e., the neighbor is relabeled if the path to it through the current intersection is shorter than previously assessed paths. If so, mark the road to the neighbor with an arrow pointing to it, and erase any other arrow that points to it. After the distances to each of the current intersection's neighbors have been assessed, the current intersection is marked as visited. The unvisited intersection with the smallest label becomes the current intersection and the process repeats until all nodes with labels less than the destination's label have been visited.
Once no unvisited nodes remain with a label smaller than the destination's label, the remaining arrows show the shortest path.
Pseudocode
In the following pseudocode, dist is an array that contains the current distances from the source to other vertices, i.e. dist[u] is the current distance from the source to the vertex u. The prev array contains pointers to previous-hop nodes on the shortest path from source to the given vertex (equivalently, it is the next-hop on the path from the given vertex to the source). The code u ← vertex in Q with minimum dist[u] searches for the vertex u in the vertex set Q that has the least dist[u] value. Graph.Edges(u, v) returns the length of the edge joining (i.e. the distance between) the two neighbor-nodes u and v. The variable alt on line 14 is the length of the path from the source node to the neighbor node v if it were to go through u. If this path is shorter than the current shortest path recorded for v, then the distance of v is updated to alt.
1 function Dijkstra(Graph, source):
2 create vertex set Q
3 for each vertex v in Graph.Vertices:
4 dist[v] ← INFINITY
5 prev[v] ← UNDEFINED
6 add v to Q
7 dist[source] ← 0
8
9 while Q is not empty:
10 u ← vertex in Q with minimum dist[u]
11 remove u from Q
12
13 for each neighbor v of u still in Q:
14 alt ← dist[u] + Graph.Edges(u, v)
15 if alt < dist[v]:
16 dist[v] ← alt
17 prev[v] ← u
18
19 return dist[], prev[]
To find the shortest path between vertices source and target, the search terminates after line 10 if u = target. The shortest path from source to target can be obtained by reverse iteration:
1 S ← empty sequence
2 u ← target
3 if prev[u] is defined or u = source: // Proceed if the vertex is reachable
4 while u is defined: // Construct the shortest path with a stack S
5 insert u at the beginning of S // Push the vertex onto the stack
6 u ← prev[u] // Traverse from target to source
Now sequence S is the list of vertices constituting one of the shortest paths from source to target, or the empty sequence if no path exists.
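A Python rendering of this reverse iteration might look as follows, assuming prev is a mapping from each vertex to its predecessor, with the source having no entry (names are illustrative):

def reconstruct_path(prev, source, target):
    if target != source and prev.get(target) is None:
        return []                 # target unreachable: empty sequence
    path, u = [], target
    while u is not None:
        path.insert(0, u)         # insert u at the beginning of S
        u = prev.get(u)           # traverse from target back to source
    return path

# reconstruct_path({'b': 'a', 'c': 'b'}, 'a', 'c') returns ['a', 'b', 'c']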
A more general problem is to find all the shortest paths between source and target (there might be several of the same length). Then instead of storing only a single node in each entry of prev[], all nodes satisfying the relaxation condition can be stored. For example, if both r and source connect to target and they lie on different shortest paths through target (because the edge cost is the same in both cases), then both r and source are added to prev[target]. When the algorithm completes, the prev[] data structure describes a graph that is a subset of the original graph with some edges removed. Its key property is that if the algorithm was run with some starting node, then every path from that node to any other node in the new graph is the shortest path between those nodes in the original graph, and all paths of that length from the original graph are present in the new graph. Then to actually find all these shortest paths between two given nodes, a path-finding algorithm on the new graph, such as depth-first search, would work.
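A brief Python sketch of this variant, with prev holding sets of predecessors and a depth-first enumeration of all shortest paths (all names and the dictionary representation are illustrative assumptions):

def relax_all(dist, prev, u, v, weight):
    alt = dist[u] + weight
    if alt < dist[v]:
        dist[v] = alt
        prev[v] = {u}             # strictly shorter: replace the predecessor set
    elif alt == dist[v]:
        prev[v].add(u)            # equally short: remember this predecessor too

def all_shortest_paths(prev, source, target):
    if target == source:
        return [[source]]
    return [path + [target]
            for u in prev.get(target, ())
            for path in all_shortest_paths(prev, source, u)]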
Using a priority queue
A min-priority queue is an abstract data type that provides 3 basic operations: add_with_priority(), decrease_priority() and extract_min(). As mentioned earlier, using such a data structure can lead to faster computing times than using a basic queue. Notably, Fibonacci heap or Brodal queue offer optimal implementations for those 3 operations. As the algorithm is slightly different in appearance, it is mentioned here, in pseudocode as well:
1 function Dijkstra(Graph, source):
2 create vertex priority queue Q
3
4 dist[source] ← 0 // Initialization
5 Q.add_with_priority(source, 0) // associated priority equals dist[·]
6
7 for each vertex v in Graph.Vertices:
8 if v ≠ source
9 prev[v] ← UNDEFINED // Predecessor of v
10 dist[v] ← INFINITY // Unknown distance from source to v
11 Q.add_with_priority(v, INFINITY)
12
13
14 while Q is not empty: // The main loop
15 u ← Q.extract_min() // Remove and return best vertex
16 for each neighbor v of u: // Go through all v neighbors of u
17 alt ← dist[u] + Graph.Edges(u, v)
18 if alt < dist[v]:
19 prev[v] ← u
20 dist[v] ← alt
21 Q.decrease_priority(v, alt)
22
23 return dist, prev
Instead of filling the priority queue with all nodes in the initialization phase, it is possible to initialize it to contain only source; then, inside the if alt < dist[v] block, the decrease_priority() becomes an add_with_priority() operation.
Yet another alternative is to add nodes unconditionally to the priority queue and to instead check after extraction (u ← Q.extract_min()) that it isn't revisiting, or that no shorter connection was found yet in the if alt < dist[v] block. This can be done by additionally extracting the associated priority p from the queue and only processing further if p == dist[u] inside the while Q is not empty loop.
These alternatives can use entirely array-based priority queues without decrease-key functionality, which have been found to achieve even faster computing times in practice. However, the difference in performance was found to be narrower for denser graphs.
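As a concrete illustration of the second alternative, here is a hedged Python sketch using the standard library's array-based heapq (which offers no decrease-key) and discarding stale queue entries after extraction; the graph representation is the same dictionary-of-dictionaries assumption as in the earlier sketch:

import heapq, math

def dijkstra_heap(graph, source):
    dist = {v: math.inf for v in graph}
    prev = {v: None for v in graph}
    dist[source] = 0
    queue = [(0, source)]              # start with only the source in the queue
    while queue:
        p, u = heapq.heappop(queue)    # extract_min
        if p > dist[u]:
            continue                   # stale entry: a shorter path was pushed later
        for v, weight in graph[u].items():
            alt = dist[u] + weight
            if alt < dist[v]:
                dist[v] = alt
                prev[v] = u
                heapq.heappush(queue, (alt, v))   # insert instead of decrease-key
    return dist, prev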
Proof
To prove the correctness of Dijkstra's algorithm, mathematical induction can be used on the number of visited nodes.
Invariant hypothesis: For each visited node v, dist[v] is the shortest distance from source to v; and for each unvisited node u, dist[u] is the shortest distance from source to u when traveling via visited nodes only, or infinity if no such path exists. (Note: we do not assume dist[u] is the actual shortest distance for unvisited nodes, while dist[v] is the actual shortest distance for visited ones.)
Base case
The base case is when there is just one visited node, namely the initial node source. Its distance is defined to be zero, which is the shortest distance, since negative weights are not allowed. Hence, the hypothesis holds.
Induction
Assuming that the hypothesis holds for k visited nodes, to show it holds for k + 1 nodes, let u be the next visited node, i.e. the node with minimum dist[u]. The claim is that dist[u] is the shortest distance from source to u.
The proof is by contradiction. If a shorter path were available, then this shorter path either contains another unvisited node or not.
In the former case, let w be the first unvisited node on this shorter path. By induction, the shortest paths from source to u and from source to w through visited nodes only have costs dist[u] and dist[w] respectively. This means the cost of going from source to u via w is at least dist[w] + the minimal cost of going from w to u. As the edge costs are positive, the minimal cost of going from w to u is a positive number. However, dist[u] is at most dist[w], because otherwise w would have been picked by the priority queue instead of u. This is a contradiction, since it has already been established that dist[w] + a positive number < dist[u].
In the latter case, let w be the last-but-one node on the shortest path. That means dist[w] + Graph.Edges(w, u) < dist[u]. That is a contradiction, because by the time w was visited, it would have set dist[u] to at most dist[w] + Graph.Edges(w, u).
For all other visited nodes v, dist[v] is already known to be the shortest distance from source, because of the inductive hypothesis, and these values are unchanged.
After processing u, it is still true that for each unvisited node w, dist[w] is the shortest distance from source to w using visited nodes only. Any shorter path that did not use u would already have been found, and if a shorter path used u it would have been updated when processing u.
After all nodes are visited, the shortest path from source to any node consists only of visited nodes. Therefore, dist[v] is the shortest distance for every node v.
Running time
Bounds of the running time of Dijkstra's algorithm on a graph with edges and vertices can be expressed as a function of the number of edges, denoted |E|, and the number of vertices, denoted |V|, using big-O notation. The complexity bound depends mainly on the data structure used to represent the set Q. In the following, upper bounds can be simplified because |E| is O(|V|²) for any simple graph, but that simplification disregards the fact that in some problems, other upper bounds on |E| may hold.
For any data structure for the vertex set Q, the running time is Θ(|E| · T_dk + |V| · T_em), where T_dk and T_em are the complexities of the decrease-key and extract-minimum operations in Q, respectively.
The simplest version of Dijkstra's algorithm stores the vertex set Q as a linked list or array, and edges as an adjacency list or matrix. In this case, extract-minimum is simply a linear search through all vertices in Q, so the running time is Θ(|V|²).
For sparse graphs, that is, graphs with far fewer than |V|² edges, Dijkstra's algorithm can be implemented more efficiently by storing the graph in the form of adjacency lists and using a self-balancing binary search tree, binary heap, pairing heap, Fibonacci heap or a priority heap as a priority queue to implement extracting minimum efficiently. To perform decrease-key steps in a binary heap efficiently, it is necessary to use an auxiliary data structure that maps each vertex to its position in the heap, and to update this structure as the priority queue changes. With a self-balancing binary search tree or binary heap, the algorithm requires
Θ((|E| + |V|) log |V|) time in the worst case; for connected graphs this time bound can be simplified to Θ(|E| log |V|). The Fibonacci heap improves this to Θ(|E| + |V| log |V|).
When using binary heaps, the average case time complexity is lower than the worst case: assuming edge costs are drawn independently from a common probability distribution, the expected number of decrease-key operations is bounded by O(|V| log(|E|/|V|)), giving a total running time of O(|E| + |V| log(|E|/|V|) log |V|).
Practical optimizations and infinite graphs
In common presentations of Dijkstra's algorithm, initially all nodes are entered into the priority queue. This is, however, not necessary: the algorithm can start with a priority queue that contains only one item, and insert new items as they are discovered (instead of doing a decrease-key, check whether the key is in the queue; if it is, decrease its key, otherwise insert it). This variant has the same worst-case bounds as the common variant, but maintains a smaller priority queue in practice, speeding up queue operations.
Moreover, not inserting all nodes in a graph makes it possible to extend the algorithm to find the shortest path from a single source to the closest of a set of target nodes on infinite graphs or those too large to represent in memory. The resulting algorithm is called uniform-cost search (UCS) in the artificial intelligence literature and can be expressed in pseudocode as
procedure uniform_cost_search(start) is
node ← start
frontier ← priority queue containing node only
expanded ← empty set
do
if frontier is empty then
return failure
node ← frontier.pop()
if node is a goal state then
return solution(node)
expanded.add(node)
for each of node's neighbors n do
if n is not in expanded and not in frontier then
frontier.add(n)
else if n is in frontier with higher cost
replace existing node with n
Its complexity can be expressed in an alternative way for very large graphs: when C* is the length of the shortest path from the start node to any node satisfying the "goal" predicate, each edge has cost at least ε, and the number of neighbors per node is bounded by b, then the algorithm's worst-case time and space complexity are both in O(b^(1+⌊C*/ε⌋)).
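A minimal Python sketch of uniform-cost search, assuming the graph is given implicitly by a neighbors(node) function yielding (successor, cost) pairs so that it can run on graphs too large to store explicitly; it uses the lazy-insertion alternative discussed earlier rather than the frontier-replacement step of the pseudocode above (all names are illustrative):

import heapq

def uniform_cost_search(start, neighbors, is_goal):
    frontier = [(0, start)]            # priority queue containing start only
    best = {start: 0}
    expanded = set()
    while frontier:
        cost, node = heapq.heappop(frontier)
        if node in expanded:
            continue                   # stale entry for an already-expanded node
        if is_goal(node):
            return cost                # first expansion of a goal is optimal
        expanded.add(node)
        for nxt, step in neighbors(node):
            alt = cost + step
            if alt < best.get(nxt, float('inf')):
                best[nxt] = alt
                heapq.heappush(frontier, (alt, nxt))
    return None                        # failure: frontier exhausted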
Further optimizations for the single-target case include bidirectional variants, goal-directed variants such as the A* algorithm (see Related problems and algorithms below), graph pruning to determine which nodes are likely to form the middle segment of shortest paths (reach-based routing), and hierarchical decompositions of the input graph that reduce routing to connecting the source and target to their respective "transit nodes", followed by shortest-path computation between these transit nodes using a "highway". Combinations of such techniques may be needed for optimal practical performance on specific problems.
Optimality for comparison-sorting by distance
As well as simply computing distances and paths, Dijkstra's algorithm can be used to sort vertices by their distances from a given starting vertex.
In 2023, Haeupler, Rozhoň, Tětek, Hladík, and Tarjan (one of the inventors of the 1984 heap), proved that, for this sorting problem on a positively-weighted directed graph, a version of Dijkstra's algorithm with a special heap data structure has a runtime and number of comparisons that is within a constant factor of optimal among comparison-based algorithms for the same sorting problem on the same graph and starting vertex but with variable edge weights. To achieve this, they use a comparison-based heap whose cost of returning/removing the minimum element from the heap is logarithmic in the number of elements inserted after it rather than in the number of elements in the heap.
Specialized variants
When arc weights are small integers (bounded by a parameter C), specialized queues can be used for increased speed. The first algorithm of this type was Dial's algorithm for graphs with positive integer edge weights, which uses a bucket queue to obtain a running time O(|E| + |V|C). The use of a Van Emde Boas tree as the priority queue brings the complexity to O(|E| log log C). Another interesting variant based on a combination of a new radix heap and the well-known Fibonacci heap runs in time O(|E| + |V|√(log C)). Finally, the best algorithms in this special case run in O(|E| log log |V|) time and O(|E| + |V| min{(log |V|)^(1/3+ε), (log C)^(1/4+ε)}) time.
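A sketch of Dial's bucket queue in Python under the stated assumptions (positive integer weights bounded by C, and the same dictionary graph representation as in the earlier sketches):

def dial(graph, source, C):
    INF = float('inf')
    dist = {v: INF for v in graph}
    dist[source] = 0
    max_dist = C * (len(graph) - 1)            # largest possible finite distance
    buckets = [[] for _ in range(max_dist + 1)]
    buckets[0].append(source)
    for d in range(max_dist + 1):              # scan buckets in increasing distance
        for u in buckets[d]:
            if d != dist[u]:
                continue                       # stale entry, already improved
            for v, w in graph[u].items():      # w is a positive integer <= C
                if d + w < dist[v]:
                    dist[v] = d + w
                    buckets[d + w].append(v)
    return dist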
Related problems and algorithms
Dijkstra's original algorithm can be extended with modifications. For example, sometimes it is desirable to present solutions which are less than mathematically optimal. To obtain a ranked list of less-than-optimal solutions, the optimal solution is first calculated. A single edge appearing in the optimal solution is removed from the graph, and the optimum solution to this new graph is calculated. Each edge of the original solution is suppressed in turn and a new shortest-path calculated. The secondary solutions are then ranked and presented after the first optimal solution.
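A sketch of this edge-suppression scheme, reusing the hypothetical dijkstra_heap and reconstruct_path helpers from the earlier examples and assuming a directed dictionary-of-dictionaries graph:

def ranked_alternatives(graph, source, target):
    dist, prev = dijkstra_heap(graph, source)
    best = reconstruct_path(prev, source, target)
    alternatives = []
    for u, v in zip(best, best[1:]):           # suppress each optimal edge in turn
        saved = graph[u].pop(v)                # remove edge u -> v
        d2, p2 = dijkstra_heap(graph, source)
        if d2[target] != float('inf'):
            alternatives.append((d2[target], reconstruct_path(p2, source, target)))
        graph[u][v] = saved                    # restore the edge
    alternatives.sort(key=lambda t: t[0])      # rank the secondary solutions
    return best, alternatives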
Dijkstra's algorithm is usually the working principle behind link-state routing protocols. OSPF and IS-IS are the most common.
Unlike Dijkstra's algorithm, the Bellman–Ford algorithm can be used on graphs with negative edge weights, as long as the graph contains no negative cycle reachable from the source vertex s. The presence of such cycles means that no shortest path can be found, since the label becomes lower each time the cycle is traversed. (This statement assumes that a "path" is allowed to repeat vertices. In graph theory that is normally not allowed. In theoretical computer science it often is allowed.) It is possible to adapt Dijkstra's algorithm to handle negative weights by combining it with the Bellman–Ford algorithm (to remove negative edges and detect negative cycles): Johnson's algorithm.
The A* algorithm is a generalization of Dijkstra's algorithm that reduces the size of the subgraph that must be explored, if additional information is available that provides a lower bound on the distance to the target.
The process that underlies Dijkstra's algorithm is similar to the greedy process used in Prim's algorithm. Prim's purpose is to find a minimum spanning tree that connects all nodes in the graph; Dijkstra's is concerned with only two nodes. Prim's does not evaluate the total weight of the path from the starting node, only the individual edges.
Breadth-first search can be viewed as a special-case of Dijkstra's algorithm on unweighted graphs, where the priority queue degenerates into a FIFO queue.
The fast marching method can be viewed as a continuous version of Dijkstra's algorithm which computes the geodesic distance on a triangle mesh.
Dynamic programming perspective
From a dynamic programming point of view, Dijkstra's algorithm is a successive approximation scheme that solves the dynamic programming functional equation for the shortest path problem by the Reaching method.
In fact, Dijkstra's explanation of the logic behind the algorithm is a paraphrasing of Bellman's Principle of Optimality in the context of the shortest path problem.
| Mathematics | Graph theory | null |
45829 | https://en.wikipedia.org/wiki/Structural%20engineering | Structural engineering | Structural engineering is a sub-discipline of civil engineering in which structural engineers are trained to design the 'bones and joints' that create the form and shape of human-made structures. Structural engineers also must understand and calculate the stability, strength, rigidity and earthquake-susceptibility of built structures for buildings and nonbuilding structures. Their structural designs are integrated with those of other designers such as architects and building services engineers, and they often supervise the construction of projects by contractors on site. They can also be involved in the design of machinery, medical equipment, and vehicles where structural integrity affects functioning and safety. See glossary of structural engineering.
Structural engineering theory is based upon applied physical laws and empirical knowledge of the structural performance of different materials and geometries. Structural engineering design uses a number of relatively simple structural concepts to build complex structural systems. Structural engineers are responsible for making creative and efficient use of funds, structural elements and materials to achieve these goals.
History
Structural engineering dates back to 2700 B.C. when the step pyramid for Pharaoh Djoser was built by Imhotep, the first engineer in history known by name. Pyramids were the most common major structures built by ancient civilizations because the structural form of a pyramid is inherently stable and can be almost infinitely scaled (as opposed to most other structural forms, which cannot be linearly increased in size in proportion to increased loads).
The structural stability of the pyramid, whilst primarily gained from its shape, relies also on the strength of the stone from which it is constructed, and its ability to support the weight of the stone above it. The limestone blocks were often taken from a quarry near the building site and have a compressive strength from 30 to 250 MPa (1 MPa = 10⁶ Pa). Therefore, the structural strength of the pyramid stems from the material properties of the stones from which it was built rather than the pyramid's geometry.
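A back-of-the-envelope check of that claim (all input values below are assumptions for illustration): the compressive stress at the base of a uniform stone column of height h is ρgh, so even the weakest limestone in the range quoted above could in principle support a column far taller than any pyramid.

density = 2500.0     # kg/m^3, a typical limestone density (assumed value)
g = 9.81             # m/s^2, gravitational acceleration
strength = 30e6      # Pa, the low end of the 30-250 MPa range quoted above

max_height = strength / (density * g)   # solve rho * g * h = strength for h
print(f"{max_height:.0f} m")            # roughly 1,200 m of solid limestone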
Throughout ancient and medieval history most architectural design and construction were carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. No theory of structures existed, and understanding of how structures stood up was extremely limited, and based almost entirely on empirical evidence of 'what had worked before' and intuition. Knowledge was retained by guilds and seldom supplanted by advances. Structures were repetitive, and increases in scale were incremental.
No record exists of the first calculations of the strength of structural members or the behavior of structural material, but the profession of a structural engineer only really took shape with the Industrial Revolution and the re-invention of concrete (see History of Concrete). The physical sciences underlying structural engineering began to be understood in the Renaissance and have since developed into computer-based applications pioneered in the 1970s.
Timeline
1452–1519 Leonardo da Vinci made many contributions.
1638: Galileo Galilei published the book Two New Sciences in which he examined the failure of simple structures.
1660: Hooke's law by Robert Hooke.
1687: Isaac Newton published Philosophiæ Naturalis Principia Mathematica, which contains his laws of motion.
1750: Euler–Bernoulli beam equation.
1700–1782: Daniel Bernoulli introduced the principle of virtual work.
1707–1783: Leonhard Euler developed the theory of buckling of columns.
1826: Claude-Louis Navier published a treatise on the elastic behaviors of structures.
1873: Carlo Alberto Castigliano presented his dissertation "Intorno ai sistemi elastici", which contains his theorem for computing displacement as the partial derivative of the strain energy. This theorem includes the method of "least work" as a special case.
1874: Otto Mohr formalized the idea of a statically indeterminate structure.
1922: Timoshenko corrects the Euler–Bernoulli beam equation.
1936: Hardy Cross' publication of the moment distribution method, an important innovation in the design of continuous frames.
1941: Alexander Hrennikoff solved the discretization of plane elasticity problems using a lattice framework.
1942: Richard Courant divided a domain into finite subregions.
1956: M. J. Turner, R. W. Clough, H. C. Martin, and L. J. Topp's paper on the "Stiffness and Deflection of Complex Structures" introduces the name "finite-element method" and is widely recognized as the first comprehensive treatment of the method as it is known today.
Structural failure
The history of structural engineering contains many collapses and failures. Sometimes this is due to obvious negligence, as in the case of the Pétion-Ville school collapse, in which Rev. Fortin Augustin "constructed the building all by himself, saying he didn't need an engineer as he had good knowledge of construction" following a partial collapse of the three-story schoolhouse that sent neighbors fleeing. The final collapse killed 94 people, mostly children.
In other cases structural failures require careful study, and the results of these inquiries have resulted in improved practices and a greater understanding of the science of structural engineering. Some such studies are the result of forensic engineering investigations where the original engineer seems to have done everything in accordance with the state of the profession and acceptable practice yet a failure still eventuated. A famous case of structural knowledge and practice being advanced in this manner can be found in a series of failures involving box girders which collapsed in Australia during the 1970s.
Theory
Structural engineering depends upon a detailed knowledge of applied mechanics, materials science, and applied mathematics to understand and predict how structures support and resist self-weight and imposed loads. To apply the knowledge successfully a structural engineer generally requires detailed knowledge of relevant empirical and theoretical design codes, the techniques of structural analysis, as well as some knowledge of the corrosion resistance of the materials and structures, especially when those structures are exposed to the external environment. Since the 1990s, specialist software has become available to aid in the design of structures, with the functionality to assist in the drawing, analyzing and designing of structures with maximum precision; examples include AutoCAD, StaadPro, ETABS, Prokon, Revit Structure, Inducta RCB, etc. Such software may also take into consideration environmental loads, such as earthquakes and winds.
Profession
Structural engineers are responsible for engineering design and structural analysis. Entry-level structural engineers may design the individual structural elements of a structure, such as the beams and columns of a building. More experienced engineers may be responsible for the structural design and integrity of an entire system, such as a building.
Structural engineers often specialize in particular types of structures, such as buildings, bridges, pipelines, industrial, tunnels, vehicles, ships, aircraft, and spacecraft. Structural engineers who specialize in buildings may specialize in particular construction materials such as concrete, steel, wood, masonry, alloys and composites.
Structural engineering has existed since humans first started to construct their structures. It became a more defined and formalized profession with the emergence of architecture as a distinct profession from engineering during the industrial revolution in the late 19th century. Until then, the architect and the structural engineer were usually one and the same thing – the master builder. Only with the development of specialized knowledge of structural theories that emerged during the 19th and early 20th centuries, did the professional structural engineers come into existence.
The role of a structural engineer today involves a significant understanding of both static and dynamic loading and the structures that are available to resist them. The complexity of modern structures often requires a great deal of creativity from the engineer in order to ensure the structures support and resist the loads they are subjected to. A structural engineer will typically have a four or five-year undergraduate degree, followed by a minimum of three years of professional practice before being considered fully qualified.
Structural engineers are licensed or accredited by different learned societies and regulatory bodies around the world (for example, the Institution of Structural Engineers in the UK). Depending on the degree course they have studied and/or the jurisdiction they are seeking licensure in, they may be accredited (or licensed) as just structural engineers, or as civil engineers, or as both civil and structural engineers.
Another international organisation is IABSE(International Association for Bridge and Structural Engineering). The aim of that association is to exchange knowledge and to advance the practice of structural engineering worldwide in the service of the profession and society.
Specializations
Building structures
Structural building engineering is primarily driven by the creative manipulation of materials and forms and the underlying mathematical and scientific ideas to achieve an end that fulfills its functional requirements and is structurally safe when subjected to all the loads it could reasonably be expected to experience. This is subtly different from architectural design, which is driven by the creative manipulation of materials and forms, mass, space, volume, texture, and light to achieve an end which is aesthetic, functional, and often artistic.
The structural design for a building must ensure that the building can stand up safely, able to function without excessive deflections or movements which may cause fatigue of structural elements, cracking or failure of fixtures, fittings or partitions, or discomfort for occupants. It must account for movements and forces due to temperature, creep, cracking, and imposed loads. It must also ensure that the design is practically buildable within acceptable manufacturing tolerances of the materials. It must allow the architecture to work, and the building services to fit within the building and function (air conditioning, ventilation, smoke extract, electrics, lighting, etc.). The structural design of a modern building can be extremely complex and often requires a large team to complete.
Structural engineering specialties for buildings include:
Earthquake engineering
Façade engineering
Fire engineering
Roof engineering
Tower engineering
Wind engineering
Earthquake engineering structures
Earthquake engineering structures are those engineered to withstand earthquakes.
The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design and construct the structures to perform during an earthquake.
Earthquake-proof structures are not necessarily extremely strong, like the El Castillo pyramid at Chichen Itza.
One important tool of earthquake engineering is base isolation, which allows the base of a structure to move freely with the ground.
Civil engineering structures
Civil structural engineering includes all structural engineering related to the built environment, including structures such as bridges, dams, power stations and tunnels.
The structural engineer is the lead designer on these structures, and often the sole designer. In the design of structures such as these, structural safety is of paramount importance (in the UK, designs for dams, nuclear power stations and bridges must be signed off by a chartered engineer).
Civil engineering structures are often subjected to very extreme forces, such as large variations in temperature, dynamic loads such as waves or traffic, or high pressures from water or compressed gases. They are also often constructed in corrosive environments, such as at sea, in industrial facilities, or below ground.
Mechanical structures
The principles of structural engineering also apply to mechanical (moveable) structures. The design of moveable or moving structures must account for fatigue, variation in the method in which load is resisted, and significant deflections of structures.
The forces which parts of a machine are subjected to can vary significantly and can do so at a great rate. The forces which a boat or aircraft are subjected to vary enormously and will do so thousands of times over the structure's lifetime. The structural design must ensure that such structures can endure such loading for their entire design life without failing.
These works can require mechanical structural engineering:
Boilers and pressure vessels
Coachworks and carriages
Cranes
Elevators
Escalators
Marine vessels and hulls
Aerospace structures
Aerospace structure types include launch vehicles (Atlas, Delta, Titan), missiles (ALCM, Harpoon), hypersonic vehicles (Space Shuttle), military aircraft (F-16, F-18) and commercial aircraft (Boeing 777, MD-11). Aerospace structures typically consist of thin plates with stiffeners for the external surfaces, bulkheads and frames to support the shape, and fasteners such as welds, rivets, screws, and bolts to hold the components together.
Nanoscale structures
A nanostructure is an object of intermediate size between molecular and microscopic (micrometer-sized) structures. In describing nanostructures it is necessary to differentiate between the number of dimensions on the nanoscale. Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm. Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater. Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) often are used synonymously although UFP can reach into the micrometer range. The term 'nanostructure' is often used when referring to magnetic technology.
Structural engineering for medical science
Medical equipment (also known as armamentarium) is designed to aid in the diagnosis, monitoring or treatment of medical conditions. There are several basic types: diagnostic equipment includes medical imaging machines, used to aid in diagnosis; treatment equipment includes infusion pumps, medical lasers, and LASIK surgical machines; medical monitors allow medical staff to measure a patient's medical state. Monitors may measure patient vital signs and other parameters including ECG, EEG, blood pressure, and dissolved gases in the blood. Diagnostic medical equipment may also be used in the home for certain purposes, e.g. for the control of diabetes mellitus. A biomedical equipment technician (BMET) is a vital component of the healthcare delivery system. Employed primarily by hospitals, BMETs are the people responsible for maintaining a facility's medical equipment.
Structural elements
Any structure is essentially made up of only a small number of different types of elements:
Columns
Beams
Plates
Arches
Shells
Catenaries
Many of these elements can be classified according to form (straight, plane / curve) and dimensionality (one-dimensional / two-dimensional), as described in the following subsections.
Columns
Columns are elements that carry only axial force (compression) or both axial force and bending (which is technically called a beam-column but practically, just a column). The design of a column must check the axial capacity of the element and the buckling capacity.
The buckling capacity is the capacity of the element to withstand the propensity to buckle. Its capacity depends upon its geometry, material, and the effective length of the column, which depends upon the restraint conditions at the top and bottom of the column. The effective length is K·l, where l is the real length of the column and K is a factor dependent on the restraint conditions.
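As a worked illustration of the effective-length relation, the sketch below applies Euler's classical buckling formula, P_cr = pi^2·E·I / (K·l)^2. The column length, section property, and restraint factor are hypothetical values chosen only for the example.

```python
import math

def euler_buckling_load(E, I, L, K):
    """Critical axial load P_cr = pi^2 * E * I / (K * L)^2 (Euler's formula).

    E: Young's modulus (Pa), I: second moment of area (m^4),
    L: real column length (m), K: effective-length factor from the
    restraint conditions (e.g. ~1.0 pinned-pinned, ~0.5 fixed-fixed,
    ~2.0 fixed-free).
    """
    effective_length = K * L
    return math.pi ** 2 * E * I / effective_length ** 2

# Hypothetical 3 m pinned-pinned steel column (E = 200 GPa) with
# I = 8.0e-6 m^4; illustrative numbers, not values from the article.
P_cr = euler_buckling_load(E=200e9, I=8.0e-6, L=3.0, K=1.0)
print(f"Critical buckling load: {P_cr / 1e3:.0f} kN")  # ~1754 kN
```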
The capacity of a column to carry axial load depends on the degree of bending it is subjected to, and vice versa. This is represented on an interaction chart and is a complex non-linear relationship.
Beams
A beam may be defined as an element in which one dimension is much greater than the other two and the applied loads are usually normal to the main axis of the element. Beams and columns are called line elements and are often represented by simple lines in structural modeling. Beams may be:
cantilevered (supported at one end only with a fixed connection)
simply supported (fixed against vertical translation at each end and horizontal translation at one end only, and able to rotate at the supports)
fixed (supported in all directions for translation and rotation at each end)
continuous (supported by three or more supports)
a combination of the above (e.g. supported at one end and in the middle)
Beams are elements that carry pure bending only. Bending causes one part of the section of a beam (divided along its length) to go into compression and the other part into tension. The compression part must be designed to resist buckling and crushing, while the tension part must be able to adequately resist the tension.
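A quick numerical check makes the compression/tension split concrete. The sketch below uses two textbook relations: the peak moment of a uniformly loaded, simply supported beam (M = w·L^2/8) and the extreme-fibre bending stress (sigma = M·c/I). The span, load, and section values are hypothetical.

```python
def max_moment_simply_supported(w, L):
    """Peak bending moment for a uniformly loaded, simply supported beam:
    M_max = w * L^2 / 8 (w in N/m, L in m)."""
    return w * L ** 2 / 8

def bending_stress(M, c, I):
    """Extreme-fibre bending stress sigma = M * c / I; compression on one
    side of the neutral axis, tension on the other."""
    return M * c / I

# Hypothetical beam: 5 m span, 10 kN/m load, 300 mm deep rectangular
# section with I = 4.5e-4 m^4 (illustrative values only).
M = max_moment_simply_supported(w=10e3, L=5.0)   # 31.25 kN*m
sigma = bending_stress(M, c=0.15, I=4.5e-4)      # ~10.4 MPa
print(f"M_max = {M / 1e3:.2f} kN*m, sigma = {sigma / 1e6:.1f} MPa")
```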
Trusses
A truss is a structure comprising members and connection points or nodes. When members are connected at nodes and forces are applied at nodes, members can act in tension or compression. Members acting in compression are referred to as compression members or struts, while members acting in tension are referred to as tension members or ties. Most trusses use gusset plates to connect intersecting elements. Gusset plates are relatively flexible and unable to transfer bending moments. The connection is usually arranged so that the lines of force in the members are coincident at the joint, thus allowing the truss members to act in pure tension or compression.
Trusses are usually used in large-span structures, where it would be uneconomical to use solid beams.
Plates
Plates carry bending in two directions. A concrete flat slab is an example of a plate. Plates are understood by using continuum mechanics, but due to the complexity involved they are most often designed using a codified empirical approach, or computer analysis.
They can also be designed with yield line theory, where an assumed collapse mechanism is analyzed to give an upper bound on the collapse load. This technique is used in practice but because the method provides an upper-bound (i.e. an unsafe prediction of the collapse load) for poorly conceived collapse mechanisms, great care is needed to ensure that the assumed collapse mechanism is realistic.
Shells
Shells derive their strength from their form and carry forces in compression in two directions. A dome is an example of a shell. Shells can be designed by making a hanging-chain model, which will act as a catenary in pure tension, and inverting the form to achieve pure compression.
Arches
Arches carry forces in compression in one direction only, which is why it is appropriate to build arches out of masonry. They are designed by ensuring that the line of thrust of the force remains within the depth of the arch.
Catenaries
Catenaries derive their strength from their form and carry transverse forces in pure tension by deflecting (just as a tightrope will sag when someone walks on it). They are almost always cable or fabric structures. A fabric structure acts as a catenary in two directions.
Materials
Structural engineering depends on the knowledge of materials and their properties, in order to understand how different materials support and resist loads. It also involves knowledge of corrosion engineering to avoid, for example, galvanic coupling of dissimilar materials.
Common structural materials are:
Iron: wrought iron, cast iron
Concrete: reinforced concrete, prestressed concrete
Alloy: steel, stainless steel
Masonry
Timber: hardwood, softwood
Aluminium
Composite materials: plywood
Other structural materials: adobe, bamboo, carbon fibre, fiber reinforced plastic, mudbrick, roofing materials
| Technology | Disciplines | null |
45831 | https://en.wikipedia.org/wiki/Tetanus | Tetanus | Tetanus, also known as lockjaw, is a bacterial infection caused by Clostridium tetani and characterized by muscle spasms. In the most common type, the spasms begin in the jaw and then progress to the rest of the body. Each spasm usually lasts for a few minutes. Spasms occur frequently for three to four weeks. Some spasms may be severe enough to fracture bones. Other symptoms of tetanus may include fever, sweating, headache, trouble swallowing, high blood pressure, and a fast heart rate. The onset of symptoms is typically 3 to 21 days following infection. Recovery may take months; about 10% of cases prove to be fatal.
C. tetani is commonly found in soil, saliva, dust, and manure. The bacteria generally enter through a break in the skin, such as a cut or puncture wound caused by a contaminated object. They produce toxins that interfere with normal muscle contractions. Diagnosis is based on the presenting signs and symptoms. The disease does not spread between people.
Tetanus can be prevented by immunization with the tetanus vaccine. In those who have a significant wound and have had fewer than three doses of the vaccine, both vaccination and tetanus immune globulin are recommended. The wound should be cleaned, and any dead tissue should be removed. In those who are infected, tetanus immune globulin, or, if unavailable, intravenous immunoglobulin (IVIG) is used. Muscle relaxants may be used to control spasms. Mechanical ventilation may be required if a person's breathing is affected.
Tetanus occurs in all parts of the world but is most frequent in hot and wet climates where the soil has a high organic content. In 2015, there were about 209,000 infections and about 59,000 deaths globally. This is down from 356,000 deaths in 1990. In the US, there are about 30 cases per year, almost all of which were in people who had not been vaccinated. An early description of the disease was made by Hippocrates in the 5th century BC. The cause of the disease was determined in 1884 by Antonio Carle and Giorgio Rattone at the University of Turin, and a vaccine was developed in 1924.
Signs and symptoms
Tetanus often begins with mild spasms in the jaw muscles—also known as lockjaw. Similar spasms can also be a feature of trismus. The spasms can also affect the facial muscles, resulting in an appearance called risus sardonicus. Chest, neck, back, abdominal muscles, and buttocks may be affected. Back muscle spasms often cause arching, called opisthotonus. Sometimes, the spasms affect muscles utilized during inhalation and exhalation, which can lead to breathing problems.
Prolonged muscular action causes sudden, powerful, and painful contractions of muscle groups, called tetany. These episodes can cause fractures and muscle tears. Other symptoms include fever, headache, restlessness, irritability, feeding difficulties, breathing problems, burning sensation during urination, urinary retention, and loss of stool control.
Even with treatment, about 10% of people who contract tetanus die. The mortality rate is higher in unvaccinated individuals, and in people over 60 years of age.
Incubation period
The incubation period of tetanus may be up to several months but is usually about ten days. In general, the farther the injury site is from the central nervous system, the longer the incubation period. However, shorter incubation periods are associated with more severe symptoms. In trismus nascentium (i.e. neonatal tetanus), symptoms usually appear from 4 to 14 days after birth, averaging about 7 days. On the basis of clinical findings, four different forms of tetanus have been described.
Generalized tetanus
Generalized tetanus is the most common type of tetanus, representing about 80% of cases. The generalized form usually presents with a descending pattern. The first sign is trismus or lockjaw, then facial spasms (called risus sardonicus), followed by stiffness of the neck, difficulty in swallowing, and rigidity of pectoral and calf muscles. Other symptoms include elevated temperature, sweating, elevated blood pressure, and episodic rapid heart rate. Spasms may occur frequently and last for several minutes, with the body shaped into a characteristic form called opisthotonos. Spasms continue for up to four weeks, and complete recovery may take months.
Neonatal tetanus
Neonatal tetanus (trismus nascentium) is a form of generalized tetanus that occurs in newborns, usually those born to mothers who themselves have not been vaccinated. If the mother has been vaccinated against tetanus, the infants acquire passive immunity, and are thus protected. It usually occurs through infection of the unhealed umbilical stump, particularly when the stump is cut with a non-sterile instrument. As of 1998, neonatal tetanus was common in many developing countries, and was responsible for about 14% (215,000) of all neonatal deaths. In 2010, the worldwide death toll was approximately 58,000 newborns. As the result of a public health campaign, the death toll from neonatal tetanus was reduced by 90% between 1990 and 2010, and by 2013, the disease had been largely eliminated from all but 25 countries. Neonatal tetanus is rare in developed countries.
Local tetanus
Local tetanus is an uncommon form of the disease, in which people have persistent contraction of muscles in the same anatomic area as the injury. The contractions may persist for many weeks before gradually subsiding. Local tetanus is generally milder; only about 1% of cases are fatal, but it may precede the onset of generalized tetanus.
Cephalic tetanus
Cephalic tetanus is the rarest form of the disease (0.9–3% of cases), and is limited to muscles and nerves in the head. It usually occurs after trauma to the head area, including skull fracture, laceration, eye injury, dental extraction, and otitis media, but it has been observed from injuries to other parts of the body. Paralysis of the facial nerve is most frequently implicated, which may cause lockjaw, facial palsy, or ptosis, but other cranial nerves can also be affected. Cephalic tetanus may progress to a more generalized form of the disease. Due to its rarity, clinicians may be unfamiliar with the clinical presentation and may not suspect tetanus as the illness. Treatment can be complicated, as symptoms may be concurrent with the initial injury that caused the infection. Cephalic tetanus is more likely than other forms of tetanus to be fatal, with the progression to generalized tetanus carrying a 15–30% case fatality rate.
Cause
Tetanus is caused by the tetanus bacterium, Clostridium tetani. The disease is an international health problem, as C. tetani endospores are ubiquitous. Endospores can be introduced into the body through a puncture wound (penetrating trauma). Because C. tetani is an anaerobic bacterium, it and its endospores thrive in environments that lack oxygen, such as a puncture wound. Once oxygen levels fall, the characteristic drumstick-shaped endospore can germinate and the infection can spread quickly.
The disease occurs almost exclusively in people who are inadequately immunized. It is more common in hot, damp climates with soil rich in organic matter. Manure-treated soils may contain spores, as they are widely distributed in the intestines and feces of many animals, such as horses, sheep, cattle, dogs, cats, rats, guinea pigs, and chickens. In agricultural areas, a significant number of human adults may harbor the organism.
The spores can also be found on skin surfaces and in contaminated heroin. Rarely, tetanus can be contracted through surgical procedures, intramuscular injections, compound fractures, and dental infections. Animal bites can transmit tetanus.
Tetanus is often associated with rust, especially rusty nails. Although rust itself does not cause tetanus, objects that accumulate rust are often found outdoors or in places that harbor soil bacteria. Additionally, the rough surface of rusty metal provides crevices for dirt containing C. tetani, while a nail affords a means to puncture the skin and deliver endospores deep within the body at the site of the wound. An endospore is a non-metabolizing survival structure that begins to metabolize and cause infection once in an adequate environment. Hence, stepping on a nail (rusty or not) may result in a tetanus infection, as the low-oxygen (anaerobic) environment may exist under the skin, and the puncturing object can deliver endospores to a suitable environment for growth. It is a common misconception that rust itself is the cause; a related misconception is that a puncture from a rust-free nail is not a risk.
Pathophysiology
Tetanus neurotoxin (TeNT) binds to the presynaptic membrane of the neuromuscular junction, is internalized, and is transported back through the axon until it reaches the central nervous system. Here, it selectively binds to and is transported into inhibitory neurons via endocytosis. It then leaves the vesicle for the neuron cytosol, where it cleaves vesicle-associated membrane protein (VAMP) synaptobrevin, which is necessary for membrane fusion of small synaptic vesicles (SSVs). SSVs carry neurotransmitter to the membrane for release, so inhibition of this process blocks neurotransmitter release.
Tetanus toxin specifically blocks the release of the neurotransmitters GABA and glycine from inhibitory neurons. These neurotransmitters keep overactive motor neurons from firing and also play a role in the relaxation of muscles after contraction. When inhibitory neurons are unable to release their neurotransmitters, motor neurons fire out of control, and muscles have difficulty relaxing. This causes the muscle spasms and spastic paralysis seen in tetanus infection.
The tetanus toxin, tetanospasmin, is made up of a heavy chain and a light chain. There are three domains, each of which contributes to the pathophysiology of the toxin. The heavy chain has two of the domains. The N-terminal side of the heavy chain helps with membrane translocation, and the C-terminal side helps the toxin locate the specific receptor site on the correct neuron. The light chain domain cleaves the VAMP protein once it arrives in the inhibitory neuron cytosol.
There are four main steps in tetanus's mechanism of action: binding to the neuron, internalization of the toxin, membrane translocation, and cleavage of the target VAMP.
Neurospecific binding
The toxin travels from the wound site to the neuromuscular junction through the bloodstream, where it binds to the presynaptic membrane of a motor neuron. The heavy chain C-terminal domain aids in binding to the correct site, recognizing and binding to the correct glycoproteins and glycolipids in the presynaptic membrane. The toxin binds to a site that will be taken into the neuron as an endocytic vesicle that will travel down the axon, past the cell body, and down the dendrites to the dendritic terminal at the spine and central nervous system. Here, it will be released into the synaptic cleft, and allowed to bind with the presynaptic membrane of inhibitory neurons in a similar manner seen with the binding to the motor neuron.
Internalization
Tetanus toxin is then internalized again via endocytosis, this time, in an acidic vesicle. In a mechanism not well understood, depolarization caused by the firing of the inhibitory neuron causes the toxin to be pulled into the neuron inside vesicles.
Membrane translocation
The toxin then needs a way to get out of the vesicle and into the neuron cytosol for it to act on its target. The low pH of the vesicle lumen causes a conformational change in the toxin, shifting it from a water-soluble form to a hydrophobic form. With the hydrophobic patches exposed, the toxin can slide into the vesicle membrane. The toxin forms an ion channel in the membrane that is nonspecific for Na+, K+, Ca2+, and Cl− ions. There is a consensus among experts that this new channel is involved in the translocation of the toxin's light chain from the inside of the vesicle to the neuron cytosol, but the mechanism is not well understood or agreed upon. It has been proposed that the channel could allow the light chain (unfolded from the low pH environment) to leave through the toxin pore, or that the pore could alter the electrochemical gradient enough, by letting in or out ions, to cause osmotic lysis of the vesicle, spilling the vesicle's contents.
Enzymatic target cleavage
The light chain of the tetanus toxin is a zinc-dependent protease. It shares a common zinc protease motif (His-Glu-Xaa-Xaa-His) that researchers hypothesized was essential for target cleavage, a hypothesis later confirmed by experiment: when all zinc was removed from the neuron with heavy-metal chelators, the toxin was inhibited, and it was reactivated when the zinc was added back. The light chain binds to VAMP and cleaves it between Gln76 and Phe77. Without VAMP, vesicles holding the neurotransmitters needed for motor neuron regulation (GABA and glycine) cannot be released, causing the above-mentioned deregulation of motor neurons and muscle tension.
Diagnosis
There are currently no blood tests for diagnosing tetanus. The diagnosis is based on the presentation of tetanus symptoms and does not depend upon isolation of the bacterium, which is recovered from the wound in only 30% of cases and can be isolated from people without tetanus. Laboratory identification of C. tetani can be demonstrated only by the production of tetanospasmin in mice. Having recently experienced head trauma may indicate cephalic tetanus if no other diagnosis has been made.
The "spatula test" is a clinical test for tetanus that involves touching the posterior pharyngeal wall with a soft-tipped instrument and observing the effect. A positive test result is the involuntary contraction of the jaw (biting down on the "spatula"), and a negative test result would normally be a gag reflex attempting to expel the foreign object. A short report in The American Journal of Tropical Medicine and Hygiene states that, in an affected subject research study, the spatula test had a high specificity (zero false-positive test results) and a high sensitivity (94% of infected people produced a positive test).
Prevention
Unlike many infectious diseases, recovery from naturally acquired tetanus does not usually result in immunity. This is due to the extreme potency of the tetanospasmin toxin. Tetanospasmin will likely be lethal before it will provoke an immune response.
Tetanus can be prevented by vaccination with tetanus toxoid. The CDC recommends that adults receive a booster vaccine every ten years, and standard care practice in many places is to give the booster to any person with a puncture wound who is uncertain of when they were last vaccinated, or if they have had fewer than three lifetime doses of the vaccine. The booster may not prevent a potentially fatal case of tetanus from the current wound, however, as it can take up to two weeks for tetanus antibodies to form.
In children under the age of seven, the tetanus vaccine is often administered as a combined vaccine, DPT/DTaP vaccine, which also includes vaccines against diphtheria and pertussis. For adults and children over seven, the Td vaccine (tetanus and diphtheria) or Tdap (tetanus, diphtheria, and acellular pertussis) is commonly used.
The World Health Organization certifies countries as having eliminated maternal or neonatal tetanus. Certification requires at least two years of rates of less than 1 case per 1,000 live births. In 1998 in Uganda, 3,433 tetanus cases were recorded in newborn babies; of these, 2,403 died. After a major public health effort, Uganda was certified as having eliminated maternal and neonatal tetanus in 2011.
Post-exposure prophylaxis
Tetanus toxoid can be given in case of suspected exposure to tetanus. In such cases, it can be given with or without tetanus immunoglobulin (also called tetanus antibodies or tetanus antitoxin). It can be given as intravenous therapy or by intramuscular injection.
The guidelines for such events in the United States for people at least 11 years old (and not pregnant) are as follows:
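The guideline table itself is not reproduced here. As a hedged sketch only, the widely published decision logic can be expressed as follows; the function name and return format are invented, the thresholds are the commonly cited ones rather than a quotation of this article, and current CDC guidance should be consulted for actual care.

```python
def us_tetanus_prophylaxis(doses, years_since_last, clean_minor_wound):
    """Sketch of the commonly cited US decision logic for tetanus
    post-exposure prophylaxis (illustrative, not medical advice).

    doses: lifetime tetanus toxoid doses (None if unknown)
    years_since_last: years since the most recent dose
    clean_minor_wound: True for clean, minor wounds; False otherwise
    """
    if doses is None or doses < 3:
        # Under-vaccinated or unknown history: vaccinate; add tetanus
        # immune globulin (TIG) only for wounds that are not clean/minor.
        return {"vaccine": True, "tig": not clean_minor_wound}
    # Three or more prior doses: vaccine only, and only if the last dose
    # is old enough (10 years for clean minor wounds, 5 years otherwise).
    threshold = 10 if clean_minor_wound else 5
    return {"vaccine": years_since_last >= threshold, "tig": False}

print(us_tetanus_prophylaxis(doses=None, years_since_last=0,
                             clean_minor_wound=False))
# {'vaccine': True, 'tig': True}
```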
Treatment
Mild tetanus
Mild cases of tetanus can be treated with:
Tetanus immunoglobulin (TIG), also called tetanus antibodies or tetanus antitoxin. It can be given as intravenous therapy or by intramuscular injection.
Antibiotic therapy to reduce toxin production. Metronidazole intravenous (IV) is a preferred treatment.
Benzodiazepines can be used to control muscle spasms. Options include diazepam and lorazepam, oral or IV.
Severe tetanus
Severe cases will require admission to intensive care. In addition to the measures listed above for mild tetanus:
Human tetanus immunoglobulin injected intrathecally (which increases clinical improvement from 4% to 35%).
Tracheotomy and mechanical ventilation for 3 to 4 weeks. Tracheotomy is recommended for securing the airway, because the presence of an endotracheal tube is a stimulus for spasm.
Magnesium sulfate, as an intravenous infusion, to control spasm and autonomic dysfunction.
Diazepam as a continuous IV infusion.
The autonomic effects of tetanus can be difficult to manage (alternating hyper- and hypotension, hyperpyrexia or hypothermia) and may require IV labetalol, magnesium, clonidine, or nifedipine.
Drugs, such as diazepam or other muscle relaxants, can be given to control the muscle spasms. In extreme cases, it may be necessary to paralyze the person with curare-like drugs, and use a mechanical ventilator.
To survive a tetanus infection, the maintenance of an airway and proper nutrition are required. A high caloric intake with at least 150 g of protein per day is often given in liquid form through a tube directly into the stomach (percutaneous endoscopic gastrostomy) or through a drip into a vein (parenteral nutrition). This high-caloric diet is required because of the increased metabolic strain brought on by the increased muscle activity. Full recovery takes 4 to 6 weeks because the body must regenerate destroyed nerve axon terminals.
The antibiotic of choice is metronidazole, which can be given intravenously, by mouth, or by rectum. Penicillin is similarly effective, but some raise the concern that it may provoke spasms because it inhibits GABA receptors, which are already affected by tetanospasmin.
Epidemiology
In 2013, tetanus caused about 59,000 deaths—down from 356,000 in 1990. Tetanus, notably the neonatal form, remains a significant public health problem in non-industrialized countries, with 59,000 newborns dying worldwide in 2008 as a result of neonatal tetanus. In the United States, from 2000 through 2007, an average of 31 cases were reported per year. Nearly all of the cases in the United States occur in unimmunized individuals or individuals who have allowed their inoculations to lapse.
In animals
Tetanus is found primarily in goats and sheep. Clinical signs in affected goats and sheep include an extended head and neck, tail rigor (the tail becomes rigid and straight), an abnormal, stiff gait, an arched back, stiffness of the jaw muscles, lockjaw, twitching of the eyes, drooping eyelids, difficulty swallowing, difficulty or inability to eat and drink, abdominal bloat, and spasms (uncontrolled muscular contractions) before death.
Death is sometimes due to asphyxiation, secondary to respiratory paralysis.
History
Tetanus was well known to ancient civilizations, who recognized the relationship between wounds and fatal muscle spasms. In 1884, Arthur Nicolaier isolated the strychnine-like toxin of tetanus from free-living, anaerobic soil bacteria. The etiology of the disease was further elucidated in 1884 by Antonio Carle and Giorgio Rattone, two pathologists of the University of Turin, who demonstrated the transmissibility of tetanus for the first time. They produced tetanus in rabbits by injecting pus from a person with fatal tetanus into their sciatic nerves, and testing their reactions while tetanus was spreading.
In 1891, C. tetani was isolated from a human victim by Kitasato Shibasaburō, who later showed that the organism could produce disease when injected into animals and that the toxin could be neutralized by specific antibodies. In 1897, Edmond Nocard showed that tetanus antitoxin induced passive immunity in humans, and could be used for prophylaxis and treatment. Tetanus toxoid vaccine was developed by P. Descombey in 1924, and was widely used to prevent tetanus induced by battle wounds during World War II.
Etymology
The word tetanus comes from the Ancient Greek τέτανος (tetanos, 'taut'), which is further derived from τείνειν (teinein, 'to stretch').
Research
There is insufficient evidence that tetanus can be treated or prevented by vitamin C, at least in part because the historical trials that looked for a possible benefit of vitamin C in tetanus patients were of poor quality.
| Biology and health sciences | Infectious disease | null |
45871 | https://en.wikipedia.org/wiki/Loudspeaker | Loudspeaker | A loudspeaker (commonly referred to as a speaker or, more fully, a speaker system) is a combination of one or more speaker drivers, an enclosure, and electrical connections (possibly including a crossover network). The speaker driver is an electroacoustic transducer that converts an electrical audio signal into a corresponding sound.
The driver is a linear motor connected to a diaphragm, which transmits the motor's movement to produce sound by moving air. An audio signal, typically originating from a microphone, recording, or radio broadcast, is electronically amplified to a power level sufficient to drive the motor, reproducing the sound corresponding to the original unamplified signal. This process functions as the inverse of a microphone. In fact, the dynamic speaker driver—the most common type—shares the same basic configuration as a dynamic microphone, which operates in reverse as a generator.
The dynamic speaker was invented in 1925 by Edward W. Kellogg and Chester W. Rice. When the electrical current from an audio signal passes through its voice coil—a coil of wire capable of moving axially in a cylindrical gap containing a concentrated magnetic field produced by a permanent magnet—the coil is forced to move rapidly back and forth by the magnetic force on a current-carrying conductor. The coil is attached to a diaphragm or speaker cone (usually conically shaped for sturdiness) in contact with air, thus creating sound waves. In addition to dynamic speakers, several other technologies can create sound from an electrical signal, a few of which are in commercial use.
For a speaker to efficiently produce sound, especially at lower frequencies, the speaker driver must be baffled so that the sound emanating from its rear does not cancel out the (intended) sound from the front; this generally takes the form of a speaker enclosure or speaker cabinet, an often rectangular box made of wood, but sometimes metal or plastic. The enclosure's design plays an important acoustic role thus determining the resulting sound quality. Most high fidelity speaker systems include two or more sorts of speaker drivers, each specialized in one part of the audible frequency range. The smaller drivers capable of reproducing the highest audio frequencies are called tweeters, those for middle frequencies are called mid-range drivers and those for low frequencies are called woofers. Sometimes the reproduction of the very lowest frequencies (20–~50 Hz) is augmented by a so-called subwoofer often in its own (large) enclosure. In a two-way or three-way speaker system (one with drivers covering two or three different frequency ranges) there is a small amount of passive electronics called a crossover network which helps direct components of the electronic signal to the speaker drivers best capable of reproducing those frequencies. In a so-called powered speaker system, the power amplifier actually feeding the speaker drivers is built into the enclosure itself; these have become more and more common especially as computer speakers.
Smaller speakers are found in devices such as radios, televisions, portable audio players, personal computers (computer speakers), headphones, and earphones. Larger, louder speaker systems are used for home hi-fi systems (stereos), electronic musical instruments, sound reinforcement in theaters and concert halls, and in public address systems.
Terminology
The term loudspeaker may refer to individual transducers (also known as drivers) or to complete speaker systems consisting of an enclosure and one or more drivers.
To adequately and accurately reproduce a wide range of frequencies with even coverage, most loudspeaker systems employ more than one driver, particularly for higher sound pressure level (SPL) or maximum accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers (for very low frequencies); woofers (low frequencies); mid-range speakers (middle frequencies); tweeters (high frequencies); and sometimes supertweeters, for the highest audible frequencies and beyond. The terms for different speaker drivers differ, depending on the application. In two-way systems there is no mid-range driver, so the task of reproducing the mid-range sounds is divided between the woofer and tweeter. When multiple drivers are used in a system, a filter network, called an audio crossover, separates the incoming signal into different frequency ranges and routes them to the appropriate driver. A loudspeaker system with n separate frequency bands is described as n-way speakers: a two-way system will have a woofer and a tweeter; a three-way system employs a woofer, a mid-range, and a tweeter. Loudspeaker drivers of the type pictured are termed dynamic (short for electrodynamic) to distinguish them from other sorts including moving iron speakers, and speakers using piezoelectric or electrostatic systems.
History
Johann Philipp Reis installed an electric loudspeaker in his telephone in 1861; it was capable of reproducing clear tones, but later revisions could also reproduce muffled speech. Alexander Graham Bell patented his first electric loudspeaker (a moving iron type capable of reproducing intelligible speech) as part of his telephone in 1876, which was followed in 1877 by an improved version from Ernst Siemens. During this time, Thomas Edison was issued a British patent for a system using compressed air as an amplifying mechanism for his early cylinder phonographs, but he ultimately settled for the familiar metal horn driven by a membrane attached to the stylus. In 1898, Horace Short patented a design for a loudspeaker driven by compressed air; he then sold the rights to Charles Parsons, who was issued several additional British patents before 1910. A few companies, including the Victor Talking Machine Company and Pathé, produced record players using compressed-air loudspeakers. Compressed-air designs are significantly limited by their poor sound quality and their inability to reproduce sound at low volume. Variants of the design were used for public address applications, and more recently, other variations have been used to test space-equipment resistance to the very loud sound and vibration levels that the launching of rockets produces.
Moving-coil
The first experimental moving-coil (also called dynamic) loudspeaker was invented by Oliver Lodge in 1898. The first practical moving-coil loudspeakers were manufactured by Danish engineer Peter L. Jensen and Edwin Pridham in 1915, in Napa, California. Like previous loudspeakers these used horns to amplify the sound produced by a small diaphragm. Jensen was denied patents. Being unsuccessful in selling their product to telephone companies, in 1915 they changed their target market to radios and public address systems, and named their product Magnavox. Jensen was, for years after the invention of the loudspeaker, a part owner of The Magnavox Company.
The moving-coil principle commonly used today in speakers was patented in 1925 by Edward W. Kellogg and Chester W. Rice. The key difference between previous attempts and the patent by Rice and Kellogg is the adjustment of mechanical parameters to provide a reasonably flat frequency response.
These first loudspeakers used electromagnets, because large, powerful permanent magnets were generally not available at a reasonable price. The coil of an electromagnet, called a field coil, was energized by a current through a second pair of connections to the driver. This winding usually served a dual role, acting also as a choke coil, filtering the power supply of the amplifier that the loudspeaker was connected to. AC ripple in the current was attenuated by the action of passing through the choke coil. However, AC line frequencies tended to modulate the audio signal going to the voice coil and added to the audible hum. In 1930 Jensen introduced the first commercial fixed-magnet loudspeaker; however, the large, heavy iron magnets of the day were impractical and field-coil speakers remained predominant until the widespread availability of lightweight alnico magnets after World War II.
First loudspeaker systems
In the 1930s, loudspeaker manufacturers began to combine two and three drivers or sets of drivers each optimized for a different frequency range in order to improve frequency response and increase sound pressure level. In 1937, the first film industry-standard loudspeaker system, "The Shearer Horn System for Theatres", a two-way system, was introduced by Metro-Goldwyn-Mayer. It used four 15" low-frequency drivers, a crossover network set for 375 Hz, and a single multi-cellular horn with two compression drivers providing the high frequencies. John Kenneth Hilliard, James Bullough Lansing, and Douglas Shearer all played roles in creating the system. At the 1939 New York World's Fair, a very large two-way public address system was mounted on a tower at Flushing Meadows. The eight 27" low-frequency drivers were designed by Rudy Bozak in his role as chief engineer for Cinaudagraph. High-frequency drivers were likely made by Western Electric.
Altec Lansing introduced the 604, which became their most famous coaxial Duplex driver, in 1943. It incorporated a high-frequency horn that sent sound through a hole in the pole piece of a 15-inch woofer for near-point-source performance. Altec's "Voice of the Theatre" loudspeaker system was first sold in 1945, offering better coherence and clarity at the high output levels necessary in movie theaters. The Academy of Motion Picture Arts and Sciences immediately began testing its sonic characteristics; they made it the film house industry standard in 1955.
In 1954, Edgar Villchur developed the acoustic suspension principle of loudspeaker design. This allowed for better bass response than previously obtainable from drivers mounted in larger cabinets. He and his partner Henry Kloss formed the Acoustic Research company to manufacture and market speaker systems using this principle. Subsequently, continuous developments in enclosure design and materials led to significant audible improvements.
The most notable improvements to date in modern dynamic drivers, and the loudspeakers that employ them, are improvements in cone materials, the introduction of higher-temperature adhesives, improved permanent magnet materials, improved measurement techniques, computer-aided design, and finite element analysis. At low frequencies, Thiele/Small parameters electrical network theory has been used to optimize bass driver and enclosure synergy since the early 1970s.
Driver design: dynamic loudspeakers
Speaker systems
Speaker system design involves subjective perceptions of timbre and sound quality, measurements and experiments. Adjusting a design to improve performance is done using a combination of magnetic, acoustic, mechanical, electrical, and materials science theory, and tracked with high-precision measurements and the observations of experienced listeners. A few of the issues speaker and driver designers must confront are distortion, acoustic lobing, phase effects, off-axis response, and crossover artifacts. Designers can use an anechoic chamber to ensure the speaker can be measured independently of room effects, or any of several electronic techniques that, to some extent, substitute for such chambers. Some developers eschew anechoic chambers in favor of specific standardized room setups intended to simulate real-life listening conditions.
Individual electrodynamic drivers provide their best performance within a limited frequency range. Multiple drivers (e.g. subwoofers, woofers, mid-range drivers, and tweeters) are generally combined into a complete loudspeaker system to provide performance beyond that constraint. The three most commonly used sound radiation systems are the cone, dome and horn-type drivers.
Full-range drivers
A full- or wide-range driver is a speaker driver designed to be used alone to reproduce an audio channel without the help of other drivers, and therefore must cover the audio frequency range required by the application. These drivers are typically small in diameter to permit reasonable high-frequency response, and carefully designed to give low-distortion output at low frequencies, though with reduced maximum output level. Full-range drivers are found, for instance, in public address systems, in televisions, small radios, intercoms, and some computer speakers.
In hi-fi speaker systems, the use of wide-range drivers can avoid undesirable interactions between multiple drivers caused by non-coincident driver location or crossover network issues, but may also limit frequency response and output abilities (most especially at low frequencies). Hi-fi speaker systems built with wide-range drivers may require large, elaborate, or expensive enclosures to approach optimum performance.
Full-range drivers often employ an additional cone called a whizzer: a small, light cone attached to the joint between the voice coil and the primary cone. The whizzer cone extends the high-frequency response of the driver and broadens its high-frequency directivity, which would otherwise be greatly narrowed due to the outer diameter cone material failing to keep up with the central voice coil at higher frequencies. The main cone in a whizzer design is manufactured so as to flex more in the outer diameter than in the center. The result is that the main cone delivers low frequencies and the whizzer cone contributes most of the higher frequencies. Since the whizzer cone is smaller than the main diaphragm, output dispersion at high frequencies is improved relative to an equivalent single larger diaphragm.
Limited-range drivers, also used alone, are typically found in computers, toys, and clock radios. These drivers are less elaborate and less expensive than wide-range drivers, and they may be severely compromised to fit into very small mounting locations. In these applications, sound quality is a low priority.
Subwoofer
A subwoofer is a woofer driver used only for the lowest-pitched part of the audio spectrum: typically below 200 Hz for consumer systems, below 100 Hz for professional live sound, and below 80 Hz in THX-approved systems. Because the intended range of frequencies is limited, subwoofer system design is usually simpler in many respects than for conventional loudspeakers, often consisting of a single driver enclosed in a suitable enclosure. Since sound in this frequency range can easily bend around corners by diffraction, the speaker aperture does not have to face the audience, and subwoofers can be mounted in the bottom of the enclosure, facing the floor. This is eased by the limitations of human hearing at low frequencies; such sounds cannot be located in space, due to their large wavelengths compared to higher frequencies, which produce differential effects in the ears due to shadowing by the head and diffraction around it, both of which we rely upon for localization cues.
To accurately reproduce very low bass notes, subwoofer systems must be solidly constructed and properly braced to avoid unwanted sounds from cabinet vibrations. As a result, good subwoofers are typically quite heavy. Many subwoofer systems include integrated power amplifiers and electronic subsonic-filters, with additional controls relevant to low-frequency reproduction (e.g. a crossover knob and a phase switch). These variants are known as active or powered subwoofers. In contrast, passive subwoofers require external amplification.
In typical installations, subwoofers are physically separated from the rest of the speaker cabinets. Because of propagation delay and positioning, their output may be out of phase with the rest of the sound. Consequently, a subwoofer's power amp often has a phase-delay adjustment which may be used to improve performance of the system as a whole. Subwoofers are widely used in large concert and mid-sized venue sound reinforcement systems. Subwoofer cabinets are often built with a bass reflex port, a design feature which, if properly engineered, improves bass performance and increases efficiency.
Woofer
A woofer is a driver that reproduces low frequencies. The driver works with the characteristics of the speaker enclosure to produce suitable low frequencies. Some loudspeaker systems use a woofer for the lowest frequencies, sometimes well enough that a subwoofer is not needed. Additionally, some loudspeakers use the woofer to handle middle frequencies, eliminating the mid-range driver.
Mid-range driver
A mid-range speaker is a loudspeaker driver that reproduces a band of frequencies generally between 1–6 kHz, otherwise known as the mid frequencies (between the woofer and tweeter). Mid-range driver diaphragms can be made of paper or composite materials and can be direct radiation drivers (rather like smaller woofers) or they can be compression drivers (rather like some tweeter designs). If the mid-range driver is a direct radiator, it can be mounted on the front baffle of a loudspeaker enclosure, or, if a compression driver, mounted at the throat of a horn for added output level and control of radiation pattern.
Tweeter
A tweeter is a high-frequency driver that reproduces the highest frequencies in a speaker system. A major problem in tweeter design is achieving wide angular sound coverage (off-axis response), since high-frequency sound tends to leave the speaker in narrow beams. Soft-dome tweeters are widely found in home stereo systems, and horn-loaded compression drivers are common in professional sound reinforcement. Ribbon tweeters have gained popularity as the output power of some designs has been increased to levels useful for professional sound reinforcement, and their output pattern is wide in the horizontal plane, a pattern that has convenient applications in concert sound.
Coaxial drivers
A coaxial driver is a loudspeaker driver with two or more combined concentric drivers. Coaxial drivers have been produced by Altec, Tannoy, Pioneer, KEF, SEAS, B&C Speakers, BMS, Cabasse and Genelec.
System design
Crossover
Used in multi-driver speaker systems, the crossover is an assembly of filters that separate the input signal into different frequency bands according to the requirements of each driver. Hence the drivers receive power only in the sound frequency range they were designed for, thereby reducing distortion in the drivers and interference between them. Crossovers can be passive or active.
A passive crossover is an electronic circuit that uses a combination of one or more resistors, inductors and capacitors. These components are combined to form a filter network and are most often placed between the full frequency-range power amplifier and the loudspeaker drivers to divide the amplifier's signal into the necessary frequency bands before being delivered to the individual drivers. Passive crossover circuits need no external power beyond the audio signal itself, but have some disadvantages: they may require large inductors and capacitors because of the power they must handle, and, unlike active crossover systems in which each band is separately amplified, they introduce an inherent attenuation within the passband and typically reduce the damping factor seen by the voice coil.
An active crossover is an electronic filter circuit that divides the signal into individual frequency bands before power amplification, thus requiring at least one power amplifier for each band. Passive filtering may also be used in this way before power amplification, but it is an uncommon solution, being less flexible than active filtering. Any technique that uses crossover filtering followed by amplification is commonly known as bi-amping, tri-amping, quad-amping, and so on, depending on the minimum number of amplifier channels.
Some loudspeaker designs use a combination of passive and active crossover filtering, such as a passive crossover between the mid- and high-frequency drivers and an active crossover for the low-frequency driver.
Passive crossovers are commonly installed inside speaker boxes and are by far the most common type of crossover for home and low-power use. In car audio systems, passive crossovers may be in a separate box, necessary to accommodate the size of the components used. Passive crossovers may be simple for low-order filtering, or complex to allow steep slopes such as 18 or 24 dB per octave. Passive crossovers can also be designed to compensate for undesired characteristics of driver, horn, or enclosure resonances, and can be tricky to implement, due to component interaction. Passive crossovers, like the driver units that they feed, have power handling limits, have insertion losses, and change the load seen by the amplifier. The changes are matters of concern for many in the hi-fi world. When high output levels are required, active crossovers may be preferable. Active crossovers may be simple circuits that emulate the response of a passive network or may be more complex, allowing extensive audio adjustments. Some active crossovers, usually digital loudspeaker management systems, may include electronics and controls for precise alignment of phase and time between frequency bands, equalization, dynamic range compression and limiting.
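To make the active-crossover idea concrete, the sketch below splits a digitized signal into two bands before separate (hypothetical) amplification, using SciPy's Butterworth filters. The 2 kHz crossover point, fourth-order slope, and Butterworth alignment are arbitrary example choices; real designs often use other alignments such as Linkwitz-Riley.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def two_way_crossover(x, fs, fc=2000, order=4):
    """Split a signal into low and high bands at fc Hz, a simple
    stand-in for an active two-way crossover (illustrative only)."""
    sos_lo = butter(order, fc, btype='lowpass', fs=fs, output='sos')
    sos_hi = butter(order, fc, btype='highpass', fs=fs, output='sos')
    return sosfilt(sos_lo, x), sosfilt(sos_hi, x)

# Feed a 1 kHz + 5 kHz test tone through the crossover.
fs = 48_000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 5000 * t)
woofer_band, tweeter_band = two_way_crossover(x, fs)
```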
Enclosures
Most loudspeaker systems consist of drivers mounted in an enclosure, or cabinet. The role of the enclosure is to prevent sound waves emanating from the back of a driver from interfering destructively with those from the front. The sound waves emitted from the back are 180° out of phase with those emitted forward, so without an enclosure they typically cause cancellations which significantly degrade the level and quality of sound at low frequencies.
The simplest driver mount is a flat panel (baffle) with the drivers mounted in holes in it. However, in this approach, sound frequencies with a wavelength longer than the baffle dimensions are canceled out because the antiphase radiation from the rear of the cone interferes with the radiation from the front. With an infinitely large panel, this interference could be entirely prevented. A sufficiently large sealed box can approach this behavior.
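A rule-of-thumb estimate of where a finite baffle stops preventing front-to-back cancellation can be had by setting the wavelength equal to the baffle dimension. That equality is an assumption made here for illustration; real baffle behavior rolls off gradually rather than at a sharp cutoff.

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def baffle_cutoff_frequency(baffle_width_m):
    """Rough frequency below which an open baffle of the given width
    no longer prevents cancellation: f ~ c / wavelength, taking the
    wavelength equal to the baffle dimension (rule of thumb)."""
    return SPEED_OF_SOUND / baffle_width_m

# A hypothetical 0.5 m wide flat baffle starts losing bass below ~686 Hz.
print(f"{baffle_cutoff_frequency(0.5):.0f} Hz")
```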
Since panels of infinite dimensions are impossible, most enclosures function by containing the rear radiation from the moving diaphragm. A sealed enclosure prevents transmission of the sound emitted from the rear of the loudspeaker by confining the sound in a rigid and airtight box. Techniques used to reduce the transmission of sound through the walls of the cabinet include thicker cabinet walls, internal bracing and lossy wall material.
However, a rigid enclosure reflects sound internally, which can then be transmitted back through the loudspeaker diaphragm—again resulting in degradation of sound quality. This can be reduced by internal absorption using absorptive materials such as glass wool, wool, or synthetic fiber batting, within the enclosure. The internal shape of the enclosure can also be designed to reduce this by reflecting sounds away from the loudspeaker diaphragm, where they may then be absorbed.
Other enclosure types alter the rear sound radiation so it can add constructively to the output from the front of the cone. Designs that do this (including bass reflex, passive radiator, transmission line, etc.) are often used to extend the effective low-frequency response and increase the low-frequency output of the driver.
To make the transition between drivers as seamless as possible, system designers have attempted to time align the drivers by moving one or more driver mounting locations forward or back so that the acoustic center of each driver is in the same vertical plane. This may also involve tilting the driver back, providing a separate enclosure mounting for each driver, or using electronic techniques to achieve the same effect. These attempts have resulted in some unusual cabinet designs.
The speaker mounting scheme (including cabinets) can also cause diffraction, resulting in peaks and dips in the frequency response. The problem is usually greatest at higher frequencies, where wavelengths are similar to, or smaller than, cabinet dimensions.
Horn loudspeakers
Horn loudspeakers are the oldest form of loudspeaker system. The use of horns as voice-amplifying megaphones dates at least to the 17th century, and horns were used in mechanical gramophones as early as 1877. Horn loudspeakers use a shaped waveguide in front of or behind the driver to increase the directivity of the loudspeaker and to transform a small diameter, high-pressure condition at the driver cone surface to a large diameter, low-pressure condition at the mouth of the horn. This improves the acoustic/electromechanical impedance match between the driver and ambient air, increasing efficiency, and focusing the sound over a narrower area.
The size of the throat, mouth, the length of the horn, as well as the area expansion rate along it must be carefully chosen to match the driver to properly provide this transforming function over a range of frequencies. The length and cross-sectional mouth area required to create a bass or sub-bass horn dictates a horn many feet long. Folded horns can reduce the total size, but compel designers to make compromises and accept increased cost and construction complications. Some horn designs not only fold the low-frequency horn but use the walls in a room corner as an extension of the horn mouth. In the late 1940s, horns whose mouths took up much of a room wall were not unknown among hi-fi fans. Room-sized installations became much less acceptable when two or more were required.
A horn-loaded speaker can have a sensitivity as high as 110 dB at 2.83 volts (1 watt at 8 ohms) at 1 meter. This is a hundredfold increase in output compared to a speaker rated at 90 dB sensitivity and is invaluable in applications where high sound levels are required or amplifier power is limited.
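The hundredfold figure follows directly from the decibel definition: a 20 dB difference in SPL at the same input power corresponds to a 10^(20/10) = 100x ratio in acoustic output. A one-line check:

```python
def power_ratio_from_db(delta_db):
    """Acoustic power ratio implied by an SPL difference at the same
    input power: ratio = 10 ** (delta_dB / 10)."""
    return 10 ** (delta_db / 10)

# 110 dB vs 90 dB sensitivity at 1 W / 1 m:
print(power_ratio_from_db(110 - 90))  # 100.0 -> the "hundredfold" figure
```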
Transmission line loudspeaker
A transmission line loudspeaker is a loudspeaker enclosure design that uses an acoustic transmission line within the cabinet, compared to the simpler enclosure-based designs. Instead of reverberating in a fairly simple damped enclosure, sound from the back of the bass speaker is directed into a long (generally folded) damped pathway within the speaker enclosure, which allows greater control and efficient use of speaker energy.
Wiring connections
Most home hi-fi loudspeakers use two wiring points to connect to the source of the signal (for example, to the audio amplifier or receiver). To accept the wire connection, the loudspeaker enclosure may have binding posts, spring clips, or a panel-mount jack. If the wires for a pair of speakers are not connected with respect to the proper electrical polarity, the loudspeakers are said to be out of phase or more properly out of polarity. Given identical signals, motion in the cone of an out of polarity loudspeaker is in the opposite direction of the others. This typically causes monophonic material in a stereo recording to be canceled out, reduced in level, and made more difficult to localize, all due to destructive interference of the sound waves. The cancellation effect is most noticeable at frequencies where the loudspeakers are separated by a quarter wavelength or less; low frequencies are affected the most. This type of miswiring error does not damage speakers, but is not optimal for listening.
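The cancellation described above is easy to demonstrate numerically. The sketch below models the idealized case of two identical, co-located sources, so the out-of-polarity pair cancels completely; in a real room, unequal path lengths make the cancellation partial and frequency-dependent, as the quarter-wavelength remark above suggests.

```python
import numpy as np

fs, f = 48_000, 100            # sample rate (Hz), test tone (Hz)
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * f * t)

in_phase = tone + tone         # both speakers wired with the same polarity
out_of_polarity = tone - tone  # one speaker's connections reversed

print(np.max(np.abs(in_phase)))         # 2.0 -> signals reinforce
print(np.max(np.abs(out_of_polarity)))  # 0.0 -> complete cancellation
```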
With sound reinforcement system, PA system and instrument amplifier speaker enclosures, cables and some type of jack or connector are typically used. Lower- and mid-priced sound system and instrument speaker cabinets often use 1/4" jacks. Higher-priced and higher-powered sound system cabinets and instrument speaker cabinets often use Speakon connectors. Speakon connectors are considered to be safer for high-wattage amplifiers, because the connector is designed so that human users cannot touch the connectors.
Wireless speakers
Wireless speakers are similar to wired powered speakers, but they receive audio signals using radio frequency (RF) waves rather than over audio cables. There is an amplifier integrated in the speaker's cabinet because the RF waves alone are not enough to drive the speaker. Wireless speakers still need power, so require a nearby AC power outlet, or onboard batteries. Only the wire for the audio is eliminated.
Specifications
Speaker specifications generally include:
Speaker or driver type (individual units only) – full-range, woofer, tweeter, or mid-range.
Size of individual drivers. For cone drivers, the quoted size is generally the outside diameter of the basket. Less commonly, it may also be the diameter of the cone surround, measured apex to apex, or the distance from the center of one mounting hole to its opposite. Voice-coil diameter may also be specified. If the loudspeaker has a compression horn driver, the diameter of the horn throat may be given.
Rated power – the nominal power and peak power a loudspeaker can handle. A driver may be damaged at much less than its rated power if driven past its mechanical limits at lower frequencies. In some jurisdictions, power handling has a legal meaning allowing comparisons between loudspeakers under consideration. Elsewhere, the variety of meanings for power handling capacity can be quite confusing.
Impedance – typically 4 Ω (ohms), 8 Ω, etc.
Baffle or enclosure type (enclosed systems only) – Sealed, bass reflex, etc.
Number of drivers (complete speaker systems only) – two-way, three-way, etc.
Class of loudspeaker:
Class 1: maximum SPL 110-119 dB, the type of loudspeaker used for reproducing a person speaking in a small space or for background music; mainly used as fill speakers for Class 2 or Class 3 speakers; typically small 4" or 5" woofers and dome tweeters
Class 2: maximum SPL 120-129 dB, the type of medium power-capable loudspeaker used for reinforcement in small to medium spaces or as fill speakers for Class 3 or Class 4 speakers; typically 5" to 8" woofers and dome tweeters
Class 3: maximum SPL 130-139 dB, high power-capable loudspeakers used in main systems in small to medium spaces; also used as fill speakers for class 4 speakers; typically 6.5" to 12" woofers and 2" or 3" compression drivers for high frequencies
Class 4: maximum SPL 140 dB and higher, very high power-capable loudspeakers used as mains in medium to large spaces (or for fill speakers for these medium to large spaces); 10" to 15" woofers and 3" compression drivers
and optionally:
Crossover frequency(ies) (multi-driver systems only) – The nominal frequency boundaries of the division between drivers.
Frequency response – The measured, or specified, output over a specified range of frequencies for a constant input level varied across those frequencies. It sometimes includes a variance limit, such as within "± 2.5 dB."
Thiele/Small parameters (individual drivers only) – these include the driver's Fs (resonance frequency), Qts (a driver's Q; more or less, its damping factor at resonant frequency), Vas (the equivalent air compliance volume of the driver), etc.
Sensitivity – The sound pressure level produced by a loudspeaker in a non-reverberant environment, often specified in dB and measured at 1 meter with an input of 1 watt (2.83 rms volts into 8 Ω), typically at one or more specified frequencies. Manufacturers often use this rating in marketing material.
Maximum sound pressure level – The highest output the loudspeaker can manage, short of damage or not exceeding a particular distortion level. Manufacturers often use this rating in marketing material—commonly without reference to frequency range or distortion level.
Electrical characteristics of dynamic loudspeakers
To make sound, a loudspeaker is driven by modulated electric current (produced by an amplifier) that passes through a voice coil, creating a magnetic field around the coil. The electric current variations that pass through the speaker are thus converted to a varying magnetic field, whose interaction with the field of the driver's permanent magnet moves the speaker diaphragm, forcing the driver to produce air motion that is similar to the original signal from the amplifier.
The load that a driver presents to an amplifier consists of a complex electrical impedance—a combination of resistance and both capacitive and inductive reactance, which combines properties of the driver, its mechanical motion, the effects of crossover components (if any are in the signal path between amplifier and driver), and the effects of air loading on the driver as modified by the enclosure and its environment. Most amplifiers' output specifications are given at a specific power into an ideal resistive load; however, a loudspeaker does not have a constant impedance across its frequency range. Instead, the voice coil is inductive, the driver has mechanical resonances, the enclosure changes the driver's electrical and mechanical characteristics, and a passive crossover between the drivers and the amplifier contributes its own variations. The result is a load impedance that varies widely with frequency, and usually a varying phase relationship between voltage and current as well, also changing with frequency. Some amplifiers can cope with the variation better than others can.
Electromechanical measurements
Examples of typical loudspeaker measurement are: amplitude and phase characteristics vs. frequency; impulse response under one or more conditions (e.g. square waves, sine wave bursts, etc.); directivity vs. frequency (e.g. horizontally, vertically, spherically, etc.); harmonic and intermodulation distortion vs. sound pressure level (SPL) output, using any of several test signals; stored energy (i.e. ringing) at various frequencies; impedance vs. frequency; and small-signal vs. large-signal performance. Most of these measurements require sophisticated and often expensive equipment to perform. The sound pressure level (SPL) a loudspeaker produces is measured in decibels (dBspl).
Efficiency vs. sensitivity
Loudspeaker efficiency is defined as the sound power output divided by the electrical power input. Most loudspeakers are inefficient transducers; only about 1% of the electrical energy sent by an amplifier to a typical home loudspeaker is converted to acoustic energy. The remainder is converted to heat, mostly in the voice coil and magnet assembly. The main reason for this is the difficulty of achieving proper impedance matching between the acoustic impedance of the drive unit and the air it radiates into. The efficiency of loudspeaker drivers varies with frequency as well. For instance, the output of a woofer driver decreases as the input frequency decreases because of the increasingly poor impedance match between air and the driver.
Driver ratings based on the SPL for a given input are called sensitivity ratings and are notionally similar to efficiency. Sensitivity is usually defined as the SPL in decibels at 1 W electrical input, measured at 1 meter, often at a single frequency. The voltage used is often 2.83 VRMS, which results in 1 watt into a nominal 8 Ω speaker impedance. Measurements taken with this reference are quoted as dB with 2.83 V @ 1 m.
The sound pressure output is measured at (or mathematically scaled to be equivalent to a measurement taken at) one meter from the loudspeaker and on-axis (directly in front of it), under the condition that the loudspeaker is radiating into an infinitely large space and mounted on an infinite baffle. Clearly then, sensitivity does not correlate precisely with efficiency, as it also depends on the directivity of the driver being tested and the acoustic environment in front of the actual loudspeaker. For example, a cheerleader's horn produces more sound output in the direction it is pointed by concentrating sound waves from the cheerleader in one direction, thus focusing them. The horn also improves impedance matching between the voice and the air, which produces more acoustic power for a given speaker power. In some cases, improved impedance matching (via careful enclosure design) lets the speaker produce more acoustic power.
Typical home loudspeakers have sensitivities of about 85 to 95 dB for 1 W @ 1 m—an efficiency of 0.5–4%. Sound reinforcement and public address loudspeakers have sensitivities of perhaps 95 to 102 dB for 1 W @ 1 m—an efficiency of 4–10%. Rock concert, stadium PA, marine hailing, etc. speakers generally have higher sensitivities of 103 to 110 dB for 1 W @ 1 m—an efficiency of 10–20%.
Since sensitivity and power handling are largely independent properties, a driver with a higher maximum power rating cannot necessarily be driven to louder levels than a lower-rated one. In the example that follows, assume (for simplicity) that the drivers being compared have the same electrical impedance, are operated at the same frequency within both driver's respective passbands, and that power compression and distortion are insignificant. A speaker 3 dB more sensitive than another produces double the sound power (is 3 dB louder) for the same electrical power input. Thus, a 100 W driver (A) rated at 92 dB for 1 W @ 1 m sensitivity puts out twice as much acoustic power as a 200 W driver (B) rated at 89 dB for 1 W @ 1 m when both are driven with 100 W of electrical power. In this example, when driven at 100 W, speaker A produces the same SPL, or loudness as speaker B would produce with 200 W input. Thus, a 3 dB increase in the sensitivity of the speaker means that it needs half the amplifier power to achieve a given SPL. This translates into a smaller, less complex power amplifier—and often, to reduced overall system cost.
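As a rough check of this arithmetic, here is a minimal Python sketch (illustrative only; the function name is invented, the figures are the hypothetical ones from the example above, and power compression and distortion are ignored, as the text assumes):

import math

def spl_at_power(sensitivity_db, power_w):
    # SPL at 1 m for a given input power, from a 1 W / 1 m sensitivity rating.
    # Each doubling of electrical power adds about 3 dB (10 * log10(2)).
    return sensitivity_db + 10 * math.log10(power_w)

print(spl_at_power(92, 100))  # speaker A: ~112 dB at 100 W
print(spl_at_power(89, 200))  # speaker B: ~112 dB, needing 200 W for the same SPL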
It is typically not possible to combine high efficiency (especially at low frequencies) with compact enclosure size and adequate low-frequency response. One can, for the most part, choose only two of the three parameters when designing a speaker system. So, for example, if extended low-frequency performance and small box size are important, one must accept low efficiency. This rule of thumb is sometimes called Hofmann's Iron Law (after J.A. Hofmann, the H in KLH).
Listening environment
The interaction of a loudspeaker system with its environment is complex and is largely out of the loudspeaker designer's control. Most listening rooms present a more or less reflective environment, depending on size, shape, volume, and furnishings. This means the sound reaching a listener's ears consists not only of sound directly from the speaker system, but also the same sound delayed by traveling to and from (and being modified by) one or more surfaces. These reflected sound waves, when added to the direct sound, cause cancellation and addition at assorted frequencies (e.g. from resonant room modes), thus changing the timbre and character of the sound at the listener's ears. The human brain is sensitive to small variations in reflected sound, and this is part of the reason why a loudspeaker system sounds different at different listening positions or in different rooms.
A significant factor in the sound of a loudspeaker system is the amount of absorption and diffusion present in the environment. Clapping one's hands in a typical empty room, without draperies or carpet, produces a zippy, fluttery echo due to a lack of absorption and diffusion.
Placement
In a typical rectangular listening room, the hard, parallel surfaces of the walls, floor and ceiling cause primary acoustic resonance nodes in each of the three dimensions: left-right, up-down and forward-backward. Furthermore, there are more complex resonance modes involving up to all six boundary surfaces combining to create standing waves. This is called speaker boundary interference response (SBIR). Low frequencies excite these modes the most, since long wavelengths are not much affected by furniture compositions or placement. The mode spacing is critical, especially in small and medium-sized rooms like recording studios, home theaters and broadcast studios. The proximity of the loudspeakers to room boundaries affects how strongly the resonances are excited as well as affecting the relative strength at each frequency. The location of the listener is critical, too, as a position near a boundary can have a great effect on the perceived balance of frequencies. This is because standing wave patterns are most easily heard in these locations and at lower frequencies, below the Schroeder frequency—typically around 200–300 Hz, depending on room size.
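The resonance frequencies themselves follow from the standard rigid-wall rectangular room formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A minimal Python sketch, assuming an idealized hard-walled room and a hypothetical 5 m × 4 m × 2.5 m geometry:

import math

def room_mode_freq(nx, ny, nz, Lx, Ly, Lz, c=343.0):
    # Resonance frequency (Hz) of mode (nx, ny, nz) in a rigid rectangular room.
    return (c / 2) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

print(room_mode_freq(1, 0, 0, 5, 4, 2.5))  # ~34 Hz, first axial mode along 5 m
print(room_mode_freq(0, 1, 0, 5, 4, 2.5))  # ~43 Hz, first axial mode along 4 m
print(room_mode_freq(0, 0, 1, 5, 4, 2.5))  # ~69 Hz, floor-to-ceiling axial mode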
Directivity
Acousticians, in studying the radiation of sound sources, have developed some concepts important to understanding how loudspeakers are perceived. The simplest possible radiating source is a point source, sometimes called a simple source. An ideal point source is an infinitesimally small point radiating sound. It may be easier to imagine a tiny pulsating sphere, uniformly increasing and decreasing in diameter, sending out sound waves in all directions equally, independent of frequency.
Any object radiating sound, including a loudspeaker system, can be thought of as being composed of combinations of such simple point sources. The radiation pattern of a combination of point sources is not the same as for a single source, but depends on the distance and orientation between the sources, the position relative to them from which the listener hears the combination, and the frequency of the sound involved. Using geometry and calculus, some simple combinations of sources are easily solved; others are not.
One simple combination is two simple sources separated by a distance and vibrating out of phase, one miniature sphere expanding while the other is contracting. The pair is known as a doublet, or dipole, and the radiation of this combination is similar to that of a very small dynamic loudspeaker operating without a baffle. The directivity of a dipole is a figure-8 shape with maximum output along the vector that connects the two sources, and minima to the sides, where the observing point is equidistant from the two sources and the positive and negative waves cancel each other. Most drivers are dipoles; depending on the enclosure to which they are attached, they may radiate as monopoles, dipoles, or bipoles. If a driver is mounted on a finite baffle and these out-of-phase waves are allowed to interact, dipole peaks and nulls appear in the frequency response. When the rear radiation is absorbed or trapped in a box, the diaphragm becomes a monopole radiator. Bipolar speakers, made by mounting in-phase monopoles (both moving out of or into the box in unison) on opposite sides of a box, are a method of approaching omnidirectional radiation patterns.
In real life, individual drivers are complex 3D shapes such as cones and domes, and they are placed on a baffle for various reasons. A mathematical expression for the directivity of a complex shape, based on modeling combinations of point sources, is usually not possible, but in the far field, the directivity of a loudspeaker with a circular diaphragm is close to that of a flat circular piston, so it can be used as an illustrative simplification for discussion. As a simple example of the mathematical physics involved, consider the following:
the formula for far-field directivity of a flat circular piston in an infinite baffle is
p(θ) = p₀ |2 J₁(ka sin θ) / (ka sin θ)|
where p₀ is the pressure on axis, a is the piston radius, λ is the wavelength (i.e. k = 2π/λ is the wavenumber), θ is the angle off axis and J₁ is the Bessel function of the first kind.
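A minimal Python sketch of this formula, using SciPy's Bessel function of the first kind; the driver radius and frequency are arbitrary illustrative values:

import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def piston_directivity(theta_rad, a, freq, c=343.0):
    # Normalized far-field pressure |p(theta)/p0| for a flat circular piston
    # of radius a in an infinite baffle: |2 J1(ka sin(theta)) / (ka sin(theta))|.
    k = 2 * np.pi * freq / c  # wavenumber k = 2*pi/lambda
    x = k * a * np.sin(theta_rad)
    return np.where(np.abs(x) < 1e-9, 1.0, np.abs(2 * j1(x) / x))  # -> 1 on axis

print(piston_directivity(np.radians(30), a=0.1, freq=1000.0))  # ~0.89 at 30 degrees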
A planar source radiates sound uniformly at low frequencies, whose wavelengths are longer than the dimensions of the source; as frequency increases, the sound from such a source focuses into an increasingly narrow angle. The smaller the driver, the higher the frequency where this narrowing of directivity occurs. Even if the diaphragm is not perfectly circular, this effect occurs such that larger sources are more directive. Several loudspeaker designs approximate this behavior. Most are electrostatic or planar magnetic designs.
Various manufacturers use different driver mounting arrangements to create a specific type of sound field in the space for which they are designed. The resulting radiation patterns may be intended to more closely simulate the way sound is produced by real instruments, or simply create a controlled energy distribution from the input signal (some using this approach are called monitors, as they are useful in checking the signal just recorded in a studio). An example of the first is a room corner system with many small drivers on the surface of a 1/8 sphere. A system design of this type was patented and produced commercially by Professor Amar Bose—the 2201. Later Bose models have deliberately emphasized production of both direct and reflected sound by the loudspeaker itself, regardless of its environment. The designs are controversial in high fidelity circles, but have proven commercially successful. Several other manufacturers' designs follow similar principles.
Directivity is an important issue because it affects the frequency balance of sound a listener hears, and also the interaction of the speaker system with the room and its contents. A very directive (sometimes termed 'beamy') speaker (i.e. on an axis perpendicular to the speaker face) may result in a reverberant field lacking in high frequencies, giving the impression the speaker is deficient in treble even though it measures well on axis (e.g. flat across the entire frequency range). Speakers with very wide, or rapidly increasing directivity at high frequencies, can give the impression that there is too much treble (if the listener is on axis) or too little (if the listener is off axis). This is part of the reason why on-axis frequency response measurement is not a complete characterization of the sound of a given loudspeaker.
Other speaker designs
While dynamic cone speakers remain the most popular choice, many other speaker technologies exist.
With a diaphragm
Moving-iron loudspeakers
The original loudspeaker design was the moving iron. Unlike the newer dynamic (moving coil) design, a moving-iron speaker uses a stationary coil to vibrate a magnetized piece of metal (called the iron, reed, or armature). The metal is either attached to the diaphragm or is the diaphragm itself. This design originally appeared in the early telephone.
Moving-iron drivers are inefficient and can produce only a narrow band of frequencies. They require large magnets and coils to increase force.
Balanced armature drivers (a type of moving iron driver) use an armature that moves like a see-saw or diving board. Since they are not damped, they are highly efficient, but they also produce strong resonances. They are still used today for high-end earphones and hearing aids, where small size and high efficiency are important.
Piezoelectric speakers
Piezoelectric speakers are frequently used as beepers in watches and other electronic devices, and are sometimes used as tweeters in less-expensive speaker systems, such as computer speakers and portable radios. Piezoelectric speakers have several advantages over conventional loudspeakers: they are resistant to overloads that would normally destroy most high-frequency drivers, and they can be used without a crossover due to their electrical properties. There are also disadvantages: some amplifiers can oscillate when driving capacitive loads like most piezoelectrics, which results in distortion or damage to the amplifier. Additionally, their frequency response, in most cases, is inferior to that of other technologies. This is why they are generally used in single-frequency (beeper) or non-critical applications.
Piezoelectric speakers can have extended high-frequency output, and this is useful in some specialized circumstances; for instance, sonar applications in which piezoelectric variants are used as both output devices (generating underwater sound) and as input devices (acting as the sensing components of underwater microphones). They have advantages in these applications, not the least of which is simple and solid-state construction that resists seawater better than a ribbon or cone-based device would.
In 2013, Kyocera introduced ultra-thin piezoelectric film speakers, only 1 millimeter thick and 7 grams in weight, for its 55" OLED televisions, and hopes the speakers will also be used in PCs and tablets. Besides the medium size, there are also large and small sizes, all of which can produce roughly the same sound quality and volume within 180 degrees. The highly responsive speaker material provides better clarity than traditional TV speakers.
Magnetostatic loudspeakers
Instead of a voice coil driving a speaker cone, a magnetostatic speaker uses an array of metal strips bonded to a large film membrane. The magnetic field produced by signal current flowing through the strips interacts with the field of permanent bar magnets mounted behind them. The force produced moves the membrane and so the air in front of it. Typically, these designs are less efficient than conventional moving-coil speakers.
Magnetostrictive speakers
Magnetostrictive transducers, based on magnetostriction, have been predominantly used as sonar ultrasonic sound wave radiators, but their use has spread also to audio speaker systems. Magnetostrictive speaker drivers have some special advantages: they can provide greater force (with smaller excursions) than other technologies; low excursion can avoid distortions from large excursion as in other designs; the magnetizing coil is stationary and therefore more easily cooled; they are robust because delicate suspensions and voice coils are not required. Magnetostrictive speaker modules have been produced by Fostex and FeONIC and subwoofer drivers have also been produced.
Electrostatic loudspeakers
Electrostatic loudspeakers use a high-voltage electric field (rather than a magnetic field) to drive a thin statically charged membrane. Because they are driven over the entire membrane surface rather than from a small voice coil, they ordinarily provide a more linear and lower-distortion motion than dynamic drivers. They also have a relatively narrow dispersion pattern that can make for precise sound-field positioning. However, their optimum listening area is small and they are not very efficient speakers. They have the disadvantage that the diaphragm excursion is severely limited because of practical construction limitations—the further apart the stators are positioned, the higher the voltage must be to achieve acceptable efficiency. This increases the tendency for electrical arcs as well as increasing the speaker's attraction of dust particles. Arcing remains a potential problem with current technologies, especially when the panels are allowed to collect dust or dirt and are driven with high signal levels.
Electrostatics are inherently dipole radiators and due to the thin flexible membrane are less suited for use in enclosures to reduce low-frequency cancellation as with common cone drivers. Due to this and the low excursion capability, full-range electrostatic loudspeakers are large by nature, and the bass rolls off at a frequency corresponding to a quarter wavelength of the narrowest panel dimension. To reduce the size of commercial products, they are sometimes used as a high-frequency driver in combination with a conventional dynamic driver that handles the bass frequencies effectively.
Electrostatics are usually driven through a step-up transformer that multiplies the voltage swings produced by the power amplifier. This transformer also multiplies the capacitive load that is inherent in electrostatic transducers, which means the effective impedance presented to the power amplifiers varies widely by frequency. A speaker that is nominally 8 ohms may actually present a load of 1 ohm at higher frequencies, which is challenging to some amplifier designs.
Ribbon and planar magnetic loudspeakers
A ribbon speaker consists of a thin metal-film ribbon suspended in a magnetic field. The electrical signal is applied to the ribbon, which moves with it to create the sound. The advantage of a ribbon driver is that the ribbon has very little mass; thus, it can accelerate very quickly, yielding a very good high-frequency response. Ribbon loudspeakers are often very fragile. Most ribbon tweeters emit sound in a dipole pattern. A few have backings that limit the dipole radiation pattern. Above and below the ends of the more or less rectangular ribbon, there is less audible output due to phase cancellation, but the precise amount of directivity depends on the ribbon length. Ribbon designs generally require exceptionally powerful magnets, which makes them costly to manufacture. Ribbons have a very low resistance that most amplifiers cannot drive directly. As a result, a step down transformer is typically used to increase the current through the ribbon. The amplifier sees a load that is the ribbon's resistance times the transformer turns ratio squared. The transformer must be carefully designed so that its frequency response and parasitic losses do not degrade the sound, further increasing cost and complication relative to conventional designs.
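The impedance relation described here is the usual transformer reflection Z_seen = Z_ribbon × n², where n is the amplifier-side to ribbon-side turns ratio. A minimal Python sketch with a hypothetical ribbon resistance and turns ratio:

def reflected_impedance(ribbon_ohms, turns_ratio):
    # Load the amplifier sees through a step-down transformer:
    # impedance reflects by the square of the turns ratio (N_primary / N_secondary).
    return ribbon_ohms * turns_ratio ** 2

print(reflected_impedance(0.1, 9))  # a 0.1-ohm ribbon behind 9:1 looks like 8.1 ohms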
Planar magnetic speakers (having printed or embedded conductors on a flat diaphragm) are sometimes described as ribbons, but are not truly ribbon speakers. The term planar is generally reserved for speakers with roughly rectangular flat surfaces that radiate in a bipolar (i.e. front and back) manner. Planar magnetic speakers consist of a flexible membrane with a voice coil printed or mounted on it. The current flowing through the coil interacts with the magnetic field of carefully placed magnets on either side of the diaphragm, causing the membrane to vibrate more or less uniformly and without much bending or wrinkling. The driving force covers a large percentage of the membrane surface and reduces resonance problems inherent in coil-driven flat diaphragms.
Bending wave loudspeakers
Bending wave transducers use a diaphragm that is intentionally flexible. The rigidity of the material increases from the center to the outside. Short wavelengths radiate primarily from the inner area, while longer waves reach the edge of the speaker. To prevent reflections from the outside back into the center, long waves are absorbed by a surrounding damper. Such transducers can cover a wide frequency range (80 Hz to 35,000 Hz) and have been promoted as being close to an ideal point sound source. This uncommon approach is being taken by only a very few manufacturers, in very different arrangements.
The Ohm Walsh loudspeakers use a unique driver designed by Lincoln Walsh, who had been a radar development engineer in WWII. He became interested in audio equipment design and his last project was a unique, one-way speaker using a single driver. The cone faced down into a sealed, airtight enclosure. Rather than move back and forth as conventional speakers do, the cone rippled and created sound in a manner known in RF electronics as a "transmission line". The new speaker created a cylindrical sound field. Lincoln Walsh died before his speaker was released to the public. The Ohm Acoustics firm has produced several loudspeaker models using the Walsh driver design since then. German Physiks, an audio equipment firm in Germany, also produces speakers using this approach.
The German firm Manger has designed and produced a bending wave driver that at first glance appears conventional. In fact, the round panel attached to the voice coil bends in a carefully controlled way to produce full-range sound. Josef W. Manger was awarded with the Rudolf-Diesel-Medaille for extraordinary developments and inventions by the German institute of inventions.
Flat panel loudspeakers
There have been many attempts to reduce the size of speaker systems, or alternatively to make them less obvious. One such attempt was the development of exciter transducer coils mounted to flat panels to act as sound sources, most accurately called exciter/panel drivers. These can then be made in a neutral color and hung on walls where they are less noticeable than many speakers, or can be deliberately painted with patterns, in which case they can function decoratively. There are two related problems with flat panel techniques: first, a flat panel is necessarily more flexible than a cone shape in the same material, and therefore moves as a single unit even less, and second, resonances in the panel are difficult to control, leading to considerable distortion. Some progress has been made using lightweight, rigid materials such as Styrofoam, and several flat panel systems have been commercially produced in recent years.
Heil air motion transducers
Oskar Heil invented the air motion transducer in the 1960s. In this approach, a pleated diaphragm is mounted in a magnetic field and forced to close and open under control of a music signal. Air is forced from between the pleats in accordance with the imposed signal, generating sound. The drivers are less fragile than ribbons and considerably more efficient (and able to produce higher absolute output levels) than ribbon, electrostatic, or planar magnetic tweeter designs. ESS, a California manufacturer, licensed the design, employed Heil, and produced a range of speaker systems using his tweeters during the 1970s and 1980s. Lafayette Radio, a large US retail store chain, also sold speaker systems using such tweeters for a time. There are several manufacturers of these drivers (at least two in Germany—one of which produces a range of high-end professional speakers using tweeters and mid-range drivers based on the technology) and the drivers are increasingly used in professional audio. Martin Logan produces several AMT speakers in the US and GoldenEar Technologies incorporates them in its entire speaker line.
Transparent ionic conduction speaker
In 2013, a research team introduced a transparent ionic conduction speaker: two sheets of transparent conductive gel with a layer of transparent rubber in between, which withstands the high voltage and high actuation needed to reproduce good sound quality. The speaker is suitable for robotics, mobile computing and adaptive optics.
Digital speakers
Digital speakers have been the subject of experiments performed by Bell Labs as far back as the 1920s. The design is simple; each bit controls a driver, which is either fully 'on' or 'off'. Problems with this design have led manufacturers to abandon it as impractical for the present. First, for a reasonable number of bits (required for adequate sound reproduction quality), the physical size of a speaker system becomes very large. Secondly, due to inherent analog-to-digital conversion problems, the effect of aliasing is unavoidable, so that the audio output is reflected at equal amplitude in the frequency domain, on the other side of the Nyquist limit (half the sampling frequency), causing an unacceptably high level of ultrasonics to accompany the desired output. No workable scheme has been found to adequately deal with this.
Without a diaphragm
Plasma arc speakers
Plasma arc loudspeakers use electrical plasma as a radiating element. Since plasma has minimal mass, but is charged and therefore can be manipulated by an electric field, the result is a very linear output at frequencies far higher than the audible range. Problems of maintenance and reliability for this approach tend to make it unsuitable for mass market use. In 1978 Alan E. Hill of the Air Force Weapons Laboratory in Albuquerque, NM, designed the Plasmatronics Hill Type I, a tweeter whose plasma was generated from helium gas. This avoided the ozone and NOx produced by RF decomposition of air in an earlier generation of plasma tweeters made by the pioneering DuKane Corporation, who produced the Ionovac (marketed as the Ionofane in the UK) during the 1950s.
A less expensive variation on this theme is the use of a flame for the driver, as flames contain ionized (electrically charged) gases.
Thermoacoustic speakers
In 2008, researchers at Tsinghua University demonstrated a thermoacoustic loudspeaker (or thermophone) made of carbon nanotube thin film, whose working mechanism is the thermoacoustic effect. Sound-frequency electric currents periodically heat the carbon nanotubes, generating sound in the surrounding air. The CNT thin-film loudspeaker is transparent, stretchable and flexible.
In 2013, researchers at Tsinghua University further presented a thermoacoustic earphone made of carbon nanotube thin yarn and a thermoacoustic surface-mounted device. Both are fully integrated devices, compatible with silicon-based semiconductor technology.
Rotary woofers
A rotary woofer is essentially a fan with blades that constantly change their pitch, allowing them to easily push the air back and forth. Rotary woofers are able to efficiently reproduce subsonic frequencies, which are difficult or impossible to achieve with a traditional speaker with a diaphragm. They are often employed in movie theaters to recreate rumbling bass effects, such as explosions.
| Technology | Media and communication | null |
45906 | https://en.wikipedia.org/wiki/Exponential%20distribution | Exponential distribution | In probability theory and statistics, the exponential distribution or negative exponential distribution is the probability distribution of the distance between events in a Poisson point process, i.e., a process in which events occur continuously and independently at a constant average rate; the distance parameter could be any meaningful mono-dimensional measure of the process, such as time between production errors, or length along a roll of fabric in the weaving manufacturing process. It is a particular case of the gamma distribution. It is the continuous analogue of the geometric distribution, and it has the key property of being memoryless. In addition to being used for the analysis of Poisson point processes it is found in various other contexts.
The exponential distribution is not the same as the class of exponential families of distributions. This is a large class of probability distributions that includes the exponential distribution as one of its members, but also includes many other distributions, like the normal, binomial, gamma, and Poisson distributions.
Definitions
Probability density function
The probability density function (pdf) of an exponential distribution is
f(x; λ) = λe^(−λx) for x ≥ 0, and f(x; λ) = 0 for x < 0.
Here λ > 0 is the parameter of the distribution, often called the rate parameter. The distribution is supported on the interval [0, ∞). If a random variable X has this distribution, we write X ~ Exp(λ).
The exponential distribution exhibits infinite divisibility.
Cumulative distribution function
The cumulative distribution function is given by
F(x; λ) = 1 − e^(−λx) for x ≥ 0, and F(x; λ) = 0 for x < 0.
Alternative parametrization
The exponential distribution is sometimes parametrized in terms of the scale parameter β = 1/λ, which is also the mean:
f(x; β) = (1/β)e^(−x/β) for x ≥ 0.
Properties
Mean, variance, moments, and median
The mean or expected value of an exponentially distributed random variable X with rate parameter λ is given by E[X] = 1/λ.
In light of the examples given below, this makes sense; a person who receives an average of two telephone calls per hour can expect that the time between consecutive calls will be 0.5 hour, or 30 minutes.
The variance of X is given by Var[X] = 1/λ², so the standard deviation is equal to the mean.
The moments of X, for n = 1, 2, 3, ..., are given by E[Xⁿ] = n!/λⁿ.
The central moments of X, for n = 1, 2, 3, ..., are given by μₙ = !n/λⁿ, where !n is the subfactorial of n.
The median of X is given by m[X] = ln(2)/λ, where ln refers to the natural logarithm. Thus the absolute difference between the mean and median is
|E[X] − m[X]| = (1 − ln 2)/λ < 1/λ = σ[X],
in accordance with the median-mean inequality.
Memorylessness property of exponential random variable
An exponentially distributed random variable T obeys the relation
Pr(T > s + t | T > s) = Pr(T > t) for all s, t ≥ 0.
This can be seen by considering the complementary cumulative distribution function:
Pr(T > s + t | T > s) = Pr(T > s + t) / Pr(T > s) = e^(−λ(s+t)) / e^(−λs) = e^(−λt) = Pr(T > t).
When T is interpreted as the waiting time for an event to occur relative to some initial time, this relation implies that, if T is conditioned on a failure to observe the event over some initial period of time s, the distribution of the remaining waiting time is the same as the original unconditional distribution. For example, if an event has not occurred after 30 seconds, the conditional probability that occurrence will take at least 10 more seconds is equal to the unconditional probability of observing the event more than 10 seconds after the initial time.
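This property is easy to verify numerically. A minimal Python sketch, estimating both sides of the relation by Monte Carlo with an arbitrary rate λ = 2 and arbitrary times s = 0.5 and t = 0.25:

import random

random.seed(0)
lam, s, t = 2.0, 0.5, 0.25
samples = [random.expovariate(lam) for _ in range(1_000_000)]

p_uncond = sum(x > t for x in samples) / len(samples)        # Pr(T > t)
survivors = [x for x in samples if x > s]
p_cond = sum(x > s + t for x in survivors) / len(survivors)  # Pr(T > s+t | T > s)

print(p_uncond, p_cond)  # both close to exp(-lam * t) ~ 0.6065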
The exponential distribution and the geometric distribution are the only memoryless probability distributions.
The exponential distribution is consequently also the only continuous probability distribution that has a constant failure rate.
Quantiles
The quantile function (inverse cumulative distribution function) for Exp(λ) is
F⁻¹(p; λ) = −ln(1 − p)/λ, for 0 ≤ p < 1.
The quartiles are therefore:
first quartile: ln(4/3)/λ
median: ln(2)/λ
third quartile: ln(4)/λ
And as a consequence the interquartile range is ln(3)/λ.
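A minimal Python sketch evaluating the quantile function at the quartile levels (λ = 1 chosen for illustration):

import math

def exp_quantile(p, lam):
    # Inverse CDF of Exp(lam): F^-1(p) = -ln(1 - p) / lam, for 0 <= p < 1.
    return -math.log(1 - p) / lam

lam = 1.0
print(exp_quantile(0.25, lam))  # ln(4/3) ~ 0.2877, first quartile
print(exp_quantile(0.50, lam))  # ln(2)   ~ 0.6931, median
print(exp_quantile(0.75, lam))  # ln(4)   ~ 1.3863, third quartile
# interquartile range: ln(4) - ln(4/3) = ln(3)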
Conditional Value at Risk (Expected Shortfall)
The conditional value at risk (CVaR), also known as the expected shortfall or superquantile, for Exp(λ) follows from memorylessness: the expected excess over any threshold is 1/λ, so the CVaR at confidence level α is the corresponding quantile plus the mean,
CVaR_α = (−ln(1 − α) + 1)/λ.
Buffered Probability of Exceedance (bPOE)
The buffered probability of exceedance is one minus the probability level at which the CVaR equals the threshold x. Setting CVaR_α = x and solving for 1 − α gives
bPOE(x) = e^(1 − λx) for x > 1/λ.
Kullback–Leibler divergence
The directed Kullback–Leibler divergence in nats of Exp(λ) ("approximating" distribution) from Exp(λ₀) ("true" distribution) is given by
Δ(λ₀ ∥ λ) = ln(λ₀/λ) + λ/λ₀ − 1.
Maximum entropy distribution
Among all continuous probability distributions with support and mean μ, the exponential distribution with λ = 1/μ has the largest differential entropy. In other words, it is the maximum entropy probability distribution for a random variate X which is greater than or equal to zero and for which E[X] is fixed.
Distribution of the minimum of exponential random variables
Let X₁, ..., Xₙ be independent exponentially distributed random variables with rate parameters λ₁, ..., λₙ. Then
min{X₁, ..., Xₙ}
is also exponentially distributed, with parameter
λ = λ₁ + ... + λₙ.
This can be seen by considering the complementary cumulative distribution function:
Pr(min{X₁, ..., Xₙ} > x) = Pr(X₁ > x, ..., Xₙ > x) = ∏ᵢ e^(−λᵢx) = e^(−(λ₁ + ... + λₙ)x).
The index of the variable which achieves the minimum is distributed according to the categorical distribution
Pr(Xₖ = min{X₁, ..., Xₙ}) = λₖ/(λ₁ + ... + λₙ).
A proof can be seen by letting I = argminᵢ Xᵢ. Then,
Pr(I = k) = ∫₀^∞ λₖe^(−λₖx) ∏ᵢ≠ₖ e^(−λᵢx) dx = λₖ ∫₀^∞ e^(−λx) dx = λₖ/λ, where λ = λ₁ + ... + λₙ.
Note that
max{X₁, ..., Xₙ}
is not exponentially distributed (for n ≥ 2).
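Both results (the rate of the minimum and the distribution of the minimizing index) can be checked by simulation. A minimal Python sketch with arbitrary rates 1, 2 and 3:

import random
from collections import Counter

random.seed(1)
rates = [1.0, 2.0, 3.0]
n_trials = 200_000

mins, argmins = [], Counter()
for _ in range(n_trials):
    draws = [random.expovariate(lam) for lam in rates]
    mins.append(min(draws))
    argmins[draws.index(min(draws))] += 1

print(sum(mins) / n_trials)  # mean of the minimum: ~1/(1+2+3) = 0.1667
print({k: v / n_trials for k, v in sorted(argmins.items())})  # ~1/6, 2/6, 3/6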
Joint moments of i.i.d. exponential order statistics
Let X₁, ..., Xₙ be independent and identically distributed exponential random variables with rate parameter λ.
Let X₍₁₎, ..., X₍ₙ₎ denote the corresponding order statistics.
For , the joint moment of the order statistics and is given by
This can be seen by invoking the law of total expectation and the memoryless property:
The first equation follows from the law of total expectation.
The second equation exploits the fact that once we condition on , it must follow that . The third equation relies on the memoryless property to replace with .
Sum of two independent exponential random variables
The probability distribution function (PDF) of a sum of two independent random variables is the convolution of their individual PDFs. If X₁ and X₂ are independent exponential random variables with respective rate parameters λ₁ and λ₂ (with λ₁ ≠ λ₂), then the probability density of Z = X₁ + X₂ is given by
f_Z(z) = (λ₁λ₂/(λ₂ − λ₁))(e^(−λ₁z) − e^(−λ₂z)) for z ≥ 0.
The entropy of this distribution is available in closed form: assuming (without loss of generality), then
where is the Euler-Mascheroni constant, and is the digamma function.
In the case of equal rate parameters, the result is an Erlang distribution with shape 2 and parameter which in turn is a special case of gamma distribution.
The sum of n independent Exp(λ) exponential random variables is Gamma(n, λ) distributed.
Related distributions
If X ~ Laplace(μ, β⁻¹), then |X − μ| ~ Exp(β).
If X ~ U(0, 1) then −log(X) ~ Exp(1).
If X ~ Pareto(1, λ), then log(X) ~ Exp(λ).
If X ~ SkewLogistic(θ), then .
If Xi ~ U(0, 1) then
The exponential distribution is a limit of a scaled beta distribution:
The exponential distribution is a special case of type 3 Pearson distribution.
The exponential distribution is the special case of a Gamma distribution with shape parameter 1.
If X ~ Exp(λ), then:
kX ~ Exp(λ/k) for k > 0, closure under scaling by a positive factor.
1 + X ~ BenktanderWeibull(λ, 1), which reduces to a truncated exponential distribution.
ke^X ~ Pareto(k, λ).
e^(−λX) ~ U(0, 1).
e^(−X) ~ Beta(λ, 1).
e ~ PowerLaw(k, λ)
√X ~ Rayleigh(1/√(2λ)), the Rayleigh distribution
X^(1/k) ~ Weibull(λ^(−1/k), k), the Weibull distribution
.
⌊X⌋ ~ Geometric(1 − e^(−λ)), a geometric distribution on 0,1,2,3,...
⌈X⌉ ~ Geometric(1 − e^(−λ)), a geometric distribution on 1,2,3,4,...
If also Y ~ Erlang(n, λ) or then
If also λ ~ Gamma(k, θ) (shape, scale parametrisation) then the marginal distribution of X is Lomax(k, 1/θ), the gamma mixture
λX − λY ~ Laplace(0, 1).
min{X1, ..., Xn} ~ Exp(λ1 + ... + λn).
If Xᵢ ~ Exp(λ) are independent with a common rate λ, then:
X₁ + ... + Xₖ ~ Erlang(k, λ) = Gamma(k, λ⁻¹) = Gamma(k, λ) (in (k, θ) and (α, β) parametrization, respectively), with an integer shape parameter k.
If , then .
X₁ − X₂ ~ Laplace(0, λ⁻¹).
If X, Y ~ Exp(λ) are independent, then:
X/(X + Y) ~ U(0, 1)
X/Y has probability density function f(z) = 1/(1 + z)². This can be used to obtain a confidence interval for the ratio of two rate parameters.
If also λ = 1:
μ − β ln(X/Y) ~ Logistic(μ, β), the logistic distribution
μ − σ log(X) ~ GEV(μ, σ, 0).
Further if then (K-distribution)
If also λ = 1/2 then X ~ χ²₂; i.e., X has a chi-squared distribution with 2 degrees of freedom. Hence:
If X ~ Exp(λ) and Y | X ~ Poisson(X), then Y ~ Geometric(λ/(1 + λ)) (geometric distribution)
The Hoyt distribution can be obtained from exponential distribution and arcsine distribution
The exponential distribution is a limit of the κ-exponential distribution in the case.
Exponential distribution is a limit of the κ-Generalized Gamma distribution in the and cases:
Other related distributions:
Hyper-exponential distribution – the distribution whose density is a weighted sum of exponential densities.
Hypoexponential distribution – the distribution of a general sum of exponential random variables.
exGaussian distribution – the sum of an exponential distribution and a normal distribution.
Statistical inference
Below, suppose random variable X is exponentially distributed with rate parameter λ, and x₁, ..., xₙ are n independent samples from X, with sample mean x̄.
Parameter estimation
The maximum likelihood estimator for λ is constructed as follows.
The likelihood function for λ, given an independent and identically distributed sample x = (x₁, ..., xₙ) drawn from the variable, is:
L(λ) = ∏ᵢ λe^(−λxᵢ) = λⁿ e^(−λn x̄),
where:
x̄ = (1/n) ∑ᵢ xᵢ is the sample mean.
The derivative of the likelihood function's logarithm is:
d/dλ ln L(λ) = n/λ − n x̄, which is positive for λ < 1/x̄, zero at λ = 1/x̄, and negative for λ > 1/x̄.
Consequently, the maximum likelihood estimate for the rate parameter is:
λ̂ = 1/x̄ = n/∑ᵢ xᵢ.
This is not an unbiased estimator of λ, although x̄ is an unbiased MLE estimator of 1/λ and the distribution mean.
The bias of λ̂ is equal to
b ≡ E[λ̂ − λ] = λ/(n − 1),
which yields the bias-corrected maximum likelihood estimator
λ̂* = ((n − 1)/n) λ̂ = (n − 1)/∑ᵢ xᵢ.
An approximate minimizer of mean squared error (see also: bias–variance tradeoff) can be found, assuming a sample size greater than two, with a correction factor to the MLE:
λ̂ = ((n − 2)/n)(1/x̄) = (n − 2)/∑ᵢ xᵢ.
This is derived from the mean and variance of the inverse-gamma distribution, Inv-Gamma(n, λ).
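A minimal Python sketch comparing the three estimators on simulated data (the true rate and sample size are arbitrary):

import random

random.seed(2)
true_lam, n = 3.0, 50
x = [random.expovariate(true_lam) for _ in range(n)]
total = sum(x)

lam_mle = n / total             # maximum likelihood estimate, 1 / sample mean
lam_unbiased = (n - 1) / total  # bias-corrected estimator
lam_min_mse = (n - 2) / total   # approximate minimum-MSE estimator

print(lam_mle, lam_unbiased, lam_min_mse)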
Fisher information
The Fisher information, denoted I(λ), for an estimator of the rate parameter λ is given as:
I(λ) = E[(∂/∂λ ln f(x; λ))² | λ].
Plugging in the distribution and solving gives:
I(λ) = E[(1/λ − x)²] = 1/λ².
This determines the amount of information each independent sample of an exponential distribution carries about the unknown rate parameter .
Confidence intervals
An exact 100(1 − α)% confidence interval for the rate parameter of an exponential distribution is given by:
χ²(α/2; 2n) / (2n x̄) < λ < χ²(1 − α/2; 2n) / (2n x̄),
which is also equal to
2n x̄ / χ²(1 − α/2; 2n) < 1/λ < 2n x̄ / χ²(α/2; 2n),
where χ²(p; v) is the 100p percentile of the chi squared distribution with v degrees of freedom, n is the number of observations and x̄ is the sample average. A simple approximation to the exact interval endpoints can be derived using a normal approximation to the distribution. This approximation gives the following values for a 95% confidence interval:
λ_lower = λ̂(1 − 1.96/√n), λ_upper = λ̂(1 + 1.96/√n).
This approximation may be acceptable for samples containing at least 15 to 20 elements.
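A minimal Python sketch of the exact and approximate intervals, using SciPy's chi-squared quantile function on arbitrary simulated data:

import random
from scipy.stats import chi2

random.seed(3)
true_lam, n, alpha = 2.0, 40, 0.05
x = [random.expovariate(true_lam) for _ in range(n)]
total = sum(x)  # n * xbar

# Exact interval, from the pivot 2 * lam * sum(x) ~ chi-squared with 2n d.f.
print(chi2.ppf(alpha / 2, 2 * n) / (2 * total),
      chi2.ppf(1 - alpha / 2, 2 * n) / (2 * total))

# Normal approximation for comparison:
lam_hat = n / total
print(lam_hat * (1 - 1.96 / n ** 0.5), lam_hat * (1 + 1.96 / n ** 0.5))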
Bayesian inference
The conjugate prior for the exponential distribution is the gamma distribution (of which the exponential distribution is a special case). The following parameterization of the gamma probability density function is useful:
Gamma(λ; α, β) = (β^α / Γ(α)) λ^(α−1) e^(−λβ).
The posterior distribution p can then be expressed in terms of the likelihood function defined above and a gamma prior:
p(λ | x) ∝ L(λ) Gamma(λ; α, β) ∝ λ^(α+n−1) e^(−λ(β + n x̄)).
Now the posterior density p has been specified up to a missing normalizing constant. Since it has the form of a gamma pdf, this can easily be filled in, and one obtains:
p(λ | x) = Gamma(λ; α + n, β + n x̄).
Here the hyperparameter α can be interpreted as the number of prior observations, and β as the sum of the prior observations.
The posterior mean here is: (α + n)/(β + n x̄).
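The conjugate update itself is just two additions. A minimal Python sketch, with hypothetical prior hyperparameters and data:

# Gamma(alpha, beta) prior (shape, rate) on the rate parameter lambda.
alpha_prior, beta_prior = 2.0, 1.0
data = [0.3, 0.7, 0.2, 0.5, 0.4]  # hypothetical observed samples

alpha_post = alpha_prior + len(data)  # shape gains one per observation
beta_post = beta_prior + sum(data)    # rate gains the sum of the observations

print(alpha_post, beta_post, alpha_post / beta_post)  # posterior mean last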
Occurrence and applications
Occurrence of events
The exponential distribution occurs naturally when describing the lengths of the inter-arrival times in a homogeneous Poisson process.
The exponential distribution may be viewed as a continuous counterpart of the geometric distribution, which describes the number of Bernoulli trials necessary for a discrete process to change state. In contrast, the exponential distribution describes the time for a continuous process to change state.
In real-world scenarios, the assumption of a constant rate (or probability per unit time) is rarely satisfied. For example, the rate of incoming phone calls differs according to the time of day. But if we focus on a time interval during which the rate is roughly constant, such as from 2 to 4 p.m. during work days, the exponential distribution can be used as a good approximate model for the time until the next phone call arrives. Similar caveats apply to the following examples which yield approximately exponentially distributed variables:
The time until a radioactive particle decays, or the time between clicks of a Geiger counter
The time between receiving one telephone call and the next
The time until default (on payment to company debt holders) in reduced-form credit risk modeling
Exponential variables can also be used to model situations where certain events occur with a constant probability per unit length, such as the distance between mutations on a DNA strand, or between roadkills on a given road.
In queuing theory, the service times of agents in a system (e.g. how long it takes for a bank teller to serve a customer) are often modeled as exponentially distributed variables. (The arrival of customers, for instance, is also modeled by the Poisson distribution if the arrivals are independent and identically distributed.) The length of a process that can be thought of as a sequence of several independent tasks follows the Erlang distribution (which is the distribution of the sum of several independent exponentially distributed variables).
Reliability theory and reliability engineering also make extensive use of the exponential distribution. Because of the memoryless property of this distribution, it is well-suited to model the constant hazard rate portion of the bathtub curve used in reliability theory. It is also very convenient because it is so easy to add failure rates in a reliability model. The exponential distribution is however not appropriate to model the overall lifetime of organisms or technical devices, because the "failure rates" here are not constant: more failures occur for very young and for very old systems.
In physics, if one observes a gas at a fixed temperature and pressure in a uniform gravitational field, the heights of the various molecules also follow an approximate exponential distribution, known as the barometric formula. This is a consequence of the entropy property mentioned above.
In hydrology, the exponential distribution is used to analyze extreme values of such variables as monthly and annual maximum values of daily rainfall and river discharge volumes.
The blue picture illustrates an example of fitting the exponential distribution to ranked annually maximum one-day rainfalls showing also the 90% confidence belt based on the binomial distribution. The rainfall data are represented by plotting positions as part of the cumulative frequency analysis.
In operating-rooms management, the exponential distribution is used to model the distribution of surgery duration for a category of surgeries with no typical work content (as in an emergency room, encompassing all types of surgeries).
Prediction
Having observed a sample of n data points from an unknown exponential distribution, a common task is to use these samples to make predictions about future data from the same source. A common predictive distribution over future samples is the so-called plug-in distribution, formed by plugging a suitable estimate for the rate parameter λ into the exponential density function. A common choice of estimate is the one provided by the principle of maximum likelihood, and using this yields the predictive density over a future sample x_{n+1}, conditioned on the observed samples x = (x₁, ..., xₙ), given by
p_ML(x_{n+1} | x₁, ..., xₙ) = (1/x̄) exp(−x_{n+1}/x̄).
The Bayesian approach provides a predictive distribution which takes into account the uncertainty of the estimated parameter, although this may depend crucially on the choice of prior.
A predictive distribution free of the issues of choosing priors that arise under the subjective Bayesian approach is
p(x_{n+1} | x₁, ..., xₙ) = n(n x̄)ⁿ / (n x̄ + x_{n+1})^(n+1),
which can be considered as
a frequentist confidence distribution, obtained from the distribution of the pivotal quantity ;
a profile predictive likelihood, obtained by eliminating the parameter λ from the joint likelihood of xn+1 and λ by maximization;
an objective Bayesian predictive posterior distribution, obtained using the non-informative Jeffreys prior 1/λ;
the Conditional Normalized Maximum Likelihood (CNML) predictive distribution, from information theoretic considerations.
The accuracy of a predictive distribution may be measured using the distance or divergence between the true exponential distribution with rate parameter, λ0, and the predictive distribution based on the sample x. The Kullback–Leibler divergence is a commonly used, parameterisation free measure of the difference between two distributions. Letting Δ(λ0||p) denote the Kullback–Leibler divergence between an exponential with rate parameter λ0 and a predictive distribution p it can be shown that
where the expectation is taken with respect to the exponential distribution with rate parameter , and is the digamma function. It is clear that the CNML predictive distribution is strictly superior to the maximum likelihood plug-in distribution in terms of average Kullback–Leibler divergence for all sample sizes .
Random variate generation
A conceptually very simple method for generating exponential variates is based on inverse transform sampling: Given a random variate U drawn from the uniform distribution on the unit interval (0, 1), the variate
T = F⁻¹(U)
has an exponential distribution, where F⁻¹ is the quantile function, defined by
F⁻¹(p) = −ln(1 − p)/λ.
Moreover, if U is uniform on (0, 1), then so is 1 − U. This means one can generate exponential variates as follows:
T = −ln(U)/λ.
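A minimal Python sketch of this inverse transform method (the rate is arbitrary; the sample mean is printed as a sanity check against 1/λ):

import math
import random

def exponential_variate(lam, rng=random):
    # Inverse transform sampling: -ln(1 - U)/lam ~ Exp(lam) for U ~ Uniform[0, 1).
    # (Since 1 - U is also uniform, -ln(U)/lam works equally well.)
    return -math.log(1.0 - rng.random()) / lam

random.seed(4)
lam = 2.0
samples = [exponential_variate(lam) for _ in range(100_000)]
print(sum(samples) / len(samples))  # close to 1/lam = 0.5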
Other methods for generating exponential variates are discussed by Knuth and Devroye.
A fast method for generating a set of ready-ordered exponential variates without using a sorting routine is also available.
| Mathematics | Statistics and probability | null |
45922 | https://en.wikipedia.org/wiki/Geometric%20distribution | Geometric distribution | In probability theory and statistics, the geometric distribution is either one of two discrete probability distributions:
The probability distribution of the number of Bernoulli trials needed to get one success, supported on ;
The probability distribution of the number of failures before the first success, supported on .
These two different geometric distributions should not be confused with each other. Often, the name shifted geometric distribution is adopted for the former one (distribution of ); however, to avoid ambiguity, it is considered wise to indicate which is intended, by mentioning the support explicitly.
The geometric distribution gives the probability that the first occurrence of success requires k independent trials, each with success probability p. If the probability of success on each trial is p, then the probability that the k-th trial is the first success is
Pr(X = k) = (1 − p)^(k−1) p
for k = 1, 2, 3, 4, ...
The above form of the geometric distribution is used for modeling the number of trials up to and including the first success. By contrast, the following form of the geometric distribution is used for modeling the number of failures until the first success:
Pr(Y = k) = (1 − p)^k p
for k = 0, 1, 2, 3, ...
The geometric distribution gets its name because its probabilities follow a geometric sequence. It is sometimes called the Furry distribution after Wendell H. Furry.
Definition
The geometric distribution is the discrete probability distribution that describes when the first success in an infinite sequence of independent and identically distributed Bernoulli trials occurs. Its probability mass function depends on its parameterization and support. When supported on {1, 2, 3, ...}, the probability mass function is Pr(X = k) = (1 − p)^(k−1) p, where k is the number of trials and p is the probability of success in each trial.
The support may also be {0, 1, 2, ...}, defining Y = X − 1. This alters the probability mass function into Pr(Y = k) = (1 − p)^k p, where k is the number of failures before the first success.
An alternative parameterization of the distribution gives the probability mass function in terms of q = 1 − p:
Pr(Y = k) = q^k (1 − q), where k ∈ {0, 1, 2, ...} and 0 < q < 1.
An example of a geometric distribution arises from rolling a six-sided die until a "1" appears. Each roll is independent with a 1/6 chance of success. The number of rolls needed follows a geometric distribution with p = 1/6.
Properties
Memorylessness
The geometric distribution is the only memoryless discrete probability distribution. It is the discrete version of the same property found in the exponential distribution. The property asserts that the number of previously failed trials does not affect the number of future trials needed for a success.
Because there are two definitions of the geometric distribution, there are also two definitions of memorylessness for discrete random variables. Expressed in terms of conditional probability, the two definitions are
Pr(X > m + n | X > m) = Pr(X > n)
and
Pr(Y > m + n | Y ≥ m) = Pr(Y > n),
where m and n are natural numbers, X is a geometrically distributed random variable defined over {1, 2, 3, ...}, and Y is a geometrically distributed random variable defined over {0, 1, 2, ...}. Note that these definitions are not equivalent for discrete random variables; Y does not satisfy the first equation and X does not satisfy the second.
Moments and cumulants
The expected value and variance of a geometrically distributed random variable X defined over {1, 2, 3, ...} are E[X] = 1/p and Var[X] = (1 − p)/p². With a geometrically distributed random variable Y defined over {0, 1, 2, ...}, the expected value changes into E[Y] = (1 − p)/p, while the variance stays the same.
For example, when rolling a six-sided die until landing on a "1", the average number of rolls needed is 1/(1/6) = 6 and the average number of failures is (1 − 1/6)/(1/6) = 5.
The moment generating function of the geometric distribution when defined over {1, 2, 3, ...} and {0, 1, 2, ...} respectively is
M_X(t) = pe^t / (1 − (1 − p)e^t), M_Y(t) = p / (1 − (1 − p)e^t), for t < −ln(1 − p).
The moments for the number of failures before the first success are given by
E[Yⁿ] = ∑ₖ₌₀^∞ kⁿ (1 − p)^k p = p Li₋ₙ(1 − p) (for n ≥ 1),
where Li₋ₙ is the polylogarithm function.
The cumulant generating function of the geometric distribution defined over {0, 1, 2, ...} is
K(t) = ln p − ln(1 − (1 − p)e^t).
The cumulants κᵣ satisfy the recursion
κᵣ₊₁ = q dκᵣ/dq, where q = 1 − p, when defined over {0, 1, 2, ...}.
Proof of expected value
Consider the expected value of X as above, i.e. the average number of trials until a success.
On the first trial, we either succeed with probability p, or we fail with probability 1 − p.
If we fail the remaining mean number of trials until a success is identical to the original mean.
This follows from the fact that all trials are independent.
From this we get the formula:
E[X] = p·1 + (1 − p)(1 + E[X]),
which, if solved for E[X], gives:
E[X] = 1/p.
The expected number of failures can be found from the linearity of expectation, E[Y] = E[X − 1] = E[X] − 1 = (1 − p)/p. It can also be shown in the following way:
E[Y] = ∑ₖ₌₀^∞ k(1 − p)^k p = p(1 − p) ∑ₖ₌₁^∞ k(1 − p)^(k−1) = p(1 − p) · (1/p²) = (1 − p)/p,
using the term-by-term derivative of the geometric series ∑ₖ qᵏ = 1/(1 − q).
The interchange of summation and differentiation is justified by the fact that convergent power series converge uniformly on compact subsets of the set of points where they converge.
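The recursion E[X] = 1 + (1 − p)E[X] can also be checked by direct simulation. A minimal Python sketch using the die example (p = 1/6):

import random

random.seed(5)
p = 1 / 6  # e.g. rolling a die until a "1" appears

def trials_until_success(p, rng=random):
    # Count Bernoulli(p) trials up to and including the first success.
    k = 1
    while rng.random() >= p:
        k += 1
    return k

runs = [trials_until_success(p) for _ in range(100_000)]
print(sum(runs) / len(runs))  # close to 1/p = 6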
Summary statistics
The mean of the geometric distribution is its expected value which is, as previously discussed in § Moments and cumulants, 1/p or (1 − p)/p when defined over {1, 2, 3, ...} or {0, 1, 2, ...} respectively.
The median of the geometric distribution is ⌈−1/log₂(1 − p)⌉ when defined over {1, 2, 3, ...} and ⌈−1/log₂(1 − p)⌉ − 1 when defined over {0, 1, 2, ...}.
The mode of the geometric distribution is the first value in the support set. This is 1 when defined over and 0 when defined over .
The skewness of the geometric distribution is (2 − p)/√(1 − p).
The kurtosis of the geometric distribution is 9 + p²/(1 − p). The excess kurtosis of a distribution is the difference between its kurtosis and the kurtosis of a normal distribution, 3. Therefore, the excess kurtosis of the geometric distribution is 6 + p²/(1 − p). Since p²/(1 − p) ≥ 0, the excess kurtosis is always positive so the distribution is leptokurtic. In other words, the tail of a geometric distribution decays faster than a Gaussian.
Entropy and Fisher's Information
Entropy (Geometric Distribution, Failures Before Success)
Entropy is a measure of uncertainty in a probability distribution. For the geometric distribution that models the number of failures before the first success, the probability mass function is:
Pr(Y = k) = (1 − p)^k p, k = 0, 1, 2, ...
The entropy for this distribution is defined as:
H(Y) = −∑ₖ₌₀^∞ (1 − p)^k p log((1 − p)^k p) = (−(1 − p) log(1 − p) − p log p)/p.
The entropy increases as the probability p decreases, reflecting greater uncertainty as success becomes rarer.
Fisher's Information (Geometric Distribution, Failures Before Success)
Fisher information measures the amount of information that an observable random variable carries about an unknown parameter p. For the geometric distribution (failures before the first success), the Fisher information with respect to p is given by:
I(p) = 1/(p²(1 − p)).
Proof:
The likelihood function for a geometric random variable Y is:
L(p; k) = (1 − p)^k p.
The log-likelihood function is:
ℓ(p; k) = k ln(1 − p) + ln p.
The score function (first derivative of the log-likelihood w.r.t. p) is:
∂ℓ/∂p = 1/p − k/(1 − p).
The second derivative of the log-likelihood function is:
∂²ℓ/∂p² = −1/p² − k/(1 − p)².
Fisher information is calculated as the negative expected value of the second derivative:
I(p) = −E[∂²ℓ/∂p²] = 1/p² + E[Y]/(1 − p)² = 1/p² + 1/(p(1 − p)) = 1/(p²(1 − p)).
Fisher information increases as p decreases, indicating that rarer successes provide more information about the parameter p.
Entropy (Geometric Distribution, Trials Until Success)
For the geometric distribution modeling the number of trials until the first success, the probability mass function is:
Pr(X = k) = (1 − p)^(k−1) p, k = 1, 2, 3, ...
The entropy for this distribution is given by:
H(X) = (−(1 − p) log(1 − p) − p log p)/p,
the same as for the failure count, since X = Y + 1 is a deterministic shift.
Entropy increases as p decreases, reflecting greater uncertainty as the probability of success in each trial becomes smaller.
Fisher's Information (Geometric Distribution, Trials Until Success)
Fisher information for the geometric distribution modeling the number of trials until the first success is given by:
I(p) = 1/(p²(1 − p)),
the same as in the failures-before-success parameterization, since the two variables differ only by a constant shift.
Proof:
The likelihood function for a geometric random variable X is:
L(p; k) = (1 − p)^(k−1) p.
The log-likelihood function is:
ℓ(p; k) = (k − 1) ln(1 − p) + ln p.
The score function (first derivative of the log-likelihood w.r.t. p) is:
∂ℓ/∂p = 1/p − (k − 1)/(1 − p).
The second derivative of the log-likelihood function is:
∂²ℓ/∂p² = −1/p² − (k − 1)/(1 − p)².
Fisher information is calculated as the negative expected value of the second derivative:
I(p) = 1/p² + (E[X] − 1)/(1 − p)² = 1/p² + 1/(p(1 − p)) = 1/(p²(1 − p)).
General properties
The probability generating functions of geometric random variables X and Y defined over {1, 2, 3, ...} and {0, 1, 2, ...} are, respectively,
G_X(s) = ps/(1 − (1 − p)s), G_Y(s) = p/(1 − (1 − p)s), for |s| < (1 − p)⁻¹.
The characteristic function is equal to G(e^(it)), so the geometric distribution's characteristic function, when defined over {1, 2, 3, ...} and {0, 1, 2, ...} respectively, is
φ_X(t) = pe^(it)/(1 − (1 − p)e^(it)), φ_Y(t) = p/(1 − (1 − p)e^(it)).
The entropy of a geometric distribution with parameter p is
H(p) = (−(1 − p) log(1 − p) − p log p)/p.
Given a fixed mean, the geometric distribution is the maximum entropy probability distribution among all discrete probability distributions with that mean. The corresponding continuous distribution is the exponential distribution.
The geometric distribution defined on {0, 1, 2, ...} is infinitely divisible, that is, for any positive integer n, there exist n independent identically distributed random variables whose sum has the same geometric distribution. This is because the negative binomial distribution can be derived from a Poisson-stopped sum of logarithmic random variables.
The decimal digits of the geometrically distributed random variable Y are a sequence of independent (and not identically distributed) random variables. For example, the hundreds digit D has this probability distribution:
where q = 1 − p, and similarly for the other digits, and, more generally, similarly for numeral systems with other bases than 10. When the base is 2, this shows that a geometrically distributed random variable can be written as a sum of independent random variables whose probability distributions are indecomposable.
Golomb coding is the optimal prefix code for the geometric distribution.
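To illustrate the idea (a minimal sketch, not from the article), a Golomb code writes the quotient by an integer parameter m in unary and the remainder in truncated binary; the choice of m, which in practice depends on the distribution's p, is left to the caller here:

```python
def golomb_encode(n: int, m: int) -> str:
    """Golomb codeword for a nonnegative integer n with parameter m >= 1."""
    q, r = divmod(n, m)
    code = "1" * q + "0"                  # quotient in unary, terminated by 0
    b = (m - 1).bit_length()              # ceil(log2(m)) bits for the remainder
    if b == 0:                            # m == 1: remainder carries no information
        return code
    cutoff = (1 << b) - m                 # small remainders get one bit fewer
    if r < cutoff:
        return code + format(r, f"0{b - 1}b")
    return code + format(r + cutoff, f"0{b}b")
```

For example, golomb_encode(4, 3) yields "1010": one "1" for the quotient, a "0" terminator, then "10" for the remainder in truncated binary.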
Related distributions
The sum of $n$ independent geometric random variables with parameter $p$ is a negative binomial random variable with parameters $n$ and $p$. The geometric distribution is a special case of the negative binomial distribution, with $r = 1$.
The geometric distribution is a special case of the discrete compound Poisson distribution.
The minimum of $n$ geometric random variables with parameters $p_1, p_2, \dots, p_n$ is also geometrically distributed, with parameter $1 - \prod_{i=1}^{n}(1 - p_i)$.
Suppose 0 < r < 1, and for k = 1, 2, 3, ... the random variable $X_k$ has a Poisson distribution with expected value $r^k / k$. Then
$$\sum_{k=1}^{\infty} k\, X_k$$
has a geometric distribution taking values in $\{0, 1, 2, \dots\}$, with expected value r/(1 − r).
The exponential distribution is the continuous analogue of the geometric distribution. Applying the floor function to the exponential distribution with parameter $\lambda$ creates a geometric distribution with parameter $p = 1 - e^{-\lambda}$ defined over $\{0, 1, 2, \dots\}$. This can be used to generate geometrically distributed random numbers as detailed in § Random variate generation.
If p = 1/n and X is geometrically distributed with parameter p, then the distribution of X/n approaches an exponential distribution with expected value 1 as n → ∞, since
$$\Pr(X/n > a) = \Pr(X > na) = (1 - p)^{na} = \left(1 - \frac{1}{n}\right)^{na} \to e^{-a} \quad \text{as } n \to \infty.$$
More generally, if p = λ/n, where λ is a parameter, then as n → ∞ the distribution of X/n approaches an exponential distribution with rate λ:
$$\lim_{n \to \infty} \Pr(X/n > a) = \lim_{n \to \infty} \left(1 - \frac{\lambda}{n}\right)^{na} = e^{-\lambda a},$$
therefore the distribution function of X/n converges to $1 - e^{-\lambda a}$, which is that of an exponential random variable.
The index of dispersion of the geometric distribution defined over $\{0, 1, 2, \dots\}$ is $\frac{1}{p}$ and its coefficient of variation is $\frac{1}{\sqrt{1 - p}}$. The distribution is overdispersed, since its index of dispersion is greater than one.
Statistical inference
The true parameter of an unknown geometric distribution can be inferred through estimators and conjugate distributions.
Method of moments
Provided they exist, the first $l$ moments of a probability distribution can be estimated from a sample $x_1, x_2, \dots, x_n$ using the formula
$$\hat{m}_i = \frac{1}{n} \sum_{k=1}^{n} x_k^{\,i},$$
where $\hat{m}_i$ is the $i$th sample moment and $1 \le i \le l$. Estimating $\operatorname{E}[X]$ with $\hat{m}_1$ gives the sample mean, denoted $\bar{x}$. Substituting this estimate in the formula for the expected value of a geometric distribution and solving for $p$ gives the estimators $\hat{p} = \frac{1}{\bar{x}}$ and $\hat{p} = \frac{1}{\bar{x} + 1}$ when supported on $\{1, 2, 3, \dots\}$ and $\{0, 1, 2, \dots\}$ respectively. These estimators are biased, since $\operatorname{E}\!\left[\frac{1}{\bar{x}}\right] > \frac{1}{\operatorname{E}[\bar{x}]} = p$ as a result of Jensen's inequality.
Maximum likelihood estimation
The maximum likelihood estimator of $p$ is the value that maximizes the likelihood function given a sample. By finding the zero of the derivative of the log-likelihood function when the distribution is defined over $\{1, 2, 3, \dots\}$, the maximum likelihood estimator can be found to be $\hat{p} = \frac{1}{\bar{x}}$, where $\bar{x}$ is the sample mean. If the domain is $\{0, 1, 2, \dots\}$, then the estimator shifts to $\hat{p} = \frac{1}{\bar{x} + 1}$. As previously discussed in § Method of moments, these estimators are biased.
Regardless of the domain, the bias is equal to
$$b \equiv \operatorname{E}\left[\hat{p}_{\mathrm{mle}} - p\right] = \frac{p\,(1 - p)}{n},$$
which yields the bias-corrected maximum likelihood estimator,
$$\hat{p}^{*}_{\mathrm{mle}} = \hat{p}_{\mathrm{mle}} - \hat{b}.$$
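For illustration only (a sketch under assumed values, not from the article; the true parameter, sample size, and seed are arbitrary), the estimators above are one-liners in Python, with $\hat{b}$ taken as the plug-in estimate obtained by substituting $\hat{p}_{\mathrm{mle}}$ for $p$:

```python
import numpy as np

rng = np.random.default_rng(1)
p_true, n = 0.2, 500
x = rng.geometric(p_true, size=n)         # sample on {1, 2, 3, ...}

p_mle = 1.0 / x.mean()                    # maximum likelihood estimate
b_hat = p_mle * (1.0 - p_mle) / n         # plug-in bias estimate p(1-p)/n
p_star = p_mle - b_hat                    # bias-corrected estimator
print(p_mle, p_star)
```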
Bayesian inference
In Bayesian inference, the parameter $p$ is a random variable from a prior distribution with a posterior distribution calculated using Bayes' theorem after observing samples. If a beta distribution is chosen as the prior distribution, then the posterior will also be a beta distribution, and it is called the conjugate distribution. In particular, if a $\mathrm{Beta}(\alpha, \beta)$ prior is selected, then the posterior, after observing samples $k_1, \dots, k_n \in \{1, 2, 3, \dots\}$, is
$$p \sim \mathrm{Beta}\!\left(\alpha + n,\ \beta + \sum_{i=1}^{n}(k_i - 1)\right).$$
Alternatively, if the samples are in $\{0, 1, 2, \dots\}$, the posterior distribution is
$$p \sim \mathrm{Beta}\!\left(\alpha + n,\ \beta + \sum_{i=1}^{n} k_i\right).$$
Since the expected value of a $\mathrm{Beta}(\alpha, \beta)$ distribution is $\frac{\alpha}{\alpha + \beta}$, as $\alpha$ and $\beta$ approach zero, the posterior mean approaches its maximum likelihood estimate.
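As a worked sketch (illustrative only; the flat prior, true parameter, sample size, and seed are arbitrary choices, not from the article), conjugacy makes the posterior update a matter of bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(2)
p_true = 0.25
k = rng.geometric(p_true, size=50)        # samples on {1, 2, 3, ...}

alpha0, beta0 = 1.0, 1.0                  # Beta(1, 1): a flat prior
alpha_post = alpha0 + len(k)              # alpha + n
beta_post = beta0 + (k - 1).sum()         # beta + sum of (k_i - 1)

posterior_mean = alpha_post / (alpha_post + beta_post)
print(posterior_mean)                     # approaches 1 / mean(k) as n grows
```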
Random variate generation
The geometric distribution can be generated experimentally from i.i.d. standard uniform random variables by finding the first such random variable to be less than or equal to $p$. However, the number of random variables needed is also geometrically distributed and the algorithm slows as $p$ decreases.
Random generation can be done in constant time by truncating exponential random numbers. An exponential random variable $E$ can become geometrically distributed with parameter $p$ through $\lceil -E / \ln(1 - p) \rceil$. In turn, $E$ can be generated from a standard uniform random variable $U$, altering the formula into $\lceil \ln(U) / \ln(1 - p) \rceil$.
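A minimal constant-time sampler based on this formula (an illustrative sketch; the guard against a zero uniform draw is a practical detail added here, not from the article):

```python
import math
import random

def geometric_variate(p: float, rng=random) -> int:
    """One geometric sample on {1, 2, 3, ...} via inverse transform sampling."""
    u = 1.0 - rng.random()                     # u lies in (0, 1], avoiding log(0)
    return max(1, math.ceil(math.log(u) / math.log(1.0 - p)))
```

Unlike the rejection-style method above, its running time does not depend on p.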
Applications
The geometric distribution is used in many disciplines. In queueing theory, the M/M/1 queue has a steady state following a geometric distribution. In stochastic processes, the Yule–Furry process is geometrically distributed. The distribution also arises when modeling the lifetime of a device in discrete contexts. It has also been used to fit data, including the modeling of patients spreading COVID-19.
| Mathematics | Probability | null |
45930 | https://en.wikipedia.org/wiki/List%20of%20index%20fossils | List of index fossils | Index fossils (also known as guide fossils or indicator fossils) are fossils used to define and identify geologic periods (or faunal stages). Index fossils must have a short vertical range, wide geographic distribution and rapid evolutionary trends. Another term, "zone fossil", is used when the fossil has all the characters stated above except wide geographical distribution; thus, they correlate the surrounding rock to a biozone rather than a specific time period.
| Physical sciences | Stratigraphy | Earth science |
45939 | https://en.wikipedia.org/wiki/Primary%20color | Primary color | A set of primary colors (see spelling differences) consists of colorants or colored lights that can be mixed in varying amounts to produce a gamut of colors. This is the essential method used to create the perception of a broad range of colors in, e.g., electronic displays, color printing, and paintings. Perceptions associated with a given combination of primary colors can be predicted by an appropriate mixing model (e.g., additive, subtractive) that reflects the physics of how light interacts with physical media, and ultimately the retina. The most common color mixing models are the additive primary colors (red, green, blue) and the subtractive primary colors (cyan, magenta, yellow). Red, yellow and blue are also commonly taught as primary colors (usually in the context of subtractive color mixing as opposed to additive color mixing), despite some criticism due to its lack of scientific basis.
Primary colors can also be conceptual (not necessarily real), either as additive mathematical elements of a color space or as irreducible phenomenological categories in domains such as psychology and philosophy. Color space primaries are precisely defined and empirically rooted in psychophysical colorimetry experiments which are foundational for understanding color vision. Primaries of some color spaces are complete (that is, all visible colors are described in terms of their primaries weighted by nonnegative primary intensity coefficients) but necessarily imaginary (that is, there is no plausible way that those primary colors could be represented physically, or perceived). Phenomenological accounts of primary colors, such as the psychological primaries, have been used as the conceptual basis for practical color applications even though they are not a quantitative description in and of themselves.
Sets of color space primaries are generally arbitrary, in the sense that there is no one set of primaries that can be considered the canonical set. Primary pigments or light sources are selected for a given application on the basis of subjective preferences as well as practical factors such as cost, stability, availability etc.
The concept of primary colors has a long, complex history. The choice of primary colors has changed over time in different domains that study color. Descriptions of primary colors come from areas including philosophy, art history, color order systems, and scientific work involving the physics of light and perception of color.
Art education materials commonly use red, yellow, and blue as primary colors, sometimes suggesting that they can mix all colors. No set of real colorants or lights can mix all possible colors, however. In other domains, the three primary colors are typically red, green and blue, which are more closely aligned to the sensitivities of the photoreceptor pigments in the cone cells.
Color model primaries
A color model is an abstract model intended to describe the ways that colors behave, especially in color mixing. Most color models are defined by the interaction of multiple primary colors. Since most humans are trichromatic, color models that want to reproduce a meaningful portion of a human's perceptual gamut must use at least three primaries. More than three primaries are allowed, for example, to increase the size of the gamut of the color space, but the entire human perceptual gamut can be reproduced with just three primaries (albeit imaginary ones as in the CIE XYZ color space).
Some humans (and most mammals) are dichromats, corresponding to specific forms of color blindness in which color vision is mediated by only two of the types of color receptors. Dichromats require only two primaries to reproduce their entire gamut and their participation in color matching experiments was essential in the determination of cone fundamentals leading to all modern color spaces. Despite most vertebrates being tetrachromatic, and therefore requiring four primaries to reproduce their entire gamut, there is only one scholarly report of a functional human tetrachromat, for which trichromatic color models are insufficient.
Additive models
The perception elicited by multiple light sources co-stimulating the same area of the retina is additive, i.e., predicted via summing the spectral power distributions (the intensity of each wavelength) of the individual light sources assuming a color matching context. For example, a purple spotlight on a dark background could be matched with coincident blue and red spotlights that are both dimmer than the purple spotlight. If the intensity of the purple spotlight was doubled it could be matched by doubling the intensities of both the red and blue spotlights that matched the original purple. The principles of additive color mixing are embodied in Grassmann's laws. Additive mixing is sometimes described as "additive color matching" to emphasize the fact the predictions based on additivity only apply assuming the color matching context. Additivity relies on assumptions of the color matching context such as the match being in the foveal field of view, under appropriate luminance, etc.
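As a toy illustration (the numbers and four-band spectra are invented for the example and are not from the article), additive mixing is just the summation of spectral power distributions, and Grassmann-style scaling follows from linearity:

```python
import numpy as np

# Invented coarse spectral power distributions over four wavelength bands.
red_light  = np.array([0.0, 0.0, 0.1, 0.9])
blue_light = np.array([0.8, 0.2, 0.0, 0.0])
purple     = red_light + blue_light           # additive mixing: sum the SPDs

# Doubling the purple target is matched by doubling both primaries.
assert np.allclose(2 * purple, 2 * red_light + 2 * blue_light)
```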
Additive mixing of coincident spot lights was applied in the experiments used to derive the CIE 1931 colorspace (see color space primaries section). The original monochromatic primaries of the wavelengths of 435.8 nm (violet), 546.1 nm (green), and 700 nm (red) were used in this application due to the convenience they afforded to the experimental work.
Small red, green, and blue elements (with controllable brightness) in electronic displays mix additively from an appropriate viewing distance to synthesize compelling colored images. This specific type of additive mixing is described as partitive mixing. Red, green, and blue light are popular primaries for partitive mixing since primary lights with those hues provide a large color triangle (gamut).
The exact colors chosen for additive primaries are a compromise between the available technology (including considerations such as cost and power usage) and the need for large chromaticity gamut. For example, in 1953 the NTSC specified primaries that were representative of the phosphors available in that era for color CRTs. Over decades, market pressures for brighter colors resulted in CRTs using primaries that deviated significantly from the original standard. Currently, ITU-R BT.709-5 primaries are typical for high-definition television.
Subtractive models
The subtractive color mixing model predicts the resultant spectral power distribution of light filtered through overlaid partially absorbing materials, usually in the context of an underlying reflective surface such as white paper. Each layer partially absorbs some wavelengths of light from the illumination while letting others pass through, resulting in a colored appearance. The resultant spectral power distribution is predicted by the wavelength-by-wavelength product of the spectral reflectance of the illumination and the product of the spectral reflectances of all of the layers. Overlapping layers of ink in printing mix subtractively over reflecting white paper, while the reflected light mixes in a partitive way to generate color images. Importantly, unlike additive mixture, the color of the mixture is not well predicted by the colors of the individual dyes or inks. The typical number of inks in such a printing process is 3 (CMY) or 4 (CMYK), but can commonly range to 6 (e.g., Pantone hexachrome). In general, using fewer inks as primaries results in more economical printing but using more may result in better color reproduction.
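To make the wavelength-by-wavelength product concrete, here is a toy sketch (the coarse five-band spectra are invented for illustration and are not from the article):

```python
import numpy as np

wavelengths = np.array([450, 500, 550, 600, 650])     # nm, coarse sampling
illuminant  = np.ones(5)                              # idealized flat white light
paper       = np.full(5, 0.95)                        # nearly ideal white paper
cyan_ink    = np.array([0.9, 0.8, 0.6, 0.2, 0.1])     # passes short wavelengths
yellow_ink  = np.array([0.1, 0.4, 0.8, 0.9, 0.9])     # passes long wavelengths

# Subtractive prediction: multiply the spectra wavelength by wavelength.
reflected = illuminant * cyan_ink * yellow_ink * paper
print(dict(zip(wavelengths.tolist(), reflected.round(3))))
```

The product peaks near 550 nm, which is why overlaying cyan and yellow inks looks green even though neither ink is green on its own.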
Cyan (C), magenta (M), and yellow (Y) are good chromatic subtractive primaries in that filters with those colors can be overlaid to yield a surprisingly large chromaticity gamut. A black (K) ink (from the older "key plate") is also used in CMYK systems to augment C, M and Y inks or dyes: this is more efficient in terms of time and expense and less likely to introduce visible defects. Before the color names cyan and magenta were in common use, these primaries were often known as blue and red, respectively, and their exact color has changed over time with access to new pigments and technologies. Organizations such as Fogra, European Color Initiative and SWOP publish colorimetric CMYK standards for the printing industry.
Traditional red, yellow, and blue primary colors as a subtractive system
Color theorists since the seventeenth century, and many artists and designers since that time, have taken red, yellow, and blue to be the primary colors (see history below). This RYB system, in "traditional color theory", is often used to order and compare colors, and sometimes proposed as a system of mixing pigments to get a wide range of, or "all", colors.
O'Connor describes the role of RYB primaries in traditional color theory:
Traditional color theory is based on experience with pigments, more than on the science of light. In 1920, Snow and Froehlich explained:
The widespread adoption of teaching of RYB as primary colors in post-secondary art schools in the twentieth century has been attributed to the influence of the Bauhaus, where Johannes Itten developed his ideas on color during his time there in the 1920s, and of his book on color published in 1961.
In discussing color design for the web, Jason Beaird writes:
As with any system of real primaries, not all colors can be mixed from RYB primaries.
For example, if the blue pigment is a deep Prussian blue, then a muddy desaturated green may be the best that can be had by mixing with yellow. To achieve a larger gamut of colors via mixing, the blue and red pigments used in illustrative materials such as the Color Mixing Guide in the image are often closer to peacock blue (a blue-green or cyan) and carmine (or crimson or magenta) respectively.
Printers traditionally used inks of such colors, known as "process blue" and "process red", before modern color science and the printing industry converged on the process colors (and names) cyan and magenta. RYB is not the same as CMY, nor exactly subtractive, but there is a range of ways to conceptualize traditional RYB as a subtractive system in the framework of modern color science.
Faber-Castell identifies the following three colors: "Cadmium yellow" (number 107) for yellow, "Phthalo blue" (number 110) for blue and "Deep scarlet red" (number 219) for red, as the closest to primary colors for its Art & Graphic color pencils range. "Cadmium yellow" (number 107) for yellow, "Phthalo blue" (number 110) for blue and "Pale geranium lake" (number 121) for red, are provided as primary colors in its basic 5 color "Albrecht Dürer" watercolor marker set.
Mixing pigments in limited palettes
The first known use of red, yellow, and blue as "simple" or "primary" colors, by Chalcidius, ca. AD 300, was possibly based on the art of paint mixing.
Mixing pigments for the purpose of creating realistic paintings with diverse color gamuts is known to have been practiced at least since Ancient Greece (see history section). The identity of a minimal set of pigments needed to mix diverse gamuts has long been the subject of speculation by theorists whose claims have changed over time, for example, Pliny's white, black, one or another red, and "sil", which might have been yellow or blue; Robert Boyle's white, black, red, yellow, and blue; and variations with more or fewer "primary" colors or pigments. Some writers and artists have found these schemes difficult to reconcile with the actual practice of painting. Nonetheless, it has long been known that limited palettes consisting of a small set of pigments are sufficient to mix a diverse gamut of colors.
The set of pigments available to mix diverse gamuts of color (in various media such as oil, watercolor, acrylic, gouache, and pastel) is large and has changed throughout history. There is no consensus on a specific set of pigments that are considered primary colors; the choice of pigments depends entirely on the artist's subjective preference of subject and style of art, as well as material considerations like lightfastness and mixing behavior. A variety of limited palettes have been employed by artists for their work.
The color of light (i.e., the spectral power distribution) reflected from illuminated surfaces coated in paint mixes is not well approximated by a subtractive or additive mixing model. Color predictions that incorporate light scattering effects of pigment particles and paint layer thickness require approaches based on the Kubelka–Munk equations, but even such approaches are not expected to predict the color of paint mixtures precisely due to inherent limitations. Artists typically rely on mixing experience and "recipes" to mix desired colors from a small initial set of primaries and do not use mathematical modeling.
MacEvoy explains why artists often chose a palette closer to RYB than to CMY:
Color space primaries
A color space is a subset of a color model, where the primaries have been defined, either directly as photometric spectra, or indirectly as a function of other color spaces. For example, sRGB and Adobe RGB are both color spaces based on the RGB color model. However, the green primary of Adobe RGB is more saturated than the equivalent in sRGB, and therefore yields a larger gamut. Otherwise, choice of color space is largely arbitrary and depends on the utility to a specific application.
Imaginary primaries
Color space primaries are derived from canonical colorimetric experiments that represent a standardized model of an observer (i.e., a set of color matching functions) adopted by Commission Internationale de l'Eclairage (CIE) standards. The abbreviated account of color space primaries in this section is based on descriptions in Colorimetry - Understanding The CIE System.
The CIE 1931 standard observer is derived from experiments in which participants observe a foveal secondary bipartite field with a dark surround. Half of the field is illuminated with a monochromatic test stimulus (ranging from 380 nm to 780 nm) and the other half is the matching stimulus illuminated with three coincident monochromatic primary lights: 700 nm for red (R), 546.1 nm for green (G), and 435.8 nm for blue (B). These primaries correspond to CIE RGB color space. The intensities of the primary lights could be adjusted by the participant observer until the matching stimulus matched the test stimulus, as predicted by Grassmann's laws of additive mixing. Different standard observers from other color matching experiments have been derived since 1931. The variations in experiments include choices of primary lights, field of view, number of participants etc. but the presentation below is representative of those results.
Matching was performed across many participants in incremental steps along the range of test stimulus wavelengths (380 nm to 780 nm) to ultimately yield the color matching functions $\bar{r}(\lambda)$, $\bar{g}(\lambda)$ and $\bar{b}(\lambda)$ that represent the relative intensities of red, green, and blue light to match each wavelength ($\lambda$). These functions imply that a test stimulus with any spectral power distribution, $\phi(\lambda)$, can be matched by $R$, $G$, and $B$ units of each primary where:
$$R = \int_{380}^{780} \bar{r}(\lambda)\,\phi(\lambda)\,d\lambda, \qquad G = \int_{380}^{780} \bar{g}(\lambda)\,\phi(\lambda)\,d\lambda, \qquad B = \int_{380}^{780} \bar{b}(\lambda)\,\phi(\lambda)\,d\lambda.$$
Each integral term in the above equation is known as a tristimulus value and measures amounts in the adopted units. No set of real primary lights can match another monochromatic light under additive mixing so at least one of the color matching functions is negative for each wavelength. A negative tristimulus value corresponds to that primary being added to the test stimulus instead of the matching stimulus to achieve a match.
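As a purely illustrative sketch (the Gaussian curves below are invented stand-ins, not the real CIE color matching functions), the tristimulus integrals reduce to sums once the spectra are sampled:

```python
import numpy as np

lam = np.arange(380, 781, 5, dtype=float)             # wavelengths in nm
step = 5.0                                            # sampling interval

def bump(center, width):                              # invented stand-in curve
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

phi = bump(560, 40)                                   # an assumed test spectrum
rbar, gbar, bbar = bump(600, 50), bump(550, 40), bump(450, 30)

# Tristimulus values: Riemann-sum approximations of the integrals.
R = (rbar * phi).sum() * step
G = (gbar * phi).sum() * step
B = (bbar * phi).sum() * step
print(R, G, B)
```

A real computation would substitute tabulated CIE data for the stand-in curves.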
The negative tristimulus values made certain types of calculations difficult, so the CIE put forth new color matching functions $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ defined by the following linear transformation:
$$\begin{bmatrix} \bar{x}(\lambda)\\ \bar{y}(\lambda)\\ \bar{z}(\lambda) \end{bmatrix} = \frac{1}{0.17697} \begin{bmatrix} 0.49000 & 0.31000 & 0.20000\\ 0.17697 & 0.81240 & 0.01063\\ 0.00000 & 0.01000 & 0.99000 \end{bmatrix} \begin{bmatrix} \bar{r}(\lambda)\\ \bar{g}(\lambda)\\ \bar{b}(\lambda) \end{bmatrix}$$
These new color matching functions correspond to imaginary primary lights X, Y, and Z (CIE XYZ color space). All colors can be matched by finding the amounts $X$, $Y$, and $Z$ analogously to $R$, $G$, and $B$ above. The functions $\bar{x}(\lambda)$, $\bar{y}(\lambda)$, and $\bar{z}(\lambda)$ were chosen based on the specifications that they should be nonnegative for all wavelengths, that $\bar{y}(\lambda)$ be equal to photometric luminance, and that $X = Y = Z$ for an equienergy (i.e., a uniform spectral power distribution) test stimulus.
Derivations use the color matching functions, along with data from other experiments, to ultimately yield the cone fundamentals: $\bar{l}(\lambda)$, $\bar{m}(\lambda)$ and $\bar{s}(\lambda)$. These functions correspond to the response curves for the three types of color photoreceptors found in the human retina: long-wavelength (L), medium-wavelength (M), and short-wavelength (S) cones. The three cone fundamentals are related to the original color matching functions by a fixed linear transformation (specific to a 10° field).
LMS color space comprises three primary lights (L, M, and S) that stimulate only the L-, M-, and S-cones respectively. A real primary that stimulates only the M-cone is impossible, and therefore these primaries are imaginary. The LMS color space has significant physiological relevance as these three photoreceptors mediate trichromatic color vision in humans.
Both XYZ and LMS color spaces are complete since all colors in the gamut of the standard observer are contained within their color spaces. Complete color spaces must have imaginary primaries, but color spaces with imaginary primaries are not necessarily complete (e.g. ProPhoto RGB color space).
Real primaries
Color spaces used in color reproduction must use real primaries that can be reproduced by practical sources, either lights in additive models, or pigments in subtractive models. Most RGB color spaces have real primaries, though some maintain imaginary primaries. For example, all the sRGB primaries fall within the gamut of human perception, and so can be easily represented by practical light sources, including CRT and LED displays, which is why sRGB is still the color space of choice for digital displays.
A color in a color space is defined as a combination of its primaries, where each primary must give a non-negative contribution. Any color space based on a finite number of real primaries is incomplete in that it cannot reproduce every color within the gamut of the standard observer.
Practical color spaces such as sRGB and scRGB are typically (at least partially) defined in terms of linear transformations from CIE XYZ, and color management often uses CIE XYZ as a middle point for transformations between two other color spaces.
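For illustration (a sketch using the published sRGB transfer function and D65 matrix; error handling and the reverse direction are omitted), converting a display color to the CIE XYZ middle point looks like this:

```python
import numpy as np

# Linear-sRGB -> CIE XYZ matrix for the D65 white point.
SRGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def srgb_to_xyz(rgb8):
    """Convert an 8-bit sRGB triple to XYZ tristimulus values."""
    c = np.asarray(rgb8, dtype=float) / 255.0
    # Undo the sRGB transfer function to recover linear light.
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_XYZ @ linear

print(srgb_to_xyz([255, 255, 255]))   # approximately the D65 white point
```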
Most color spaces in the color-matching context (those defined by their relationship to CIE XYZ) inherit its three-dimensionality. However, more complex color appearance models like CIECAM02 require extra dimensions to describe how colors appear under different viewing conditions.
Psychological primaries
The opponent process was proposed by Ewald Hering in which he described the four unique hues (later called psychological primaries in some contexts): red, green, yellow and blue. To Hering, the unique hues appeared as pure colors, while all others were "psychological mixes" of two of them. Furthermore, these colors were organized in "opponent" pairs, red vs. green and yellow vs. blue so that mixing could occur across pairs (e.g., a yellowish green or a yellowish red) but not within a pair (i.e., reddish green cannot be imagined). An achromatic opponent process along black and white is also part of Hering's explanation of color perception. Hering asserted that we did not know why these color relationships were true but knew that they were. Although there is a great deal of evidence for the opponent process in the form of neural mechanisms, there is currently no clear mapping of the psychological primaries to neural correlates.
The psychological primaries were applied by Richard S. Hunter as the primaries for Hunter L,a,b colorspace that led to the creation of CIELAB. The Natural Color System is also directly inspired by the psychological primaries.
History
Philosophy
Philosophical writing from ancient Greece has described notions of primary colors, but they can be difficult to interpret in terms of modern color science. Theophrastus (c. 371–287 BCE) described Democritus' position that the primary colors were white, black, red, and green. In Classical Greece, Empedocles identified white, black, red, and (depending on the interpretation) either yellow or green as primary colors. Aristotle described a notion in which white and black could be mixed in different ratios to yield chromatic colors; this idea had considerable influence in Western thinking about color. François d'Aguilon's notion of the five primary colors (white, yellow, red, blue, black) was influenced by Aristotle's idea of the chromatic colors being made of black and white. The 20th century philosopher Ludwig Wittgenstein explored color-related ideas using red, green, blue, and yellow as primary colors.
Light and color vision
Isaac Newton used the term "primary color" to describe the colored spectral components of sunlight. A number of color theorists did not agree with Newton's work. David Brewster advocated that red, yellow, and blue light could be combined into any spectral hue late into the 1840s. Thomas Young proposed red, green, and violet as the three primary colors, while James Clerk Maxwell favored changing violet to blue. Hermann von Helmholtz proposed "a slightly purplish red, a vegetation-green, slightly yellowish, and an ultramarine-blue" as a trio. Newton, Young, Maxwell, and Helmholtz were all prominent contributors to "modern color science" that ultimately described the perception of color in terms of the three types of retinal photoreceptors.
Colorants
John Gage's The Fortunes Of Apelles provides a summary of the history of primary colors as pigments in painting and describes the evolution of the idea as complex. Gage begins by describing Pliny the Elder's account of notable Greek painters who used four primaries. Pliny distinguished the pigments (i.e., substances) from their apparent colors: white from Milos (ex albis), red from Sinope (ex rubris), Attic yellow (sil) and atramentum (ex nigris). Sil was historically confused as a blue pigment between the 16th and 17th centuries, leading to claims about white, black, red, and blue being the fewest colors required for painting. Thomas Bardwell, an 18th century Norwich portrait painter, was skeptical of the practical relevance of Pliny's account.
Robert Boyle, the Irish chemist, introduced the term primary color in English in 1664 and claimed that there were five primary colors (white, black, red, yellow, and blue). The German painter Joachim von Sandrart eventually proposed removing white and black from the primaries and that one only needed red, yellow, blue, and green to paint "the whole creation".
Red, yellow, and blue as primaries became a popular notion in the 18th and 19th centuries. Jacob Christoph Le Blon, an engraver, was the first to use separate plates for each color in mezzotint printmaking: yellow, red, and blue, plus black to add shades and contrast. Le Blon used primitive in 1725 to describe red, yellow, and blue in a very similar sense as Boyle used primary. Moses Harris, an entomologist and engraver, also describes red, yellow, and blue as "primitive" colors in 1766. Léonor Mérimée described red, yellow, and blue in his book on painting (originally published in French in 1830) as the three simple/primitive colors that can make a "great variety" of tones and colors found in nature. George Field, a chemist, used the word primary to describe red, yellow, and blue in 1835. Michel Eugène Chevreul, also a chemist, discussed red, yellow, and blue as "primary" colors in 1839.
Color order systems
Historical perspectives on color order systems ("catalogs" of color) that were proposed in the 18th and 19th centuries describe them as using red, yellow, and blue pigments as chromatic primaries. Tobias Mayer (a German mathematician, physicist, and astronomer) described a triangular bipyramid with red, yellow and blue at the 3 vertices in the same plane, white at the top vertex, and black at the bottom vertex in a public lecture in 1758. There are 11 planes of colors between the white and black vertices inside the triangular bipyramid. Mayer did not seem to distinguish between colored light and colorant, though he used vermilion, orpiment (King’s yellow), and Bergblau (azurite) in partially complete colorings of planes in his solid. Johann Heinrich Lambert (a Swiss mathematician, physicist, and astronomer) proposed a triangular pyramid with gamboge, carmine, and Prussian blue as primaries and only white at the top vertex (since Lambert could produce a mixture that was sufficiently black with those pigments). Lambert's work on this system was published in 1772. Philipp Otto Runge (the Romantic German painter) firmly believed in the theory of red, yellow and blue as the primary colors (again without distinguishing light color and colorant). His color sphere was ultimately described in an essay titled Farben-Kugel (color ball) published by Goethe in 1810. His spherical model of colors equally spaced red, yellow, and blue longitudinally, with orange, green, and violet between them, and white and black at opposite poles.
Red, yellow, and blue as primary colors
Numerous authors have taught that red, yellow, and blue (RYB) are the primary colors in art education materials since at least the 19th century, following the ideas tabulated above from earlier centuries.
A wide variety of contemporary educational sources also describe the RYB primaries. These sources range from children's books and art material manufacturers to painting and color guides. Art education materials often suggest that RYB primaries can be mixed to create all other colors.
Criticism
Albert Munsell, an American painter (and creator of the Munsell color system), referred to the notion of RYB primaries as "mischief", "a widely accepted error", and underspecified in his book A Color Notation, first published in 1905.
Itten's ideas about RYB primaries have been criticized as ignoring modern color science with demonstrations that some of Itten's claims about mixing RYB primaries are impossible.
| Physical sciences | Basics_7 | null |
45995 | https://en.wikipedia.org/wiki/Building | Building | A building or edifice is an enclosed structure with a roof and walls, usually standing permanently in one place, such as a house or factory. Buildings come in a variety of sizes, shapes, and functions, and have been adapted throughout history for numerous factors, from building materials available, to weather conditions, land prices, ground conditions, specific uses, prestige, and aesthetic reasons. To better understand the concept, see Nonbuilding structure for contrast.
Buildings serve several societal needs – occupancy, primarily as shelter from weather, security, living space, privacy, to store belongings, and to comfortably live and work. A building as a shelter represents a physical separation of the human habitat (a place of comfort and safety) from the outside (a place that may be harsh and harmful at times).
Ever since the first cave paintings, buildings have been objects or canvasses of much artistic expression. In recent years, interest in sustainable planning and building practices has become an intentional part of the design process of many new buildings and other structures, usually green buildings.
Definition
A building is 'a structure that has a roof and walls and stands more or less permanently in one place'; "there was a three-storey building on the corner"; "it was an imposing edifice". In the broadest interpretation a fence or wall is a building. However, the word structure is used more broadly than building, to include natural and human-made formations and ones that do not have walls; structure is more often used for a fence. Sturgis' Dictionary noted that building "differs from architecture in excluding all idea of artistic treatment; and it differs from construction in the idea of excluding scientific or highly skilful treatment."
Structural height in technical usage is the height to the highest architectural detail on the building from street level. Spires and masts may or may not be included in this height, depending on how they are classified. Spires and masts used as antennas are not generally included. The distinction between a low-rise and high-rise building is a matter of debate, but generally three stories or less is considered low-rise.
History
There is clear evidence of homebuilding from around 18,000 BC. Buildings became common during the Neolithic period.
Types
Residential
Single-family residential buildings are most often called houses or homes. Multi-family residential buildings containing more than one dwelling unit are called duplexes or apartment buildings. Condominiums are apartments that occupants own rather than rent. Houses may be built in pairs (semi-detached) or in terraces, where all but two of the houses have others on either side. Apartments may be built round courtyards or as rectangular blocks surrounded by plots of ground. Houses built as single dwellings may later be divided into apartments or bedsitters, or converted to other uses (e.g., offices or shops). Hotels, especially of the extended-stay variety (apartels), can be classed as residential.
Building types may range from huts to multimillion-dollar high-rise apartment blocks able to house thousands of people. Increasing settlement density in buildings (and smaller distances between buildings) is usually a response to high ground prices resulting from the desire of many people to live close to their places of employment or similar attractors.
Terms for residential buildings reflect such characteristics as function (e.g., holiday cottage (vacation home) or timeshare if occupied seasonally); size (cottage or great house); value (shack or mansion); manner of construction (log home or mobile home); architectural style (castle or Victorian); and proximity to geographical features (earth shelter, stilt house, houseboat, or floating home). For residents in need of special care, or those society considers dangerous enough to deprive of liberty, there are institutions (nursing homes, orphanages, psychiatric hospitals, and prisons) and group housing (barracks and dormitories).
Historically, many people lived in communal buildings called longhouses, smaller dwellings called pit-houses, and houses combined with barns, sometimes called housebarns.
Common building materials include brick, concrete, stone, and combinations thereof. Buildings are defined to be substantial, permanent structures. Such forms as yurts and motorhomes are therefore considered dwellings but not buildings.
Commercial
A commercial building is one in which at least one business is based and people do not live. Examples include stores, restaurants, and hotels.
Industrial
Industrial buildings are those in which heavy industry is done, such as manufacturing. These edifices include warehouses and factories.
Agricultural
Agricultural buildings are the outbuildings located on farms, such as barns.
Mixed use
Some buildings incorporate several or multiple different uses, most commonly commercial and residential.
Complex
Sometimes a group of inter-related (and possibly inter-connected) buildings is referred to as a complex – for example a housing complex, educational complex, hospital complex, etc.
Creation
The practice of designing, constructing, and operating buildings is most usually a collective effort of different groups of professionals and trades. Depending on the size, complexity, and purpose of a particular building project, the project team may include:
A real estate developer who secures funding for the project;
One or more financial institutions or other investors that provide the funding
Local planning and code authorities
A surveyor who performs an ALTA/ACSM survey and construction surveys throughout the project;
Construction managers who coordinate the effort of different groups of project participants;
Licensed architects and engineers who provide building design and prepare construction documents;
The principal design engineering disciplines, which would normally include the following professionals: civil, structural, mechanical building services or HVAC (heating, ventilation and air conditioning), electrical building services, and plumbing and drainage. Other specialist design engineers may also be involved, such as fire (prevention), acoustic, façade, building physics, telecoms, AV (audio visual) and BMS (building management systems) automatic controls engineers. These design engineers also prepare construction documents, which are issued to specialist contractors to obtain a price for the works and to follow for the installations.
Landscape architects;
Interior designers;
Other consultants;
Contractors who provide construction services and install building systems such as climate control, electrical, plumbing, decoration, fire protection, security and telecommunications;
Marketing or leasing agents;
Facility managers who are responsible for operating the building.
Regardless of their size or intended use, all buildings in the US must comply with zoning ordinances, building codes and other regulations such as fire codes, life safety codes and related standards.
Vehicles—such as trailers, caravans, ships and passenger aircraft—are treated as "buildings" for life safety purposes.
Ownership and funding
Mortgage loan
Real estate developer
Environmental impacts
Building services
Physical plant
Any building requires a certain general amount of internal infrastructure to function, which includes such elements as heating/cooling, power, telecommunications, water and wastewater. Especially in commercial buildings (such as offices or factories), these can be extremely intricate systems taking up large amounts of space (sometimes located in separate areas or double floors / false ceilings) and constitute a big part of the regular maintenance required.
Conveying systems
Systems for transport of people within buildings:
Elevator
Escalator
Moving sidewalk (horizontal and inclined)
Systems for transport of people between interconnected buildings:
Skyway
Underground city
Building damage
Buildings may be damaged during construction or during maintenance. They may be damaged by accidents involving storms, explosions, subsidence caused by mining, water withdrawal or poor foundations and landslides. Buildings may suffer fire damage and flooding. They may become dilapidated through lack of proper maintenance, or alteration work improperly carried out.
| Technology | Structures | null |
46037 | https://en.wikipedia.org/wiki/Porpoise | Porpoise | Porpoises are small dolphin-like cetaceans classified under the family Phocoenidae. Although similar in appearance to dolphins, they are more closely related to narwhals and belugas than to the true dolphins. There are eight extant species of porpoise, all among the smallest of the toothed whales. Porpoises are distinguished from dolphins by their flattened, spade-shaped teeth distinct from the conical teeth of dolphins, and lack of a pronounced beak, although some dolphins (e.g. Hector's dolphin) also lack a pronounced beak. Porpoises, and other cetaceans, belong to the clade Cetartiodactyla with even-toed ungulates.
Porpoises range in size from the vaquita, at 1.4 m (4.6 ft) in length and 54 kg (119 lb) in weight, to the Dall's porpoise, at 2.3 m (7.5 ft) and 220 kg (490 lb). Several species exhibit sexual dimorphism in that the females are larger than males. They have streamlined bodies and two limbs that are modified into flippers. Porpoises use echolocation as their primary sensory system. Some species are well adapted for diving to great depths. Like all cetaceans, they have a layer of fat, or blubber, under the skin to keep them warm in cold water.
Porpoises are abundant and found in a multitude of environments, including rivers (finless porpoise), coastal and shelf waters (harbour porpoise, vaquita) and open ocean (Dall's porpoise and spectacled porpoise), covering all water temperatures from tropical (Sea of Cortez, vaquita) to polar (Greenland, harbour porpoise). Porpoises feed largely on fish and squid, much like the rest of the odontocetes. Little is known about reproductive behaviour. Females may have one calf every year under favourable conditions. Calves are typically born in the spring and summer months and remain dependent on the female until the following spring. Porpoises produce ultrasonic clicks, which are used for both navigation (echolocation) and social communication. In contrast to many dolphin species, porpoises do not form large social groups.
Porpoises were, and still are, hunted by some countries by means of drive hunting. Larger threats to porpoises include extensive bycatch in gill nets, competition for food from fisheries, and marine pollution, in particular heavy metals and organochlorides. The vaquita is nearly extinct due to bycatch in gill nets, with a predicted population of fewer than a dozen individuals. Since the extinction of the baiji, the vaquita is considered the most endangered cetacean. Some species of porpoises have been and are kept in captivity and trained for research, education and public display.
Taxonomy and evolution
Porpoises, along with whales and dolphins, are descendants of land-living ungulates (hoofed animals) that first entered the oceans around 50 million years ago (Mya). During the Miocene (23 to 5 Mya), mammals were fairly modern, meaning they changed little physiologically from that time onward. The cetaceans diversified, and fossil evidence suggests porpoises and dolphins diverged from their last common ancestor around 15 Mya. The oldest fossils are known from the shallow seas around the North Pacific, with animals spreading to the European coasts and Southern Hemisphere only much later, during the Pliocene.
ORDER Artiodactyla
Infraorder Cetacea
Parvorder Odontoceti – toothed whales
Superfamily Delphinoidea
Family Phocoenidae – porpoises
Genus †Haborophocoena
H. toyoshimai
Genus Neophocaena
N. phocaenoides – Indo-Pacific finless porpoise
N. sunameri – East Asian finless porpoise
N. asiaeorientalis – Yangtze finless porpoise
Genus †Numataphocoena
N. yamashitai
Genus Phocoena
P. phocoena – harbour porpoise
P. sinus – vaquita
P. dioptrica – spectacled porpoise
P. spinipinnis – Burmeister's porpoise
Genus Phocoenoides
P. dalli – Dall's porpoise
Genus †Semirostrum
S. ceruttii
Genus †Septemtriocetus
S. bosselaersii
Genus †Piscolithax
P. aenigmaticus
P. longirostris
P. boreios
P. tedfordi
Recently discovered hybrids between male harbour porpoises and female Dall's porpoises indicate the two species may actually be members of the same genus.
Biology
Anatomy
Porpoises have a bulbous head, no external ear flaps, a non-flexible neck, a torpedo-shaped body, limbs modified into flippers, and a tail fin. Their skull has small eye orbits, small, blunt snouts, and eyes placed on the sides of the head. Porpoises range in size from the 1.4 m (4.6 ft) and 54 kg (119 lb) vaquita to the 2.3 m (7.5 ft) and 220 kg (490 lb) Dall's porpoise. Overall, they tend to be dwarfed by other cetaceans. Almost all species have female-biased sexual dimorphism, with the females being larger than the males, although those physical differences are generally small; one exception is Dall's porpoise.
Odontocetes possess teeth with cementum cells overlying dentine cells. Unlike human teeth, which are composed mostly of enamel on the portion of the tooth outside of the gum, whale teeth have cementum outside the gum. Porpoises have a three-chambered stomach, including a fore-stomach and fundic and pyloric chambers. Porpoises, like other odontocetes, possess only one blowhole. Breathing involves expelling stale air from the blowhole, forming an upward, steamy spout, followed by inhaling fresh air into the lungs. All porpoises have a thick layer of blubber. This blubber can help with insulation from the harsh underwater climate, protection to some extent as predators would have a hard time getting through a thick layer of fat, and energy for leaner times. Calves are born with only a thin layer of blubber, but rapidly gain a thick layer from the milk, which has a very high fat content.
Locomotion
Porpoises have two flippers on the front and a tail fin. Their flippers contain four digits. Although porpoises do not possess fully developed hind limbs, they possess discrete rudimentary appendages, which may contain feet and digits. Porpoises are fast swimmers in comparison to seals. The fusing of the neck vertebrae, while increasing stability when swimming at high speeds, decreases flexibility, making it impossible for them to turn their head. When swimming, they move their tail fin and lower body up and down, propelling themselves through vertical movement, while their flippers are mainly used for steering. Flipper movement is continuous. Some species log out of the water, which may allow them to travel faster, and sometimes they porpoise out of the water, meaning they leap clear of the surface. Their skeletal anatomy allows them to be fast swimmers. They have a very well defined and triangular dorsal fin, allowing them to steer better in the water. Unlike their dolphin counterparts, they are adapted for coastal shores, bays, and estuaries.
Senses
The porpoise ear has specific adaptations to the marine environment. In humans, the middle ear works as an impedance equaliser between the outside air's low impedance and the cochlear fluid's high impedance. In whales, and other marine mammals, there is no great difference between the outer and inner environments. Instead of sound passing through the outer ear to the middle ear, porpoises receive sound through the throat, from which it passes through a low-impedance fat-filled cavity to the inner ear. The porpoise ear is acoustically isolated from the skull by air-filled sinus pockets, which allow for greater directional hearing underwater. Odontocetes send out high frequency clicks from an organ known as a melon. This melon consists of fat, and the skull of any such creature containing a melon will have a large depression. The large bulge on top of the porpoise's head is caused by the melon.
The porpoise eye is relatively small for its size, yet they do retain a good degree of eyesight. As well as this, the eyes of a porpoise are placed on the sides of its head, so their vision consists of two fields, rather than a binocular view like humans have. When porpoises surface, their lens and cornea correct the nearsightedness that results from the refraction of light; their eyes contain both rod and cone cells, meaning they can see in both dim and bright light. Porpoises do, however, lack short wavelength sensitive visual pigments in their cone cells indicating a more limited capacity for colour vision than most mammals. Most porpoises have slightly flattened eyeballs, enlarged pupils (which shrink as they surface to prevent damage), slightly flattened corneas and a tapetum lucidum; these adaptations allow for large amounts of light to pass through the eye and, therefore, they are able to form a very clear image of the surrounding area.
The olfactory lobes are absent in porpoises, suggesting that they have no sense of smell.
Porpoises are not thought to have a good sense of taste, as their taste buds are atrophied or missing altogether. However, some have preferences between different kinds of fish, indicating some sort of attachment to taste.
Sleep
Unlike most animals, porpoises are conscious breathers. All mammals sleep, but porpoises cannot afford to become unconscious for long because they may drown. While knowledge of sleep in wild cetaceans is limited, porpoises in captivity have been recorded to sleep with one side of their brain at a time, so that they may swim, breathe consciously, and avoid both predators and social contact during their period of rest.
Behaviour
Life cycle
Porpoises are fully aquatic creatures. Females deliver a single calf after a gestation period lasting about a year. Calving takes place entirely under water, with the foetus positioned for tail-first delivery to help prevent drowning. Females have mammary glands, but the shape of a newborn calf's mouth does not allow it to obtain a seal around the nipple—instead of the calf sucking milk, the mother squirts milk into the calf's mouth. This milk contains high amounts of fat, which aids in the development of blubber; it contains so much fat that it has the consistency of toothpaste. The calves are weaned at about 11 months of age. Males play no part in rearing calves. The calf is dependent for one to two years, and maturity occurs after seven to ten years, all varying between species. This mode of reproduction produces few offspring, but increases the probability of each one surviving.
Diet
Porpoises eat a wide variety of creatures. The stomach contents of harbour porpoises suggests that they mainly feed on benthic fish, and sometimes pelagic fish. They may also eat benthic invertebrates. In rare cases, algae, such as Ulva lactuca, is consumed. Atlantic porpoises are thought to follow the seasonal migration of bait fish, like herring, and their diet varies between seasons. The stomach contents of Dall's porpoises reveal that they mainly feed on cephalopods and bait fish, like capelin and sardines. Their stomachs also contained some deep-sea benthic organisms.
The finless porpoise is known to also follow seasonal migrations. It is known that populations in the mouth of the Indus River migrate to the sea from April through October to feed on the annual spawning of prawns. In Japan, sightings of small pods of them herding sand lance onto shore are common year-round.
Little is known about the diets of other species of porpoises. A dissection of three Burmeister's porpoises showed that they consume shrimp and euphausiids (krill). A dissection of a beached vaquita showed remains of squid and grunts. Nothing is known about the diet of the spectacled porpoise.
Interactions with humans
Research history
In Aristotle's time, the 4th century BCE, porpoises were regarded as fish due to their superficial similarity. Aristotle, however, could already see many physiological and anatomical similarities with the terrestrial vertebrates, such as blood (circulation), lungs, uterus and fin anatomy. His detailed descriptions were assimilated by the Romans, but mixed with a more accurate knowledge of the dolphins, as mentioned by Pliny the Elder in his "Natural history". In the art of this and subsequent periods, porpoises are portrayed with a long snout (typical of dolphins) and a high-arched head. The harbour porpoise was one of the most accessible species for early cetologists, because it could be seen very close to land, inhabiting shallow coastal areas of Europe. Many of the findings that apply to all cetaceans were first discovered in porpoises. One of the first anatomical descriptions of the airways of whales, based on a harbour porpoise, was published in 1671 by John Ray. It nevertheless referred to the porpoise as a fish, most likely not in the modern-day sense, where it refers to a zoological group, but in the older sense of simply a creature of the sea (cf. for example star-fish, cuttle-fish, jelly-fish and whale-fish).
In captivity
Harbour porpoises have historically been kept in captivity, under the assumption that they would fare better than their dolphin counterparts due to their smaller size and shallow-water habitats. Up until the 1980s, they were consistently short-lived. Harbour porpoises have a very long captive history, with poorly documented attempts as early as the 15th century, and better documented starting in the 1860s and 1870s in London Zoo, the now-closed Brighton Aquarium & Dolphinarium, and a zoo in Germany. At least 150 harbour porpoises have been kept worldwide, but only about 20 were actively caught for captivity. The captive history is best documented from Denmark where about 100 harbour porpoises have been kept, most in the 1960s and 1970s. All but two were incidental catches in fishing nets or strandings. Nearly half of these died within a month of diseases caught before they were captured or from damage sustained during capture. Up until 1984, none lived for more than 14 months. Attempts to rehabilitate seven rescued individuals in 1986 only resulted in three that could be released 6 months later. Very few have been brought into captivity later, but they have lived considerably longer. In recent decades, the only place keeping the species in Denmark is the Fjord & Bælt Centre, where three rescues have been kept, along with their offspring. Among the three rescues, one (father of world's first harbour porpoise born in captivity) lived for 20 years in captivity, another for 15 years, while the third (mother of first born in captivity) is the world's oldest known harbour porpoise, being 28 years old in 2023. The typical age reached in the wild is 14 years or less. Very few harbour porpoises have been born in captivity. Historically, harbour porpoises were often kept singly and those who were together often were not mature or of the same sex. Disregarding one born more than 100 years ago that was the result of a pregnant female being brought into captivity, the world's first full captive breeding was in 2007 in the Fjord & Bælt Centre, followed by another in 2009 in the Dolfinarium Harderwijk, the Netherlands. In addition to the few kept in Europe, harbour porpoise were displayed at the Vancouver Aquarium (Canada) until recently. This was a female that had beached herself onto Horseshoe Bay in 2008 and a male that had done the same in 2011. They died in 2017 and 2016 respectively.
Finless porpoises have commonly been kept in Japan, as well as China and Indonesia. As of 1984, ninety-four in total had been in captivity in Japan, eleven in China, and at least two in Indonesia. As of 1986, three establishments in Japan had bred them, and there had been five recorded births. Three calves died moments after their birth, but two survived for several years. This breeding success, combined with the results with harbour porpoise in Denmark and the Netherlands, proved that porpoises can be successfully bred in captivity, and this could open up new conservation options. The reopened Miyajima Public Aquarium (Japan) houses three finless porpoises. As part of an attempt of saving the narrow-ridged (or Yangtze) finless porpoise, several are kept in the Baiji Dolphinarium in China. After having been kept in captivity for 9 years, the first breeding happened in 2005.
Small numbers of Dall's porpoises have been kept in captivity in both the United States and Japan, with the most recent being in the 1980s. The first recorded instance of a Dall's taken for an aquarium was in 1956 captured off Catalina Island in southern California. Dall's porpoises consistently failed to thrive in captivity. These animals often repeatedly ran into the walls of their enclosures, refused food, and exhibited skin sloughing. Almost all Dall's porpoises introduced to aquaria died shortly after, typically within days. Only two have lived for more than 60 days: a male reached 15 months at Marineland of the Pacific and another 21 months at a United States Navy facility.
As part of a last-ditch effort to save the extremely rare vaquita (the tiny remaining population is rapidly declining because of bycatch in gillnets), there have been attempts to transfer some to captivity. The first and only individuals caught for captivity were two females in 2017. Both became distressed and were rapidly released, but one of them died in the process. Soon after, the project was abandoned.
Only a single Burmeister's porpoise and a single spectacled porpoise have been kept in captivity. Both were stranded individuals that only survived a few days after their rescue.
Threats
Hunting
Porpoises and other smaller cetaceans have traditionally been hunted in many areas, at least in Asia, Europe and North America, for their meat and blubber. A dominant hunting technique is drive hunting, in which a pod of animals is driven together with boats, usually into a bay or onto a beach, and their escape is prevented by closing off the route to the open sea with other boats or nets. This type of fishery for harbour porpoises is best documented in the Danish straits, where it occurred until the end of the 19th century (it was banned in 1899), and again during the shortages of World War I and World War II. The Inuit in the Arctic hunt harbour porpoises by shooting, and a drive hunt for Dall's porpoises still takes place in Japan. There, the number of individuals taken each year is in the thousands, although a quota of around 17,000 per year is in effect today; this makes it the largest direct hunt of any cetacean species in the world, and its sustainability has been questioned.
Fishing
Porpoises are highly affected by bycatch. Many porpoise species, above all the vaquita, suffer great mortality from gillnetting. Although it is the world's most endangered marine cetacean, the vaquita continues to be caught in small-mesh gillnet fisheries throughout much of its range. Incidental mortality caused by the fleet of El Golfo de Santa Clara was estimated at around 39 vaquitas per year, over 17% of the population size. Harbour porpoises also drown in gillnets, but on a less threatening scale due to their larger population; gillnetting increases their annual mortality rate by a mere 5%.
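Taken together, the two vaquita figures also imply a rough population size at the time; as a back-of-envelope check (not a survey estimate):

$$\frac{39\ \text{deaths per year}}{0.17} \approx 230\ \text{individuals}.$$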
The fishing industry has historically always produced porpoise bycatch. Today, the Marine Mammal Protection Act of 1972 enforces the use of safer fishing equipment in the United States to reduce bycatch.
Environmental hazards
Porpoises are very sensitive to anthropogenic disturbance, and as keystone species they can indicate the overall health of the marine environment. Populations of harbour porpoises in the North and Baltic Seas are under increasing pressure from anthropogenic causes such as offshore construction, ship traffic, fishing, and military exercises. Increasing pollution is a serious problem for marine mammals. Heavy metals and plastic waste are not biodegradable, and cetaceans sometimes consume these hazardous materials, mistaking them for food items. As a result, the animals are more susceptible to diseases and have fewer offspring. Harbour porpoises from the English Channel have been found to have accumulated heavy metals.
The military and geologists employ strong sonar, increasing noise in the oceans. Marine mammals that use biosonar for orientation and communication are not only hindered by the extra noise, but may race to the surface in panic. This can cause dissolved blood gases to bubble out, blocking blood vessels and killing the animal: so-called decompression sickness. This effect only occurs in porpoises that dive to great depths, such as Dall's porpoise.
Additionally, civilian vessels produce sonar waves to measure the depth of the water in which they travel. As with naval sonar, some of these signals attract porpoises while others repel them. The problem with signals that attract is that the animal may be injured or even killed when struck by the vessel or its propeller.
Conservation
The harbour porpoise, spectacled porpoise, Burmeister's porpoise, and Dall's porpoise are all listed on Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals (CMS). In addition, the harbour porpoise is covered by the Agreement on the Conservation of Small Cetaceans of the Baltic, North East Atlantic, Irish and North Seas (ASCOBANS), the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS) and the Memorandum of Understanding Concerning the Conservation of the Manatee and Small Cetaceans of Western Africa and Macaronesia. Their conservation statuses are either least concern or data deficient.
As of 2014, only 505 Yangtze finless porpoises remained in the main section of the Yangtze, with an alarming population density in Ezhou and Zhenjiang. While the rate of decline of many threatened species slows after their classification, the porpoise's population decline has actually been accelerating: the decline tracked from 1994 to 2008 was pegged at 6.06% annually, but from 2006 to 2012 the population decreased by more than half. The finless porpoise population decreased by 69.8% over the 22-year span from 1976 to 2000, corresponding to an annual decline of about 5.3%. This decline is driven largely by the massive growth in Chinese industry since 1990, which has caused increased shipping and pollution and ultimately environmental degradation; damming of the river and illegal fishing activity also contribute. To protect the species, China's Ministry of Agriculture classified it as a National First Grade Key Protected Wild Animal, the strictest classification by law, making it illegal to bring harm to a porpoise. Protective measures in the Tian-e-Zhou Oxbow Nature Reserve have increased its population of porpoises from five to forty in 25 years. The Chinese Academy of Sciences' Wuhan Institute of Hydrobiology has been working with the World Wildlife Fund to ensure a future for this subspecies, and has placed five porpoises in another well-protected area, the He-wang-miao oxbow. Five protected natural reserves have been established in the areas of highest population density and mortality, with measures taken to patrol those areas and ban harmful fishing gear in them. There have also been efforts to study porpoise biology to support conservation through captive breeding. The Baiji Dolphinarium, established in 1992 at the Institute of Hydrobiology of the Chinese Academy of Sciences in Wuhan, allows the study of behavioural and biological factors affecting the finless porpoise, in particular breeding biology such as seasonal changes in reproductive hormones and breeding behaviour.
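The quoted annual figure follows from the cumulative loss under simple compounding (a worked check, not an independent estimate):

$$(1 - r)^{22} = 1 - 0.698 \quad\Rightarrow\quad r = 1 - 0.302^{1/22} \approx 0.053 = 5.3\%\ \text{per year}.$$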
Because vaquitas are indigenous to the Gulf of California, Mexico is leading conservation efforts through the International Committee for the Recovery of the Vaquita (CIRVA), which has tried to prevent the accidental deaths of vaquitas by outlawing the use of fishing nets within the vaquita's habitat. CIRVA has worked together with CITES, the Endangered Species Act, and the Marine Mammal Protection Act (MMPA) to nurse the vaquita population back to a point at which it can sustain itself. CIRVA concluded in 2000 that between 39 and 84 individuals are killed each year by such gillnets. To try to prevent extinction, the Mexican government has created a nature reserve covering the upper part of the Gulf of California and the Colorado River delta, and has placed a temporary ban, with compensation to those affected, on fishing methods that may pose a threat to the vaquita.
| Biology and health sciences | Cetaceans | null |
46083 | https://en.wikipedia.org/wiki/Halley%27s%20Comet | Halley's Comet | Halley's Comet is the only known short-period comet that is consistently visible to the naked eye from Earth, appearing every 72–80 years, though with the majority of recorded apparitions (25 of 30) occurring after intervals of 75–77 years. It last appeared in the inner parts of the Solar System in 1986 and will next appear in mid-2061. Officially designated 1P/Halley, it is also commonly called Comet Halley, or sometimes simply Halley.
Halley's periodic returns to the inner Solar System have been observed and recorded by astronomers around the world since at least 240 BC, but it was not until 1705 that the English astronomer Edmond Halley understood that these appearances were re-appearances of the same comet. As a result of this discovery, the comet is named after Halley.
During its 1986 visit to the inner Solar System, Halley's Comet became the first comet to be observed in detail by a spacecraft, Giotto, providing the first observational data on the structure of a comet nucleus and the mechanism of coma and tail formation. These observations supported a number of longstanding hypotheses about comet construction, particularly Fred Whipple's "dirty snowball" model, which correctly predicted that Halley would be composed of a mixture of volatile ices—such as water, carbon dioxide, ammonia—and dust. The missions also provided data that substantially reformed and reconfigured these ideas; for instance, it is now understood that the surface of Halley is largely composed of dusty, non-volatile materials, and that only a small portion of it is icy.
Pronunciation
Comet Halley is usually pronounced /ˈhæli/, rhyming with valley, or sometimes /ˈheɪli/, rhyming with daily. As to the surname Halley, Colin Ronan, one of Edmond Halley's biographers, preferred /ˈhɔːli/, rhyming with crawly. Spellings of Halley's name during his lifetime included Hailey, Haley, Hayley, Halley, Haly, Hawley, and Hawly, so its contemporary pronunciation is uncertain, but the version rhyming with valley seems to be preferred by current bearers of the surname.
Computation of orbit
Halley was the first comet to be recognised as periodic. Until the Renaissance, the philosophical consensus on the nature of comets, promoted by Aristotle, was that they were disturbances in Earth's atmosphere. This idea was disproven in 1577 by Tycho Brahe, who used parallax measurements to show that comets must lie beyond the Moon. Many were still unconvinced that comets orbited the Sun, and assumed instead that they must follow straight paths through the Solar System. In 1687, Sir Isaac Newton published his Philosophiæ Naturalis Principia Mathematica, in which he outlined his laws of gravity and motion. His work on comets was decidedly incomplete. Although he had suspected that two comets that had appeared in succession in 1680 and 1681 were the same comet before and after passing behind the Sun (he was later found to be correct; see Newton's Comet), he was initially unable to completely reconcile comets into his model.
Ultimately, it was Newton's friend, editor and publisher, Edmond Halley, who, in his 1705 Synopsis of the Astronomy of Comets, used Newton's new laws to calculate the gravitational effects of Jupiter and Saturn on cometary orbits. Having compiled a list of 24 comet observations, he calculated that the orbital elements of a second comet that had appeared in 1682 were nearly the same as those of two comets that had appeared in 1531 (observed by Petrus Apianus) and 1607 (observed by Johannes Kepler). Halley thus concluded that all three comets were, in fact, the same object returning about every 76 years, a period that has since been found to vary between 72 and 80 years. After a rough estimate of the perturbations the comet would sustain from the gravitational attraction of the planets, he predicted its return for 1758. While he had personally observed the comet around perihelion in September 1682, Halley died in 1742 before he could observe its predicted return.
Halley's prediction of the comet's return proved to be correct, although it was not seen until 25 December 1758, by Johann Georg Palitzsch, a German farmer and amateur astronomer. Other observers from throughout Europe and its colonies sent in confirmations to Paris after the comet brightened the following spring. In the Americas, John Winthrop lectured at Harvard University to explain the implications of the comet's reappearance for Newtonian mechanics and natural theology.
Another independent recognition that the comet had returned was made by the Jamaican astronomer Francis Williams, but his observations did not reach Europe. A unique portrait commissioned by Williams demonstrates the impact of the comet's return on period astronomers. Williams' hand rests on the page of Newton's Principia with procedures to predict comet sightings. The white smudge in the sky is probably a depiction of Halley's comet relative to the constellations in March 1759, and the chord hanging above the book likely represents the comet's orbit. In 2024, using X-ray imaging, the painting was shown to depict the field of stars in which the comet would have been visible in 1759. Williams likely commissioned the portrait to commemorate his observations.
The comet did not pass through its perihelion until 13 March 1759, the attraction of Jupiter and Saturn having caused a delay of 618 days. This effect was computed before its return (with a one-month error to 13 April) by a team of three French mathematicians, Alexis Clairaut, Joseph Lalande, and Nicole-Reine Lepaute. The confirmation of the comet's return was the first time anything other than planets had been shown to orbit the Sun. It was also one of the earliest successful tests of Newtonian physics, and a clear demonstration of its explanatory power. The comet was first named in Halley's honour by French astronomer Nicolas-Louis de Lacaille in 1759.
Some scholars have proposed that first-century Mesopotamian astronomers already had recognised Halley's Comet as periodic. This theory notes a passage in the Babylonian Talmud, tractate Horayot that refers to "a star which appears once in seventy years that makes the captains of the ships err". Others doubt this idea based on historical considerations about the exact timing of this alleged observation, and suggest it refers to the variable star Mira.
Researchers in 1981 attempting to calculate the past orbits of Halley by numerical integration starting from accurate observations in the seventeenth and eighteenth centuries could not produce accurate results further back than 837 owing to a close approach to Earth in that year. It was necessary to use ancient Chinese comet observations to constrain their calculations.
Orbit and origin
Halley's orbital period has varied between 74 and 80 years since 240 BC. Its orbit around the Sun is highly elliptical, with an orbital eccentricity of 0.967 (with 0 being a circle and 1 being a parabolic trajectory). The perihelion, the point in the comet's orbit when it is nearest the Sun, is about 0.59 au; this is between the orbits of Mercury and Venus. Its aphelion, or farthest distance from the Sun, is about 35 au, roughly the orbital distance of Pluto. Unlike the overwhelming majority of objects in the Solar System, Halley's orbit is retrograde; it orbits the Sun in the opposite direction to the planets, or, clockwise from above the Sun's north pole. The orbit is inclined by 18° to the ecliptic, with much of it lying south of the ecliptic; this is usually quoted as 162°, to account for Halley's retrograde orbit. The 1910 passage was at a relative velocity of about 70 km/s. Because its orbit comes close to Earth's in two places, Halley is associated with two meteor showers: the Eta Aquariids in early May, and the Orionids in late October.
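These elements are mutually consistent; as a quick check using Kepler's third law (a in au, T in years, values rounded):

$$a = \frac{q + Q}{2} \approx \frac{0.59 + 35}{2} \approx 17.8\ \text{au}, \qquad e = \frac{Q - q}{Q + q} \approx 0.967, \qquad T = a^{3/2} \approx 75\ \text{years}.$$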
Halley is classified as a periodic or short-period comet: one with an orbit lasting 200 years or less. This contrasts with long-period comets, whose orbits last for thousands of years. Periodic comets have an average inclination to the ecliptic of only ten degrees, and an orbital period of just 6.5 years, so Halley's orbit is atypical. Most short-period comets (those with orbital periods shorter than 20 years and inclinations of 30 degrees or less) are called Jupiter-family comets. Those resembling Halley, with orbital periods of between 20 and 200 years and inclinations extending from zero to more than 90 degrees, are called Halley-type comets. To date, 105 Halley-type comets have been observed, compared with 816 identified Jupiter-family comets.
The orbits of the Halley-type comets suggest that they were originally long-period comets whose orbits were perturbed by the gravity of the giant planets and directed into the inner Solar System. If Halley was once a long-period comet, it is likely to have originated in the Oort cloud, a sphere of cometary bodies around 20,000–50,000 au from the Sun. Conversely the Jupiter-family comets are generally believed to originate in the Kuiper belt, a flat disc of icy debris between 30 au (Neptune's orbit) and 50 au from the Sun (in the scattered disc). Another point of origin for the Halley-type comets was proposed in 2008, when a trans-Neptunian object with a retrograde orbit similar to Halley's was discovered, , whose orbit takes it from just outside that of Uranus to twice the distance of Pluto. It may be a member of a new population of small Solar System bodies that serves as the source of Halley-type comets.
Halley has probably been in its current orbit for 16,000–200,000 years, although it is not possible to numerically integrate its orbit for more than a few tens of apparitions, and close approaches before 837 AD can only be verified from recorded observations. The non-gravitational effects can be crucial; as Halley approaches the Sun, it expels jets of sublimating gas from its surface, which knock it very slightly off its orbital path. These orbital changes cause delays in its perihelion passage of four days on average.
In 1989, Boris Chirikov and Vitold Vecheslavov performed an analysis of 46 apparitions of Halley's Comet taken from historical records and computer simulations, which showed that its dynamics were chaotic and unpredictable on long timescales. Halley's projected dynamical lifetime is estimated to be about 10 million years. The dynamics of its orbit can be approximately described by a two-dimensional symplectic map, known as the Kepler map, a solution to the restricted three-body problem for highly eccentric orbits. Based on records from the 1910 apparition, David Hughes calculated in 1985 that Halley's nucleus has been reduced in mass by 80 to 90% over the last 2,000 to 3,000 revolutions, and that it will most likely disappear completely after another 2,300 perihelion passages. More recent work suggests that Halley will evaporate, or split in two, within the next few tens of thousands of years, or will be ejected from the Solar System within a few hundred thousand years.
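Schematically, the Kepler map couples the comet's orbital energy E to the phase φ of Jupiter at successive perihelion passages; in one common form (scaling conventions vary between authors):

$$E_{n+1} = E_n + F\sin\varphi_n, \qquad \varphi_{n+1} = \varphi_n + 2\pi\,|2E_{n+1}|^{-3/2},$$

where F is the kick amplitude set by Jupiter's perturbation and bound motion requires E < 0. Iterating the map reproduces the chaotic energy diffusion described above, with a kick that drives E to zero or above corresponding to ejection from the Solar System.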
Structure and composition
The Giotto and Vega missions gave planetary scientists their first view of Halley's surface and structure. The nucleus is a conglomerate of ices and dust, often referred to as a "dirty snowball". Like all comets, as Halley nears the Sun, its volatile compounds (those with low boiling points, such as water, carbon monoxide, carbon dioxide and other ices) begin to sublimate from the surface. This causes the comet to develop a coma, or atmosphere, at distances up to from the nucleus. Sublimation of this dirty ice releases dust particles, which travel with the gas away from the nucleus. Gas molecules in the coma absorb solar light and then re-radiate it at different wavelengths, a phenomenon known as fluorescence, whereas dust particles scatter the solar light. Both processes are responsible for making the coma visible. As a fraction of the gas molecules in the coma are ionised by the solar ultraviolet radiation, pressure from the solar wind, a stream of charged particles emitted by the Sun, pulls the coma's ions out into a long tail, which may extend more than 100 million kilometres into space. Changes in the flow of the solar wind can cause disconnection events, in which the tail completely breaks off from the nucleus.
Despite the vast size of its coma, Halley's nucleus is relatively small: barely 15 km long, 8 km wide and perhaps 8 km thick. Based on a reanalysis of images taken by the Giotto and Vega spacecraft, Lamy et al. determined an effective diameter of about 11 km. Its shape has been variously compared to that of a peanut, a potato, or an avocado. Its mass is roughly 2.2 × 10¹⁴ kg, with an average density of about 0.6 g/cm³. The low density indicates that it is made of a large number of small pieces, held together very loosely, forming a structure known as a rubble pile. Ground-based observations of coma brightness suggested that Halley's rotation period was about 7.4 days. Images taken by the various spacecraft, along with observations of the jets and shell, suggested a period of 52 hours. Given the irregular shape of the nucleus, Halley's rotation is likely to be complex. The flyby images revealed an extremely varied topography, with hills, mountains, ridges, depressions, and at least one crater.
Halley's day side (the side facing the Sun) is far more active than the night side. Spacecraft observations showed that the gases ejected from the nucleus were 80% water vapour, 17% carbon monoxide and 3–4% carbon dioxide, with traces of hydrocarbons although more recent sources give a value of 10% for carbon monoxide and also include traces of methane and ammonia. The dust particles were found to be primarily a mixture of carbon–hydrogen–oxygen–nitrogen (CHON) compounds common in the outer Solar System, and silicates, such as are found in terrestrial rocks. The dust particles ranged in size down to the limits of detection (≈0.001 μm). The ratio of deuterium to hydrogen in the water released by Halley was initially thought to be similar to that found in Earth's ocean water, suggesting that Halley-type comets may have delivered water to Earth in the distant past. Subsequent observations showed Halley's deuterium ratio to be far higher than that found in Earth's oceans, making such comets unlikely sources for Earth's water.
Giotto provided the first evidence in support of Fred Whipple's "dirty snowball" hypothesis for comet construction; Whipple postulated that comets are icy objects warmed by the Sun as they approach the inner Solar System, causing ices on their surfaces to sublime (change directly from a solid to a gas), and jets of volatile material to burst outward, creating the coma. Giotto showed that this model was broadly correct, though with modifications. Halley's albedo, for instance, is about 4%, meaning that it reflects only 4% of the sunlight hitting it – about what one would expect for coal. Thus, despite astronomers predicting that Halley would have an albedo of about 0.17 (roughly equivalent to bare soil), Halley's Comet is in fact pitch black. The "dirty ices" on the surface sublime at temperatures ranging from about 170 K in sections of higher albedo to about 220 K at low albedo; Vega 1 found Halley's surface temperature to be in the range 300–400 K. This suggested that only 10% of Halley's surface was active, and that large portions of it were coated in a layer of dark dust that retained heat. Together, these observations suggested that Halley was in fact predominantly composed of non-volatile materials, and thus more closely resembled a "snowy dirtball" than a "dirty snowball".
History
Before 1066
The first certain appearance of Halley's Comet in the historical record is a description from 240 BC, in the Chinese chronicle Records of the Grand Historian or Shiji, which describes a comet that appeared in the east and moved north. The only surviving record of the 164 BC apparition is found on two fragmentary Babylonian tablets, which were rediscovered in August 1984 in the collection of the British Museum.
The apparition of 87 BC was recorded in Babylonian tablets which state that the comet was seen "day beyond day" for a month. This appearance may be recalled in the representation of Tigranes the Great, an Armenian king who is depicted on coins with a crown that features, according to Vahe Gurzadyan and R. Vardanyan, "a star with a curved tail [that] may represent the passage of Halley's Comet in 87 BC." Gurzadyan and Vardanyan argue that "Tigranes could have seen Halley's Comet when it passed closest to the Sun on August 6 in 87 BC" as the comet would have been a "most recordable event"; for ancient Armenians it could have heralded the New Era of the brilliant King of Kings.
The apparition of 12 BC was recorded in the Book of Han by Chinese astronomers of the Han dynasty who tracked it from August through October. It passed within 0.16 au of Earth. According to the Roman historian Cassius Dio, a comet appeared suspended over Rome for several days portending the death of Marcus Vipsanius Agrippa in that year. Halley's appearance in 12 BC, only a few years distant from the conventionally assigned date of the birth of Jesus Christ, has led some theologians and astronomers to suggest that it might explain the biblical story of the Star of Bethlehem. There are other explanations for the phenomenon, such as planetary conjunctions, and there are also records of other comets that appeared closer to the date of Jesus's birth.
If Yehoshua ben Hananiah's reference to "a star which arises once in seventy years and misleads the sailors" refers to Halley's Comet, he can only have witnessed the 66 AD appearance. Another possible report comes from Jewish historian Josephus, who wrote that in 66 AD "The signs ... were so evident, and did so plainly foretell their future desolation ... there was a star resembling a sword, which stood over the city, and a comet, that continued a whole year". This portent was in reference to the city of Jerusalem and the First Jewish–Roman War.
The 141 AD apparition was recorded in Chinese chronicles, with observations of a bluish white comet on 27 March and 16, 22 and 23 April. The early Tamil bards of southern India (c. 1st–4th century CE) also describe what may be the same event.
The 374 AD and 607 AD approaches each came within 0.09 au of Earth. The 451 AD apparition was said to herald the defeat of Attila the Hun at the Battle of Chalons.
The 684 AD apparition was reported in Chinese records as the "broom star".
The 760 AD apparition was recorded in the Zuqnin Chronicle's entry for iyyōr 1071 SE (May 760 AD), which calls it a "white sign".
In 837 AD, Halley's Comet may have passed as close as 0.03 au from Earth, by far its closest approach. Its tail may have stretched 60 degrees across the sky. It was recorded by astronomers in China, Japan, Germany, the Byzantine Empire, and the Middle East; Emperor Louis the Pious observed this appearance and devoted himself to prayer and penance, fearing that "by this token a change in the realm and the death of a prince are made known".
In 912 AD, Halley is recorded in the Annals of Ulster, which states "A dark and rainy year. A comet appeared."
1066
In 1066, the comet was seen in England and thought to be an omen: later that year Harold II of England died at the Battle of Hastings and William the Conqueror claimed the throne. The comet is represented on the Bayeux Tapestry and described in the tituli as a star. Surviving accounts from the period describe it as appearing to be four times the size of Venus, and shining with a light equal to a quarter of that of the Moon. Halley came within 0.10 au of Earth at that time.
This appearance of the comet is also noted in the Anglo-Saxon Chronicle. Eilmer of Malmesbury may have seen Halley in 989 and 1066, as recorded by William of Malmesbury:
Not long after, a comet, portending (they say) a change in governments, appeared, trailing its long flaming hair through the empty sky: concerning which there was a fine saying of a monk of our monastery called Æthelmær. Crouching in terror at the sight of the gleaming star, "You've come, have you?", he said. "You've come, you source of tears to many mothers. It is long since I saw you; but as I see you now you are much more terrible, for I see you brandishing the downfall of my country."
The Irish Annals of the Four Masters recorded the comet as "A star [that] appeared on the seventh of the Calends of May, on Tuesday after Little Easter, than whose light the brilliance or light of The Moon was not greater; and it was visible to all in this manner till the end of four nights afterwards." Chaco Native Americans in New Mexico may have recorded the 1066 apparition in their petroglyphs.
The Italo-Byzantine chronicle of Lupus the Protospatharios mentions that a "comet-star" appeared in the sky in the year 1067 (the chronicle is erroneous, as the event occurred in 1066, and by Robert he means William):
The Emperor Constantine Ducas died in the month of May, and his son Michael received the Empire. And in this year there appeared a comet star, and the Norman count Robert [sic] fought a battle with Harold, King of the English, and Robert was victorious and became king over the people of the English.
1145–1378
The 1145 apparition may have been recorded by the monk Eadwine.
According to legend, Genghis Khan was inspired to turn his conquests toward Europe by the westward-seeming trajectory of the 1222 apparition. In Korea, the comet was reportedly visible during the daylight on 9 September 1222.
The 1301 apparition was visually spectacular, and may be the first that resulted in convincing portraits of a particular comet. The Florentine chronicler Giovanni Villani wrote that the comet left "great trails of fumes behind", and that it remained visible from September 1301 until January 1302. It was seen by the artist Giotto di Bondone, who represented the Star of Bethlehem as a fire-coloured comet in the Nativity section of his Arena Chapel cycle, completed in 1305. Giotto's depiction includes details of the coma, a sweeping tail, and the central condensation. According to the art historian Roberta Olson, it is much more accurate than other contemporary descriptions, and was not equaled in painting until the 19th century. Olson's identification of Halley's Comet in Giotto's Adoration of the Magi is what inspired the European Space Agency to name their mission to the comet Giotto, after the artist.
Halley's 1378 appearance is recorded in the Annales Mediolanenses as well as in East Asian sources.
1456
In 1456, the year of Halley's next apparition, the Ottoman Empire invaded the Kingdom of Hungary, culminating in the siege of Belgrade in July of that year. In a papal bull, Pope Callixtus III ordered special prayers be said for the city's protection. In 1470, the humanist scholar Bartolomeo Platina wrote in his Lives of the Popes that:
A hairy and fiery star having then made its appearance for several days, the mathematicians declared that there would follow grievous pestilence, dearth and some great calamity. Calixtus, to avert the wrath of God, ordered supplications that if evils were impending for the human race He would turn all upon the Turks, the enemies of the Christian name. He likewise ordered, to move God by continual entreaty, that notice should be given by the bells to call the faithful at midday to aid by their prayers those engaged in battle with the Turk.
Platina's account is not mentioned in official records. In the 18th century, a Frenchman further embellished the story, in anger at the Church, by claiming that the Pope had "excommunicated" the comet, though this story was most likely his own invention.
Halley's apparition of 1456 was also witnessed in Kashmir and depicted in great detail by Śrīvara, a Sanskrit poet and biographer to the Sultans of Kashmir. He read the apparition as a cometary portent of doom foreshadowing the imminent fall of Sultan Zayn al-Abidin (AD 1418/1420–1470).
After witnessing a bright light in the sky which most historians have identified as Halley's Comet, Zara Yaqob, Emperor of Ethiopia from 1434 to 1468, founded the city of Debre Berhan (tr. City of Light) and made it his capital for the remainder of his reign.
1531–1759
Petrus Apianus and Girolamo Fracastoro described the comet's visit in 1531, with the former even including graphics in his publication. Through his observations, Apianus was able to prove that a comet's tail always points away from the Sun.
In the Sikh scriptures of the Guru Granth Sahib, the founder of the faith Guru Nanak makes reference to "a long star that has risen" at Ang 1110, and it is believed by some Sikh scholars to be a reference to Halley's appearance in 1531.
Halley's periodic returns have been subject to scientific investigation since the 16th century. The three apparitions from 1531 to 1682 were noted by Edmond Halley, enabling him to predict its return. One key breakthrough occurred when Halley talked with Newton about his ideas of the laws of motion. Newton also helped Halley get John Flamsteed's data on the 1682 apparition. By studying data on the 1531, 1607, and 1682 comets, Halley came to the conclusion that these were the same comet, and presented his findings in 1696.
One difficulty was accounting for variations in the comet's orbital period, which was over a year longer between 1531 and 1607 than it was between 1607 and 1682. Newton had theorised that such delays were caused by the gravity of other comets, but Halley found that Jupiter and Saturn would cause the appropriate delays. In the decades that followed, more refined mathematics would be worked out, notably at the Paris Observatory; the work on Halley also provided a boost to Newton's and Kepler's rules for celestial motions.
46086 | https://en.wikipedia.org/wiki/Eared%20seal | Eared seal | An eared seal, otariid, or otary is any member of the marine mammal family Otariidae, one of three groupings of pinnipeds. They comprise 15 extant species in seven genera (another species became extinct in the 1950s) and are commonly known either as sea lions or fur seals, distinct from true seals (phocids) and the walrus (odobenids). Otariids are adapted to a semiaquatic lifestyle, feeding and migrating in the water, but breeding and resting on land or ice. They reside in subpolar, temperate, and equatorial waters throughout the Pacific and Southern Oceans, the southern Indian, and Atlantic Oceans. They are conspicuously absent in the north Atlantic.
The words "otariid" and "otary" come from the Greek ōtarion, meaning "little ear", referring to the small but visible external ear flaps (pinnae), which distinguish them from the phocids.
Evolution and taxonomy
Morphological and molecular evidence supports a monophyletic origin of pinnipeds, sharing a common ancestor with Musteloidea, though an earlier hypothesis suggested that Otariidae are descended from a common ancestor most closely related to modern bears. Debate remains as to whether the phocids diverged from the otariids before or after the walrus.
Otariids arose in the Miocene (15–17 million years ago) in the North Pacific, diversifying rapidly into the Southern Hemisphere, where most species now live. The earliest known fossil otariid is Eotaria crypta from southern California, while the genus Callorhinus (northern fur seal) has the oldest fossil record of any living otariid, extending to the middle Pliocene. It probably arose from the extinct fur seal genus Thalassoleon.
Traditionally, otariids had been subdivided into the fur seal (Arctocephalinae) and sea lion (Otariinae) subfamilies, with the major distinction between them being the presence of a thick underfur layer in the former. Under this categorization, the fur seals comprised two genera: Callorhinus in the North Pacific with a single representative, the northern fur seal (C. ursinus), and eight species in the Southern Hemisphere under the genus Arctocephalus; while the sea lions comprise five species under five genera. Recent analyses of the genetic evidence suggest that Callorhinus ursinus is in fact more closely related to several sea lion species. Furthermore, many of the Otariinae appear to be more phylogenetically distinct than previously assumed; for example, the Japanese sea lion (Zalophus japonicus) is now considered a separate species, rather than a subspecies of the California sea lion (Zalophus californianus).
In light of this evidence, the subfamily separation has been removed entirely and the family Otariidae has been organized into seven genera with 16 species and two subspecies.
Nonetheless, because of morphological and behavioral similarities among the "fur seals" and "sea lions", these remain useful categories when discussing differences between groups of species. Compared to sea lions, fur seals are generally smaller, exhibit greater sexual dimorphism, eat smaller prey and go on longer foraging trips; and, of course, there is the contrast between the coarse short sea lion hair and the fur seal's fur.
Anatomy and appearance
Otariids have proportionately much larger foreflippers and pectoral muscles than phocids, and have the ability to turn their hind limbs forward and walk on all fours, making them far more maneuverable on land. They are generally considered to be less adapted to an aquatic lifestyle, since they breed primarily on land and haul out more frequently than true seals. However, they can attain higher bursts of speed and have greater maneuverability in the water. Their swimming power derives from the use of flippers more so than the sinuous whole-body movements typical of phocids and walruses.
Otariids are further distinguished by a more dog-like head, sharp, well-developed canines, and the aforementioned visible external pinnae. Their postcanine teeth are generally simple and conical in shape. The dental formula for eared seals is: . Sea lions are covered with coarse guard hairs, while fur seals have a thick underfur, which has historically made them the objects of commercial exploitation.
Male otariids range in size from the Galápagos fur seal, smallest of all otariids, to the over 1,000-kg (2,200-lb) Steller sea lion. Mature male otariids weigh two to six times as much as females, with proportionately larger heads, necks, and chests, making them the most sexually dimorphic of all mammals.
Behavior
All otariids breed on land during well-defined breeding seasons. Except for the Australian sea lion, which has an atypical 17.5-month breeding cycle, they form strictly annual aggregations on beaches or rocky substrates, often on islands. All species are polygynous; i.e. successful males breed with several females. In most species, males arrive at breeding sites first and establish and maintain territories through vocal and visual displays and occasional fighting. Females typically arrive on shore a day or so before giving birth. While considered social animals, they establish no permanent hierarchies or statuses on the colonies. The extent to which males control females or territories varies between species. Thus, the northern fur seal and the South American sea lion tend to herd specific harem-associated females, occasionally injuring them, while the Steller sea lion and the New Zealand sea lion control spatial territories, but do not generally interfere with the movement of the females. Female New Zealand sea lions are the only otariids that move inland, as far as 2 km (1.2 mi) into forests, to protect their pups during the breeding season.
Otariids are carnivorous, feeding on fish, squid and krill. Sea lions tend to feed closer to shore in upwelling zones, feeding on larger fish, while the smaller fur seals tend to take longer, offshore foraging trips and can subsist on large numbers of smaller prey items. They are visual feeders. Some females are capable of dives of up to .
Species
Family Otariidae
Subfamily Arctocephalinae (fur seals)
Genus Arctocephalus
Brown fur seal, A. pusillus
South African fur seal, A. pusillus pusillus
Australian fur seal, A. pusillus doriferus
Antarctic fur seal, A. gazella
Guadalupe fur seal, A. townsendi
Juan Fernández fur seal, A. philippii
Galápagos fur seal, A. galapagoensis
New Zealand fur seal (or southern fur seal), A. forsteri
Subantarctic fur seal, A. tropicalis
South American fur seal, A. australis
Genus Callorhinus
Northern fur seal, C. ursinus
Subfamily Otariinae (sea lions)
Genus Eumetopias
Steller sea lion, E. jubatus
Genus Neophoca
Australian sea lion, N. cinerea
Genus Otaria
South American sea lion, O. flavescens
Genus Phocarctos
New Zealand sea lion (or Hooker's sea lion), P. hookeri
Genus Zalophus
California sea lion, Z. californianus
†Japanese sea lion, Z. japonicus – extinct (1970s)
Galápagos sea lion, Z. wollebaeki
Although the two subfamilies of otariids, the Otariinae (sea lions) and Arctocephalinae (fur seals), are still widely used, recent molecular studies have demonstrated that they may be invalid. Instead, they suggest three clades within the family; one consisting of the northern sea lions (Eumetopias and Zalophus), one of the northern fur seal (Callorhinus) and its extinct relatives, and the third of all the remaining Southern Hemisphere species.
| Biology and health sciences | Pinnipeds | null |
46095 | https://en.wikipedia.org/wiki/Russell%27s%20paradox | Russell's paradox | In mathematical logic, Russell's paradox (also known as Russell's antinomy) is a set-theoretic paradox published by the British philosopher and mathematician Bertrand Russell in 1901. Russell's paradox shows that every set theory that contains an unrestricted comprehension principle leads to contradictions. According to the unrestricted comprehension principle, for any sufficiently well-defined property, there is the set of all and only the objects that have that property. Let R be the set of all sets that are not members of themselves. (This set is sometimes called "the Russell set".) If R is not a member of itself, then its definition entails that it is a member of itself; yet, if it is a member of itself, then it is not a member of itself, since it is the set of all sets that are not members of themselves. The resulting contradiction is Russell's paradox. In symbols: let R = {x | x ∉ x}; then R ∈ R ⟺ R ∉ R.
Russell also showed that a version of the paradox could be derived in the axiomatic system constructed by the German philosopher and mathematician Gottlob Frege, hence undermining Frege's attempt to reduce mathematics to logic and calling into question the logicist programme. Two influential ways of avoiding the paradox were both proposed in 1908: Russell's own type theory and Zermelo set theory. In particular, Zermelo's axioms restricted the unrestricted comprehension principle. With the additional contributions of Abraham Fraenkel, Zermelo set theory developed into the now-standard Zermelo–Fraenkel set theory (commonly known as ZFC when including the axiom of choice). The main difference between Russell's and Zermelo's solution to the paradox is that Zermelo modified the axioms of set theory while maintaining a standard logical language, while Russell modified the logical language itself. The language of ZFC, with the help of Thoralf Skolem, turned out to be that of first-order logic.
The paradox had already been discovered independently in 1899 by the German mathematician Ernst Zermelo. However, Zermelo did not publish the idea, which remained known only to David Hilbert, Edmund Husserl, and other academics at the University of Göttingen. At the end of the 1890s, Georg Cantor – considered the founder of modern set theory – had already realized that his theory would lead to a contradiction, as he told Hilbert and Richard Dedekind by letter.
Informal presentation
Most sets commonly encountered are not members of themselves. Let us call a set "normal" if it is not a member of itself, and "abnormal" if it is a member of itself. Clearly every set must be either normal or abnormal. For example, consider the set of all squares in a plane. This set is not itself a square in the plane, thus it is not a member of itself and is therefore normal. In contrast, the complementary set that contains everything which is not a square in the plane is itself not a square in the plane, and so it is one of its own members and is therefore abnormal.
Now we consider the set of all normal sets, R, and try to determine whether R is normal or abnormal. If R were normal, it would be contained in the set of all normal sets (itself), and therefore be abnormal; on the other hand if R were abnormal, it would not be contained in the set of all normal sets (itself), and therefore be normal. This leads to the conclusion that R is neither normal nor abnormal: Russell's paradox.
Formal presentation
The term "naive set theory" is used in various ways. In one usage, naive set theory is a formal theory, formulated in a first-order language with a binary non-logical predicate ∈, and that includes the axiom of extensionality:

∀x ∀y (∀z (z ∈ x ⟺ z ∈ y) ⟹ x = y)

and the axiom schema of unrestricted comprehension:

∃y ∀x (x ∈ y ⟺ φ(x))

for any predicate φ with x as a free variable inside φ. Substitute x ∉ x for φ(x) to get

∃y ∀x (x ∈ y ⟺ x ∉ x)

Then by existential instantiation (reusing the symbol y) and universal instantiation we have

y ∈ y ⟺ y ∉ y
a contradiction. Therefore, this naive set theory is inconsistent.
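The derivation above can also be checked mechanically. The following is a minimal sketch in Lean 4, in which the membership relation mem and the Russell set R are hypotheses standing in for unrestricted comprehension, rather than Lean's own set constructions:

```lean
-- Given any relation `mem` and an `R` behaving like the Russell set
-- (x ∈ R ↔ x ∉ x for every x), we derive a contradiction.
theorem russell {α : Type} (mem : α → α → Prop)
    (R : α) (hR : ∀ x, mem x R ↔ ¬ mem x x) : False :=
  have h : mem R R ↔ ¬ mem R R := hR R
  have hn : ¬ mem R R := fun hm => (h.mp hm) hm
  hn (h.mpr hn)
```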
Philosophical implications
Prior to Russell's paradox (and to other similar paradoxes discovered around the same time, such as the Burali-Forti paradox), a common conception of the idea of set was the "extensional concept of set", as recounted by von Neumann and Morgenstern.
In particular, there was no distinction between sets and proper classes as collections of objects. Additionally, the existence of each of the elements of a collection was seen as sufficient for the existence of the set of said elements. However, paradoxes such as Russell's and Burali-Forti's showed the impossibility of this conception of set, by examples of collections of objects that do not form sets, despite all said objects being existent.
Set-theoretic responses
From the principle of explosion of classical logic, any proposition can be proved from a contradiction. Therefore, the presence of contradictions like Russell's paradox in an axiomatic set theory is disastrous: if any formula can be proved true, the conventional meaning of truth and falsity is destroyed. Further, since set theory was seen as the basis for an axiomatic development of all other branches of mathematics, Russell's paradox threatened the foundations of mathematics as a whole. This motivated a great deal of research around the turn of the 20th century to develop a consistent (contradiction-free) set theory.
In 1908, Ernst Zermelo proposed an axiomatization of set theory that avoided the paradoxes of naive set theory by replacing arbitrary set comprehension with weaker existence axioms, such as his axiom of separation (Aussonderung). (Avoiding paradox was not Zermelo's original intention, but instead to document which assumptions he used in proving the well-ordering theorem.) Modifications to this axiomatic theory proposed in the 1920s by Abraham Fraenkel, Thoralf Skolem, and by Zermelo himself resulted in the axiomatic set theory called ZFC. This theory became widely accepted once Zermelo's axiom of choice ceased to be controversial, and ZFC has remained the canonical axiomatic set theory down to the present day.
ZFC does not assume that, for every property, there is a set of all things satisfying that property. Rather, it asserts that given any set X, any subset of X definable using first-order logic exists. The object R defined by Russell's paradox above cannot be constructed as a subset of any set X, and is therefore not a set in ZFC. In some extensions of ZFC, notably in von Neumann–Bernays–Gödel set theory, objects like R are called proper classes.
ZFC is silent about types, although the cumulative hierarchy has a notion of layers that resemble types. Zermelo himself never accepted Skolem's formulation of ZFC using the language of first-order logic. As José Ferreirós notes, Zermelo insisted instead that "propositional functions (conditions or predicates) used for separating off subsets, as well as the replacement functions, can be 'entirely arbitrary' [ganz beliebig]"; the modern interpretation given to this statement is that Zermelo wanted to include higher-order quantification in order to avoid Skolem's paradox. Around 1930, Zermelo also introduced (apparently independently of von Neumann), the axiom of foundation, thus—as Ferreirós observes—"by forbidding 'circular' and 'ungrounded' sets, it [ZFC] incorporated one of the crucial motivations of TT [type theory]—the principle of the types of arguments". This second-order ZFC preferred by Zermelo, including the axiom of foundation, allowed a rich cumulative hierarchy. Ferreirós writes that "Zermelo's 'layers' are essentially the same as the types in the contemporary versions of simple TT [type theory] offered by Gödel and Tarski. One can describe the cumulative hierarchy into which Zermelo developed his models as the universe of a cumulative TT in which transfinite types are allowed. (Once we have adopted an impredicative standpoint, abandoning the idea that classes are constructed, it is not unnatural to accept transfinite types.) Thus, simple TT and ZFC could now be regarded as systems that 'talk' essentially about the same intended objects. The main difference is that TT relies on a strong higher-order logic, while Zermelo employed second-order logic, and ZFC can also be given a first-order formulation. The first-order 'description' of the cumulative hierarchy is much weaker, as is shown by the existence of countable models (Skolem's paradox), but it enjoys some important advantages."
In ZFC, given a set A, it is possible to define a set B that consists of exactly the sets in A that are not members of themselves. B cannot be in A by the same reasoning in Russell's Paradox. This variation of Russell's paradox shows that no set contains everything.
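Spelled out, the argument is that for B = {x ∈ A | x ∉ x}, assuming B ∈ A yields

$$B \in B \iff (B \in A \wedge B \notin B) \iff B \notin B,$$

a contradiction; hence B ∉ A, and so no set can contain every set.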
Through the work of Zermelo and others, especially John von Neumann, the structure of what some see as the "natural" objects described by ZFC eventually became clear: they are the elements of the von Neumann universe, V, built up from the empty set by transfinitely iterating the power set operation. It is thus now possible again to reason about sets in a non-axiomatic fashion without running afoul of Russell's paradox, namely by reasoning about the elements of V. Whether it is appropriate to think of sets in this way is a point of contention among the rival points of view on the philosophy of mathematics.
Other solutions to Russell's paradox, with an underlying strategy closer to that of type theory, include Quine's New Foundations and Scott–Potter set theory. Yet another approach is to define multiple membership relation with appropriately modified comprehension scheme, as in the Double extension set theory.
History
Russell discovered the paradox in May or June 1901. By his own account in his 1919 Introduction to Mathematical Philosophy, he "attempted to discover some flaw in Cantor's proof that there is no greatest cardinal". In a 1902 letter, he announced the discovery of the paradox to Gottlob Frege, noting that it could be derived in Frege's 1879 Begriffsschrift, and framed the problem in terms of both logic and set theory, and in particular in terms of Frege's definition of function.
Russell would go on to cover it at length in his 1903 The Principles of Mathematics, where he recounted his first encounter with the paradox.
Russell wrote to Frege about the paradox just as Frege was preparing the second volume of his Grundgesetze der Arithmetik. Frege responded to Russell very quickly; his letter dated 22 June 1902 appeared, with van Heijenoort's commentary in Heijenoort 1967:126–127. Frege then wrote an appendix admitting to the paradox, and proposed a solution that Russell would endorse in his Principles of Mathematics, but was later considered by some to be unsatisfactory. For his part, Russell had his work at the printers and he added an appendix on the doctrine of types.
Ernst Zermelo in his (1908) A new proof of the possibility of a well-ordering (published at the same time he published "the first axiomatic set theory") laid claim to prior discovery of the antinomy in Cantor's naive set theory. He states: "And yet, even the elementary form that Russell⁹ gave to the set-theoretic antinomies could have persuaded them [J. König, Jourdain, F. Bernstein] that the solution of these difficulties is not to be sought in the surrender of well-ordering but only in a suitable restriction of the notion of set". Footnote 9 is where he stakes his claim.
Frege sent a copy of his Grundgesetze der Arithmetik to Hilbert; as noted above, Frege's last volume mentioned the paradox that Russell had communicated to Frege. After receiving Frege's last volume, on 7 November 1903, Hilbert wrote a letter to Frege in which he said, referring to Russell's paradox, "I believe Dr. Zermelo discovered it three or four years ago". A written account of Zermelo's actual argument was discovered in the Nachlass of Edmund Husserl.
In 1923, Ludwig Wittgenstein proposed to "dispose" of Russell's paradox as follows:
The reason why a function cannot be its own argument is that the sign for a function already contains the prototype of its argument, and it cannot contain itself. For let us suppose that the function F(fx) could be its own argument: in that case there would be a proposition F(F(fx)), in which the outer function F and the inner function F must have different meanings, since the inner one has the form φ(fx) and the outer one has the form ψ(φ(fx)). Only the letter 'F' is common to the two functions, but the letter by itself signifies nothing. This immediately becomes clear if instead of 'F(Fu)' we write '(∃φ):F(φu).φu = Fu'. That disposes of Russell's paradox. (Tractatus Logico-Philosophicus, 3.333)
Russell and Alfred North Whitehead wrote their three-volume Principia Mathematica hoping to achieve what Frege had been unable to do. They sought to banish the paradoxes of naive set theory by employing a theory of types they devised for this purpose. While they succeeded in grounding arithmetic in a fashion, it is not at all evident that they did so by purely logical means. While Principia Mathematica avoided the known paradoxes and allows the derivation of a great deal of mathematics, its system gave rise to new problems.
In any event, Kurt Gödel in 1930–31 proved that while the logic of much of Principia Mathematica, now known as first-order logic, is complete, Peano arithmetic is necessarily incomplete if it is consistent. This is very widely—though not universally—regarded as having shown the logicist program of Frege to be impossible to complete.
In 2001, A Centenary International Conference celebrating the first hundred years of Russell's paradox was held in Munich and its proceedings have been published.
Applied versions
There are some versions of this paradox that are closer to real-life situations and may be easier to understand for non-logicians. For example, the barber paradox supposes a barber who shaves all men who do not shave themselves and only men who do not shave themselves. When one thinks about whether the barber should shave himself or not, a similar paradox begins to emerge.
An easy refutation of the "layman's versions" such as the barber paradox seems to be that no such barber exists, or that the barber is not a man, and so can exist without paradox. The whole point of Russell's paradox is that the answer "such a set does not exist" means that the definition of the notion of set within a given theory is unsatisfactory. Note the difference between the statements "such a set does not exist" and "it is an empty set". It is like the difference between saying "There is no bucket" and saying "The bucket is empty".
A notable exception to the above may be the Grelling–Nelson paradox, in which words and meaning are the elements of the scenario rather than people and hair-cutting. Though it is easy to refute the barber's paradox by saying that such a barber does not (and cannot) exist, it is impossible to say something similar about a meaningfully defined word.
One way that the paradox has been dramatised is as follows: Suppose that every public library has to compile a catalogue of all its books. Since the catalogue is itself one of the library's books, some librarians include it in the catalogue for completeness, while others leave it out, on the grounds that it is self-evidently one of the library's books. Now imagine that all these catalogues are sent to the national library. Some of them include themselves in their listings, others do not. The national librarian compiles two master catalogues—one of all the catalogues that list themselves, and one of all those that do not.
The question is: should these master catalogues list themselves? The 'catalogue of all catalogues that list themselves' is no problem. If the librarian does not include it in its own listing, it remains a true catalogue of those catalogues that do include themselves. If he does include it, it remains a true catalogue of those that list themselves. However, just as the librarian cannot go wrong with the first master catalogue, he is doomed to fail with the second. When it comes to the 'catalogue of all catalogues that do not list themselves', the librarian cannot include it in its own listing, because then it would include itself, and so belong in the other catalogue, that of catalogues that do include themselves. However, if the librarian leaves it out, the catalogue is incomplete. Either way, it can never be a true master catalogue of catalogues that do not list themselves.
Applications and related topics
Russell-like paradoxes
As illustrated above for the barber paradox, Russell's paradox is not hard to extend. Take:
A transitive verb ⟨V⟩ that can be applied to its substantive form.
Form the sentence:
The ⟨V⟩er that ⟨V⟩s all (and only those) who do not ⟨V⟩ themselves,
Sometimes the "all" is replaced by "all ⟨V⟩ers".
An example would be "paint":
The painter that paints all (and only those) that do not paint themselves.
or "elect"
The elector (representative), that elects all that do not elect themselves.
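The scheme is mechanical enough to express as a string template. Here is a small Python sketch (the helper name and the optional irregular agent form are assumptions made here for illustration):

    def russell_sentence(verb, agent=None):
        """Instantiate the scheme for a transitive verb ⟨V⟩."""
        agent = agent or verb + "er"  # default agent noun; pass one for irregular forms
        return ("The " + agent + " that " + verb + "s all (and only those) "
                "who do not " + verb + " themselves.")

    print(russell_sentence("paint"))            # the painter example above
    print(russell_sentence("shave", "barber"))  # the barber paradox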
In the Season 8 episode of The Big Bang Theory, "The Skywalker Incursion", Sheldon Cooper analyzes the song "Play That Funky Music", concluding that its lyrics present a musical example of Russell's paradox.
Paradoxes that fall in this scheme include:
The barber with "shave".
The original Russell's paradox with "contain": The container (Set) that contains all (containers) that do not contain themselves.
The Grelling–Nelson paradox with "describe": The describer (word) that describes all words that do not describe themselves.
Richard's paradox with "denote": The denoter (number) that denotes all denoters (numbers) that do not denote themselves. (In this paradox, all descriptions of numbers get an assigned number. The term "that denotes all denoters (numbers) that do not denote themselves" is here called Richardian.)
"I am lying.", namely the liar paradox and Epimenides paradox, whose origins are ancient
Russell–Myhill paradox
Related paradoxes
The Burali-Forti paradox, about the order type of all well-orderings
The Kleene–Rosser paradox, showing that the original lambda calculus is inconsistent, by means of a self-negating statement
Curry's paradox (named after Haskell Curry), which does not require negation
The smallest uninteresting integer paradox
Girard's paradox in type theory
| Mathematics | Discrete mathematics | null |
46149 | https://en.wikipedia.org/wiki/GLONASS | GLONASS | GLONASS is a Russian satellite navigation system operating as part of a radionavigation-satellite service. It provides an alternative to the Global Positioning System (GPS) and is the second navigational system in operation with global coverage and of comparable precision.
Satellite navigation devices supporting both GPS and GLONASS have more satellites available, meaning positions can be fixed more quickly and accurately, especially in built-up areas where buildings may obscure the view to some satellites. Owing to its higher orbital inclination, GLONASS supplementation of GPS systems also improves positioning in high latitudes (near the poles).
Development of GLONASS began in the Soviet Union in 1976. Beginning on 12 October 1982, numerous rocket launches added satellites to the system until the completion of the constellation in 1995. In 2001, after a decline in capacity during the late 1990s, the restoration of the system was made a government priority, and funding increased substantially. GLONASS is the most expensive program of Roscosmos, consuming a third of its budget in 2010.
By 2010, GLONASS had achieved full coverage of Russia's territory. In October 2011, the full orbital constellation of 24 satellites was restored, enabling full global coverage. The GLONASS satellites' designs have undergone several upgrades, with the latest version, GLONASS-K2, launched in 2023.
System description
GLONASS is a global navigation satellite system, providing real-time position and velocity determination for military and civilian users. The satellites are located in middle circular orbit at an altitude of 19,100 km with a 64.8° inclination and an orbital period of 11 hours and 16 minutes (every 17 revolutions, completed in 8 sidereal days, a satellite passes over the same location). GLONASS's orbit makes it especially suited for use in high latitudes (north or south), where getting a GPS signal can be problematic.
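The stated orbital period and repeat cycle are mutually consistent, as a quick check shows (the sidereal-day length used below is an approximation supplied here, not a figure from the article):

    # 17 revolutions in 8 sidereal days should match the ~11 h 16 min period.
    SIDEREAL_DAY_H = 23.9345           # hours, approximate
    period_h = 8 * SIDEREAL_DAY_H / 17
    hours, minutes = int(period_h), round((period_h % 1) * 60)
    print(hours, "h", minutes, "min")  # -> 11 h 16 min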
The constellation operates in three orbital planes, with eight evenly spaced satellites in each. A fully operational constellation with global coverage consists of 24 satellites, while 18 satellites are necessary for covering the territory of Russia. To get a position fix, the receiver must be in range of at least four satellites.
Signal
FDMA
GLONASS satellites transmit two types of signals: open standard-precision signal L1OF/L2OF, and obfuscated high-precision signal L1SF/L2SF.
The signals use similar DSSS encoding and binary phase-shift keying (BPSK) modulation as in GPS signals. All GLONASS satellites transmit the same code as their standard-precision signal; however, each transmits on a different frequency using a 15-channel frequency-division multiple access (FDMA) technique spanning either side of 1602.0 MHz, known as the L1 band. The center frequency is 1602 MHz + n × 0.5625 MHz, where n is a satellite's frequency channel number (n = −7, ..., 0, ..., 6; previously n = 0, ..., 13). Signals are transmitted in a 38° cone, using right-hand circular polarization, at an EIRP between 25 and 27 dBW (316 to 500 watts). Note that the 24-satellite constellation is accommodated with only 15 channels by using identical frequency channels to support antipodal (opposite side of planet in orbit) satellite pairs, as these satellites are never both in view of an Earth-based user at the same time.
The L2 band signals use the same FDMA as the L1 band signals, but transmit straddling 1246 MHz with the center frequency 1246 MHz + n × 0.4375 MHz, where n spans the same range as for L1. In the original GLONASS design, only obfuscated high-precision signal was broadcast in the L2 band, but starting with GLONASS-M, an additional civil reference signal L2OF is broadcast with an identical standard-precision code to the L1OF signal.
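Both band plans follow the same linear rule, so a channel's pair of carrier frequencies is easy to compute. A minimal sketch (the function name is illustrative; the channel range follows the text above):

    def glonass_fdma_mhz(n):
        """Return (L1, L2) carrier center frequencies in MHz for channel n."""
        if not -7 <= n <= 6:
            raise ValueError("frequency channel number n must lie in -7..6")
        return 1602.0 + n * 0.5625, 1246.0 + n * 0.4375

    print(glonass_fdma_mhz(-3))  # (1600.3125, 1244.6875)

The L2 channel spacing is exactly 7/9 of the L1 spacing, so every channel preserves the fixed 9:7 ratio between its L1 and L2 carrier frequencies.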
The open standard-precision signal is generated with modulo-2 addition (XOR) of 511 kbit/s pseudo-random ranging code, 50 bit/s navigation message, and an auxiliary 100 Hz meander sequence (Manchester code), all generated using a single time/frequency oscillator. The pseudo-random code is generated with a 9-stage shift register operating with a period of 1 millisecond.
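A 9-stage maximal-length shift register produces exactly 2⁹ − 1 = 511 chips, which at 511 kbit/s repeats every millisecond, as stated. Below is a sketch of such a generator; the feedback taps (stages 5 and 9, i.e. the polynomial x⁹ + x⁵ + 1) are the commonly cited choice for the GLONASS ranging code and should be treated as an assumption here:

    def m_sequence(taps=(5, 9), nstages=9):
        """Generate one period of a maximal-length sequence (m-sequence)."""
        state = [1] * nstages              # any nonzero seed works
        out = []
        for _ in range(2 ** nstages - 1):  # 511 chips for 9 stages
            out.append(state[-1])          # output from the last stage (assumed)
            feedback = 0
            for t in taps:                 # modulo-2 (XOR) feedback
                feedback ^= state[t - 1]
            state = [feedback] + state[:-1]
        return out

    code = m_sequence()
    assert len(code) == 511                # one period = 1 ms at 511 kchips/s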
The navigational message is modulated at 50 bits per second. The superframe of the open signal is 7500 bits long and consists of 5 frames of 30 seconds, taking 150 seconds (2.5 minutes) to transmit the continuous message. Each frame is 1500 bits long and consists of 15 strings of 100 bits (2 seconds for each string), with 85 bits (1.7 seconds) for data and check-sum bits, and 15 bits (0.3 seconds) for the time mark. Strings 1-4 provide immediate data for the transmitting satellite, and are repeated every frame; the data include ephemeris, clock and frequency offsets, and satellite status. Strings 5-15 provide non-immediate data (i.e. almanac) for each satellite in the constellation, with frames I-IV each describing five satellites, and frame V describing the remaining four satellites.
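These figures are internally consistent at the stated 50 bit/s rate, as a short arithmetic check confirms:

    RATE = 50                          # bits per second
    string_bits = 100                  # 85 data/check-sum bits + 15 time-mark bits
    frame_bits = 15 * string_bits      # 1500 bits per frame
    superframe_bits = 5 * frame_bits   # 7500 bits per superframe
    print(string_bits / RATE)          # 2.0 s per string
    print(frame_bits / RATE)           # 30.0 s per frame
    print(superframe_bits / RATE)      # 150.0 s = 2.5 min per superframe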
The ephemerides are updated every 30 minutes using data from the Ground Control segment; they use Earth Centred Earth Fixed (ECEF) Cartesian coordinates in position and velocity, and include lunisolar acceleration parameters. The almanac uses modified orbital elements (Keplerian elements) and is updated daily.
The more accurate high-precision signal is available for authorized users, such as the Russian military, yet unlike the United States P(Y) code, which is modulated by an encrypting W code, the GLONASS restricted-use codes are broadcast in the clear using only security through obscurity. The details of the high-precision signal have not been disclosed. The modulation (and therefore the tracking strategy) of the data bits on the L2SF code has recently changed from unmodulated to 250 bit/s burst at random intervals. The L1SF code is modulated by the navigation data at 50 bit/s without a Manchester meander code.
The high-precision signal is broadcast in phase quadrature with the standard-precision signal, effectively sharing the same carrier wave, but with a ten-times-higher bandwidth than the open signal. The message format of the high-precision signal remains unpublished, although attempts at reverse-engineering indicate that the superframe is composed of 72 frames, each containing 5 strings of 100 bits and taking 10 seconds to transmit, with total length of 36 000 bits or 720 seconds (12 minutes) for the whole navigational message. The additional data are seemingly allocated to critical Lunisolar acceleration parameters and clock correction terms.
Accuracy
At peak efficiency, the standard-precision signal offers horizontal positioning accuracy within 5–10 metres, vertical positioning within 15 metres, a velocity vector measurement within 10 cm/s, and timing within 200 nanoseconds, all based on measurements from four first-generation satellites simultaneously; newer satellites such as GLONASS-M improve on this.
GLONASS uses a coordinate datum named "PZ-90" (Earth Parameters 1990 – Parametry Zemli 1990), in which the precise location of the North Pole is given as an average of its position from 1990 to 1995. This is in contrast to the GPS coordinate datum, WGS 84, which uses the location of the North Pole in 1984. As of 17 September 2007, the PZ-90 datum has been updated to version PZ-90.02, which differs from WGS 84 by less than 40 cm in any given direction. Since 31 December 2013, version PZ-90.11 has been broadcast; it is aligned to the International Terrestrial Reference System and Frame 2008 at epoch 2011.0 at the centimetre level, although ideally a conversion to ITRF2008 should still be applied.
CDMA
Since 2008, new CDMA signals have been under development for use with GLONASS.
The interface control documents for GLONASS CDMA signals were published in August 2016.
According to GLONASS developers, there will be three open and two restricted CDMA signals. The open signal L3OC is centered at 1202.025 MHz and uses BPSK(10) modulation for both data and pilot channels; the ranging code transmits at 10.23 million chips per second, modulated onto the carrier frequency using QPSK with in-phase data and quadrature pilot. The data is error-coded with 5-bit Barker code and the pilot with 10-bit Neuman-Hoffman code.
Open L1OC and restricted L1SC signals are centered at 1600.995 MHz, and open L2OC and restricted L2SC signals are centered at 1248.06 MHz, overlapping with GLONASS FDMA signals. Open signals L1OC and L2OC use time-division multiplexing to transmit pilot and data signals, with BPSK(1) modulation for data and BOC(1,1) modulation for pilot; wide-band restricted signals L1SC and L2SC use BOC (5, 2.5) modulation for both data and pilot, transmitted in quadrature phase to the open signals; this places peak signal strength away from the center frequency of narrow-band open signals.
Binary phase-shift keying (BPSK) is used by standard GPS and GLONASS signals. Binary offset carrier (BOC) is the modulation used by Galileo, modernized GPS, and BeiDou-2.
The navigational message of CDMA signals is transmitted as a sequence of text strings. The message has a variable size: each pseudo-frame usually includes six strings and contains ephemerides for the current satellite (string types 10, 11, and 12 in a sequence) and part of the almanac for three satellites (three strings of type 20). To transmit the full almanac for all current 24 satellites, a superframe of 8 pseudo-frames is required. In the future, the superframe will be expanded to 10 pseudo-frames of data to cover the full 30 satellites.
The message can also contain Earth's rotation parameters, ionosphere models, long-term orbit parameters for GLONASS satellites, and COSPAS-SARSAT messages. The system time marker is transmitted with each string; UTC leap second correction is achieved by shortening or lengthening (zero-padding) the final string of the day by one second, with abnormal strings being discarded by the receiver.
The strings have a version tag to facilitate forward compatibility: future upgrades to the message format will not break older equipment, which will continue to work by ignoring new data (as long as the constellation still transmits old string types), but up-to-date equipment will be able to use additional information from newer satellites.
The navigational message of the L3OC signal is transmitted at 100 bit/s, with each string of symbols taking 3 seconds (300 bits). A pseudo-frame of 6 strings takes 18 seconds (1800 bits) to transmit. A superframe of 8 pseudo-frames is 14,400 bits long and takes 144 seconds (2 minutes 24 seconds) to transmit the full almanac.
The navigational message of the L1OC signal is transmitted at 100 bit/s. The string is 250 bits long and takes 2.5 seconds to transmit. A pseudo-frame is 1500 bits (15 seconds) long, and a superframe is 12,000 bits or 120 seconds (2 minutes).
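The message-timing figures for both open signals follow from the 100 bit/s rate and the six-string, eight-pseudo-frame layout described above; a quick check (the helper name is illustrative):

    def cdma_timing(string_bits, rate=100, strings=6, frames=8):
        """Return (string s, frame bits, superframe bits, superframe s)."""
        frame_bits = string_bits * strings
        superframe_bits = frame_bits * frames
        return string_bits / rate, frame_bits, superframe_bits, superframe_bits / rate

    print(cdma_timing(300))  # L3OC: (3.0, 1800, 14400, 144.0)
    print(cdma_timing(250))  # L1OC: (2.5, 1500, 12000, 120.0)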
The L2OC signal does not transmit any navigational message, only the pseudo-range codes.
The Glonass-K1 test satellite launched in 2011 introduced the L3OC signal. Glonass-M satellites produced since 2014 (s/n 755+) will also transmit the L3OC signal for testing purposes.
Enhanced Glonass-K1 and Glonass-K2 satellites, to be launched from 2023, will feature a full suite of modernized CDMA signals in the existing L1 and L2 bands, which includes L1SC, L1OC, L2SC, and L2OC, as well as the L3OC signal. Glonass-K2 series should gradually replace existing satellites starting from 2023, when Glonass-M launches will cease.
Glonass-KM satellites will be launched by 2025. Additional open signals are being studied for these satellites, based on frequencies and formats used by existing GPS, Galileo, and Beidou/COMPASS signals:
open signal L1OCM using BOC(1,1) modulation centered at 1575.42 MHz, similar to modernized GPS signal L1C, Galileo signal E1, and Beidou/COMPASS signal B1C;
open signal L5OCM using BPSK(10) modulation centered at 1176.45 MHz, similar to the GPS "Safety of Life" (L5), Galileo signal E5a, and Beidou/COMPASS signal B2a;
open signal L3OCM using BPSK(10) modulation centered at 1207.14 MHz, similar to Galileo signal E5b and Beidou/COMPASS signal B2b.
Such an arrangement will allow easier and cheaper implementation of multi-standard GNSS receivers.
With the introduction of CDMA signals, the constellation will be expanded to 30 active satellites by 2025; this may require eventual deprecation of FDMA signals. The new satellites will be deployed into three additional planes, bringing the total to six planes from the current three. The constellation will be supported by the System for Differential Correction and Monitoring (SDCM), a GNSS augmentation system based on a network of ground-based control stations and the communication satellites Luch 5A and Luch 5B.
Six additional Glonass-V satellites, using Tundra orbits in three orbital planes, will be launched starting in 2025; this regional high-orbit segment will offer increased regional availability and a 25% improvement in precision over the Eastern Hemisphere, similar to the Japanese QZSS system and BeiDou-1. The new satellites will form two ground traces with an inclination of 64.8°, an eccentricity of 0.072, a period of 23.9 hours, and ascending node longitudes of 60° and 120°. Glonass-V vehicles are based on the Glonass-K platform and will broadcast the new CDMA signals only. Previously, Molniya, geosynchronous, and inclined orbits were also under consideration for the regional segment.
Satellites
The main contractor of the GLONASS program is Joint Stock Company Information Satellite Systems Reshetnev (ISS Reshetnev, formerly called NPO-PM). The company, located in Zheleznogorsk, is the designer of all GLONASS satellites, in cooperation with the Institute for Space Device Engineering (РНИИ КП) and the Russian Institute of Radio Navigation and Time. Serial production of the satellites is accomplished by the company Production Corporation Polyot in Omsk.
Over the three decades of development, the satellite designs have gone through numerous improvements, and can be divided into three generations: the original GLONASS (since 1982), GLONASS-M (since 2003) and GLONASS-K (since 2011). Each GLONASS satellite has a GRAU designation 11F654, and each of them also has the military "Cosmos-NNNN" designation.
First generation
The true first generation of GLONASS (also called Uragan) satellites were all three-axis-stabilized vehicles, generally weighing 1,250 kg, and were equipped with a modest propulsion system to permit relocation within the constellation. Over time they were upgraded to Block IIa, IIb, and IIv vehicles, with each block containing evolutionary improvements.
Six Block IIa satellites were launched in 1985–1986 with improved time and frequency standards over the prototypes, and increased frequency stability. These spacecraft also demonstrated a 16-month average operational lifetime. Block IIb spacecraft, with two-year design lifetimes, appeared in 1987, of which a total of 12 were launched, but half were lost in launch vehicle accidents. The six spacecraft that made it to orbit worked well, operating for an average of nearly 22 months.
Block IIv was the most prolific of the first generation: used exclusively from 1988 to 2000, and still included in launches through 2005, the series totalled 56 satellites. The design life was three years, but numerous spacecraft exceeded it, with one late model lasting 68 months, nearly double its design life.
Block II satellites were typically launched three at a time from the Baikonur Cosmodrome using Proton-K Blok-DM2 or Proton-K Briz-M boosters. The only exception was when, on two launches, an Etalon geodetic reflector satellite was substituted for a GLONASS satellite.
Second generation
The second generation of satellites, known as Glonass-M, were developed beginning in 1990 and first launched in 2003. These satellites possess a substantially increased lifetime of seven years and weigh slightly more, at 1,480 kg. They are approximately 2.4 m in diameter and 3.7 m high, with a solar array span of 7.2 m, for an electrical power generation capability of 1,600 watts at launch. The aft payload structure houses 12 primary antennas for L-band transmissions. Laser corner-cube reflectors are also carried to aid in precise orbit determination and geodetic research. On-board cesium clocks provide the local clock source. A total of 52 Glonass-M satellites have been produced and launched.
A total of 41 second generation satellites were launched through the end of 2013. As with the previous generation, the second generation spacecraft were launched three at a time using Proton-K Blok-DM2 or Proton-K Briz-M boosters. Some were launched alone with Soyuz-2-1b/Fregat.
In July 2015, ISS Reshetnev announced that it had completed the last GLONASS-M (No. 61) spacecraft and it was putting it in storage waiting for launch, along with eight previously built satellites.
On 22 September 2017, the GLONASS-M No. 52 satellite went into operation, and the orbital grouping again increased to 24 space vehicles.
Third generation
GLONASS-K is a substantial improvement over the previous generation: it is the first unpressurised GLONASS satellite, with a much-reduced mass compared with GLONASS-M. It has an operational lifetime of 10 years, compared to the 7-year lifetime of the second-generation GLONASS-M. It will transmit more navigation signals to improve the system's accuracy, including new CDMA signals in the L3 and L5 bands, which will use modulation similar to modernized GPS, Galileo, and BeiDou. The GLONASS-K series comprises 26 satellites, with satellite indices 65–98, and is widely used by the Russian military.
The new satellites' advanced equipment, made solely from Russian components, will allow the doubling of GLONASS's accuracy. As with the previous satellites, these are three-axis stabilized and nadir pointing, with dual solar arrays. The first GLONASS-K satellite was successfully launched on 26 February 2011.
Owing to their reduced weight, GLONASS-K spacecraft can be launched in pairs from the Plesetsk Cosmodrome launch site using the substantially lower-cost Soyuz-2.1b boosters, or six at a time from the Baikonur Cosmodrome using Proton-K Briz-M launch vehicles.
Ground control
The ground control segment of GLONASS is almost entirely located within former Soviet Union territory, except for several stations in Brazil and one in Nicaragua.
The GLONASS ground segment consists of:
a system control centre;
five Telemetry, Tracking and Command centers;
two Laser Ranging Stations; and
ten Monitoring and Measuring Stations.
Receivers
Companies producing GNSS receivers making use of GLONASS:
Furuno
JAVAD GNSS, Inc
Septentrio
Topcon
C-Nav
Magellan Navigation
Novatel
ComNav technology Ltd.
Leica Geosystems
Hemisphere GNSS
Trimble Inc
u-blox
NPO Progress describes a receiver called GALS-A1, which combines GPS and GLONASS reception.
SkyWave Mobile Communications manufactures an Inmarsat-based satellite communications terminal that uses both GLONASS and GPS.
Some of the latest receivers in the Garmin eTrex line also support GLONASS (along with GPS). Garmin also produces a standalone Bluetooth receiver, the GLO for Aviation, which combines GPS, WAAS, and GLONASS.
Various smartphones from 2011 onwards have integrated GLONASS capability in addition to their pre-existing GPS receivers, with the intention of reducing signal acquisition periods by allowing the device to pick up more satellites than with a single-network receiver, including devices from:
Xiaomi
Sony Ericsson
ZTE
Huawei
Samsung
Apple (since iPhone 4S, concurrently with GPS)
HTC
LG
Motorola
Nokia
Status
Availability
The system requires 18 satellites for continuous navigation services covering all of Russia, and 24 satellites to provide services worldwide. The GLONASS system now provides complete worldwide coverage.
On 2 April 2014, the system experienced a technical failure that resulted in practical unavailability of the navigation signal for around 12 hours.
On 14–15 April 2014, nine GLONASS satellites experienced a technical failure due to software problems.
On 19 February 2016, three GLONASS satellites experienced a technical failure: the batteries of GLONASS-738 exploded, the batteries of GLONASS-737 were depleted, and GLONASS-736 experienced a stationkeeping failure due to human error during maneuvering. GLONASS-737 and GLONASS-736 were expected to be operational again after maintenance, and one new satellite (GLONASS-751) to replace GLONASS-738 was expected to complete commissioning in early March 2016. The full capacity of the satellite group was expected to be restored in the middle of March 2016.
After the launching of two new satellites and maintenance of two others, the full capacity of the satellite group was restored.
Accuracy
According to data from the Russian System of Differential Correction and Monitoring, the precision of GLONASS navigation fixes (for p = 0.95) in latitude and longitude was 4.46–7.38 m, with a mean number of navigation space vehicles (NSVs) of 7–8, depending on the station. In comparison, the precision of GPS navigation fixes over the same period was 2.00–8.76 m, with a mean number of NSVs of 6–11, depending on the station.
Some modern receivers are able to use both GLONASS and GPS satellites together, providing greatly improved coverage in urban canyons and giving a very fast time to fix because over 50 satellites are available. In indoor, urban-canyon, or mountainous areas, accuracy can be greatly improved over using GPS alone. Using both navigation systems simultaneously, the precision of combined GLONASS/GPS fixes was 2.37–4.65 m, with a mean number of NSVs of 14–19, depending on the station.
In May 2009, Anatoly Perminov, then director of Roscosmos, stated that actions were being undertaken to expand GLONASS's constellation and to improve the ground segment in order to increase the system's accuracy to 2.8 m by 2011. In particular, the latest satellite design, GLONASS-K, has the ability to double the system's accuracy once introduced. The system's ground segment is also to undergo improvements. As of early 2012, sixteen positioning ground stations were under construction in Russia and in the Antarctic at the Bellingshausen and Novolazarevskaya bases. New stations will be built around the Southern Hemisphere, from Brazil to Indonesia. Together, these improvements are expected to bring GLONASS's accuracy to 0.6 m or better by 2020. The setup of a GLONASS receiving station in the Philippines is also under negotiation.
| Technology | Navigation | null |
46191 | https://en.wikipedia.org/wiki/Tillage | Tillage | Tillage is the agricultural preparation of soil by mechanical agitation of various types, such as digging, stirring, and overturning. Examples of human-powered tilling methods using hand tools include shoveling, picking, mattock work, hoeing, and raking. Examples of draft-animal-powered or mechanized work include ploughing (overturning with moldboards or chiseling with chisel shanks), rototilling, rolling with cultipackers or other rollers, harrowing, and cultivating with cultivator shanks (teeth).
Tillage that is deeper and more thorough is classified as primary, and tillage that is shallower and sometimes more selective of location is secondary. Primary tillage such as ploughing tends to produce a rough surface finish, whereas secondary tillage tends to produce a smoother surface finish, such as that required to make a good seedbed for many crops. Harrowing and rototilling often combine primary and secondary tillage into one operation.
"Tillage" can also mean the land that is tilled. The word "cultivation" has several senses that overlap substantially with those of "tillage". In a general context, both can refer to agriculture. Within agriculture, both can refer to any kind of soil agitation. Additionally, "cultivation" or "cultivating" may refer to an even narrower sense of shallow, selective secondary tillage of row crop fields that kills weeds while sparing the crop plants.
Definitions
Primary tillage loosens the soil and mixes in fertilizer or plant material, resulting in soil with a rough texture.
Secondary tillage produces finer soil and sometimes shapes the rows, preparing the seed bed. It also provides weed control throughout the growing season during the maturation of the crop plants, unless such weed control is instead achieved with low-till or no-till methods involving herbicides.
The seedbed preparation can be done with harrows (of which there are many types and subtypes), dibbles, hoes, shovels, rotary tillers, subsoilers, ridge- or bed-forming tillers, rollers, or cultivators.
The weed control, to the extent that it is done via tillage, is usually achieved with cultivators or hoes, which disturb the top few centimeters of soil around the crop plants but with minimal disturbance of the crop plants themselves. The tillage kills the weeds via two mechanisms: uprooting them, burying their leaves (cutting off their photosynthesis), or a combination of both. Weed control both prevents the crop plants from being outcompeted by the weeds (for water and sunlight) and prevents the weeds from reaching their seed stage, thus reducing future weed population aggressiveness.
History
Tilling was first performed via human labor, sometimes involving slaves. Hoofed animals could also be used to till soil by trampling, as could pigs, whose natural instinct is to root the ground regularly if allowed to. The wooden plow was then invented. It is difficult to pinpoint the exact date of its invention, but the earliest evidence of plow usage dates back to around 4000 BCE in Mesopotamia (modern-day Iraq). It could be pulled by human labor, or by a mule, ox, elephant, water buffalo, or similar sturdy animal. Horses are generally unsuitable, though breeds such as the Clydesdale were bred as draft animals.
Tilling could at times be very labor-intensive, an aspect discussed in the 16th-century French agronomic text written by Charles Estienne.
The popularity of tillage as an agricultural technique in early modern times had to do with theories about plant biology proposed by European thinkers. In 1731, English writer Jethro Tull published the book "Horse-Hoeing Husbandry: An Essay on the Principles of Vegetation and Tillage," which argued that soil needed to be pulverized into fine powder for plants to make use of it. Tull believed that, since water, air, and heat were clearly not the primary substance of a plant, plants were made of earth, and thus had to consume very small pieces of earth as food. Tull wrote that each subsequent tillage of the soil would increase its fertility, and that it was impossible to till the soil too much. However, scientific observation has shown that the opposite is true; tillage causes soil to lose structural qualities that allow plant roots, water, and nutrients to penetrate it, accelerates soil loss by erosion, and results in soil compaction.
The steel plow allowed farming in the American Midwest, where tough prairie grasses and rocks caused trouble. Soon after 1900, the farm tractor was introduced, making modern large-scale agriculture possible. However, the destruction of the prairie grasses and tillage of the fertile topsoil of the American Midwest caused the Dust Bowl, in which the soil was blown away and stirred up into dust storms that blackened the sky. This prompted reconsideration of tillage techniques, but in the United States as of 2019, an estimated 3 trillion pounds of soil were lost to erosion, while adoption of improved erosion-control techniques was still not widespread. In the mid-1930s, Frank and Herbert Petty of Doncaster, Victoria, Australia, developed the Petty Plough. This steerable plough could be pulled by either two horses or a tractor, and its disc wheels could be steered in unison or separately, allowing the operator to plough the center of rows as well as between and around orchard trees.
Types
Primary and secondary tillage
Primary tillage is usually conducted after the last harvest, when the soil is wet enough to allow plowing but also allows good traction. Some soil types can be plowed dry. The objective of primary tillage is to attain a reasonable depth of soft soil, incorporate crop residues, kill weeds, and to aerate the soil. Secondary tillage is any subsequent tillage, to incorporate fertilizers, reduce the soil to a finer tilth, level the surface, or control weeds.
Reduced tillage
Reduced tillage leaves between 15 and 30% crop residue cover on the soil, or 500 to 1,000 pounds per acre (560 to 1,100 kg/ha) of small grain residue, during the critical erosion period. This may involve the use of a chisel plow, field cultivators, or other implements. See the general comments below for how implement choice affects the amount of residue.
Intensive tillage
Intensive tillage leaves less than 15% crop residue cover or less than 500 pounds per acre (560 kg/ha) of small grain residue. This type of tillage is often referred to as conventional tillage, but as conservation tillage is now more widely used than intensive tillage (in the United States), it is often not appropriate to refer to this type of tillage as conventional. Intensive tillage often involves multiple operations with implements such as a moldboard, disk, or chisel plow. After this, a finisher with a harrow, rolling basket, and cutter can be used to prepare the seed bed. There are many variations.
Conservation tillage
Conservation tillage leaves at least 30% of crop residue on the soil surface, or at least 1,000 lb/ac (1,100 kg/ha) of small grain residue on the surface during the critical soil erosion period. This slows water movement, which reduces the amount of soil erosion. Additionally, conservation tillage has been found to benefit predatory arthropods that can enhance pest control. Conservation tillage also benefits farmers by reducing fuel consumption and soil compaction. By reducing the number of times the farmer travels over the field, significant savings in fuel and labor are made.
Conservation tillage is used on over 370 million acres, mostly in South America, Oceania and North America. In most years since 1997, conservation tillage was used in US cropland more than intensive or reduced tillage.
However, conservation tillage delays soil warming by reducing the exposure of dark earth to the warmth of the spring sun, thus delaying the planting of the next year's spring crop of corn.
No-till – plows, disks, et cetera are not used. Aims for 100% ground cover.
Strip-till – Narrow strips are tilled where seeds will be planted, leaving the soil in between the rows untilled.
Mulch-till – full-width tillage (100% soil disturbance) that leaves crop residue on the surface as a mulch to conserve heat and moisture.
Rotational tillage – Tilling the soil every two years or less often (every other year, or every third year, etc.).
Ridge-till
Zone tillage
Zone tillage is a form of modified deep tillage in which only narrow strips are tilled, leaving soil in between the rows untilled. This type of tillage agitates the soil to help reduce soil compaction problems and to improve internal soil drainage. It is designed to only disrupt the soil in a narrow strip directly below the crop row. In comparison to no-till, which relies on the previous year's plant residue to protect the soil and aids in postponement of the warming of the soil and crop growth in Northern climates, zone tillage produces a strip approximately five inches wide that simultaneously breaks up plow pans, assists in warming the soil and helps to prepare a seedbed. When combined with cover crops, zone tillage helps replace lost organic matter, slows the deterioration of the soil, improves soil drainage, increases soil water and nutrient holding capacity, and allows necessary soil organisms to survive.
It has been successfully used on farms in the Midwest and West of the US for over 40 years, and is currently used on more than 36% of the U.S. farmland. Some specific states where zone tillage is currently in practice are Pennsylvania, Connecticut, Minnesota, Indiana, Wisconsin, and Illinois.
Its use in the USA's Northern Corn Belt states lacks consistent yield results; however, there is still interest in deep tillage within agriculture. In areas that are not well-drained, deep tillage may be used as an alternative to installing more expensive tile drainage.
Effects
Positive
Plowing:
Loosens and aerates the top layer of soil or horizon A, which facilitates planting the crop.
Helps mix harvest residue, organic matter (humus), and nutrients evenly into the soil.
Mechanically destroys weeds.
Dries the soil before seeding (in wetter climates, tillage aids in keeping the soil drier).
When done in autumn, helps exposed soil crumble over winter through frosting and defrosting, which helps prepare a smooth surface for spring planting.
Can reduce infestations of slugs, cut worms, army worms, and harmful insects as they are attracted by leftover residues from former crops.
Reduces the risk of crop diseases which can be harbored in surface residues.
Negative
Dries the soil before seeding.
Soil loses nutrients, like nitrogen and fertilizer, and its ability to store water.
Decreases the water infiltration rate of soil, resulting in more runoff and erosion because the soil absorbs water more slowly than before.
Tilling the soil results in dislodging the cohesiveness of the soil particles, thereby inducing erosion.
Chemical runoff.
Reduces organic matter in the soil.
Reduces microbes, earthworms, ants, etc.
Destroys soil aggregates.
Compaction of the soil, also known as a tillage pan.
Eutrophication (nutrient runoff into a body of water).
Archaeology
Tilling can damage ancient structures such as long barrows. In the UK, half of the long barrows in Gloucestershire and almost all the burial mounds in Essex have been damaged. According to English Heritage in 2003, ploughing with modern powerful tractors had done as much damage in the last six decades as traditional farming did in the previous six centuries.
General comments
The type of implement makes the most difference, although other factors can have an effect.
Tilling in absolute darkness (night tillage) might reduce the number of weeds that sprout following the tilling operation by half. Light is necessary to break the dormancy of some weed species' seed, so if fewer seeds are exposed to light during the tilling process, fewer will sprout. This may help reduce the amount of herbicides needed for weed control.
Greater speeds, when using certain tillage implements (disks and chisel plows), lead to more intensive tillage (i.e., less residue is on the soil surface).
Increasing the angle of disks causes residues to be buried more deeply. Increasing their concavity makes them more aggressive.
Chisel plows can have spikes or sweeps. Spikes are more aggressive.
Percentage residue is used to compare tillage systems because the amount of crop residue affects the soil loss due to erosion.
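The residue thresholds quoted in this article (under 15% for intensive, 15–30% for reduced, and 30% or more for conservation tillage) amount to a simple classification rule. An illustrative Python sketch (the function name and boundary handling are assumptions made here):

    def classify_tillage(residue_cover_percent):
        """Classify a tillage system by post-tillage residue cover (%)."""
        if residue_cover_percent < 15:
            return "intensive (conventional) tillage"
        if residue_cover_percent < 30:
            return "reduced tillage"
        return "conservation tillage"

    print(classify_tillage(22))  # -> reduced tillage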
Alternatives
Modern agricultural science has greatly reduced the use of tillage. Crops can be grown for several years without any tillage through the use of herbicides to control weeds, crop varieties that tolerate packed soil, and equipment that can plant seeds or fumigate the soil without really digging it up. This practice, called no-till farming, reduces costs and environmental change by reducing soil erosion and diesel fuel usage.
Site preparation of forest land
Site preparation is any of the various treatments applied to a site to ready it for seeding or planting. The purpose is to facilitate the regeneration of that site by the chosen method. Site preparation may be designed to achieve, singly or in any combination, improved access (by reducing or rearranging slash) and amelioration of adverse forest floor, soil, vegetation, or other biotic factors. Site preparation is undertaken to ameliorate one or more constraints that would otherwise be likely to thwart management objectives. A valuable bibliography on the effects of soil temperature and site preparation on subalpine and boreal tree species has been prepared by McKinnon et al. (2002).
Site preparation is the work that is done before a forest area is regenerated. Types of site preparation include burning and mechanical treatments.
Burning
Broadcast burning is commonly used to prepare clearcut sites for planting, e.g., in central British Columbia, and in the temperate region of North America generally.
Prescribed burning is carried out primarily for slash hazard reduction and to improve site conditions for regeneration; all or some of the following benefits may accrue:
a) Reduction of logging slash, plant competition, and humus prior to direct seeding, planting, scarifying or in anticipation of natural seeding in partially cut stands or in connection with seed-tree systems.
b) Reduction or elimination of unwanted forest cover prior to planting or seeding, or prior to preliminary scarification thereto.
c) Reduction of humus on cold, moist sites to favour regeneration.
d) Reduction or elimination of slash, grass, or brush fuels from strategic areas around forested land to reduce the chances of damage by wildfire.
Prescribed burning for preparing sites for direct seeding was tried on a few occasions in Ontario, but none of the burns was hot enough to produce a seedbed that was adequate without supplementary mechanical site preparation.
Changes in soil chemical properties associated with burning include significantly increased pH, which Macadam (1987) in the Sub-boreal Spruce Zone of central British Columbia found persisting more than a year after the burn. Average fuel consumption was 20 to 24 t/ha and the forest floor depth was reduced by 28% to 36%. The increases correlated well with the amounts of slash (both total and ≥7 cm diameter) consumed. The change in pH depends on the severity of the burn and the amount consumed; the increase can be as much as 2 units, a 100-fold change. Deficiencies of copper and iron in the foliage of white spruce on burned clearcuts in central British Columbia might be attributable to elevated pH levels.
Even a broadcast slash fire in a clearcut does not give a uniform burn over the whole area. Tarrant (1954), for instance, found only 4% of a 140-ha slash burn had burned severely, 47% had burned lightly, and 49% was unburned. Burning after windrowing obviously accentuates the subsequent heterogeneity.
Marked increases in exchangeable calcium also correlated with the amount of slash at least 7 cm in diameter consumed. Phosphorus availability also increased, both in the forest floor and in the 0 cm to 15 cm mineral soil layer, and the increase was still evident, albeit somewhat diminished, 21 months after burning. However, another study in the same Sub-boreal Spruce Zone found that although phosphorus availability increased immediately after the burn, it had dropped to below pre-burn levels within 9 months.
Nitrogen will be lost from the site by burning, though concentrations in remaining forest floor were found by Macadam (1987) to have increased in two out of six plots, the others showing decreases. Nutrient losses may be outweighed, at least in the short term, by improved soil microclimate through the reduced thickness of forest floor where low soil temperatures are a limiting factor.
The Picea/Abies forests of the Alberta foothills are often characterized by deep accumulations of organic matter on the soil surface and cold soil temperatures, both of which make reforestation difficult and result in a general deterioration in site productivity; Endean and Johnstone (1974) describe experiments to test prescribed burning as a means of seedbed preparation and site amelioration on representative clear-felled Picea/Abies areas. Results showed that, in general, prescribed burning did not reduce organic layers satisfactorily, nor did it increase soil temperature, on the sites tested. Increases in seedling establishment, survival, and growth on the burned sites were probably the result of slight reductions in the depth of the organic layer, minor increases in soil temperature, and marked improvements in the efficiency of the planting crews. Results also suggested that the process of site deterioration has not been reversed by the burning treatments applied.
Ameliorative intervention
Slash weight (the oven-dry weight of the entire crown and that portion of the stem less than four inches in diameter) and size distribution are major factors influencing the forest fire hazard on harvested sites. Forest managers interested in the application of prescribed burning for hazard reduction and silviculture, were shown a method for quantifying the slash load by Kiil (1968). In west-central Alberta, he felled, measured, and weighed 60 white spruce, graphed (a) slash weight per merchantable unit volume against diameter at breast height (dbh), and (b) weight of fine slash (<1.27 cm) also against dbh, and produced a table of slash weight and size distribution on one acre of a hypothetical stand of white spruce. When the diameter distribution of a stand is unknown, an estimate of slash weight and size distribution can be obtained from average stand diameter, number of trees per unit area, and merchantable cubic foot volume. The sample trees in Kiil's study had full symmetrical crowns. Densely growing trees with short and often irregular crowns would probably be overestimated; open-grown trees with long crowns would probably be underestimated.
The need to provide shade for young outplants of Engelmann spruce in the high Rocky Mountains is emphasized by the U.S. Forest Service. Acceptable planting spots are defined as microsites on the north and east sides of down logs, stumps, or slash, and lying in the shadow cast by such material. Where the objectives of management specify more uniform spacing, or higher densities, than obtainable from an existing distribution of shade-providing material, redistribution or importing of such material has been undertaken.
Access
Site preparation on some sites might be done simply to facilitate access by planters, or to improve access and increase the number or distribution of microsites suitable for planting or seeding.
Wang et al. (2000) determined field performance of white and black spruces 8 and 9 years after outplanting on boreal mixedwood sites following site preparation (Donaren disc trenching versus no trenching) in 2 plantation types (open versus sheltered) in southeastern Manitoba. Donaren trenching slightly reduced the mortality of black spruce but significantly increased the mortality of white spruce. Significant difference in height was found between open and sheltered plantations for black spruce but not for white spruce, and root collar diameter in sheltered plantations was significantly larger than in open plantations for black spruce but not for white spruce. Black spruce open plantation had significantly smaller volume (97 cm3) compared with black spruce sheltered (210 cm3), as well as white spruce open (175 cm3) and sheltered (229 cm3) plantations. White spruce open plantations also had smaller volume than white spruce sheltered plantations. For transplant stock, strip plantations had a significantly higher volume (329 cm3) than open plantations (204 cm3). Wang et al. (2000) recommended that sheltered plantation site preparation should be used.
Mechanical
Up to 1970, no "sophisticated" site preparation equipment had become operational in Ontario, but the need for more efficacious and versatile equipment was increasingly recognized. By this time, improvements were being made to equipment originally developed by field staff, and field testing of equipment from other sources was increasing.
According to J. Hall (1970), in Ontario at least, the most widely used site preparation technique was post-harvest mechanical scarification by equipment front-mounted on a bulldozer (blade, rake, V-plow, or teeth), or dragged behind a tractor (Imsett or S.F.I. scarifier, or rolling chopper). Drag type units designed and constructed by Ontario's Department of Lands and Forests used anchor chain or tractor pads separately or in combination, or were finned steel drums or barrels of various sizes and used in sets alone or combined with tractor pad or anchor chain units.
J. Hall's (1970) report on the state of site preparation in Ontario noted that blades and rakes were found to be well suited to post-cut scarification in tolerant hardwood stands for natural regeneration of yellow birch. Plows were most effective for treating dense brush prior to planting, often in conjunction with a planting machine. Scarifying teeth, e.g., Young's teeth, were sometimes used to prepare sites for planting, but their most effective use was found to be preparing sites for seeding, particularly in backlog areas carrying light brush and dense herbaceous growth. Rolling choppers found application in treating heavy brush but could be used only on stone-free soils. Finned drums were commonly used on jack pine–spruce cutovers on fresh brushy sites with a deep duff layer and heavy slash, and they needed to be teamed with a tractor pad unit to secure good distribution of the slash. The S.F.I. scarifier, after strengthening, had been "quite successful" for 2 years, promising trials were under way with the cone scarifier and barrel ring scarifier, and development had begun on a new flail scarifier for use on sites with shallow, rocky soils. Recognition of the need to become more effective and efficient in site preparation led the Ontario Department of Lands and Forests to adopt the policy of seeking and obtaining for field testing new equipment from Scandinavia and elsewhere that seemed to hold promise for Ontario conditions, primarily in the north. Thus, testing was begun of the Brackekultivator from Sweden and the Vako-Visko rotary furrower from Finland.
Mounding
Site preparation treatments that create raised planting spots have commonly improved outplant performance on sites subject to low soil temperature and excess soil moisture. Mounding can certainly have a big influence on soil temperature. Draper et al. (1985), for instance, documented this as well as the effect it had on root growth of outplants.
The mounds warmed up quickest, and at soil depths of 0.5 cm and 10 cm averaged 10 and 7 °C higher, respectively, than in the control. On sunny days, daytime surface temperature maxima on the mound and organic mat reached 25 °C to 60 °C, depending on soil wetness and shading. Mounds reached mean soil temperatures of 10 °C at 10 cm depth 5 days after planting, but the control did not reach that temperature until 58 days after planting. During the first growing season, mounds had 3 times as many days with a mean soil temperature greater than 10 °C than did the control microsites.
Draper et al.'s (1985) mounds received 5 times the amount of photosynthetically active radiation (PAR) summed over all sampled microsites throughout the first growing season; the control treatment consistently received about 14% of daily background PAR, while mounds received over 70%. By November, fall frosts had reduced shading, eliminating the differential. Quite apart from its effect on temperature, incident radiation is also important photosynthetically. The average control microsite was exposed to levels of light above the compensation point for only 3 hours, i.e., one-quarter of the daily light period, whereas mounds received light above the compensation point for 11 hours, i.e., 86% of the same daily period. Assuming that incident light in the 100–600 μE/m²/s intensity range is the most important for photosynthesis, the mounds received over 4 times the total daily light energy that reached the control microsites.
Orientation of linear site preparation
With linear site preparation, orientation is sometimes dictated by topography or other considerations, but the orientation can often be chosen. It can make a difference. A disk-trenching experiment in the Sub-boreal Spruce Zone in interior British Columbia investigated the effect on growth of young outplants (lodgepole pine) in 13 microsite planting positions: berm, hinge, and trench in each of north, south, east, and west aspects, as well as in untreated locations between the furrows. Tenth-year stem volumes of trees on south-, east-, and west-facing microsites were significantly greater than those of trees on north-facing and untreated microsites. However, planting spot selection was seen to be more important overall than trench orientation.
In a Minnesota study, the N–S strips accumulated more snow but snow melted faster than on E–W strips in the first year after felling. Snow-melt was faster on strips near the centre of the strip-felled area than on border strips adjoining the intact stand. The strips, 50 feet (15.24 m) wide, alternating with uncut strips 16 feet (4.88 m) wide, were felled in a Pinus resinosa stand, aged 90 to 100 years.
| Technology | Horticultural techniques | null |
46193 | https://en.wikipedia.org/wiki/Threshing%20machine | Threshing machine | A threshing machine or a thresher is a piece of farm equipment that separates grain seed from the stalks and husks. It does so by beating the plant to make the seeds fall out. Before such machines were developed, threshing was done by hand with flails: such hand threshing was very laborious and time-consuming, taking about one-quarter of agricultural labour by the 18th century. Mechanization of this process removed a substantial amount of drudgery from farm labour. The first threshing machine was invented circa 1786 by the Scottish engineer Andrew Meikle, and the subsequent adoption of such machines was one of the earlier examples of the mechanization of agriculture. During the 19th century, threshers and mechanical reapers and reaper-binders gradually became widespread and made grain production much less laborious.
Separate reaper-binders and threshers have largely been replaced by machines that combine all of their functions, that is combine harvesters or combines. However, the simpler machines remain important as appropriate technology in low-capital farming contexts, both in developing countries and in developed countries on small farms that strive for especially high levels of self-sufficiency. For example, pedal-powered threshers are a low-cost option, and some Amish sects use horse-drawn binders and old-style threshers.
As the verb thresh is cognate with the verb thrash (and synonymous in the grain-beating sense), the names thrashing machine and thrasher are (less common) alternate forms.
Early social impacts
The Swing Riots in the UK were partly a result of the threshing machine. Following years of war, high taxes and low wages, farm labourers finally revolted in 1830. They had faced unemployment for years, owing to the widespread introduction of the threshing machine and the policy of enclosing fields. No longer were thousands of men needed to tend the crops; a few would suffice. For labourers facing fewer jobs, lower wages and no prospects, the threshing machine was the final straw, placing them on the brink of starvation. The Swing Rioters smashed threshing machines and threatened farmers who had them. The riots were dealt with very harshly. Nine of the rioters were hanged and a further 450 were transported to Australia.
Later adoption
Early threshing machines were hand-fed and horse-powered. Some were housed in a specially constructed building, a gin gang, which would be attached to a threshing barn. They were small by today's standards, about the size of an upright piano. Later machines were steam-powered, driven by a portable engine or traction engine. Isaiah Jennings, a skilled inventor, created a small thresher that did not harm the straw in the process. In 1834, John Avery and Hiram Abial Pitts devised significant improvements to a machine that automatically threshes and separates grain from the chaff, freeing farmers from a slow and laborious process. Avery and Pitts were granted United States patent #542 on December 29, 1837.
John Ridley, an Australian inventor, also developed a threshing machine in South Australia in 1843.
The 1881 Household Cyclopedia said of Meikle's machine:
"Since the invention of this machine, Mr. Meikle and others have progressively introduced a variety of improvements, all tending to simplify the labour, and to augment the quantity of the work performed. When first erected, though the grain was equally well separated from the straw, yet as the whole of the straw, chaff, and grain, was indiscriminately thrown into a confused heap, the work could only with propriety be considered as half executed. By the addition of rakes, or shakers, and two pairs of fanners, all driven by the same machinery, the different processes of thrashing, shaking, and winnowing are now all at once performed, and the grain immediately prepared for the public market. When it is added, that the quantity of grain gained from the superior powers of the machine is fully equal to a twentieth part of the crop, and that, in some cases, the expense of thrashing and cleaning the grain is considerably less than what was formerly paid for cleaning it alone, the immense saving arising from the invention will at once be seen."
"The expense of horse labour, from the increased value of the animal and the charge of his keeping, being an object of great importance, it is recommended that, upon all sizable farms, that is to say, where two hundred acres [800,000 m²], or upwards, of grain are sown, the machine should be worked by wind, unless where local circumstances afford the conveniency of water. Where coals are plenty and cheap, steam may be advantageously used for working the machine."
Steam-powered machines used belts connected to a traction engine; often both engine and thresher belonged to a contractor who toured the farms of a district. Steam remained a viable commercial option until the early post-WWII years.
Modern developments
In Europe and Americas
Modern-day combine harvesters (or simply combines) operate on the same principles and use the same components as the original threshing machines built in the 19th century. Combines also perform the reaping operation at the same time. The name combine is derived from the fact that the two steps are combined in a single machine. Also, most modern combines are self-powered (usually by a diesel engine) and self-propelled, although tractor-powered, pull-type combine models were offered by John Deere and Case International into the 1990s.
Today, as in the 19th century, threshing begins with a cylinder and concave. The cylinder has sharp serrated bars and rotates at high speed (about 500 RPM), so that the bars beat against the entire plant as it is mechanically fed from the reaping equipment at the front of the combine into the gap between the concave and the rotating beater/cylinder. The concave is curved to match the curve of the cylinder, and the grain, now separated from the plant stalks, falls immediately through grated openings in the concave as it is beaten. The motion of the rotating cylinder thrusts the remaining straw and chaff toward the rear of the machine.
Whilst the majority of the grain falls through the concave, the straw is carried by a set of "walkers" to the rear of the machine, allowing any grain and chaff still in the straw to fall below. Below the straw walkers, a fan blows a stream of air across the grain, carrying dust and small bits of crushed plant material out of the back of the combine. The residue falls to the ground and is occasionally collected for other purposes, such as fodder.
The grain, either coming through the concave or the walkers, meets a set of sieves mounted on an assembly called a shoe, which is shaken mechanically. The top sieve has larger openings and serves to remove large pieces of chaff from the grain. The lower sieve separates clean grain, which falls through, from incompletely threshed pieces. The incompletely threshed grain is returned to the cylinder by means of a system of conveyors, where the process repeats.
Some threshing machines were equipped with a bagger, which invariably held two bags, one being filled while the other was replaced with an empty one. A worker called a sewer removed and replaced the bags, and sewed full bags shut with a needle and thread. Other threshing machines would discharge grain from a conveyor, for bagging by hand. Combines are equipped with a grain tank, which accumulates grain for deposit in a truck or wagon.
A large amount of chaff and straw would accumulate around a threshing machine, and several innovations, such as the air chaffer, were developed to deal with this. Combines generally chop and disperse straw as they move through the field, though the chopping is disabled when the straw is to be baled, and chaff collectors are sometimes used to prevent the dispersal of weed seed throughout a field.
The corn sheller was almost identical in design, with slight modifications to deal with the larger kernel size and presence of cobs. Modern-day combines can be adjusted to work with any grain crop and many unusual seed crops.
Both the older and modern machines require a good deal of effort to operate. The concave clearance, cylinder speed, fan velocity, sieve sizes, and feeding rate must be adjusted for crop conditions.
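Those adjustments amount to a small per-crop settings record. The sketch below only illustrates how such settings might be organized; the field names and the wheat values are invented placeholders, not manufacturer recommendations.

```python
from dataclasses import dataclass

@dataclass
class ThreshingSettings:
    """One operator-tunable configuration; all values here are hypothetical."""
    concave_clearance_mm: float  # gap between cylinder bars and concave
    cylinder_speed_rpm: int      # threshing-cylinder speed
    fan_velocity_rpm: int        # cleaning-fan speed
    top_sieve_mm: float          # chaffer (top sieve) opening
    bottom_sieve_mm: float       # lower cleaning-sieve opening
    feed_rate_t_per_h: float     # crop feed rate into the machine

# An invented example setting for dry wheat:
dry_wheat = ThreshingSettings(
    concave_clearance_mm=12.0, cylinder_speed_rpm=500, fan_velocity_rpm=900,
    top_sieve_mm=16.0, bottom_sieve_mm=8.0, feed_rate_t_per_h=25.0)
print(dry_wheat)
```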
Another development in Asia
From the early 20th century, petrol- or diesel-powered threshing machines, designed especially to thresh rice, the most important crop in Asia, have been developed along different lines from the modern combine.
Even after the combine was invented and became popular, a compact wheeled thresher known as a harvester remained in use, and such machines are still available from a Japanese agricultural manufacturer. The compact machine is convenient in the small terraced fields of mountainous areas, where a large machine such as a combine cannot operate.
People there use this harvester together with a modern compact binder.
Preservation
A number of older threshing machines have survived into preservation. They are often to be seen in operation at live steam festivals and traction engine rallies such as the Great Dorset Steam Fair in England, and the Western Minnesota Steam Threshers Reunion in northwest Minnesota.
Musical references
Irish songwriter John Duggan immortalised the threshing machine in the song "The Old Thrashing Mill". The song has been recorded by Foster and Allen and Brendan Shine.
On the Alan Lomax collection Songs of Seduction (Rounder Select, 2000), there is a bawdy Irish folk song called "The Thrashing Machine" sung by tinker Annie O'Neil, as recorded in the early 20th century.
In his film score for Of Mice and Men (1939), and subsequently in his collection Music for the Movies (1942), American composer Aaron Copland titled a section of the score "Threshing Machines" to suit a scene in the Lewis Milestone film in which Curley threatens Slim over giving May a puppy, while many of the itinerant workers are standing around or working on threshers.
In the song "Thrasher" from the album Rust Never Sleeps, Neil Young compares the modern threshing machine's technique of separating wheat from wheat stalks to the natural forces of time that separate close friends from one another.
Threshing machines appear in Twenty One Pilots' music video for the song "House of Gold".
The song "The Thrashing Machine" by Chad Morgan depicts Chadwick trying to impress a girl by showing her his threshing machine.
| Technology | Farm and garden machinery | null |
46211 | https://en.wikipedia.org/wiki/Tuna | Tuna | A tuna (plural: tunas or tuna) is a saltwater fish that belongs to the tribe Thunnini, a subgrouping of the Scombridae (mackerel) family. The Thunnini comprise 15 species across five genera, the sizes of which vary greatly, ranging from the bullet tuna (max length: , weight: ) up to the Atlantic bluefin tuna (max length: , weight: ), which averages and is believed to live up to 50 years.
Tuna, opah, and mackerel sharks are the only species of fish that can maintain a body temperature higher than that of the surrounding water. An active and agile predator, the tuna has a sleek, streamlined body, and is among the fastest-swimming pelagic fish – the yellowfin tuna, for example, is capable of speeds of up to . Greatly inflated speeds can be found in early scientific reports and are still widely reported in the popular literature.
Found in warm seas, the tuna is commercially fished extensively as a food fish, and is popular as a bluewater game fish. As a result of overfishing, some tuna species, such as the southern bluefin tuna, are threatened with extinction.
Etymology
The term "tuna" comes from Spanish atún < Andalusian Arabic at-tūn, assimilated from al-tūn [Modern Arabic ] : 'tuna fish' < Middle Latin thunnus. is derived from used for the Atlantic bluefin tuna, that name in turn is ultimately derived from thýnō, meaning "to rush, dart along".
In English, tuna has been referred to as Chicken of the Sea. This name persists today in Japan, where tuna as a food can be called , literally "sea chicken".
Taxonomy
The Thunnini tribe is a monophyletic clade comprising 15 species in five genera:
family Scombridae
tribe Thunnini: tunas
genus Allothunnus: slender tunas
genus Auxis: frigate tunas
genus Euthynnus: little tunas
genus Katsuwonus: skipjack tunas
genus Thunnus: albacores and true tunas
subgenus Thunnus (Thunnus): bluefin group
subgenus Thunnus (Neothunnus): yellowfin group
The cladogram is a tool for visualizing and comparing the evolutionary relationships between taxa, and is read left-to-right as if on a timeline. A cladogram of the family Scombridae illustrates the relationship between the tunas and the other tribes of the family: for example, the skipjack tunas are more closely related to the true tunas than are the slender tunas (the most primitive of the tunas), and the next nearest relatives of the tunas are the bonitos of the tribe Sardini.
True species
The "true" tunas are those that belong to the genus Thunnus. Until recently, it was thought that there were seven Thunnus species, and that Atlantic bluefin tuna and Pacific bluefin tuna were subspecies of a single species. In 1999, Collette established that based on both molecular and morphological considerations, they are in fact distinct species.
The genus Thunnus is further classified into two subgenera: Thunnus (Thunnus) (the bluefin group), and Thunnus (Neothunnus) (the yellowfin group).
Other species
The Thunnini tribe also includes seven additional species of tuna across four genera. They are:
{| class="wikitable"
|-
! colspan="9"| Other tuna species
|-
! style="width:10em" | Common name
! style="width:11em" | Scientific name
! Maximum length
! Common length
! Maximum weight
! Maximum age
! Trophic level
! Source
! style="width:11em" |IUCN status
|-
| Slender tuna
| Allothunnus fallai (Serventy, 1948)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:center;"| 3.74
| style="text-align:center;"|
| Least concern
|-
| Bullet tuna
| Auxis rochei (Risso, 1810)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"| 5 years
| style="text-align:center;"| 4.13
| style="text-align:center;"|
| Least concern
|-
| Frigate tuna
| Auxis thazard (Lacépède, 1800)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"| 5 years
| style="text-align:center;"| 4.34
| style="text-align:center;"|
| Least concern
|-
| Mackerel tuna, Kawakawa
| Euthynnus affinis (Cantor, 1849)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"| 6 years
| style="text-align:center;"| 4.50
| style="text-align:center;"|
| Least concern
|-
| Little tunny
| Euthynnus alletteratus (Rafinesque, 1810)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"| 10 years
| style="text-align:center;"| 4.13
| style="text-align:center;"|
| Least concern
|-
| Black skipjack tuna
| Euthynnus lineatus (Kishinouye, 1920)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:center;"| 3.83
| style="text-align:center;"|
| Least concern
|-
| Skipjack tuna
| Katsuwonus pelamis (Linnaeus, 1758)
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"|
| style="text-align:right;"| 6–12 yrs
| style="text-align:center;"| 3.75
| style="text-align:center;"|
| Least concern
|}
Biology
Description
The tuna is a sleek, elongated and streamlined fish, adapted for speed. It has two closely spaced but separate dorsal fins on its back. The first fin is "depressible" – it can be laid down, flush, in a groove that runs along its back – and is supported by spines. Seven to ten yellow finlets run from the dorsal fins to the tail, which is lunate – curved like a crescent moon – and tapered to pointed tips. A tuna's pelvic fins are located below the base of the pectoral fins. Both the dorsal and pelvic fins retract when the fish is swimming fast.
The tuna's body is countershaded to camouflage it in deeper water when seen from above: its dorsal side is generally a metallic dark blue, while the ventral or under side is silvery, often with an iridescent shine. The caudal peduncle, to which the tail is attached, is quite thin, with three stabilizing horizontal keels on each side.
Physiology
Thunnus are widely but sparsely distributed throughout the oceans of the world, generally in tropical and temperate waters at latitudes ranging between about 45° north and south of the equator. All tunas are able to maintain the temperature of certain parts of their body above the temperature of ambient seawater. For example, bluefin can maintain a core body temperature of , in water as cold as . Unlike other endothermic creatures such as mammals and birds, tuna do not maintain temperature within a relatively narrow range.
Tunas achieve endothermy by conserving the heat generated through normal metabolism. In all tunas, the heart operates at ambient temperature, as it receives cooled blood, and coronary circulation is directly from the gills. The rete mirabile ("wonderful net"), the intertwining of veins and arteries in the body's periphery, allows nearly all of the metabolic heat from venous blood to be "re-claimed" and transferred to the arterial blood via a counter-current exchange system, thus mitigating the effects of surface cooling. This allows the tuna to elevate the temperatures of the highly-aerobic tissues of the skeletal muscles, eyes and brain, which supports faster swimming speeds and reduced energy expenditure, and which enables them to survive in cooler waters over a wider range of ocean environments than those of other fish.
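A minimal numerical sketch of the counter-current idea: warm venous blood heading toward the gills transfers most of its excess heat to the cold arterial blood returning to the muscle. The muscle and water temperatures and the heat-recovery effectiveness below are assumed values for illustration, not measured figures from the text.

```python
def arterial_inflow_temp(t_muscle_c: float, t_ambient_c: float,
                         effectiveness: float) -> float:
    """Temperature of arterial blood after the rete mirabile pre-warms it.

    effectiveness = fraction of the venous blood's excess heat
    (above ambient) recovered by the incoming arterial blood.
    """
    return t_ambient_c + effectiveness * (t_muscle_c - t_ambient_c)

# Assumed values: 25 C muscle, 7 C seawater, 0.95 exchanger effectiveness.
t_in = arterial_inflow_temp(t_muscle_c=25.0, t_ambient_c=7.0, effectiveness=0.95)
print(f"Arterial blood re-enters the muscle at about {t_in:.1f} C")  # ~24.1 C
```

With high effectiveness, almost none of the metabolic heat is lost at the gills, which is the essence of how the rete mirabile lets the fish stay warm in cold water.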
Also unlike most fish, which have white flesh, the muscle tissue of tuna ranges from pink to dark red. The red myotomal muscles derive their color from myoglobin, an oxygen-binding molecule, which tuna express in quantities far higher than most other fish. The oxygen-rich blood further enables energy delivery to their muscles.
For powerful swimming animals like dolphins and tuna, cavitation may be detrimental, because it limits their maximum swimming speed. Even if they have the power to swim faster, dolphins may have to restrict their speed, because collapsing cavitation bubbles on their tail are too painful. Cavitation also slows tuna, but for a different reason. Unlike dolphins, these fish do not feel the bubbles, because they have bony fins without nerve endings. Nevertheless, they cannot swim faster because the cavitation bubbles create a vapor film around their fins that limits their speed. Lesions have been found on tuna that are consistent with cavitation damage.
Fishing
Commerce
Tuna is an important commercial fish. The International Seafood Sustainability Foundation (ISSF) compiled a detailed scientific report on the state of global tuna stocks in 2009, which includes regular updates. According to the ISSF, the most important species for commercial and recreational tuna fisheries are yellowfin (Thunnus albacares), bigeye (T. obesus), bluefin (T. thynnus, T. orientalis, and T. maccoyii), albacore (T. alalunga), and skipjack (Katsuwonus pelamis).
Based on catches from 2007, the report states:
The Australian government alleged in 2006 that Japan had illegally overfished southern bluefin by taking 12,000 to 20,000 tonnes per year instead of the agreed upon 6,000 tonnes; the value of such overfishing would be as much as US$2 billion. Such overfishing has severely damaged bluefin stocks. According to the WWF, "Japan's huge appetite for tuna will take the most sought-after stocks to the brink of commercial extinction unless fisheries agree on more rigid quotas". Japan's Fisheries Research Agency counters that Australian and New Zealand tuna fishing companies under-report their total catches of southern bluefin tuna and ignore internationally mandated total allowable catch totals.
In recent years, opening-day fish auctions at Tokyo's Tsukiji fish market and Toyosu Market have seen record-setting prices for bluefin tuna, reflecting market demand. In each of 2010, 2011, 2012, 2013 and 2019, new record prices were set for a single fish – the current record is 333.6 million Japanese yen (US$3.1 million) for a bluefin, or a unit price of JP¥1,200,000/kg (US$5,057/lb). The opening auction price for 2014 plummeted to less than 5% of the previous year's price, which had drawn complaints for climbing "way out of line". A summary of record-setting auctions is shown in the following table (highlighted values indicate new world records):
In November 2011, a different record was set when a fisherman in Massachusetts caught an tuna. It was captured inadvertently using a dragnet. Due to the laws and restrictions on tuna fishing in the United States, federal authorities impounded the fish because it was not caught with a rod and reel. Because of the tuna's deteriorated condition as a result of the trawl net, the fish sold for just under $5,000.
Methods
Beyond their use as food, many tuna species are frequently caught as game, often for recreation or for contests in which money is awarded based on weight. Larger specimens are notorious for putting up a fight while hooked, and have been known to injure the people who try to catch them, as well as to damage their equipment.
The Almadraba is a Phoenician technique for trapping and catching Atlantic bluefin tuna using a maze of nets; it is still used today in Portugal, Spain, Morocco and Italy. In Sicily, the same method is called Tonnara.
Fish farming (cage system)
Tuna ranching
Longline fishing
Purse seines
Pole and line
Harpoon gun
Big game fishing
Fish aggregating device
Association with whaling
In 2005, Nauru, defending its vote from Australian criticism at that year's meeting of the International Whaling Commission, argued that some whale species have the potential to devastate Nauru's tuna stocks, and that Nauru's food security and economy relies heavily on fishing. Despite this, Nauru does not permit whaling in its own waters and does not allow other fishing vessels to take or intentionally interact with marine mammals in its Exclusive Economic Zone. In 2010 and 2011, Nauru supported Australian proposals for a western Pacific-wide ban on tuna purse-seining in the vicinity of marine mammals – a measure which was agreed by the Western and Central Pacific Fisheries Commission at its eighth meeting in March 2012.
Association with dolphins
Dolphins swim beside several tuna species. These include yellowfin tuna in the eastern Pacific Ocean, but not albacore. Tuna schools are believed to associate themselves with dolphins for protection against sharks, which are tuna predators.
Commercial fishing vessels used to exploit this association by searching for dolphin pods. Vessels would encircle the pod with nets to catch the tuna beneath. The nets were prone to entangling dolphins, injuring or killing them. Public outcry and new government regulations, which are now monitored by NOAA, have led to more dolphin-friendly methods, now generally involving lines rather than nets. There are neither universal independent inspection programs nor verification of dolphin safety, so these protections are not absolute. According to Consumers Union, the resulting lack of accountability means claims of tuna being "dolphin safe" should be given little credence.
Fishery practices have changed to be dolphin friendly, which has caused greater bycatch including sharks, turtles and other oceanic fish. Fishermen no longer follow dolphins, but concentrate their fisheries around floating objects such as fish aggregation devices, also known as FADs, which attract large populations of other organisms. Measures taken thus far to satisfy the public demand to protect dolphins can be potentially damaging to other species as well.
Aquaculture
Increasing quantities of high-grade tuna caught at sea are reared in net pens and fed bait fish. In Australia, former fishermen raise southern bluefin tuna (Thunnus maccoyii) and another bluefin species. Farming its close relative, the Atlantic bluefin tuna, Thunnus thynnus, is beginning in the Mediterranean, North America and Japan. Hawaii approved permits for the first U.S. offshore farming of bigeye tuna in water deep in 2009.
Japan is the biggest tuna-consuming nation and is also the leader in tuna farming research. Japan first successfully farm-hatched and raised bluefin tuna in 1979. In 2002, it succeeded in completing the reproduction cycle, and in 2007 it completed a third generation. The farmed breed is known as Kindai tuna; Kindai is the contraction of Kinki University in Japanese (Kinki daigaku). In 2009, Clean Seas, an Australian company which had been receiving assistance from Kinki University, managed to breed southern bluefin tuna in captivity and was awarded second place in Time magazine's World's Best Inventions of 2009.
Food
Fresh and frozen
The fresh or frozen flesh of tuna is widely regarded as a delicacy in most areas where it is shipped, being prepared in a variety of ways. When served as a steak, the meat of most species is known for its thickness and tough texture. In the U.K., supermarkets began flying in fresh tuna steaks in the late 1990s, which helped to increase the popularity of using fresh tuna in cooking; by 2009, celebrity chefs regularly featured fresh tuna in salads, wraps, and char-grilled dishes.
Served raw
Various species of tuna are often served raw in Japanese cuisine as sushi or sashimi.
Commercial sashimi tuna may have its coloration fixed by pumping carbon monoxide (CO) into bags containing the tuna and holding it at 4 °C; for a 2-inch tuna steak, this requires 24 hours. The fish is then vacuum sealed and frozen. In Japan, color fixation using CO is prohibited.
Canned
Tuna is canned in edible oils, in brine, in water, and in various sauces. Tuna may be processed and labeled as "solid", "chunked" ("chunk") or "flaked". When tuna is canned and packaged for sale, the product is sometimes called tuna fish (U.S.), a calque (loan translation) from the German Thunfisch. Canned tuna is sometimes used as food for pets, especially cats.
Australia
Canned tuna was first produced in Australia in 1903 and quickly became popular.
In the early 1980s, canned tuna in Australia was most likely southern bluefin; it has since usually been yellowfin, skipjack, or tongol (labelled "northern bluefin" or "longtail").
Australian standards once required cans of tuna to contain at least 51% tuna, but those regulations were dropped in 2003. The remaining weight is usually oil or water.
United States
The product became more plentiful in the United States in the late 1940s. In 1950, 8,500,000 pounds of canned tuna were produced, and the U.S. Department of Agriculture classified it as a "plentiful food".
In the United States, 52% of canned tuna is used for sandwiches; 22% for tuna salads; and 15% for tuna casseroles and dried, prepackaged meal kits, such as General Mills's Tuna Helper line. Other canned tuna dishes include tuna melts (a type of sandwich where the tuna is mixed with mayonnaise and served on bread with cheese melted on top); salade niçoise (a salad made of tuna, olives, green beans, potatoes, hard-boiled eggs and anchovy dressing); and tuna burgers (served on buns).
In the United States, the Food and Drug Administration (FDA) regulates canned tuna.
Precooked
As tunas are often caught far from where they are processed, poor interim preservation can lead to spoilage. Tuna is typically gutted by hand and later precooked for prescribed times of 45 minutes to three hours. The fish are then cleaned and filleted, canned and sealed, with the dark lateral blood meat often canned separately for pet food (cat or dog). The sealed can is then heated under pressure ("retort cooking") for two to four hours. This process kills any bacteria, but retains any histamine those bacteria may already have produced, so the product may still taste spoiled. The international standard sets the maximum histamine level at 200 milligrams per kilogram. An Australian study of 53 varieties of unflavored canned tuna found none to exceed the safe histamine level, although some had "off" flavors.
Light and white
In some markets, depending upon the color of the flesh of the tuna species, the can is marked as "light" or "white" meat, with "light" meaning a greyish pink color and "white" meaning a light pink color. In the United States, only albacore can legally be sold in canned form as "white meat tuna"; in other countries, yellowfin is also acceptable.
Ventresca tuna
Ventresca tuna (from ventre, the Italian word for belly), is a luxury canned tuna, from the fatty bluefin tuna belly, also used in sushi as toro.
Nutrition
Canned light tuna in oil is 29% protein, 8% fat, 60% water, and contains no carbohydrates, while providing 200 calories in a 100 gram reference amount. It is a rich source (20% or more of the Daily Value, DV) of phosphorus (44% DV) and vitamin D (45% DV), and a moderate source of iron (11% DV).
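Those macronutrient figures can be cross-checked with the standard Atwater factors (roughly 4 kcal/g for protein and carbohydrate, 9 kcal/g for fat); this back-of-the-envelope sketch is illustrative and ignores rounding in the source data.

```python
# Atwater factors: ~4 kcal/g protein, ~9 kcal/g fat, ~4 kcal/g carbohydrate.
protein_g, fat_g, carb_g = 29.0, 8.0, 0.0   # per 100 g canned light tuna in oil
kcal = 4 * protein_g + 9 * fat_g + 4 * carb_g
print(f"Estimated energy: {kcal:.0f} kcal per 100 g")  # ~188 kcal, close to the stated 200
```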
Mercury and health
Mercury content in tuna can vary widely. Among those calling for improved warnings about mercury in tuna is the American Medical Association, which adopted a policy that physicians should help make their patients more aware of the potential risks. A study published in 2008 found that mercury distribution in the meat of farmed tuna is inversely related to the lipid content, suggesting that higher lipid concentration within edible tissues of tuna raised in captivity might, other factors remaining equal, have a diluting effect on mercury content. Mackerel tuna is one species of tuna that is lower in mercury concentration than skipjack or yellowfin, but this species is known as "black meat" or "dark meat" tuna, which is a lower grade for canning because of the color, unfavorable flavor, and poor yield.
In March 2004, the United States FDA issued guidelines recommending that pregnant women, nursing mothers, and children limit their intake of tuna and other predatory fish. The Environmental Protection Agency provides guidelines on how much canned tuna is safe to eat. Roughly speaking, the guidelines recommend one can of light tuna per week for individuals weighing less than , and two cans per week for those who weigh more. In 2007, it was reported that some canned light tuna, such as yellowfin, is significantly higher in mercury than skipjack, which caused Consumers Union and other activist groups to advise pregnant women to refrain from consuming canned tuna. In 2009, a California appeals court upheld a ruling that canned tuna does not need warning labels, as the methylmercury is naturally occurring.
A January 2008 report revealed potentially dangerous levels of mercury in certain varieties of sushi tuna, reporting levels "so high that the Food and Drug Administration could take legal action to remove the fish from the market."
Management and conservation
The main tuna fishery management bodies are the Western and Central Pacific Fisheries Commission, the Inter-American Tropical Tuna Commission, the Indian Ocean Tuna Commission, the International Commission for the Conservation of Atlantic Tunas, and the Commission for the Conservation of Southern Bluefin Tuna. The five gathered for the first time in Kobe, Japan in January 2007. Environmental organizations made submissions on risks to fisheries and species. The meeting concluded with an action plan drafted by some 60 countries or areas. Concrete steps include issuing certificates of origin to prevent illegal fishing and greater transparency in the setting of regional fishing quotas. The delegates were scheduled to meet at another joint meeting in January or February 2009 in Europe.
In 2010, Greenpeace International added the albacore, bigeye tuna, Pacific bluefin tuna, Atlantic bluefin tuna, southern bluefin tuna, and yellowfin tuna to its seafood red list, which are fish "commonly sold in supermarkets around the world, and which have a very high risk of being sourced from unsustainable fisheries."
Bluefin tuna have been widely accepted as being severely overfished, with some stocks at risk of collapse. According to the International Seafood Sustainability Foundation (a global, nonprofit partnership between the tuna industry, scientists, and the World Wide Fund for Nature), Indian Ocean yellowfin tuna, Pacific Ocean (eastern and western) bigeye tuna, and North Atlantic albacore tuna are all overfished. In April 2009, no stock of skipjack tuna (which makes up roughly 60% of all tuna fished worldwide) was considered to be overfished.
The BBC documentary South Pacific, which first aired in May 2009, stated that, should fishing in the Pacific continue at its current rate, populations of all tuna species could collapse within five years. It highlighted huge Japanese and European tuna fishing vessels, sent to the South Pacific international waters after overfishing their own fish stocks to the point of collapse.
A 2010 tuna fishery assessment report, released in January 2012 by the Secretariat of the Pacific Community, supported this finding, recommending that all tuna fishing should be reduced or limited to current levels and that limits on skipjack fishing be considered.
Research indicates that increasing ocean temperatures are taking a toll on the tuna in the Indian Ocean, where rapid warming of the ocean has resulted in a reduction of marine phytoplankton. The bigeye tuna catch rates have also declined abruptly during the past half century, mostly due to increased industrial fisheries, with the ocean warming adding further stress to the fish species.
| Biology and health sciences | Acanthomorpha | null |
46223 | https://en.wikipedia.org/wiki/Sowing | Sowing | Sowing is the process of planting seeds. An area that has had seeds planted in it will be described as a sowed or sown area.
When sowing it is important to:
Use quality seeds
Maintain proper distance between seeds
Plant at correct depth
Ensure the soil is clean, healthy, and free of pathogens (disease-causing microorganisms)
Plants which are usually sown
Among the major field crops, oats, wheat, and rye are sown; grasses and legumes are seeded; and maize and soybeans are planted. In planting, wider rows (generally 75 cm (30 in) or more) are used, and the intent is to have precise, even spacing between individual seeds in the row; various mechanisms have been devised to count out individual seeds at exact intervals.
Depth of sowing
In sowing, little if any soil is placed over the seeds; seeds are generally sown into the soil at a planting depth of about 2–3 times the size of the seed.
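The two rules of thumb just described – a planting depth of roughly 2–3 times the seed size, and even in-row spacing for planted crops – can be expressed as a short calculation. The seed size, target population, and row width below are assumed example values, not figures from the text.

```python
def planting_depth_mm(seed_size_mm: float, multiple: float = 2.5) -> float:
    """Rule of thumb: sow at about 2-3 times the seed's size."""
    return multiple * seed_size_mm

def in_row_spacing_cm(target_plants_per_ha: float, row_width_m: float) -> float:
    """Even spacing along the row that achieves a target plant population."""
    row_metres_per_ha = 10_000 / row_width_m      # total row length in one hectare
    return 100 * row_metres_per_ha / target_plants_per_ha

# Assumed example: an 8 mm maize kernel, 80,000 plants/ha in 75 cm rows.
print(f"Depth: ~{planting_depth_mm(8):.0f} mm")               # ~20 mm
print(f"Spacing: ~{in_row_spacing_cm(80_000, 0.75):.1f} cm")  # ~16.7 cm
```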
Sowing types and patterns
For hand sowing, several sowing types exist; these include:
Flat sowing
Ridge sowing
Wide bed sowing
Several patterns for sowing may be used together with these types; these include:
Rows that are indented at the even rows (so that the seeds are placed in a crossed pattern); this method is much better, as more light may fall on the seedlings as they come out.
Symmetrical grid pattern – using the pattern described in The Garden of Cyrus.
Types of sowing
Hand sowing
Hand sowing (or planting) is the process of casting handfuls of seed over prepared ground: broadcasting, that is, broadcast seeding (from which the technological term is derived). Usually, a drag or harrow is employed to incorporate the seed into the soil. Though labor-intensive for any but small areas, this method is still used in some situations. Practice is required to sow evenly and at the desired rate. A hand seeder can be used for sowing, though it is less of a help than it is for the smaller seeds of grasses and legumes.
Hand sowing may be combined with pre-sowing in seed trays. This allows the plants to come to strength indoors during cold periods (e.g. spring in temperate countries).
Seed drill
In agriculture, most seed is now sown using a seed drill, which offers greater precision; seed is sown evenly and at the desired rate. The drill also places the seed at a measured distance below the soil, so that less seed is required. The standard design uses a fluted feed metering system, which is volumetric in nature; individual seeds are not counted. Rows are typically about 10–30 cm apart, depending on the crop species and growing conditions. Several row opener types are used depending on soil type and local tradition. Grain drills are most often drawn by tractors, but can also be pulled by horses. Pickup trucks are sometimes used, since little draft is required.
A seed rate of about 100 kg of seed per hectare (2 bushels per acre) is typical, though rates vary considerably depending on crop species, soil conditions, and farmer's preference. Excessive rates can cause the crop to lodge, while too thin a rate will result in poor utilisation of the land, competition with weeds and a reduction in the yield.
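To see what a drilled seed rate means on the ground, it can be converted to seeds per square metre using the crop's thousand-kernel weight (TKW). The 40 g TKW below is an assumed, typical-order value for a cereal, not a figure from the text.

```python
def seeds_per_m2(rate_kg_per_ha: float, tkw_g: float) -> float:
    """Convert a drilled seed rate to a seed density.

    rate_kg_per_ha -> grams per m2 is rate / 10; dividing by the
    per-seed mass (tkw_g / 1000) gives seeds per square metre.
    """
    grams_per_m2 = rate_kg_per_ha / 10.0
    return grams_per_m2 / (tkw_g / 1000.0)

# The typical 100 kg/ha rate from the text, with an assumed 40 g TKW:
print(f"{seeds_per_m2(100, 40):.0f} seeds/m^2")  # ~250
```

Raising or lowering the rate scales this density directly, which is why excessive rates crowd the crop (risking lodging) and thin rates leave room for weeds.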
Open field
Open-field planting refers to the form of sowing used historically in agriculture, whereby fields are prepared generically and left open, as the name suggests, before being sown directly with seed. The seed is frequently left uncovered at the surface of the soil before germinating, and is therefore exposed to the prevailing climate and conditions such as storms. This is in contrast to the seedbed method used more commonly in domestic gardening or in more specific (modern) agricultural scenarios, where the seed is placed beneath the soil surface and monitored and manually tended frequently to ensure more successful growth rates and better yields.
Pre-treatment of seed and soil before sowing
Before sowing, certain seeds first require a treatment prior to the sowing process.
This treatment may be seed scarification, stratification, seed soaking or seed cleaning with cold (or medium hot) water.
Seed soaking is generally done by placing seeds in medium-hot water for 24 to 48 hours.
Seed cleaning is done especially with fruit, as the flesh of the fruit around the seed can quickly become prone to attack from insects or pests. Seed washing is generally done by submerging cleaned seeds for 20 minutes in water at 50 °C. This hot (rather than moderately warm) water kills any organisms that may have survived on the skin of the seed. Especially with easily infected tropical fruit such as lychees and rambutans, seed washing with high-temperature water is vital.
In addition to the seed pretreatments mentioned above, seed germination is also assisted when disease-free soil is used. Especially when trying to germinate difficult seed (e.g. certain tropical fruit), prior treatment of the soil (along with the use of the most suitable soil, e.g. potting soil, prepared soil or other substrates) is vital. The two most used soil treatments are pasteurisation and sterilisation. Depending on the necessity, pasteurisation is to be preferred, as it does not kill all organisms. Sterilisation can be done when trying to grow truly difficult crops. To pasteurise the soil, it is heated for 15 minutes in an oven at 120 °C.
| Technology | Agronomical techniques | null |
46238 | https://en.wikipedia.org/wiki/Refrigeration | Refrigeration | Refrigeration is any of various types of cooling of a space, substance, or system to lower and/or maintain its temperature below the ambient one (while the removed heat is ejected to a place of higher temperature). Refrigeration is an artificial, or human-made, cooling method.
Refrigeration refers to the process by which energy, in the form of heat, is removed from a low-temperature medium and transferred to a high-temperature medium. This work of energy transfer is traditionally driven by mechanical means (whether ice or electromechanical machines), but it can also be driven by heat, magnetism, electricity, laser, or other means. Refrigeration has many applications, including household refrigerators, industrial freezers, cryogenics, and air conditioning. Heat pumps may use the heat output of the refrigeration process, and also may be designed to be reversible, but are otherwise similar to air conditioning units.
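Since refrigeration is described here as moving heat from a cold medium to a warm one at the cost of work, the standard figure of merit is worth stating. The relations below are textbook thermodynamics added for reference rather than anything drawn from this article; the Carnot form is an idealized upper bound on any real cycle.

```latex
% Coefficient of performance: heat Q_c removed from the cold space
% per unit of work W supplied, bounded by the Carnot limit.
\mathrm{COP} = \frac{Q_c}{W} = \frac{Q_c}{Q_h - Q_c},
\qquad
\mathrm{COP}_{\mathrm{Carnot}} = \frac{T_c}{T_h - T_c}
```

Here Q_h = Q_c + W is the heat rejected to the warm surroundings, and T_c and T_h are the absolute temperatures of the cold and warm media.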
Refrigeration has had a large impact on industry, lifestyle, agriculture, and settlement patterns. The idea of preserving food dates back to human prehistory, but for thousands of years humans were limited regarding the means of doing so. They used curing via salting and drying, and they made use of natural coolness in caves, root cellars, and winter weather, but other means of cooling were unavailable. In the 19th century, they began to make use of the ice trade to develop cold chains. In the late 19th through mid-20th centuries, mechanical refrigeration was developed, improved, and greatly expanded in its reach. Refrigeration has thus rapidly evolved in the past century, from ice harvesting to temperature-controlled rail cars, refrigerator trucks, and ubiquitous refrigerators and freezers in both stores and homes in many countries. The introduction of refrigerated rail cars contributed to the settlement of areas that were not on earlier main transport channels such as rivers, harbors, or valley trails.
These new settlement patterns sparked the building of large cities which are able to thrive in areas that were otherwise thought to be inhospitable, such as Houston, Texas, and Las Vegas, Nevada. In most developed countries, cities are heavily dependent upon refrigeration in supermarkets in order to obtain their food for daily consumption. The increase in food sources has led to a larger concentration of agricultural sales coming from a smaller percentage of farms. Farms today have a much larger output per person in comparison to the late 1800s. This has resulted in new food sources available to entire populations, which has had a large impact on the nutrition of society.
History
Earliest forms of cooling
The seasonal harvesting of snow and ice is an ancient practice estimated to have begun earlier than 1000 BC. A Chinese collection of lyrics from this time period, known as the Shijing, describes religious ceremonies for filling and emptying ice cellars. However, little is known about the construction of these ice cellars or the purpose of the ice. The next ancient society to record the harvesting of ice may have been the Jews in the book of Proverbs, which reads, "As the cold of snow in the time of harvest, so is a faithful messenger to them who sent him." Historians have interpreted this to mean that the Jews used ice to cool beverages rather than to preserve food. Other ancient cultures such as the Greeks and the Romans dug large snow pits insulated with grass, chaff, or branches of trees as cold storage. Like the Jews, the Greeks and Romans did not use ice and snow to preserve food, but primarily as a means to cool beverages. Egyptians cooled water by evaporation in shallow earthen jars on the roofs of their houses at night. The ancient people of India used this same concept to produce ice. The Persians stored ice in a pit called a Yakhchal and may have been the first group of people to use cold storage to preserve food. In the Australian outback, before a reliable electricity supply was available, many farmers used a Coolgardie safe, consisting of a box frame with hessian (burlap) sides soaked in water. The water would evaporate and thereby cool the interior air, allowing many perishables such as fruit, butter, and cured meats to be kept.
Ice harvesting
Before 1830, few Americans used ice to refrigerate foods due to a lack of ice-storehouses and iceboxes. As these two things became more widely available, individuals used axes and saws to harvest ice for their storehouses. This method proved to be difficult, dangerous, and certainly did not resemble anything that could be duplicated on a commercial scale.
Despite the difficulties of harvesting ice, Frederic Tudor thought that he could capitalize on this new commodity by harvesting ice in New England and shipping it to the Caribbean islands as well as the southern states. In the beginning, Tudor lost thousands of dollars, but eventually turned a profit as he constructed icehouses in Charleston, Virginia and in the Cuban port town of Havana. These icehouses as well as better insulated ships helped reduce ice wastage from 66% to 8%. This efficiency gain influenced Tudor to expand his ice market to other towns with icehouses such as New Orleans and Savannah. This ice market further expanded as harvesting ice became faster and cheaper after one of Tudor's suppliers, Nathaniel Wyeth, invented a horse-drawn ice cutter in 1825. This invention as well as Tudor's success inspired others to get involved in the ice trade and the ice industry grew.
Ice became a mass-market commodity by the early 1830s, with the price of ice dropping from six cents per pound to half a cent per pound. In New York City, ice consumption increased from 12,000 tons in 1843 to 100,000 tons in 1856. Boston's consumption leapt from 6,000 tons to 85,000 tons during that same period. Ice harvesting created a "cooling culture", as the majority of people used ice and iceboxes to store their dairy products, fish, meat, and even fruits and vegetables. These early cold storage practices paved the way for many Americans to accept the refrigeration technology that would soon take over the country.
Refrigeration research
The history of artificial refrigeration began when Scottish professor William Cullen designed a small refrigerating machine in 1755. Cullen used a pump to create a partial vacuum over a container of diethyl ether, which then boiled, absorbing heat from the surrounding air. The experiment even created a small amount of ice, but had no practical application at that time.
In 1758, Benjamin Franklin and John Hadley, a professor of chemistry, collaborated on a project at Cambridge University, England, investigating the principle of evaporation as a means to rapidly cool an object. They confirmed that the evaporation of highly volatile liquids, such as alcohol and ether, could be used to drive down the temperature of an object past the freezing point of water. They conducted their experiment with the bulb of a mercury thermometer as their object, and with a bellows used to quicken the evaporation; they lowered the temperature of the thermometer bulb down to , while the ambient temperature was . They noted that soon after they passed the freezing point of water , a thin film of ice formed on the surface of the thermometer's bulb and that the ice mass was about a thick when they stopped the experiment upon reaching . Franklin wrote, "From this experiment, one may see the possibility of freezing a man to death on a warm summer's day". In 1805, American inventor Oliver Evans described a closed vapor-compression refrigeration cycle for the production of ice by ether under vacuum.
In 1820, the English scientist Michael Faraday liquefied ammonia and other gases by using high pressures and low temperatures, and in 1834, an American expatriate to Great Britain, Jacob Perkins, built the first working vapor-compression refrigeration system in the world. It was a closed-cycle that could operate continuously, as he described in his patent:
I am enabled to use volatile fluids for the purpose of producing the cooling or freezing of fluids, and yet at the same time constantly condensing such volatile fluids, and bringing them again into operation without waste.
His prototype system worked although it did not succeed commercially.
In 1842, a similar attempt was made by American physician, John Gorrie, who built a working prototype, but it was a commercial failure. Like many of the medical experts during this time, Gorrie thought too much exposure to tropical heat led to mental and physical degeneration, as well as the spread of diseases such as malaria. He conceived the idea of using his refrigeration system to cool the air for comfort in homes and hospitals to prevent disease. American engineer Alexander Twining took out a British patent in 1850 for a vapour compression system that used ether.
The first practical vapour-compression refrigeration system was built by James Harrison, a British journalist who had emigrated to Australia. His 1856 patent was for a vapour-compression system using ether, alcohol, or ammonia. He built a mechanical ice-making machine in 1851 on the banks of the Barwon River at Rocky Point in Geelong, Victoria, and his first commercial ice-making machine followed in 1854. Harrison also introduced commercial vapour-compression refrigeration to breweries and meat-packing houses, and by 1861, a dozen of his systems were in operation. He later entered the debate of how to compete against the American advantage of unrefrigerated beef sales to the United Kingdom. In 1873 he prepared the sailing ship Norfolk for an experimental beef shipment to the United Kingdom, which used a cold room system instead of a refrigeration system. The venture was a failure as the ice was consumed faster than expected.
The first gas absorption refrigeration system using gaseous ammonia dissolved in water (referred to as "aqua ammonia") was developed by Ferdinand Carré of France in 1859 and patented in 1860. Carl von Linde, an engineer specializing in steam locomotives and professor of engineering at the Technological University of Munich in Germany, began researching refrigeration in the 1860s and 1870s in response to demand from brewers for a technology that would allow year-round, large-scale production of lager; he patented an improved method of liquefying gases in 1876. His new process made possible using gases such as ammonia, sulfur dioxide (SO2) and methyl chloride (CH3Cl) as refrigerants and they were widely used for that purpose until the late 1920s.
Thaddeus Lowe, an American balloonist, held several patents on ice-making machines. His "Compression Ice Machine" would revolutionize the cold-storage industry. In 1869, he and other investors purchased an old steamship onto which they loaded one of Lowe's refrigeration units and began shipping fresh fruit from New York to the Gulf Coast area, and fresh meat from Galveston, Texas back to New York, but because of Lowe's lack of knowledge about shipping, the business was a costly failure.
Commercial use
In 1842, John Gorrie created a system capable of refrigerating water to produce ice. Although it was a commercial failure, it inspired scientists and inventors around the world. France's Ferdinand Carré was among those inspired, and he created an ice-producing system that was simpler and smaller than that of Gorrie. During the Civil War, cities such as New Orleans could no longer get ice from New England via the coastal ice trade. Carré's refrigeration system became the solution to New Orleans' ice problems and, by 1865, the city had three of Carré's machines. In 1867, in San Antonio, Texas, a French immigrant named Andrew Muhl built an ice-making machine to help service the expanding beef industry before moving it to Waco in 1871. In 1873, the patent for this machine was contracted by the Columbus Iron Works, a company acquired by the W.C. Bradley Co., which went on to produce the first commercial ice-makers in the US.
By the 1870s, breweries had become the largest users of harvested ice. Though the ice-harvesting industry had grown immensely by the turn of the 20th century, pollution and sewage had begun to creep into natural ice, making it a problem in the metropolitan suburbs. Eventually, breweries began to complain of tainted ice. Public concern for the purity of the water from which ice was formed began to increase in the early 1900s with the rise of germ theory. Numerous media outlets published articles connecting diseases such as typhoid fever with natural ice consumption. This caused ice harvesting to become illegal in certain areas of the country. All of these scenarios increased the demand for modern refrigeration and manufactured ice. Ice-producing machines like those of Carré and Muhl were looked to as a means of producing ice to meet the needs of grocers, farmers, and food shippers.
Refrigerated railroad cars were introduced in the US in the 1840s for short-run transport of dairy products, but these used harvested ice to maintain a cool temperature.
The new refrigerating technology first met with widespread industrial use as a means to freeze meat supplies for transport by sea in reefer ships from the British Dominions and other countries to the British Isles. Although not actually the first to achieve successful transportation of frozen goods overseas (the Strathleven had arrived at the London docks on 2 February 1880 with a cargo of frozen beef, mutton and butter from Sydney and Melbourne), the breakthrough is often attributed to William Soltau Davidson, an entrepreneur who had emigrated to New Zealand. Davidson thought that Britain's rising population and meat demand could mitigate the slump in world wool markets that was heavily affecting New Zealand. After extensive research, he commissioned the Dunedin to be refitted with a compression refrigeration unit for meat shipment in 1881. On February 15, 1882, the Dunedin sailed for London with what was to be the first commercially successful refrigerated shipping voyage, and the foundation of the refrigerated meat industry.
The Times commented "Today we have to record such a triumph over physical difficulties, as would have been incredible, even unimaginable, a very few days ago...". The Marlborough – sister ship to the Dunedin – was immediately converted and joined the trade the following year, along with the rival New Zealand Shipping Company vessel Mataurua, while the German steamer Marsala began carrying frozen New Zealand lamb in December 1882. Within five years, 172 shipments of frozen meat were sent from New Zealand to the United Kingdom, of which only 9 had significant amounts of meat condemned. Refrigerated shipping also led to a broader meat and dairy boom in Australasia and South America. J & E Hall of Dartford, England outfitted the SS Selembria with a vapor compression system to bring 30,000 carcasses of mutton from the Falkland Islands in 1886. In the years ahead, the industry rapidly expanded to Australia, Argentina and the United States.
By the 1890s, refrigeration played a vital role in the distribution of food. The meat-packing industry relied heavily on natural ice in the 1880s and continued to rely on manufactured ice as those technologies became available. By 1900, the meat-packing houses of Chicago had adopted ammonia-cycle commercial refrigeration. By 1914, almost every location used artificial refrigeration. The major meat packers, Armour, Swift, and Wilson, had purchased the most expensive units which they installed on train cars and in branch houses and storage facilities in the more remote distribution areas.
By the middle of the 20th century, refrigeration units were designed for installation on trucks or lorries. Refrigerated vehicles are used to transport perishable goods, such as frozen foods, fruit and vegetables, and temperature-sensitive chemicals. Most modern refrigerated vehicles keep the temperature between –40 and +20 °C, and have a maximum payload of around 24,000 kg gross weight (in Europe).
Although commercial refrigeration quickly progressed, it had limitations that prevented it from moving into the household. First, most refrigerators were far too large. Some of the commercial units being used in 1910 weighed between five and two hundred tons. Second, commercial refrigerators were expensive to produce, purchase, and maintain. Lastly, these refrigerators were unsafe. It was not uncommon for commercial refrigerators to catch fire, explode, or leak toxic gases. Refrigeration did not become a household technology until these three challenges were overcome.
Home and consumer use
During the early 1800s, consumers preserved their food by storing food and ice purchased from ice harvesters in iceboxes. In 1803, Thomas Moore patented a metal-lined butter-storage tub which became the prototype for most iceboxes. These iceboxes were used until nearly 1910 and the technology did not progress. In fact, consumers that used the icebox in 1910 faced the same challenge of a moldy and stinky icebox that consumers had in the early 1800s.
General Electric (GE) was one of the first companies to overcome these challenges. In 1911, GE released a household refrigeration unit that was powered by gas. The use of gas eliminated the need for an electric compressor motor and decreased the size of the refrigerator. However, electric companies that were customers of GE did not benefit from a gas-powered unit. Thus, GE invested in developing an electric model. In 1927, GE released the Monitor Top, the first refrigerator to run on electricity.
In 1930, Frigidaire, one of GE's main competitors, synthesized Freon. With the invention of synthetic refrigerants based mostly on chlorofluorocarbon (CFC) chemistry, safer refrigerators were possible for home and consumer use. Freon led to the development of smaller, lighter, and cheaper refrigerators. The average price of a refrigerator dropped from $275 to $154 with the synthesis of Freon. This lower price allowed ownership of refrigerators in American households to exceed 50% by 1940. Freon is a trademark of the DuPont Corporation and refers to these CFC, and later hydrochlorofluorocarbon (HCFC) and hydrofluorocarbon (HFC), refrigerants developed in the late 1920s. These refrigerants were considered at the time to be less harmful than the commonly used refrigerants of the day, including methyl formate, ammonia, methyl chloride, and sulfur dioxide. The intent was to provide refrigeration equipment for home use without danger, and these CFC refrigerants answered that need. In the 1970s, though, the compounds were found to be reacting with atmospheric ozone, an important protection against solar ultraviolet radiation, and their use as refrigerants worldwide was curtailed by the Montreal Protocol of 1987.
Impact on settlement patterns in the United States of America
In the last century, refrigeration allowed new settlement patterns to emerge. This new technology allowed areas to be settled that are not on a natural channel of transport such as a river, valley trail or harbor, and that might otherwise never have been settled. Refrigeration gave early settlers opportunities to expand westward and into unpopulated rural areas. These new settlers, with rich and untapped soil, saw an opportunity to profit by sending raw goods to the eastern cities and states. In the 20th century, refrigeration made "galactic cities" such as Dallas, Phoenix, and Los Angeles possible.
Refrigerated rail cars
The refrigerated rail car (refrigerated van or refrigerator car), along with the dense railroad network, became an exceedingly important link between the marketplace and the farm, allowing for a national opportunity rather than just a regional one. Before the invention of the refrigerated rail car, it was impossible to ship perishable food products long distances. The beef-packing industry made the first demand push for refrigerated cars. The railroad companies were slow to adopt this new invention because of their heavy investments in cattle cars, stockyards, and feedlots. Refrigerated cars were also complex and costly compared to other rail cars, which further slowed their adoption. After this slow start, the beef-packing industry came to dominate the refrigerated rail car business through its control of ice plants and the setting of icing fees. The United States Department of Agriculture estimated that, in 1916, over sixty-nine percent of the cattle killed in the country were slaughtered in plants involved in interstate trade. The same companies that were involved in the meat trade later extended refrigerated transport to vegetables and fruit. The meat-packing companies owned much of the expensive machinery, such as refrigerated cars and cold storage facilities, that allowed them to effectively distribute all types of perishable goods.

During World War I, a national refrigerator car pool was established by the United States Railroad Administration to deal with the problem of idle cars, and it was continued after the war. The idle-car problem was that refrigerated cars sat unused between seasonal harvests: very expensive cars stood in rail yards for a good portion of the year while earning no revenue for their owners. The car pool was a system in which cars were distributed to areas as crops matured, ensuring maximum use of the cars.

Refrigerated rail cars moved eastward from vineyards, orchards, fields, and gardens in the western states to satisfy America's consuming market in the east. The refrigerated car made it possible to transport perishable crops hundreds or even thousands of kilometres. The most noticeable effect was a regional specialization in vegetables and fruits. The refrigerated rail car was widely used for the transportation of perishable goods up until the 1950s. By the 1960s, the nation's interstate highway system was sufficiently complete to allow trucks to carry the majority of perishable food loads, displacing the old system of refrigerated rail cars.
Expansion west and into rural areas
The widespread use of refrigeration allowed a vast number of new agricultural opportunities to open up in the United States. New markets emerged throughout the United States in areas that were previously uninhabited and far removed from heavily populated areas. New agricultural opportunities presented themselves in areas that were considered rural, such as states in the south and in the west. Large-scale shipments from the south and from California began around the same time, although natural ice from the Sierras was used in California rather than the manufactured ice used in the south. Refrigeration allowed many areas to specialize in the growing of specific fruits. California specialized in several fruits – grapes, peaches, pears, plums, and apples – while Georgia became famous specifically for its peaches. In California, the acceptance of the refrigerated rail car led to an increase from 4,500 carloads in 1895 to between 8,000 and 10,000 carloads in 1905. The Gulf States, Arkansas, Missouri and Tennessee entered into strawberry production on a large scale, while Mississippi became the center of the tomato industry. New Mexico, Colorado, Arizona, and Nevada grew cantaloupes. Without refrigeration, none of this would have been possible. By 1917, well-established fruit and vegetable areas that were close to eastern markets felt the pressure of competition from these distant specialized centers.

Refrigeration was not limited to meat, fruit and vegetables; it also encompassed dairy products and dairy farming. In the early twentieth century, large cities got their dairy supply from farms as far as . Dairy products were not as easily transported over great distances as fruits and vegetables, owing to their greater perishability. Refrigeration made production possible in the west, far from eastern markets, so much so that dairy farmers could pay the cost of transportation and still undersell their eastern competitors. Refrigeration and the refrigerated rail car gave opportunities to areas with rich soil far from natural channels of transport such as rivers, valley trails or harbors.
Rise of the galactic city
"Edge city" was a term coined by Joel Garreau, whereas the term "galactic city" was coined by Lewis Mumford. These terms refer to a concentration of business, shopping, and entertainment outside a traditional downtown or central business district in what had previously been a residential or rural area. There were several factors contributing to the growth of these cities such as Los Angeles, Las Vegas, Houston, and Phoenix. The factors that contributed to these large cities include reliable automobiles, highway systems, refrigeration, and agricultural production increases. Large cities such as the ones mentioned above have not been uncommon in history, but what separates these cities from the rest are that these cities are not along some natural channel of transport, or at some crossroad of two or more channels such as a trail, harbor, mountain, river, or valley. These large cities have been developed in areas that only a few hundred years ago would have been uninhabitable. Without a cost efficient way of cooling air and transporting water and food from great distances, these large cities would have never developed. The rapid growth of these cities was influenced by refrigeration and an agricultural productivity increase, allowing more distant farms to effectively feed the population.
Impact on agriculture and food production
Agriculture's role in developed countries has changed drastically in the last century due to many factors, including refrigeration. Statistics from the 2007 census show a large concentration of agricultural sales coming from a small portion of the farms existing in the United States today. This is partly a result of the market created for the frozen meat trade by the first successful shipment of frozen sheep carcasses from New Zealand in the 1880s. As the market continued to grow, regulations on food processing and quality began to be enforced. Eventually, electricity was introduced into rural homes in the United States, which allowed refrigeration technology to continue to expand on the farm, increasing output per person. Today, refrigeration's use on the farm reduces humidity levels, avoids spoilage due to bacterial growth, and assists in preservation.
Demographics
The introduction of refrigeration and the evolution of additional technologies drastically changed agriculture in the United States. At the beginning of the 20th century, farming was a common occupation and lifestyle for United States citizens, and most farmers actually lived on their farms. In 1935, there were 6.8 million farms in the United States and a population of 127 million. Yet, while the United States population has continued to climb, the number of citizens pursuing agriculture continues to decline. Based on the 2007 US Census, fewer than one percent of a population of 310 million people claim farming as an occupation today. However, the growing population has led to a growing demand for agricultural products, which is met through a greater variety of crops, fertilizers, pesticides, and improved technology. Improved technology has decreased the risk and time involved in agricultural management and allows larger farms to increase their output per person to meet society's demand.
Meat packing and trade
Prior to 1882, the South Island of New Zealand had been experimenting with sowing grass and crossbreeding sheep, which gave its farmers immediate economic potential in the exportation of meat. In 1882, the first successful shipment of sheep carcasses was sent from Port Chalmers in Dunedin, New Zealand, to London. By the 1890s, the frozen meat trade had become increasingly profitable in New Zealand, especially in Canterbury, where 50% of exported sheep carcasses came from in 1900. It was not long before Canterbury meat was known for its high quality, creating demand for New Zealand meat around the world. To meet this new demand, farmers improved their feed so that sheep could be ready for slaughter in only seven months. This new method of shipping led to an economic boom in New Zealand by the mid-1890s.
In the United States, the Meat Inspection Act of 1891 was put in place because local butchers felt the refrigerated railcar system was unwholesome. When meat packing began to take off, consumers became nervous about the quality of the meat for consumption. Upton Sinclair's 1906 novel The Jungle brought negative attention to the meat packing industry by bringing to light unsanitary working conditions and the processing of diseased animals. The book caught the attention of President Theodore Roosevelt, and the 1906 Meat Inspection Act was put into place as an amendment to the Meat Inspection Act of 1891. The new act focused on the quality of the meat and the environment in which it was processed.
Electricity in rural areas
In the early 1930s, 90 percent of the urban population of the United States had electric power, in comparison to only 10 percent of rural homes. At the time, power companies did not feel that extending power to rural areas (rural electrification) would produce enough profit to make it worth their while. However, in the midst of the Great Depression, President Franklin D. Roosevelt realized that rural areas would continue to lag behind urban areas in both poverty and production if they were not electrically wired. On May 11, 1935, the president signed an executive order creating the Rural Electrification Administration, also known as the REA. The agency provided loans to fund electric infrastructure in rural areas. In just a few years, 300,000 people in rural areas of the United States had received power in their homes.
While electricity dramatically improved working conditions on farms, it also had a large impact on the safety of food production. Refrigeration systems were introduced to the farming and food distribution processes, which helped in food preservation and kept food supplies safe. Refrigeration also allowed for shipment of perishable commodities throughout the United States. As a result, United States farmers quickly became the most productive in the world, and entire new food systems arose.
Farm use
To reduce humidity levels and spoilage due to bacterial growth, refrigeration is used for meat, produce, and dairy processing in farming today. Refrigeration systems are used most heavily in the warmer months for farm produce, which must be cooled as soon as possible to meet quality standards and increase shelf life. Meanwhile, dairy farms refrigerate milk year round to avoid spoilage.
Effects on lifestyle and diet
In the late 19th century and into the very early 20th century, except for staple foods (sugar, rice, and beans) that needed no refrigeration, the available foods were heavily affected by the seasons and by what could be grown locally. Refrigeration has removed these limitations. Refrigeration played a large part in the feasibility, and then the popularity, of the modern supermarket. Fruits and vegetables out of season, or grown in distant locations, are now available at relatively low prices. Refrigerators have led to a huge increase in meat and dairy products as a portion of overall supermarket sales. As well as changing the goods purchased at the market, the ability to store these foods for extended periods has led to an increase in leisure time: prior to the advent of the household refrigerator, people had to shop daily for the supplies needed for their meals.
Impact on nutrition
The introduction of refrigeration allowed for the hygienic handling and storage of perishables and, as such, promoted output growth, consumption, and the availability of nutrition. The shift in food preservation methods away from salting moved diets toward a more manageable sodium level. The ability to move and store perishables such as meat and dairy led to annual increases of 1.7% in dairy consumption and 1.25% in overall protein intake in the US after the 1890s.
People consumed these perishables not only because it became easier to store them at home, but also because innovations in refrigerated transportation and storage led to less spoilage and waste, driving the prices of these products down. Refrigeration accounts for at least 5.1% of the increase in adult stature (in the US) through improved nutrition, and when the indirect effects associated with improvements in the quality of nutrients and the reduction in illness are additionally factored in, the overall impact becomes considerably larger. Recent studies have also shown a negative relationship between the number of refrigerators in a household and the rate of gastric cancer mortality.
Current applications of refrigeration
Probably the most widely used current applications of refrigeration are for air conditioning of private homes and public buildings, and refrigerating foodstuffs in homes, restaurants and large storage warehouses. The use of refrigerators and walk-in coolers and freezers in kitchens, factories and warehouses for storing and processing fruits and vegetables has allowed adding fresh salads to the modern diet year round, and storing fish and meats safely for long periods.
The optimum temperature range for perishable food storage is .
In commerce and manufacturing, there are many uses for refrigeration. Refrigeration is used to liquefy gases – oxygen, nitrogen, propane, and methane, for example. In compressed air purification, it is used to condense water vapor from compressed air to reduce its moisture content. In oil refineries, chemical plants, and petrochemical plants, refrigeration is used to maintain certain processes at their needed low temperatures (for example, in alkylation of butenes and butane to produce a high-octane gasoline component). Metal workers use refrigeration to temper steel and cutlery. When transporting temperature-sensitive foodstuffs and other materials by trucks, trains, airplanes and seagoing vessels, refrigeration is a necessity.
Dairy products constantly need refrigeration, and it was only discovered in the past few decades that eggs need to be refrigerated during shipment rather than only after arrival at the grocery store. Meats, poultry, and fish all must be kept in climate-controlled environments before being sold. Refrigeration also helps keep fruits and vegetables edible longer.
One of the most influential uses of refrigeration was in the development of the sushi/sashimi industry in Japan. Before refrigeration became widespread, many sushi connoisseurs risked contracting disease. The dangers of unrefrigerated sashimi were not brought to light for decades, owing to the lack of research and healthcare distribution across rural Japan. Around mid-century, the Zojirushi corporation, based in Kyoto, made breakthroughs in refrigerator design, making refrigerators cheaper and more accessible for restaurant proprietors and the general public.
Methods of refrigeration
Methods of refrigeration can be classified as non-cyclic, cyclic, thermoelectric and magnetic.
Non-cyclic refrigeration
This refrigeration method cools a contained area by melting ice or by sublimating dry ice. Perhaps the simplest example is a portable cooler, into which items are placed and ice poured over the top. Regular ice can maintain temperatures near, but not below, the freezing point, unless salt is used to cool the ice further (as in a traditional ice-cream maker). Dry ice can reliably bring the temperature well below the freezing point of water.
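As a rough, textbook-style calculation (not a figure from this article), the cooling available from melting ice follows from the latent heat of fusion of water, about 334 kJ/kg:

$$Q = m L_f \approx 1\,\mathrm{kg} \times 334\,\mathrm{kJ/kg} = 334\,\mathrm{kJ}$$

Each kilogram of ice therefore absorbs roughly 334 kJ while melting at 0 °C, which is why a modest charge of ice can keep a cooler cold for many hours.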
Cyclic refrigeration
This method consists of a refrigeration cycle, in which heat is removed from a low-temperature space or source and rejected to a high-temperature sink with the help of external work. Its inverse is the thermodynamic power cycle, in which heat is supplied from a high-temperature source to the engine, part of the heat being used to produce work and the rest being rejected to a low-temperature sink. Both satisfy the second law of thermodynamics.
A refrigeration cycle describes the changes that take place in the refrigerant as it alternately absorbs and rejects heat while circulating through a refrigerator. The term is also applied to heating, ventilation, air conditioning and refrigeration (HVACR) work, when describing the "process" of refrigerant flow through an HVACR unit, whether it is a packaged or split system.
Heat naturally flows from hot to cold. Work is applied to cool a living space or storage volume by pumping heat from a lower temperature heat source into a higher temperature heat sink. Insulation is used to reduce the work and energy needed to achieve and maintain a lower temperature in the cooled space. The operating principle of the refrigeration cycle was described mathematically by Sadi Carnot in 1824 as a heat engine.
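For orientation, Carnot's analysis places a hard upper bound on the coefficient of performance of any refrigeration cycle operating between a cold space at absolute temperature $T_c$ and a sink at $T_h$ (a standard thermodynamic result, stated here for context rather than drawn from the text above):

$$\mathrm{COP}_{\text{cooling}} = \frac{Q_c}{W} \le \frac{T_c}{T_h - T_c}$$

For example, pumping heat from a cold space at 275 K to a room at 300 K can never exceed a COP of 275/25 = 11; real systems achieve far less.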
The most common types of refrigeration systems use the reverse-Rankine vapor-compression refrigeration cycle, although absorption heat pumps are used in a minority of applications.
Cyclic refrigeration can be classified as:
Vapor cycle, and
Gas cycle
Vapor cycle refrigeration can further be classified as:
Vapor-compression refrigeration
Sorption refrigeration
Vapor-absorption refrigeration
Adsorption refrigeration
Vapor-compression cycle
The vapor-compression cycle is used in most household refrigerators as well as in many large commercial and industrial refrigeration systems. Figure 1 provides a schematic diagram of the components of a typical vapor-compression refrigeration system.
The thermodynamics of the cycle can be analyzed on a diagram as shown in Figure 2. In this cycle, a circulating refrigerant such as a low-boiling hydrocarbon or hydrofluorocarbon enters the compressor as a vapor. From point 1 to point 2, the vapor is compressed at constant entropy and exits the compressor as a superheated vapor at a higher temperature and pressure. From point 2 to point 3 and on to point 4, the superheated vapor travels through the condenser, which first cools the vapor until it starts condensing and then condenses it into a liquid by removing additional heat at constant pressure and temperature. Between points 4 and 5, the liquid refrigerant passes through the expansion valve (also called a throttle valve), where its pressure abruptly decreases, causing flash evaporation and auto-refrigeration of, typically, less than half of the liquid.
That results in a mixture of liquid and vapor at a lower temperature and pressure, as shown at point 5. The cold liquid-vapor mixture then travels through the evaporator coil or tubes and is completely vaporized by cooling the warm air (from the space being refrigerated) blown by a fan across the evaporator coil or tubes. The resulting refrigerant vapor returns to the compressor inlet at point 1 to complete the thermodynamic cycle.
The above discussion is based on the ideal vapor-compression refrigeration cycle and does not take into account real-world effects such as frictional pressure drop in the system, slight thermodynamic irreversibility during the compression of the refrigerant vapor, or non-ideal gas behavior. Vapor-compression refrigerators can be arranged in two stages in cascade refrigeration systems, with the second stage cooling the condenser of the first stage; this arrangement can achieve very low temperatures.
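As a minimal sketch of how the cycle's performance is evaluated, the coefficient of performance per unit mass of refrigerant is the evaporator enthalpy rise divided by the compressor work, using the state-point numbering above. The enthalpy figures here are hypothetical placeholders, not values from any refrigerant table:

```python
# Minimal sketch of an ideal vapor-compression COP calculation.
# State-point numbering follows the cycle description above; the
# enthalpy values are hypothetical placeholders for illustration.

def vapor_compression_cop(h1, h2, h5):
    """COP = evaporator enthalpy rise / compressor work, per unit mass.

    h1: vapor enthalpy at compressor inlet (kJ/kg)
    h2: vapor enthalpy at compressor outlet (kJ/kg)
    h5: mixture enthalpy entering the evaporator (kJ/kg)
    """
    q_evaporator = h1 - h5   # heat absorbed from the refrigerated space
    w_compressor = h2 - h1   # work supplied by the compressor
    return q_evaporator / w_compressor

# Hypothetical numbers purely for illustration:
print(vapor_compression_cop(h1=400.0, h2=430.0, h5=250.0))  # -> 5.0
```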
More information about the design and performance of vapor-compression refrigeration systems is available in the classic Perry's Chemical Engineers' Handbook.
Sorption cycle
Absorption cycle
In the early years of the twentieth century, the vapor absorption cycle using water-ammonia or lithium bromide-water (LiBr-water) systems was widely used. After the development of the vapor compression cycle, the vapor absorption cycle lost much of its importance because of its low coefficient of performance (about one fifth that of the vapor compression cycle). Today, the vapor absorption cycle is used mainly where fuel for heating is available but electricity is not, such as in recreational vehicles that carry LP gas. It is also used in industrial environments where plentiful waste heat overcomes its inefficiency.
The absorption cycle is similar to the compression cycle, except for the method of raising the pressure of the refrigerant vapor. In the absorption system, the compressor is replaced by an absorber which dissolves the refrigerant in a suitable liquid, a liquid pump which raises the pressure and a generator which, on heat addition, drives off the refrigerant vapor from the high-pressure liquid. Some work is needed by the liquid pump but, for a given quantity of refrigerant, it is much smaller than needed by the compressor in the vapor compression cycle. In an absorption refrigerator, a suitable combination of refrigerant and absorbent is used. The most common combinations are ammonia (refrigerant) with water (absorbent), and water (refrigerant) with lithium bromide (absorbent).
Adsorption cycle
The main difference from the absorption cycle is that in the adsorption cycle the refrigerant (adsorbate) may be ammonia, water, methanol, etc., while the adsorbent is a solid, such as silica gel, activated carbon, or zeolite, unlike in the absorption cycle, where the absorbent is a liquid.
Adsorption refrigeration technology has been extensively researched over the past 30 years because the operation of an adsorption refrigeration system is often noiseless, non-corrosive, and environmentally friendly.
Gas cycle
When the working fluid is a gas that is compressed and expanded but does not change phase, the refrigeration cycle is called a gas cycle; the working fluid is most often air. As no condensation or evaporation is intended in a gas cycle, the components corresponding to the condenser and evaporator of a vapor compression cycle are hot and cold gas-to-gas heat exchangers.
The gas cycle is less efficient than the vapor compression cycle because it works on the reverse Brayton cycle instead of the reverse Rankine cycle; the working fluid therefore does not receive and reject heat at constant temperature. In the gas cycle, the refrigeration effect is equal to the product of the specific heat of the gas and the rise in temperature of the gas on the low-temperature side. Therefore, for the same cooling load, a gas refrigeration cycle needs a larger mass flow rate and is bulkier.
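To make the sizing implication concrete, here is a small sketch of the relation just stated, with illustrative numbers rather than figures from the source:

```python
# Gas-cycle sizing relation described above: refrigeration effect per
# unit mass = cp * delta_T, so the mass flow needed for a given cooling
# load is load / (cp * delta_T). Numbers below are illustrative.

def required_mass_flow(load_kw, cp_kj_per_kg_k, delta_t_k):
    return load_kw / (cp_kj_per_kg_k * delta_t_k)  # kg/s

# 10 kW load, air (cp ~ 1.005 kJ/(kg K)), 20 K temperature rise:
print(required_mass_flow(10.0, 1.005, 20.0))  # ~0.5 kg/s of air
```

Half a kilogram of air per second for a modest 10 kW load illustrates why gas-cycle machines tend to be bulky compared with vapor-compression systems, which exploit latent heat.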
Because of their lower efficiency and larger bulk, air cycle coolers are not often used nowadays in terrestrial cooling devices. However, the air cycle machine is very common on gas turbine-powered jet aircraft as cooling and ventilation units, because compressed air is readily available from the engines' compressor sections. Such units also serve the purpose of pressurizing the aircraft.
Thermoelectric refrigeration
Thermoelectric cooling uses the Peltier effect to create a heat flux at the junction between two types of material. The effect is commonly used in camping and portable coolers and for cooling electronic components and small instruments. Peltier coolers are often used where a traditional vapor-compression refrigerator would be impractical or take up too much space, and in cooled image sensors as an easy, compact, and lightweight, if inefficient, way to achieve very low temperatures. In the latter case, two or more Peltier elements are arranged in a cascade refrigeration configuration, stacked on top of each other with each stage larger than the one before it, so that each stage can extract both the pumped heat and the waste heat generated by the previous stages. Peltier cooling has a low COP (efficiency) compared with the vapor-compression cycle, so it emits more waste heat (heat generated by the Peltier element or cooling mechanism) and consumes more power for a given cooling capacity.
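The waste-heat penalty follows from a simple energy balance: everything pumped from the cold side plus all electrical input must be rejected at the hot side. A brief sketch, with illustrative COP values (assumptions, not measured figures):

```python
# Heat balance for a Peltier (or any) cooler: all electrical input ends
# up as waste heat on the hot side along with the pumped heat.
# COP values below are illustrative assumptions.

def heat_rejected(q_cold_w, cop):
    p_in = q_cold_w / cop    # electrical power drawn for this cooling load
    return q_cold_w + p_in   # total heat rejected at the hot side

print(heat_rejected(50.0, cop=0.5))  # Peltier-like COP -> 150 W rejected
print(heat_rejected(50.0, cop=3.0))  # vapor-compression-like COP -> ~67 W
```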
Magnetic refrigeration
Magnetic refrigeration, or adiabatic demagnetization, is a cooling technology based on the magnetocaloric effect, an intrinsic property of magnetic solids. The refrigerant is often a paramagnetic salt, such as cerium magnesium nitrate. The active magnetic dipoles in this case are those of the electron shells of the paramagnetic atoms.
A strong magnetic field is applied to the refrigerant, forcing its various magnetic dipoles to align and putting these degrees of freedom of the refrigerant into a state of lowered entropy. A heat sink then absorbs the heat released by the refrigerant due to its loss of entropy. Thermal contact with the heat sink is then broken so that the system is insulated, and the magnetic field is switched off. This increases the heat capacity of the refrigerant, thus decreasing its temperature below the temperature of the heat sink.
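For an idealized paramagnet obeying Curie's law, the entropy of the dipoles depends only on the ratio of field to temperature, so a perfectly adiabatic demagnetization keeps that ratio constant (a textbook idealization, not a claim from the text above):

$$T_f = T_i \, \frac{B_f}{B_i}$$

This shows why starting from a strong field $B_i$ at a pre-cooled temperature $T_i$ can yield very low final temperatures as the field is reduced toward, but not to, zero.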
Because few materials exhibit the needed properties at room temperature, applications have so far been limited to cryogenics and research.
Other methods
Other methods of refrigeration include the air cycle machine used in aircraft; the vortex tube, used for spot cooling when compressed air is available; thermoacoustic refrigeration, which uses sound waves in a pressurized gas to drive heat transfer and heat exchange; steam jet cooling, popular in the early 1930s for air conditioning large buildings; and thermoelastic cooling, which uses a smart metal alloy stretching and relaxing. Many Stirling cycle heat engines can be run backwards to act as refrigerators, and therefore these engines have a niche use in cryogenics. In addition, there are other types of cryocoolers such as Gifford-McMahon coolers, Joule-Thomson coolers, pulse-tube refrigerators and, for temperatures between 2 mK and 500 mK, dilution refrigerators.
Elastocaloric refrigeration
Another potential solid-state refrigeration technique, and a relatively new area of study, exploits a special property of superelastic materials: they undergo a temperature change when a mechanical stress is applied (the elastocaloric effect). Because superelastic materials deform reversibly at high strains, they exhibit a flattened elastic region in the stress-strain curve, caused by a phase transformation from an austenitic to a martensitic crystal phase.
When a superelastic material is stressed in the austenitic phase, it undergoes an exothermic phase transformation to the martensitic phase, which causes the material to heat up. Removing the stress reverses the process, restores the material to its austenitic phase, and absorbs heat from the surroundings, cooling the material.
The most appealing part of this research is how energy efficient and environmentally friendly this cooling technology could be. The materials used, most commonly shape-memory alloys such as nitinol and Cu-Zn-Al, provide a non-toxic source of emission-free refrigeration. Nitinol is one of the more promising alloys, with an output heat of about 66 J/cm3 and a temperature change of about 16–20 K. Because some shape-memory alloys are difficult to manufacture, alternative materials such as natural rubber have been studied. Even though rubber does not give off as much heat per volume (12 J/cm3) as the shape-memory alloys, it generates a comparable temperature change of about 12 K and operates at a suitable temperature range, low stresses, and low cost.
The main challenge, however, comes from the potential energy losses, in the form of hysteresis, often associated with this process. Since most of these losses come from incompatibilities between the two phases, proper alloy tuning is necessary to reduce losses and increase reversibility and efficiency. Balancing the transformation strain of the material against the energy losses enables a large elastocaloric effect and, potentially, a new alternative for refrigeration.
Fridge Gate
The Fridge Gate method is a theoretical application of using a single logic gate to drive a refrigerator in the most energy-efficient way possible without violating the laws of thermodynamics. It operates on the fact that there are two energy states in which a particle can exist: the ground state (g) and the excited state (e). The excited state carries a little more energy than the ground state, small enough that the transition occurs with high probability. There are three components or particle types associated with the fridge gate: the first is on the interior of the refrigerator, the second is on the outside, and the third is connected to a power supply which heats it every so often so that it can reach the e state and replenish the source. In the cooling step, on the inside of the refrigerator, the g-state particle absorbs energy from ambient particles, cooling them, and itself jumps to the e state. In the second step, on the outside of the refrigerator, where the particles are also in an e state, the particle falls to the g state, releasing energy and heating the outside particles. In the third and final step, the power supply moves a particle in the e state, and when it falls to the g state it induces an energy-neutral swap where the interior e particle is replaced by a new g particle, restarting the cycle.
Passive systems
One study found that combining a passive daytime radiative cooling system with thermal insulation and evaporative cooling increased ambient cooling power by 300% compared with a stand-alone radiative cooling surface, which could extend the shelf life of food by 40% in humid climates and 200% in desert climates without refrigeration. The system's evaporative cooling layer would require water "re-charges" every 10 days to a month in humid areas and every 4 days in hot and dry areas.
Capacity ratings
The refrigeration capacity of a refrigeration system is the product of the evaporator's enthalpy rise and the evaporator's mass flow rate. Capacity is often given in kW or BTU/h; domestic and commercial refrigerators may be rated in kJ/s or BTU/h of cooling. For commercial and industrial refrigeration systems, the kilowatt (kW) is the basic unit of refrigeration, except in North America, where both the ton of refrigeration (TR) and BTU/h are used.
A refrigeration system's coefficient of performance (CoP) is very important in determining a system's overall efficiency. It is defined as refrigeration capacity in kW divided by the power input in kW. While CoP is a very simple measure of performance, it is typically not used for industrial refrigeration in North America; owners and manufacturers of these systems typically use the performance factor (PF) instead. A system's PF is defined as its power input in horsepower divided by its refrigeration capacity in TR. Both CoP and PF can be applied either to an entire system or to individual components; for example, an individual compressor can be rated by comparing the energy needed to run it against the expected refrigeration capacity based on inlet volume flow rate. Note that both CoP and PF for a refrigeration system are only defined at specific operating conditions, including temperatures and thermal loads; moving away from the specified operating conditions can dramatically change a system's performance.
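A brief sketch of how the two ratings relate, using standard unit conversions (1 TR is approximately 3.517 kW; 1 hp is approximately 0.7457 kW); the operating point below is hypothetical:

```python
# Sketch converting between CoP and performance factor (PF), using
# standard unit conversions. Operating-point figures are illustrative.

KW_PER_TR = 3.517    # 1 ton of refrigeration ~ 3.517 kW
KW_PER_HP = 0.7457   # 1 horsepower ~ 0.7457 kW

def cop(capacity_kw, input_kw):
    return capacity_kw / input_kw

def performance_factor(input_hp, capacity_tr):
    return input_hp / capacity_tr   # hp per ton of refrigeration

# A hypothetical 100 TR system drawing 85 kW:
capacity_kw = 100 * KW_PER_TR             # ~351.7 kW
input_hp = 85.0 / KW_PER_HP               # ~114 hp
print(cop(capacity_kw, 85.0))             # ~4.1
print(performance_factor(input_hp, 100))  # ~1.14 hp/TR
```

Note the inverse sense of the two figures: a higher CoP is better, while a lower PF (less input power per ton of cooling) is better.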
Air conditioning systems used in residential applications typically use SEER (Seasonal Energy Efficiency Ratio) for the energy performance rating. Air conditioning systems for commercial applications often use EER (Energy Efficiency Ratio) and IEER (Integrated Energy Efficiency Ratio) for the energy efficiency performance rating.
| Technology | Food and health | null |
46253 | https://en.wikipedia.org/wiki/Fever | Fever | Fever or pyrexia in humans is a symptom of an anti-infection defense mechanism that appears with body temperature exceeding the normal range due to an increase in the body's temperature set point in the hypothalamus. There is no single agreed-upon upper limit for normal temperature: sources use values ranging between in humans.
The increase in set point triggers increased muscle contractions and causes a feeling of cold or chills. This results in greater heat production and efforts to conserve heat. When the set point temperature returns to normal, a person feels hot, becomes flushed, and may begin to sweat. Rarely, a fever may trigger a febrile seizure; this is more common in young children. Fevers do not typically go higher than .
A fever can be caused by many medical conditions ranging from non-serious to life-threatening. This includes viral, bacterial, and parasitic infections—such as influenza, the common cold, meningitis, urinary tract infections, appendicitis, Lassa fever, COVID-19, and malaria. Non-infectious causes include vasculitis, deep vein thrombosis, connective tissue disease, side effects of medication or vaccination, and cancer. It differs from hyperthermia, in that hyperthermia is an increase in body temperature over the temperature set point, due to either too much heat production or not enough heat loss.
Treatment to reduce fever is generally not required. Treatment of associated pain and inflammation, however, may be useful and help a person rest. Medications such as ibuprofen or paracetamol (acetaminophen) may help with this as well as lower temperature. Children younger than three months require medical attention, as might people with serious medical problems such as a compromised immune system or people with other symptoms. Hyperthermia requires treatment.
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children and occurs in up to 75% of adults who are seriously sick. While fever evolved as a defense mechanism, treating a fever does not appear to improve or worsen outcomes. Fever is often viewed with greater concern by parents and healthcare professionals than is usually deserved, a phenomenon known as "fever phobia."
Associated symptoms
A fever is usually accompanied by sickness behavior, which consists of lethargy, depression, loss of appetite, sleepiness, hyperalgesia, dehydration, and the inability to concentrate. Sleeping with a fever can often cause intense or confusing nightmares, commonly called "fever dreams". Mild to severe delirium (which can also cause hallucinations) may also present itself during high fevers.
Diagnosis
A range of normal temperatures has been found. Central temperatures, such as rectal temperatures, are more accurate than peripheral temperatures.
Fever is generally agreed to be present if the elevated temperature is caused by a raised set point and:
Temperature in the anus (rectum/rectal) is at or over . An ear (tympanic) or forehead (temporal) temperature may also be used.
Temperature in the mouth (oral) is at or over in the morning or over in the afternoon
Temperature under the arm (axillary) is usually about below core body temperature.
In adults, the normal range of oral temperatures in healthy individuals is among men and among women, while when taken rectally it is among men and among women, and for ear measurement it is among men and among women.
Normal body temperatures vary depending on many factors, including age, sex, time of day, ambient temperature, activity level, and more. Normal daily temperature variation has been described as 0.5 °C (0.9 °F). A raised temperature is not always a fever. For example, the temperature rises in healthy people when they exercise, but this is not considered a fever, as the set point is normal. On the other hand, a "normal" temperature may be a fever, if it is unusually high for that person; for example, medically frail elderly people have a decreased ability to generate body heat, so a "normal" temperature of may represent a clinically significant fever.
Hyperthermia
Hyperthermia is an elevation of body temperature over the temperature set point, due to either too much heat production or not enough heat loss. Hyperthermia is thus not considered fever. Hyperthermia should not be confused with hyperpyrexia (which is a very high fever).
Clinically, it is important to distinguish between fever and hyperthermia as hyperthermia may quickly lead to death and does not respond to antipyretic medications. The distinction may however be difficult to make in an emergency setting, and is often established by identifying possible causes.
Types
Various patterns of measured patient temperatures have been observed, some of which may be indicative of a particular medical diagnosis:
Continuous fever, where temperature remains above normal and does not fluctuate more than in 24 hours (e.g. in bacterial pneumonia, typhoid fever, infective endocarditis, tuberculosis, or typhus).
Intermittent fever is present only for a certain period, later cycling back to normal (e.g., in malaria, leishmaniasis, pyemia, sepsis, or African trypanosomiasis).
Remittent fever, where the temperature remains above normal throughout the day and fluctuates more than in 24 hours (e.g., in infective endocarditis or brucellosis).
Pel–Ebstein fever is a cyclic fever that is rarely seen in patients with Hodgkin's lymphoma.
Undulant fever, seen in brucellosis.
Typhoid fever is a continuous fever showing a characteristic step-ladder pattern, a step-wise increase in temperature with a high plateau.
Among the types of intermittent fever are ones specific to cases of malaria caused by different pathogens. These are:
Quotidian fever, with a 24-hour periodicity, typical of malaria caused by Plasmodium knowlesi (P. knowlesi);
Tertian fever, with a 48-hour periodicity, typical of later course malaria caused by P. falciparum, P. vivax, or P. ovale;
Quartan fever, with a 72-hour periodicity, typical of later course malaria caused by P. malariae.
In addition, there is disagreement regarding whether a specific fever pattern is associated with Hodgkin's lymphoma: the Pel–Ebstein fever, in which patients are said to present a high temperature for one week, followed by a low temperature for the next week, and so on; the generality of this pattern is debated.
Persistent fever that cannot be explained after repeated routine clinical inquiries is called fever of unknown origin. A neutropenic fever, also called febrile neutropenia, is a fever in the absence of normal immune system function. Because of the lack of infection-fighting neutrophils, a bacterial infection can spread rapidly; this fever is, therefore, usually considered to require urgent medical attention. This kind of fever is more commonly seen in people receiving immune-suppressing chemotherapy than in apparently healthy people.
Hyperpyrexia
Hyperpyrexia is an extreme elevation of body temperature which, depending upon the source, is classified as a core body temperature greater than or equal to ; the range of hyperpyrexia includes cases considered severe (≥ 40 °C) and extreme (≥ 42 °C). It differs from hyperthermia in that one's thermoregulatory system's set point for body temperature is set above normal, then heat is generated to achieve it. In contrast, hyperthermia involves body temperature rising above its set point due to outside factors. The high temperatures of hyperpyrexia are considered medical emergencies, as they may indicate a serious underlying condition or lead to severe morbidity (including permanent brain damage), or to death. A common cause of hyperpyrexia is an intracranial hemorrhage. Other causes in emergency room settings include sepsis, Kawasaki syndrome, neuroleptic malignant syndrome, drug overdose, serotonin syndrome, and thyroid storm.
Differential diagnosis
Fever is a common symptom of many medical conditions:
Infectious disease, e.g., COVID-19, dengue, Ebola, gastroenteritis, HIV, influenza, Lyme disease, Rocky Mountain spotted fever, secondary syphilis, malaria, and mononucleosis, as well as infections of the skin, e.g., abscesses and boils;
Immunological diseases, e.g., relapsing polychondritis, autoimmune hepatitis, granulomatosis with polyangiitis, Horton disease, inflammatory bowel diseases, Kawasaki disease, lupus erythematosus, sarcoidosis, Still's disease, rheumatoid arthritis, lymphoproliferative disorders and psoriasis;
Tissue destruction, as a result of cerebral bleeding, crush syndrome, hemolysis, infarction, rhabdomyolysis, surgery, etc.;
Cancers, particularly blood cancers such as leukemia and lymphomas;
Metabolic disorders, e.g., gout, and porphyria; and
Inherited metabolic disorder, e.g., Fabry disease.
Adult and pediatric manifestations for the same disease may differ; for instance, in COVID-19, one metastudy describes 92.8% of adults versus 43.9% of children presenting with fever.
In addition, fever can result from a reaction to an incompatible blood product.
Function
Immune function
Fever is thought to contribute to host defense, as the reproduction of pathogens with strict temperature requirements can be hindered, and the rates of some important immunological reactions are increased by temperature. Fever has been described in teaching texts as assisting the healing process in various ways, including:
increased mobility of leukocytes;
enhanced leukocyte phagocytosis;
decreased endotoxin effects; and
increased proliferation of T cells.
Advantages and disadvantages
A fever response to an infectious disease is generally regarded as protective, whereas fever in non-infectious conditions may be maladaptive. Studies have not been consistent on whether treating fever generally worsens or improves mortality risk. Benefits or harms may depend on the type of infection, the health status of the patient, and other factors. Studies using warm-blooded vertebrates suggest that they recover more rapidly from infections or critical illness due to fever. In sepsis, fever is associated with reduced mortality.
Pathophysiology of fever induction
Hypothalamus
Temperature is regulated in the hypothalamus. The trigger of a fever, called a pyrogen, results in the release of prostaglandin E2 (PGE2). PGE2 in turn acts on the hypothalamus, which creates a systemic response in the body, causing heat-generating effects to match a new, higher temperature set point. There are four receptors to which PGE2 can bind (EP1–EP4), and studies have shown that the EP3 subtype mediates the fever response. Hence, the hypothalamus can be seen as working like a thermostat. When the set point is raised, the body increases its temperature through both active generation of heat and retention of heat. Peripheral vasoconstriction both reduces heat loss through the skin and causes the person to feel cold. Norepinephrine increases thermogenesis in brown adipose tissue, and muscle contraction through shivering raises the metabolic rate.
If these measures are insufficient to make the blood temperature in the brain match the new set point in the hypothalamus, the brain orchestrates heat effector mechanisms via the autonomic nervous system or primary motor center for shivering. These may be:
Increased heat production by increased muscle tone, shivering (muscle movements to produce heat) and release of hormones like epinephrine; and
Prevention of heat loss, e.g., through vasoconstriction.
When the hypothalamic set point moves back to baseline—either spontaneously or via medication—normal functions such as sweating, and the reverse of the foregoing processes (e.g., vasodilation, end of shivering, and nonshivering heat production) are used to cool the body to the new, lower setting.
This contrasts with hyperthermia, in which the normal setting remains, and the body overheats through undesirable retention of excess heat or over-production of heat. Hyperthermia is usually the result of an excessively hot environment (heat stroke) or an adverse reaction to drugs. Fever can be differentiated from hyperthermia by the circumstances surrounding it and its response to anti-pyretic medications.
In infants, the autonomic nervous system may also activate brown adipose tissue to produce heat (non-shivering thermogenesis).
Increased heart rate and vasoconstriction contribute to increased blood pressure in fever.
Pyrogens
A pyrogen is a substance that induces fever. In the presence of an infectious agent, such as bacteria, viruses, viroids, etc., the immune response of the body is to inhibit their growth and eliminate them. The most common pyrogens are endotoxins, which are lipopolysaccharides (LPS) produced by Gram-negative bacteria such as E. coli. But pyrogens also include non-endotoxic substances, derived from microorganisms other than Gram-negative bacteria or from chemical substances. Pyrogens are classified as internal (endogenous) or external (exogenous) to the body.
The "pyrogenicity" of given pyrogens varies: in extreme cases, bacterial pyrogens can act as superantigens and cause rapid and dangerous fevers.
Endogenous
Endogenous pyrogens are cytokines released from monocytes (which are part of the immune system). In general, they stimulate chemical responses, often in the presence of an antigen, leading to a fever. While they can be produced in response to external factors like exogenous pyrogens, they can also be induced by internal factors such as damage-associated molecular patterns, as in conditions like rheumatoid arthritis or lupus.
Major endogenous pyrogens are interleukin 1 (α and β) and interleukin 6 (IL-6). Minor endogenous pyrogens include interleukin-8, tumor necrosis factor-β, macrophage inflammatory protein-α and macrophage inflammatory protein-β, as well as interferon-α, interferon-β, and interferon-γ. Tumor necrosis factor-α (TNF) also acts as a pyrogen, mediated by interleukin 1 (IL-1) release. These cytokine factors are released into general circulation, where they migrate to the brain's circumventricular organs, where they are more easily absorbed than in areas protected by the blood–brain barrier. The cytokines then bind to endothelial receptors on vessel walls, or to receptors on microglial cells, resulting in activation of the arachidonic acid pathway.
Of these, IL-1β, TNF, and IL-6 are able to raise the temperature setpoint of an organism and cause fever. These proteins induce a cyclooxygenase which drives the hypothalamic production of PGE2; PGE2 then acts through second messengers such as cyclic adenosine monophosphate to increase body temperature.
Exogenous
Exogenous pyrogens are external to the body and are of microbial origin. In general, these pyrogens, including bacterial cell wall products, may act on Toll-like receptors in the hypothalamus and elevate the thermoregulatory setpoint.
An example of a class of exogenous pyrogens are bacterial lipopolysaccharides (LPS) present in the cell wall of gram-negative bacteria. According to one mechanism of pyrogen action, an immune system protein, lipopolysaccharide-binding protein (LBP), binds to LPS, and the LBP–LPS complex then binds to a CD14 receptor on a macrophage. The LBP-LPS binding to CD14 results in cellular synthesis and release of various endogenous cytokines, e.g., interleukin 1 (IL-1), interleukin 6 (IL-6), and tumor necrosis factor-alpha (TNFα). A further downstream event is activation of the arachidonic acid pathway.
PGE2 release
PGE2 release comes from the arachidonic acid pathway. This pathway (as it relates to fever), is mediated by the enzymes phospholipase A2 (PLA2), cyclooxygenase-2 (COX-2), and prostaglandin E2 synthase. These enzymes ultimately mediate the synthesis and release of PGE2.
PGE2 is the ultimate mediator of the febrile response. The setpoint temperature of the body will remain elevated until PGE2 is no longer present. PGE2 acts on neurons in the preoptic area (POA) through the prostaglandin E receptor 3 (EP3). EP3-expressing neurons in the POA innervate the dorsomedial hypothalamus (DMH), the rostral raphe pallidus nucleus in the medulla oblongata (rRPa), and the paraventricular nucleus (PVN) of the hypothalamus. Fever signals sent to the DMH and rRPa lead to stimulation of the sympathetic output system, which evokes non-shivering thermogenesis to produce body heat and skin vasoconstriction to decrease heat loss from the body surface. It is presumed that the innervation from the POA to the PVN mediates the neuroendocrine effects of fever through the pathway involving pituitary gland and various endocrine organs.
Management
Fever does not necessarily need to be treated, and most people with a fever recover without specific medical attention. Although it is unpleasant, fever rarely rises to a dangerous level even if untreated. Damage to the brain generally does not occur until temperatures reach , and it is rare for an untreated fever to exceed . Treating fever in people with sepsis does not affect outcomes. Small trials have shown no benefit of treating fevers of or higher in critically ill ICU patients, and one trial was terminated early because patients receiving aggressive fever treatment were dying more often.
According to the NIH, the two assumptions which are generally used to argue in favor of treating fevers have not been experimentally validated. These are that (1) a fever is noxious, and (2) suppression of a fever will reduce its noxious effect. Most of the other studies supporting the association of fever with poorer outcomes have been observational in nature. In theory, these critically ill patients and those faced with additional physiologic stress may benefit from fever reduction, but the evidence on both sides of the argument appears to be mostly equivocal.
Conservative measures
Limited evidence supports sponging or bathing feverish children with tepid water. The use of a fan or air conditioning may somewhat reduce the temperature and increase comfort. If the temperature reaches the extremely high level of hyperpyrexia, aggressive cooling is required (generally produced mechanically via conduction by applying numerous ice packs across most of the body or direct submersion in ice water). In general, people are advised to keep adequately hydrated. Whether increased fluid intake improves symptoms or shortens respiratory illnesses such as the common cold is not known.
Medications
Medications that lower fevers are called antipyretics. The antipyretic ibuprofen is effective in reducing fevers in children. It is more effective than acetaminophen (paracetamol) in children. Ibuprofen and acetaminophen may be safely used together in children with fevers. The efficacy of acetaminophen by itself in children with fevers has been questioned. Ibuprofen is also superior to aspirin in children with fevers. Additionally, aspirin is not recommended in children and young adults (those under the age of 16 or 19 depending on the country) due to the risk of Reye's syndrome.
Using both paracetamol and ibuprofen at the same time or alternating between the two is more effective at decreasing fever than using only paracetamol or ibuprofen. It is not clear if it increases child comfort. Response or nonresponse to medications does not predict whether or not a child has a serious illness.
With respect to the effect of antipyretics on the risk of death in those with infection, studies have found mixed results, as of 2019.
Epidemiology
Fever is one of the most common medical signs. It is part of about 30% of healthcare visits by children, and occurs in up to 75% of adults who are seriously sick. About 5% of people who go to an emergency room have a fever.
History
A number of types of fever were known as early as 460–370 BC, when Hippocrates was practicing medicine, including those due to malaria (tertian, or every 2 days, and quartan, or every 3 days). It also became clear around this time that fever was a symptom of disease rather than a disease in and of itself.
Infections presenting with fever were a major source of mortality in humans for about 200,000 years. Until the late nineteenth century, approximately half of all humans died from infections before the age of fifteen.
An older term, febricula (a diminutive form of the Latin word for fever), was once used to refer to a low-grade fever lasting only a few days. This term fell out of use in the early 20th century, and the symptoms it referred to are now thought to have been caused mainly by various minor viral respiratory infections.
Society and culture
Mythology
Febris (fever in Latin) is the goddess of fever in Roman mythology. People with fevers would visit her temples.
Tertiana and Quartana are the goddesses of tertian and quartan fevers of malaria in Roman mythology.
Jvarasura (fever-demon in Hindi) is the personification of fever and disease in Hindu and Buddhist mythology.
Pediatrics
Fever is often viewed with greater concern by parents and healthcare professionals than might be deserved, a phenomenon known as fever phobia, which is based on caregivers' and parents' misconceptions about fever in children. Among them, many parents incorrectly believe that fever is a disease rather than a medical sign, that even low fevers are harmful, and that any temperature even briefly or slightly above the oversimplified "normal" number marked on a thermometer is a clinically significant fever. They are also afraid of harmless side effects like febrile seizures and dramatically overestimate the likelihood of permanent damage from typical fevers. The underlying problem, according to professor of pediatrics Barton D. Schmitt, is that "as parents we tend to suspect that our children's brains may melt." As a result of these misconceptions, parents are anxious, give the child fever-reducing medicine when the temperature is technically normal or only slightly elevated, and interfere with the child's sleep to give the child more medicine.
Other species
Fever is an important metric for the diagnosis of disease in domestic animals. The body temperature of animals, which is taken rectally, differs from one species to another. For example, a horse is said to have a fever above . In species that allow the body to have a wide range of "normal" temperatures, such as camels, whose body temperature varies with the environmental temperature, the body temperature which constitutes a febrile state differs depending on the environmental temperature. Fever can also be behaviorally induced by invertebrates that do not have immune-system-based fever. For instance, some species of grasshopper will thermoregulate to achieve body temperatures 2–5 °C higher than normal in order to inhibit the growth of fungal pathogens such as Beauveria bassiana and Metarhizium acridum. Honeybee colonies are also able to induce a fever in response to the fungal parasite Ascosphaera apis.
| Biology and health sciences | Symptoms and signs | Health |
46256 | https://en.wikipedia.org/wiki/Telemetry | Telemetry | Telemetry is the in situ collection of measurements or other data at remote points and their automatic transmission to receiving equipment (telecommunication) for monitoring. The word is derived from the Greek roots tele, 'far off', and metron, 'measure'. Systems that need external instructions and data to operate require the counterpart of telemetry: telecommand.
Although the term commonly refers to wireless data transfer mechanisms (e.g., using radio, ultrasonic, or infrared systems), it also encompasses data transferred over other media such as a telephone or computer network, optical link or other wired communications like power line carriers. Many modern telemetry systems take advantage of the low cost and ubiquity of GSM networks by using SMS to receive and transmit telemetry data.
A telemeter is a physical device used in telemetry. It consists of a sensor, a transmission path, and a display, recording, or control device. Electronic devices are widely used in telemetry and can be wireless or hard-wired, analog or digital. Other technologies are also possible, such as mechanical, hydraulic and optical.
Telemetry may be commutated to allow the transmission of multiple data streams in a fixed frame.
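As a toy illustration of commutation, the sketch below samples several channels in a fixed rotating order and packs them into a fixed-length frame. The frame layout here (sync word, counter, 16-bit samples) is invented for illustration; real formats, such as IRIG 106 PCM frames, are far more elaborate:

```python
# Toy sketch of a commutated telemetry frame: several sensor channels
# are sampled in a fixed order and packed into a fixed-length frame.
# The layout is invented for illustration, not any particular standard.

import struct

def build_frame(sync_word, frame_counter, channel_values):
    """Pack a sync word, a frame counter, and one sample per channel."""
    payload = struct.pack(">HH", sync_word, frame_counter)
    for value in channel_values:
        payload += struct.pack(">h", value)   # 16-bit signed samples
    return payload

# Hypothetical channels: temperature, acceleration, pressure.
frame = build_frame(0xEB90, 7, [1200, -35, 5001])
print(frame.hex())
```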
History
Industrial telemetry has its beginnings in the steam age, although such sensors were not called telemeters at the time. Examples include James Watt's (1736–1819) additions to his steam engines for monitoring from a (near) distance, such as the mercury pressure gauge and the fly-ball governor.
Although the original telemeter referred to a ranging device (the rangefinding telemeter), by the late 19th century the same term was in wide use by electrical engineers, who applied it to electrically operated devices measuring many other quantities besides distance (for instance, in the patent of an "Electric Telemeter Transmitter"). General telemeters included such sensors as the thermocouple (from the work of Thomas Johann Seebeck), the resistance thermometer (by William Siemens, based on the work of Humphry Davy), and the electrical strain gauge (based on Lord Kelvin's discovery that conductors under mechanical strain change their resistance), as well as output devices such as Samuel Morse's telegraph sounder and the relay. In 1889, this led an author in the Institution of Civil Engineers proceedings to suggest that the term for the rangefinder telemeter might be replaced with "tacheometer".
In the 1930s use of electrical telemeters grew rapidly. The electrical strain gauge was widely used in rocket and aviation research and the radiosonde was invented for meteorological measurements. The advent of World War II gave an impetus to industrial development and henceforth many of these telemeters became commercially viable.
Carrying on from rocket research, radio telemetry was used routinely as space exploration got underway. Spacecraft are in a place where a physical connection is not possible, leaving radio or other electromagnetic waves (such as infrared lasers) as the only viable option for telemetry. During crewed space missions it is used to monitor not only parameters of the vehicle, but also the health and life support of the astronauts. During the Cold War telemetry found uses in espionage. US intelligence found that they could monitor the telemetry from Soviet missile tests by building a telemeter of their own to intercept the radio signals and hence learn a great deal about Soviet capabilities.
Types of telemeter
Telemeters are the physical devices used in telemetry. Each consists of a sensor, a transmission path, and a display, recording, or control device. Electronic telemeters are the most widespread and can be wireless or hard-wired, analog or digital; other technologies, such as mechanical, hydraulic, and optical, are also possible.
Telemetering information over wire had its origins in the 19th century. One of the first data-transmission circuits was developed in 1845 between the Russian Tsar's Winter Palace and army headquarters. In 1874, French engineers built a system of weather and snow-depth sensors on Mont Blanc that transmitted real-time information to Paris. In 1901 the American inventor C. Michalke patented the selsyn, a circuit for sending synchronized rotation information over a distance. In 1906 a set of seismic stations were built with telemetering to the Pulkovo Observatory in Russia. In 1912, Commonwealth Edison developed a system of telemetry to monitor electrical loads on its power grid. The Panama Canal (completed 1913–1914) used extensive telemetry systems to monitor locks and water levels.
Wireless telemetry made early appearances in the radiosonde, developed concurrently in 1930 by Robert Bureau in France and Pavel Molchanov in Russia. Molchanov's system modulated temperature and pressure measurements by converting them to wireless Morse code. The German V-2 rocket used a system of primitive multiplexed radio signals called "Messina" to report four rocket parameters, but it was so unreliable that Wernher von Braun once claimed it was more useful to watch the rocket through binoculars.
In the US and the USSR, the Messina system was quickly replaced with better systems; in both cases, based on pulse-position modulation (PPM).
Early Soviet missile and space telemetry systems which were developed in the late 1940s used either PPM (e.g., the Tral telemetry system developed by OKB-MEI) or pulse-duration modulation (e.g., the RTS-5 system developed by NII-885). In the United States, early work employed similar systems, but were later replaced by pulse-code modulation (PCM) (for example, in the Mars probe Mariner 4). Later Soviet interplanetary probes used redundant radio systems, transmitting telemetry by PCM on a decimeter band and PPM on a centimeter band.
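To illustrate what pulse-code modulation means in this context, the sketch below quantizes an analog sample to an n-bit code word; the voltage scale and resolution are illustrative assumptions, not parameters of any historical system:

```python
# Minimal illustration of pulse-code modulation (PCM): an analog sample
# is quantized to an n-bit integer code word before transmission.
# The full-scale voltage and bit depth below are illustrative.

def pcm_encode(sample, full_scale=5.0, bits=8):
    """Map a voltage in [0, full_scale) to an n-bit PCM code word."""
    levels = 2 ** bits
    code = int(sample / full_scale * levels)
    return max(0, min(levels - 1, code))   # clamp to the valid range

print(pcm_encode(3.3))  # -> 168 for a 3.3 V sample on a 5 V, 8-bit scale
```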
Applications
Meteorology
Weather balloons have used telemetry to transmit meteorological data since 1920.
Oil and gas industry
Telemetry is used to transmit drilling mechanics and formation evaluation information uphole, in real time, as a well is drilled. These services are known as measurement while drilling and logging while drilling. Information acquired thousands of feet below ground while drilling is sent up the borehole to surface sensors and demodulation software, where the pressure wave is translated into useful information through digital signal processing (DSP) and noise filtering. This information is used for formation evaluation, drilling optimization, and geosteering.
Motor racing
Telemetry is a key factor in modern motor racing, allowing race engineers to interpret data collected during a test or race and use it to properly tune the car for optimum performance. Systems used in series such as Formula One have become advanced to the point where the potential lap time of the car can be calculated, and this time is what the driver is expected to meet. Examples of measurements on a race car include accelerations (G forces) in three axes, temperature readings, wheel speed, and suspension displacement. In Formula One, driver input is also recorded so the team can assess driver performance and (in case of an accident) the FIA can determine or rule out driver error as a possible cause.
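A telemetry stream of this kind is ultimately a sequence of packed binary frames. The sketch below shows one hypothetical frame layout; the channel list and byte layout are invented for illustration and correspond to no racing series' real format.

```python
import struct

# Hypothetical race-car telemetry frame: timestamp plus a few of the
# channels mentioned above, packed big-endian for transmission and
# unpacked pit-side.
FRAME = ">I3f4Hf"   # ms timestamp, 3 G-force axes, 4 wheel speeds, throttle

def pack_frame(ms, gx, gy, gz, wheels, throttle):
    return struct.pack(FRAME, ms, gx, gy, gz, *wheels, throttle)

def unpack_frame(raw):
    ms, gx, gy, gz, w1, w2, w3, w4, thr = struct.unpack(FRAME, raw)
    return {"t_ms": ms, "g": (gx, gy, gz),
            "wheel_rpm": (w1, w2, w3, w4), "throttle": thr}

raw = pack_frame(81234, 1.8, -0.4, 1.0, (1503, 1504, 1498, 1500), 0.87)
print(len(raw), "bytes:", unpack_frame(raw))   # 28-byte frame
```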
Later developments include two-way telemetry which allows engineers to update calibrations on the car in real time (even while it is out on the track). In Formula One, two-way telemetry surfaced in the early 1990s and consisted of a message display on the dashboard which the team could update. Its development continued until May 2001, when it was first allowed on the cars. By 2002, teams were able to change engine mapping and deactivate engine sensors from the pit while the car was on the track. For the 2003 season, the FIA banned two-way telemetry from Formula One; however, the technology may be used in other types of racing or on road cars.
One-way telemetry systems have also been applied to radio-controlled racing cars, reporting readings from the car's sensors such as engine RPM, voltage, temperatures, and throttle position.
Transportation
In the transportation industry, telemetry provides meaningful information about a vehicle or driver's performance by collecting data from sensors within the vehicle. This is undertaken for various reasons, ranging from staff compliance monitoring and insurance rating to predictive maintenance.
Telemetry is used to link traffic counter devices to data recorders to measure traffic flows and vehicle lengths and weights.
Telemetry is used by the railway industry for measuring the health of trackage, permitting optimized and focused predictive and preventative maintenance. Typically this is done with specialized trains, such as the New Measurement Train used in the United Kingdom by Network Rail, which can check for track defects such as problems with gauge and deformations in the rail. Japan uses similar but faster trains, nicknamed Doctor Yellow. Besides checking the tracks, such trains can also verify whether there are any problems with the overhead power supply (catenary), where it is installed. Dedicated rail inspection companies, such as Sperry Rail, have their own customized rail cars and rail-wheel equipped trucks that use a variety of methods, including lasers, ultrasound, and induction (measuring the magnetic fields that result from running electricity through the rails), to find defects.
Agriculture
Most activities related to healthy crops and good yields depend on timely availability of weather and soil data. Therefore, wireless weather stations play a major role in disease prevention and precision irrigation. These stations transmit the parameters necessary for decision-making to a base station: air temperature and relative humidity, precipitation and leaf wetness (for disease prediction models), solar radiation and wind speed (to calculate evapotranspiration), and leaf water-deficit stress (WDS) and soil moisture (crucial to irrigation decisions).
Because local micro-climates can vary significantly, such data needs to come from within the crop. Monitoring stations usually transmit data back by terrestrial radio, although occasionally satellite systems are used. Solar power is often employed to make the station independent of the power grid.
Water management
Telemetry is important in water management, including water quality and stream gauging functions. Major applications include automatic meter reading (AMR), groundwater monitoring, leak detection in distribution pipelines, and equipment surveillance. Having data available in almost real time allows quick reactions to events in the field. Telemetry control also allows engineers to intervene with assets such as pumps, remotely switching them on or off as circumstances require. Watershed telemetry is one strategy for implementing such a water management system.
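A minimal sketch of such remote pump control follows, assuming telemetered level readings arrive as fractions of reservoir capacity; the start/stop thresholds are invented, and the hysteresis band keeps the pump from rapidly cycling around a single setpoint.

```python
START_LEVEL = 0.30   # start pumping when the reservoir drops below 30% (assumed)
STOP_LEVEL = 0.90    # stop once it refills to 90% (assumed)

def control_pump(levels, pump_on=False):
    """Yield the pump state for each incoming telemetry reading."""
    for level in levels:
        if not pump_on and level < START_LEVEL:
            pump_on = True    # would issue a remote 'start' command here
        elif pump_on and level > STOP_LEVEL:
            pump_on = False   # would issue a remote 'stop' command here
        yield pump_on

readings = [0.55, 0.40, 0.28, 0.45, 0.70, 0.92, 0.88]
print(list(control_pump(readings)))
# [False, False, True, True, True, False, False]
```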
Defense, space and resource exploration
Telemetry is used in complex systems such as missiles, RPVs, spacecraft, oil rigs, and chemical plants, since it allows the automatic monitoring, alerting, and record-keeping necessary for efficient and safe operation. Space agencies such as NASA, ISRO, and the European Space Agency (ESA) use telemetry and telecommand systems to collect data from spacecraft and satellites.
Telemetry is vital in the development of missiles, satellites and aircraft because the system might be destroyed during or after the test. Engineers need critical system parameters to analyze (and improve) the performance of the system. In the absence of telemetry, this data would often be unavailable.
Space science
Telemetry is used by crewed or uncrewed spacecraft for data transmission. Distances of more than 10 billion kilometres have been covered, e.g., by Voyager 1.
Rocketry
In rocketry, telemetry equipment forms an integral part of the rocket range assets used to monitor the position and health of a launch vehicle and to determine range safety flight termination criteria (the range's purpose being public safety). Problems include the extreme environment (temperature, acceleration and vibration), the energy supply, antenna alignment and (at long distances, e.g., in spaceflight) signal travel time.
Flight testing
Today nearly every type of aircraft, missile, or spacecraft carries a wireless telemetry system as it is tested. Aeronautical mobile telemetry is used for the safety of the pilots and persons on the ground during flight tests. Telemetry from an on-board flight test instrumentation system is the primary source of real-time measurement and status information transmitted during the testing of crewed and uncrewed aircraft.
Military intelligence
Intercepted telemetry was an important source of intelligence for the United States and UK when Soviet missiles were tested; for this purpose, the United States operated a listening post in Iran. Eventually, the Russians discovered the United States intelligence-gathering network and encrypted their missile-test telemetry signals. Telemetry was also a source for the Soviets, who operated listening ships in Cardigan Bay to eavesdrop on UK missile tests performed in the area.
Energy monitoring
In factories, buildings and houses, the energy consumption of systems such as HVAC is monitored at multiple locations; related parameters (e.g., temperature) are sent via wireless telemetry to a central location. The information is collected and processed, enabling the most efficient use of energy. Such systems also facilitate predictive maintenance.
Resource distribution
Many resources need to be distributed over wide areas. Telemetry is useful in these cases, since it allows the logistics system to channel resources where they are needed, as well as provide security for those assets; principal examples of this are dry goods, fluids, and granular bulk solids.
Dry goods
Dry goods, such as packaged merchandise, may be remotely monitored, tracked, and inventoried by RFID sensing systems, barcode readers, optical character recognition (OCR) readers, or other sensing devices coupled to telemetry equipment, which detect RFID tags, barcode labels, or other identifying markers affixed to the item, its package, or (for large items and bulk shipments) its shipping container or vehicle. This facilitates knowledge of their location, and can record their status and disposition, as when merchandise with barcode labels is scanned through a checkout reader at point-of-sale systems in a retail store. Stationary or hand-held barcode or RFID scanners, or optical readers, with remote communications can be used to expedite inventory tracking and counting in stores, warehouses, shipping terminals, transportation carriers and factories.
Fluids
Fluids stored in tanks are a principal object of constant commercial telemetry. This typically includes monitoring of tank farms in gasoline refineries and chemical plants—and distributed or remote tanks, which must be replenished when empty (as with gas station storage tanks, home heating oil tanks, or ag-chemical tanks at farms), or emptied when full (as with production from oil wells, accumulated waste products, and newly produced fluids). Telemetry is used to communicate the variable measurements of flow and tank level sensors detecting fluid movements and/or volumes by pneumatic, hydrostatic, or differential pressure; tank-confined ultrasonic, radar or Doppler effect echoes; or mechanical or magnetic sensors.
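For a hydrostatic sensor at the bottom of a vertical tank, the conversion from gauge pressure to level and volume is simple. The sketch below works through it; the fluid density and tank geometry are assumptions for illustration.

```python
# Hydrostatic tank gauging: p = rho * g * h, so h = p / (rho * g).
G = 9.81            # m/s^2
RHO = 850.0         # kg/m^3, a typical heating-oil density (assumed)
TANK_AREA = 2.0     # m^2, cross-section of an assumed vertical cylinder

def level_from_pressure(p_gauge_pa):
    """Fluid level (m) implied by the gauge pressure at the tank bottom."""
    return p_gauge_pa / (RHO * G)

def volume_litres(p_gauge_pa):
    return level_from_pressure(p_gauge_pa) * TANK_AREA * 1000.0

p = 12_000.0        # Pa read at the tank bottom
print(f"level {level_from_pressure(p):.2f} m, volume {volume_litres(p):.0f} L")
# level 1.44 m, volume 2878 L
```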
Bulk solids
Telemetry of bulk solids is common for tracking and reporting the volume status and condition of grain and livestock feed bins, powdered or granular food, powders and pellets for manufacturing, sand and gravel, and other granular bulk solids. While technology associated with fluid tank monitoring also applies, in part, to granular bulk solids, reporting of overall container weight, or other gross characteristics and conditions, is sometimes required, owing to bulk solids' more complex and variable physical characteristics.
Medicine/healthcare
Telemetry is used for patients (biotelemetry) who are at risk of abnormal heart activity, generally in a coronary care unit. Telemetry specialists are sometimes used to monitor many patients within a hospital. Such patients are outfitted with measuring, recording and transmitting devices. A data log can be useful in diagnosis of the patient's condition by doctors. An alerting function can alert nurses if the patient is suffering from an acute (or dangerous) condition.
In medical-surgical nursing, telemetry systems are available for monitoring patients in order to rule out a heart condition, or to monitor their response to antiarrhythmic medications such as amiodarone.
A new and emerging application for telemetry is in the field of neurophysiology, or neurotelemetry. Neurophysiology is the study of the central and peripheral nervous systems through the recording of bioelectrical activity, whether spontaneous or stimulated. In neurotelemetry (NT) the electroencephalogram (EEG) of a patient is monitored remotely by a registered EEG technologist using advanced communication software. The goal of neurotelemetry is to recognize a decline in a patient's condition before physical signs and symptoms are present.
Neurotelemetry is synonymous with real-time continuous video EEG monitoring and has application in the epilepsy monitoring unit, neuro ICU, pediatric ICU and newborn ICU. Due to the labor-intensive nature of continuous EEG monitoring, NT is typically done in larger academic teaching hospitals using in-house programs that include registered EEG technologists, IT support staff, neurologists, neurophysiologists, and monitoring support personnel.
Modern microprocessor speeds, software algorithms and video data compression allow hospitals to centrally record and monitor continuous digital EEGs of multiple critically ill patients simultaneously.
Neurotelemetry and continuous EEG monitoring provide dynamic information about brain function that permits early detection of changes in neurologic status, which is especially useful when the clinical examination is limited.
Fishery and wildlife research and management
Telemetry is used to study wildlife, and has been useful for monitoring threatened species at the individual level. Animals under study can be outfitted with instrumentation tags, which include sensors that measure temperature, diving depth and duration (for marine animals), speed and location (using GPS or Argos packages). Telemetry tags can give researchers information about animal behavior, functions, and their environment. This information is then either stored (with archival tags) or the tags can send (or transmit) their information to a satellite or handheld receiving device. Capturing and marking wild animals can put them at some risk, so it is important to minimize these impacts.
Retail
A seminar at a 2005 workshop in Las Vegas noted the introduction of telemetry equipment that would allow vending machines to communicate sales and inventory data to a route truck or to headquarters. This data could be used for a variety of purposes, such as eliminating the need for drivers to make a first trip to see which items needed restocking before delivering the inventory.
Retailers also use RFID tags to track inventory and prevent shoplifting. Most of these tags passively respond to RFID readers (e.g., at the cashier), but active RFID tags are available which periodically transmit location information to a base station.
Law enforcement
Telemetry hardware is useful for tracking persons and property in law enforcement. An ankle monitor worn by offenders on probation or parole can warn authorities if the wearer violates its terms, such as by straying from authorized boundaries or visiting an unauthorized location. Telemetry has also enabled bait cars: law enforcement can rig a car with cameras and tracking equipment and leave it somewhere they expect it to be stolen. When it is stolen, the telemetry equipment reports the location of the vehicle, enabling law enforcement to deactivate the engine and lock the doors once it is stopped by responding officers.
Energy providers
In some countries, telemetry is used to measure the amount of electrical energy consumed. The electricity meter communicates with a concentrator, which sends the information through GPRS or GSM to the energy provider's server. Telemetry is also used for the remote monitoring of substations and their equipment. For data transmission, power-line carrier systems operating on frequencies between 30 and 400 kHz are sometimes used.
Falconry
In falconry, "telemetry" means a small radio transmitter carried by a bird of prey that will allow the bird's owner to track it when it is out of sight.
Testing
Telemetry is used for testing in hostile environments that are dangerous to humans. Examples include munitions storage facilities, radioactive sites, volcanoes, the deep sea, and outer space.
Communications
Telemetry is used in many battery-operated wireless systems to inform monitoring personnel when the battery power is reaching a low point and the end item needs fresh batteries.
Mining
In the mining industry, telemetry serves two main purposes: the measurement of key parameters from mining equipment and the monitoring of safety practices. The information provided by the collection and analysis of key parameters allows for root-cause identification of inefficient operations, unsafe practices and incorrect equipment usage for maximizing productivity and safety. Further applications of the technology allow for sharing knowledge and best practices across the organization.
Software
In software, telemetry is used to gather data on the use and performance of applications and application components, e.g. how often certain features are used, measurements of start-up time and processing time, hardware, application crashes, and general usage statistics and/or user behavior. In some cases, very detailed data is reported like individual window metrics, counts of used features, and individual function timings.
This kind of telemetry can be essential to software developers: it provides data from a wide variety of endpoints that cannot all be tested in-house, as well as data on the popularity of certain features and whether they should be given priority or considered for removal. Because software telemetry can easily be used to profile users, it raises privacy concerns, so telemetry in user software is often left to user choice, commonly presented either as an opt-out feature (requiring explicit user action to disable it) or as a choice made during the software installation process.
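A minimal sketch of what an opt-out telemetry client might look like follows; all class, method, and event names here are hypothetical and correspond to no real product's API.

```python
import json
import time

class Telemetry:
    """Hypothetical opt-out telemetry client: enabled by default,
    and silent once the user disables it."""

    def __init__(self, enabled=True):
        self.enabled = enabled          # opt-out: on unless the user disables

    def opt_out(self):
        self.enabled = False

    def record(self, event, **fields):
        if not self.enabled:
            return                      # respect the user's choice: no data
        payload = {"event": event, "ts": time.time(), **fields}
        print(json.dumps(payload))      # stand-in for an upload queue

tm = Telemetry()
tm.record("app_start", startup_ms=412)
tm.record("feature_used", name="export_pdf")
tm.opt_out()
tm.record("app_exit")                   # silently dropped after opt-out
```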
International standards
As in other telecommunications fields, international standards exist for telemetry equipment and software. International standards producing bodies include Consultative Committee for Space Data Systems (CCSDS) for space agencies, Inter-Range Instrumentation Group (IRIG) for missile ranges, and Telemetering Standards Coordination Committee (TSCC), an organisation of the International Foundation for Telemetering.
| Physical sciences | Basics | Basics and measurement |
46310 | https://en.wikipedia.org/wiki/Lobster | Lobster | Lobsters are malacostracans of the family Nephropidae or its synonym Homaridae. They have long bodies with muscular tails and live in crevices or burrows on the sea floor. Three of their five pairs of legs have claws, including the first pair, which are usually much larger than the others. Highly prized as seafood, lobsters are economically important and are often one of the most profitable commodities in the coastal areas they populate.
Commercially important species include two species of Homarus from the northern Atlantic Ocean and scampi (which look more like a shrimp, or a "mini lobster")—the Northern Hemisphere genus Nephrops and the Southern Hemisphere genus Metanephrops.
Distinction
Although several other groups of crustaceans have the word "lobster" in their names, the unqualified term "lobster" generally refers to the clawed lobsters of the family Nephropidae. Clawed lobsters are not closely related to spiny lobsters or slipper lobsters, which have no claws (chelae), or to squat lobsters. The most similar living relatives of clawed lobsters are the reef lobsters and the three families of freshwater crayfish.
Description
Body
Lobsters are invertebrates with a hard protective exoskeleton. Like most arthropods, lobsters must shed their exoskeleton to grow, which leaves them vulnerable. During the shedding process, several species change color. Lobsters have eight walking legs; the front three pairs bear claws, the first of which are larger than the others. The front pincers are also biologically considered legs, so lobsters belong to the order Decapoda ("ten-footed"). Although lobsters are largely bilaterally symmetrical like most other arthropods, some genera possess unequal, specialized claws.
Lobster anatomy includes two main body parts: the cephalothorax and the abdomen. The cephalothorax fuses the head and the thorax, both of which are covered by a chitinous carapace. The lobster's head bears antennae, antennules, mandibles, the first and second maxillae. The head also bears the (usually stalked) compound eyes. Because lobsters live in murky environments at the bottom of the ocean, they mostly use their antennae as sensors. The lobster eye has a reflective structure above a convex retina. In contrast, most complex eyes use refractive ray concentrators (lenses) and a concave retina. The lobster's thorax is composed of maxillipeds, appendages that function primarily as mouthparts, and pereiopods, appendages that serve for walking and for gathering food. The abdomen includes pleopods (also known as swimmerets), used for swimming, as well as the tail fan, composed of uropods and the telson.
Lobsters, like snails and spiders, have blue blood due to the presence of hemocyanin, which contains copper. In contrast, vertebrates and many other animals have red blood from iron-rich hemoglobin. Lobsters possess a green hepatopancreas, called the tomalley by chefs, which functions as the animal's liver and pancreas.
Lobsters of the family Nephropidae are similar in overall form to several other related groups. They differ from freshwater crayfish in lacking the joint between the last two segments of the thorax, and they differ from the reef lobsters of the family Enoplometopidae in having full claws on the first three pairs of legs, rather than just one. The distinctions from fossil families such as the Chilenophoberidae are based on the pattern of grooves on the carapace.
Analysis of the neural gene complement revealed extraordinary development of the chemosensory machinery, including a profound diversification of ligand-gated ion channels and secretory molecules.
Coloring
Typically, lobsters are dark colored, either bluish-green or greenish-brown, to blend in with the ocean floor, but they can be found in many colors. Lobsters with atypical coloring are extremely rare, accounting for only a few of the millions caught every year, and due to their rarity, they usually are not eaten, instead being released back into the wild or donated to aquariums. Often, in cases of atypical coloring, there is a genetic factor, such as albinism or hermaphroditism. Special coloring does not appear to affect the lobster's taste once cooked; except for albinos, all lobsters possess astaxanthin, which is responsible for the bright red color lobsters turn after being cooked.
Longevity
Lobsters live up to an estimated 45 to 50 years in the wild, although determining age is difficult: it is typically estimated from size and other variables. Newer techniques may lead to more accurate age estimates.
Research suggests that lobsters may not slow down, weaken, or lose fertility with age and that older lobsters may be more fertile than younger lobsters. This longevity may be due to telomerase, an enzyme that repairs long repetitive sections of DNA sequences at the ends of chromosomes, referred to as telomeres. Telomerase is expressed by most vertebrates during embryonic stages but is generally absent from adult stages of life. However, unlike most vertebrates, lobsters express telomerase as adults through most tissue, which has been suggested to be related to their longevity. Telomerase is especially present in green spotted lobsters, whose markings are thought to be produced by the enzyme interacting with their shell pigmentation. Lobster longevity is limited by their size. Moulting requires metabolic energy, and the larger the lobster, the more energy is needed; 10 to 15% of lobsters die of exhaustion during moulting, while in older lobsters, moulting ceases and the exoskeleton degrades or collapses entirely, leading to death.
Like many decapod crustaceans, lobsters grow throughout life and can add new muscle cells at each moult. Lobster longevity allows them to reach impressive sizes. According to Guinness World Records, the largest lobster ever caught was in Nova Scotia, Canada, weighing .
Ecology
Lobsters live in all oceans, on rocky, sandy, or muddy bottoms from the shoreline to beyond the edge of the continental shelf, contingent largely on size and age. Smaller, younger lobsters are typically found in crevices or in burrows under rocks and do not typically migrate. Larger, older lobsters are more likely to be found in deeper seas, migrating back to shallow waters seasonally.
Lobsters are omnivores and typically eat live prey such as fish, mollusks, other crustaceans, worms, and some plant life. They scavenge if necessary and are known to resort to cannibalism in captivity. However, when lobster skin is found in lobster stomachs, this is not necessarily evidence of cannibalism because lobsters eat their shed skin after moulting. While cannibalism was thought to be nonexistent among wild lobster populations, it was observed in 2012 by researchers studying wild lobsters in Maine. These first known instances of lobster cannibalism in the wild are theorized to be attributed to a local population explosion among lobsters caused by the disappearance of many of the Maine lobsters' natural predators.
In general, lobsters are long and move by slowly walking on the sea floor. However, they swim backward quickly when they flee by curling and uncurling their abdomens. A speed of has been recorded. This is known as the caridoid escape reaction.
Symbiotic animals of the genus Symbion, the only known member of the phylum Cycliophora, live exclusively on lobster gills and mouthparts. Different species of Symbion have been found on the three commercially important lobsters of the North Atlantic Ocean: Nephrops norvegicus, Homarus gammarus, and Homarus americanus.
As food
Lobster is commonly served boiled or steamed in the shell. Diners crack the shell with lobster crackers and fish out the meat with lobster picks. The meat is often eaten with melted butter and lemon juice. Lobster is also used in soup, bisque, lobster rolls, cappon magro, and dishes such as lobster Newberg and lobster Thermidor.
Cooks boil or steam live lobsters. When a lobster is cooked, its shell's color changes from brown to orange because the heat from cooking breaks down a protein called crustacyanin, which suppresses the orange hue of the chemical astaxanthin, which is also found in the shell.
According to the United States Food and Drug Administration (FDA), the mean level of mercury in American lobster between 2005 and 2007 was 0.107 ppm.
History
Humans are claimed to have eaten lobster since early history. Large piles of lobster shells near areas populated by fishing communities attest to the crustacean's extreme popularity during this period. Evidence indicates that lobster was being consumed as a regular food product in fishing communities along the shores of Britain, South Africa, Australia, and Papua New Guinea years ago. Lobster became a significant source of nutrients among European coastal dwellers. Historians suggest lobster was an important secondary food source for most European coastal dwellers, and it was a primary food source for coastal communities in Britain during this time.
Lobster became a popular mid-range delicacy during the mid to late Roman period. The price of lobster could vary widely due to various factors, but evidence indicates that lobster was regularly transported inland over long distances to meet popular demand. A mosaic found in the ruins of Pompeii suggests that the spiny lobster was of considerable interest to the Roman population during the early imperial period.
Lobster was a popular food among the Moche people of Peru between 50 CE and 800 CE. Besides its use as food, lobster shells were also used to create a light pink dye, ornaments, and tools. A mass-produced lobster-shaped effigy vessel dated to this period attests to lobster's popularity at this time, though the purpose of this vessel has not been identified.
The Viking period saw an increase in lobster and other shellfish consumption among northern Europeans. This can be attributed to the overall increase in marine activity due to the development of better boats and the increasing cultural investment in building ships and training sailors. The consumption of marine life went up overall in this period, and the consumption of lobster went up in accordance with this general trend.
Unlike fish, however, lobster had to be cooked within two days of leaving salt water, limiting the availability of lobster for inland dwellers. Thus lobster, more than fish, became a food primarily available to the relatively well-off, at least among non-coastal dwellers.
Lobster is first mentioned in cookbooks during the medieval period. Le Viandier de Taillevent, a French recipe collection written around 1300, suggests that lobster (also called saltwater crayfish) be "Cooked in wine and water, or in the oven; eaten in vinegar." Le Viandier de Taillevent is considered to be one of the first "haute cuisine" cookbooks, advising on how to cook meals that would have been quite elaborate for the period and making use of expensive and hard-to-obtain ingredients. Though the original edition, which includes the recipe for lobster, was published before the birth of French court cook Guillaume Tirel, Tirel later expanded and republished this recipe collection, suggesting that the recipes included in both editions were popular among the highest circles of French nobility, including King Philip VI. The inclusion of a lobster recipe in this cookbook, especially one which does not make use of other more expensive ingredients, attests to the popularity of lobster among the wealthy.
The French household guidebook Le Ménagier de Paris, published in 1393, includes no fewer than five lobster recipes, which vary in elaboration. A guidebook intended to provide advice for women running upper-class households, Le Ménagier de Paris is similar to its predecessor in that it indicates the popularity of lobster as a food among the upper classes.
That lobster was first mentioned in cookbooks during the 1300s and only mentioned in two during this century should not be taken as an implication that lobster was not widely consumed before or during this time. Recipe collections were virtually non-existent before the 1300s, and only a handful exist from the medieval period.
During the early 1400s, lobster was still a popular dish among the upper classes. During this time, influential households used the variety and variation of species served at feasts to display wealth and prestige. Lobster was commonly found among these spreads, indicating that it continued to be held in high esteem among the wealthy. In one notable instance, the Bishop of Salisbury offered at least 42 kinds of crustaceans and fish at his feasts over nine months, including several varieties of lobster. However, lobster was not a food exclusively accessed by the wealthy. The general population living on the coasts made use of the various food sources provided by the ocean, and shellfish especially became a more popular source of nutrition. Among the general population, lobster was generally eaten boiled during the mid-15th century, but the influence of the cuisine of higher society can be seen in that it was now also regularly eaten cold with vinegar. The inland peasantry would still have generally been unfamiliar with lobster during this time.
Lobster continued to be eaten as a delicacy and a general staple food among coastal communities until the late 17th century. During this time, the influence of the Church and the government regulating and sometimes banning meat consumption during certain periods continued to encourage the popularity of seafood, especially shellfish, as a meat alternative among all classes. Throughout this period, lobster was eaten fresh, pickled, and salted. From the late 17th century onward, developments in fishing, transportation, and cooking technology allowed lobster to more easily make its way inland, and the variety of dishes involving lobster and cooking techniques used with the ingredient expanded. However, these developments coincided with a decrease in the lobster population, and lobster increasingly became a delicacy food, valued among the rich as a status symbol and less likely to be found in the diet of the general population.
The American lobster was not originally popular among European colonists in North America. This was partially due to the European inlander's association of lobster with barely edible salted seafood and partially due to a cultural opinion that seafood was a lesser alternative to meat that did not provide the taste or nutrients desired. It was also due to the extreme abundance of lobster at the time of the colonists' arrival, which contributed to a general perception of lobster as an undesirable peasant food. The American lobster did not achieve popularity until the mid-19th century when New Yorkers and Bostonians developed a taste for it, and commercial lobster fisheries only flourished after the development of the lobster smack, a custom-made boat with open holding wells on the deck to keep the lobsters alive during transport.
Before this time, lobster was considered a poverty food or a food for indentured servants or lower members of society in Maine, Massachusetts, and the Canadian Maritimes. Some servants are said to have specified in employment agreements that they would not eat lobster more than twice per week; however, there is limited evidence for this. Lobster was also commonly served in prisons, much to the displeasure of inmates. American lobster was initially deemed worthy only of being used as fertilizer or fish bait, and until well into the 20th century, it was not viewed as more than a low-priced canned staple food.
As a crustacean, lobster remains a taboo food in the dietary laws of Judaism and certain streams of Islam.
Grading
Caught lobsters are graded as new-shell, hard-shell, or old-shell. Because lobsters that have recently shed their shells are the most delicate, an inverse relationship exists between the price of American lobster and its flavor. New-shell lobsters have paper-thin shells and a worse meat-to-shell ratio, but the meat is very sweet. However, the lobsters are so delicate that even transport to Boston almost kills them, making the market for new-shell lobsters strictly local to the fishing towns where they are offloaded. Hard-shell lobsters with firm shells but less sweet meat can survive shipping to Boston, New York, and even Los Angeles, so they command a higher price than new-shell lobsters. Meanwhile, old-shell lobsters, which have not shed since the previous season and have a coarser flavor, can be air-shipped anywhere in the world and arrive alive, making them the most expensive.
Killing methods and animal welfare
Several methods are used for killing lobsters. The most common way of killing lobsters is by placing them live in boiling water, sometimes after being placed in a freezer for a period. Another method is to split the lobster or sever the body in half lengthwise. Lobsters may also be killed or immobilized immediately before boiling by a stab into the brain (pithing), in the belief that this will stop suffering. However, a lobster's brain operates from not one but several ganglia, and disabling only the frontal ganglion does not usually result in death. The boiling method is illegal in some places, such as in Italy, where offenders face fines up to €495. Lobsters can be killed by electrocution prior to cooking with a device called the CrustaStun. Since March 2018, lobsters in Switzerland need to be knocked out, or killed instantly, before they are boiled. They also receive other forms of protection while in transit.
Fishery and aquaculture
Lobsters are caught using baited one-way traps with a color-coded marker buoy to mark cages. Lobster is fished in water between , although some lobsters live at . Cages are of plastic-coated galvanized steel or wood. A lobster fisher may tend as many as 2,000 traps.
Around the year 2000, owing to overfishing and high demand, lobster aquaculture expanded.
Species
The fossil record of clawed lobsters extends back at least to the Valanginian age of the Cretaceous (140 million years ago). This list contains all 54 extant species in the family Nephropidae:
Acanthacaris
Acanthacaris caeca A. Milne-Edwards, 1881
Acanthacaris tenuimana Bate, 1888
Dinochelus Ahyong, Chan & Bouchet, 2010
Dinochelus ausubeli Ahyong, Chan & Bouchet, 2010
Eunephrops Smith, 1885
Eunephrops bairdii Smith, 1885
Eunephrops cadenasi Chace, 1939
Eunephrops luckhursti Manning, 1997
Eunephrops manningi Holthuis, 1974
Homarinus Kornfield, Williams & Steneck, 1995
Homarinus capensis (Herbst, 1792) – Cape lobster
Homarus Weber, 1795
Homarus americanus H. Milne-Edwards, 1837 – American lobster
Homarus gammarus (Linnaeus, 1758) – European lobster
Metanephrops Jenkins, 1972
Metanephrops andamanicus (Wood-Mason, 1892) – Andaman lobster
Metanephrops arafurensis (De Man, 1905)
Metanephrops armatus Chan & Yu, 1991
Metanephrops australiensis (Bruce, 1966) – Australian scampi
Metanephrops binghami (Boone, 1927) – Caribbean lobster
Metanephrops boschmai (Holthuis, 1964) – Bight lobster
Metanephrops challengeri (Balss, 1914) – New Zealand scampi
Metanephrops formosanus Chan & Yu, 1987
Metanephrops japonicus (Tapparone-Canefri, 1873) – Japanese lobster
Metanephrops mozambicus Macpherson, 1990
Metanephrops neptunus (Bruce, 1965)
Metanephrops rubellus (Moreira, 1903)
Metanephrops sagamiensis (Parisi, 1917)
Metanephrops sibogae (De Man, 1916)
Metanephrops sinensis (Bruce, 1966) – China lobster
Metanephrops taiwanicus (Hu, 1983)
Metanephrops thomsoni (Bate, 1888)
Metanephrops velutinus Chan & Yu, 1991
Nephropides Manning, 1969
Nephropides caribaeus Manning, 1969
Nephrops Leach, 1814
Nephrops norvegicus (Linnaeus, 1758) – Norway lobster, Dublin Bay prawn, langoustine
Nephropsis Wood-Mason, 1872
Nephropsis acanthura Macpherson, 1990
Nephropsis aculeata Smith, 1881 – Florida lobsterette
Nephropsis agassizii A. Milne-Edwards, 1880
Nephropsis atlantica Norman, 1882
Nephropsis carpenteri Wood-Mason, 1885
Nephropsis ensirostris Alcock, 1901
Nephropsis holthuisii Macpherson, 1993
Nephropsis malhaensis Borradaile, 1910
Nephropsis neglecta Holthuis, 1974
Nephropsis occidentalis Faxon, 1893
Nephropsis rosea Bate, 1888
Nephropsis serrata Macpherson, 1993
Nephropsis stewarti Wood-Mason, 1872
Nephropsis suhmi Bate, 1888
Nephropsis sulcata Macpherson, 1990
Thaumastocheles Wood-Mason, 1874
Thaumastocheles dochmiodon Chan & Saint Laurent, 1999
Thaumastocheles japonicus Calman, 1913
Thaumastocheles zaleucus (Thomson, 1873)
Thaumastochelopsis Bruce, 1988
Thaumastochelopsis brucei Ahyong, Chu & Chan, 2007
Thaumastochelopsis wardi Bruce, 1988
Thymopides Burukovsky & Averin, 1977
Thymopides grobovi (Burukovsky & Averin, 1976)
Thymopides laurentae Segonzac & Macpherson, 2003
Thymops Holthuis, 1974
Thymops birsteini (Zarenkov & Semenov, 1972)
Thymopsis Holthuis, 1974
Thymopsis nilenta Holthuis, 1974
| Biology and health sciences | Crustaceans | null |
46311 | https://en.wikipedia.org/wiki/Flounder | Flounder | Flounders are a group of flatfish species. They are demersal fish, found at the bottom of oceans around the world; some species will also enter estuaries.
Taxonomy
The name "flounder" is used for several only distantly related species, though all are in the suborder Pleuronectoidei (families Achiropsettidae, Bothidae, Pleuronectidae, Paralichthyidae, and Samaridae). Some of the better known species that are important in fisheries are:
Western Atlantic
Gulf flounder – Paralichthys albigutta
Southern flounder – Paralichthys lethostigma
Summer flounder (also known as fluke) – Paralichthys dentatus
Winter flounder – Pseudopleuronectes americanus
European waters
European flounder – Platichthys flesus
Witch flounder – Glyptocephalus cynoglossus
North Pacific
Halibut – Hippoglossus stenolepis
Olive flounder – Paralichthys olivaceus
Eye migration
Larval flounder are born with one eye on each side of their head, but as they grow from the larval to juvenile stage through metamorphosis, one eye migrates to the other side of the body. As a result, both eyes are then on the side which faces up. The side to which the eyes migrate is dependent on the species type. As an adult, a flounder changes its habits and camouflages itself by lying on the bottom of the ocean floor as protection against predators.
Habitat
Flounders ambush their prey, feeding at soft muddy areas of the sea bottom, near bridge piles, docks, and coral reefs.
A flounder's diet consists mainly of fish spawn, crustaceans, polychaetes and small fish. Flounder typically grow to a length of , and as large as . Their width is about half their length. Male Platichthys have been found up to off the coast of northern Sardinia, sometimes with heavy encrustations of various species of barnacle.
Fluke, a type of flounder, are being farm raised in open water by Mariculture Technologies in Greenport, New York.
Threats
World stocks of large predatory fish and large ground fish, including sole and flounder, were estimated in 2003 to be only about 10% of pre-industrial levels, largely due to overfishing. Most overfishing is due to the extensive activities of the fishing industry. Current estimates suggest that approximately 30 million flounder (excluding sole) are alive in the world today. In the Gulf of Mexico, along the coast of Texas, research indicates the flounder population could be as low as 15 million due to heavy overfishing and industrial pollution.
| Biology and health sciences | Acanthomorpha | null |
46319 | https://en.wikipedia.org/wiki/Asparagus | Asparagus | Asparagus (Asparagus officinalis) is a perennial flowering plant species in the genus Asparagus native to Eurasia. Widely cultivated as a vegetable crop, its young shoots are used as a spring vegetable.
Description
Asparagus is an herbaceous, perennial plant growing to tall, with stout stems with much-branched, feathery foliage. The 'leaves' are needle-like cladodes (modified stems) in the axils of scale leaves; they are long and broad, and clustered in fours, up to 15 together, in a rose-like shape. The root system, often referred to as a 'crown', is adventitious; the root type is fasciculated. The flowers are bell-shaped, greenish-white to yellowish, long, with six tepals partially fused together at the base; they are produced singly or in clusters of two or three in the junctions of the branchlets. It is usually dioecious, with male and female flowers on separate plants, but sometimes hermaphrodite flowers are found. The fruit is a small red berry in diameter, which is toxic to humans.
Plants native to the western coasts of Europe (from northern Spain to northwest Germany, north Ireland, and Great Britain) are treated as A. officinalis subsp. prostratus, distinguished by its low-growing, often prostrate stems growing to only high, and shorter cladodes long. Some authors treat it as a distinct species, A. prostratus.
Taxonomy
Asparagus was once classified in the lily family, as were the related Allium species onions and garlic. Genetic research currently places lilies, Allium, and asparagus in three separate families: the Liliaceae, Amaryllidaceae, and Asparagaceae, respectively. The latter two are part of the order Asparagales.
Etymology
The English word asparagus derives from classical Latin, but the plant was once known in English as sperage, from the Medieval Latin sparagus. This term itself derives from the Greek aspáragos, a variant of aspháragos. The Greek terms are of uncertain provenance; the former form admits the possibility of a Proto-Indo-European root meaning "to jerk, scatter," directly or via a Persian descendant meaning "twig, branch"; but the Ancient Greek word itself, meaning "gully, chasm," seems to be of Pre-Greek origin instead.
In English, A. officinalis is widely known simply as "asparagus", or sometimes "garden asparagus".
Asparagus was corrupted by folk etymology in some places to "sparrow grass"; indeed, John Walker wrote in 1791 that "Sparrowgrass is so general that asparagus has an air of stiffness and pedantry".
The name 'sparrow grass' was still in common use in rural East Anglia, England, well into the twentieth century.
Distribution and habitat
Sources differ as to the plant's native range, but generally include most of Europe and western temperate Asia.
Cultivation
Since asparagus often originates in maritime habitats, it thrives in soils that are too saline for normal weeds to grow. Thus, a little salt was traditionally used to suppress weeds in beds intended for asparagus; this has the disadvantage that the soil cannot be used for anything else. Some regions and gardening zones are better-suited for growing asparagus than others, such as the west coast of North America and other more maritime, “Mediterranean” environments. The fertility of the soil is a large factor. "Crowns" are planted in winter, and the first shoots appear in spring; the first pickings or "thinnings" are known as sprue asparagus. Sprue has thin stems.
A breed of "early-season asparagus" that can be harvested two months earlier than usual was announced by a UK grower in early 2011. This variety does not need to lie dormant and blooms at , rather than the usual .
Purple asparagus differs from its green and white counterparts in having high sugar and low fibre levels. Purple asparagus was originally developed in Italy, near the city of Albenga, and commercialized under the variety name 'Violetto d'Albenga'. Purple asparagus can also turn green while being cooked due to its sensitivity to heat.
Companion planting
Asparagus is said to be a useful companion plant for tomatoes, as the tomato plant repels the asparagus beetle. Asparagus may repel some harmful root nematodes that affect tomato plants.
Uses
The genome of the species has been sequenced as a model to study the evolution of sex chromosomes in plants and dioecy.
Nutrition
Water makes up 93% of asparagus's composition. Asparagus is low in food energy and very low in sodium. It is a good source of vitamin B6, calcium, magnesium, and zinc, and a very good source of dietary fibre, protein, beta-carotene, vitamin C, vitamin E, vitamin K, thiamin, riboflavin, rutin, niacin, folic acid, iron, phosphorus, potassium, copper, manganese, and selenium, as well as chromium, a trace mineral that regulates the ability of insulin to transport glucose from the bloodstream into cells. The amino acid asparagine gets its name from asparagus, from which it was first isolated, as the asparagus plant is relatively rich in this compound.
Culinary
Only young asparagus shoots are commonly eaten: once the buds start to open ("ferning out"), the shoots quickly turn woody. The roots contain starch.
The shoots are prepared and served in a number of ways around the world, typically as an appetizer or vegetable side dish. In Asian-style cooking, asparagus is often stir-fried. Cantonese restaurants in the United States often serve asparagus stir-fried with chicken, shrimp, or beef. It may also be quickly grilled over charcoal or hardwood embers, and is also used as an ingredient in some stews and soups.
Asparagus can also be pickled and stored for several years. Some brands label shoots prepared in this way as "marinated".
Stem thickness indicates the age of the plant (and not the age of the stalk), with the thicker stems coming from older plants. Older, thicker stalks can be woody, although peeling the skin at the base removes the tough layer. Peeled asparagus will poach much faster. The bottom portion of asparagus often contains sand and soil, so thorough cleaning is generally advised before cooking.
Male plants tend to produce spears that are smaller and thinner, while female plants tend to produce larger and thicker spears. Thickness and thinness are not an indication of tenderness or toughness; the stalks are thick or thin from the moment they sprout from the ground.
Green asparagus is eaten worldwide, and the availability of imports throughout the year has made it less of a delicacy than it once was. In Europe, according to one source, the "asparagus season is a highlight of the foodie calendar"; in the UK this traditionally begins on 23 April and ends on Midsummer Day. As in continental Europe, due to the short growing season and demand for local produce, asparagus commands a premium price.
Commercial production
The top asparagus importers (2016) were the United States (214,735 tonnes), followed by Germany (24,484 tonnes), and Canada (19,224 tonnes).
China is by far the world's largest producer: in 2017 it produced 7,845,162 tonnes, followed by Peru with 383,098 tonnes and Mexico with 245,681 tonnes. U.S. production was concentrated in California, Michigan, and Washington.
The annual production for white asparagus in Germany is 57,000 tonnes (61% of consumer demand).
When grown under tunnels, growers can extend the harvest season. In the UK, it is estimated that the asparagus harvest season can begin as early as mid-February and continue into late autumn by growing cold-resistant cultivars under heated polytunnels. Furthermore, late season harvests can be achieved using 'reverse season growth' where spears are left to fern between March–August and harvested in September–October.
In Asia, an alternative approach to cultivating asparagus has been employed and is referred to as 'Mother Stalk Method' where three to five stalks per plant are allowed to develop into fern, while harvesting adjacent spears.
White asparagus
White asparagus is very popular in Europe and western Asia. White asparagus is the result of applying a blanching technique while the asparagus shoots are growing. To cultivate white asparagus, the shoots are covered with soil as they grow, i.e. earthed up; without exposure to sunlight, no photosynthesis starts, and the shoots remain white. Compared to green asparagus, the locally cultivated so-called "white gold" or "edible ivory" asparagus, also referred to as "the royal vegetable", is believed to be less bitter and much more tender. Freshness is very important, and the lower ends of white asparagus must be peeled before cooking or raw consumption.
Only seasonally on the menu, asparagus dishes are advertised outside many restaurants, usually from late April to June. For the French style, asparagus is often boiled or steamed and served with Hollandaise sauce, white sauce, melted butter or most recently with olive oil and Parmesan cheese. Tall, narrow asparagus cooking pots allow the shoots to be steamed gently, their tips staying out of the water.
During the German Spargelsaison or Spargelzeit ("asparagus season" or "asparagus time"), the asparagus season that traditionally finishes on 24 June, roadside stands and open-air markets sell about half of the country's white asparagus consumption.
In western Himalayan regions, such as Nepal and north-western India, wild asparagus is harvested as a seasonal vegetable delicacy known as kurilo or jhijhirkani.
In culture
Asparagus has been used as a vegetable owing to its distinct flavor, and in medicine due to its diuretic properties and its purported function as an aphrodisiac. It is pictured as an offering on an Egyptian frieze dating to 3000 BC. In ancient times, it was also known in Syria and in the Iberian Peninsula. Greeks and Romans ate it fresh when in season, and dried the vegetable for use in winter. Emperor Augustus coined the expression "faster than cooking asparagus" for quick action.
A recipe for cooking asparagus is given in one of the oldest surviving collections of recipes (Apicius's 1st-century AD De re coquinaria, Book III). In the second century AD, the Greek physician Galen, highly respected within Roman society, mentioned asparagus as a beneficial herb, but as the dominance of the Roman Empire waned, asparagus's medicinal value drew little attention until al-Nafzawi's The Perfumed Garden, which celebrates its purported aphrodisiac power, a power that the Indian Ananga Ranga attributes to "special phosphorus elements" that also counteract fatigue.
By 1469, asparagus was cultivated in French monasteries. Asparagus appears to have been little noticed in England until 1538, and in Germany until 1542.
Asparagus was brought to North America by European settlers at least as early as 1655. Adriaen van der Donck, a Dutch immigrant to New Netherland, mentions asparagus in his description of Dutch farming practices in the New World. Asparagus was grown by British immigrants as well; in 1685, one of William Penn's advertisements for Pennsylvania included asparagus in a long list of crops that grew well in the American climate.
The points d'amour ("love tips") were served as a delicacy to Madame de Pompadour (1721–1764).
Effects on urine
The effect of eating asparagus on urine excreted afterwards has long been observed:
[Asparagus] cause a powerful and disagreeable smell in the urine, as everybody knows.
— Treatise of All Sorts of Foods, Louis Lémery, 1702
asparagus... affects the urine with a foetid smell (especially if cut when they are white) and therefore have been suspected by some physicians as not friendly to the kidneys; when they are older, and begin to ramify, they lose this quality; but then they are not so agreeable.
— "An Essay Concerning the Nature of Aliments", John Arbuthnot, 1735
A few Stems of Asparagus eaten, shall give our Urine a disagreeable Odour...
— "Letter to the Royal Academy of Brussels", Benjamin Franklin, c. 1781
Asparagus "...transforms my chamber-pot into a flask of perfume."
— Marcel Proust (1871–1922)
Asparagus contains asparagusic acid. When the vegetable is digested, a group of volatile sulfur-containing compounds is produced.
Certain compounds in asparagus are metabolized to yield ammonia and various sulfur-containing degradation products, including various thiols and thioesters, which following consumption give urine a characteristic smell. Some of the volatile organic compounds responsible for the smell are:
methanethiol
dimethyl sulfide
dimethyl disulfide
bis(methylthio)methane
dimethyl sulfoxide
dimethyl sulfone
Subjectively, the first two are the most pungent, while the last two (sulfur-oxidized) give a sweet aroma. A mixture of these compounds form a "reconstituted asparagus urine" odor. This was first investigated in 1891 by Marceli Nencki, who attributed the smell to methanethiol. These compounds originate in the asparagus as asparagusic acid and its derivatives, as these are the only sulfur-containing compounds unique to asparagus. As these are more present in young asparagus, this accords with the observation that the smell is more pronounced after eating young asparagus. The biological mechanism for the production of these compounds is less clear.
The onset of the asparagus urine smell is remarkably rapid while the decline is slower. The smell has been reported to be detectable 15 to 30 minutes after ingestion and subsides with a half-life of approximately four hours.
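Assuming simple exponential decay, the reported four-hour half-life implies the following fractions of the initial odour intensity remaining t hours after onset:

```latex
% Worked example of the reported four-hour half-life under assumed
% exponential decay.
\[
  f(t) = \left(\tfrac{1}{2}\right)^{t/4},
  \qquad
  f(4) = \tfrac{1}{2}, \quad
  f(8) = \tfrac{1}{4}, \quad
  f(12) = \tfrac{1}{8}.
\]
```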
Asparagus has been eaten and cultivated for at least two millennia, but the association between odorous urine and asparagus consumption was not observed until the late 17th century, when sulfur-rich fertilisers became common in agriculture. Small-scale studies noted that the "asparagus urine" odour was not produced by all individuals, and estimates of the proportion of the population who are excretors (reporting a noticeable asparagus urine odour after eating asparagus) have ranged from about 40% to as high as 79%. When excretors are exposed to non-excretor urine after asparagus consumption, however, the characteristic asparagus urine odour is usually reported. More recent work has confirmed that a small proportion of individuals do not produce asparagus urine, and amongst those that do, some cannot detect the odour due to a single-nucleotide polymorphism within a cluster of olfactory receptors.
Debate exists about the universality of producing the sulfurous smell, as well as the ability to detect it. Originally, this was thought to be because some people digested asparagus differently from others, so some excreted odorous urine after eating asparagus, and others did not. In the 1980s, three studies from France, China, and Israel published results showing that producing odorous urine from asparagus was a common human characteristic. The Israeli study found that from their 307 subjects, all of those who could smell "asparagus urine" could detect it in the urine of anyone who had eaten asparagus, even if the person who produced it could not detect it. A 2010 study found variations in both production of odorous urine and the ability to detect the odor, but that these were not tightly related. Most people are thought to produce the odorous compounds after eating asparagus, but the differing abilities of various individuals to detect the odor at increasing dilutions suggests a genetically determined specific sensitivity.
In 2010, the company 23andMe published a genome-wide association study on whether participants have "ever noticed a peculiar odor when [they] pee after eating asparagus". This study pinpointed a single-nucleotide polymorphism (SNP) in a cluster of olfactory genes associated with the ability to detect the odor. While this SNP did not explain all of the difference in detection between people, it provides support for the theory that genetic differences occur in olfactory receptors that lead people to be unable to smell these odorous compounds.
Celebrations
The green crop is significant enough in California's Sacramento–San Joaquin River Delta region that the city of Stockton holds a festival every year to celebrate it. Oceana County, Michigan, the self-proclaimed "asparagus capital of the world" hosts an annual festival complete with a parade and asparagus queen; The Vale of Evesham in Worcestershire is the largest producer within Northern Europe, celebrating with the annual British Asparagus Festival involving auctions of the best crop, an "Asparagus Run" modelled on the Beaujolais Run and a weekend "Asparafest" music festival.
Many German cities hold an annual Spargelfest (asparagus festival) celebrating the harvest of white asparagus. Schwetzingen claims to be the "Asparagus Capital of the World", and during its festival, an Asparagus Queen is crowned. The Bavarian city of Nuremberg holds a week-long feast in April, with a competition to find the fastest asparagus peeler in the region; this usually involves generous amounts of the local wines and beers being consumed to aid the spectators' appreciative support.
Helmut Zipner, who peeled a ton of asparagus in 16 hours, holds the world record in asparagus peeling.
Flatfish
A flatfish is a member of the ray-finned demersal fish superorder Pleuronectoidei, also called the Heterosomata. In many species, both eyes lie on one side of the head, one or the other migrating through or around the head during development. Some species face their left sides upward, some face their right sides upward, and others face either side upward. The most primitive members of the group, the threadfins, do not resemble the flatfish but are their closest relatives.
Many important food fish are in this order, including the flounders, soles, turbot, plaice, and halibut. Some flatfish can camouflage themselves on the ocean floor.
Taxonomy
Due to their highly distinctive morphology, flatfishes were previously treated as constituting their own order, Pleuronectiformes. However, more recent taxonomic studies have found them to nest within a diverse clade of nektonic marine fishes known as the Carangiformes, which also includes the jacks and billfish. Specifically, flatfish are most closely related to the threadfins, which are now also placed in the suborder Pleuronectoidei. Together, this group is most closely related to the archerfish and beachsalmons within Toxotoidei. For these reasons, flatfishes are now treated as a suborder of the Carangiformes.
Over 800 described species are placed into 16 families. When they were treated as an order, the flatfishes were divided into two suborders, Psettodoidei and Pleuronectoidei, with more than 99% of the species diversity found within the Pleuronectoidei. The largest families are the Soleidae, Bothidae, and Cynoglossidae, with more than 150 species each. Two monotypic families also exist (Paralichthodidae and Oncopteridae). Some families are the result of relatively recent splits. For example, the Achiridae were classified as a subfamily of Soleidae in the past, and the Samaridae were considered a subfamily of the Pleuronectidae. The families Paralichthodidae, Poecilopsettidae, and Rhombosoleidae were also traditionally treated as subfamilies of Pleuronectidae, but are now recognised as families in their own right. The Paralichthyidae has long been indicated to be paraphyletic, and the formal description of the Cyclopsettidae in 2019 resulted in the split of this family as well.
The taxonomy of some groups is in need of review; the last monograph covering the entire order was John Roxborough Norman's Monograph of the Flatfishes, published in 1934. In particular, Tephrinectes sinensis may represent a family-level lineage and requires further evaluation. New species are described with some regularity, and undescribed species likely remain.
Hybrids
Hybrids are well known in flatfishes. The Pleuronectidae have the largest number of reported hybrids of marine fishes. Two of the most famous intergeneric hybrids are between the European plaice (Pleuronectes platessa) and European flounder (Platichthys flesus) in the Baltic Sea, and between the English sole (Parophrys vetulus) and starry flounder (Platichthys stellatus) in Puget Sound. The offspring of the latter species pair is popularly known as the hybrid sole and was initially believed to be a valid species in its own right.
Distribution
Flatfishes are found in oceans worldwide, ranging from the Arctic, through the tropics, to Antarctica. Species diversity is centered in the Indo-West Pacific and declines following both latitudinal and longitudinal gradients away from the Indo-West Pacific. Most species are found in depths between 0 and , but a few have been recorded from depths in excess of . None have been confirmed from the abyssal or hadal zones. An observation of a flatfish from the Bathyscaphe Trieste at the bottom of the Mariana Trench at a depth of almost has been questioned by fish experts, and recent authorities do not recognize it as valid. Among the deepwater species, Symphurus thermophilus lives congregating around "ponds" of sulphur at hydrothermal vents on the seafloor. No other flatfish is known from hydrothermal vents. Many species will enter brackish or fresh water, and a smaller number of soles (families Achiridae and Soleidae) and tonguefish (Cynoglossidae) are entirely restricted to fresh water.
Characteristics
The most obvious characteristic of the flatfish is its asymmetry, with both eyes lying on the same side of the head in the adult fish. In some families, the eyes are usually on the right side of the body (dextral or right-eyed flatfish), and in others, they are usually on the left (sinistral or left-eyed flatfish). The primitive spiny turbots include equal numbers of right- and left-sided individuals, and are generally less asymmetrical than the other families. Other distinguishing features of the order are the presence of protrusible eyes, another adaptation to living on the seabed (benthos), and the extension of the dorsal fin onto the head.
The most basal members of the group, the threadfins, do not closely resemble the flatfishes.
The surface of the fish facing away from the sea floor is pigmented, often serving to camouflage the fish, but sometimes with striking coloured patterns. Some flatfishes are also able to change their pigmentation to match the background, in a manner similar to some cephalopods. The side of the body without the eyes, facing the seabed, is usually colourless or very pale.
In general, flatfishes rely on their camouflage for avoiding predators, but some have aposematic traits such as conspicuous eyespots (e.g., Microchirus ocellatus) and several small tropical species (at least Aseraggodes, Pardachirus and Zebrias) are poisonous. Juveniles of Soleichthys maculosus mimic toxic flatworms of the genus Pseudobiceros in both colours and swimming mode. Conversely, a few octopus species have been reported to mimic flatfishes in colours, shape and swimming mode.
The flounders and spiny turbots eat smaller fish, and have well-developed teeth. They sometimes seek prey in the midwater, away from the bottom, and show fewer extreme adaptations than other families. The soles, by contrast, are almost exclusively bottom-dwellers, and feed on invertebrates. They show a more extreme asymmetry, and may lack teeth on one side of the jaw.
Flatfishes range in size from Tarphops oligolepis, measuring about in length, and weighing , to the Atlantic halibut, at and .
Species and species groups
Brill
Dab
Sanddab
Flounder
Halibut
Megrim
Plaice
Sole
Tonguefish
Turbot
Reproduction
Flatfishes lay eggs that hatch into larvae resembling typical, symmetrical fish. These are initially elongated, but quickly develop into a more rounded form. The larvae typically have protective spines on the head, over the gills, and in the pelvic and pectoral fins. They also possess a swim bladder, and do not dwell on the bottom, instead dispersing from their hatching grounds as plankton.
The length of the planktonic stage varies between different types of flatfishes, but eventually they begin to metamorphose into the adult form. One of the eyes migrates across the top of the head and onto the other side of the body, leaving the fish blind on one side. The larva also loses its swim bladder and spines, and sinks to the bottom, laying its blind side on the underlying surface.
Origin and evolution
Scientists have proposed since the 1910s that flatfishes evolved from percoid ancestors. There has been some disagreement over whether they are a monophyletic group. Some palaeontologists think that percomorph groups other than flatfishes were "experimenting" with head asymmetry during the Eocene, and certain molecular studies conclude that the primitive family Psettodidae evolved its flat body and asymmetrical head independently of other flatfish groups. Many scientists, however, argue that the Pleuronectiformes are monophyletic.
The fossil record indicates that flatfishes might have been present before the Eocene, based on fossil otoliths resembling those of modern pleuronectiforms dating back to the Thanetian and Ypresian stages (57-53 million years ago).
Flatfishes have been cited as dramatic examples of evolutionary adaptation. Richard Dawkins, in The Blind Watchmaker, explains the flatfishes' evolutionary history thus:
...bony fish as a rule have a marked tendency to be flattened in a vertical direction.... It was natural, therefore, that when the ancestors of [flatfish] took to the sea bottom, they should have lain on one side.... But this raised the problem that one eye was always looking down into the sand and was effectively useless. In evolution this problem was solved by the lower eye 'moving' round to the upper side.
The origin of the unusual morphology of flatfishes remained enigmatic until the 2000s, and early researchers suggested that it came about through saltation rather than gradual evolution by natural selection, because a partially migrated eye was considered maladaptive. This started to change in 2008 with a study of the two fossil genera Amphistium and Heteronectes, dated to about 50 million years ago. These genera retain primitive features not seen in modern types of flatfishes. In addition, their heads are less asymmetric than those of modern flatfishes, retaining one eye on each side of the head, although the eye on one side sits closer to the top of the head than on the other. The more recently described fossil genera Quasinectes and Anorevus have been proposed to show similar morphologies and have also been classified as "stem pleuronectiforms". Such findings led Friedman to conclude that the evolution of flatfish morphology "happened gradually, in a way consistent with evolution via natural selection—not suddenly, as researchers once had little choice but to believe."
To explain the survival advantage of a partially migrated eye, it has been proposed that primitive flatfishes like Amphistium rested with the head propped up above the seafloor (a behaviour sometimes observed in modern flatfishes), enabling them to use their partially migrated eye to see things closer to the seafloor.
While known basal genera like Amphistium and Heteronectes support a gradual acquisition of the flatfish morphology, they were probably not direct ancestors of living pleuronectiforms, as fossil evidence indicates that most flatfish lineages living today were already present in the Eocene and contemporaneous with them. It has been suggested that the more primitive forms were eventually outcompeted.
As food
Flatfish are considered whitefish because their oils are concentrated in the liver rather than the flesh. The lean flesh has a distinctive flavor that differs from species to species. Methods of cooking include grilling, pan-frying, baking, and deep-frying.
Passerine
A passerine is any bird of the order Passeriformes (from Latin passer, 'sparrow', and -formes, '-shaped'), which includes more than half of all bird species. Sometimes known as perching birds, passerines generally have an anisodactyl arrangement of their toes (three pointing forward and one back), which facilitates perching.
With more than 140 families and some 6,500 identified species, Passeriformes is the largest order of birds and among the most diverse clades of terrestrial vertebrates, representing 60% of birds. Passerines are divided into three suborders: the New Zealand wrens; the suboscines, diverse birds found mostly in North and South America; and the songbirds. Passerines originated in the Southern Hemisphere around 60 million years ago.
Most passerines are insectivorous or omnivorous, and eat both insects and fruit or seeds.
Etymology
The terms "passerine" and "Passeriformes" are derived from the scientific name of the house sparrow, Passer domesticus, whose genus is the Latin word for sparrow. Formerly this meant the songbirds of Europe. Now it also includes perching, non-singing birds from the Americas.
Description
The order is divided into three suborders, Tyranni (non-singing, Americas), Passeri (songbirds), and the basal New Zealand wrens. Oscines have the best control of their syrinx muscles among birds, producing a wide range of songs and other vocalizations, though some of them, such as the crows, do not sound musical to human beings. Some, such as the lyrebird, are accomplished mimics. The New Zealand wrens are tiny birds restricted to New Zealand, at least in modern times; they were long placed in Passeri.
Most passerines are smaller than typical members of other avian orders. The heaviest and altogether largest passerines are the thick-billed raven and the larger races of common raven, each exceeding and . The superb lyrebird and some birds-of-paradise, due to very long tails or tail coverts, are longer overall. The smallest passerine is the short-tailed pygmy tyrant, at and .
Anatomy
The foot of a passerine has three toes directed forward and one toe directed backward, an arrangement called anisodactyly. The hind toe (hallux) is long and joins the leg at approximately the same level as the front toes. This arrangement enables passerine birds to easily perch upright on branches. The toes have no webbing or joining, but in some cotingas, the second and third toes are united at their basal third.
The leg of passerine birds contains an additional special adaptation for perching. A tendon in the rear of the leg running from the underside of the toes to the muscle behind the tibiotarsus will automatically be pulled and tighten when the leg bends, causing the foot to curl and become stiff when the bird lands on a branch. This enables passerines to sleep while perching without falling off.
Most passerine birds have 12 tail feathers, but the superb lyrebird has 16, and several spinetails in the family Furnariidae have 10, 8, or even 6, as is the case of Des Murs's wiretail. Species adapted to tree-trunk climbing, such as treecreepers and woodcreepers, have stiff tail feathers that are used as props during climbing. Extremely long tails used as sexual ornaments are shown by species in different families. A well-known example is the long-tailed widowbird.
Eggs and nests
The chicks of passerines are altricial: blind, featherless, and helpless when hatched from their eggs. Hence, the chicks require extensive parental care. Most passerines lay colored eggs, in contrast with nonpasserines, most of whose eggs are white except in some ground-nesting groups such as Charadriiformes and nightjars, where camouflage is necessary, and in some parasitic cuckoos, which match the passerine host's egg. The vinous-throated parrotbill has two egg colors, white and blue, to deter the brood parasitic common cuckoo.
Clutches vary considerably in size: some larger passerines of Australia such as lyrebirds and scrub-robins lay only a single egg, most smaller passerines in warmer climates lay between two and five, while in the higher latitudes of the Northern Hemisphere, hole-nesting species like tits can lay up to a dozen and other species around five or six.
The family Viduidae do not build their own nests, instead, they lay eggs in other birds' nests.
The Passeriformes contain several groups of brood parasites such as the viduas, cuckoo-finches, and the cowbirds.
Origin and evolution
The evolutionary history of the passerine families and the relationships among them remained rather mysterious until the late 20th century. In many cases, passerine families were grouped together on the basis of morphological similarities that, it is now believed, are the result of convergent evolution, not a close genetic relationship. For example, the wrens of the Americas and Eurasia, those of Australia, and those of New Zealand look superficially similar and behave in similar ways, yet belong to three far-flung branches of the passerine family tree; they are as unrelated as it is possible to be while remaining Passeriformes.
Advances in molecular biology and improved paleobiogeographical data gradually are revealing a clearer picture of passerine origins and evolution that reconciles molecular affinities, the constraints of morphology, and the specifics of the fossil record. The first passerines are now thought to have evolved in the Southern Hemisphere in the late Paleocene or early Eocene, around 50 million years ago.
The initial diversification of passerines coincides with the separation of the southern continents in the early Eocene. The New Zealand wrens were the first to become isolated, in Zealandia, and the second split involved the origin of the Tyranni in South America and the Passeri on the Australian continent. The Passeri experienced a great radiation of forms in Australia. A major branch of the Passeri, the parvorder Passerida, dispersed into Eurasia and Africa about 40 million years ago, where they experienced further radiation of new lineages. This eventually led to three major Passerida lineages comprising about 4,000 species, which in addition to the Corvida and numerous minor lineages make up songbird diversity today. Extensive biogeographical mixing has occurred since, with northern forms returning to the south, southern forms moving north, and so on.
Fossil record
Earliest passerines
Perching bird osteology, especially of the limb bones, is rather diagnostic. However, the early fossil record is poor because passerines are relatively small, and their delicate bones do not preserve well. Queensland Museum specimens F20688 (carpometacarpus) and F24685 (tibiotarsus) from Murgon, Queensland, are fossil bone fragments initially assigned to Passeriformes. However, the material is too fragmentary and their affinities have been questioned. Several more recent fossils from the Oligocene of Europe, such as Wieslochia, Jamna, Resoviaornis, and Crosnoornis, are more complete and definitely represent early passeriforms, and have been found to belong to a variety of modern and extinct lineages.
From the Bathans Formation at the Manuherikia River in Otago, New Zealand, MNZ S42815 (a distal right tarsometatarsus of a tui-sized bird) and several bones of at least one species of saddleback-sized bird have recently been described. These date from the Early to Middle Miocene (Awamoan to Lillburnian, 19–16 mya).
Early European passerines
In Europe, perching birds are not too uncommon in the fossil record from the Oligocene onward, belonging to several lineages:
Wieslochia (Early Oligocene of Frauenweiler, Germany) – suboscine
Resoviaornis (Early Oligocene of Wola Rafałowska, Poland) – oscine
Jamna (Early Oligocene of Jamna Dolna, Poland) – basal
Winnicavis (Early Oligocene of Lower Silesian Voivodeship, Poland)
Crosnoornis (Early Oligocene of Poland) – suboscine
Passeriformes gen. et sp. indet. (Early Oligocene of Luberon, France) – suboscine or basal
Passeriformes gen. et spp. indet. (Late Oligocene of France) – several suboscine and oscine taxa
Passeriformes gen. et spp. indet. (Middle Miocene of France and Germany) – basal?
Passeriformes gen. et spp. indet. (Sajóvölgyi Middle Miocene of Mátraszőlős, Hungary) – at least 2 taxa, possibly 3; at least one probably Oscines.
Passeriformes gen. et sp. indet. (Middle Miocene of Felsőtárkány, Hungary) – oscine?
Passeriformes gen. et sp. indet. (Late Miocene of Polgárdi, Hungary) – Sylvioidea (Sylviidae? Cettiidae?)
That suboscines expanded much beyond their region of origin is proven by several fossils from Germany such as a presumed broadbill (Eurylaimidae) humerus fragment from the Early Miocene (roughly 20 mya) of Wintershof, Germany, the Late Oligocene carpometacarpus from France listed above, and Wieslochia, among others. Extant Passeri super-families were quite distinct by that time and are known since about 12–13 mya when modern genera were present in the corvoidean and basal songbirds. The modern diversity of Passerida genera is known mostly from the Late Miocene onward and into the Pliocene (about 10–2 mya). Pleistocene and early Holocene lagerstätten (<1.8 mya) yield numerous extant species, and many yield almost nothing but extant species or their chronospecies and paleosubspecies.
American fossils
In the Americas, the fossil record is more scant before the Pleistocene, from which several still-existing families are documented. Apart from the indeterminable MACN-SC-1411 (Pinturas Early/Middle Miocene of Santa Cruz Province, Argentina), an extinct lineage of perching birds has been described from the Late Miocene of California, United States: the Palaeoscinidae with the single genus Palaeoscinis. "Palaeostruthus" eurius (Pliocene of Florida) probably belongs to an extant family, most likely passeroidean.
Systematics and taxonomy
The Passeriformes is currently divided into three suborders: Acanthisitti (New Zealand wrens), Tyranni (suboscines), and Passeri (oscines or songbirds). The Passeri is now subdivided into two major groups, recognized as Corvides and Passerida, respectively containing the large superfamilies Corvoidea and Meliphagoidea, as well as minor lineages, and the superfamilies Sylvioidea, Muscicapoidea, and Passeroidea, but this arrangement has been found to be oversimplified. Since the mid-2000s, studies have investigated the phylogeny of the Passeriformes and found that many families from Australasia traditionally included in the Corvoidea actually represent more basal lineages within oscines. Likewise, the traditional three-superfamily arrangement within the Passeri has turned out to be far more complex and will require changes in classification.
Major "wastebin" families such as the Old World warblers and Old World babblers have turned out to be paraphyletic and are being rearranged. Several taxa turned out to represent highly distinct lineages, so new families had to be established, some of these – like the stitchbird of New Zealand and the Eurasian bearded reedling – monotypic with only one living species. In the Passeri alone, a number of minor lineages will eventually be recognized as distinct superfamilies. For example, the kinglets constitute a single genus with less than 10 species today but seem to have been among the first perching bird lineages to diverge as the group spread across Eurasia. No particularly close relatives of theirs have been found among comprehensive studies of the living Passeri, though they might be fairly close to some little-studied tropical Asian groups. Nuthatches, wrens, and their closest relatives are currently grouped in a distinct super-family Certhioidea.
Taxonomic list of Passeriformes families
This list is in taxonomic order, placing related families next to one another. The families listed are those recognised by the International Ornithologists' Union (IOC). The order and the division into infraorders, parvorders, and superfamilies follows the phylogenetic analysis published by Carl Oliveros and colleagues in 2019. The relationships between the families in the suborder Tyranni (suboscines) were all well determined but some of the nodes in Passeri (oscines or songbirds) were unclear owing to the rapid splitting of the lineages.
Suborder Acanthisitti
Acanthisittidae: New Zealand wrens
Suborder Tyranni (suboscines)
Infraorder Eurylaimides: Old World suboscines
Infraorder Tyrannides: New World suboscines
Parvorder Furnariida
Parvorder Tyrannida
Suborder Passeri (oscines or songbirds)
Atrichornithidae: scrub-birds
Menuridae: lyrebirds
Climacteridae: Australian treecreepers
Ptilonorhynchidae: bowerbirds
Pomatostomidae: pseudo-babblers
Orthonychidae: logrunners
Superfamily Meliphagoidea
Acanthizidae: scrubwrens, thornbills, and gerygones
Meliphagidae: honeyeaters
Maluridae: fairywrens, emu-wrens and grasswrens
Dasyornithidae: bristlebirds
Pardalotidae: pardalotes
Infraorder Corvides – previously known as the parvorder Corvida
Cinclosomatidae: jewel-babblers, quail-thrushes
Campephagidae: cuckooshrikes and trillers
Mohouidae: whiteheads
Neosittidae: sittellas
Superfamily Orioloidea
Psophodidae: whipbirds
Eulacestomatidae: wattled ploughbill
Falcunculidae: shriketit
Oreoicidae: Australo-Papuan bellbirds
Paramythiidae: painted berrypeckers
Vireonidae: vireos
Pachycephalidae: whistlers
Oriolidae: Old World orioles and figbirds
Superfamily Malaconotoidea
Machaerirhynchidae: boatbills
Artamidae: woodswallows, butcherbirds, currawongs, and Australian magpie
Rhagologidae: mottled berryhunter
Malaconotidae: puffback shrikes, bush shrikes, tchagras, and boubous
Pityriaseidae: bristlehead
Aegithinidae: ioras
Platysteiridae: wattle-eyes and batises
Vangidae: vangas
Superfamily Corvoidea
Rhipiduridae: fantails
Dicruridae: drongos
Monarchidae: monarch flycatchers
Ifritidae: blue-capped ifrit
Paradisaeidae: birds-of-paradise
Corcoracidae: white-winged chough and apostlebird
Melampittidae: melampittas
Laniidae: shrikes
Platylophidae: jayshrike
Corvidae: crows, ravens, and jays
Infraorder Passerides – previously known as the parvorder Passerida
Cnemophilidae: satinbirds
Melanocharitidae: berrypeckers and longbills
Callaeidae: New Zealand wattlebirds
Notiomystidae: stitchbird
Petroicidae: Australian robins
Eupetidae: rail-babbler
Picathartidae: rockfowl
Chaetopidae: rock-jumpers
Parvorder Sylviida – previously known as the superfamily Sylvioidea
Hyliotidae: hyliotas
Stenostiridae: fairy flycatchers
Paridae: tits, chickadees and titmice
Remizidae: penduline tits
Panuridae: bearded reedling
Alaudidae: larks
Nicatoridae: nicators
Macrosphenidae: crombecs and African warblers
Cisticolidae: cisticolas and allies
Superfamily Locustelloidea
Acrocephalidae: reed warblers, Grauer's warbler and allies
Locustellidae: grassbirds and allies
Donacobiidae: black-capped donacobius
Bernieridae: Malagasy warblers
—
Pnoepygidae: wren-babblers
Hirundinidae: swallows and martins
Superfamily Sylvioidea
Pycnonotidae: bulbuls
Sylviidae: sylviid babblers
Paradoxornithidae: parrotbills and myzornis
Zosteropidae: white-eyes
Timaliidae: tree babblers
Leiothrichidae: laughingthrushes and allies
Alcippeidae: Alcippe fulvettas
Pellorneidae: ground babblers
Superfamily Aegithaloidea
Phylloscopidae: leaf-warblers and allies
Hyliidae: hylias
Aegithalidae: long-tailed tits or bushtits
Scotocercidae: streaked scrub warbler
Cettiidae: Cettia bush warblers and allies
Erythrocercidae: yellow flycatchers
Parvorder Muscicapida – previously known as the superfamily Muscicapoidea
Superfamily Bombycilloidea
Dulidae: palmchat
Bombycillidae: waxwings
Ptiliogonatidae: silky flycatchers
Hylocitreidae: hylocitrea
Hypocoliidae: hypocolius
†Mohoidae: oos
Superfamily Muscicapoidea
Elachuridae: spotted elachura
Cinclidae: dippers
Muscicapidae: Old World flycatchers and chats
Turdidae: thrushes and allies
Buphagidae: oxpeckers
Sturnidae: starlings and rhabdornis
Mimidae: mockingbirds and thrashers
—
Regulidae: goldcrests and kinglets
Superfamily Certhioidea
Tichodromidae: wallcreeper
Sittidae: nuthatches
Certhiidae: treecreepers
Polioptilidae: gnatcatchers
Troglodytidae: wrens
Parvorder Passerida – previously known as the superfamily Passeroidea
Promeropidae: sugarbirds
Modulatricidae: dapple-throat and allies
Nectariniidae: sunbirds
Dicaeidae: flowerpeckers
Chloropseidae: leafbirds
Irenidae: fairy-bluebirds
Peucedramidae: olive warbler
Urocynchramidae: Przewalski's finch
Ploceidae: weavers
Viduidae: indigobirds and whydahs
Estrildidae: waxbills, munias and allies
Prunellidae: accentors
Passeridae: Old World sparrows and snowfinches
Motacillidae: wagtails and pipits
Fringillidae: finches and euphonias
Superfamily Emberizoidea – previously known as the New World nine-primaried oscines
Rhodinocichlidae: rosy thrush-tanager
Calcariidae: longspurs and snow buntings
Emberizidae: buntings
Cardinalidae: cardinals
Mitrospingidae: mitrospingid tanagers
Thraupidae: tanagers and allies
Passerellidae: New World sparrows, bush tanagers
Parulidae: New World warblers
Icteriidae: yellow-breasted chat
Icteridae: grackles, New World blackbirds, and New World orioles
Calyptophilidae: chat-tanagers
Zeledoniidae: wrenthrush
Teretistridae: Cuban warblers
Nesospingidae: Puerto Rican tanager
Spindalidae: spindalises
Phaenicophilidae: Hispaniolan tanagers
Phylogeny
Relationships between living Passeriformes families based on the phylogenetic analysis of Oliveros et al. (2019). Some terminals have been renamed to reflect families recognised by the IOC but not in that study. The IOC families Alcippeidae and Teretistridae were not sampled in this study.
Phalangeriformes
Phalangeriformes is a paraphyletic suborder of about 70 species of small to medium-sized arboreal marsupials native to Australia, New Guinea, and Sulawesi. The species are commonly known as possums, opossums, gliders, and cuscus. The common name "(o)possum" for various Phalangeriformes species derives from the creatures' resemblance to the opossums of the Americas (the term comes from Powhatan language aposoum "white animal", from Proto-Algonquian *wa·p-aʔɬemwa "white dog"). However, although opossums are also marsupials, Australasian possums are more closely related to other Australasian marsupials such as kangaroos.
Phalangeriformes are quadrupedal diprotodont marsupials with long tails. The smallest species, indeed the smallest diprotodont marsupial, is the Tasmanian pygmy possum, with an adult head-body length of and a weight of . The largest are the two species of bear cuscus, which may exceed . Phalangeriformes species are typically nocturnal and at least partially arboreal. They inhabit most vegetated habitats, and several species have adjusted well to urban settings. Diets range from generalist herbivores or omnivores (the common brushtail possum) to specialist browsers of eucalyptus (greater glider), insectivores (mountain pygmy possum) and nectar-feeders (honey possum).
Classification
About two-thirds of Australian marsupials belong to the order Diprotodontia, which is split into three suborders: the Vombatiformes (wombats and the koala, four species in total); the large and diverse Phalangeriformes (the possums and gliders); and the Macropodiformes (kangaroos, potoroos, wallabies, and the musky rat-kangaroo). This classification is based on Ruedas & Morales (2005). However, Phalangeriformes has been recovered as paraphyletic with respect to Macropodiformes, rendering the latter a subset of the former if Phalangeriformes is to be considered a natural group.
Suborder Phalangeriformes: possums, gliders and allies
Superfamily Phalangeroidea
Family †Ektopodontidae:
Genus †Ektopodon
†Ektopodon serratus
†Ektopodon stirtoni
†Ektopodon ulta
Family Burramyidae: (pygmy possums)
Genus Burramys
Mountain pygmy possum, B. parvus
Genus Cercartetus
Long-tailed pygmy possum, C. caudatus
Southwestern pygmy possum, C. concinnus
Tasmanian pygmy possum, C. lepidus
Eastern pygmy possum, C. nanus
Family Phalangeridae: (brushtail possums and cuscuses)
Subfamily Ailuropinae
Genus Ailurops
Talaud bear cuscus, A. melanotis
Sulawesi bear cuscus, A. ursinus
Genus Strigocuscus
Sulawesi dwarf cuscus, S. celebensis
Banggai cuscus, S. pelegensis
Subfamily Phalangerinae
Tribe Phalangerini
Genus Phalanger
Gebe cuscus, P. alexandrae
Mountain cuscus, P. carmelitae
Ground cuscus, P. gymnotis
Eastern common cuscus, P. intercastellanus
Woodlark cuscus, P. lullulae
Blue-eyed cuscus, P. matabiru
Telefomin cuscus, P. matanim
Southern common cuscus, P. mimicus
Northern common cuscus, P. orientalis
Ornate cuscus, P. ornatus
Rothschild's cuscus, P. rothschildi
Silky cuscus, P. sericeus
Stein's cuscus, P. vestitus
Genus Spilocuscus
Admiralty Island cuscus, S. kraemeri
Common spotted cuscus, S. maculatus
Waigeou cuscus, S. papuensis
Black-spotted cuscus, S. rufoniger
Blue-eyed spotted cuscus, S. wilsoni
Tribe Trichosurini
Genus Trichosurus
Northern brushtail possum, T. arnhemensis
Short-eared possum, T. caninus
Mountain brushtail possum, T. cunninghami
Coppery brushtail possum, T. johnstonii
Common brushtail possum, T. vulpecula
Genus Wyulda
Scaly-tailed possum, W. squamicaudata
Superfamily Petauroidea
Family Pseudocheiridae: (ring-tailed possums and allies)
Subfamily Hemibelideinae
Genus Hemibelideus
Lemur-like ringtail possum, H. lemuroides
Genus Petauroides
Central greater glider, P. armillatus
Northern greater glider, P. minor
Southern greater glider, P. volans
Subfamily Pseudocheirinae
Genus Petropseudes
Rock-haunting ringtail possum, P. dahli
Genus Pseudocheirus
Common ringtail possum, P. peregrinus
Genus Pseudochirulus
Lowland ringtail possum, P. canescens
Weyland ringtail possum, P. caroli
Cinereus ringtail possum, P. cinereus
Painted ringtail possum, P. forbesi
Herbert River ringtail possum, P. herbertensis
Masked ringtail possum, P. larvatus
Pygmy ringtail possum, P. mayeri
Vogelkop ringtail possum, P. schlegeli
Subfamily Pseudochiropsinae
Genus Pseudochirops
D'Albertis' ringtail possum, P. albertisii
Green ringtail possum, P. archeri
Plush-coated ringtail possum, P. corinnae
Reclusive ringtail possum, P. coronatus
Coppery ringtail possum, P. cupreus
Family Petauridae: (striped possum, Leadbeater's possum, yellow-bellied glider, sugar glider, mahogany glider, squirrel glider)
Genus Dactylopsila
Great-tailed triok, D. megalura
Long-fingered triok, D. palpator
Tate's triok, D. tatei
Striped possum, D. trivirgata
Genus Gymnobelideus
Leadbeater's possum, G. leadbeateri
Genus Petaurus
Northern glider, P. abidi
Savanna glider, P. ariel
Yellow-bellied glider, P. australis
Biak glider, P. biacensis
Sugar glider, P. breviceps
Mahogany glider, P. gracilis
Squirrel glider, P. norfolcensis
Krefft's glider, P. notatus
Family Tarsipedidae: (honey possum)
Genus Tarsipes
Honey possum or noolbenger, T. rostratus
Family Acrobatidae: (feathertail glider and feather-tailed possum)
Genus Acrobates
Feathertail glider, A. pygmaeus
Genus Distoechurus
Feather-tailed possum, D. pennatus
Diatom
A diatom (Neo-Latin diatoma) is any member of a large group comprising several genera of algae, specifically microalgae, found in the oceans, waterways and soils of the world. Living diatoms make up a significant portion of the Earth's biomass: they generate about 20 to 50 percent of the oxygen produced on the planet each year, take in over 6.7 billion tonnes of silicon each year from the waters in which they live, and constitute nearly half of the organic material found in the oceans. The shells of dead diatoms can reach as much as a half-mile (800 m) deep on the ocean floor, and the entire Amazon basin is fertilized annually by 27 million tons of diatom shell dust transported by transatlantic winds from the African Sahara, much of it from the Bodélé Depression, which was once made up of a system of fresh-water lakes.
Diatoms are unicellular organisms: they occur either as solitary cells or in colonies, which can take the shape of ribbons, fans, zigzags, or stars. Individual cells range in size from 2 to 2000 micrometers. In the presence of adequate nutrients and sunlight, an assemblage of living diatoms doubles approximately every 24 hours by asexual multiple fission; the maximum life span of individual cells is about six days. Diatoms have two distinct shapes: a few (centric diatoms) are radially symmetric, while most (pennate diatoms) are broadly bilaterally symmetric.
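As a back-of-the-envelope illustration of the doubling rate quoted above, the short Python sketch below compounds one doubling per day. The starting population of 1,000 cells is an arbitrary assumption, not a figure from the article, and real assemblages are of course checked by nutrients, light, and grazing.

```python
# Illustrative only: idealised growth of a diatom assemblage that
# doubles once every 24 hours. The starting count is an assumed value.

def population_after(days: int, initial_cells: float = 1_000) -> float:
    """Return the cell count after `days` of uninterrupted daily doubling."""
    return initial_cells * 2 ** days

for d in (1, 7, 14):
    print(f"day {d:2d}: {population_after(d):,.0f} cells")
# day  1: 2,000 cells
# day  7: 128,000 cells
# day 14: 16,384,000 cells
```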
The unique feature of diatoms is that they are surrounded by a cell wall made of silica (hydrated silicon dioxide), called a frustule. These frustules produce structural coloration, prompting them to be described as "jewels of the sea" and "living opals".
Movement in diatoms primarily occurs passively as a result of both ocean currents and wind-induced water turbulence; however, male gametes of centric diatoms have flagella, permitting active movement to seek female gametes. Similar to plants, diatoms convert light energy to chemical energy by photosynthesis, but their chloroplasts were acquired in different ways.
Unusually for autotrophic organisms, diatoms possess a urea cycle, a feature that they share with animals, although this cycle is used to different metabolic ends in diatoms. The family Rhopalodiaceae also possess a cyanobacterial endosymbiont called a spheroid body. This endosymbiont has lost its photosynthetic properties, but has kept its ability to perform nitrogen fixation, allowing the diatom to fix atmospheric nitrogen. Other diatoms in symbiosis with nitrogen-fixing cyanobacteria are among the genera Hemiaulus, Rhizosolenia and Chaetoceros.
Dinotoms are diatoms that have become endosymbionts inside dinoflagellates. Research on the dinoflagellates Durinskia baltica and Glenodinium foliaceum has shown that the endosymbiont event happened so recently, evolutionarily speaking, that their organelles and genome are still intact with minimal to no gene loss. The main difference between these and free living diatoms is that they have lost their cell wall of silica, making them the only known shell-less diatoms.
The study of diatoms is a branch of phycology. Diatoms are classified as eukaryotes, organisms with a nuclear envelope-bound cell nucleus, that separates them from the prokaryotes archaea and bacteria. Diatoms are a type of plankton called phytoplankton, the most common of the plankton types. Diatoms also grow attached to benthic substrates, floating debris, and on macrophytes. They comprise an integral component of the periphyton community. Another classification divides plankton into eight types based on size: in this scheme, diatoms are classed as microalgae. Several systems for classifying the individual diatom species exist.
Fossil evidence suggests that diatoms originated during or before the early Jurassic period, which was about 150 to 200 million years ago. The oldest fossil evidence for diatoms is a specimen of extant genus Hemiaulus in Late Jurassic aged amber from Thailand.
Diatoms are used to monitor past and present environmental conditions, and are commonly used in studies of water quality. Diatomaceous earth (diatomite) is a collection of diatom shells found in the Earth's crust. They are soft, silica-containing sedimentary rocks which are easily crumbled into a fine powder and typically have a particle size of 10 to 200 μm. Diatomaceous earth is used for a variety of purposes including for water filtration, as a mild abrasive, in cat litter, and as a dynamite stabilizer.
Overview
Diatoms are protists that form massive annual spring and fall blooms in aquatic environments and are estimated to be responsible for about half of photosynthesis in the global oceans. This predictable annual bloom dynamic fuels higher trophic levels and initiates delivery of carbon into the deep ocean biome. Diatoms have complex life history strategies that are presumed to have contributed to their rapid genetic diversification into ~200,000 species that are distributed between the two major diatom groups: centrics and pennates.
Morphology
Diatoms are generally 20 to 200 micrometers in size, with a few larger species. Their yellowish-brown chloroplasts, the site of photosynthesis, are typical of heterokonts, having four cell membranes and containing pigments such as the carotenoid fucoxanthin. Individuals usually lack flagella, but they are present in male gametes of the centric diatoms and have the usual heterokont structure, including the hairs (mastigonemes) characteristic in other groups.
Diatoms are often referred to as "jewels of the sea" or "living opals" due to their optical properties. The biological function of this structural coloration is not clear, but it is speculated that it may be related to communication, camouflage, thermal exchange, and/or UV protection.
Diatoms build intricate hard but porous cell walls called frustules composed primarily of silica. This siliceous wall can be highly patterned with a variety of pores, ribs, minute spines, marginal ridges and elevations; all of which can be used to delineate genera and species.
The cell itself consists of two halves, each containing an essentially flat plate, or valve, and a marginal connecting band, or girdle band. One half, the hypotheca, is slightly smaller than the other half, the epitheca. Diatom morphology varies. Although the shape of the cell is typically circular, some cells may be triangular, square, or elliptical. Their distinguishing feature is a hard mineral shell or frustule composed of opal (hydrated, polymerized silicic acid).
Diatoms are divided into two groups that are distinguished by the shape of the frustule: the centric diatoms and the pennate diatoms.
Pennate diatoms are bilaterally symmetric. Each of their valves has openings that are slits along the raphes, and their shells are typically elongated parallel to these raphes. They generate cell movement through cytoplasm that streams along the raphes, always moving along solid surfaces.
Centric diatoms are radially symmetric. They are composed of upper and lower valves – epitheca and hypotheca – each consisting of a valve and a girdle band that can easily slide underneath each other and expand to increase cell content over the diatom's progression. The cytoplasm of the centric diatom is located along the inner surface of the shell and provides a hollow lining around the large vacuole located in the center of the cell. This large, central vacuole is filled by a fluid known as "cell sap", which is similar to seawater but varies with specific ion content. The cytoplasmic layer is home to several organelles, such as the chloroplasts and mitochondria. Before the centric diatom begins to expand, its nucleus is at the center of one of the valves, and it moves towards the center of the cytoplasmic layer before division is complete. Centric diatoms have a variety of shapes and sizes, depending on from which axis the shell extends and on whether spines are present.
Silicification
Diatom cells are contained within a unique silica cell wall known as a frustule, made up of two valves called thecae that typically overlap one another. The biogenic silica composing the cell wall is synthesised intracellularly by the polymerisation of silicic acid monomers. This material is then extruded to the cell exterior and added to the wall. In most species, when a diatom divides to produce two daughter cells, each cell keeps one of the two halves and grows a smaller half within it. As a result, after each division cycle, the average size of diatom cells in the population gets smaller. Once such cells reach a certain minimum size, rather than simply divide, they reverse this decline by forming an auxospore, usually through meiosis and sexual reproduction, though exceptions exist. The auxospore expands in size to give rise to a much larger cell, which then returns to size-diminishing divisions.
The exact mechanism of transferring silica absorbed by the diatom to the cell wall is unknown. Much of the sequencing of diatom genes comes from the search for the mechanism of silica uptake and deposition in nano-scale patterns in the frustule. The most success in this area has come from two species: Thalassiosira pseudonana, which has become the model species, as the whole genome was sequenced and methods for genetic control were established, and Cylindrotheca fusiformis, in which the important silica deposition proteins, silaffins, were first discovered. Silaffins, sets of polycationic peptides, were found in C. fusiformis cell walls and can generate intricate silica structures. These structures demonstrated pores of sizes characteristic of diatom patterns. When T. pseudonana underwent genome analysis, it was found to encode a urea cycle, including a higher number of polyamines than most genomes, as well as three distinct silica transport genes. In a phylogenetic study of silica transport genes from eight diverse groups of diatoms, silica transporters were generally found to group according to species. This study also found structural differences between the silica transporters of pennate (bilaterally symmetric) and centric (radially symmetric) diatoms. The sequences compared in this study were used to create a diverse background in order to identify residues that differentiate function in the silica deposition process. Additionally, the same study found that a number of the regions were conserved within species, likely representing the base structure of silica transport.
These silica transport proteins are unique to diatoms, with no homologs found in other species, such as sponges or rice. The divergence of these silica transport genes is also indicative of the structure of the protein evolving from two repeated units composed of five membrane-bound segments, which indicates either gene duplication or dimerization. The silica deposition that takes place from the membrane-bound vesicle in diatoms has been hypothesized to be a result of the activity of silaffins and long-chain polyamines. This silica deposition vesicle (SDV) has been characterized as an acidic compartment fused with Golgi-derived vesicles. These two protein structures have been shown to create sheets of patterned silica in vivo with irregular pores on the scale of diatom frustules. One hypothesis as to how these proteins work to create complex structure is that residues are conserved within the SDVs, though this is difficult to identify or observe due to the limited number of diverse sequences available. Though the exact mechanism of the highly uniform deposition of silica is as yet unknown, the Thalassiosira pseudonana genes linked to silaffins are being looked to as targets for genetic control of nanoscale silica deposition.
The ability of diatoms to make silica-based cell walls has been the subject of fascination for centuries. It started with a microscopic observation by an anonymous English country nobleman in 1703, who observed an object that looked like a chain of regular parallelograms and debated whether it was just crystals of salt, or a plant. The viewer decided that it was a plant because the parallelograms didn't separate upon agitation, nor did they vary in appearance when dried or subjected to warm water (in an attempt to dissolve the "salt"). Unknowingly, the viewer's confusion captured the essence of diatoms—mineral-utilizing plants. It is not clear when it was determined that diatom cell walls are made of silica, but in 1939 a seminal reference characterized the material as silicic acid in a "subcolloidal" state. Identification of the main chemical component of the cell wall spurred investigations into how it is made. These investigations have involved, and been propelled by, diverse approaches including microscopy, chemistry, biochemistry, material characterisation, molecular biology, 'omics, and transgenic approaches. The results from this work have given a better understanding of cell wall formation processes, establishing fundamental knowledge which can be used to create models that contextualise current findings and clarify how the process works.
The process of building a mineral-based cell wall inside the cell, then exporting it outside, is a massive event that must involve large numbers of genes and their protein products. The act of building and exocytosing this large structural object in a short time period, synched with cell cycle progression, necessitates substantial physical movements within the cell as well as dedication of a significant proportion of the cell's biosynthetic capacities.
The first characterisations of the biochemical processes and components involved in diatom silicification were made in the late 1990s. These were followed by insights into how higher order assembly of silica structures might occur. More recent reports describe the identification of novel components involved in higher order processes, the dynamics documented through real-time imaging, and the genetic manipulation of silica structure. The approaches established in these recent works provide practical avenues to not only identify the components involved in silica cell wall formation but to elucidate their interactions and spatio-temporal dynamics. This type of holistic understanding will be necessary to achieve a more complete understanding of cell wall synthesis.
Behaviour
Most centric and araphid pennate diatoms are nonmotile, and their relatively dense cell walls cause them to readily sink. Planktonic forms in open water usually rely on turbulent mixing of the upper layers of the oceanic waters by the wind to keep them suspended in sunlit surface waters. Many planktonic diatoms have also evolved features that slow their sinking rate, such as spines or the ability to grow in colonial chains. These adaptations increase their surface area to volume ratio and drag, allowing them to stay suspended in the water column longer. Individual cells may regulate buoyancy via an ionic pump.
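The sinking behaviour described above can be roughed out with Stokes' law for a small sphere, v = 2(ρ_cell − ρ_fluid)gr²/(9μ). This is an idealisation not given in the article – real frustules, spines, and chains are far from spherical – and every parameter value in the sketch below is an assumed example number, not a measurement.

```python
# Idealised Stokes' law sinking speed for a small sphere.
# All numeric values are illustrative assumptions, not measurements.

G = 9.81            # gravitational acceleration, m s^-2
MU = 1.07e-3        # assumed dynamic viscosity of seawater, Pa s
RHO_FLUID = 1025.0  # assumed seawater density, kg m^-3

def stokes_velocity(radius_m: float, rho_cell: float = 1100.0) -> float:
    """Terminal sinking speed (m/s) of a sphere of the given radius and density."""
    return 2 * (rho_cell - RHO_FLUID) * G * radius_m ** 2 / (9 * MU)

for r_um in (50, 25):
    v = stokes_velocity(r_um * 1e-6)
    print(f"radius {r_um} um: about {v * 86400:.1f} m/day")
# radius 50 um: about 33.0 m/day
# radius 25 um: about 8.3 m/day
```

Because the speed scales with the square of the radius, halving the effective radius cuts sinking fourfold, which is one way to see why drag-increasing spines, colonial chains, and a high surface-area-to-volume ratio prolong suspension.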
Some pennate diatoms are capable of a type of locomotion called "gliding", which allows them to move across surfaces via adhesive mucilage secreted through a seamlike structure called the raphe. In order for a diatom cell to glide, it must have a solid substrate for the mucilage to adhere to.
Cells are solitary or united into colonies of various kinds, which may be linked by siliceous structures; mucilage pads, stalks or tubes; amorphous masses of mucilage; or by threads of chitin (polysaccharide), which are secreted through strutted processes of the cell.
Life cycle
Reproduction and cell size
Reproduction among these organisms is asexual, by binary fission, during which the diatom divides into two parts, producing two "new" diatoms with identical genes. Each new organism receives one of the two frustules – one larger, the other smaller – possessed by the parent; the inherited frustule serves as the epitheca and is used to construct a second, smaller frustule, the hypotheca. The diatom that received the larger frustule becomes the same size as its parent, but the diatom that received the smaller frustule remains smaller than its parent. This causes the average cell size of the population to decrease. It has been observed, however, that certain taxa can divide without causing a reduction in cell size. Nonetheless, to restore the cell size of a diatom population that does undergo size reduction, sexual reproduction and auxospore formation must occur.
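A minimal simulation makes this size-reduction dynamic concrete. In the sketch below, sizes are in arbitrary units and the per-division decrement is a made-up constant; the only point is the qualitative rule that one daughter keeps the parental size while the other shrinks.

```python
# Minimal sketch of diatom size reduction under binary fission.
# Size units and the shrinkage decrement are arbitrary illustrations.

def divide(sizes: list[float], decrement: float = 1.0) -> list[float]:
    """Each cell yields one daughter of the same size (built on the larger
    valve) and one smaller daughter (built inside the smaller valve)."""
    next_gen = []
    for s in sizes:
        next_gen.append(s)              # daughter inheriting the larger valve
        next_gen.append(s - decrement)  # daughter inheriting the smaller valve
    return next_gen

population = [100.0]  # one founding cell
for _ in range(5):
    population = divide(population)

mean = sum(population) / len(population)
print(f"cells: {len(population)}, mean size: {mean:.1f}")
# cells: 32, mean size: 97.5 – the mean drifts downward each generation,
# until auxospore formation restores maximal size.
```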
Cell division
Vegetative cells of diatoms are diploid (2N) and so meiosis can take place, producing male and female gametes which then fuse to form the zygote. The zygote sheds its silica theca and grows into a large sphere covered by an organic membrane, the auxospore. A new diatom cell of maximum size, the initial cell, forms within the auxospore thus beginning a new generation. Resting spores may also be formed as a response to unfavourable environmental conditions with germination occurring when conditions improve.
A defining characteristic of all diatoms is their restrictive and bipartite silica cell wall that causes them to progressively shrink during asexual cell division. At a critically small cell size and under certain conditions, auxosporulation restitutes cell size and prevents clonal death. The entire lifecycles of only a few diatoms have been described and rarely have sexual events been captured in the environment.
Sexual reproduction
Most eukaryotes are capable of sexual reproduction involving meiosis. Sexual reproduction appears to be an obligatory phase in the life cycle of diatoms, particularly as cell size decreases with successive vegetative divisions. Sexual reproduction involves production of gametes and the fusion of gametes to form a zygote in which maximal cell size is restored. The signaling that triggers the sexual phase is favored when cells accumulate together, so that the distance between them is reduced and the contacts and/or the perception of chemical cues is facilitated.
An exploration of the genomes of five diatoms and one diatom transcriptome led to the identification of 42 genes potentially involved in meiosis. Thus a meiotic toolkit appears to be conserved in these six diatom species, indicating a central role of meiosis in diatoms as in other eukaryotes.
Sperm motility
Diatoms are mostly non-motile; however, sperm found in some species can be flagellated, though motility is usually limited to a gliding motion. In centric diatoms, the small male gametes have one flagellum while the female gametes are large and non-motile (oogamous). Conversely, in pennate diatoms both gametes lack flagella (isogamous). Certain araphid species, that is pennate diatoms without a raphe (seam), have been documented as anisogamous and are, therefore, considered to represent a transitional stage between centric and raphid pennate diatoms, diatoms with a raphe.
Degradation by microbes
Certain species of bacteria in oceans and lakes can accelerate the rate of dissolution of silica in dead and living diatoms by using hydrolytic enzymes to break down the organic algal material.
Ecology
Distribution
Diatoms are a widespread group and can be found in the oceans, in fresh water, in soils, and on damp surfaces. They are one of the dominant components of phytoplankton in nutrient-rich coastal waters and during oceanic spring blooms, since they can divide more rapidly than other groups of phytoplankton. Most live pelagically in open water, although some live as surface films at the water-sediment interface (benthic), or even under damp atmospheric conditions. They are especially important in oceans, where a 2003 study found that they contribute an estimated 45% of the total oceanic primary production of organic material. However, a more recent 2016 study estimates that the number is closer to 20%. Spatial distribution of marine phytoplankton species is restricted both horizontally and vertically.
Growth
Planktonic diatoms in freshwater and marine environments typically exhibit a "boom and bust" (or "bloom and bust") lifestyle. When conditions in the upper mixed layer (nutrients and light) are favourable (as at the spring), their competitive edge and rapid growth rate enables them to dominate phytoplankton communities ("boom" or "bloom"). As such they are often classed as opportunistic r-strategists (i.e. those organisms whose ecology is defined by a high growth rate, r).
Impact
The freshwater diatom Didymosphenia geminata, commonly known as Didymo, causes severe environmental degradation in water-courses where it blooms, producing large quantities of a brown jelly-like material called "brown snot" or "rock snot". This diatom is native to Europe and is an invasive species both in the antipodes and in parts of North America. The problem is most frequently recorded from Australia and New Zealand.
When conditions turn unfavourable, usually upon depletion of nutrients, diatom cells typically increase in sinking rate and exit the upper mixed layer ("bust"). This sinking is induced by a loss of buoyancy control, the synthesis of mucilage that sticks diatom cells together, or the production of heavy resting spores. Sinking out of the upper mixed layer removes diatoms from conditions unfavourable to growth, including grazer populations and higher temperatures (which would otherwise increase cell metabolism). Cells reaching deeper water or the shallow seafloor can then rest until conditions become more favourable again. In the open ocean, many sinking cells are lost to the deep, but refuge populations can persist near the thermocline.
Ultimately, diatom cells in these resting populations re-enter the upper mixed layer when vertical mixing entrains them. In most circumstances, this mixing also replenishes nutrients in the upper mixed layer, setting the scene for the next round of diatom blooms. In the open ocean (away from areas of continuous upwelling), this cycle of bloom, bust, then return to pre-bloom conditions typically occurs over an annual cycle, with diatoms only being prevalent during the spring and early summer. In some locations, however, an autumn bloom may occur, caused by the breakdown of summer stratification and the entrainment of nutrients while light levels are still sufficient for growth. Since vertical mixing is increasing, and light levels are falling as winter approaches, these blooms are smaller and shorter-lived than their spring equivalents.
In the open ocean, the diatom (spring) bloom is typically ended by a shortage of silicon. Unlike other minerals, the requirement for silicon is unique to diatoms and it is not regenerated in the plankton ecosystem as efficiently as, for instance, nitrogen or phosphorus nutrients. This can be seen in maps of surface nutrient concentrations – as nutrients decline along gradients, silicon is usually the first to be exhausted (followed normally by nitrogen then phosphorus).
Because of this bloom-and-bust cycle, diatoms are believed to play a disproportionately important role in the export of carbon from oceanic surface waters (see also the biological pump). Significantly, they also play a key role in the regulation of the biogeochemical cycle of silicon in the modern ocean.
Reason for success
Diatoms are ecologically successful, and occur in virtually every environment that contains water – not only oceans, seas, lakes, and streams, but also soil and wetlands. The use of silicon by diatoms is believed by many researchers to be the key to this ecological success. Raven (1983) noted that, relative to organic cell walls, silica frustules require less energy to synthesize (approximately 8% of a comparable organic wall), potentially a significant saving on the overall cell energy budget. In a now classic study, Egge and Aksnes (1992) found that diatom dominance of mesocosm communities was directly related to the availability of silicic acid – when concentrations were greater than 2 μmol m−3, they found that diatoms typically represented more than 70% of the phytoplankton community. Other researchers have suggested that the biogenic silica in diatom cell walls acts as an effective pH buffering agent, facilitating the conversion of bicarbonate to dissolved CO2 (which is more readily assimilated). More generally, notwithstanding these possible advantages conferred by their use of silicon, diatoms typically have higher growth rates than other algae of corresponding size.
Sources for collection
Diatoms can be obtained from multiple sources. Marine diatoms can be collected by direct water sampling, and benthic forms can be secured by scraping barnacles, oyster shells, and other shells. Diatoms are frequently present as a brown, slippery coating on submerged stones and sticks, and may be seen to "stream" with the river current. The surface mud of a pond, ditch, or lagoon will almost always yield some diatoms. Living diatoms are often found clinging in great numbers to filamentous algae, or forming gelatinous masses on various submerged plants. Cladophora is frequently covered with Cocconeis, an elliptically shaped diatom; Vaucheria is often covered with small forms. Since diatoms form an important part of the food of molluscs, tunicates, and fishes, the alimentary tracts of these animals often yield forms that are not easily secured in other ways. Diatoms can be made to emerge by filling a jar with water and mud, wrapping it in black paper and letting direct sunlight fall on the surface of the water. Within a day, the diatoms will come to the top in a scum and can be isolated.
Biogeochemistry
Silica cycle
The diagram shows the major fluxes of silicon in the current ocean. Most biogenic silica in the ocean (silica produced by biological activity) comes from diatoms. Diatoms extract dissolved silicic acid from surface waters as they grow, and return it to the water column when they die. Inputs of silicon arrive from above via aeolian dust, from the coasts via rivers, and from below via seafloor sediment recycling, weathering, and hydrothermal activity.
Although diatoms may have existed since the Triassic, the timing of their ascendancy and "take-over" of the silicon cycle occurred more recently. Prior to the Phanerozoic (before 544 Ma), it is believed that microbial or inorganic processes weakly regulated the ocean's silicon cycle. Subsequently, the cycle appears dominated (and more strongly regulated) by the radiolarians and siliceous sponges, the former as zooplankton, the latter as sedentary filter-feeders primarily on the continental shelves. Within the last 100 My, it is thought that the silicon cycle has come under even tighter control, and that this derives from the ecological ascendancy of the diatoms.
However, the precise timing of the "take-over" remains unclear, and different authors have conflicting interpretations of the fossil record. Some evidence, such as the displacement of siliceous sponges from the shelves, suggests that this takeover began in the Cretaceous (146 Ma to 66 Ma), while evidence from radiolarians suggests "take-over" did not begin until the Cenozoic (66 Ma to present).
Carbon cycle
The diagram depicts some mechanisms by which marine diatoms contribute to the biological carbon pump and influence the ocean carbon cycle. Anthropogenic CO2 emissions to the atmosphere (mainly generated by fossil fuel burning and deforestation) are nearly 11 gigatonnes of carbon (GtC) per year, of which almost 2.5 GtC is taken up by the surface ocean. In surface seawater (pH 8.1–8.4), bicarbonate (HCO3−) and carbonate (CO32−) ions constitute nearly 90% and <10% of dissolved inorganic carbon (DIC) respectively, while dissolved CO2 (CO2 aqueous) contributes <1%. Despite this low level of CO2 in the ocean and its slow diffusion rate in water, diatoms fix 10–20 GtC annually via photosynthesis thanks to their carbon dioxide concentrating mechanisms, allowing them to sustain marine food chains. In addition, 0.1–1% of this organic material produced in the euphotic layer sinks down as particles, thus transferring surface carbon toward the deep ocean and sequestering atmospheric CO2 for thousands of years or longer. The remaining organic matter is remineralized through respiration. Diatoms are thus one of the main players in this biological carbon pump, which is arguably the most important biological mechanism in the Earth system allowing CO2 to be removed from the carbon cycle for very long periods.
Urea cycle
A feature of diatoms is the urea cycle, which links them evolutionarily to animals. In 2011, Allen et al. established that diatoms have a functioning urea cycle. This result was significant, since prior to this, the urea cycle was thought to have originated with the metazoans, which appeared several hundred million years before the diatoms. Their study demonstrated that while diatoms and animals use the urea cycle for different ends, they are evolutionarily linked in a way that animals and plants are not.
While often overlooked in photosynthetic organisms, the mitochondria also play critical roles in energy balance. Two nitrogen-related pathways are relevant, and they may also change under ammonium (NH4+) nutrition compared with nitrate (NO3−) nutrition. First, in diatoms, and likely some other algae, there is a urea cycle. The long-known function of the urea cycle in animals is to excrete excess nitrogen produced by amino acid catabolism; like photorespiration, the urea cycle had long been considered a waste pathway. However, in diatoms the urea cycle appears to play a role in the exchange of nutrients between the mitochondria and the cytoplasm, and potentially the plastid, and may help to regulate ammonium metabolism. Because of this cycle, marine diatoms, in contrast to chlorophytes, have also acquired a mitochondrial urea transporter and, in fact, based on bioinformatics, a complete mitochondrial GS-GOGAT cycle has been hypothesised.
Other
Diatoms are mainly photosynthetic; however, a few are obligate heterotrophs and can live in the absence of light provided an appropriate organic carbon source is available.
Photosynthetic diatoms that find themselves in an environment lacking oxygen and/or sunlight can switch to an anaerobic form of respiration known as dissimilatory nitrate reduction to ammonium (DNRA), and can stay dormant for months to decades.
Major pigments of diatoms are chlorophylls a and c, beta-carotene, fucoxanthin, diatoxanthin and diadinoxanthin.
Taxonomy
Diatoms belong to a large group of protists, many of which contain plastids rich in chlorophylls a and c. The group has been variously referred to as heterokonts, chrysophytes, chromists or stramenopiles. It includes autotrophs, such as golden algae and kelp, and heterotrophs, such as water moulds, opalinids, and actinophryid heliozoa. The classification of this area of protists is still unsettled. In terms of rank, they have been treated as a division, phylum, kingdom, or something intermediate to those. Consequently, diatoms are ranked anywhere from a class, usually called Diatomophyceae or Bacillariophyceae, to a division (=phylum), usually called Bacillariophyta, with corresponding changes in the ranks of their subgroups.
Genera and species
An estimated 20,000 extant diatom species exist, of which around 12,000 have been named to date (Guiry, 2012; other sources give a wider range of estimates). Around 1,000–1,300 diatom genera have been described, both extant and fossil, of which some 250–300 exist only as fossils.
Classes and orders
For many years the diatoms—treated either as a class (Bacillariophyceae) or a phylum (Bacillariophyta)—were divided into just two orders, corresponding to the centric and the pennate diatoms (Centrales and Pennales). This classification was extensively overhauled by Round, Crawford and Mann in 1990, who treated the diatoms at a higher rank (division, corresponding to phylum in zoological classification) and promoted the major classification units to classes, maintaining the centric diatoms as a single class Coscinodiscophyceae but splitting the former pennate diatoms into two separate classes, Fragilariophyceae and Bacillariophyceae (the latter an older name retained with an emended definition), between them encompassing 45 orders, the majority of them new.
Today (writing at mid-2020) it is recognised that the 1990 system of Round et al. is in need of revision in light of newer molecular work; however, the best system to replace it is unclear. Current systems in widespread use, such as AlgaeBase, the World Register of Marine Species and its contributing database DiatomBase, and the system for "all life" represented in Ruggiero et al., 2015, all retain the Round et al. treatment as their basis, albeit with diatoms as a whole treated as a class rather than a division/phylum, and with Round et al.'s classes reduced to subclasses, for better agreement with the treatment of phylogenetically adjacent groups and their containing taxa. (For references, see the individual sections below.)
One proposal, by Linda Medlin and co-workers commencing in 2004, is for some of the centric diatom orders considered more closely related to the pennates to be split off as a new class, Mediophyceae, itself more closely aligned with the pennate diatoms than with the remaining centrics. This hypothesis—later designated the Coscinodiscophyceae-Mediophyceae-Bacillariophyceae, or Coscinodiscophyceae+(Mediophyceae+Bacillariophyceae) (CMB), hypothesis—has been accepted by D.G. Mann among others, who uses it as the basis for the classification of diatoms as presented in Adl et al.'s series of syntheses (2005, 2012, 2019), and also in the Bacillariophyta chapter of the 2017 Handbook of the Protists edited by Archibald et al., with some modifications reflecting the apparent non-monophyly of Medlin et al.'s original "Coscinodiscophyceae". Meanwhile, a group led by E.C. Theriot favours a different hypothesis of phylogeny, which has been termed the structural gradation hypothesis (SGH) and does not recognise the Mediophyceae as a monophyletic group, while another analysis, that of Parks et al., 2018, finds that the radial centric diatoms (Medlin et al.'s Coscinodiscophyceae) are not monophyletic but supports the monophyly of the Mediophyceae minus Attheya, an anomalous genus. Discussion of the relative merits of these conflicting schemes continues by the various parties involved.
Adl et al., 2019 treatment
In 2019, Adl et al. presented the following classification of diatoms, while noting: "This revision reflects numerous advances in the phylogeny of the diatoms over the last decade. Due to our poor taxon sampling outside of the Mediophyceae and pennate diatoms, and the known and anticipated diversity of all diatoms, many clades appear at a high classification level (and the higher level classification is rather flat)." This classification treats diatoms as a phylum (Diatomeae/Bacillariophyta), accepts the class Mediophyceae of Medlin and co-workers, introduces new subphyla and classes for a number of otherwise isolated genera, and re-ranks a number of previously established taxa as subclasses, but does not list orders or families. Inferred ranks have been added for clarity (Adl et al. do not use ranks, but the intended ones in this portion of the classification are apparent from the choice of endings used, within the system of botanical nomenclature employed).
Clade Diatomista Derelle et al. 2016, emend. Cavalier-Smith 2017 (diatoms plus a subset of other ochrophyte groups)
Phylum Diatomeae Dumortier 1821 [= Bacillariophyta Haeckel 1878] (diatoms)
Subphylum Leptocylindrophytina D.G. Mann in Adl et al. 2019
Class Leptocylindrophyceae D.G. Mann in Adl et al. 2019 (Leptocylindrus, Tenuicylindrus)
Class Corethrophyceae D.G. Mann in Adl et al. 2019 (Corethron)
Subphylum Ellerbeckiophytina D.G. Mann in Adl et al. 2019 (Ellerbeckia)
Subphylum Probosciophytina D.G. Mann in Adl et al. 2019 (Proboscia)
Subphylum Melosirophytina D.G. Mann in Adl et al. 2019 (Aulacoseira, Melosira, Hyalodiscus, Stephanopyxis, Paralia, Endictya)
Subphylum Coscinodiscophytina Medlin & Kaczmarska 2004, emend. (Actinoptychus, Coscinodiscus, Actinocyclus, Asteromphalus, Aulacodiscus, Stellarima)
Subphylum Rhizosoleniophytina D.G. Mann in Adl et al. 2019 (Guinardia, Rhizosolenia, Pseudosolenia)
Subphylum Arachnoidiscophytina D.G. Mann in Adl et al. 2019 (Arachnoidiscus)
Subphylum Bacillariophytina Medlin & Kaczmarska 2004, emend.
Class Mediophyceae Jouse & Proshkina-Lavrenko in Medlin & Kaczmarska 2004
Subclass Chaetocerotophycidae Round & R.M. Crawford in Round et al. 1990, emend.
Subclass Lithodesmiophycidae Round & R.M. Crawford in Round et al. 1990, emend.
Subclass Thalassiosirophycidae Round & R.M. Crawford in Round et al. 1990
Subclass Cymatosirophycidae Round & R.M. Crawford in Round et al. 1990
Subclass Odontellophycidae D.G. Mann in Adl et al. 2019
Subclass Chrysanthemodiscophycidae D.G. Mann in Adl et al. 2019
Class Biddulphiophyceae D.G. Mann in Adl et al. 2019
Subclass Biddulphiophycidae Round & R.M. Crawford in Round et al. 1990, emend.
Biddulphiophyceae incertae sedis (Attheya)
Class Bacillariophyceae Haeckel 1878, emend.
Bacillariophyceae incertae sedis (Striatellaceae)
Subclass Urneidophycidae Medlin 2016
Subclass Fragilariophycidae Round in Round, Crawford & Mann 1990, emend.
Subclass Bacillariophycidae D.G. Mann in Round, Crawford & Mann 1990, emend.
See taxonomy of diatoms for more details.
Gallery
Three diatom species were sent to the International Space Station, including the huge (6 mm long) diatoms of Antarctica and the exclusively colonial diatom Bacillaria paradoxa, whose cells move alongside one another in partial but opposite synchrony by a microfluidic mechanism.
Evolution and fossil record
Origin
Heterokont chloroplasts appear to derive from those of red algae, rather than directly from prokaryotes as occurred in plants. This suggests they had a more recent origin than many other algae. However, fossil evidence is scant, and only with the evolution of the diatoms themselves do the heterokonts make a serious impression on the fossil record.
Earliest fossils
The earliest known fossil diatoms date from the early Jurassic (~185 Ma ago), although molecular clock and sedimentary evidence suggest an earlier origin. It has been suggested that their origin may be related to the end-Permian mass extinction (~250 Ma), after which many marine niches were opened. The gap between this event and the time that fossil diatoms first appear may indicate a period when diatoms were unsilicified and their evolution was cryptic. Since the advent of silicification, diatoms have made a significant impression on the fossil record, with major fossil deposits found as far back as the early Cretaceous, and with some rocks, such as diatomaceous earth, being composed almost entirely of them.
Relation to grasslands
The expansion of grassland biomes and the evolutionary radiation of grasses during the Miocene is believed to have increased the flux of soluble silicon to the oceans, and it has been argued that this promoted the diatoms during the Cenozoic era. Recent work suggests that diatom success is decoupled from the evolution of grasses, although both diatom and grassland diversity increased strongly from the middle Miocene.
Relation to climate
Diatom diversity over the Cenozoic has been very sensitive to global temperature, particularly to the equator-pole temperature gradient. Warmer oceans, particularly warmer polar regions, have in the past been shown to have had substantially lower diatom diversity. Future warm oceans with enhanced polar warming, as projected in global-warming scenarios, could thus in theory result in a significant loss of diatom diversity, although from current knowledge it is impossible to say if this would occur rapidly or only over many tens of thousands of years.
Method of investigation
The fossil record of diatoms has largely been established through the recovery of their siliceous frustules in marine and non-marine sediments. Although diatoms have both a marine and non-marine stratigraphic record, diatom biostratigraphy, which is based on time-constrained evolutionary originations and extinctions of unique taxa, is only well developed and widely applicable in marine systems. The duration of diatom species ranges have been documented through the study of ocean cores and rock sequences exposed on land. Where diatom biozones are well established and calibrated to the geomagnetic polarity time scale (e.g., Southern Ocean, North Pacific, eastern equatorial Pacific), diatom-based age estimates may be resolved to within <100,000 years, although typical age resolution for Cenozoic diatom assemblages is several hundred thousand years.
Diatoms preserved in lake sediments are widely used for paleoenvironmental reconstructions of Quaternary climate, especially for closed-basin lakes which experience fluctuations in water depth and salinity.
Isotope records
When diatoms die, their shells (frustules) can settle on the seafloor and become microfossils. Over time, these microfossils become buried as opal deposits in the marine sediment. Paleoclimatology is the study of past climates. Proxy data are used to relate elements collected in modern-day sedimentary samples to climatic and oceanic conditions in the past. Paleoclimate proxies refer to preserved or fossilized physical markers which serve as substitutes for direct meteorological or ocean measurements. An example is the use of diatom isotope records of δ13C, δ18O, and δ30Si (δ13Cdiatom, δ18Odiatom, and δ30Sidiatom). In 2015, Swann and Snelling used these isotope records to document historic changes in the photic zone conditions of the north-west Pacific Ocean, including nutrient supply and the efficiency of the soft-tissue biological pump, from the modern day back to marine isotope stage 5e, which coincides with the last interglacial period. Peaks in opal productivity in that marine isotope stage are associated with the breakdown of the regional halocline stratification and increased nutrient supply to the photic zone.
The initial development of the halocline and stratified water column has been attributed to the onset of major Northern Hemisphere glaciation at 2.73 Ma, which increased the flux of freshwater to the region, via increased monsoonal rainfall and/or glacial meltwater, and sea surface temperatures. The decrease of abyssal water upwelling associated with this may have contributed to the establishment of globally cooler conditions and the expansion of glaciers across the Northern Hemisphere from 2.73 Ma. While the halocline appears to have prevailed through the late Pliocene and early Quaternary glacial–interglacial cycles, other studies have shown that the stratification boundary may have broken down in the late Quaternary at glacial terminations and during the early part of interglacials.
Diversification
The Cretaceous record of diatoms is limited, but recent studies reveal a progressive diversification of diatom types. The Cretaceous–Paleogene extinction event, which in the oceans dramatically affected organisms with calcareous skeletons, appears to have had relatively little impact on diatom evolution.
Turnover
Although no mass extinctions of marine diatoms have been observed during the Cenozoic, times of relatively rapid evolutionary turnover in marine diatom species assemblages occurred near the Paleocene–Eocene boundary, and at the Eocene–Oligocene boundary. Further turnover of assemblages took place at various times between the middle Miocene and late Pliocene, in response to progressive cooling of polar regions and the development of more endemic diatom assemblages.
A global trend toward more delicate diatom frustules has been noted from the Oligocene to the Quaternary. This coincides with an increasingly vigorous circulation of the ocean's surface and deep waters brought about by increasing latitudinal thermal gradients at the onset of major ice sheet expansion on Antarctica and progressive cooling through the Neogene and Quaternary towards a bipolar glaciated world. This caused diatoms to take in less silica for the formation of their frustules. Increased mixing of the oceans renews silica and other nutrients necessary for diatom growth in surface waters, especially in regions of coastal and oceanic upwelling.
Genetics
Expressed sequence tagging
In 2002, the first insights into the properties of the Phaeodactylum tricornutum gene repertoire were described using 1,000 expressed sequence tags (ESTs). Subsequently, the number of ESTs was extended to 12,000 and the diatom EST database was constructed for functional analyses. These sequences have been used to make a comparative analysis between P. tricornutum and the putative complete proteomes from the green alga Chlamydomonas reinhardtii, the red alga Cyanidioschyzon merolae, and the diatom Thalassiosira pseudonana. The diatom EST database now consists of over 200,000 ESTs from P. tricornutum (16 libraries) and T. pseudonana (7 libraries) cells grown in a range of different conditions, many of which correspond to different abiotic stresses.
Genome sequencing
In 2004, the entire genome of the centric diatom, Thalassiosira pseudonana (32.4 Mb) was sequenced, followed in 2008 with the sequencing of the pennate diatom, Phaeodactylum tricornutum (27.4 Mb). Comparisons of the two reveal that the P. tricornutum genome includes fewer genes (10,402 as opposed to 11,776) than T. pseudonana; no major synteny (gene order) could be detected between the two genomes. T. pseudonana genes show an average of ~1.52 introns per gene as opposed to 0.79 in P. tricornutum, suggesting recent widespread intron gain in the centric diatom. Despite relatively recent evolutionary divergence (90 million years), the extent of molecular divergence between centrics and pennates indicates rapid evolutionary rates within the Bacillariophyceae compared to other eukaryotic groups. Comparative genomics also established that a specific class of transposable elements, the Diatom Copia-like retrotransposons (or CoDis), has been significantly amplified in the P. tricornutum genome with respect to T. pseudonana, constituting 5.8% and 1% of the respective genomes.
Endosymbiotic gene transfer
Diatom genomics brought much information about the extent and dynamics of the endosymbiotic gene transfer (EGT) process. Comparison of the T. pseudonana proteins with homologs in other organisms suggested that hundreds have their closest homologs in the Plantae lineage. EGT towards diatom genomes can be illustrated by the fact that the T. pseudonana genome encodes six proteins which are most closely related to genes encoded by the Guillardia theta (cryptomonad) nucleomorph genome. Four of these genes are also found in red algal plastid genomes, thus demonstrating successive EGT from red algal plastid to red algal nucleus (nucleomorph) to heterokont host nucleus. More recent phylogenomic analyses of diatom proteomes provided evidence for a prasinophyte-like endosymbiont in the common ancestor of chromalveolates, supported by the fact that about 70% of diatom genes of Plantae origin are of green lineage provenance and that such genes are also found in the genomes of other stramenopiles. It was therefore proposed that chromalveolates are the product of serial secondary endosymbiosis, first with a green alga, followed by a second with a red alga that conserved the genomic footprint of the previous one but displaced the green plastid. However, phylogenomic analyses of diatom proteomes and chromalveolate evolutionary history will likely take advantage of complementary genomic data from under-sequenced lineages such as red algae.
Horizontal gene transfer
In addition to EGT, horizontal gene transfer (HGT) can occur independently of an endosymbiotic event. The publication of the P. tricornutum genome reported that at least 587 P. tricornutum genes appear to be most closely related to bacterial genes, accounting for more than 5% of the P. tricornutum proteome. About half of these are also found in the T. pseudonana genome, attesting their ancient incorporation in the diatom lineage.
Genetic engineering
To understand the biological mechanisms which underlie the great importance of diatoms in geochemical cycles, scientists have used Phaeodactylum tricornutum and Thalassiosira species as model organisms since the 1990s.
Few molecular biology tools are currently available to generate mutants or transgenic lines: plasmids containing transgenes are inserted into the cells using the biolistic method or transkingdom bacterial conjugation (with yields of 10−6 and 10−4, respectively), while other classical transfection methods, such as electroporation or the use of PEG, have been reported to give lower efficiencies.
Transfected plasmids can be either randomly integrated into the diatom's chromosomes or maintained as stable circular episomes (thanks to the CEN6-ARSH4-HIS3 yeast centromeric sequence). The phleomycin/zeocin resistance gene Sh ble is commonly used as a selection marker, and various transgenes have been successfully introduced and expressed in diatoms, with stable transmission through generations or with the possibility of removing them.
Furthermore, these systems now allow the use of CRISPR-Cas genome-editing tools, enabling the rapid production of functional knock-out mutants and a more precise understanding of the diatoms' cellular processes.
Human uses
Paleontology
Decomposition and decay of diatoms leads to organic and inorganic (in the form of silicates) sediment. The inorganic component provides a method of analyzing past marine environments by coring ocean floors or bay muds, since the inorganic matter is embedded in deposits of clays and silts, forming a permanent geological record of such marine strata (see siliceous ooze).
Industrial
Diatoms, and their shells (frustules) as diatomite or diatomaceous earth, are important industrial resources used for fine polishing and liquid filtration. The complex structure of their microscopic shells has been proposed as a material for nanotechnology.
Diatomite is considered a natural nanomaterial and has many uses and applications, such as: production of various ceramic products (construction ceramics, refractory ceramics, special oxide ceramics, and porous ceramics); production of humidity-control materials; filtration media; material for the cement production industry; initial material for prolonged-release drug carriers; absorption material at industrial scale; the glass industry; catalyst supports; fillers in plastics and paints; purification of industrial waters; pesticide carriers; and improving the physical and chemical characteristics of certain soils.
Diatoms are also used to help determine the origin of materials containing them, including seawater.
Nanotechnology
The deposition of silica by diatoms may also prove to be of utility to nanotechnology. Diatom cells repeatedly and reliably manufacture valves of various shapes and sizes, potentially allowing diatoms to manufacture micro- or nano-scale structures which may be of use in a range of devices, including: optical systems; semiconductor nanolithography; and even vehicles for drug delivery. With an appropriate artificial selection procedure, diatoms that produce valves of particular shapes and sizes might be evolved for cultivation in chemostat cultures to mass-produce nanoscale components. It has also been proposed that diatoms could be used as a component of solar cells by substituting photosensitive titanium dioxide for the silicon dioxide that diatoms normally use to create their cell walls. Diatom biofuel producing solar panels have also been proposed.
Forensic
The main goal of diatom analysis in forensics is to differentiate a death by submersion from a post-mortem immersion of a body in water. Laboratory tests may reveal the presence of diatoms in the body. Since the silica-based skeletons of diatoms do not readily decay, they can sometimes be detected even in heavily decomposed bodies. As they do not occur naturally in the body, if laboratory tests show diatoms in the corpse that are of the same species found in the water where the body was recovered, then it may be good evidence of drowning as the cause of death. The blend of diatom species found in a corpse may be the same or different from the surrounding water, indicating whether the victim drowned in the same site in which the body was found.
History of discovery
The first illustrations of diatoms are found in an article from 1703 in the Transactions of the Royal Society showing unmistakable drawings of Tabellaria. Although the publication was authored by an unnamed English gentleman, there is recent evidence that he was Charles King of Staffordshire. The first formally identified diatom, the colonial Bacillaria paxillifera, was discovered and described in 1783 by the Danish naturalist Otto Friedrich Müller. Like many others after him, he wrongly thought that it was an animal, due to its ability to move. Even Charles Darwin saw diatom remains in dust whilst in the Cape Verde Islands, although he was not sure what they were; it was only later that they were identified for him as siliceous polygastrics. The infusoria that Darwin later noted in the face paint of Fueguinos, native inhabitants of Tierra del Fuego at the southern end of South America, were later identified in the same way. During his lifetime, the siliceous polygastrics were recognised as belonging to the Diatomaceae, and Darwin struggled to understand the reasons underpinning their beauty. He exchanged opinions with the noted cryptogamist G.H.K. Thwaites on the topic. In the fourth edition of On the Origin of Species, he wrote, "Few objects are more beautiful than the minute siliceous cases of the diatomaceae: were these created that they might be examined and admired under the high powers of the microscope?" and reasoned that their exquisite morphologies must have functional underpinnings rather than having been created purely for humans to admire.
| Biology and health sciences | Other organisms | null |
46380 | https://en.wikipedia.org/wiki/Coaxial%20cable | Coaxial cable | Coaxial cable, or coax (pronounced ), is a type of electrical cable consisting of an inner conductor surrounded by a concentric conducting shield, with the two separated by a dielectric (insulating material); many coaxial cables also have a protective outer sheath or jacket. The term coaxial refers to the inner conductor and the outer shield sharing a geometric axis.
Coaxial cable is a type of transmission line, used to carry high-frequency electrical signals with low losses. It is used in such applications as telephone trunk lines, broadband internet networking cables, high-speed computer data busses, cable television signals, and connecting radio transmitters and receivers to their antennas. It differs from other shielded cables because the dimensions of the cable and connectors are controlled to give a precise, constant conductor spacing, which is needed for it to function efficiently as a transmission line.
Coaxial cable was used in the first (1858) and following transatlantic cable installations, but its theory was not described until 1880 by English physicist, engineer, and mathematician Oliver Heaviside, who patented the design in that year (British patent No. 1,407).
Applications
Coaxial cable is used as a transmission line for radio frequency signals. Its applications include feedlines connecting radio transmitters and receivers to their antennas, computer network (e.g., Ethernet) connections, digital audio (S/PDIF), and distribution of cable television signals. One advantage of coaxial over other types of radio transmission line is that in an ideal coaxial cable the electromagnetic field carrying the signal exists only in the space between the inner and outer conductors. This allows coaxial cable runs to be installed next to metal objects such as gutters without the power losses that occur in other types of transmission lines. Coaxial cable also provides protection of the signal from external electromagnetic interference.
Description
Coaxial cable conducts electrical signals using an inner conductor (usually a solid copper, stranded copper or copper-plated steel wire) surrounded by an insulating layer and all enclosed by a shield, typically one to four layers of woven metallic braid and metallic tape. The cable is protected by an outer insulating jacket. Normally, the outside of the shield is kept at ground potential and a signal carrying voltage is applied to the center conductor. When using differential signaling, coaxial cable provides an advantage of equal push-pull currents on the inner conductor and inside of the outer conductor that restrict the signal's electric and magnetic fields to the dielectric, with little leakage outside the shield. Further, electric and magnetic fields outside the cable are largely kept from interfering with signals inside the cable, if unequal currents are filtered out at the receiving end of the line. This property makes coaxial cable a good choice both for carrying weak signals that cannot tolerate interference from the environment, and for stronger electrical signals that must not be allowed to radiate or couple into adjacent structures or circuits. Larger diameter cables and cables with multiple shields have less leakage.
Common applications of coaxial cable include video and CATV distribution, RF and microwave transmission, and computer and instrumentation data connections.
The characteristic impedance of the cable (Z0) is determined by the dielectric constant of the inner insulator and the radii of the inner and outer conductors. In radio frequency systems, where the cable length is comparable to the wavelength of the signals transmitted, a uniform cable characteristic impedance is important to minimize loss. The source and load impedances are chosen to match the impedance of the cable to ensure maximum power transfer and minimum standing wave ratio. Other important properties of coaxial cable include attenuation as a function of frequency, voltage handling capability, and shield quality.
Construction
Coaxial cable design choices affect physical size, frequency performance, attenuation, power handling capabilities, flexibility, strength, and cost. The inner conductor might be solid or stranded; stranded is more flexible. To get better high-frequency performance, the inner conductor may be silver-plated. Copper-plated steel wire is often used as an inner conductor for cable used in the cable TV industry.
The insulator surrounding the inner conductor may be solid plastic, a foam plastic, or air with spacers supporting the inner wire. The properties of the dielectric insulator determine some of the electrical properties of the cable. A common choice is a solid polyethylene (PE) insulator, used in lower-loss cables. Solid Teflon (PTFE) is also used as an insulator, and exclusively in plenum-rated cables. Some coaxial lines use air (or some other gas) and have spacers to keep the inner conductor from touching the shield.
Many conventional coaxial cables use braided copper wire forming the shield. This allows the cable to be flexible, but it also means there are gaps in the shield layer, and the inner dimension of the shield varies slightly because the braid cannot be flat. Sometimes the braid is silver-plated. For better shield performance, some cables have a double-layer shield. The shield might be just two braids, but it is more common now to have a thin foil shield covered by a wire braid. Some cables have more than two shield layers, such as "quad-shield", which uses four alternating layers of foil and braid. Other shield designs sacrifice flexibility for better performance; some shields are a solid metal tube. Those cables cannot be bent sharply, as the shield will kink, causing losses in the cable. When a foil shield is used, a small wire conductor incorporated into the foil makes soldering the shield termination easier.
For high-power radio-frequency transmission up to about 1 GHz, coaxial cable with a solid copper outer conductor is available in sizes of 0.25 inch upward. The outer conductor is corrugated like a bellows to permit flexibility and the inner conductor is held in position by a plastic spiral to approximate an air dielectric. One brand name for such cable is Heliax.
Coaxial cables require an internal structure of an insulating (dielectric) material to maintain the spacing between the center conductor and shield. The dielectric losses increase in this order: Ideal dielectric (no loss), vacuum, air, polytetrafluoroethylene (PTFE), polyethylene foam, and solid polyethylene. An inhomogeneous dielectric needs to be compensated by a non-circular conductor to avoid current hot-spots.
While many cables have a solid dielectric, many others have a foam dielectric that contains as much air or other gas as possible to reduce the losses by allowing the use of a larger diameter center conductor. Foam coax will have about 15% less attenuation but some types of foam dielectric can absorb moisture—especially at its many surfaces—in humid environments, significantly increasing the loss. Supports shaped like stars or spokes are even better but more expensive and very susceptible to moisture infiltration. Still more expensive were the air-spaced coaxials used for some inter-city communications in the mid-20th century. The center conductor was suspended by polyethylene discs every few centimeters. In some low-loss coaxial cables such as the RG-62 type, the inner conductor is supported by a spiral strand of polyethylene, so that an air space exists between most of the conductor and the inside of the jacket. The lower dielectric constant of air allows for a greater inner diameter at the same impedance and a greater outer diameter at the same cutoff frequency, lowering ohmic losses. Inner conductors are sometimes silver-plated to smooth the surface and reduce losses due to skin effect. A rough surface extends the current path and concentrates the current at peaks, thus increasing ohmic loss.
The insulating jacket can be made from many materials. A common choice is PVC, but some applications may require fire-resistant materials. Outdoor applications may require the jacket to resist ultraviolet light, oxidation, rodent damage, or direct burial. Flooded coaxial cables use a water-blocking gel to protect the cable from water infiltration through minor cuts in the jacket. For internal chassis connections the insulating jacket may be omitted.
Signal propagation
Twin-lead transmission lines have the property that the electromagnetic wave propagating down the line extends into the space surrounding the parallel wires. These lines have low loss, but also have undesirable characteristics. They cannot be bent, tightly twisted, or otherwise shaped without changing their characteristic impedance, causing reflection of the signal back toward the source. They also cannot be buried or run along or attached to anything conductive, as the extended fields will induce currents in the nearby conductors causing unwanted radiation and detuning of the line. Standoff insulators are used to keep them away from parallel metal surfaces. Coaxial lines largely solve this problem by confining virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and moderately twisted without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them, so long as provisions are made to ensure differential signalling push-pull currents in the cable.
In radio-frequency applications up to a few gigahertz, the wave propagates primarily in the transverse electromagnetic (TEM) mode, which means that the electric and magnetic fields are both perpendicular to the direction of propagation. However, above a certain cutoff frequency, transverse electric (TE) or transverse magnetic (TM) modes can also propagate, as they do in a hollow waveguide. It is usually undesirable to transmit signals above the cutoff frequency, since it may cause multiple modes with different phase velocities to propagate, interfering with each other. The outer diameter is roughly inversely proportional to the cutoff frequency. A propagating surface-wave mode that only involves the central conductor also exists, but is effectively suppressed in coaxial cable of conventional geometry and common impedance. Electric field lines for this TM mode have a longitudinal component and require line lengths of a half-wavelength or longer.
Coaxial cable may be viewed as a type of waveguide. Power is transmitted through the radial electric field and the circumferential magnetic field in the TEM mode. This is the dominant mode from zero frequency (DC) to an upper limit determined by the electrical dimensions of the cable.
Connectors
Coaxial connectors are designed to maintain a coaxial form across the connection and have the same impedance as the attached cable. Connectors are usually plated with high-conductivity metals such as silver or tarnish-resistant gold. Due to the skin effect, the RF signal is only carried by the plating at higher frequencies and does not penetrate to the connector body. Silver, however, tarnishes quickly, and the silver sulfide that is produced is poorly conductive, degrading connector performance and making silver a poor choice for this application.
Important parameters
Coaxial cable is a particular kind of transmission line, so the circuit models developed for general transmission lines are appropriate. See Telegrapher's equation.
Physical parameters
In the following section, these symbols are used:
$\ell$: length of the cable.
$d$: outside diameter of the inner conductor.
$D$: inside diameter of the shield.
$\varepsilon$: dielectric constant of the insulator. The dielectric constant is often quoted as the relative dielectric constant $\varepsilon_r$ referred to the dielectric constant of free space $\varepsilon_0$: $\varepsilon_r = \varepsilon/\varepsilon_0$. When the insulator is a mixture of different dielectric materials (e.g., polyethylene foam is a mixture of polyethylene and air), then the term effective dielectric constant is often used.
$\mu$: magnetic permeability of the insulator. Permeability is often quoted as the relative permeability $\mu_r$ referred to the permeability of free space $\mu_0$: $\mu_r = \mu/\mu_0$. The relative permeability will almost always be 1.
Fundamental electrical parameters
$C$: shunt capacitance per unit length, in farads per metre.
$L$: series inductance per unit length, in henries per metre, considering the central conductor to be a thin hollow cylinder (due to skin effect).
$R$: series resistance per unit length, in ohms per metre. The resistance per unit length is just the resistance of the inner conductor and the shield at low frequencies. At higher frequencies, skin effect increases the effective resistance by confining the conduction to a thin layer of each conductor.
$G$: shunt conductance per unit length, in siemens per metre. The shunt conductance is usually very small because insulators with good dielectric properties are used (a very low loss tangent). At high frequencies, a dielectric can have a significant resistive loss.
Derived electrical parameters
Characteristic impedance in ohms (Ω). The complex impedance of an infinite length of transmission line is:

$Z = \sqrt{\dfrac{R + j\omega L}{G + j\omega C}}$

where $R$ is the resistance per unit length, $L$ is the inductance per unit length, $G$ is the conductance per unit length of the dielectric, $C$ is the capacitance per unit length, and $\omega = 2\pi f$ is the angular frequency. The "per unit length" dimensions cancel out in the impedance formula.
At DC the two reactive terms are zero, so the impedance is real-valued, and is extremely high. It looks like

$Z_{\mathrm{DC}} = \sqrt{\dfrac{R}{G}}$
With increasing frequency, the reactive components take effect and the impedance of the line is complex-valued. At very low frequencies (the audio range, of interest to telephone systems), $\omega L$ is typically much smaller than $R$, while $G$ is much smaller than $\omega C$, so the impedance at low frequencies is

$Z \approx \sqrt{\dfrac{R}{j\omega C}}$

which has a phase of −45 degrees.
At higher frequencies, the reactive terms usually dominate ($\omega L \gg R$ and $\omega C \gg G$), and the cable impedance again becomes real-valued. That value is $Z_0$, the characteristic impedance of the cable:

$Z_0 = \sqrt{\dfrac{L}{C}}$
Assuming the dielectric properties of the material inside the cable do not vary appreciably over the operating range of the cable, the characteristic impedance is frequency independent above about five times the shield cutoff frequency. For typical coaxial cables, the shield cutoff frequency is 600 Hz (for RG-6A) to 2,000 Hz (for RG-58C).
The parameters $L$ and $C$ are determined from the ratio of the inner ($d$) and outer ($D$) diameters and the dielectric constant ($\varepsilon$). The characteristic impedance is given by

$Z_0 = \dfrac{1}{2\pi}\sqrt{\dfrac{\mu}{\varepsilon}}\,\ln\dfrac{D}{d} \approx \dfrac{59.96\ \Omega}{\sqrt{\varepsilon_r}}\,\ln\dfrac{D}{d}$
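As a rough illustration, these geometric formulas can be evaluated directly. The following Python sketch computes the per-unit-length capacitance and inductance and the resulting high-frequency characteristic impedance; the RG-58-like dimensions and dielectric constant are assumed illustrative values, not taken from any datasheet.

    import math

    EPS0 = 8.854187817e-12    # vacuum permittivity, F/m
    MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

    def coax_parameters(d, D, eps_r, mu_r=1.0):
        """Per-unit-length C and L, and the high-frequency Z0, from geometry."""
        C = 2 * math.pi * EPS0 * eps_r / math.log(D / d)  # F/m
        L = MU0 * mu_r * math.log(D / d) / (2 * math.pi)  # H/m
        return C, L, math.sqrt(L / C)

    # Assumed RG-58-like dimensions: 0.9 mm inner conductor, 2.95 mm shield, solid PE
    C, L, Z0 = coax_parameters(d=0.9e-3, D=2.95e-3, eps_r=2.25)
    print(f"C = {C*1e12:.1f} pF/m, L = {L*1e9:.0f} nH/m, Z0 = {Z0:.1f} ohms")

With these assumed values the sketch gives roughly 105 pF/m, 237 nH/m, and 47 Ω, close to the nominal 50 Ω of such cables.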
Attenuation (loss) per unit length, in decibels per meter. This is dependent on the loss in the dielectric material filling the cable, and resistive losses in the center conductor and outer shield. These losses are frequency dependent, the losses becoming higher as the frequency increases. Skin effect losses in the conductors can be reduced by increasing the diameter of the cable. A cable with twice the diameter will have half the skin effect resistance. Ignoring dielectric and other losses, the larger cable would halve the dB/meter loss. In designing a system, engineers consider not only the loss in the cable but also the loss in the connectors.
Velocity of propagation, in meters per second. The velocity of propagation depends on the dielectric constant and the permeability (which is usually 1): $v = \dfrac{1}{\sqrt{LC}} = \dfrac{c}{\sqrt{\varepsilon_r \mu_r}}$
Single-mode band. In coaxial cable, the dominant mode (the mode with the lowest cutoff frequency) is the TEM mode, which has a cutoff frequency of zero; it propagates all the way down to DC. The mode with the next lowest cutoff is the TE11 mode. This mode has one 'wave' (two reversals of polarity) in going around the circumference of the cable. To a good approximation, the condition for the TE11 mode to propagate is that the wavelength in the dielectric is no longer than the average circumference of the insulator; that is, that the frequency is at least

$f_c = \dfrac{2c}{\pi (D + d)\sqrt{\varepsilon_r \mu_r}}$
Hence, the cable is single-mode from DC up to this frequency, and might in practice be used up to 90% of this frequency.
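A minimal sketch of this cutoff estimate, reusing the assumed RG-58-like dimensions from the earlier example:

    import math

    C0 = 299_792_458.0  # speed of light in vacuum, m/s

    def te11_cutoff(d, D, eps_r, mu_r=1.0):
        """Approximate TE11 cutoff: the wavelength in the dielectric equals
        the average circumference of the insulator."""
        avg_circumference = math.pi * (D + d) / 2
        return C0 / (math.sqrt(eps_r * mu_r) * avg_circumference)

    fc = te11_cutoff(d=0.9e-3, D=2.95e-3, eps_r=2.25)
    print(f"TE11 cutoff ~ {fc/1e9:.0f} GHz; single-mode use up to ~{0.9*fc/1e9:.0f} GHz")

For these dimensions the estimate comes out in the tens of gigahertz, far above the frequencies at which such cable is normally used.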
Peak voltage. The peak voltage is set by the breakdown voltage of the insulator:

$V_p = E_d\,\dfrac{d}{2}\,\ln\dfrac{D}{d}$
where
$V_p$ is the peak voltage
$E_d$ is the insulator's breakdown voltage in volts per meter
$d$ is the inner diameter in meters
$D$ is the outer diameter in meters
The calculated peak voltage is often reduced by a safety factor.
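A hedged sketch of the peak-voltage calculation; the breakdown field used below is only an order-of-magnitude assumption for polyethylene, and real cable ratings are further derated for manufacturing imperfections and ageing.

    import math

    def peak_voltage(d, D, e_breakdown, safety_factor=1.0):
        """Peak voltage from the insulator's breakdown field. The electric
        field is strongest at the inner conductor surface (r = d/2)."""
        return e_breakdown * (d / 2) * math.log(D / d) / safety_factor

    # Assumed breakdown strength ~20 MV/m for polyethylene (order of magnitude only)
    vp = peak_voltage(d=0.9e-3, D=2.95e-3, e_breakdown=20e6, safety_factor=4)
    print(f"Derated peak voltage ~ {vp:.0f} V")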
Choice of impedance
The best coaxial cable impedances were experimentally determined at Bell Laboratories in 1929 to be 77 Ω for low-attenuation, 60 Ω for high-voltage, and 30 Ω for high-power. For a coaxial cable with air dielectric and a shield of a given inner diameter, the attenuation is minimized by choosing the diameter of the inner conductor to give a characteristic impedance of 76.7 Ω. When more common dielectrics are considered, the lowest insertion loss impedance drops down to a value between 52 and 64 Ω. Maximum power handling is achieved at 30 Ω.
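The 76.7 Ω figure can be reproduced numerically: for a fixed shield diameter, the skin-effect conductor loss is proportional to (1/d + 1/D)/ln(D/d), so the quantity to minimize over the ratio x = D/d is (1 + x)/ln(x). A minimal sketch:

    import math

    def conductor_loss_factor(x):
        """Relative skin-effect attenuation for fixed shield diameter D,
        as a function of x = D/d (up to a constant factor)."""
        return (1 + x) / math.log(x)

    # Brute-force scan over plausible ratios
    xs = [1.5 + i * 0.0001 for i in range(100_000)]
    x_opt = min(xs, key=conductor_loss_factor)
    z0_air = 60 * math.log(x_opt)  # air dielectric
    print(f"Optimal D/d ~ {x_opt:.3f}, giving Z0 ~ {z0_air:.1f} ohms")

The scan converges on D/d ≈ 3.59, which for an air dielectric corresponds to the 76.7 Ω quoted above.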
The approximate impedance required to match a centre-fed dipole antenna in free space (i.e., a dipole without ground reflections) is 73 Ω, so 75 Ω coax was commonly used for connecting shortwave antennas to receivers. These typically involve such low levels of RF power that power-handling and high-voltage breakdown characteristics are unimportant when compared to attenuation. Likewise with CATV, although many broadcast TV installations and CATV headends use 300 Ω folded dipole antennas to receive off-the-air signals, 75 Ω coax makes a convenient 4:1 balun transformer for these as well as possessing low attenuation.
The arithmetic mean between 30 Ω and 77 Ω is 53.5 Ω; the geometric mean is 48 Ω. The selection of 50 Ω as a compromise between power-handling capability and attenuation is in general cited as the reason for the number. 50 Ω also works out tolerably well because it corresponds approximately to the feedpoint impedance of a half-wave dipole, mounted approximately a half-wave above "normal" ground (ideally 73 Ω, but reduced for low-hanging horizontal wires).
RG-62 is a 93 Ω coaxial cable originally used in mainframe computer networks in the 1970s and early 1980s (it was the cable used to connect IBM 3270 terminals to IBM 3274/3174 terminal cluster controllers). Later, some manufacturers of LAN equipment, such as Datapoint for ARCNET, adopted RG-62 as their coaxial cable standard. The cable has the lowest capacitance per unit-length when compared to other coaxial cables of similar size.
All of the components of a coaxial system should have the same impedance to avoid internal reflections at connections between components (see Impedance matching). Such reflections may cause signal attenuation. They introduce standing waves, which increase losses and can even result in cable dielectric breakdown with high-power transmission. In analog video or TV systems, reflections cause ghosting in the image; multiple reflections may cause the original signal to be followed by more than one echo. If a coaxial cable is open (not connected at the end), the termination has nearly infinite resistance, which causes reflections. If the coaxial cable is short-circuited, the termination resistance is nearly zero, which causes reflections with the opposite polarity. Reflections will be nearly eliminated if the coaxial cable is terminated in a pure resistance equal to its impedance.
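The size of these reflections can be quantified by the voltage reflection coefficient Γ = (ZL − Z0)/(ZL + Z0), which is 0 for a matched termination, +1 for an open circuit, and −1 for a short. A minimal sketch with illustrative load values:

    def reflection_coefficient(z_load, z0):
        """Voltage reflection coefficient at a termination."""
        return (z_load - z0) / (z_load + z0)

    Z0 = 50.0
    for label, zl in [("matched", 50.0), ("open", 1e12), ("short", 1e-12), ("75-ohm load", 75.0)]:
        print(f"{label:12s}: gamma = {reflection_coefficient(zl, Z0):+.3f}")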
Issues
Signal leakage
Signal leakage is the passage of electromagnetic fields through the shield of a cable and occurs in both directions. Ingress is the passage of an outside signal into the cable and can result in noise and disruption of the desired signal. Egress is the passage of signal intended to remain within the cable into the outside world and can result in a weaker signal at the end of the cable and radio frequency interference to nearby devices. Severe leakage usually results from improperly installed connectors or faults in the cable shield.
For example, in the United States, signal leakage from cable television systems is regulated by the FCC, since cable signals use the same frequencies as aeronautical and radionavigation bands. CATV operators may also choose to monitor their networks for leakage to prevent ingress. Outside signals entering the cable can cause unwanted noise and picture ghosting. Excessive noise can overwhelm the signal, making it useless. In-channel ingress can be digitally removed by ingress cancellation.
An ideal shield would be a perfect conductor with no holes, gaps, or bumps connected to a perfect ground. However, a smooth solid highly conductive shield would be heavy, inflexible, and expensive. Such coax is used for straight-line feeds to commercial radio broadcast towers. More economical cables must make compromises between shield efficacy, flexibility, and cost, such as the corrugated surface of flexible hardline, flexible braid, or foil shields. Since shields cannot be perfect conductors, current flowing on the inside of the shield produces an electromagnetic field on the outer surface of the shield.
Consider the skin effect. The magnitude of an alternating current in a conductor decays exponentially with distance beneath the surface, with the depth of penetration being proportional to the square root of the resistivity. This means that, in a shield of finite thickness, some small amount of current will still be flowing on the opposite surface of the conductor. With a perfect conductor (i.e., zero resistivity), all of the current would flow at the surface, with no penetration into and through the conductor. Real cables have a shield made of an imperfect, although usually very good, conductor, so there must always be some leakage.
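The depth of penetration mentioned here is the skin depth, δ = √(ρ / (π f μ)). A short sketch evaluating it for copper at a few frequencies, using the usual handbook resistivity value:

    import math

    MU0 = 4 * math.pi * 1e-7  # vacuum permeability, H/m

    def skin_depth(f, resistivity, mu_r=1.0):
        """Depth at which current density falls to 1/e of its surface value."""
        return math.sqrt(resistivity / (math.pi * f * MU0 * mu_r))

    RHO_CU = 1.68e-8  # copper resistivity, ohm*m
    for f in (1e3, 1e6, 1e9):
        print(f"{f:>10.0f} Hz: skin depth = {skin_depth(f, RHO_CU)*1e6:.1f} um")

At 1 MHz the skin depth in copper is about 65 µm, so a typical shield is many skin depths thick at RF but less than one skin depth thick at audio and power frequencies, which is why some current always reaches the outer surface at low frequencies.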
The gaps or holes allow some of the electromagnetic field to penetrate to the other side. For example, braided shields have many small gaps. The gaps are smaller when using a foil (solid metal) shield, but there is still a seam running the length of the cable. Foil becomes increasingly rigid with increasing thickness, so a thin foil layer is often surrounded by a layer of braided metal, which offers greater flexibility for a given cross-section.
Signal leakage can be severe if there is poor contact at the interface to connectors at either end of the cable or if there is a break in the shield.
To greatly reduce signal leakage into or out of the cable (by a factor of 1,000 or even 10,000), superscreened cables are often used in critical applications, such as for neutron flux counters in nuclear reactors.
Superscreened cables for nuclear use are defined in IEC 96-4-1 (1990); however, as there have been long gaps in the construction of nuclear power stations in Europe, many existing installations use superscreened cables made to the UK standard AESS(TRG) 71181, which is referenced in IEC 61917.
Ground loops
A continuous current, even if small, along the imperfect shield of a coaxial cable can cause visible or audible interference. In CATV systems distributing analog signals the potential difference between the coaxial network and the electrical grounding system of a house can cause a visible "hum bar" in the picture. This appears as a wide horizontal distortion bar in the picture that scrolls slowly upward. Such differences in potential can be reduced by proper bonding to a common ground at the house. See ground loop.
Noise
External fields create a voltage across the inductance of the outside of the outer conductor between sender and receiver. The effect is less when there are several parallel cables, as this reduces the inductance and, therefore, the voltage. Because the outer conductor carries the reference potential for the signal on the inner conductor, the receiving circuit measures the wrong voltage.
Transformer effect
The transformer effect is sometimes used to mitigate the effect of currents induced in the shield. The inner and outer conductors form the primary and secondary winding of the transformer, and the effect is enhanced in some high-quality cables that have an outer layer of mu-metal. Because of this 1:1 transformer, the aforementioned voltage across the outer conductor is transformed onto the inner conductor so that the two voltages can be cancelled by the receiver. Many senders and receivers have means to reduce the leakage even further. They increase the transformer effect by passing the whole cable through a ferrite core one or more times.
Common mode current and radiation
Common mode current occurs when stray currents in the shield flow in the same direction as the current in the center conductor, causing the coax to radiate. They are the opposite of the desired "push-pull" differential signalling currents, where the signal currents on the inner and outer conductor are equal and opposite.
Most of the shield effect in coax results from opposing currents in the center conductor and shield creating opposite magnetic fields that cancel, and thus do not radiate. The same effect helps ladder line. However, ladder line is extremely sensitive to surrounding metal objects, which can enter the fields before they completely cancel. Coax does not have this problem, since the field is enclosed in the shield. However, it is still possible for a field to form between the shield and other connected objects, such as the antenna the coax feeds. The current formed by the field between the antenna and the coax shield would flow in the same direction as the current in the center conductor, and thus not be canceled. Energy would radiate from the coax itself, affecting the radiation pattern of the antenna. With sufficient power, this could be a hazard to people near the cable. A properly placed and properly sized balun can prevent common-mode radiation in coax. An isolating transformer or blocking capacitor can be used to couple a coaxial cable to equipment, where it is desirable to pass radio-frequency signals but to block direct current or low-frequency power.
Higher impedance at audio frequencies
The characteristic impedance formula above is a good approximation at radio frequencies, but for frequencies below 100 kHz (such as audio) it becomes important to use the complete telegrapher's equation, Z0 = sqrt((R + jωL) / (G + jωC)), where R, L, G, and C are the series resistance, series inductance, shunt conductance, and shunt capacitance of the cable per unit length.
Applying this formula to typical 75 ohm coax shows that the measured impedance across the audio spectrum ranges from roughly 150 ohms to 5,000 ohms, much higher than the nominal value. The velocity of propagation also slows considerably. Coaxial cable impedance can therefore be expected to be consistent at radio frequencies but variable across audio frequencies. This effect became apparent when attempts were made to send a plain voice signal across the transatlantic telegraph cable, with poor results.
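To illustrate the scale of this effect, the short Python sketch below evaluates the full expression at several frequencies. The per-unit-length values for R, L, G, and C are plausible assumed figures for a 75-ohm cable, chosen only for illustration, not measurements from any particular cable type.

# Sketch: characteristic impedance of nominal 75-ohm coax vs. frequency,
# using Z0 = sqrt((R + jwL) / (G + jwC)). Parameter values are assumptions.
import math, cmath

R = 0.05      # ohm/m, conductor loop resistance (assumed)
L = 377e-9    # H/m, chosen so that sqrt(L/C) is about 75 ohm
G = 1e-10     # S/m, dielectric leakage (assumed, nearly ideal)
C = 67e-12    # F/m, typical-looking value for 75-ohm coax (assumed)

for f in (20, 100, 1_000, 20_000, 1_000_000):
    w = 2 * math.pi * f
    z0 = cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    print(f"{f:>9} Hz  |Z0| = {abs(z0):8.1f} ohm")

With these assumed values the magnitude falls from a few thousand ohms at 20 Hz toward the nominal 75 ohms by 1 MHz, matching the qualitative behaviour described above.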
Standards
Most coaxial cables have a characteristic impedance of either 50, 52, 75, or 93 Ω. The RF industry uses standard type-names for coaxial cables. Thanks to television, RG-6 is the most commonly used coaxial cable for home use, and the majority of connections outside Europe are by F connectors.
A series of standard types of coaxial cable were specified for military uses, in the form "RG-#" or "RG-#/U". They date from World War II and were listed in MIL-HDBK-216 published in 1962. These designations are now obsolete. The RG designation stands for Radio Guide; the U designation stands for Universal. The current military standard is MIL-SPEC MIL-C-17. MIL-C-17 numbers, such as "M17/75-RG214", are given for military cables and manufacturer's catalog numbers for civilian applications. However, the RG-series designations were so common for generations that they are still used, although critical users should be aware that since the handbook is withdrawn there is no standard to guarantee the electrical and physical characteristics of a cable described as "RG-# type". The RG designators are mostly used to identify compatible connectors that fit the inner conductor, dielectric, and jacket dimensions of the old RG-series cables.
Dielectric material codes
FPE is foamed polyethylene
PE is solid polyethylene
PF is polyethylene foam
PTFE is polytetrafluoroethylene
ASP is air space polyethylene
VF is the velocity factor; it is determined by the effective relative permittivity and relative permeability of the dielectric (see the sketch after this list)
VF for solid PE is about 0.66
VF for foam PE is about 0.78 to 0.88
VF for air is about 1.00
VF for solid PTFE is about 0.70
VF for foam PTFE is about 0.84
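As a rough numerical illustration, the velocity factor of a non-magnetic dielectric follows VF = 1/sqrt(εr). The sketch below approximately reproduces the values listed above; the relative permittivities used are common textbook figures, assumed here for illustration.

# Sketch: velocity factor from relative permittivity, VF = 1/sqrt(eps_r),
# valid for non-magnetic dielectrics. eps_r values are assumed.
import math

dielectrics = {
    "solid PE":   2.3,    # gives VF ~ 0.66
    "foam PE":    1.55,   # gives VF ~ 0.80, within the 0.78-0.88 range
    "solid PTFE": 2.1,    # gives VF ~ 0.69
    "air":        1.0006, # gives VF ~ 1.00
}

for name, eps_r in dielectrics.items():
    print(f"{name:>10}: VF ~ {1 / math.sqrt(eps_r):.2f}")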
There are also other designation schemes for coaxial cables such as the URM, CT, BT, RA, PSF and WF series.
Uses
Short coaxial cables are commonly used to connect home video equipment, in ham radio setups, and in Nuclear Instrumentation Modules. While formerly common for implementing computer networks, in particular Ethernet ("thick" 10BASE5 and "thin" 10BASE2), twisted pair cables have replaced them in most applications except in the consumer cable modem market for broadband Internet access.
Long distance coaxial cable was used in the 20th century to connect radio networks, television networks, and long-distance telephone networks though this has largely been superseded by later methods (fibre optics, T1/E1, satellite).
Shorter coaxials still carry cable television signals to the majority of television receivers, and this purpose consumes the majority of coaxial cable production. In the 1980s and early 1990s, coaxial cable was also used in computer networking, most prominently in Ethernet networks, where it was replaced from the late 1990s to the early 2000s by UTP cables in North America and STP cables in Western Europe, both with 8P8C modular connectors.
Micro coaxial cables are used in a range of consumer devices, military equipment, and also in ultrasound scanning equipment.
The most common impedances that are widely used are 50 or 52 ohms and 75 ohms, although other impedances are available for specific applications. The 50 / 52 ohm cables are widely used for industrial and commercial two-way radio frequency applications (including radio, and telecommunications), although 75 ohms is commonly used for broadcast television and radio.
Coaxial cable is often used to carry signals from an antenna to a receiver. In many cases, the same cable carries power toward the antenna, to power a preamplifier. In some cases, a single cable carries unidirectional power and bidirectional data/signals, as in DiSEqC.
Types
Hard line
Larger varieties of hardline may have a center conductor that is constructed from either rigid or corrugated copper tubing. The dielectric in hard line may consist of polyethylene foam, air, or a pressurized gas such as nitrogen or desiccated air (dried air). In gas-charged lines, hard plastics such as nylon are used as spacers to separate the inner and outer conductors. The addition of these gases into the dielectric space reduces moisture contamination, provides a stable dielectric constant, and provides a reduced risk of internal arcing. Gas-filled hardlines are usually used on high-power RF transmitters such as television or radio broadcasting, military transmitters, and high-power amateur radio applications but may also be used on some critical lower-power applications such as those in the microwave bands. However, in the microwave region, waveguide is more often used than hard line for transmitter-to-antenna, or antenna-to-receiver applications. The various shields used in hard line also differ; some forms use rigid tubing, or pipe, while others may use a corrugated tubing, which makes bending easier, as well as reduces kinking when the cable is bent to conform. Smaller varieties of hard line may be used internally in some high-frequency applications, in particular in equipment within the microwave range, to reduce interference between stages of the device.
Radiating
Radiating or leaky cable is another form of coaxial cable constructed in a similar fashion to hard line; however, it has tuned slots cut into the shield. These slots are tuned to the specific RF wavelength of operation or to a specific radio frequency band. This type of cable is designed to provide a tuned, bi-directional "desired" leakage effect between transmitter and receiver. It is often used in elevator shafts, on US Navy ships, in underground transportation tunnels, and in other areas where an antenna is not feasible. One example of this type of cable is Radiax (CommScope).
RG-6
RG-6 is available in four different types designed for various applications. In addition, the core may be copper clad steel (CCS) or bare solid copper (BC). "Plain" or "house" RG-6 is designed for indoor or external house wiring. "Flooded" cable is infused with water-blocking gel for use in underground conduit or direct burial. "Messenger" may contain some waterproofing but is distinguished by the addition of a steel messenger wire along its length to carry the tension involved in an aerial drop from a utility pole. "Plenum" cabling is expensive and comes with a special Teflon-based outer jacket designed for use in ventilation ducts to meet fire codes. It was developed because the plastics used as the outer jacket and inner insulation in many "plain" or "house" cables give off poisonous gas when burned.
Triaxial cable
Triaxial cable or triax is coaxial cable with a third layer of shielding, insulation and sheathing. The outer shield, which is earthed (grounded), protects the inner shield from electromagnetic interference from outside sources.
Semi-rigid
Semi-rigid cable is a coaxial form using a solid copper outer sheath. This type of coax offers superior screening compared to cables with a braided outer conductor, especially at higher frequencies. The major disadvantage is that the cable, as its name implies, is not very flexible, and is not intended to be flexed after initial forming.
Conformable cable is a flexible reformable alternative to semi-rigid coaxial cable used where flexibility is required. Conformable cable can be stripped and formed by hand without the need for specialized tools, similar to standard coaxial cable.
Rigid line
Rigid line is a coaxial line formed by two copper tubes held concentric by PTFE supports spaced every other meter. Rigid lines cannot be bent, so they often need elbows. Interconnection with rigid line is done with an inner bullet/inner support and a flange or connection kit. Typically, rigid lines are connected using standardised EIA RF connectors whose bullet and flange sizes match the standard line diameters. For each outer diameter, either 75 or 50 ohm inner tubes can be obtained.
Rigid line is commonly used indoors for interconnection between high-power transmitters and other RF-components, but more rugged rigid line with weatherproof flanges is used outdoors on antenna masts, etc. In the interests of saving weight and costs, on masts and similar structures the outer line is often aluminium, and special care must be taken to prevent corrosion.
With a flange connector, it is also possible to go from rigid line to hard line. Many broadcasting antennas and antenna splitters use the flanged rigid line interface even when connecting to flexible coaxial cables and hard line.
Rigid line is produced in a number of different sizes.
Interference and troubleshooting
Coaxial cable insulation may degrade, requiring replacement of the cable, especially if it has been exposed to the elements on a continuous basis. The shield is normally grounded, and if even a single thread of the braid or filament of foil touches the center conductor, the signal will be shorted causing significant or total signal loss. This most often occurs at improperly installed end connectors and splices. Also, the connector or splice must be properly attached to the shield, as this provides the path to ground for the interfering signal.
Despite being shielded, interference can occur on coaxial cable lines. Susceptibility to interference has little relationship to broad cable type designations (e.g. RG-59, RG-6) but is strongly related to the composition and configuration of the cable's shielding. For cable television, with frequencies extending well into the UHF range, a foil shield is normally provided, and will provide total coverage as well as high effectiveness against high-frequency interference. Foil shielding is ordinarily accompanied by a tinned copper or aluminum braid shield, with anywhere from 60 to 95% coverage. The braid is important to shield effectiveness because (1) it is more effective than foil at preventing low-frequency interference, (2) it provides higher conductivity to ground than foil, and (3) it makes attaching a connector easier and more reliable. "Quad-shield" cable, using two low-coverage aluminum braid shields and two layers of foil, is often used in situations involving troublesome interference, but is less effective than a single layer of foil and single high-coverage copper braid shield such as is found on broadcast-quality precision video cable.
In the United States and some other countries, cable television distribution systems use extensive networks of outdoor coaxial cable, often with in-line distribution amplifiers. Leakage of signals into and out of cable TV systems can cause interference to cable subscribers and to over-the-air radio services using the same frequencies as those of the cable system.
History
1858 — Coaxial cable used in the first transatlantic telegraph cable.
1880 — Coaxial cable patented in England by Oliver Heaviside, patent no. 1,407.
1884 — Siemens & Halske patent coaxial cable in Germany (Patent No. 28,978, 27 March 1884).
1894 — Nikola Tesla (U.S. Patent 514,167)
1929 — First modern coaxial cable patented by Lloyd Espenschied and Herman Affel of AT&T's Bell Telephone Laboratories.
1936 — First closed circuit transmission of TV pictures on coaxial cable, from the 1936 Summer Olympics in Berlin to Leipzig.
1936 — Underwater coaxial cable installed between Apollo Bay, near Melbourne, Australia, and Stanley, Tasmania. The cable can carry one 8.5-kHz broadcast channel and seven telephone channels.
1936 — AT&T installs experimental coaxial telephone and television cable between New York and Philadelphia, with automatic booster stations every . Completed in December, it can transmit 240 telephone calls simultaneously.
1936 — Coaxial cable laid by the General Post Office (now BT) between London and Birmingham, providing 40 telephone channels.
1941 — First commercial use in the US by AT&T, between Minneapolis, Minnesota, and Stevens Point, Wisconsin. The L1 system had a capacity of one TV channel or 480 telephone circuits.
1949 — On January 11, eight stations on the US East Coast and seven Midwestern stations are linked via a long-distance coaxial cable.
1956 — First transatlantic telephone coaxial cable laid, TAT-1.
1962 — Sydney–Melbourne coaxial cable commissioned, carrying 3 × 1,260 simultaneous telephone connections and/or simultaneous inter-city television transmission.
| Technology | Signal transmission | null |
46408 | https://en.wikipedia.org/wiki/Magenta | Magenta | Magenta () is a purplish-red color. On color wheels of the RGB (additive) and CMY (subtractive) color models, it is located precisely midway between blue and red. It is one of the four colors of ink used in color printing by an inkjet printer, along with yellow, cyan, and black to make all the other colors. The tone of magenta used in printing, printer's magenta, is redder than the magenta of the RGB (additive) model, the former being closer to rose.
Magenta took its name from an aniline dye made and patented in 1859 by the French chemist François-Emmanuel Verguin, who originally called it fuchsine. It was renamed to celebrate the Italian-French victory at the Battle of Magenta fought between the French and Austrians on 4 June 1859 near the Italian town of Magenta in Lombardy. A virtually identical color, called roseine, was created in 1860 by two British chemists, Edward Chambers Nicholson, and George Maule.
The web color magenta is also called fuchsia.
In optics and color science
Magenta is an extra-spectral color, meaning that it is not a hue associated with monochromatic visible light. Magenta is associated with perception of spectral power distributions concentrated mostly in two bands: longer wavelength reddish components and shorter wavelength blueish components.
In the RGB color system, used to create all the colors on a television or computer display, magenta is a secondary color, made by combining equal amounts of red and blue light at a high intensity. In this system, magenta is the complementary color of green, and combining green and magenta light on a black screen will create white.
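A minimal sketch of this additive mixing with 8-bit RGB triples follows; the channel-wise saturating addition is a simplification of how a display combines light, assumed here purely for illustration.

# Sketch: additive (RGB) mixing with 8-bit channels.
# Saturating per-channel addition is an illustrative simplification.
def add_light(*colors):
    return tuple(min(255, sum(ch)) for ch in zip(*colors))

RED, GREEN, BLUE = (255, 0, 0), (0, 255, 0), (0, 0, 255)

magenta = add_light(RED, BLUE)       # (255, 0, 255): red + blue light
white   = add_light(magenta, GREEN)  # (255, 255, 255): green is the complement
print(magenta, white)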
In the CMYK color model, used in color printing, it is one of the three primary colors, along with cyan and yellow, used to print all the rest of the colors. If magenta, cyan, and yellow are printed on top of each other on a page, they make black. In this model, magenta is the complementary color of green. If combined, green and magenta ink will look dark brown or black. The magenta used in color printing, sometimes called process magenta, is a darker shade than the color used on computer screens.
In terms of physiology, the color is stimulated in the brain when the eye reports input from short wave blue cone cells along with a sub-sensitivity of the long wave cones which respond secondarily to that same deep blue color, but with little or no input from the middle wave cones. The brain interprets that combination as some hue of magenta or purple, depending on the relative strengths of the cone responses.
In the Munsell color system, magenta is called red-purple.
If the spectrum is wrapped to form a color wheel, magenta (additive secondary) appears midway between red and violet. Violet and red, the two components of magenta, are at opposite ends of the visible spectrum and have very different wavelengths. The additive secondary color magenta is made by combining violet and red light at equal intensity; it is not present in the spectrum itself.
Fuchsia and magenta
The web colors fuchsia and magenta are identical, made by mixing the same proportions of blue and red light. In design and printing, there is more variation. The French version of fuchsia in the RGB color model and in printing contains a higher proportion of red than the American version of fuchsia.
Gallery
History
Fuchsine and magenta dye (1859)
The color magenta was the result of the industrial chemistry revolution of the mid-nineteenth century, which began with William Perkin's 1856 invention of mauveine, the first synthetic aniline dye. The enormous commercial success of the dye and the new color it produced, mauve, inspired other chemists in Europe to develop new colors made from aniline dyes.
In France, François-Emmanuel Verguin, the director of the chemical factory of Louis Rafard near Lyon, tried many different formulae before finally in late 1858 or early 1859, mixing aniline with carbon tetrachloride, producing a reddish-purple dye which he called "fuchsine", after the color of the flower of the fuchsia plant. He quit the Rafard factory and took his color to a firm of paint manufacturers, Francisque and Joseph Renard, who began to manufacture the dye in 1859.
In the same year, two British chemists, Edward Chambers Nicholson and George Maule, working at the laboratory of the paint manufacturer George Simpson, located in Walworth, south of London, made another aniline dye with a similar red-purple color, which they began to manufacture in 1860 under the name "roseine". In 1860, they changed the name of the color to "magenta", in honor of the Battle of Magenta fought by the armies of France and Sardinia against Austrians at Magenta, Lombardy the year before, and the new color became a commercial success.
Starting in 1935, the family of quinacridone dyes was developed. These have colors ranging from red to violet, so nowadays a quinacridone dye is often used for magenta. Various tones of magenta—light, bright, brilliant, vivid, rich, or deep—may be formulated by adding varying amounts of white to quinacridone artist's paints.
Another dye used for magenta is Lithol Rubine BK. One of its uses is as a food coloring.
Process magenta (pigment magenta; printer's magenta) (1890s)
In color printing, the color called process magenta, pigment magenta, or printer's magenta is one of the three primary pigment colors which, along with yellow and cyan, constitute the three subtractive primary colors of pigment. (The secondary colors of pigment are blue, green, and red.) As such, the hue magenta is the complement of green: magenta pigments absorb green light; thus magenta and green are opposite colors.
The CMYK printing process was invented in the 1890s, when newspapers began to publish color comic strips.
Process magenta is not an RGB color, and there is no fixed conversion from CMYK primaries to RGB. Different formulations are used for printer's ink, so there may be variations in the printed color that is pure magenta ink.
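One commonly used nominal conversion, shown in the sketch below, simply scales each RGB channel by the remaining ink coverage. This is an illustrative formula only; as the text notes, there is no fixed CMYK-to-RGB conversion, and real ink behaviour and colour profiles are ignored here.

# Sketch: a rough nominal CMYK -> RGB preview formula (no fixed standard).
def cmyk_to_rgb(c, m, y, k):
    """c, m, y, k in [0, 1]; returns 8-bit RGB."""
    r = round(255 * (1 - c) * (1 - k))
    g = round(255 * (1 - m) * (1 - k))
    b = round(255 * (1 - y) * (1 - k))
    return r, g, b

# Pure process-magenta ink (C=0, M=1, Y=0, K=0) previews nominally as:
print(cmyk_to_rgb(0, 1, 0, 0))  # (255, 0, 255), though the printed ink
                                # looks much less vivid than this on screen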
Web colors magenta and fuchsia
The web color magenta is one of the three secondary colors in the RGB color model.
On the RGB color wheel, magenta is the color between rose and violet, and halfway between red and blue.
This color is called magenta in X11 and fuchsia in HTML. In the RGB color model, it is created by combining equal intensities of red and blue light. The two web colors magenta and fuchsia are exactly the same color. Sometimes the web color magenta is called electric magenta or electronic magenta.
While the magenta used in printing and the web color have the same name, they have important differences. Process magenta (the color used for magenta printing ink—also called printer's or pigment magenta) is much less vivid than the color magenta achievable on a computer screen. CMYK printing technology cannot accurately reproduce on paper the color on the computer screen. When the web color magenta is reproduced on paper, it is called fuchsia and it is physically impossible for it to appear on paper as vivid as on a computer screen.
Colored pencils and crayons called "magenta" are usually colored the color of process magenta (printer's magenta).
In science and culture
In art
Paul Gauguin (1848–1903) used a shade of magenta in 1890 in his portrait of Marie Lagadu, and in some of his South Seas paintings.
Henri Matisse and the members of the Fauvist movement used magenta and other non-traditional colors to surprise viewers, and to move their emotions through the use of bold colors.
Since the mid-1960s, water-based fluorescent magenta paint has been available to paint psychedelic black light paintings, alongside other fluorescent colors such as fluorescent cerise, fluorescent chartreuse yellow, fluorescent blue, and fluorescent green.
In literature
The color plays a central role in Craig Laurance Gidney's novel A Spectral Hue.
In film
The titular alien entity in the 2019 horror film Color Out of Space, an adaptation of the 1927 H. P. Lovecraft short story The Colour Out of Space, is depicted as being magenta due to the color's extra-spectral status.
In astronomy
Astronomers have reported that spectral class T brown dwarfs (the ones with the coolest temperatures except for the recently discovered Y brown dwarfs) are colored magenta because of absorption by sodium and potassium atoms of light in the green portion of the spectrum.
In biology: magenta insects, birds, fish, and mammals
In botany
Magenta is a common color for flowers, particularly in the tropics and sub-tropics. Because magenta is the complementary color of green, magenta flowers have the highest contrast with the green foliage, and therefore are more visible to the animals needed for their pollination.
In business
The German telecommunications company Deutsche Telekom uses a magenta logo. It has sought to prevent use of any similar color by other businesses, even those in unrelated fields, such as the insurance company Lemonade.
In public transport
Magenta was the English name for the color of Tokyo's Oedo subway line; it was later changed to ruby.
It is also the color of the Metropolitan line of the London Underground.
In transportation
In aircraft autopilot systems, the path that the pilot or plane should follow to its destination is usually indicated in cockpit displays using the color magenta.
In numismatics
The Reserve Bank of India (RBI) issued a Magenta colored banknote of ₹2000 denomination on 8 November 2016 under Mahatma Gandhi New Series. This is the highest currency note printed by RBI that is in active circulation in India.
In vexillology and heraldry
Magenta is an extremely rare color to find on heraldic flags and coats of arms, since its adoption dates back to relatively recent times. However, there are some examples of its use:
In politics
Throughout much of Europe, magenta (or variants of it, such as pink or amaranth) is used to symbolize social liberalism or classical liberalism.
The color magenta is used to symbolize anti-racism by the Amsterdam-based anti-racism Magenta Foundation.
In Danish politics, magenta is the color of Det Radikale Venstre, the Danish social-liberal party.
In Austrian politics, it is used to represent NEOS – The New Austria and Liberal Forum, a social liberal party.
In Belgium, it is used by DéFI, a social liberal party.
In Germany, magenta is one of the colors of the Free Democratic Party (FDP).
| Physical sciences | Colors | Physics |
46415 | https://en.wikipedia.org/wiki/Crayon | Crayon | A crayon (or wax pastel) is a stick of pigmented wax used for writing or drawing. Wax crayons differ from pastels, in which the pigment is mixed with a dry binder such as gum arabic, and from oil pastels, where the binder is a mixture of wax and oil.
Crayons are available in a range of prices, and are easy to work with. They are less messy than most paints and markers, blunt (removing the risk of sharp points present when using a pencil or pen), typically non-toxic, and available in a wide variety of colors. These characteristics make them particularly good instruments for teaching small children to draw in addition to being used widely by student and professional artists.
Composition
In the modern English-speaking world, the term crayon is commonly associated with the standard wax crayon, such as those widely available for use by children. Such crayons are usually approximately in length and made mostly of paraffin wax. Paraffin wax is heated and cooled to achieve the correct temperature at which a usable wax substance can be dyed and then manufactured and shipped for use around the world. Paraffin waxes are used for cosmetics, candles, for the preparation of printing ink, fruit preserving, in the pharmaceutical industry, for lubricating purposes, and crayons.
Colin Snedeker, a chemist for Binney & Smith (the then-parent company of Crayola), developed the first washable crayons in response to consumer complaints regarding stained fabrics and walls. A patent for the washable solid marking composition utilized in the washable crayons was awarded to Snedeker in 1990.
History
The history of the crayon is not entirely clear. The French word crayon, originally meaning "chalk pencil", dates to around the 16th century, and is derived from the word craie (chalk), which comes from the Latin word creta (Earth). The meaning later changed to simply "pencil", which it still means in modern French.
The notion to combine a form of wax with pigment goes back thousands of years. Encaustic painting is a technique that uses hot beeswax combined with colored pigment to bind color into stone. A heat source was then used to "burn in" and fix the image in place. Pliny the Elder, a Roman scholar, was thought to describe the first techniques of wax crayon drawings.
This method, employed by the Egyptians, Romans, Greeks, and indigenous people in the Philippines, is still used today. However, the process was not used to make crayons into a form intended to be held and colored with and was therefore ineffective for use in a classroom or as crafts for children.
Contemporary crayons are purported to have originated in Europe, where some of the first cylinder shaped crayons were made with charcoal and oil. Pastels are an art medium sharing roots with the modern crayon and date back to Leonardo da Vinci in 1495. Conté crayons, out of Paris, are a hybrid between a pastel and a conventional crayon, used since the late 1790s as a drawing crayon for artists. Later, various hues of powdered pigment eventually replaced the primary charcoal ingredient found in most early 19th century products. | Technology | Artist's and drafting tools | null |
46470 | https://en.wikipedia.org/wiki/Crop%20rotation | Crop rotation | Crop rotation is the practice of growing a series of different types of crops in the same area across a sequence of growing seasons. This practice reduces the reliance of crops on one set of nutrients, pest and weed pressure, along with the probability of developing resistant pests and weeds.
Growing the same crop in the same place for many years in a row, known as monocropping, gradually depletes the soil of certain nutrients and selects for both a highly competitive pest and weed community. Without balancing nutrient use and diversifying pest and weed communities, the productivity of monocultures is highly dependent on external inputs that may be harmful to the soil's fertility. Conversely, a well-designed crop rotation can reduce the need for synthetic fertilizers and herbicides by better using ecosystem services from a diverse set of crops. Additionally, crop rotations can improve soil structure and organic matter, which reduces erosion and increases farm system resilience.
History
Farmers have long recognized that suitable rotations such as planting spring crops for livestock in place of grains for human consumption make it possible to restore or to maintain productive soils. Ancient Near Eastern farmers practiced crop rotation in 6000 BC, alternately planting legumes and cereals.
Two-field systems
Under a two-field rotation, half the land was planted in a year, while the other half lay fallow. Then, in the next year, the two fields were reversed. In China both the two- and three-field systems had been used since the Eastern Zhou period.
Three-field systems
From the 9th century to the 11th century, farmers in Europe transitioned from a two-field system to a three-field system. This system persisted until the 20th century. Available land was divided into three sections. One section was planted in the autumn with rye or winter wheat, followed by spring oats or barley; the second section grew crops such as one of the legumes, namely peas, lentils, or beans; and the third field was left fallow. The three fields were rotated in this manner so that every three years, one of the fields would rest and lie fallow. Under the two-field system, only half the land was planted in any year. Under the new three-field rotation system, two thirds of the land was planted, potentially yielding a larger harvest. But the additional crops had a more significant effect than mere quantitative productivity. Since the spring crops were mostly legumes, which fix nitrogen needed for plants to make proteins, they increased the overall nutrition of the people of Europe.
Four-field rotations
Farmers in the region of Waasland (in present-day northern Belgium) pioneered a four-field rotation in the early 16th century, and the British agriculturist Charles Townshend (1674–1738) popularised this system in the 18th century. The sequence of four crops (wheat, turnips, barley and clover), included a fodder crop and a grazing crop, allowing livestock to be bred year-round. The four-field crop rotation became a key development in the British Agricultural Revolution.
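The cycling of the four crops across four fields can be pictured with a small scheduling sketch. The crop order follows the sequence named above; the code itself is only an illustration of the rotation pattern, not a historical farming tool.

# Sketch: four-field (Norfolk) rotation. Each field steps through
# wheat -> turnips -> barley -> clover, offset by one crop per field,
# so every crop is grown somewhere on the farm each year.
ROTATION = ["wheat", "turnips", "barley", "clover"]

def crop_for(field: int, year: int) -> str:
    """Crop grown on field 0-3 in a given 0-based year."""
    return ROTATION[(year + field) % len(ROTATION)]

for year in range(4):
    print(f"year {year}:", [crop_for(f, year) for f in range(4)])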
Modern developments
George Washington Carver (1860s–1943) studied crop-rotation methods in the United States, teaching southern farmers to rotate soil-depleting crops like cotton with soil-enriching crops like peanuts and peas.
In the Green Revolution of the mid-20th century, crop rotation gave way in the developed world to the practice of supplementing the chemical inputs to the soil through topdressing with fertilizers, adding (for example) ammonium nitrate or urea and restoring soil pH with lime. Such practices aimed to increase yields, to prepare soil for specialist crops, and to reduce waste and inefficiency by simplifying planting, harvesting, and irrigation.
Crop choice
A preliminary assessment of crop interrelationships can be found in how each crop:
Contributes to soil organic matter (SOM) content.
Provides for pest management.
Manages deficient or excess nutrients.
Contributes to or controls for soil erosion.
Interbreeds with other crops to produce hybrid offspring.
Impacts surrounding food webs and field ecosystems.
Crop choice is often related to the goal the farmer is looking to achieve with the rotation, which could be weed management, increasing available nitrogen in the soil, controlling for erosion, or increasing soil structure and biomass, to name a few. When discussing crop rotations, crops are classified in different ways depending on what quality is being assessed: by family, by nutrient needs/benefits, and/or by profitability (i.e. cash crop versus cover crop). For example, giving adequate attention to plant family is essential to mitigating pests and pathogens. However, many farmers have success managing rotations by planning sequencing and cover crops around desirable cash crops. The following is a simplified classification based on crop quality and purpose.
Row crops
Many crops which are critical for the market, like vegetables, are row crops (that is, grown in tight rows). While often the most profitable for farmers, these crops are more taxing on the soil. Row crops typically have low biomass and shallow roots: this means the plant contributes low residue to the surrounding soil and has limited effects on structure. With much of the soil around the plant exposed to disruption by rainfall and traffic, fields with row crops experience faster break down of organic matter by microbes, leaving fewer nutrients for future plants.
In short, while these crops may be profitable for the farm, they are nutrient depleting. Crop rotation practices exist to strike a balance between short-term profitability and long-term productivity.
Legumes
A great advantage of crop rotation comes from the interrelationship of nitrogen-fixing crops with nitrogen-demanding crops. Legumes, like alfalfa and clover, collect available nitrogen from the atmosphere and store it in nodules on their root structure. When the plant is harvested, the biomass of uncollected roots breaks down, making the stored nitrogen available to future crops.
Grasses and cereals
Cereal and grasses are frequent cover crops because of the many advantages they supply to soil quality and structure. The dense and far-reaching root systems give ample structure to surrounding soil and provide significant biomass for soil organic matter.
Grasses and cereals are key in weed management as they compete with undesired plants for soil space and nutrients.
Green manure
Green manure is a crop that is mixed into the soil. Both nitrogen-fixing legumes and nutrient scavengers, like grasses, can be used as green manure. Green manure of legumes is an excellent source of nitrogen, especially for organic systems; however, legume biomass does not contribute to lasting soil organic matter the way grasses do.
Planning a rotation
There are numerous factors that must be taken into consideration when planning a crop rotation. Planning an effective rotation requires weighing fixed and fluctuating production circumstances: market, farm size, labor supply, climate, soil type, growing practices, etc. Moreover, a crop rotation must consider in what condition one crop will leave the soil for the succeeding crop and how one crop can be seeded with another crop. For example, a nitrogen-fixing crop, like a legume, should always precede a nitrogen depleting one; similarly, a low residue crop (i.e. a crop with low biomass) should be offset with a high biomass cover crop, like a mixture of grasses and legumes.
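A toy check of a proposed sequence against the two rules of thumb just described might look like the sketch below. The crop traits in the table are illustrative assumptions for the example, not agronomic data.

# Sketch: flag rotation sequences that violate two rules of thumb from the
# text: a nitrogen fixer should precede a nitrogen depleter, and a
# low-residue crop should be offset by a high-biomass one. Traits assumed.
TRAITS = {
    "clover":  {"fixes_n": True,  "high_biomass": True},
    "alfalfa": {"fixes_n": True,  "high_biomass": True},
    "corn":    {"fixes_n": False, "high_biomass": True},
    "lettuce": {"fixes_n": False, "high_biomass": False},  # low-residue row crop
}

def warnings_for(sequence):
    notes = []
    for prev, nxt in zip(sequence, sequence[1:]):
        if not TRAITS[prev]["fixes_n"] and not TRAITS[nxt]["fixes_n"]:
            notes.append(f"{prev} -> {nxt}: no nitrogen-fixing crop between depleters")
        if not TRAITS[prev]["high_biomass"] and not TRAITS[nxt]["high_biomass"]:
            notes.append(f"{prev} -> {nxt}: consecutive low-residue crops")
    return notes

print(warnings_for(["corn", "lettuce", "clover", "corn"]))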
There is no limit to the number of crops that can be used in a rotation, or the amount of time a rotation takes to complete. Decisions about rotations are made years prior, seasons prior, or even at the last minute when an opportunity to increase profits or soil quality presents itself.
Implementation
Relationship to other systems
Crop rotation systems may be enriched by other practices such as the addition of livestock and manure, and by growing more than one crop at a time in a field. A monoculture is a crop grown by itself in a field. A polyculture involves two or more crops growing in the same place at the same time. Crop rotations can be applied to both monocultures and polycultures, resulting in multiple ways of increasing agricultural biodiversity (table).
Incorporation of livestock
Introducing livestock makes the most efficient use of critical sod and cover crops; livestock (through manure) are able to distribute the nutrients in these crops throughout the soil rather than removing nutrients from the farm through the sale of hay.
Mixed farming, the practice of crop cultivation with the incorporation of livestock, can help manage crops in a rotation and cycle nutrients. Crop residues provide animal feed, while the animals provide manure for replenishing crop nutrients and draft power. These processes promote internal nutrient cycling and minimize the need for synthetic fertilizers and large-scale machinery. As an additional benefit, cattle, sheep, and/or goats provide milk and can act as a cash crop in times of economic hardship.
Polyculture
Polyculture systems, such as intercropping or companion planting, offer more diversity and complexity within the same season or rotation. An example is the Three Sisters, the inter-planting of corn with pole beans and vining squash or pumpkins. In this system, the beans provide nitrogen; the corn provides support for the beans and a "screen" against squash vine borer; the vining squash provides a weed suppressive canopy and a discouragement for corn-hungry raccoons.
Double-cropping is common where two crops, typically of different species, are grown sequentially in the same growing season, or where one crop (e.g. vegetable) is grown continuously with a cover crop (e.g. wheat). This is advantageous for small farms, which often cannot afford to leave cover crops to replenish the soil for extended periods of time, as larger farms can. When multiple cropping is implemented on small farms, these systems can maximize benefits of crop rotation on available land resources.
Organic farming
Crop rotation is a required practice in the United States for farms seeking organic certification. The requirement is set out in the “Crop Rotation Practice Standard” for the National Organic Program under the U.S. Code of Federal Regulations, section §205.205.
In addition to lowering the need for inputs (by controlling pests and weeds and increasing available nutrients), crop rotation helps organic growers increase the amount of biodiversity on their farms. Biodiversity is also a requirement of organic certification; however, there are no rules in place to regulate or reinforce this standard. Increasing the biodiversity of crops has beneficial effects on the surrounding ecosystem and can host a greater diversity of fauna, insects, and beneficial microorganisms in the soil, as found by McDaniel et al. 2014 and Lori et al. 2017. Some studies point to increased nutrient availability from crop rotation under organic systems compared to conventional practices, as organic practices are less likely to inhibit beneficial microbes in soil organic matter.
While multiple cropping and intercropping benefit from many of the same principles as crop rotation, they do not satisfy the requirement under the NOP.
Benefits
Agronomists describe the benefits to yield in rotated crops as "The Rotation Effect". Rotation systems offer many benefits, broadly because they alleviate the negative factors of monoculture cropping systems. Specifically, improved nutrition; reduced pest, pathogen, and weed stress; and improved soil structure have in some cases been found to correlate with beneficial rotation effects.
Other benefits include reduced production cost. Overall financial risks are more widely distributed over more diverse production of crops and/or livestock. Less reliance is placed on purchased inputs and over time crops can maintain production goals with fewer inputs. This in tandem with greater short and long term yields makes rotation a powerful tool for improving agricultural systems.
Soil organic matter
The use of different species in rotation allows for increased soil organic matter (SOM), greater soil structure, and improvement of the chemical and biological soil environment for crops. With more SOM, water infiltration and retention improves, providing increased drought tolerance and decreased erosion.
Soil organic matter is a mix of decaying material from biomass with active microorganisms. Crop rotation, by nature, increases exposure to biomass from sod, green manure, and various other plant debris. The reduced need for intensive tillage under crop rotation allows biomass aggregation to lead to greater nutrient retention and utilization, decreasing the need for added nutrients. With tillage, disruption and oxidation of soil creates a less conducive environment for diversity and proliferation of microorganisms in the soil. These microorganisms are what make nutrients available to plants. So, where "active" soil organic matter is a key to productive soil, soil with low microbial activity provides significantly fewer nutrients to plants; this is true even though the quantity of biomass left in the soil may be the same.
Soil microorganisms also decrease pathogen and pest activity through competition. In addition, plants produce root exudates and other chemicals which manipulate their soil environment as well as their weed environment. Thus rotation allows increased yields from nutrient availability but also alleviation of allelopathy and competitive weed environments.
Carbon sequestration
Crop rotations greatly increase soil organic carbon (SOC) content, the main constituent of soil organic matter. Carbon, along with hydrogen and oxygen, is a macronutrient for plants. Highly diverse rotations spanning long periods of time have been shown to be even more effective in increasing SOC, while soil disturbances (e.g. from tillage) are responsible for exponential decline in SOC levels. In Brazil, conversion to no-till methods combined with intensive crop rotations has shown an SOC sequestration rate of 0.41 tonnes per hectare per year.
In addition to enhancing crop productivity, sequestration of atmospheric carbon has great implications in reducing rates of climate change by removing carbon dioxide from the air.
Nitrogen fixing
Rotations can add nutrients to the soil. Legumes, plants of the family Fabaceae, have nodules on their roots which contain nitrogen-fixing bacteria called rhizobia. During a process called nodulation, the rhizobia bacteria use nutrients and water provided by the plant to convert atmospheric nitrogen into ammonia, which is then converted into an organic compound that the plant can use as its nitrogen source. It therefore makes good sense agriculturally to alternate them with cereals (family Poaceae) and other plants that require nitrates. How much nitrogen is made available to the plants depends on factors such as the kind of legume, the effectiveness of the rhizobia bacteria, soil conditions, and the availability of elements necessary for plant food.
Pathogen and pest control
Crop rotation is also used to control pests and diseases that can become established in the soil over time. The changing of crops in a sequence decreases the population level of pests by (1) interrupting pest life cycles and (2) interrupting pest habitat. Plants within the same taxonomic family tend to have similar pests and pathogens. By regularly changing crops and keeping the soil occupied by cover crops instead of lying fallow, pest cycles can be broken or limited, especially cycles that benefit from overwintering in residue. For example, root-knot nematode is a serious problem for some plants in warm climates and sandy soils, where it slowly builds up to high levels in the soil, and can severely damage plant productivity by cutting off circulation from the plant roots. Growing a crop that is not a host for root-knot nematode for one season greatly reduces the level of the nematode in the soil, thus making it possible to grow a susceptible crop the following season without needing soil fumigation.
This principle is of particular use in organic farming, where pest control must be achieved without synthetic pesticides.
Weed management
Integrating certain crops, especially cover crops, into crop rotations is of particular value to weed management. These crops crowd out weeds through competition. In addition, the sod and compost from cover crops and green manure slows the growth of what weeds are still able to make it through the soil, giving the crops further competitive advantage. By slowing the growth and proliferation of weeds while cover crops are cultivated, farmers greatly reduce the presence of weeds for future crops, including shallow rooted and row crops, which are less resistant to weeds. Cover crops are, therefore, considered conservation crops because they protect otherwise fallow land from becoming overrun with weeds.
This system has advantages over other common practices for weeds management, such as tillage. Tillage is meant to inhibit growth of weeds by overturning the soil; however, this has a countering effect of exposing weed seeds that may have gotten buried and burying valuable crop seeds. Under crop rotation, the number of viable seeds in the soil is reduced through the reduction of the weed population.
In addition to their negative impact on crop quality and yield, weeds can slow down the harvesting process. Weeds make farmers less efficient when harvesting, because weeds like bindweed and knotgrass can become tangled in the equipment, resulting in a stop-and-go type of harvest.
Reducing soil erosion
Crop rotation can significantly reduce the amount of soil lost from erosion by water. In areas that are highly susceptible to erosion, farm management practices such as zero and reduced tillage can be supplemented with specific crop rotation methods to reduce raindrop impact, sediment detachment, sediment transport, surface runoff, and soil loss.
Protection against soil loss is maximized with rotation methods that leave the greatest mass of crop stubble (plant residue left after harvest) on top of the soil. Stubble cover in contact with the soil minimizes erosion from water by reducing overland flow velocity, stream power, and thus the ability of the water to detach and transport sediment. Stubble cover also prevents the disruption and detachment of soil aggregates that would otherwise cause macropores to become blocked, infiltration to decline, and runoff to increase. This significantly improves the resilience of soils when subjected to periods of erosion and stress.
When a forage crop breaks down, binding products are formed that act like an adhesive on the soil, which makes particles stick together, and form aggregates. The formation of soil aggregates is important for erosion control, as they are better able to resist raindrop impact, and water erosion. Soil aggregates also reduce wind erosion, because they are larger particles, and are more resistant to abrasion through tillage practices.
The effect of crop rotation on erosion control varies by climate. In regions under relatively consistent climate conditions, where annual rainfall and temperature levels are assumed, rigid crop rotations can produce sufficient plant growth and soil cover. In regions where climate conditions are less predictable, and unexpected periods of rain and drought may occur, a more flexible approach for soil cover by crop rotation is necessary. An opportunity cropping system promotes adequate soil cover under these erratic climate conditions. In an opportunity cropping system, crops are grown when soil water is adequate and there is a reliable sowing window. This form of cropping system is likely to produce better soil cover than a rigid crop rotation because crops are only sown under optimal conditions, whereas rigid systems are not necessarily sown in the best conditions available.
Crop rotations also affect the timing and length of when a field is subject to fallow. This is very important because, depending on a particular region's climate, a field can be most vulnerable to erosion while under fallow. Efficient fallow management is an essential part of reducing erosion in a crop rotation system. Zero tillage is a fundamental management practice that promotes crop stubble retention under longer unplanned fallows when crops cannot be planted. Such management practices that succeed in retaining suitable soil cover in areas under fallow will ultimately reduce soil loss. A recent decade-long study found that a common winter cover crop planted after potato harvest, such as fall rye, can reduce soil run-off by as much as 43%; the soil lost to run-off is typically the most nutrient-rich.
Biodiversity
As noted above, increasing the biodiversity of crops benefits the surrounding ecosystem and can host a greater diversity of fauna, insects, and beneficial microorganisms in the soil (McDaniel et al. 2014; Lori et al. 2017), such as arbuscular mycorrhizae, which increase nutrient uptake in plants. Increasing biodiversity also increases the resilience of agro-ecological systems.
Farm productivity
Crop rotation contributes to increased yields through improved soil nutrition. By requiring planting and harvesting of different crops at different times, more land can be farmed with the same amount of machinery and labour.
Risk management
Different crops in the rotation can reduce the risks of adverse weather for the individual farmer.
Challenges
While crop rotation requires a great deal of planning, crop choice must respond to a number of fixed conditions (soil type, topography, climate, and irrigation) in addition to conditions that may change dramatically from one year to the next (weather, market, labor supply). For this reason, it is unwise to plan crops years in advance. Improper implementation of a crop rotation plan may lead to imbalances in the soil nutrient composition or a buildup of pathogens affecting a critical crop. The consequences of faulty rotation may take years to become apparent, even to experienced soil scientists, and can take just as long to correct.
Many challenges exist within the practices associated with crop rotation. For example, green manure from legumes can lead to an invasion of snails or slugs and the decay from green manure can occasionally suppress the growth of other crops.
| Technology | Soil and soil management | null |
46573 | https://en.wikipedia.org/wiki/Oat | Oat | The oat (Avena sativa), sometimes called the common oat, is a species of cereal grain grown for its seed, which is known by the same name (usually in the plural). Oats appear to have been domesticated as a secondary crop, as their seeds resembled those of other cereals closely enough for them to be included by early cultivators. Oats tolerate cold winters less well than cereals such as wheat, barley, and rye, but need less summer heat and more rain, making them important in areas such as Northwest Europe that have cool wet summers. They can tolerate low-nutrient and acid soils. Oats grow thickly and vigorously, allowing them to outcompete many weeds, and compared to other cereals are relatively free from diseases.
Oats are used for human consumption as oatmeal, including as steel cut oats or rolled oats. Global production is dominated by Canada and Russia; global trade is a small part of production, most of the grain being consumed within the producing countries. Oats are a nutrient-rich food associated with lower blood cholesterol and reduced risk of human heart disease when consumed regularly. One of the most common uses of oats is as livestock feed; the crop can also be grown as groundcover and ploughed in as a green manure.
Origins
Phylogeny
Phylogenetic analysis using molecular DNA and morphological evidence places the oat genus Avena in the Pooideae subfamily. That subfamily includes the cereals wheat, barley, and rye; they are in the Triticeae tribe, while Avena is in the Poeae, along with grasses such as Briza and Agrostis. The wild ancestor of Avena sativa and the closely related minor crop – A. byzantina – is A. sterilis, a naturally hexaploid wild oat, one that has its DNA in six sets of chromosomes. Genetic evidence shows that the ancestral forms of A. sterilis grew in the Fertile Crescent of the Near East.
Analysis of maternal lineages of 25 Avena species using chloroplast and mitochondrial DNA showed that A. sativa hexaploid genome derives from three diploid oat species (each with two sets of chromosomes); the sets are dubbed A, B, C, and D. The diploid species are the CC A. ventricosa, the AA A. canariensis, and the AA A. longiglumis, along with two tetraploid oats (each with four sets), namely the AACC A. insularis and the AABB A. agadiriana. Tetraploids were formed as much as 10.6 mya, and hexaploids as much as 7.4 mya.
Domestication
Genomic study suggests that the hulled variety and the naked variety A. sativa var. nuda diverged around 51,200 years ago, long before domestication. This implies that the two varieties were domesticated independently.
Oats are thought to have emerged as a secondary crop. This means that they are derived from what was considered a weed of the primary cereal domesticates such as wheat. They survived as a Vavilovian mimic by having grains that Neolithic people found hard to distinguish from the primary crop.
Oats were cultivated for some thousands of years before they were domesticated. A granary from the Pre-Pottery Neolithic, about 11,400 to 11,200 years ago in the Jordan Valley in the Middle East contained a large number of wild oat grains (120,000 seeds of A. sterilis). The find implies intentional cultivation. Domesticated oat grains first appear in the archaeological record in Europe around 3000 years ago.
Description
The oat is a tall stout grass, a member of the family Poaceae; it can grow to a height of . The leaves are long, narrow, and pointed, and grow upwards; they can be some in length, and around in width. At the top of the stem, the plant branches into a loose cluster or panicle of spikelets. These contain the wind-pollinated flowers, which mature into the oat seeds or grains. Botanically the grain is a caryopsis, as the wall of the fruit is fused on to the actual seed. Like other cereal grains, the caryopsis contains the outer husk or bran, the starchy food store or endosperm which occupies most of the seed, and the protein-rich germ which if planted in soil can grow into a new plant.
Agronomy
Cultivation
Oats are annual plants best grown in temperate regions. They tolerate cold winters less well than wheat, rye, or barley; they are harmed by sustained cold below . They have a lower summer heat requirement and greater tolerance of (and need for) rain than the other cereals mentioned, so they are particularly important in areas with cool, wet summers, such as Northwest Europe.
Oats can grow in most fertile, drained soils, being tolerant of a wide variety of soil types. Although better yields are achieved at a soil pH of 5.3 to 5.7, oats can tolerate soils with a pH as low as 4.5. They are better able to grow in low-nutrient soils than wheat or maize, but generally are less tolerant of high soil salinity than other cereals. Traditionally, US farmers grew oats alongside red clover and alfalfa, which fixed nitrogen and provided animal forage. With less use of horses and more use of fertilizers, growth of these crops in the US declined. For example, the state of Iowa led US oat production until 1989, but has largely switched to maize and soybeans.
Weeds, pests, and diseases
Oats can outcompete many weeds, as they grow thickly (with many leafy shoots) and vigorously, but are still subject to some broadleaf weeds. Control can be by herbicides, or by integrated pest management with measures such as sowing seed that is free of weeds.
Oats are relatively free from diseases. Nonetheless, they suffer from some leaf diseases, such as stem rust (Puccinia graminis f. sp. avenae) and crown rust (P. coronata var. avenae).
Crown rust infection can greatly reduce photosynthesis and overall physiological activities of oat leaves, thereby reducing growth and crop yield.
Processing
Harvested oats go through multiple stages of milling. The first stage is cleaning, to remove seeds of other plants, stones, and any other extraneous materials. Next is dehulling to remove the indigestible outer hull, leaving the seed or "groat". Heating denatures enzymes in the seed that would make it go sour or rancid; the grain is then dried to minimise the risk of spoilage by bacteria and fungi. There may follow numerous stages of cutting or grinding the grain, depending on which sort of product is required. For oatmeal (oat flour), the grain is ground to a specified fineness. For home use such as making porridge, oats are often rolled flat to make them quicker to cook.
Oat flour can be ground for small scale use by pulsing rolled oats or old-fashioned (not quick) oats in a food processor or spice mill.
Production and trade
In 2022, global production of oats was 26 million tonnes, led by Canada with 20% of the total and Russia with 17% (table). This compares to over 100 million tonnes for wheat, for example. Global trade represents a modest percentage of production, less than 10%, most of the grain being consumed within producing countries. The main exporter is Canada, followed by Sweden and Finland; the US is the main importer.
Oats futures are traded in US dollars in quantities of 5000 bushels on the Chicago Board of Trade and have delivery dates in March, May, July, September, and December.
Genomics
Genome
Avena sativa is an allohexaploid species with three ancestral genomes (2n=6x=42; AACCDD). As a result, the genome is large (12.6 Gb, 1C-value=12.85) and complex. Cultivated hexaploid oat has a unique mosaic chromosome architecture that is the result of numerous translocations between the three subgenomes. These translocations may cause breeding barriers and incompatibilities when crossing varieties with different chromosomal architecture. Hence, oat breeding and the crossing of desired traits has been hampered by the lack of a reference genome assembly. In May 2022, a fully annotated reference genome sequence of Avena sativa was reported. The AA subgenome is presumed to be derived from Avena longiglumis and the CCDD from the tetraploid Avena insularis.
Genetics and breeding
Species of Avena can hybridize, and genes introgressed (brought in) from other "A" genome species have contributed many valuable traits, such as resistance to oat crown rust. One such trait, introgressed from A. sterilis CAV 1979, confers all-stage resistance (ASR) against Pca, the crown rust pathogen.
It is possible to hybridize oats with grasses in other genera, allowing plant breeders the ready introgression of traits. In contrast to wheat, oats sometimes retain chromosomes from maize or pearl millet after such crosses. These wide crosses are typically made to generate doubled haploid breeding material; the rapid loss of the alien chromosomes from the unrelated pollen donor results in a plant with only a single set of chromosomes (a haploid).
Addition lines carrying alien chromosomes can be used as a source of novel traits in oats. For example, research on oat-maize addition lines has been used to map genes involved in C4 photosynthesis. To obtain Mendelian inheritance of these novel traits, radiation hybrid lines have been established, in which maize chromosome segments have been introgressed into the oat genome. This approach can transfer thousands of genes from a distantly related species, yet is not considered a GMO technique.
A 2013 study applied simple sequence repeat (SSR) markers and found five major groupings, namely commercial cultivars and four landrace groups.
Nutritive value
Nutrients
Uncooked oats are 66% carbohydrates, including 11% dietary fiber and 4% beta-glucans, 7% fat, 17% protein, and 8% water (table). In a 100 g reference serving, oats are a rich source (20% or more of the Daily Value, DV) of protein (34% DV), dietary fiber (44% DV), several B vitamins, and numerous dietary minerals, especially manganese (213% DV) (table).
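The %DV figures above follow from a simple ratio. Here is a minimal sketch of that arithmetic; the Daily Value reference amounts below are assumptions based on US FDA labeling values and are not stated in this article.

```python
# Minimal sketch of the Daily Value (%DV) arithmetic used above.
# The DV reference amounts are assumed FDA labeling values,
# not figures taken from this article.
DAILY_VALUES = {
    "protein_g": 50.0,        # assumed FDA DV for protein
    "dietary_fiber_g": 28.0,  # assumed FDA DV for dietary fiber
    "manganese_mg": 2.3,      # assumed FDA DV for manganese
}

def percent_dv(amount: float, nutrient: str) -> float:
    """Return a nutrient amount as a percentage of its Daily Value."""
    return 100.0 * amount / DAILY_VALUES[nutrient]

# Cross-check: 17 g protein per 100 g against an assumed 50 g DV gives 34%,
# matching the 34% DV quoted above.
print(round(percent_dv(17.0, "protein_g")))    # 34
# Working backwards, 213% DV for manganese implies about
# 2.3 mg * 2.13 ≈ 4.9 mg of manganese per 100 g of uncooked oats.
print(round(percent_dv(4.9, "manganese_mg")))  # ~213
```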
Health effects
Regular consumption of oat products lowers blood levels of low-density lipoprotein and total cholesterol, reducing the risk of cardiovascular disease. The beneficial effect of oat consumption on lowering blood lipids is attributed to oat beta-glucan. Oat consumption can help to reduce body mass index in obese people.
The United States Food and Drug Administration allows companies to make health claims on labels of food products that contain soluble fiber from whole oats, as long as the food provides 0.75 grams of soluble fiber per serving.
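As a rough worked example, assuming the commonly cited 3 g/day soluble-fiber total behind such claims (a figure not stated in this article), the per-serving threshold implies:

```latex
\frac{3\ \text{g/day}}{0.75\ \text{g/serving}} = 4\ \text{servings per day}
```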
Uses
As food
When used in foods, oats are most commonly rolled or crushed into oatmeal or ground into fine oat flour. Oatmeal is chiefly eaten as porridge, but may also be used in a variety of baked goods, such as oatcakes (which may be made with coarse steel-cut oats for a rougher texture), oatmeal cookies, and oat bread. Oats are an ingredient in many cold cereals, in particular muesli and granola; the Quaker Oats Company introduced instant oatmeal in 1966. Oats are also used to produce milk substitutes ("oat milk"); the oat milk market has become the second-largest among plant milks in the United States, following almond milk but exceeding the sales of soy milk. A mainstay of the diet in West Wales for centuries, until changes in farming practices in the 1960s, oats were used in many traditional Welsh dishes, including laverbread, a Welsh breakfast, and "cockles and eggs" served with oatbread.
In Britain, oats are sometimes used for brewing beer, such as oatmeal stout where a percentage of oats, often 30%, is added to the barley for the wort. Oatmeal caudle, made of ale and oatmeal with spices, was a traditional British drink and a favourite of Oliver Cromwell.
Animal feed
Oats are commonly used as feed for horses when extra carbohydrates and the subsequent boost in energy are required. The oat hulls may be crushed ("rolled" or "crimped") to make the grain easier to digest, or the oats may be fed whole. They may be given alone or as part of a blended food pellet. Cattle are also fed oats, either whole or ground into a coarse flour using a roller mill, burr mill, or hammermill. Oat forage is commonly used to feed all kinds of ruminants, as pasture, straw, hay or silage.
Ground cover
Winter oats may be grown as an off-season groundcover and ploughed under in the spring as a green fertilizer, or harvested in early summer. They also can be used for pasture; they can be grazed a while, then allowed to head out for grain production, or grazed continuously until other pastures are ready.
Other uses
Oat straw is used as animal bedding; it absorbs liquids better than wheat straw. The straw can be used for making corn dollies, small decorative woven figures. Tied in a muslin bag, oat straw has been used to soften bath water.
Celiac disease
Celiac (or coeliac) disease is a permanent autoimmune disease triggered by gluten proteins. It almost always occurs in genetically predisposed people, having a prevalence of about 1% in the developed world. Oat products are frequently contaminated by other gluten-containing grains, mainly wheat and barley, requiring caution in the use of oats if people are sensitive to the gluten in those grains. For example, oat bread often contains only a small proportion of oats alongside wheat or other cereals. Use of pure oats in a gluten-free diet offers improved nutritional value, but remains controversial because a small proportion of people with celiac disease react to pure oats.
In human culture
In his 1755 Dictionary of the English Language, Samuel Johnson defined oats as "A grain, which in England is generally given to horses, but in Scotland supports the people."
"Oats and Beans and Barley Grow" is the first line of a traditional folksong (1380 in the Roud Folk Song Index), recorded in different forms from 1870. Similar songs are recorded from France, Canada, Belgium, Sweden, and Italy.
In English, oats are associated with sexual intercourse, as in the idioms "sowing one's (wild) oats", meaning having many sexual partners in one's youth, and "getting your oats", meaning having sex regularly.
| Biology and health sciences | Poales | null |
46574 | https://en.wikipedia.org/wiki/Rye | Rye | Rye (Secale cereale) is a grass grown extensively as a grain, a cover crop and a forage crop. It is grown principally in an area from Eastern and Northern Europe into Russia. It is much more tolerant of cold weather and poor soil than other cereals, making it useful in those regions; its vigorous growth suppresses weeds and provides abundant forage for animals early in the year. It is a member of the wheat tribe (Triticeae) which includes the cereals wheat and barley. Rye grain is used for bread, beer, rye whiskey, and animal fodder. In Scandinavia, rye was a staple food in the Middle Ages, and rye crispbread remains a popular food in the region. Europe produces around half of the world's rye; relatively little is traded between countries. A wheat-rye hybrid, triticale, combines the qualities of the two parent crops and is produced in large quantities worldwide. In European folklore, the Roggenwolf ("rye wolf") is a carnivorous corn demon or Feldgeist.
Origins
The rye genus Secale is in the grass tribe Triticeae, which contains other cereals such as barley (Hordeum) and wheat (Triticum).
The generic name Secale, related to Italian segale and French seigle, both meaning "rye", is of unknown origin but may derive from a Balkan language. The English name rye derives from Old English ryge, related to Dutch rogge, German Roggen, and Russian rozh', again all with the same meaning.
Rye is one of several cereals that grow wild in the Levant, central and eastern Turkey and adjacent areas. Evidence uncovered at the Epipalaeolithic site of Tell Abu Hureyra in the Euphrates valley of northern Syria suggests that rye was among the first cereal crops to be systematically cultivated, around 13,000 years ago. However, that claim remains controversial; critics point to inconsistencies in the radiocarbon dates, and identifications based solely on grain, rather than on chaff.
Domesticated rye occurs in small quantities at a number of Neolithic sites in Asia Minor (Anatolia, now Turkey), such as the Pre-Pottery Neolithic B Can Hasan III near Çatalhöyük, but is otherwise absent from the archaeological record until the Bronze Age of central Europe, c. 1800–1500 BCE.
It is likely that rye was brought westwards from Asia Minor as a secondary crop, meaning that it was a minor admixture in wheat as a result of Vavilovian mimicry, and was only later cultivated in its own right. Archeological evidence of this grain has been found in Roman contexts along the Rhine and the Danube and in Ireland and Britain. The Roman naturalist Pliny the Elder was dismissive of a grain that may have been rye, writing that it "is a very poor food and only serves to avert starvation". He said it was mixed with spelt "to mitigate its bitter taste, and even then is most unpleasant to the stomach".
Description
Rye is a tall grass grown for its seeds; it can be an annual or a biennial, and its height depends on environmental conditions and variety. Its leaves are blue-green, long, and pointed. The seeds are carried in a curved head or spike composed of many spikelets, each of which holds two small flowers; the spikelets alternate left and right up the head.
Cultivation
Since the Middle Ages, people have cultivated rye widely in Central and Eastern Europe. It serves as the main bread cereal in most areas east of the France–Germany border and north of Hungary. In Southern Europe, it was cultivated on marginal lands.
Rye grows well in much poorer soils than those necessary for most cereal grains. Thus, it is an especially valuable crop in regions where the soil has sand or peat. Rye plants withstand cold better than other small grains, surviving snow cover that would kill winter wheat. Winter rye is the most popular: it is planted and begins to grow in autumn. In spring, the plants develop rapidly. This allows it to provide spring grazing, at a time when spring-planted wheat has only just germinated.
Physical properties of the rye seed, such as size, surface area, and porosity, affect processing and attributes of the final food product. The surface area of the seed directly correlates with drying and heat-transfer time: smaller seeds transfer heat faster and so dry more quickly, while seeds with lower porosity lose water more slowly during drying.
Rye is harvested like wheat with a combine harvester, which cuts the plants, threshes and winnows the grain, and releases the straw to the field where it is later pressed into bales or left as soil amendment. The resultant grain is stored in local silos or transported to regional grain elevators and combined with other lots for storage and distant shipment. Before the era of mechanised agriculture, rye harvesting was a manual task performed with scythes or sickles.
Agroecology
Winter rye is any breed of rye planted in the autumn to provide ground cover for the winter. It grows during warmer days of the winter when sunlight temporarily warms the plant above freezing, even while there is general snow cover. It can be used as a cover crop to prevent the growth of winter-hardy weeds.
Rye grows better than any other cereal in heavy clay and light sandy soil, and infertile or drought-affected soils. It can tolerate pH between 4.5 and 8.0, but soils having pH 5.0 to 7.0 are best suited for rye cultivation. Rye grows best in fertile, well-drained loam or clay-loam soils. As for temperature, the crop can thrive in subzero environments, assisted by the production of antifreeze polypeptides (different from those produced by some fish and insects) by the leaves of winter rye.
Rye is a common, unwanted invader of winter wheat fields. If allowed to grow and mature, it may cause substantially reduced prices (docking) for harvested wheat.
Pests and diseases
Pests including the nematode Ditylenchus dipsaci and a variety of herbivorous insects can seriously affect plant health.
Rye is highly susceptible to the ergot fungus. Consumption of ergot-infected rye by humans and animals results in ergotism, which causes convulsions, miscarriage, necrosis of digits, hallucinations and death. Historically, damp northern countries that depended on rye as a staple crop were subject to periodic epidemics. Modern grain-cleaning and milling methods have practically eliminated ergotism, but it remains a risk if food safety vigilance breaks down.
After an absence of 60 years, stem rust (Puccinia graminis f. sp. tritici) has returned to Europe in the 2020s. Areas affected include Germany, Russia (Western Siberia), Spain, and Sweden.
Production and consumption
Rye is grown primarily in Eastern, Central and Northern Europe. The main rye belt stretches from northern Germany through Poland, Ukraine, and eastwards into central and northern Russia. Rye is also grown in North America, in South America including Argentina, in Oceania (Australia and New Zealand), in Turkey, and in northern China. Production levels of rye have fallen since 1992 in most of the producing nations; for instance, production of rye in Russia fell from 13.9 million tonnes in 1992 to 2.2 million tonnes in 2022.
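In relative terms, the Russian figures above amount to a decline of roughly:

```latex
\frac{13.9 - 2.2}{13.9} \times 100\% \approx 84\%
```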
World trade of rye is low compared with other grains such as wheat. The total export of rye for 2016 was $186 million compared with $30.1 billion for wheat.
Poland consumes the most rye per person (as of 2009), followed by the Nordic and Baltic countries; per-capita consumption in the EU as a whole is well above the world average.
Nutritional value
Raw rye contains 11% water, 76% carbohydrates, 10% protein, and 2% fat (table). A 100-gram reference amount of rye is a rich source (20% or more of the Daily Value, DV) of essential nutrients, including dietary fiber and B vitamins such as thiamine and niacin (each at 25% DV), and of several dietary minerals. The highest micronutrient contents are for manganese (130% DV) and phosphorus (27% DV) (table).
Health effects
According to Health Canada and the U.S. Food and Drug Administration, consuming sufficient rye beta-glucan per day, or soluble fiber per serving, can lower levels of blood cholesterol, a risk factor for cardiovascular diseases.
Eating whole-grain rye, as well as other high-fibre grains, improves regulation of blood sugar (i.e., reduces blood glucose response to a meal). Consuming breakfast cereals containing rye over weeks to months also improved cholesterol levels and glucose regulation.
Health concerns
Like wheat, barley, and their hybrids and derivatives, rye contains glutens and related prolamines, which makes it an unsuitable grain for consumption by people with gluten-related disorders, such as celiac disease, non-celiac gluten sensitivity, and wheat allergy, among others. Nevertheless, some wheat allergy patients can tolerate rye or barley.
Uses
Food and drink
Rye grain is refined into a flour high in gliadin but low in glutenin and rich in soluble fiber. Alkylresorcinols are phenolic lipids present in high amounts in the bran layer (e.g. pericarp, testa and aleurone layers) of wheat and rye (0.1–0.3% of dry weight). Rye bread, including pumpernickel, is made using rye flour and is a widely eaten food in Northern and Eastern Europe. In Scandinavia, rye is widely used to make crispbread; in the Middle Ages it was a staple food in the region, and it remains popular in the 21st century.
Rye grain is used to make alcoholic drinks, such as rye whiskey and rye beer. The traditional cloudy and sweet-sour low-alcohol beverage kvass is fermented from rye bread or rye flour and malt.
Other uses
Rye is a useful forage crop in cool climates; it grows vigorously and provides plentiful fodder for grazing animals, or green manure to improve the soil. It forms a good cover crop in winter with its rapid growth and deep roots.
Rye straw is used as livestock bedding, despite the risk of ergot poisoning. It is used on a small scale to make crafts such as corn dollies. More recently it has found uses as a raw material for bioconversion to products such as the sweetener xylitol.
Rye flour is mixed with linseed oil and iron oxide to make traditional Falun red paint, widely used as a house paint in Sweden.
Production of hybrids
Plant breeders, starting in the 19th century in Germany and Scotland, but mainly from the 1950s, worked to develop a hybrid cereal with the best qualities of wheat and rye, now called triticale. Modern triticales are hexaploid with six sets of chromosomes; they are used to produce millions of tons of cereal annually.
Varieties of rye hold much genetic diversity, which can be used to improve other crops such as wheat. For example, the pollination abilities of wheat can be improved by the addition of the rye chromosome 4R; this increases the size of the wheat anther and the amount of pollen. The 1R chromosome is the source of many crop disease resistance genes: varieties such as Petkus, Insave, Amigo, and Imperial have donated 1R-originating resistance to wheat. AC Hazlet rye is a medium-sized winter rye with resistance to both lodging and shattering. Rye was also the gene donor of Sr31, a stem rust resistance gene introgressed into wheat.
The characteristics of S. cereale have been combined with those of the perennial rye S. montanum to produce S. cereanum, which has the beneficial characteristics of each. The hybrid rye can be grown in harsh environments and on poor soil, and provides improved forage with digestible fiber and protein.
In human culture
In European folklore, the Roggenwolf ("rye wolf") is a carnivorous corn demon or Feldgeist, a field spirit shaped like a wolf. The Roggenwolf steals children and feeds on them. The last grain heads are often left at their place as a sacrifice for the agricultural spirits.
In contrast, the Roggenmuhme or Roggenmutter ("rye aunt" or "rye mother") is an anthropomorphic female corn demon with fiery fingers. Her bosoms are filled with tar and may end in tips of iron. Her bosoms are also long, and as such must be thrown over her shoulders when she runs. The Roggenmuhme is completely black or white, and in her hand she has a birch or whip from which lightning sparks. She can change herself into different animals, such as snakes, turtles, and frogs.
The classical scholar Carl A. P. Ruck writes that the Roggenmutter was believed to go through the fields, rustling like the wind, with a pack of rye wolves running after her. They spread ergot through the sheaves of harvested rye. According to Ruck, they then lured children into the fields to nurse on the infected grains "like the iron teats of the Roggenmutter". The enlarged reddish ergot-infected grains were known as Wulfzähne (wolf teeth).
| Biology and health sciences | Poales | null |
46576 | https://en.wikipedia.org/wiki/Turnip | Turnip | The turnip or white turnip (Brassica rapa subsp. rapa) is a root vegetable commonly grown in temperate climates worldwide for its white, fleshy taproot. Small, tender varieties are grown for human consumption, while larger varieties are grown as feed for livestock. In many regions, the name turnip is also used to refer to the rutabaga (or neep or swede), a different but related vegetable.
Etymology
The origin of the word turnip is uncertain, though it is hypothesised that it could be a compound of turn as in turned/rounded on a lathe and neep, derived from Latin napus, the word for the plant. According to An Universal Etymological English Dictionary, turn refers to "round napus to distinguish it from the napi, which were generally long".
Description
The most common type of turnip is mostly white-skinned, apart from the upper portion, which protrudes above the ground and is purple, red, or greenish where the sun has hit. This above-ground part develops from stem tissue but is fused with the root. The interior flesh is entirely white. The root is roughly globular and lacks side roots. Underneath, the taproot (the normal root below the swollen storage root) is thin; it is often trimmed off before the vegetable is sold. The leaves grow directly from the above-ground shoulder of the root, with little or no visible crown or neck (as found in rutabagas).
Turnip leaves are sometimes eaten as "turnip greens" ("turnip tops" in the UK), and they resemble mustard greens (to which they are closely related) in flavor. Turnip greens are a common side dish in southeastern U.S. cooking, primarily during late fall and winter. Smaller leaves are preferred. Varieties of turnip grown specifically for their leaves resemble mustard greens and have small roots. These include rapini (broccoli rabe), bok choy, and Chinese cabbage. Similar to raw cabbage or radish, turnip leaves and roots have a pungent flavor that becomes milder after cooking.
Turnip roots can grow large, although they are usually harvested when smaller. Size is partly a function of variety and partly a function of the length of time the turnip has grown.
Nutrition
Boiled green leaves of the turnip top ("turnip greens") are 93% water, 4% carbohydrates, and 1% protein, with negligible fat, and provide little food energy in a 100-gram reference serving (table). The boiled greens are a rich source (more than 20% of the Daily Value, DV) particularly of vitamin K (350% DV), with vitamin A, vitamin C, and folate also in significant content (30% DV or greater, table). Boiled turnip greens also contain substantial lutein (8440 micrograms per 100 g).
In a 100-gram reference amount, boiled turnip root supplies little food energy, with only vitamin C in a moderate amount (14% DV). Other micronutrients in boiled turnip are in low or negligible content (table). Boiled turnip is 94% water, 5% carbohydrates, and 1% protein, with negligible fat.
History
Wild forms of the turnip and its relatives, the mustards and radishes, are found over western Asia and Europe. Starting as early as 2000 BCE, related oilseed subspecies of Brassica rapa, such as oleifera, may have been domesticated several times from the Mediterranean to India, though these are not the same subspecies as the turnip cultivated for its roots. Previous estimates of domestication dates are limited to linguistic analyses of plant names.
Edible turnips were first domesticated in Central Asia several thousand years ago, a view supported by genetic studies of both wild and domesticated varieties showing that Central Asian varieties are the most genetically diverse. Ancient literary references to turnips in Central Asia, and the existence of words for "turnip" in ancestral languages of the region, also support the turnip as the original domesticated form of Brassica rapa subsp. rapa. It later spread to Europe and East Asia, with farmers in both areas later selecting for larger leaves; it subsequently became an important food in the Hellenistic and Roman world. The turnip spread to China, and reached Japan by 700 CE.
Turnips were an important crop in the cuisine of Antebellum America. They were grown for their greens as well as the roots, and could yield edible greens within a few weeks of planting, making them a staple of new plantations still in the process of becoming productive. They could be planted as late as the fall and still provide newly arrived settlers with a source of food. The typical southern way of cooking turnip greens was to boil them with a chunk of salt pork. The broth obtained from this process was known as pot likker and was served with crumbled corn pone, often made from coarse meal when little else was available along the antebellum frontier.
Cultivation
The 1881 American Household Cyclopedia advises that turnips can be grown in fields that have been harrowed and ploughed. It recommends planting in late May or June and weeding and thinning with a hoe throughout the summer.
As a root crop, turnips grow best in cool weather; hot temperatures cause the roots to become woody and bad-tasting. They are typically planted in the spring in cold-weather climates (such as the northern US and Canada) where the growing season is only 3–4 months. In temperate climates (ones with a growing season of 5–6 months), turnips may also be planted in late summer for a second fall crop. In warm-weather climates (7 or more month growing season), they are planted in the fall. 55–60 days is the average time from planting to harvest.
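The 55–60 day figure above lends itself to simple back-calculation of a planting window. Here is a minimal sketch; the target harvest date in the example is hypothetical.

```python
# Minimal sketch: back-calculating a turnip planting window from a
# target harvest date, using the 55-60 day planting-to-harvest
# range stated above.
from datetime import date, timedelta

def planting_window(harvest: date,
                    min_days: int = 55,
                    max_days: int = 60) -> tuple[date, date]:
    """Return the (earliest, latest) planting dates for a target harvest."""
    return (harvest - timedelta(days=max_days),
            harvest - timedelta(days=min_days))

# Example: for a hypothetical fall harvest on 15 October, planting
# should fall in mid-to-late August.
earliest, latest = planting_window(date(2024, 10, 15))
print(earliest, latest)  # 2024-08-16 2024-08-21
```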
Turnips are biennial plants, taking two years from germination to reproduction. The plant spends the first year growing and storing nutrients; in the second year it flowers, produces seeds, and dies. The flower stalks of the turnip are tall and bear yellow flowers, with the seeds forming in pea-like pods. In areas with growing seasons shorter than seven months, temperatures are too cold for the roots to survive the winter. To produce seeds, it is necessary to pull the turnips and store them over winter, taking care not to damage the leaves. During the spring, they may be set back in the ground to complete their lifecycle.
Relevance in human use
In England around 1700, Charles "Turnip" Townshend promoted the use of turnips in a four-year crop-rotation system that enabled year-round livestock feeding.
In Scottish and some other English dialects, the word turnip can also refer to the rutabaga (North American English), also known as the swede in England, a variety of Brassica napus that is a hybrid between the turnip, Brassica rapa, and the cabbage. Turnips are generally smaller with white flesh, while rutabagas are larger with yellow flesh. Scottish English sometimes distinguishes turnips as white turnips, and rutabagas as neeps.
In the Austrian region of Wildschönau, farmers produce a kind of schnapps called Krautinger from a variety of Brassica rapa subsp. rapa, having been granted permission to do so under Empress Maria Theresia in the 18th century. It is notorious for its distinct taste and smell.
Heraldry
The turnip is an old vegetable charge in heraldry. It was used by Leonhard von Keutschach, prince-archbishop of Salzburg. The turnip is still the heart shield in the arms of Keutschach am See.
The arms of the former municipality of Kiikala, Finland, were Gules, a turnip Or.
| Biology and health sciences | Brassicales | null |
46590 | https://en.wikipedia.org/wiki/Hedgehog | Hedgehog | A hedgehog is a spiny mammal of the subfamily Erinaceinae, in the eulipotyphlan family Erinaceidae. There are 17 species of hedgehog in five genera found throughout parts of Europe, Asia, and Africa, and in New Zealand by introduction. There are no hedgehogs native to Australia and no living species native to the Americas. However, the extinct genus Amphechinus was once present in North America.
Hedgehogs share distant ancestry with shrews (family Soricidae), with gymnures possibly being the intermediate link, and they have changed little over the last 15 million years. Like many of the first mammals, they have adapted to a nocturnal way of life. Their spiny protection resembles that of porcupines, which are rodents, and echidnas, a type of monotreme.
Etymology
The name hedgehog came into use around the year 1450, derived from the Middle English heyghoge, from hegge ("hedge"), because it frequents hedgerows, and hogge ("hog"), from its piglike snout. Another name that is used is hedgepig.
Description
Hedgehogs are easily recognized by their spines, which are hollow hairs made stiff with keratin. Their spines are not poisonous or barbed and, unlike the quills of a porcupine, do not easily detach from their bodies. However, the immature animal's spines normally fall out as they are replaced with adult spines. This is called "quilling". Spines can also shed when the animal is diseased or under extreme stress. Hedgehogs are usually brown, with pale tips to the spines, though blonde hedgehogs are found on the Channel Island of Alderney.
Hedgehogs roll into a tight spiny ball when threatened, tucking in the furry face, feet, and belly. The hedgehog's back contains two large muscles that control the direction of the quills. Some lightweight desert hedgehog species with fewer spines are more likely to flee or attack, ramming an intruder with the spines and rolling up only as a last resort.
Hedgehogs are primarily nocturnal, with some species also active during the day. Hedgehogs sleep for a large portion of the day under bushes, grasses, rocks, or most commonly in dens dug underground. All wild hedgehogs can hibernate, though the duration depends on temperature, species, and abundance of food.
Hedgehogs are fairly vocal, with a variety of grunts, snuffles and/or squeals.
They occasionally perform a ritual called anointing. When the animal encounters a new scent, it will lick and bite the source, then form a scented froth in its mouth and paste it on its spines with its tongue. Some experts believe this might serve to camouflage the hedgehog with the local scent, and might also lead to infection of predators poked by the spines. Anointing is sometimes also called anting after a similar behavior in birds.
Like opossums, mice, and moles, hedgehogs have some natural immunity against some snake venom through the protein erinacin in their muscles, though in such small amounts that a viper bite may still be fatal. In addition, hedgehogs are one of four known mammalian groups with natural protection against another snake venom, α-neurotoxin. Developing independently, pigs, honey badgers, mongooses, and hedgehogs all have mutations in the nicotinic acetylcholine receptor that prevent the binding of the snake venom α-neurotoxin.
The sense of smell has been little studied in the hedgehog, as the olfactory part of the mammal brain is obscured inside the neopallium. Tests have suggested that hedgehogs share the same olfactory electrical activity as cats.
Diet
Although traditionally classified in the abandoned order Insectivora, hedgehogs are omnivorous. They feed on insects, snails, frogs and toads, snakes, bird eggs, carrion, mushrooms, grass roots, berries, and melons. Afghan hedgehogs devour berries in early spring after hibernation.
Hedgehogs have been observed eating cat food left outdoors for pets, but this may not be a proper food for hedgehogs in captivity.
Hibernation
When a hedgehog hibernates, its body temperature decreases well below its normal level.
Reproduction and lifespan
Hedgehog gestation lasts 35–58 days, depending on species. The average litter is three to four newborns for larger species and five to six for smaller ones. As with many animals, it is not unusual for an adult male hedgehog to kill newborn males.
Hedgehogs have a relatively long lifespan for their size. In captivity, lack of predators and controlled diet contribute to a lifespan of eight to ten years depending on size. In the wild, larger species live four to seven years (some recorded up to 16 years), and smaller species live two to four years (four to seven in captivity). This compares to a mouse at two years and a large rat at three to five years.
Newborn hoglets are blind, with their quills covered by a protective membrane which dries and shrinks over several hours, and falls off after cleaning, allowing the quills to emerge.
Predators
The various species have many predators: while forest hedgehogs are prey primarily to birds (especially owls) and ferrets, smaller species like the long-eared hedgehog are prey to foxes, wolves, and mongooses. Hedgehog bones have been found in the pellets of the Eurasian eagle owl.
In Britain, the main predator is the European badger. European hedgehog populations in the United Kingdom are lower in areas with many badgers, and hedgehog rescue societies will not release hedgehogs into known badger territories. Badgers also compete with hedgehogs for food.
Domestication
The most common pet species of hedgehog are hybrids of the white-bellied hedgehog or four-toed hedgehog (Atelerix albiventris, sometimes known as the African pygmy hedgehog) and the smaller North African hedgehog (A. algirus, pygmy hedgehog). Other species kept as pets are the long-eared hedgehog (Hemiechinus auritus) and the Indian long-eared hedgehog (H. collaris).
It is illegal to own a hedgehog as a pet in the US states of Hawaii, Georgia, Pennsylvania, and California, as well as in New York City, Washington, D.C., and some Canadian municipalities; breeding licenses are required. No such restrictions exist in most European countries, with the exception of Scandinavia. In Italy, it is illegal to keep wild hedgehogs as pets.
As invasive species
In areas where hedgehogs have been introduced, such as New Zealand and the islands of Scotland, the hedgehog has become a pest, lacking natural predators. In New Zealand it has decimated native species including insects, snails, lizards and ground-nesting birds, particularly shore birds.
Eradication can be troublesome. Attempts to eliminate hedgehogs from bird colonies on the Scottish islands of North Uist and Benbecula in the Outer Hebrides were met with international protest. Eradication began in 2003 with 690 hedgehogs killed, though animal welfare groups attempted rescues. By 2007, legal injunctions prohibited the killing, and in 2008, the elimination process was changed to trapping and releasing on the mainland.
In 2022, it was reported that the hedgehog population in rural Britain was declining rapidly, down by 30–75% since 2000.
Diseases
Hedgehogs suffer many diseases common to mammals, including cancer, fatty liver disease, and cardiovascular disease.
Cancer is very common in hedgehogs. The most common is squamous cell carcinoma, which spreads quickly from bone to the organs, unlike in humans. Surgery to remove the bone tumors is impractical.
Fatty liver and heart disease are believed to be caused by bad diet and obesity. Hedgehogs will eagerly eat foods high in fat and sugar, despite a metabolism adapted for low-fat, protein-rich insects.
Hedgehogs are also highly susceptible to pneumonia, with difficulty breathing and nasal discharge, caused by the bacterium Bordetella bronchiseptica.
Hedgehogs uncommonly transmit a fungal ringworm or dermatophytosis skin infection to human handlers and other hedgehogs, caused by Trichophyton erinacei, a distinct mating group among the Arthroderma benhamiae fungi.
Hedgehogs can suffer from balloon syndrome, a rare condition in which gas is trapped under the skin from injury or infection, causing the animal to inflate. The condition is unique to hedgehogs because their skin is baggy enough to curl up. In 2017, the BBC reported a case of a male hedgehog "almost twice its natural size, literally blown up like a beach ball with incredibly taut skin". At Stapeley's Wildlife Hospital, vet Bev Panto said, "I have seen three or four of these cases and they are very strange every time and quite shocking ... When you first see them they appear to be very big hedgehogs but when you pick them up they feel so light because they are mostly air". The British Hedgehog Preservation Society advises:
There is no single cause for this condition. The air can be removed by incising or aspirating through the skin over the back. Antibiotic cover should be given. This may be associated with lung/chest wall damage or a small external wound acting like a valve or a clostridium type infection.
Human influence
As with most small mammals living around humans, many are run over as they attempt to cross roadways. In Ireland, hedgehogs are one of the most common mammalian road fatalities. Between April 2008 and November 2010 on two stretches of road measuring 227 km and 32.5 km, there were 133 recorded hedgehog fatalities. Of another 135 hedgehog carcasses collected from throughout Ireland, there were significantly more males than females collected, with peaks in male deaths occurring in May and June. Female deaths outnumbered males only in August, with further peaks in female deaths observed in June and July. It is suggested that these peaks are related to the breeding season (adults) and dispersal/exploration following independence.
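Taken together, the Irish survey figures above imply an average fatality rate on the surveyed roads of roughly the following (treating April 2008 to November 2010 as about 2.6 years):

```latex
\frac{133\ \text{fatalities}}{(227 + 32.5)\ \text{km} \times 2.6\ \text{yr}} \approx 0.2\ \text{fatalities per km per year}
```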
Domesticated hedgehogs can get their heads stuck in tubes such as toilet paper tubes, and walk around with them. Some owners call this "tubing" and promote the behavior, providing a tube cut lengthwise to allow the hedgehog to remove it. Some hedgehogs intentionally wear tubes for hours.
Culinary and medicinal use
Hedgehogs are a food source in many cultures. They were eaten in Ancient Egypt and some recipes of the Late Middle Ages call for hedgehog meat. They are traded throughout Eurasia and Africa for traditional medicine and witchcraft. In the Middle East and especially among Bedouins, hedgehog meat is considered medicine against rheumatism and arthritis. Hedgehogs are also said to cure a variety of disorders from tuberculosis to impotence. In Morocco, inhaling the smoke of the burnt skin or bristles supposedly remedies fever, impotence, and urinary illnesses; the blood is sold as a cure for ringworm, cracked skin and warts, and the flesh is eaten as a remedy for witchcraft. Romani people still eat hedgehogs, boiled or roasted, and also use the blood and the fat as a medicine.
In 1981, British publican Philip Lewis developed a line of Hedgehog Flavoured Crisps, whose taste was apparently based on the flavourings used by Romani to bake hedgehogs. As they did not contain any actual hedgehog product, the Office of Fair Trading ordered him to change the name to Hedgehog Flavour Crisps.
Genera and species
Subfamily Erinaceinae (hedgehogs)
Genus Atelerix
Four-toed hedgehog, Atelerix albiventris
North African hedgehog, Atelerix algirus
Southern African hedgehog, Atelerix frontalis
Somali hedgehog, Atelerix sclateri
Genus Erinaceus
Amur hedgehog, Erinaceus amurensis
Southern white-breasted hedgehog, Erinaceus concolor
European hedgehog, Erinaceus europaeus
Northern white-breasted hedgehog, Erinaceus roumanicus
Genus Hemiechinus
Long-eared hedgehog, Hemiechinus auritus
Indian long-eared hedgehog, Hemiechinus collaris
Genus Mesechinus
Daurian hedgehog, Mesechinus dauuricus
Hugh's hedgehog, Mesechinus hughi
Small-toothed forest hedgehog, Mesechinus miodon
Gaoligong forest hedgehog, Mesechinus wangi
Genus Paraechinus
Desert hedgehog, Paraechinus aethiopicus
Brandt's hedgehog, Paraechinus hypomelas
Indian hedgehog, Paraechinus micropus
Bare-bellied hedgehog, Paraechinus nudiventris
Society and culture
In worldwide folklore, hedgehogs are associated with intelligence and wisdom (Asia, Europe), and magic (Africa).
| Biology and health sciences | Erinaceids | null |
46593 | https://en.wikipedia.org/wiki/Hay | Hay | Hay is grass, legumes, or other herbaceous plants that have been cut and dried to be stored for use as animal fodder, either for large grazing animals raised as livestock, such as cattle, horses, goats, and sheep, or for smaller domesticated animals such as rabbits and guinea pigs. Pigs can eat hay, but do not digest it as efficiently as herbivores do.
Hay can be used as animal fodder when or where there is not enough pasture or rangeland on which to graze an animal, when grazing is not feasible due to weather (such as during the winter), or when lush pasture by itself would be too rich for the health of the animal. It is also fed when an animal cannot access any pastures—for example, when the animal is being kept in a stable or barn.
Hay production and harvest, commonly known as "making hay", "haymaking", "haying" or "doing hay", involves a multiple step process: cutting, drying or "curing", raking, processing, and storing. Hayfields do not have to be reseeded each year in the way that grain crops are, but regular fertilizing is usually desirable, and overseeding a field every few years helps increase yield.
Composition
Commonly used plants for hay include mixtures of grasses such as ryegrass (Lolium species), timothy, brome, fescue, Bermuda grass, orchard grass, and other species, depending on region. Hay may also include legumes, such as alfalfa (lucerne) and clovers (red, white and subterranean). Legumes in hay are ideally cut pre-bloom. Other pasture forbs are also sometimes a part of the mix, though these plants are not necessarily desired as certain forbs are toxic to some animals.
In the UK, some hay is harvested from traditionally managed hay meadows, which have a highly diverse flora and support a rich ecosystem. The hay produced by these meadows is species-rich and was traditionally used to feed horses.
Oat, barley, and wheat plant materials are occasionally cut green and made into hay for animal fodder, and more usually used in the form of straw, a harvest byproduct of stems and dead leaves that are baled after the grain has been harvested and threshed. Straw is used mainly for animal bedding. Although straw is also used as fodder, particularly as a source of dietary fiber, it has lower nutritional value than hay.
In agroforestry, systems have also been developed to produce tree hay.
It is the leaf and seed material in the hay that determines its quality, because these parts contain more of the nutritional value for the animal than the stems do. Farmers try to mow the grass at the point when the seed heads are not quite ripe and the leaf is at its maximum. The cut material is allowed to dry so that the bulk of the moisture is removed while the leafy material is still robust enough to be picked up from the ground by machinery and processed into storage in bales, stacks or pits. Methods of haymaking thus aim to minimize the shattering and falling away of the leaves during handling.
Hay production is highly sensitive to weather conditions, particularly during the harvest period. In drought conditions, both seed and leaf production are stunted, resulting in hay with a high ratio of dry, coarse stems that possess very low nutritional value. Conversely, excessively wet weather can cause cut hay to spoil in the field before it can be baled. Consequently, the primary challenge and risk for farmers in hay production is managing the weather, especially during the critical few weeks when the plants are at optimal maturity for harvesting. A lucky break in the weather often moves the haymaking tasks (such as mowing, tedding, and baling) to the top priority on the farm's to-do list. This is reflected in the idiom to make hay while the sun shines. Hay that was too wet at cutting may develop rot and mold after being baled, creating the potential for toxins to form in the feed, which could make the animals sick.
After harvest, hay also has to be stored in a manner to prevent it from getting wet. Mold and spoilage reduce nutritional value and may cause illness in animals. A symbiotic fungus in fescue may cause illness in horses and cattle.
The successful harvest of maximum yields of high-quality hay is entirely dependent on the coincident occurrence of optimum crop, field, and weather conditions. When this occurs, there may be a period of intense activity on the hay farm while harvest proceeds until weather conditions become unfavourable.
Use
Hay or grass is the foundation of the diet for all grazing animals, and can provide as much as 100% of the fodder required for an animal. Hay is usually fed to an animal during times when winter, drought, or other conditions make pasture unavailable. Animals that can eat hay vary in the types of grasses suitable for consumption, the ways they consume hay, and how they digest it. Therefore, different types of animals require hay that consists of similar plants to what they would eat while grazing, and, likewise, plants that are toxic to an animal in pasture are generally also toxic if they are dried into hay.
Most animals are fed hay in two daily feedings, morning and evening, more for the convenience of humans, as most grazing animals on pasture naturally consume fodder in multiple feedings throughout the day. Some animals, especially those being raised for meat, may be given enough hay that they simply are able to eat all day. Other animals, especially those that are ridden or driven as working animals may be given a more limited amount of hay to prevent them from getting too fat. The proper amount of hay and the type of hay required varies somewhat between different species. Some animals are also fed concentrated feeds such as grain or vitamin supplements in addition to hay. In most cases, hay or pasture forage must make up 50% or more of the diet by weight.
One of the most significant differences in hay digestion is between ruminant animals, such as cattle and sheep, and nonruminant, hindgut fermentors, such as horses. Both types of animals can digest cellulose in grass and hay, but do so by different mechanisms. Because of the four-chambered stomach of cattle, they are often able to break down older forage and have more tolerance of mold and changes in diet. The single-chambered stomach and cecum or "hindgut" of the horse uses bacterial processes to break down cellulose that are more sensitive to changes in feeds and the presence of mold or other toxins, requiring horses to be fed hay of a more consistent type and quality.
Different animals also use hay in different ways: cattle evolved to eat forages in relatively large quantities at a single feeding, and then, due to the process of rumination, take a considerable amount of time for their stomachs to digest food, often accomplished while the animal is lying down, at rest. Thus quantity of hay is important for cattle, who can effectively digest hay of low quality if fed in sufficient amounts. Sheep will eat between two and four percent of their body weight per day in dry feed, such as hay, and are very efficient at obtaining the most nutrition possible from three to five pounds per day of hay or other forage. They require three to four hours per day to eat enough hay to meet their nutritional requirements.
Unlike ruminants, horses digest their food in small portions throughout the day and can utilize only about 2.5% of their body weight in feed within a 24-hour period. Horses evolved to graze continuously while on the move, covering up to 50 miles (80 km) per day in the wild. Their stomachs digest food quickly, allowing them to extract a higher nutritional value from smaller quantities of feed. When horses are fed low-quality hay, they may develop an unhealthy, obese, "hay belly" due to over-consumption of "empty" calories. If their type of feed is changed dramatically, or if they are fed moldy hay or hay containing toxic plants, they can become ill; colic is the leading cause of death in horses. Contaminated hay can also lead to respiratory problems in horses. Hay can be soaked in water, sprinkled with water or subjected to steaming to reduce dust.
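The intake rules of thumb in the two paragraphs above (sheep at 2–4% of body weight per day, horses at about 2.5% per 24 hours) reduce to simple arithmetic. Here is a minimal sketch; the body weights in the example are hypothetical.

```python
# Minimal sketch of the daily dry-feed rules of thumb given above:
# sheep eat roughly 2-4% of body weight per day in dry feed, and
# horses can utilize about 2.5% of body weight per 24-hour period.
INTAKE_FRACTION = {
    "sheep": (0.02, 0.04),    # 2-4% of body weight per day
    "horse": (0.025, 0.025),  # about 2.5% per 24 hours
}

def daily_hay_kg(species: str, body_weight_kg: float) -> tuple[float, float]:
    """Return the (low, high) daily dry-feed estimate in kilograms."""
    lo, hi = INTAKE_FRACTION[species]
    return body_weight_kg * lo, body_weight_kg * hi

# Example with hypothetical body weights: a 70 kg sheep needs roughly
# 1.4-2.8 kg of hay per day; a 500 kg horse can use about 12.5 kg.
print(daily_hay_kg("sheep", 70))   # (1.4, 2.8)
print(daily_hay_kg("horse", 500))  # (12.5, 12.5)
```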
Harvest and transport
Methods and the terminology to describe the steps of making hay have varied greatly throughout history, and many regional variations still exist today. Whether done by hand or by modern mechanized equipment, tall grass and legumes at the proper stage of maturity must be cut, then allowed to dry (preferably by the sun), then raked into long, narrow piles known as windrows. Next, the cured hay is gathered up in some form (usually by some type of baling process) and placed for storage into a haystack or into a barn or shed to protect it from moisture and rot.
During the growing season, which is spring and early summer in temperate climates, grass grows at a fast pace. Hay reaches its peak nutritional value when all leaves are fully developed and seed or flower heads are just shy of full maturity. At this stage of maximum growth in the pasture or field, if timed correctly, the hay is cut. Grass hay cut too early retains high moisture content, making it harder to cure, and results in a lower yield per acre than more mature grass. However, hay cut too late is coarser, has a lower resale value, and has lost some of its nutrients. Typically, there is a two-week "window" during which grass is at its ideal stage for harvesting hay. Alfalfa hay is ideally cut when the plants reach maximum height and are producing flower buds or just beginning to bloom; cutting during or after full bloom results in lower nutritional value of the hay.
Hay can be raked into rows as it is cut, then turned periodically to dry, particularly if a modern swather is used. Or, especially with older equipment or methods, the hay is cut and allowed to lie spread out in the field until it is dry, then raked into rows for processing into bales afterwards. During the drying period, which can take several days, the process is usually sped up by turning the cut hay over with a hay rake or spreading it out with a tedder. If it rains while the hay is drying, turning the windrow can also allow it to dry faster. Turning the hay too often or too roughly can also cause drying leaf matter to fall off, reducing the nutrients available to animals. Drying can also be sped up by mechanized processes, such as the use of a hay conditioner, or by the use of chemicals sprayed onto the hay to speed evaporation of moisture, though these are more expensive techniques, not in general use except in areas where there is a combination of modern technology, high prices for hay, and too much rain for hay to dry properly.
Once hay is cut, dried and raked into windrows, it is usually gathered into bales or bundles, and then hauled to a central location for storage. In some places, depending on geography, region, climate, and culture, hay is gathered loose and stacked without being baled first.
History
Early methods
Columella in his De re rustica describes the usual haying process of the early Roman Empire. Much hay was originally cut by scythe by teams of workers, dried in the field and gathered loose on wagons. Later, haying was accomplished with horse-drawn implements such as mowers.
After hay was cut and dried, it was raked or rowed up by raking it into a linear heap by hand or with a horse-drawn implement. Turning hay, when needed, originally was done by hand with a fork or rake. Once the dried hay was rowed up, pitchforks were used to pile it loose, originally onto a horse-drawn cart or wagon, later onto a truck or tractor-drawn trailer, for which a sweep could be used instead of pitch forks.
Loose hay was transported to a designated storage area, typically a slightly elevated location to ensure proper drainage, where it was constructed into a haystack. Building the stack was a skilled task, as it needed to be made waterproof during construction. The haystack would compress under its own weight, allowing the hay to cure through the release of heat generated by the residual moisture and compression forces. The haystack was usually enclosed in a fenced-off area, known as a rick yard, to separate it from the rest of the paddock, and was often thatched or covered with sheets to protect it from moisture. When needed, slices of hay would be cut using a hay knife and fed to animals each day.
On some farms, the loose hay was stored in a barrack, shed, or barn, normally in such a way that it would compress down and cure. Hay could be stored in a specially designed barn with little internal structure to allow more room for the hay loft. Alternatively, an upper storey of a cow-shed or stable was used, with hatches in the floor to allow hay to be thrown down into hay-racks below. Depending on the region, the term "hay rick" could refer to the machine for cutting hay, the haystack or the wagon used to collect the hay.
With the invention of agricultural machinery such as the tractor and the baler, most hay production became mechanized by the 1930s. Hay baling began with the invention of the first hay press in about 1850. Timothy grass and clover were the most common plants used for hay in the early 20th century in the United States, though both plants are native to Europe. Hay was baled for easier handling and to reduce space required for storage and shipment. The first bales weighed about 300 pounds. The original machines were of a vertical design similar to the one photographed by the Greene Co. Historical Society. They used a horse-driven screw-press mechanism or a dropped weight to compress the hay. The first patent went to HL Emery for a horse-powered, screw-operated hay press in 1853. Other models were reported as early as 1843 built by PK Dederick's Sons of Albany, New York, or Samuel Hewitt of Switzerland County, Indiana. Later, horizontal machines were devised. One was the “Perpetual Press” made by PK Dederick of Albany in 1872. They could be powered by steam engines by about 1882. The continuous hay baler arrived in 1914.
Modern mechanized techniques
Modern mechanized hay production today is usually performed by a number of machines. While small operations use a tractor to pull various implements for mowing and raking, larger operations use specialized machines such as a mower or a swather, which are designed to cut the hay and arrange it into a windrow in one step. Balers are usually pulled by a tractor, with larger balers requiring more powerful tractors.
Mobile balers, machines that gather and bale hay in one process, were first developed around 1940. The initial balers produced rectangular bales that were small enough for an individual to lift, typically weighing between 70 and 100 pounds (32 to 45 kg) each. The size and shape of these bales allowed for manual handling, including lifting, stacking on transport vehicles, and constructing a haystack by hand. To reduce labor and enhance safety, loaders and stackers were subsequently developed to mechanize the transportation of small bales from the field to the haystack or hay barn. Later in the 20th century, balers were developed capable of producing much larger and heavier bales.
Conditioning of hay crop during cutting or soon thereafter is popular. The basic idea is that it decreases drying time, particularly in humid climates or if rain threatens to interfere with haying. Usually, rollers or flails inside a mower conditioner crimp, crack or strip the alfalfa or grass stems to increase evaporation rate. Sometimes, a salt solution is sprayed over the top of the hay (generally alfalfa) that helps to dry the hay.
Fertilization and weed control
Modern hay production often relies on artificial fertilizer and herbicides. Traditionally, manure has been used on hayfields, but modern chemical fertilizers are used today as well. Hay that is to be certified as weed-free for use in wilderness areas must often be sprayed with chemical herbicides to keep unwanted weeds from the field, and sometimes even non-certified hayfields are sprayed to limit the production of noxious weeds. Organic forms of fertilization and weed control are required for hay grown for consumption by animals whose meat will ultimately be certified organic. To that end, compost and field rotation can enhance soil fertility, and regular mowing of fields in the growth phase of the hay will often reduce the prevalence of undesired weeds. In recent times, some producers have experimented with human sewage sludge to grow hay. This is not a certified organic method and no warning labels are mandated by EPA. One concern with hay grown on human sewage sludge is that the hay can take up heavy metals, which are then consumed by animals. Molybdenum poisoning is a particular concern in ruminants such as cows and goats, and there have been animal deaths. Another concern is with a herbicide known as aminopyralid, which can pass through the digestive tract in animals, making their resulting manure toxic to many plants and thus unsuitable as fertilizer for food crops. Aminopyralid and related herbicides can persist in the environment for several years.
Baling
Small square bales are made in two main variations: the smaller "two-tie" (two twines hold the bale together) and the larger "three-tie" (three twines). They vary in size within both groups but are generally popular in different markets: the smaller two-tie bales are favored in the hobby animal market for their convenient size, while the larger three-tie bales are favored by producers wanting to export bales, because of greater transport efficiency, and by customers seeking a better price per ton. The two-tie small bale is the original form factor of the hay bale. Balers for both types of small bales are still manufactured, as are stackers, bundlers and bale accumulators for handling them; some farms still use equipment manufactured over 50 years ago to produce small bales. The small bale remains part of overall ranch lore and tradition, with "hay bucking" competitions still held for fun at many rodeos and county fairs.

Small square bales are often stacked mechanically or by hand in a crisscrossed fashion sometimes called a "haystack", "rick" or "hayrick". Rain tends to wash nutrition out of hay and can cause spoilage or mold, and hay in small square bales is particularly susceptible, so small bales are often stored in a haymow or hayshed. Haystacks built outside are usually protected by tarpaulins; if this is not done, the top two layers of the stack are often lost to rot and mold, and if the stack is not arranged in a proper haystack, moisture can seep even deeper into the stack. The rounded shape and tighter compaction of round bales make them less susceptible to spoilage, as water is less likely to penetrate the bale; adding net wrap, which is not used on square bales, offers even more weather resistance.

People who keep small numbers of animals may prefer small bales that one person can handle without machinery. There is also a risk that hay bales may be moldy or contain the decaying carcasses of tiny creatures accidentally killed by baling equipment and swept up into the bale, producing toxins such as botulinum toxin. Both can be deadly to non-ruminant herbivores such as horses; when this occurs, the entire contaminated bale generally is thrown out, another reason some people continue to support the market for small bales.
Farmers who need to make large amounts of hay will likely choose balers that produce much larger bales, maximizing the amount of hay protected from the elements. Large bales come in two types: round and square. Large square bales can be stacked and are easily transported on trucks, while large round bales are more moisture-resistant and pack the hay more densely (especially at the center). Round bales can be fed quickly using mechanized equipment. Their low surface-area-to-volume ratio allows many dry-area farmers to leave large bales outside until they are consumed (a rough comparison is sketched below). Wet-area farmers and those in climates with heavy snowfall can stack round bales under a shed or tarp, or use a light but durable plastic wrap that partially encloses bales left outside. The wrap repels moisture but leaves the ends of the bale exposed so that the hay itself can "breathe" and does not begin to ferment. Round bales stored under a shed last longer, and less hay is lost to rot and moisture.
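To make the surface-area argument concrete, the following minimal Python sketch compares surface-area-to-volume ratios for a cylindrical round bale and a small rectangular bale. The dimensions are assumed typical values chosen for illustration, not figures from the text.

```python
import math

# Assumed, illustrative dimensions in metres -- not from the text.
ROUND_DIAMETER, ROUND_WIDTH = 1.5, 1.2   # large round bale
SQ_L, SQ_W, SQ_H = 0.9, 0.46, 0.36       # small square bale

r = ROUND_DIAMETER / 2
round_volume = math.pi * r**2 * ROUND_WIDTH
round_surface = 2 * math.pi * r**2 + 2 * math.pi * r * ROUND_WIDTH

sq_volume = SQ_L * SQ_W * SQ_H
sq_surface = 2 * (SQ_L * SQ_W + SQ_L * SQ_H + SQ_W * SQ_H)

print(f"round bale:  {round_surface / round_volume:.1f} m^2 per m^3")  # ~4.3
print(f"square bale: {sq_surface / sq_volume:.1f} m^2 per m^3")        # ~12.1

# The round bale exposes roughly a third as much surface per unit of hay,
# which is why weathering losses are proportionally smaller.
```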
For animals that eat silage, a bale wrapper may be used to seal a round bale completely and trigger the fermentation process. Wrapping is used as a money-saving technique by producers who do not have access to a silo, and for producing silage that is to be transported to other locations. In very damp climates, it is a legitimate alternative to drying hay completely, and when processed properly, the natural fermentation process prevents mold and rot. Round bale silage is also sometimes called "haylage", and is seen more commonly in Europe than in the United States or Australia. Hay stored in this fashion must remain completely sealed in plastic, as any holes or tears can let in oxygen, halt the preservative fermentation, and lead to spoilage.
Stacking
Hay requires protection from the weather and is optimally stored inside buildings. Weather protection can also be provided outdoors, either in haystacks or in large tight bales (round or rectangular); these methods all depend on the outer surface of the mass of hay (stack or bale) taking the brunt of the weather, thereby preserving the main body of hay underneath.
Traditionally, outdoor hay storage was done with haystacks of loose hay, where most of the hay was sufficiently preserved to last through the winter, and the top surface of the stack (being weathered) was consigned to become compost the next summer. The term "loose" means not pressed or baled but does not necessarily mean a light, fluffy lay of randomly oriented stems. Especially in wet climates, such as those of Britain, the degree of shedding of rainwater by the stack's outer surface is an important factor, and the stacking of loose hay was developed into a skilled-labor task that in its more advanced forms even involved thatching the top. In many stacking methods (with or without thatched tops), stems were oriented in sheaves, which were laid in oriented sequence.
Since the advent of large bales in the 1960s, hay has often been stored outdoors, because the outer surface of a large bale performs the weather-shedding function. Large bales can also be stacked, which allows a given amount of exposed surface area to protect a larger volume of interior hay. Plastic tarpaulins are sometimes used to shed the rain with the goal of reducing hay wastage, but the cost of the tarpaulins must be weighed against the value of the hay saved from spoilage; the cover may not be worth its cost, or the plastic's environmental footprint.
After World War II, British farmers found that demand for farm laborers skilled in thatching haystacks outstripped the supply. This no doubt added to the pressure for large bales to replace stacking, a change already underway as haymaking technology (like other farm technology) moved toward extensive mechanization with one-person operation of many tasks. Today, tons of hay can be cut, conditioned, dried, raked, and baled by one person, as long as the right (expensive) equipment is at hand. These tons of hay can also be moved by one person with the right equipment, as loaders with long, hydraulically driven spikes pick up each large bale and move it to its feeding location.
A fence may be built to enclose a haystack and prevent roaming animals from eating it, or animals may feed directly from a field-constructed stack as part of their winter feeding.
Haystacks are also sometimes called haycocks; among some users this term refers more specifically to small piles of cut-and-gathered hay awaiting stacking into larger stacks. The words (haystack, haycock) are usually styled as solid compounds, but not always. Haystacks are also sometimes called stooks, shocks, or ricks.
Loose stacks are built to prevent the accumulation of moisture and promote drying or curing. In some places, this is accomplished by constructing stacks with a conical or ridged top. The exterior may look gray on the surface after weathering, but the inner hay retains traces of its fresh-cut aroma and maintains a faded green tint. They can be covered with thatch, or kept within a protective structure. One such structure is a moveable roof supported by four posts, historically called a Dutch roof, hay barrack, or hay cap. Haystacks may also be built on top of a foundation laid on the ground to reduce spoilage, in some places made of wood or brush. In other areas, hay is stacked loose, built around a central pole, a tree, or within an area of three or four poles to add stability to the stack.
One loose hay stacking technique seen in the British Isles is to initially stack freshly cut hay into smaller mounds called foot cocks, hay coles, kyles, hayshocks or haycocks, to facilitate initial curing. These are sometimes built atop platforms or tripods formed of three poles, used to keep hay off the ground and let air into the center for better drying. The shape causes dew and rainwater to roll down the sides, allowing the hay within to cure. People who handle the hay may use hayforks or pitchforks to move or pitch the hay in building haycocks and haystacks. Construction of tall haystacks is sometimes aided with a ramp, ranging from simple poles to a device for building large loose stacks called a beaverslide.
Safety
Mold
Hay is generally one of the safest feeds to provide to domesticated grazing herbivores. Amounts must be monitored so animals do not get too fat or too thin. Supplemental feed may be required for working animals with high energy requirements.
Animals that eat spoiled hay may develop a variety of illnesses, from coughs related to dust and mold to various other illnesses, the most serious of which is botulism, which can occur if a small animal, such as a rodent or snake, is killed by the baling equipment and rots inside the bale, causing a toxin to form. Some animals are sensitive to particular fungi or molds that may grow on living plants; for example, an endophytic fungus that sometimes grows on fescue can cause abortion in pregnant mares. Some plants themselves may also be toxic to some animals: Pimelea, a native Australian plant also known as flax weed, is highly toxic to cattle.
Farmer's lung is a hypersensitivity pneumonitis induced by the inhalation of biological dusts from hay, mold spores, or other agricultural products. Exposure to hay can also trigger allergic rhinitis in people who are hypersensitive to airborne allergens.
Spontaneous combustion
Hay must be fully dried when baled and kept dry in storage. If hay is baled while too moist or becomes wet while in storage, there is a significant risk of spontaneous combustion. Hay stored outside must be stacked in such a way that moisture contact is minimal. Some stacks are arranged so that the hay itself sheds water when it falls; other methods use the first layers or bales of hay as a cover to protect the rest. To keep out moisture completely, outside haystacks can be covered by tarps, and many round bales are partially wrapped in plastic as part of the baling process. Hay is also stored under a roof when resources permit, frequently placed inside sheds or stacked inside a barn. At the same time, care must be taken that hay is never exposed to any possible source of heat or flame, as dry hay and the dust it produces are highly flammable.
Hay baled before it is fully dry can produce enough heat to start a fire. Haystacks produce internal heat due to bacterial fermentation, and if hay is stacked wet, the heat produced can be sufficient to ignite it. Farmers must therefore monitor moisture levels to avoid spontaneous combustion, a leading cause of haystack fires. Heat is produced by the respiration process, which continues until the moisture content of drying hay drops below 40%; hay is considered fully dry when it reaches 20% moisture. Combustion problems typically occur within five to seven days of baling. A bale below the danger threshold temperature is at little risk, but warmer bales need to be removed from a barn or other structure and separated so that they can cool; above a critical temperature, a bale can combust.
To check hay moisture content, a farmer can use a hand test, an oven, or a moisture tester. The most efficient method is a moisture tester, which shows the moisture content within a few seconds.
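As a minimal sketch of the thresholds above (respiration continues until moisture falls below 40%; fully dry at 20%), the following Python function classifies a moisture reading. The function name and category labels are illustrative, not a standard:

```python
def hay_drying_status(moisture_pct: float) -> str:
    """Classify drying hay by moisture content (thresholds from the text)."""
    if moisture_pct >= 40:
        return "respiring: plant respiration is still generating heat"
    if moisture_pct > 20:
        return "drying: respiration has stopped, but not yet dry enough to bale"
    return "fully dry: at or below 20% moisture"

for reading in (55.0, 30.0, 18.0):
    print(f"{reading:.0f}% -> {hay_drying_status(reading)}")
```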
Weight
Due to its weight, hay can cause a number of injuries to humans, particularly those related to lifting and moving bales, as well as risks related to stacking and storing. Hazards include a poorly constructed stack collapsing, causing falls to people on the stack or injuries to people on the ground struck by falling bales. Large round hay bales present a particular danger to those who handle them because of their great weight: they cannot be moved without special equipment. Because they are cylindrical and can roll easily, it is not uncommon for them to fall from stacks or roll off the equipment used to handle them. From 1992 to 1998, 74 farm workers in the United States were killed in large round hay bale accidents, usually while bales were being moved from one location to another, such as when feeding animals.
Chemical composition
| Technology | Animal husbandry | null |
46594 | https://en.wikipedia.org/wiki/Straw | Straw | Straw is an agricultural byproduct consisting of the dry stalks of cereal plants after the grain and chaff have been removed. It makes up about half of the yield by weight of cereal crops such as barley, oats, rice, rye and wheat. It has a number of different uses, including fuel, livestock bedding and fodder, thatching and basket making.
Straw is usually gathered and stored in a straw bale, which is a bale, or bundle, of straw tightly bound with twine, wire, or string. Straw bales may be square, rectangular, star shaped or round, and can be very large, depending on the type of baler used.
Uses
Current and historic uses of straw include:
Animal feed
Straw may be fed as part of the roughage component of the diet to cattle or horses that are on a near maintenance level of energy requirement. It has a low digestible energy and nutrient content (as opposed to hay, which is much more nutritious). The heat generated when microorganisms in a herbivore's gut digest straw can be useful in maintaining body temperature in cold climates. Due to the risk of impaction and its poor nutrient profile, it should always be restricted to part of the diet. It may be fed as it is, or chopped into short lengths, known as chaff.
Basketry
Bee skeps and linen baskets are made from continuous lengths of coiled straw bound together. The technique is known as lip work.
Bedding
Straw is commonly used as bedding for ruminants and horses. It may be used as bedding and food for small animals, but this often leads to injuries to the mouth, nose and eyes, as straw is quite sharp.
The straw-filled mattress, also known as a palliasse, is still used by people in many parts of the world.
Bioplastic
Rice straw, an agricultural waste which is not usually recovered, can be turned into bioplastic with mechanical properties akin to polystyrene in its dry state.
Chemicals
Straw is being investigated as a source of fine chemicals including alkaloids, flavonoids, lignins, phenols, and steroids.
Construction material
In many parts of the world, straw is used to bind clay and concrete. A mixture of clay and straw, known as cob, can be used as a building material. There are many recipes for making cob.
When baled, straw has moderate insulation characteristics (about R-1.5 per inch, according to Oak Ridge National Lab and Forest Products Lab testing; a worked example follows). It can be used, alone or in a post-and-beam construction, to build straw bale houses. When bales are used to build or insulate buildings, the straw bales are commonly finished with earthen plaster. The plastered walls provide some thermal mass, compressive and ductile structural strength, and acceptable fire resistance, as well as thermal resistance (insulation) somewhat in excess of North American building code. Straw is an abundant agricultural waste product that requires little energy to bale and transport for construction. For these reasons, straw bale construction is gaining popularity as part of passive solar and other renewable energy projects.
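As a quick worked example of the quoted figure, this Python snippet estimates the whole-wall R-value of a bale wall. The 18-inch wall thickness is an assumed typical bale width, not a figure from the text:

```python
R_PER_INCH = 1.5        # insulation figure quoted above
WALL_THICKNESS_IN = 18  # assumed typical bale width (illustrative)

r_value = R_PER_INCH * WALL_THICKNESS_IN
print(f"Approximate wall insulation: R-{r_value:.0f}")  # -> R-27
```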
Wheat straw can be used as a fibrous filler combined with polymers to produce composite lumber.
Enviroboard can be made from straw.
Strawblocks are strawbales that have been recompressed to the density of woodblocks, for compact cargo-container shipment or for straw-bale construction of load-bearing walls that support roof loads, such as "living" or green roofs.
Crafts
Craft usages of straw include:
Corn dollies
Straw marquetry
Straw mobile (straw art)
Straw painting
Straw plaiting
Scarecrows
Japanese Traditional Cat's House
Japanese wara art
Construction site sediment control
Straw bales are sometimes used for sediment control at construction sites. However, bales are often ineffective in protecting water quality and are maintenance-intensive. For these reasons the U.S. Environmental Protection Agency (EPA) and various state agencies recommend use of alternative sediment control practices where possible, such as silt fences, fiber rolls and geotextiles.
They can also be used as burned area emergency response, as ground cover or as in-stream check dams.
Fuel source
The use of straw as a carbon-neutral energy source is increasing rapidly, especially for biobutanol. Straw or hay briquettes are a biofuel substitute for coal.
Straw, processed first as briquettes, has been fed into a biogas plant at Aarhus University, Denmark, in a test to see whether higher gas yields could be attained.
The use of straw in large-scale biomass power plants is becoming mainstream in the EU, with several facilities already online. The straw is either used directly in the form of bales, or densified into pellets which allows for the feedstock to be transported over longer distances. Finally, torrefaction of straw with pelletisation is gaining attention, because it increases the energy density of the resource, making it possible to transport it still further. This processing step also makes storage much easier, because torrefied straw pellets are hydrophobic. Torrefied straw in the form of pellets can be directly co-fired with coal or natural gas at very high rates and make use of the processing infrastructures at existing coal and gas plants. Because the torrefied straw pellets have superior structural, chemical and combustion properties to coal, they can replace all coal and turn a coal plant into an entirely biomass-fed power station. First generation pellets are limited to a co-firing rate of 15% in modern IGCC plants.
Gardening
Straw bale gardening is also popular among gardeners who do not have enough space for soil gardening. When properly conditioned, straw bales can serve as an effective soil substitute.
Hats
There are several styles of straw hats that are made of woven straw.
Many thousands of women and children in England (primarily in the Luton district of Bedfordshire), and large numbers in the United States (mostly Massachusetts), were employed in plaiting straw for making hats. By the late 19th century, vast quantities of plaits were being imported to England from Canton in China, and in the United States most of the straw plait was imported.
A fiber analogous to straw is obtained from the plant Carludovica palmata, and is used to make Panama hats.
Traditional Japanese rain protection consisted of a straw hat and a mino cape.
Horticulture
Straw is used in cucumber houses and for mushroom growing.
In Japan, certain trees are wrapped with straw to protect them from the effects of a hard winter as well as to use them as a trap for parasite insects. (see Komomaki)
It is also used in ponds to reduce algae by changing the nutrient ratios in the water.
The soil under strawberries is covered with straw to protect the ripe berries from dirt, and straw is also used to cover the plants during winter to prevent the cold from killing them.
Straw also makes an excellent mulch.
Packaging
Straw is resistant to being crushed and therefore makes a good packing material. A company in France makes a straw mat sealed in thin plastic sheets.
Straw envelopes for wine bottles have become rarer, but are still to be found at some wine merchants.
Wheat straw is also used in compostable food packaging such as compostable plates. Packaging made from wheat straw can be certified compostable and will biodegrade in a commercial composting environment.
Paper
Straw can be pulped to make paper.
Rope
Rope made from straw was used by thatchers, in the packaging industry and even in iron foundries.
Saekki is a traditional Korean rope made of woven straw.
Shoes
The Chinese wore cailu or caixie, shoes and sandals made of straw, well into modernity.
Koreans wear jipsin, sandals made of straw.
Several types of traditional Japanese shoes, such as waraji and zōri, are made of straw.
In some parts of Germany like Black Forest and Hunsrück people wear straw shoes at home or at carnival.
Targets
Heavy-gauge straw rope is coiled and sewn tightly together to make archery targets. This is no longer done entirely by hand, but is partially mechanised. Sometimes a paper or plastic target is set up in front of straw bales, which serve to support the target and provide a safe backdrop.
Thatching
Thatching uses straw, reed or similar materials to make a waterproof, lightweight roof with good insulation properties. Straw for this purpose (often wheat straw) is grown specially and harvested using a reaper-binder.
Health and safety
Dried straw presents a fire hazard that can ignite easily if exposed to sparks or an open flame. It can also trigger allergic rhinitis in people who are hypersensitive to airborne allergens such as straw dust.
| Technology | Animal husbandry | null |
46595 | https://en.wikipedia.org/wiki/Loom | Loom | A loom is a device used to weave cloth and tapestry. The basic purpose of any loom is to hold the warp threads under tension to facilitate the interweaving of the weft threads. The precise shape of the loom and its mechanics may vary, but the basic function is the same.
Etymology and usage
The word "loom" derives from the Old English geloma, formed from ge- (perfective prefix) and loma, a root of unknown origin; the whole word geloma meant a utensil, tool, or machine of any kind. In 1404 "lome" was used to mean a machine to enable weaving thread into cloth.
By 1838 "loom" had gained the additional meaning of a machine for interlacing thread.
Components and actions
Basic structure
Weaving is done on two sets of threads or yarns, which cross one another. The warp threads are the ones stretched on the loom (from the Proto-Indo-European *werp, "to bend"). Each thread of the weft (i.e. "that which is woven") is inserted so that it passes over and under the warp threads.
The ends of the warp threads are usually fastened to beams. One end is fastened to one beam, the other end to a second beam, so that the warp threads all lie parallel and are all the same length. The beams are held apart to keep the warp threads taut.
The textile is woven starting at one end of the warp threads, and progressing towards the other end. The beam on the finished-fabric end is called the cloth beam. The other beam is called the warp beam.
Beams may be used as rollers to allow the weaver to weave a piece of cloth longer than the loom. As the cloth is woven, the warp threads are gradually unrolled from the warp beam, and the woven portion of the cloth is rolled up onto the cloth beam (which is also called the takeup roll). The portion of the fabric that has already been formed but not yet rolled up on the takeup roll is called the fell.
Not all looms have two beams. For instance, warp-weighted looms have only one beam; the warp yarns hang from this beam. The bottom ends of the warp yarns are tied to dangling loom weights.
Motions
A loom has to perform three principal motions: shedding, picking, and battening.
Shedding. Shedding is pulling part of the warp threads aside to form a shed (the space between the raised and unraised warp yarns). The shed is the space through which the filling yarn, carried by the shuttle, can be inserted, forming the weft.
Sheds may be simple: for instance, lifting all the odd threads and all the even threads alternately produces a tabby weave (the two sheds are called the shed and countershed). More intricate shedding sequences can produce more complex weaves, such as twill.
Picking. A single crossing of the weft thread from one side of the loom to the other, through the shed, is known as a pick. Picking is passing the weft through the shed. A new shed is then formed before a new pick is inserted.
Conventional shuttle looms can operate at speeds of about 150 to 160 picks per minute.
Battening. After the pick, the new pass of weft thread has to be tamped up against the fell, to avoid making a fabric with large, irregular gaps between the weft threads. This compression of the weft threads is called battening.
There are also usually two secondary motions, because the newly constructed fabric must be wound onto the cloth beam. This process is called taking up. At the same time, the warp yarns must be let off or released from the warp beam, unwinding from it. To become fully automatic, a loom needs a tertiary motion, the filling stop motion, which brakes the loom if the weft thread breaks. An automatic loom requires 0.125 hp to 0.5 hp (roughly 100 W to 400 W) to operate.
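A minimal Python sketch of the cycle of motions described above: the three principal motions plus take-up, let-off, and the filling stop motion. All names are illustrative and the model is deliberately simplistic:

```python
def weave(picks: int, weft_supply: int) -> None:
    """Simulate the loom cycle: shed, pick, batten, then take up / let off."""
    for pick in range(picks):
        if weft_supply <= 0:
            print("filling stop motion: weft broken or exhausted, loom braked")
            return
        shed = "odd warp ends raised" if pick % 2 == 0 else "even warp ends raised"
        print(f"pick {pick + 1}: shedding ({shed}); weft inserted; battened to fell")
        weft_supply -= 1
        # secondary motions: wind finished cloth onto the cloth beam,
        # release fresh warp from the warp beam
        print("          take-up and let-off advance the warp")

weave(picks=3, weft_supply=2)
```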
Components
A loom, then, usually needs two beams, and some way to hold them apart. It generally has additional components to make shedding, picking, and battening faster and easier. There are also often components to help take up the fell.
The nature of the loom frame and the shedding, picking, and battening devices vary. Looms come in a wide variety of types, many of them specialized for specific types of weaving. They are also specialized for the lifestyle of the weaver. For instance, nomadic weavers tend to use lighter, more portable looms, while weavers living in cramped city dwellings are more likely to use a tall upright loom, or a loom that folds into a narrow space when not in use.
Shedding methods
It is possible to weave by manually threading the weft over and under the warp threads, but this is slow. Some tapestry techniques use manual shedding. Pin looms and peg looms also generally have no shedding devices. Pile carpets generally do not use shedding for the pile, because each pile thread is individually knotted onto the warps, but there may be shedding for the weft holding the carpet together.
Usually weaving uses shedding devices. These devices pull some of the warp threads to each side, so that a shed is formed between them, and the weft is passed through the shed. There are a variety of methods for forming the shed. At least two sheds must be formed, the shed and the countershed. Two sheds is enough for tabby weave; more complex weaves, such as twill weaves, satin weaves, diaper weaves, and figured (picture-forming) weaves, require more sheds.
Heddle-bar and shed-rod
Heddle-rods and shedding-sticks are not the fastest way to weave, but they are very simple to make, needing only sticks and yarn. They are often used on vertical and backstrap looms. They allow the creation of elaborate supplementary-weft brocades. They are also used on modern tapestry looms; the frequent changing of weft colour in tapestry makes weaving tapestry slow, so using faster, more complex shedding systems isn't worthwhile. The same is true of looms for handmade knotted-pile carpet; hand-knotting each pile thread to the warp takes far more time than weaving a couple of weft threads to hold the pile in place.
At its simplest, a heddle-bar is simply a stick placed across the warp and tied to individual warp threads. It is not tied to all of the warp threads; for a plain tabby weave, it is tied to every other thread. The little loops of string used to tie the warps to the heddle bar are called heddles or leashes. When the heddle-bar is pulled perpendicular to the warp, it pulls the warp threads it is tied to out of position, creating a shed.
A warp-weighted loom (see diagram) typically uses a heddle-bar, or several. It has two upright posts (C); they support a horizontal beam (D), which is cylindrical so that the finished cloth can be rolled around it, allowing the loom to weave a piece of cloth taller than the loom itself while preserving an ergonomic working height. The warp threads (F, and A and B) hang from the beam and rest against the shed rod (E). The heddle-bar (G) is tied to some of the warp threads (A, but not B) using loops of string called leashes (H). When the heddle rod is pulled out and placed in the forked sticks protruding from the posts, the shed (1) is replaced by the counter-shed (2). By passing the weft through the shed and the counter-shed alternately, cloth is woven.
Several heddle-bars can be used side-by-side; three or more can be used to weave twill weaves, for instance.
There are also other ways to create counter-sheds. A shed-rod is simpler and easier to set up than a heddle-bar, and can make a counter-shed. A shed-rod (shedding stick, shed roll) is simply a stick woven through the warp threads. When pulled perpendicular to the threads (or, for wide flat shedding rods, rotated to stand on edge), it creates a counter-shed. The combination of a heddle-bar and a shedding-stick can create the shed and countershed needed for a plain tabby weave, as sketched below.
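A minimal sketch, in Python, of how alternating the shed and countershed interlaces the weft for tabby weave. The grid notation ('X' for warp over weft, '.' for warp under) is an illustrative convention, not standard drafting notation:

```python
def tabby(warp_count: int = 8, picks: int = 4) -> None:
    """Print an interlacement grid: 'X' = warp over weft, '.' = warp under."""
    for pick in range(picks):
        odd_raised = (pick % 2 == 0)  # heddle-bar shed, then shed-rod countershed
        row = "".join(
            "X" if (end % 2 == 1) == odd_raised else "."
            for end in range(warp_count)
        )
        print(row)

tabby()
# Rows alternate .X.X.X.X / X.X.X.X., the checkerboard of plain weave.
```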
There are also slitted heddle-rods, which are sawn partway through, with evenly-placed slits. Each warp thread goes in a slit. The odd-numbered slits are at 90 degrees to the even slits. The rod is rotated back and forth to create the shed and countershed, so it is often large-diameter.
Tablet weaving
Tablet weaving uses cards punched with holes. The warp threads pass through the holes, and the cards are twisted and shifted to create varied sheds. This shedding technique is used for narrow work. It is also used to finish edges, weaving decorative selvage bands instead of hemming.
Rotating-hook heddles
There are heddles made of flip-flopping rotating hooks, which raise and lower the warp, creating sheds. The hooks, when vertical, have the warp threads looped around them horizontally. When the hooks are flopped over to one side or the other, the loop of warp twists, raising one or the other side of the loop, which creates the shed and countershed.
Rigid heddles
Rigid heddles are generally used on single-shaft looms. Odd warp threads go through the slots, and even ones through the circular holes, or vice-versa. The shed is formed by lifting the heddle, and the countershed by depressing it. The warp threads in the slots stay where they are, and the ones in the circular holes are pulled back and forth. A single rigid heddle can hold all the warp threads, though sometimes multiple rigid heddles are used.
Treadles may be used to drive the rigid heddle up and down.
Non-rigid heddles
Rigid heddles (above) are called "rigid" to distinguish them from string and wire heddles: rigid heddles are one-piece, while non-rigid ones are multi-piece. Each warp thread has its own heald (also, confusingly, called a heddle). The heald has an eyelet at each end (for the staves, also called shafts) and one in the middle, called the mail (for the warp thread). A row of these healds is slid onto two staves, the upper and lower staves; the staves together, or the staves together with the healds, may be called a heald frame, which is, confusingly, also called a shaft and a harness. Replaceable, interchangeable healds can be smaller, allowing finer weaves.
Unlike a rigid heddle, a flexible heddle cannot push the warp thread. This means that two heald frames are needed even for a plain tabby weave. Twill weaves require three or more heald frames (depending on the type of twill), and more complex figured weaves require still more frames.
The different heald frames must be controlled by some mechanism, and the mechanism must be able to pull them in both directions. They are mostly controlled by treadles; creating the shed with the feet leaves the hands free to ply the shuttle. However, in some tabletop looms, heald frames are controlled by levers instead.
Treadle-controlled looms
In treadle looms, the weaver controls the shedding with their feet, by treading on treadles. Different treadles and combinations of treadles produce different sheds. The weaver must remember the sequence of treadling needed to produce the pattern.
The precise mechanism by which the treadles control the heddles varies. Rigid-heddle treadle looms do exist, but the heddles are usually flexible. Sometimes, the treadles are tied directly to the staves (with a Y-shaped bridle so they stay level). Alternately, they may be tied to a stick called a lamm, which in turn is tied to the stave, to make the motion more controlled and regular. The lamm may pivot or slide.
Counterbalance looms are the most common type of treadle loom globally, as they are simple and give a smooth, quiet, quick motion. The heald frames are joined together in pairs, by a cord running over heddle pulleys or a heddle roller. When one heald frame rises, the other falls. It takes a pair of treadles to control a pair of frames. Counterbalance looms are usually used with two or four frames, though some have as many as ten.
In theory, each pair of heald frames has to have an equal number of warps pulled by each frame, so the patterns that can be made on them are limited. In practice, fairly unbalanced tie-ups just make the shed a bit smaller, and as the shed on a counterbalance loom is adjustable in size and quite large to start with (compared to other types of loom), it is entirely possible to weave good cloth on a counterbalance loom with unbalanced heald frames, unless the loom is extremely shallow (that is, the length of warp being pulled on is short, less than about 1 meter or 3 feet), which exacerbates the slightly uneven tension. Limited patterns are not, of course, a disadvantage when weaving plainer patterns such as tabbies and twills.
Jack looms (also called single-tie-up looms and rising-shed looms) have their treadles connected to jacks, levers that push or pull the heald frames up; the harnesses are weighted so that they fall back into place by gravity. Several frames can be connected to a single treadle, and frames can be raised by more than one treadle. This allows treadles to control arbitrary combinations of frames, which vastly increases the number of different sheds that can be created from the same number of frames. Any number of treadles can also be engaged at once, so the number of different sheds that can be selected is two to the power of the number of treadles. Eight is a large but reasonable number of treadles, giving a maximum of 2^8 = 256 sheds (some of which will probably not have enough threads on one side to be useful). Having more possible sheds allows more complex patterns, such as diaper weaves.
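The shed count is just the number of subsets of treadles, as this short Python check illustrates (the enumeration is purely illustrative):

```python
from itertools import combinations

N_TREADLES = 8

# Each shed corresponds to a subset of treadles pressed together
# (including the empty "no treadle" case), so there are 2**n subsets.
print(2 ** N_TREADLES)  # -> 256

# Enumerating the subsets directly gives the same total:
total = sum(1 for r in range(N_TREADLES + 1)
              for _ in combinations(range(N_TREADLES), r))
print(total)  # -> 256
```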
Jack looms are easy to make and to tie up (if not quite as easy as counterbalance looms). However, the gravity return makes jack looms heavy to operate, and the shed of a jack loom is smaller for a given length of warp being pulled aside by the heddles (loom depth). The warp threads being pulled up by the jacks are also tauter than the other warp threads (unlike a counterbalance loom, where the threads are pulled an equal amount in opposite directions). Uneven tension makes weaving evenly harder, and it lowers the maximum tension at which one can practically weave. If the threads are rough, closely spaced, very long, or numerous, it can be hard to open the sheds on a jack loom. Jack looms without castles (the superstructure above the weft) have to lift the heald frames from below and are noisier due to the impact of wood on wood; elastomer pads can reduce the noise.
In countermarch looms, the treadles are tied to lamms, which may pivot at one end or slide up and down. Half of the lamms connect in turn to jacks, which also pivot and push or pull the staves up or down. Some countermarches have two horizontal jacks per shaft, others a single vertical jack. Each treadle is tied to all of the heald frames, moving some of them up and the rest down. This combines the complex combinatorial treadling of a jack loom with the large shed, balanced, even tension, and quiet, light operation of a counterbalance loom. Unfortunately, countermarch looms are more complex, harder to build, slower to tie up, and more prone to malfunction.
Figure harness and the drawloom
A drawloom is for weaving figured cloth. In a drawloom, a "figure harness" is used to control each warp thread separately, allowing very complex patterns. A drawloom requires two operators, the weaver, and an assistant called a "drawboy" to manage the figure harness.
The earliest confirmed drawloom fabrics come from the State of Chu and date to c. 400 BC. Some scholars speculate an independent invention in ancient Syria, since drawloom fabrics found in Dura-Europos are thought to date before 256 AD. The drawloom was invented in China during the Han dynasty; foot-powered multi-harness looms and jacquard looms were used for silk weaving and embroidery, both of which were cottage industries with imperial workshops. The drawloom enhanced and sped up the production of silk and played a significant role in Chinese silk weaving. The loom was later introduced to Persia, India, and Europe.
Dobby head
A dobby head is a device that replaces the drawboy, the weaver's helper who used to control the warp threads by pulling on draw threads. "Dobby" is a corruption of "draw boy". Mechanical dobbies pull on the draw threads using pegs in bars to lift a set of levers. The placement of the pegs determines which levers are lifted. The sequence of bars (they are strung together) effectively remembers the sequence for the weaver. Computer-controlled dobbies use solenoids instead of pegs.
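To illustrate how the chained peg bars "remember" a lifting sequence, here is a minimal Python model; the four-shaft pattern and all names are invented for illustration:

```python
# Each bar records which shafts its pegs lift for one pick; the chain of
# bars is cycled, so the lift plan repeats every len(peg_bars) picks.
peg_bars = [
    {1, 3},  # bar 1: lift shafts 1 and 3
    {2, 4},  # bar 2: lift shafts 2 and 4
    {1, 2},  # bar 3
    {3, 4},  # bar 4
]

for pick in range(6):
    lifted = peg_bars[pick % len(peg_bars)]
    print(f"pick {pick + 1}: lift shafts {sorted(lifted)}")
```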
Jacquard head
The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard in 1801, which simplifies the manufacture of figured textiles with complex patterns such as brocade, damask, and matelassé. The loom is controlled by punched cards, each row of holes corresponding to one row of the design; multiple rows of holes are punched on each card, and the many cards that compose the design of the textile are strung together in order. It is based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740). To call it a loom is a misnomer: a Jacquard head could be attached to a power loom or a handloom, the head controlling which warp threads were raised during shedding. Multiple shuttles could be used to control the colour of the weft during picking. The Jacquard loom is the predecessor of the computer punched-card readers of the 19th and 20th centuries.
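As a sketch of the card-per-pick idea, each card row below is modelled as a pattern of holes (1 = hole, meaning the corresponding warp thread is raised); the 8-thread pattern is invented purely for illustration:

```python
card = [
    0b10101010,  # pick 1: raise alternate warp threads
    0b01010101,  # pick 2: raise the other half
    0b11001100,  # pick 3: a 2/2 grouping
]

WARP_ENDS = 8
for pick, holes in enumerate(card, start=1):
    raised = [i for i in range(WARP_ENDS) if (holes >> (WARP_ENDS - 1 - i)) & 1]
    print(f"pick {pick}: raise warp threads {raised}")
# pick 1: raise warp threads [0, 2, 4, 6] ...
```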
Picking (weft insertion)
The weft may be passed across the shed as a ball of yarn, but usually this is too bulky and unergonomic. Shuttles are designed to be slim, so they pass through the shed; to carry a lot of yarn, so the weaver does not need to refill them too often; and to be an ergonomic size and shape for the particular weaver, loom, and yarn. They may also be designed for low friction.
Stick shuttles
Unnotched stick shuttles
At their simplest, these are just sticks wrapped with yarn. They may be specially shaped, as with the bobbins and bones used in tapestry-making (bobbins are used on vertical warps, and bones on horizontal ones).
Notched stick shuttles, rag shuttles, and ski shuttles
Boat shuttles
Boat shuttles may be closed (central hollow with a solid bottom) or open (central hole goes right through). The yarn may be side-feed or end-feed. They are commonly made for 10-cm (4-inch) and 15-cm (6-inch) bobbin lengths.
Flying shuttle
Hand weavers who threw a shuttle could only weave a cloth as wide as their armspan. If cloth needed to be wider, two people would do the task (often this would be an adult with a child). John Kay (1704–1779) patented the flying shuttle in 1733. The weaver held a picking stick that was attached by cords to a device at both ends of the shed. With a flick of the wrist, one cord was pulled and the shuttle was propelled through the shed to the other end with considerable force, speed and efficiency. A flick in the opposite direction and the shuttle was propelled back. A single weaver had control of this motion but the flying shuttle could weave much wider fabric than an arm's length at much greater speeds than had been achieved with the hand thrown shuttle.
The flying shuttle was one of the key developments in weaving that helped fuel the Industrial Revolution. The whole picking motion no longer relied on manual skill and it was just a matter of time before it could be powered by something other than a human.
Weft insertion in power looms
Different types of power looms are most often defined by the way that the weft, or pick, is inserted into the warp. Many advances in weft insertion have been made in order to make manufactured cloth more cost-effective, as weft insertion rate is a limiting factor in production speed. Modern industrial looms can weave at up to 2,000 weft insertions per minute; a rough throughput comparison follows the list below.
There are five main types of weft insertion and they are as follows:
Shuttle: The first-ever powered looms were shuttle-type looms. Spools of weft are unravelled as the shuttle travels across the shed. This is very similar to projectile methods of weaving, except that the weft spool is stored on the shuttle. These looms are considered obsolete in modern industrial fabric manufacturing because they can only reach a maximum of 300 picks per minute.
Air jet: An air-jet loom uses short quick bursts of compressed air to propel the weft through the shed in order to complete the weave. Air jets are the fastest traditional method of weaving in modern manufacturing and they are able to achieve up to 1,500 picks per minute. However, the amounts of compressed air required to run these looms, as well as the complexity in the way the air jets are positioned, make them more costly than other looms.
Water jet: Water-jet looms use the same principle as air-jet looms, but they take advantage of pressurized water to propel the weft. The advantage of this type of weaving is that water power is cheaper where water is directly available on site. Picks per minute can reach as high as 1,000.
Rapier loom: This type of weaving is very versatile, in that rapier looms can weave using a large variety of threads. There are several types of rapiers, but they all use a hook system attached to a rod or metal band to pass the pick across the shed. These machines regularly reach 700 picks per minute in normal production.
Projectile: Projectile looms utilize an object that is propelled across the shed, usually by spring power, and is guided across the width of the cloth by a series of reeds. The projectile is then removed from the weft fibre and it is returned to the opposite side of the machine so it can be reused. Multiple projectiles are in use in order to increase the pick speed. Maximum speeds on these machines can be as high as 1,050 ppm.
Circular: Modern circular looms use up to ten shuttles, driven in a circular motion from below by electromagnets, for the weft yarns, and cams to control the warp threads. The warps rise and fall with each shuttle passage, unlike the common practice of lifting all of them at once. Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like.
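Using the picks-per-minute figures quoted above, this back-of-envelope Python comparison converts insertion rate into cloth length per hour. The assumed weft density of 20 picks per centimetre is an illustrative value, not from the text:

```python
PICKS_PER_CM = 20  # assumed weft density (illustrative)

rates_ppm = {       # picks per minute, from the list above
    "shuttle": 300,
    "air jet": 1500,
    "water jet": 1000,
    "rapier": 700,
    "projectile": 1050,
}

for loom, ppm in rates_ppm.items():
    metres_per_hour = ppm * 60 / PICKS_PER_CM / 100
    print(f"{loom:10s}: {ppm:5d} picks/min ~ {metres_per_hour:4.1f} m of cloth/hour")
```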
Battening
The newest weft thread must be beaten against the fell. Battening can be done with a long stick placed in the shed parallel to the weft (a sword batten); a shorter stick threaded between the warp threads, perpendicular to warp and weft (a pin batten); a comb; or a reed (a comb with both ends closed, so that it must be sleyed, that is, have the warp threads threaded through it, when the loom is warped). For rigid-heddle looms, the heddle may be used as a reed.
Secondary motions
Dandy mechanism
Patented in 1802, dandy looms automatically rolled up the finished cloth, keeping the fell always in the same position. They significantly sped up hand weaving, which was still a major part of the textile industry in the 1800s. Similar mechanisms were used in power looms.
Temples
The temples act to keep the cloth from shrinking sideways as it is woven. Some warp-weighted looms had temples made of loom weights, suspended by strings so that they pulled the cloth breadthwise. Other looms may have temples tied to the frame, or temples that are hooks with an adjustable shaft between them. Power looms may use temple cylinders. Pins can leave a series of holes in the selvages (these may be from stenter pins used in post-processing).
Frames
Loom frames can be roughly divided, by the orientation of the warp threads, into horizontal looms and vertical looms. There are many finer divisions. Most handloom frame designs can be constructed fairly simply.
Backstrap loom
The back-strap loom (also known as belt loom) is a simple loom with ancient roots, still used in many cultures around the world (such as Andean textiles, and in Central, East and South Asia). It consists of two sticks or bars between which the warps are stretched. One bar is attached to a fixed object and the other to the weaver, usually by means of a strap around the weaver's back. The weaver leans back and uses their body weight to tension the loom.
Both simple and complex textiles can be woven on backstrap looms. They produce narrowcloth: width is limited to the weaver's armspan. They can readily produce warp-faced textiles, often decorated with intricate pick-up patterns woven in complementary and supplementary warp techniques, and brocading. Balanced weaves are also possible on the backstrap loom.
Warp-weighted loom
The warp-weighted loom is a vertical loom that may have originated in the Neolithic period. Its defining characteristic is hanging weights (loom weights) which keep bundles of the warp threads taut. Frequently, extra warp thread is wound around the weights. When a weaver has woven far enough down, the completed section (fell) can be rolled around the top beam, and additional lengths of warp threads can be unwound from the weights to continue. This frees the weaver from vertical size constraint. Horizontally, breadth is limited by armspan; making broadwoven cloth requires two weavers, standing side by side at the loom.
Simple weaves, and complex weaves that need more than two different sheds, can both be woven on a warp-weighted loom. They can also be used to produce tapestries.
Pegged or floor loom
In pegged looms, the beams can be simply held apart by hooking them behind pegs driven into the ground, with wedges or lashings used to adjust the tension. Pegged looms may, however, also have horizontal sidepieces holding the beams apart.
Such looms are easy to set up and dismantle, and are easy to transport, so they are popular with nomadic weavers. They are generally only used for comparatively small woven articles. Urbanites are unlikely to use horizontal floor looms as they take up a lot of floor space, and full-time professional weavers are unlikely to use them as they are unergonomic. Their cheapness and portability is less valuable to urban professional weavers.
Treadle loom
In a treadle loom, the shedding is controlled by the feet, which tread on the treadles.
The earliest evidence of a horizontal loom is found on a pottery dish in ancient Egypt, dated to 4400 BC. It was a frame loom, equipped with treadles to lift the warp threads, leaving the weaver's hands free to pass and beat the weft thread.
A pit loom has a pit for the treadles, reducing the stress transmitted through the much shorter frame.
In a wooden vertical-shaft loom, the heddles are fixed in place in the shaft. The warp threads pass alternately through a heddle, and through a space between the heddles (the shed), so that raising the shaft raises half the threads (those passing through the heddles), and lowering the shaft lowers the same threads — the threads passing through the spaces between the heddles remain in place.
A treadle loom for figured weaving may have a large number of harnesses or a control head. It can, for instance, have a Jacquard machine attached to it.
Tapestry looms
Tapestry can have extremely complex wefts, as different strands of weft in different colours are used to form the pattern. Speed is lower, and shedding and picking devices may be simpler. Looms used for weaving traditional tapestry are described not as "vertical-warp" and "horizontal-warp", but as "high-warp" or "low-warp" (the French terms haute-lisse and basse-lisse are also used in English).
Ribbon, Band, and Inkle weaving
Inkle looms are narrow looms used for narrow work, making warp-faced strips such as ribbons, bands, or tape. They are often quite small; some are used on a tabletop, while others are backstrap looms with a rigid heddle, and very portable.
Darning looms
There exist very small hand-held looms known as darning looms. They are made to fit under the fabric being mended, and are often held in place by an elastic band on one side of the cloth and a groove around the loom's darning-egg portion on the other. They may have heddles made of flip-flopping rotating hooks. Other devices sold as darning looms are just a darning egg and a separate comb-like piece with teeth to hook the warp over; these are used for repairing knitted garments and are like a linear knitting spool. Darning looms were sold during World War Two clothing rationing in the United Kingdom and Canada, and some are homemade.
Circular handlooms
Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like. Tablet weaving can be used to knit tubes, including tubes that split and join.
Small jigs used for circular knitting are also sometimes called circular looms, but they are used for knitting, not weaving.
Handlooms to power looms
A power loom is a loom powered by a source of energy other than the weaver's muscles. When power looms were developed, other looms came to be referred to as handlooms. Most cloth is now woven on power looms, but some is still woven on handlooms.
The development of power looms was gradual. The capabilities of power looms gradually expanded, but handlooms remained the most cost-effective way to make some types of textiles for most of the 1800s. Many improvements in loom mechanisms were first applied to hand looms (like the dandy loom), and only later integrated into power looms.
Edmund Cartwright built and patented a power loom in 1785, and it was this that was adopted by the nascent cotton industry in England. The silk loom made by Jacques Vaucanson in 1745 operated on the same principles but was not developed further. The invention of the flying shuttle by John Kay allowed a hand weaver to weave broadwoven cloth without an assistant, and was also critical to the development of a commercially successful power loom. Cartwright's loom was impractical but the ideas behind it were developed by numerous inventors in the Manchester area of England. By 1818, there were 32 factories containing 5,732 looms in the region.
The Horrocks loom was viable, but it was the Roberts loom of 1830 that marked the turning point. Incremental changes to the three motions continued to be made, though the problems of sizing, stop-motions, consistent take-up, and a temple to maintain the width remained. In 1841, Kenworthy and Bullough produced the Lancashire Loom, which was self-acting or semi-automatic; this enabled a youngster to run six looms at the same time. Thus, for simple calicos, the power loom became more economical to run than the handloom, though for complex patterning that used a dobby or Jacquard head, jobs were still put out to handloom weavers until the 1870s. Incremental changes continued, such as the Dickinson loom, culminating in the fully automatic Northrop loom, developed by the Keighley-born inventor Northrop, who was working for the Draper Corporation in Hopedale. This loom recharged the shuttle when the pirn was empty. The Draper E and X models became the leading products from 1909. They were later challenged by synthetic fibres such as rayon.
By 1942, faster, more efficient, and shuttleless Sulzer and rapier looms had been introduced.
Symbolism and cultural significance
The loom is a symbol of cosmic creation and the structure upon which individual destiny is woven. This symbolism is encapsulated in the classical myth of Arachne who was changed into a spider by the goddess Athena, who was jealous of her skill at the godlike craft of weaving. In Maya civilization the goddess Ixchel taught the first woman how to weave at the beginning of time.
Gallery
| Technology | Industrial machinery | null |
46596 | https://en.wikipedia.org/wiki/Drainage | Drainage | Drainage is the natural or artificial removal of a surface's water and sub-surface water from an area with excess water. The internal drainage of most agricultural soils can prevent severe waterlogging (anaerobic conditions that harm root growth), but many soils need artificial drainage to improve production or to manage water supplies.
History
Early history
The Indus Valley Civilization had sewerage and drainage systems. All houses in the major cities of Harappa and Mohenjo-daro had access to water and drainage facilities. Waste water was directed to covered gravity sewers, which lined the major streets.
18th and 19th century
The invention of hollow-pipe drainage is credited to Sir Hugh Dalrymple, who died in 1753.
Current practices
Simple infrastructure such as open drains, pipes, and berms are still common. In modern times, more complex structures involving substantial earthworks and new technologies have been common as well.
Geotextiles
New storm water drainage systems incorporate geotextile filters that retain fine grains of soil and prevent them from passing into and clogging the drain. Geotextiles are synthetic textile fabrics specially manufactured for civil and environmental engineering applications, designed to retain fine soil particles while allowing water to pass through. In a typical drainage system, they are laid along a trench, which is then filled with coarse granular material: gravel, sea shells, stone or rock. The geotextile is then folded over the top of the stone, and the trench is covered with soil. Groundwater seeps through the geotextile and flows through the stone to an outfall. In high groundwater conditions, a perforated plastic (PVC or PE) pipe is laid along the base of the drain to increase the volume of water transported in the drain.
Alternatively, a prefabricated plastic drainage system made of HDPE, often incorporating geotextile, coco fiber or rag filters can be considered. The use of these materials has become increasingly more common due to their ease of use, since they eliminate the need for transporting and laying stone drainage aggregate, which is invariably more expensive than a synthetic drain and concrete liners.
Over the past 30 years, geotextile, PVC filters, and HDPE filters have become the most commonly used soil filter media. They are cheap to produce and easy to lay, with factory controlled properties that ensure long term filtration performance even in fine silty soil conditions.
21st century alternatives
Seattle's Public Utilities created a pilot program called Street Edge Alternatives Project. The project focuses on designing a system "to provide drainage that more closely mimics the natural landscape prior to development than traditional piped systems".
The streets are characterized by ditches along the side of the roadway, with plantings designed throughout the area.
An emphasis on non-curbed sidewalks allows water to flow more freely into the areas of permeable surface on the side of the streets. Because of the plantings, the run off water from the urban area does not all directly go into the ground, but can also be absorbed into the surrounding environment.
Monitoring conducted by Seattle Public Utilities reports a 99 percent reduction of storm water leaving the drainage project.
Drainage has undergone a large-scale environmental review in the recent past in the United Kingdom. Sustainable urban drainage systems (SUDS) are designed to encourage contractors to install drainage system that more closely mimic the natural flow of water in nature. Since 2010 local and neighbourhood planning in the UK is required by law to factor SUDS into any development projects that they are responsible for.
Slot drainage is a channel drainage system designed to eliminate the need for further pipework systems to be installed in parallel to the drainage, reducing the environmental impact of production as well as improving water collection. Stainless steel, concrete channel, PVC and HDPE are all materials available for slot drainage which have become industry standards on construction projects.
In the construction industry
The civil engineer is responsible for drainage in construction projects. During the construction process, they set out all the necessary levels for roads, street gutters, drainage, culverts and sewers involved in construction operations.
Civil engineers and construction managers work alongside architects and supervisors, planners, quantity surveyors, and the general workforce, as well as subcontractors. Typically, most jurisdictions have some body of drainage law to govern to what degree a landowner can alter the drainage from their parcel.
Drainage options for the construction industry include:
Point drainage, which intercepts water at gullies (points). Gullies connect to drainage pipes beneath the ground surface, so deep excavation is required to facilitate this system. Support for deep trenches is required in the shape of planking, strutting or shoring.
Channel drainage, which intercepts water along the entire run of the channel. Channel drainage is typically manufactured from concrete, steel, polymer or composites. The interception rate of channel drainage is greater than point drainage and the excavation required is usually much less deep.
The surface opening of channel drainage usually comes in the form of gratings (polymer, plastic, steel or iron) or a single slot (slot drain) that run along the ground surface (typically manufactured from steel or iron).
In retaining walls
Earth retaining structures such as retaining walls also need to have groundwater drainage considered during their construction. Typical retaining walls are constructed of impermeable material that can block the path of groundwater. When groundwater flow is obstructed, hydrostatic pressure builds up against the wall and may cause significant damage. If the water pressure is not drained appropriately, retaining walls can bow, move, and fracture, causing seams to separate. The water pressure can also erode soil particles, leading to voids behind the wall and sinkholes in the soil above. Traditional retaining wall drainage systems can include French drains, drain pipes, or weep holes. To prevent soil erosion, geotextile filter fabrics are installed with the drainage system.
In planters
Drainage in planters refers to the implementation of effective drainage systems specifically designed for plant containers or pots. Proper drainage is crucial in planters to prevent waterlogging and promote healthy plant growth. Planter drainage involves the incorporation of drainage holes, drainage layers, or specialized drainage systems to ensure excess water can escape from the planter. This helps to prevent root rot, water accumulation, and other issues that can negatively impact plant health. Providing adequate drainage in planters supports optimal plant growth and contributes to the overall success of gardening or landscaping projects.
Drainage options for planters include:
Surface drains are typically used to manage runoff from paved surfaces, such as sidewalks and parking lots. Catch basins, which collect water and debris, are connected to underground pipes that carry the water away from the site.
Subsurface drains, on the other hand, are designed to manage water that seeps into the soil beneath the planting surface. French drains, which are gravel-filled trenches with perforated pipes at the bottom, are the most common type of subsurface drain. Trench drains, which are similar but shallower and wider, are also used in some situations.
Reasons for artificial drainage
Wetland soils may need drainage to be used for agriculture. In the northern United States and Europe, glaciation created numerous small lakes, which gradually filled with humus to make marshes. Some of these were drained using open ditches and trenches to make mucklands, which are primarily used for high-value crops such as vegetables.
The world's largest project of this type has been in process for centuries in the Netherlands. The area between Amsterdam, Haarlem and Leiden was, in prehistoric times, swampland and small lakes. Turf cutting (peat mining), subsidence and shoreline erosion gradually caused the formation of one large lake, the Haarlemmermeer, or lake of Haarlem. The invention of wind-powered pumping engines in the 15th century permitted some of the marginal land drainage. Still, the final drainage of the lake had to await the design of large steam-powered pumps and agreements between regional authorities. The lake was eliminated between 1849 and 1852, creating thousands of km2 of new land.
Coastal plains and river deltas may have seasonally or permanently high water tables and must have drainage improvements if they are to be used for agriculture. An example is the flatwoods citrus-growing region of Florida, United States. After periods of high rainfall, drainage pumps are employed to prevent damage to the citrus groves from overly wet soils. Rice production requires complete water control, as fields must be flooded or drained at different stages of the crop cycle. The Netherlands has also led the way in this type of drainage, draining lowlands along the shore and pushing back the sea, greatly enlarging the nation's original territory.
In moist climates, soils may be adequate for cropping with the exception that they become waterlogged for brief periods each year, from snow melt or from heavy rains. Soils that are predominantly clay will pass water very slowly downward. Meanwhile, plant roots suffocate because the excessive water around the roots eliminates air movement through the soil.
Other soils may have an impervious layer of mineralized soil, called a hardpan, or relatively impervious rock layers may underlie shallow soils. Drainage is especially important in tree fruit production. Soils that are otherwise excellent may be waterlogged for a week of the year, which is sufficient to kill fruit trees and cost the productivity of the land until replacements can be established. In each of these cases, appropriate drainage carries off temporary flushes of water to prevent damage to annual or perennial crops.
Drier areas are often farmed with irrigation, where one might not consider drainage necessary. However, irrigation water always contains minerals and salts, which can be concentrated to toxic levels by evapotranspiration. Irrigated land may need periodic flushes with excess irrigation water and drainage to control soil salinity.
| Technology | Hydraulic infrastructure | null |
46630 | https://en.wikipedia.org/wiki/Embedded%20system | Embedded system | An embedded system is a specialized computer system—a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts.
Because an embedded system typically controls physical operations of the machine that it is embedded within, it often has real-time computing constraints. Embedded systems control many devices in common use. By one estimate, ninety-eight percent of all microprocessors manufactured were used in embedded systems.
Modern embedded systems are often based on microcontrollers (i.e. microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP).
Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale.
Embedded systems range in size from portable personal devices such as digital watches and MP3 players to bigger machines like home appliances, industrial assembly lines, robots, transport vehicles, traffic light controllers, and medical imaging systems. Often they constitute subsystems of other machines like avionics in aircraft and astrionics in spacecraft. Large installations like factories, pipelines, and electrical grids rely on multiple embedded systems networked together. Generalized through software customization, embedded systems such as programmable logic controllers frequently make up their functional units.
Embedded systems range from those low in complexity, with a single microcontroller chip, to very high with multiple units, peripherals and networks, which may reside in equipment racks or across large geographical areas connected via long-distance communications lines.
History
Background
The origins of the microprocessor and the microcontroller can be traced back to the MOS integrated circuit, which is an integrated circuit chip fabricated from MOSFETs (metal–oxide–semiconductor field-effect transistors) and was developed in the early 1960s. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor system could be contained on several MOS LSI chips.
The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, released in 1971. It was developed by Federico Faggin, using his silicon-gate MOS technology, along with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima.
Development
One of the first recognizably modern embedded systems was the Apollo Guidance Computer, developed ca. 1965 by Charles Stark Draper at the MIT Instrumentation Laboratory. At the project's inception, the Apollo guidance computer was considered the riskiest item in the Apollo project as it employed the then newly developed monolithic integrated circuits to reduce the computer's size and weight.
An early mass-produced embedded system was the Autonetics D-17 guidance computer for the Minuteman missile, released in 1961. When the Minuteman II went into production in 1966, the D-17 was replaced with a new computer that represented the first high-volume use of integrated circuits.
Since these early applications in the 1960s, embedded systems have come down in price and there has been a dramatic rise in processing power and functionality. An early microprocessor, the Intel 4004 (released in 1971), was designed for calculators and other small systems but still required external memory and support chips. By the early 1980s, memory, input and output system components had been integrated into the same chip as the processor forming a microcontroller. Microcontrollers find applications where a general-purpose computer would be too costly. As the cost of microprocessors and microcontrollers fell, the prevalence of embedded systems increased.
A comparatively low-cost microcontroller may be programmed to fulfill the same role as a large number of separate components. With microcontrollers, it became feasible to replace, even in consumer products, expensive knob-based analog components such as potentiometers and variable capacitors with up/down buttons or knobs read out by a microprocessor. Although in this context an embedded system is usually more complex than a traditional solution, most of the complexity is contained within the microcontroller itself. Very few additional components may be needed and most of the design effort is in the software. Software prototyping and testing can be quicker compared with the design and construction of a new circuit not using an embedded processor.
Applications
Embedded systems are commonly found in consumer, industrial, automotive, home appliance, medical, telecommunication, commercial, aerospace and military applications.
Telecommunications systems employ numerous embedded systems from telephone switches for the network to cell phones at the end user. Computer networking uses dedicated routers and network bridges to route data.
Consumer electronics include MP3 players, television sets, mobile phones, video game consoles, digital cameras, GPS receivers, and printers. Household appliances, such as microwave ovens, washing machines and dishwashers, include embedded systems to provide flexibility, efficiency and features. Advanced heating, ventilation, and air conditioning (HVAC) systems use networked thermostats to more accurately and efficiently control temperature that can change by time of day and season. Home automation uses wired and wireless networking that can be used to control lights, climate, security, audio/visual, surveillance, etc., all of which use embedded devices for sensing and controlling.
Transportation systems from flight to automobiles increasingly use embedded systems. New airplanes contain advanced avionics such as inertial guidance systems and GPS receivers that also have considerable safety requirements. Spacecraft rely on astrionics systems for trajectory correction. Various electric motors — brushless DC motors, induction motors and DC motors — use electronic motor controllers. Automobiles, electric vehicles, and hybrid vehicles increasingly use embedded systems to maximize efficiency and reduce pollution. Other automotive safety systems using embedded systems include anti-lock braking system (ABS), electronic stability control (ESC/ESP), traction control (TCS) and automatic four-wheel drive.
Medical equipment uses embedded systems for monitoring and for various kinds of medical imaging (positron emission tomography (PET), single-photon emission computed tomography (SPECT), computed tomography (CT), and magnetic resonance imaging (MRI)) used for non-invasive internal inspections. Embedded systems within medical equipment are often powered by industrial computers.
Embedded systems are used for safety-critical systems in aerospace and defense industries. Unless connected to wired or wireless networks via on-chip 3G cellular or other methods for IoT monitoring and control purposes, these systems can be isolated from hacking and thus be more secure. For fire safety, the systems can be designed to have a greater ability to handle higher temperatures and continue to operate. In dealing with security, the embedded systems can be self-sufficient and be able to deal with cut electrical and communication systems.
Miniature wireless devices called motes are networked wireless sensors. Wireless sensor networking makes use of miniaturization made possible by advanced integrated circuit (IC) design to couple full wireless subsystems to sophisticated sensors, enabling people and companies to measure a myriad of things in the physical world and act on this information through monitoring and control systems. These motes are completely self-contained and will typically run off a battery source for years before the batteries need to be changed or charged.
Characteristics
Embedded systems are designed to perform a specific task, in contrast with general-purpose computers designed for multiple tasks. Some have real-time performance constraints that must be met, for reasons such as safety and usability; others may have low or no performance requirements, allowing the system hardware to be simplified to reduce costs.
Embedded systems are not always standalone devices. Many embedded systems are a small part within a larger device that serves a more general purpose. For example, the Gibson Robot Guitar features an embedded system for tuning the strings, but the overall purpose of the Robot Guitar is to play music. Similarly, an embedded system in an automobile provides a specific function as a subsystem of the car itself.
The program instructions written for embedded systems are referred to as firmware, and are stored in read-only memory or flash memory chips. They run with limited computer hardware resources: little memory, small or non-existent keyboard or screen.
User interfaces
Embedded systems range from no user interface at all, in systems dedicated to one task, to complex graphical user interfaces that resemble modern computer desktop operating systems. Simple embedded devices use buttons, light-emitting diodes (LED), graphic or character liquid-crystal displays (LCD) with a simple menu system. More sophisticated devices that use a graphical screen with touch sensing or screen-edge soft keys provide flexibility while minimizing space used: the meaning of the buttons can change with the screen, and selection involves the natural behavior of pointing at what is desired.
Some systems provide user interface remotely with the help of a serial (e.g. RS-232) or network (e.g. Ethernet) connection. This approach extends the capabilities of the embedded system, avoids the cost of a display, simplifies the board support package (BSP) and allows designers to build a rich user interface on the PC. A good example of this is the combination of an embedded HTTP server running on an embedded device (such as an IP camera or a network router). The user interface is displayed in a web browser on a PC connected to the device.
Processors in embedded systems
Examples of properties of typical embedded computers, when compared with their general-purpose counterparts, are low power consumption, small size, rugged operating ranges, and low per-unit cost. This comes at the expense of limited processing resources.
Numerous microcontrollers have been developed for embedded systems use. General-purpose microprocessors are also used in embedded systems, but they generally require more support circuitry than microcontrollers.
Ready-made computer boards
PC/104 and PC/104+ are examples of standards for ready-made computer boards intended for small, low-volume embedded and ruggedized systems. These are mostly x86-based and often physically small compared to a standard PC, although still quite large compared to most simple (8/16-bit) embedded systems. They may use DOS, FreeBSD, Linux, NetBSD, OpenHarmony or an embedded real-time operating system (RTOS) such as MicroC/OS-II, QNX or VxWorks.
In certain applications, where small size or power efficiency are not primary concerns, the components used may be compatible with those used in general-purpose x86 personal computers. Boards such as the VIA EPIA range help to bridge the gap by being PC-compatible but highly integrated, physically smaller or have other attributes making them attractive to embedded engineers. The advantage of this approach is that low-cost commodity components may be used along with the same software development tools used for general software development. Systems built in this way are still regarded as embedded since they are integrated into larger devices and fulfill a single role. Examples of devices that may adopt this approach are automated teller machines (ATM) and arcade machines, which contain code specific to the application.
However, most ready-made embedded systems boards are not PC-centered and do not use the ISA or PCI busses. When a system-on-a-chip processor is involved, there may be little benefit to having a standardized bus connecting discrete components, and the environment for both hardware and software tools may be very different.
One common design style uses a small system module, perhaps the size of a business card, holding high density BGA chips such as an ARM-based system-on-a-chip processor and peripherals, external flash memory for storage, and DRAM for runtime memory. The module vendor will usually provide boot software and make sure there is a selection of operating systems, usually including Linux and some real-time choices. These modules can be manufactured in high volume, by organizations familiar with their specialized testing issues, and combined with much lower volume custom mainboards with application-specific external peripherals. Prominent examples of this approach include Arduino and Raspberry Pi.
ASIC and FPGA SoC solutions
A system on a chip (SoC) contains a complete system consisting of multiple processors, multipliers, caches, different types of memory and commonly various peripherals like interfaces for wired or wireless communication, all on a single chip. Graphics processing units (GPU) and DSPs are often included in such chips. SoCs can be implemented as an application-specific integrated circuit (ASIC) or using a field-programmable gate array (FPGA), which typically can be reconfigured.
ASIC implementations are common for very-high-volume embedded systems like mobile phones and smartphones. ASIC or FPGA implementations may be used for lower-volume embedded systems with special requirements in terms of signal processing performance, interfaces and reliability, as in avionics.
Peripherals
Embedded systems talk with the outside world via peripherals, such as:
Serial communication interfaces (SCI): RS-232, RS-422, RS-485, etc.
Synchronous Serial Interface: I2C, SPI, SSC and ESSI (Enhanced Synchronous Serial Interface)
Universal Serial Bus (USB)
Media cards (SD cards, CompactFlash, etc.)
Network interface controller: Ethernet, WiFi, etc.
Fieldbuses: CAN bus, LIN-Bus, PROFIBUS, etc.
Timers: Phase-locked loops, programmable interval timers
General Purpose Input/Output (GPIO)
Analog-to-digital and digital-to-analog converters
Debugging: JTAG, In-system programming, background debug mode interface port, BITP, and DB9 ports.
Tools
As with other software, embedded system designers use compilers, assemblers, and debuggers to develop embedded system software. However, they may also use more specific tools:
In circuit debuggers or emulators (see next section).
Utilities to add a checksum or CRC to a program, so the embedded system can check if the program is valid (a minimal sketch follows this list).
For systems using digital signal processing, developers may use a computational notebook to simulate the mathematics.
System-level modeling and simulation tools help designers to construct simulation models of a system with hardware components such as processors, memories, DMA, interfaces, buses and software behavior flow as a state diagram or flow diagram using configurable library blocks. Simulation is conducted to select the right components by performing power vs. performance trade-offs, reliability analysis and bottleneck analysis. Typical reports that help a designer to make architecture decisions include application latency, device throughput, device utilization, power consumption of the full system as well as device-level power consumption.
A model-based development tool creates and simulates graphical data flow and UML state chart diagrams of components like digital filters, motor controllers, communication protocol decoding and multi-rate tasks.
Custom compilers and linkers may be used to optimize specialized hardware.
An embedded system may have its own special language or design tool, or add enhancements to an existing language such as Forth or Basic.
Another alternative is to add an RTOS or embedded operating system.
Modeling and code-generating tools, often based on state machines.
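As a hedged illustration of the checksum idea mentioned in the list above, the sketch below shows a standard bitwise CRC-32 (the reflected IEEE 802.3 polynomial) together with a boot-time validity check. The linker symbols marking the image and its stored CRC are assumptions, since real memory layouts are toolchain-specific.

```c
#include <stdint.h>
#include <stddef.h>

/* Bitwise CRC-32 (reflected IEEE 802.3 polynomial 0xEDB88320). */
static uint32_t crc32(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : (crc >> 1);
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Hypothetical linker symbols: start of the program image in flash and
 * the CRC value appended by a build utility (this layout is an assumption). */
extern const uint8_t  __image_start[];
extern const uint32_t __image_crc;

int program_is_valid(size_t image_len)
{
    /* The system can refuse to boot, or fall back, if this check fails. */
    return crc32(__image_start, image_len) == __image_crc;
}
```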
Software tools can come from several sources:
Software companies that specialize in the embedded market
Ported from the GNU software development tools
Sometimes, development tools for a personal computer can be used if the embedded processor is a close relative to a common PC processor
As the complexity of embedded systems grows, higher-level tools and operating systems are migrating into machinery where it makes sense. For example, cellphones, personal digital assistants and other consumer computers often need significant software that is purchased or provided by a person other than the manufacturer of the electronics. In these systems, an open programming environment such as Linux, NetBSD, FreeBSD, OSGi or Embedded Java is required so that the third-party software provider can sell to a large market.
Debugging
Embedded debugging may be performed at different levels, depending on the facilities available. Considerations include: does it slow down the main application, how close is the debugged system or application to the actual system or application, how expressive are the triggers that can be set for debugging (e.g., inspecting the memory when a particular program counter value is reached), and what can be inspected in the debugging process (such as, only memory, or memory and registers, etc.).
From simplest to most sophisticated, debugging techniques and systems can be roughly grouped into the following areas:
Interactive resident debugging, using the simple shell provided by the embedded operating system (e.g. Forth and Basic)
Software-only debuggers have the benefit that they do not need any hardware modification but have to carefully control what they record in order to conserve time and storage space.
External debugging using logging or serial port output to trace operation, using either a monitor in flash or a debug server like the Remedy Debugger, which even works for heterogeneous multicore systems.
An in-circuit debugger (ICD), a hardware device that connects to the microprocessor via a JTAG or Nexus interface. This allows the operation of the microprocessor to be controlled externally, but is typically restricted to specific debugging capabilities in the processor.
An in-circuit emulator (ICE) replaces the microprocessor with a simulated equivalent, providing full control over all aspects of the microprocessor.
A complete emulator provides a simulation of all aspects of the hardware, allowing all of it to be controlled and modified, and allowing debugging on a normal PC. The downsides are expense and slow operation, in some cases up to 100 times slower than the final system.
For SoC designs, the typical approach is to verify and debug the design on an FPGA prototype board. Tools such as Certus are used to insert probes in the FPGA implementation that make signals available for observation. This is used to debug hardware, firmware and software interactions across multiple FPGAs in an implementation with capabilities similar to a logic analyzer.
Unless restricted to external debugging, the programmer can typically load and run software through the tools, view the code running in the processor, and start or stop its operation. The view of the code may be as high-level programming language, assembly code or mixture of both.
Tracing
Real-time operating systems often support tracing of operating system events. A graphical view is presented by a host PC tool, based on a recording of the system behavior. The trace recording can be performed in software, by the RTOS, or by special tracing hardware. RTOS tracing allows developers to understand timing and performance issues of the software system and gives a good understanding of the high-level system behaviors. Trace recording in embedded systems can be achieved using hardware or software solutions. Software-based trace recording does not require specialized debugging hardware and can be used to record traces in deployed devices, but it can have an impact on CPU and RAM usage. One example of a software-based tracing method used in RTOS environments is the use of empty macros which are invoked by the operating system at strategic places in the code, and can be implemented to serve as hooks.
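A minimal sketch of the empty-macro hook technique just described, assuming one 32-bit record per event and a RAM ring buffer that a host tool drains over JTAG or a serial port; all names are illustrative rather than any particular RTOS's API.

```c
#include <stdint.h>

/* trace.h (sketch): the OS calls these macros at strategic places.
 * With tracing disabled they compile to nothing, so the hooks cost
 * no code or time in production builds. */
#ifdef TRACE_ENABLED

#define TRACE_BUF_LEN 256u                           /* power of two */
extern volatile uint32_t trace_buf[TRACE_BUF_LEN];   /* defined in trace.c */
extern volatile uint32_t trace_head;

static inline void trace_event(uint16_t id, uint16_t arg)
{
    /* Pack event id and argument into one record; wrap when full. */
    trace_buf[trace_head++ % TRACE_BUF_LEN] = ((uint32_t)id << 16) | arg;
}

#define TRACE_TASK_SWITCH(task) trace_event(1u, (task))
#define TRACE_ISR_ENTER(irq)    trace_event(2u, (irq))

#else  /* hooks disabled: empty macros */

#define TRACE_TASK_SWITCH(task) ((void)0)
#define TRACE_ISR_ENTER(irq)    ((void)0)

#endif
```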
Reliability
Embedded systems often reside in machines that are expected to run continuously for years without error, and in some cases recover by themselves if an error occurs. Therefore, the software is usually developed and tested more carefully than that for personal computers, and unreliable mechanical moving parts such as disk drives, switches or buttons are avoided.
Specific reliability issues may include:
The system cannot safely be shut down for repair, or it is too inaccessible to repair. Examples include space systems, undersea cables, navigational beacons, bore-hole systems, and automobiles.
The system must be kept running for safety reasons. Reduced functionality in the event of failure may be intolerable. Often backups are selected by an operator. Examples include aircraft navigation, reactor control systems, safety-critical chemical factory controls, train signals.
The system will lose large amounts of money when shut down: Telephone switches, factory controls, bridge and elevator controls, funds transfer and market making, automated sales and service.
A variety of techniques are used, sometimes in combination, to recover from errors—both software bugs such as memory leaks, and also soft errors in the hardware:
A watchdog timer that resets and restarts the system unless the software periodically notifies the watchdog subsystem (a minimal sketch follows this list)
Designing with a trusted computing base (TCB) architecture ensures a highly secure and reliable system environment
A hypervisor designed for embedded systems is able to provide secure encapsulation for any subsystem component so that a compromised software component cannot interfere with other subsystems, or privileged-level system software. This encapsulation keeps faults from propagating from one subsystem to another, thereby improving reliability. This may also allow a subsystem to be automatically shut down and restarted on fault detection.
Immunity-aware programming can help engineers produce more reliable embedded systems code. Guidelines and coding rules such as MISRA C/C++ aim to assist developers in producing reliable, portable firmware in a number of ways: typically by advising or mandating against coding practices that may lead to run-time errors (memory leaks, invalid pointer uses); by the use of run-time checks and exception handling (range/sanity checks, divide-by-zero and buffer index validity checks, default cases in logic checks); by loop bounding; by the production of human-readable, well-commented and well-structured code; and by avoiding language ambiguities that may lead to compiler-induced inconsistencies or side effects (expression evaluation ordering, recursion, certain types of macro). These rules can often be used in conjunction with static code checkers or bounded model checking for functional verification, and also assist in determining code timing properties.
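The two sketches below illustrate, under stated assumptions, the watchdog pattern from the list above and the defensive coding style such guidelines encourage; neither is taken from a real hardware manual or from MISRA itself. First, a watchdog that is "kicked" only when every subsystem has reported progress, so a single hung task lets the hardware timer expire and reset the system (the register address and key are hypothetical):

```c
#include <stdint.h>

#define WDT_KICK_REG (*(volatile uint32_t *)0x40001000u) /* assumed address */
#define WDT_KICK_KEY 0xA5A5A5A5u                          /* assumed key    */

#define ALL_SUBSYSTEMS 0x7u            /* e.g., three monitored subsystems */
static volatile uint32_t alive_flags;  /* one bit per subsystem */

void subsystem_report_alive(unsigned bit) { alive_flags |= (1u << bit); }

void watchdog_poll(void)
{
    if (alive_flags == ALL_SUBSYSTEMS) {
        WDT_KICK_REG = WDT_KICK_KEY;   /* restart the hardware countdown */
        alive_flags  = 0u;
    }
    /* Otherwise do nothing: a hung subsystem lets the watchdog expire
     * and the hardware resets the whole system. */
}
```

And a fragment in the bounded, range-checked style the coding guidelines promote:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_SAMPLES 8u

int32_t average_samples(const int32_t *samples, size_t count)
{
    int64_t sum = 0;
    /* Range/sanity checks prevent a null dereference, divide-by-zero
     * and unbounded iteration. */
    if ((samples == NULL) || (count == 0u) || (count > MAX_SAMPLES)) {
        return 0;
    }
    for (size_t i = 0u; i < count; i++) {  /* loop bounded by checked count */
        sum += samples[i];
    }
    return (int32_t)(sum / (int64_t)count);
}
```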
High vs. low volume
For high-volume systems such as mobile phones, minimizing cost is usually the primary design consideration. Engineers typically select hardware that is just good enough to implement the necessary functions.
For low-volume or prototype embedded systems, general-purpose computers may be adapted by limiting the programs or by replacing the operating system with an RTOS.
Embedded software architectures
In 1978, the National Electrical Manufacturers Association released ICS 3-1978, a standard for programmable microcontrollers that covered almost any computer-based controller, such as single-board computers and numerical and event-based controllers.
There are several different types of software architecture in common use.
Simple control loop
In this design, the software simply has a loop which monitors the input devices. The loop calls subroutines, each of which manages a part of the hardware or software. Hence it is called a simple control loop or programmed input-output.
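A minimal sketch of such a loop, with illustrative subroutine names standing in for hardware-specific code:

```c
void read_inputs(void);      /* poll each input device */
void update_logic(void);     /* compute new outputs from the inputs */
void write_outputs(void);    /* drive the hardware */

int main(void)
{
    for (;;) {               /* run forever; there is no OS to return to */
        read_inputs();
        update_logic();      /* each subroutine manages one part of the system */
        write_outputs();
    }
}
```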
Interrupt-controlled system
Some embedded systems are predominantly controlled by interrupts. This means that tasks performed by the system are triggered by different kinds of events; an interrupt could be generated, for example, by a timer at a predefined interval, or by a serial port controller receiving data.
This architecture is used if event handlers need low latency, and the event handlers are short and simple. These systems run a simple task in a main loop also, but this task is not very sensitive to unexpected delays. Sometimes the interrupt handler will add longer tasks to a queue structure. Later, after the interrupt handler has finished, these tasks are executed by the main loop. This method brings the system close to a multitasking kernel with discrete processes.
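A hedged sketch of that queueing pattern for a serial port, assuming a single-producer (ISR) / single-consumer (main loop) ring buffer; the UART register address and the handler are hypothetical.

```c
#include <stdint.h>

#define UART_DATA (*(volatile uint8_t *)0x40002000u) /* assumed register */
#define QUEUE_LEN 16u

void process_byte(uint8_t b);            /* assumed application handler */

static volatile uint8_t queue[QUEUE_LEN];
static volatile uint8_t q_head, q_tail;  /* ISR writes head, main reads tail */

void uart_rx_isr(void)                   /* kept short: enqueue and return */
{
    uint8_t next = (uint8_t)((q_head + 1u) % QUEUE_LEN);
    if (next != q_tail) {                /* if the queue is full, drop */
        queue[q_head] = UART_DATA;
        q_head = next;
    }
}

void main_loop(void)
{
    for (;;) {
        while (q_tail != q_head) {       /* run the deferred, longer tasks */
            process_byte(queue[q_tail]);
            q_tail = (uint8_t)((q_tail + 1u) % QUEUE_LEN);
        }
        /* ... plus the main-loop task that tolerates delays ... */
    }
}
```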
Cooperative multitasking
Cooperative multitasking is very similar to the simple control loop scheme, except that the loop is hidden in an API. The programmer defines a series of tasks, and each task gets its own environment to run in. When a task is idle, it calls an idle routine which passes control to another task.
The advantages and disadvantages are similar to those of the control loop, except that adding new software is easier: one simply writes a new task or adds to the queue.
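In its simplest form, the hidden loop is just a task table walked round-robin, as in this sketch; each "task" yields by returning, and the names are illustrative. Real cooperative kernels usually switch stacks so a task can yield mid-function, but the control flow is the same.

```c
typedef void (*task_fn)(void);

void task_blink(void);       /* illustrative tasks, defined elsewhere */
void task_comms(void);

static const task_fn tasks[] = { task_blink, task_comms };
#define NUM_TASKS (sizeof tasks / sizeof tasks[0])

int main(void)
{
    for (;;) {                       /* the loop hidden behind the API */
        for (unsigned i = 0; i < NUM_TASKS; i++) {
            tasks[i]();              /* runs until the task yields (returns) */
        }
    }
}
```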
Preemptive multitasking or multi-threading
In this type of system, a low-level piece of code switches between tasks or threads based on a timer invoking an interrupt. This is the level at which the system is generally considered to have an operating system kernel. Depending on how much functionality is required, it introduces more or less of the complexities of managing multiple tasks running conceptually in parallel.
As any code can potentially damage the data of another task (except in systems using a memory management unit), programs must be carefully designed and tested, and access to shared data must be controlled by some synchronization strategy such as message queues, semaphores or a non-blocking synchronization scheme.
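As a hedged illustration of the simplest such strategy on a single-core microcontroller: briefly masking interrupts around a read-modify-write of shared data. The disable/enable primitives stand in for platform intrinsics and are assumptions here.

```c
#include <stdint.h>

void disable_irq(void);      /* assumed platform primitives */
void enable_irq(void);

static volatile uint32_t shared_counter;

void task_increment(void)
{
    disable_irq();           /* enter critical section */
    shared_counter++;        /* read-modify-write now cannot be preempted */
    enable_irq();            /* leave critical section */
}
```

For longer critical sections, or on multi-core parts, an RTOS semaphore, message queue or lock-free structure is preferred, since masking interrupts globally adds latency to every handler.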
Because of these complexities, it is common for organizations to use an off-the-shelf RTOS, allowing the application programmers to concentrate on device functionality rather than operating system services. The choice to include an RTOS brings in its own issues, however, as the selection must be made prior to starting the application development process. This timing forces developers to choose the embedded operating system for their device based on current requirements and so restricts future options to a large extent.
The level of complexity in embedded systems is continuously growing as devices are required to manage peripherals and tasks such as serial, USB, TCP/IP, Bluetooth, Wireless LAN, trunk radio, multiple channels, data and voice, enhanced graphics, multiple states, multiple threads, numerous wait states and so on. These trends are leading to the uptake of embedded middleware in addition to an RTOS.
Microkernels and exokernels
A microkernel allocates memory and switches the CPU to different threads of execution. User-mode processes implement major functions such as file systems, network interfaces, etc.
Exokernels communicate efficiently by normal subroutine calls. The hardware and all the software in the system are available to and extensible by application programmers.
Monolithic kernels
A monolithic kernel is a relatively large kernel with sophisticated capabilities adapted to suit an embedded environment. This gives programmers an environment similar to a desktop operating system like Linux or Microsoft Windows, and is therefore very productive for development. On the downside, it requires considerably more hardware resources, is often more expensive, and, because of the complexity of these kernels, can be less predictable and reliable.
Common examples of embedded monolithic kernels are embedded Linux, VxWorks and Windows CE.
Despite the increased cost in hardware, this type of embedded system is increasing in popularity, especially on the more powerful embedded devices such as wireless routers and GPS navigation systems.
Additional software components
In addition to the core operating system, many embedded systems have additional upper-layer software components. These components include networking protocol stacks like CAN, TCP/IP, FTP, HTTP, and HTTPS, and storage capabilities like FAT and flash memory management systems. If the embedded device has audio and video capabilities, then the appropriate drivers and codecs will be present in the system. In the case of the monolithic kernels, many of these software layers may be included in the kernel. In the RTOS category, the availability of additional software components depends upon the commercial offering.
Domain-specific architectures
In the automotive sector, AUTOSAR is a standard architecture for embedded software.
| Technology | Computer hardware | null |
46656 | https://en.wikipedia.org/wiki/Radio%20telescope | Radio telescope | A radio telescope is a specialized antenna and radio receiver used to detect radio waves from astronomical radio sources in the sky. Radio telescopes are the main observing instrument used in radio astronomy, which studies the radio frequency portion of the electromagnetic spectrum, just as optical telescopes are used to make observations in the visible portion of the spectrum in traditional optical astronomy. Unlike optical telescopes, radio telescopes can be used in the daytime as well as at night.
Since astronomical radio sources such as planets, stars, nebulas and galaxies are very far away, the radio waves coming from them are extremely weak, so radio telescopes require very large antennas to collect enough radio energy to study them, and extremely sensitive receiving equipment. Radio telescopes are typically large parabolic ("dish") antennas similar to those employed in tracking and communicating with satellites and space probes. They may be used individually or linked together electronically in an array. Radio observatories are preferentially located far from major centers of population to avoid electromagnetic interference (EMI) from radio, television, radar, motor vehicles, and other man-made electronic devices.
Radio waves from space were first detected by engineer Karl Guthe Jansky in 1932 at Bell Telephone Laboratories in Holmdel, New Jersey using an antenna built to study radio receiver noise. The first purpose-built radio telescope was a 9-meter parabolic dish constructed by radio amateur Grote Reber in his back yard in Wheaton, Illinois in 1937. The sky survey he performed is often considered the beginning of the field of radio astronomy.
Early radio telescopes
The first radio antenna used to identify an astronomical radio source was built by Karl Guthe Jansky, an engineer with Bell Telephone Laboratories, in 1932. Jansky was assigned the task of identifying sources of static that might interfere with radiotelephone service. Jansky's antenna was an array of dipoles and reflectors designed to receive short wave radio signals at a frequency of 20.5 MHz (wavelength about 14.6 meters). It was mounted on a turntable that allowed it to rotate in any direction, earning it the name "Jansky's merry-go-round." It had a diameter of approximately 30 meters (100 ft) and stood 6 meters (20 ft) tall. By rotating the antenna, the direction of the received interfering radio source (static) could be pinpointed. A small shed to the side of the antenna housed an analog pen-and-paper recording system. After recording signals from all directions for several months, Jansky eventually categorized them into three types of static: nearby thunderstorms, distant thunderstorms, and a faint steady hiss above shot noise, of unknown origin. Jansky finally determined that the "faint hiss" repeated on a cycle of 23 hours and 56 minutes. This period is the length of an astronomical sidereal day, the time it takes any "fixed" object located on the celestial sphere to come back to the same location in the sky. Thus Jansky suspected that the hiss originated outside of the Solar System, and by comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way Galaxy and was strongest in the direction of the center of the galaxy, in the constellation of Sagittarius.
An amateur radio operator, Grote Reber, was one of the pioneers of what became known as radio astronomy. He built the first parabolic "dish" radio telescope, 9 meters (30 ft) in diameter, in his back yard in Wheaton, Illinois in 1937. He repeated Jansky's pioneering work, identifying the Milky Way as the first off-world radio source, and he went on to conduct the first sky survey at very high radio frequencies, discovering other radio sources. The rapid development of radar during World War II created technology which was applied to radio astronomy after the war, and radio astronomy became a branch of astronomy, with universities and research institutes constructing large radio telescopes.
Types
The range of frequencies in the electromagnetic spectrum that makes up the radio spectrum is very large. As a consequence, the types of antennas that are used as radio telescopes vary widely in design, size, and configuration. At wavelengths of 30 meters to 3 meters (10–100 MHz), they are generally either directional antenna arrays similar to "TV antennas" or large stationary reflectors with movable focal points. Since the wavelengths being observed with these types of antennas are so long, the "reflector" surfaces can be constructed from coarse wire mesh such as chicken wire.
At shorter wavelengths parabolic "dish" antennas predominate. The angular resolution of a dish antenna is determined by the ratio of the diameter of the dish to the wavelength of the radio waves being observed. This dictates the dish size a radio telescope needs for a useful resolution. Radio telescopes that operate at wavelengths of 3 meters to 30 cm (100 MHz to 1 GHz) are usually well over 100 meters in diameter. Telescopes working at wavelengths shorter than 30 cm (above 1 GHz) range in size from 3 to 90 meters in diameter.
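This diameter-to-wavelength relationship is conventionally expressed with the Rayleigh diffraction limit (a standard formula, not stated explicitly in the text above). A worked example for a 100-meter dish observing at the 21 cm hydrogen line:

```latex
\theta \approx 1.22\,\frac{\lambda}{D}
\qquad\Rightarrow\qquad
\theta \approx 1.22 \times \frac{0.21\,\mathrm{m}}{100\,\mathrm{m}}
       \approx 2.6\times10^{-3}\,\mathrm{rad}
       \approx 9\,\mathrm{arcmin}
```

This is far coarser than optical telescopes achieve, which is why single dishes must be so large and why interferometry (discussed below) matters so much in radio astronomy.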
Frequencies
The increasing use of radio frequencies for communication makes astronomical observations more and more difficult (see Open spectrum).
Negotiations to defend the frequency allocation for parts of the spectrum most useful for observing the universe are coordinated in the Scientific Committee on Frequency Allocations for Radio Astronomy and Space Science.
Some of the more notable frequency bands used by radio telescopes include:
Every frequency in the United States National Radio Quiet Zone
Channel 37: 608 to 614 MHz
The "Hydrogen line", also known as the "21 centimeter line": 1,420.40575177 MHz, used by many radio telescopes including The Big Ear in its discovery of the Wow! signal
1,406 MHz and 430 MHz
The Waterhole: 1,420 to 1,666 MHz
The Arecibo Observatory had several receivers that together covered the whole 1–10 GHz range.
The Wilkinson Microwave Anisotropy Probe mapped the cosmic microwave background radiation in 5 different frequency bands, centered on 23 GHz, 33 GHz, 41 GHz, 61 GHz, and 94 GHz.
Big dishes
The world's largest filled-aperture (i.e. full dish) radio telescope is the Five-hundred-meter Aperture Spherical Telescope (FAST) completed in 2016 by China. The dish, with an area as large as 30 football fields, is built into a natural karst depression in the landscape in Guizhou province and cannot move; the feed antenna is in a cabin suspended above the dish on cables. The active dish is composed of 4,450 moveable panels controlled by a computer. By changing the shape of the dish and moving the feed cabin on its cables, the telescope can be steered to point to any region of the sky up to 40° from the zenith. Although the dish is 500 meters in diameter, only a 300-meter circular area on the dish is illuminated by the feed antenna at any given time, so the actual effective aperture is 300 meters. Construction began in 2007 and was completed in July 2016; the telescope became operational on September 25, 2016.
The world's second largest filled-aperture telescope was the Arecibo radio telescope located in Arecibo, Puerto Rico, though it suffered catastrophic collapse on 1 December 2020. Arecibo was one of the world's few radio telescopes also capable of active (i.e., transmitting) radar imaging of near-Earth objects (see: radar astronomy); most other telescopes employ passive detection, i.e., receiving only. Arecibo was another stationary dish telescope like FAST. Arecibo's dish was built into a natural depression in the landscape, and the antenna was steerable within an angle of about 20° of the zenith by moving the suspended feed antenna, giving use of a 270-meter diameter portion of the dish for any individual observation.
The largest individual radio telescope of any kind is the RATAN-600 located near Nizhny Arkhyz, Russia, which consists of a 576-meter circle of rectangular radio reflectors, each of which can be pointed towards a central conical receiver.
The above stationary dishes are not fully "steerable"; they can only be aimed at points in an area of the sky near the zenith, and cannot receive from sources near the horizon. The largest fully steerable dish radio telescope is the 100 meter Green Bank Telescope in West Virginia, United States, constructed in 2000. The largest fully steerable radio telescope in Europe is the Effelsberg 100-m Radio Telescope near Bonn, Germany, operated by the Max Planck Institute for Radio Astronomy, which also was the world's largest fully steerable telescope for 30 years until the Green Bank antenna was constructed. The third-largest fully steerable radio telescope is the 76-meter Lovell Telescope at Jodrell Bank Observatory in Cheshire, England, completed in 1957. The fourth-largest fully steerable radio telescopes are six 70-meter dishes: three Russian RT-70, and three in the NASA Deep Space Network. The planned Qitai Radio Telescope, at a diameter of 110 meters, is expected to become the world's largest fully steerable single-dish radio telescope when completed in 2028.
A more typical radio telescope has a single antenna of about 25 meters diameter. Dozens of radio telescopes of about this size are operated in radio observatories all over the world.
Gallery of big dishes
Radio telescopes in space
Since 1965, humans have launched three space-based radio telescopes. The first, KRT-10, was attached to the Salyut 6 orbital space station in 1979. In 1997, Japan sent the second, HALCA. The third, Spektr-R, was launched by Russia in 2011.
Radio interferometry
One of the most notable developments came in 1946 with the introduction of the technique called astronomical interferometry, which means combining the signals from multiple antennas so that they simulate a larger antenna, in order to achieve greater resolution. Astronomical radio interferometers usually consist either of arrays of parabolic dishes (e.g., the One-Mile Telescope), arrays of one-dimensional antennas (e.g., the Molonglo Observatory Synthesis Telescope) or two-dimensional arrays of omnidirectional dipoles (e.g., Tony Hewish's Pulsar Array). All of the telescopes in the array are widely separated and are usually connected using coaxial cable, waveguide, optical fiber, or other type of transmission line. Recent advances in the stability of electronic oscillators also now permit interferometry to be carried out by independent recording of the signals at the various antennas, and then later correlating the recordings at some central processing facility. This process is known as Very Long Baseline Interferometry (VLBI). Interferometry does increase the total signal collected, but its primary purpose is to vastly increase the resolution through a process called aperture synthesis. This technique works by superposing (interfering) the signal waves from the different telescopes on the principle that waves that coincide with the same phase will add to each other while two waves that have opposite phases will cancel each other out. This creates a combined telescope that is equivalent in resolution (though not in sensitivity) to a single antenna whose diameter is equal to the spacing of the antennas furthest apart in the array.
A high-quality image requires a large number of different separations between telescopes. Projected separation between any two telescopes, as seen from the radio source, is called a baseline. For example, the Very Large Array (VLA) near Socorro, New Mexico has 27 telescopes with 351 independent baselines at once, which achieves a resolution of 0.2 arc seconds at 3 cm wavelengths. Martin Ryle's group in Cambridge obtained a Nobel Prize for interferometry and aperture synthesis. The Lloyd's mirror interferometer was also developed independently in 1946 by Joseph Pawsey's group at the University of Sydney. In the early 1950s, the Cambridge Interferometer mapped the radio sky to produce the famous 2C and 3C surveys of radio sources. An example of a large physically connected radio telescope array is the Giant Metrewave Radio Telescope, located in Pune, India. The largest array, the Low-Frequency Array (LOFAR), finished in 2012, is located in western Europe and consists of about 81,000 small antennas in 48 stations distributed over an area several hundreds of kilometers in diameter and operates between 1.25 and 30 m wavelengths. VLBI systems using post-observation processing have been constructed with antennas thousands of miles apart. Radio interferometers have also been used to obtain detailed images of the anisotropies and the polarization of the Cosmic Microwave Background, like the CBI interferometer in 2004.
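The baseline count quoted for the VLA follows from simple combinatorics: an array of N antennas provides N(N-1)/2 simultaneous antenna pairs.

```latex
N_{\text{baselines}} = \binom{N}{2} = \frac{N(N-1)}{2},
\qquad
\frac{27 \times 26}{2} = 351 \quad \text{(the VLA's 27 dishes)}
```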
The world's largest physically connected telescope, the Square Kilometre Array (SKA), is planned to start operations in 2025.
Astronomical observations
Many astronomical objects are not only observable in visible light but also emit radiation at radio wavelengths. Besides observing energetic objects such as pulsars and quasars, radio telescopes are able to "image" most astronomical objects such as galaxies, nebulae, and even radio emissions from planets.
| Technology | Optical instruments | null |
46675 | https://en.wikipedia.org/wiki/Xylem | Xylem | Xylem is one of the two types of transport tissue in vascular plants, the other being phloem; both of these are part of the vascular bundle. The basic function of the xylem is to transport water upward from the roots to parts of the plants such as stems and leaves, but it also transports nutrients. The word xylem is derived from the Ancient Greek word, (xylon), meaning "wood"; the best-known xylem tissue is wood, though it is found throughout a plant. The term was introduced by Carl Nägeli in 1858.
Structure
The most distinctive xylem cells are the long tracheary elements that transport water. Tracheids and vessel elements are distinguished by their shape; vessel elements are shorter, and are connected together into long tubes that are called vessels.
Xylem also contains two other types of cells: parenchyma and fibers.
Xylem can be found:
in vascular bundles, present in non-woody plants and non-woody parts of woody plants
in secondary xylem, laid down by a meristem called the vascular cambium in woody plants
as part of a stelar arrangement not divided into bundles, as in many ferns.
In transitional stages of plants with secondary growth, the first two categories are not mutually exclusive, although usually a vascular bundle will contain primary xylem only.
The branching pattern exhibited by xylem follows Murray's law.
Primary and secondary xylem
Primary xylem is formed during primary growth from procambium. It includes protoxylem and metaxylem. Metaxylem develops after the protoxylem but before secondary xylem. Metaxylem has wider vessels and tracheids than protoxylem.
Secondary xylem is formed during secondary growth from vascular cambium. Although secondary xylem is also found in members of the gymnosperm groups Gnetophyta and Ginkgophyta and to a lesser extent in members of the Cycadophyta, the two main groups in which secondary xylem can be found are:
conifers (Coniferae): there are approximately 600 known species of conifers. All species have secondary xylem, which is relatively uniform in structure throughout this group. Many conifers become tall trees: the secondary xylem of such trees is used and marketed as softwood.
angiosperms (Angiospermae): there are approximately 250,000 known species of angiosperms. Within this group secondary xylem is rare in the monocots. Many non-monocot angiosperms become trees, and the secondary xylem of these is used and marketed as hardwood.
Main function – upwards water transport
The xylem, vessels and tracheids of the roots, stems and leaves are interconnected to form a continuous system of water-conducting channels reaching all parts of the plants. The system transports water and soluble mineral nutrients from the roots throughout the plant. It is also used to replace water lost during transpiration and photosynthesis. Xylem sap consists mainly of water and inorganic ions, although it can also contain a number of organic chemicals as well. The transport is passive, not powered by energy spent by the tracheary elements themselves, which are dead by maturity and no longer have living contents. Transporting sap upwards becomes more difficult as the height of a plant increases and upwards transport of water by xylem is considered to limit the maximum height of trees. Three phenomena cause xylem sap to flow:
Pressure flow hypothesis: Sugars produced in the leaves and other green tissues are kept in the phloem system, creating a solute pressure differential versus the xylem system carrying a far lower load of solutes—water and minerals. The phloem pressure can rise to several MPa, far higher than atmospheric pressure. Selective inter-connection between these systems allows this high solute concentration in the phloem to draw xylem fluid upwards by negative pressure.
Transpirational pull: Similarly, the evaporation of water from the surfaces of mesophyll cells to the atmosphere also creates a negative pressure at the top of a plant. This causes millions of minute menisci to form in the mesophyll cell wall. The resulting surface tension causes a negative pressure or tension in the xylem that pulls the water from the roots and soil.
Root pressure: If the water potential of the root cells is more negative than that of the soil, usually due to high concentrations of solute, water can move by osmosis into the root from the soil. This causes a positive pressure that forces sap up the xylem towards the leaves. In some circumstances, the sap will be forced from the leaf through a hydathode in a phenomenon known as guttation. Root pressure is highest in the morning, before the stomata open and allow transpiration to begin. Different plant species can have different root pressures even in a similar environment; examples include up to 145 kPa in Vitis riparia but around zero in Celastrus orbiculatus.
The primary force that creates the capillary action movement of water upwards in plants is the adhesion between the water and the surface of the xylem conduits. Capillary action provides the force that establishes an equilibrium configuration, balancing gravity. When transpiration removes water at the top, flow is needed to return to the equilibrium.
Transpirational pull results from the evaporation of water from the surfaces of cells in the leaves. This evaporation causes the surface of the water to recess into the pores of the cell wall. By capillary action, the water forms concave menisci inside the pores. The high surface tension of water pulls the concavity outwards, generating enough force to lift water as high as a hundred meters from ground level to a tree's highest branches.
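The claim that menisci in cell-wall pores can lift water a hundred meters can be checked with a back-of-the-envelope calculation (illustrative numbers; the Young-Laplace relation is standard physics, not part of the text above):

```latex
% Tension needed to support a 100 m water column
\Delta P = \rho g h \approx 1000 \times 9.8 \times 100 \approx 1\,\mathrm{MPa}
% Young-Laplace: meniscus radius able to sustain that tension
r = \frac{2\gamma}{\Delta P}
  \approx \frac{2 \times 0.072\,\mathrm{N/m}}{10^{6}\,\mathrm{Pa}}
  \approx 1.4\times10^{-7}\,\mathrm{m} \approx 0.15\,\mu\mathrm{m}
```

A required pore radius of roughly 0.15 micrometers is consistent with the sub-micrometer spaces in plant cell walls, which is why the mechanism is physically plausible.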
Transpirational pull requires that the vessels transporting the water be very small in diameter; otherwise, cavitation would break the water column. And as water evaporates from leaves, more is drawn up through the plant to replace it. When the water pressure within the xylem reaches extreme levels due to low water input from the roots (if, for example, the soil is dry), then the gases come out of solution and form a bubble – an embolism forms, which will spread quickly to other adjacent cells, unless bordered pits are present (these have a plug-like structure called a torus, that seals off the opening between adjacent cells and stops the embolism from spreading). Even after an embolism has occurred, plants are able to refill the xylem and restore the functionality.
Cohesion-tension theory
The cohesion-tension theory is a theory of intermolecular attraction that explains the process of water flow upwards (against the force of gravity) through the xylem of plants. It was proposed in 1894 by John Joly and Henry Horatio Dixon. Despite numerous objections, this is the most widely accepted theory for the transport of water through a plant's vascular system, based on the classical research of Dixon and Joly (1894), Eugen Askenasy (1845–1903) (1895), and Dixon (1914, 1924).
Water is a polar molecule. When two water molecules approach one another, the slightly negatively charged oxygen atom of one forms a hydrogen bond with a slightly positively charged hydrogen atom in the other. This attractive force, along with other intermolecular forces, is one of the principal factors responsible for the occurrence of surface tension in liquid water. It also allows plants to draw water from the root through the xylem to the leaf.
Water is constantly lost through transpiration from the leaf. When one water molecule is lost another is pulled along by the processes of cohesion and tension. Transpiration pull, utilizing capillary action and the inherent surface tension of water, is the primary mechanism of water movement in plants. However, it is not the only mechanism involved. Any use of water in leaves forces water to move into them.
Transpiration in leaves creates tension (differential pressure) in the cell walls of mesophyll cells. Because of this tension, water is being pulled up from the roots into the leaves, helped by cohesion (the pull between individual water molecules, due to hydrogen bonds) and adhesion (the stickiness between water molecules and the hydrophilic cell walls of plants). This mechanism of water flow works because of water potential (water flows from high to low potential), and the rules of simple diffusion.
Over the past century, there has been a great deal of research regarding the mechanism of xylem sap transport; today, most plant scientists continue to agree that the cohesion-tension theory best explains this process, but multiforce theories that hypothesize several alternative mechanisms have been suggested, including longitudinal cellular and xylem osmotic pressure gradients, axial potential gradients in the vessels, and gel- and gas-bubble-supported interfacial gradients.
Measurement of pressure
Until recently, the differential pressure (suction) of transpirational pull could only be measured indirectly, by applying external pressure with a pressure bomb to counteract it. When the technology to perform direct measurements with a pressure probe was developed, there was initially some doubt about whether the classic theory was correct, because some workers were unable to demonstrate negative pressures. More recent measurements do tend to validate the classic theory, for the most part. Xylem transport is driven by a combination of transpirational pull from above and root pressure from below, which makes the interpretation of measurements more complicated.
Evolution
Xylem appeared early in the history of terrestrial plant life. Fossil plants with anatomically preserved xylem are known from the Silurian (more than 400 million years ago), and trace fossils resembling individual xylem cells may be found in earlier Ordovician rocks. The earliest true and recognizable xylem consists of tracheids with a helical-annular reinforcing layer added to the cell wall. This is the only type of xylem found in the earliest vascular plants, and this type of cell continues to be found in the protoxylem (first-formed xylem) of all living groups of vascular plants. Several groups of plants later developed pitted tracheid cells independently through convergent evolution. In living plants, pitted tracheids do not appear in development until the maturation of the metaxylem (following the protoxylem).
In most plants, pitted tracheids function as the primary transport cells. The other type of vascular element, found in angiosperms, is the vessel element. Vessel elements are joined end to end to form vessels in which water flows unimpeded, as in a pipe. The presence of xylem vessels (also called trachea) is considered to be one of the key innovations that led to the success of the angiosperms. However, the occurrence of vessel elements is not restricted to angiosperms, and they are absent in some archaic or "basal" lineages of the angiosperms (e.g., Amborellaceae, Tetracentraceae, Trochodendraceae, and Winteraceae), whose secondary xylem is described by Arthur Cronquist as "primitively vesselless". Cronquist considered the vessels of Gnetum to be convergent with those of angiosperms. Whether the absence of vessels in basal angiosperms is a primitive condition is contested; the alternative hypothesis states that vessel elements originated in a precursor to the angiosperms and were subsequently lost.
To photosynthesize, plants must absorb CO2 from the atmosphere. However, this comes at a price: while the stomata are open to allow CO2 to enter, water can evaporate. Water is lost much faster than CO2 is absorbed, so plants need to replace it, and have developed systems to transport water from the moist soil to the site of photosynthesis. Early plants sucked water between the walls of their cells, then evolved the ability to control water loss (and acquisition) through the use of stomata. Specialized water transport tissues soon evolved in the form of hydroids, tracheids, then secondary xylem, followed by an endodermis and ultimately vessels.
The high CO2 levels of Silurian-Devonian times, when plants were first colonizing land, meant that the need for water was relatively low. As CO2 was withdrawn from the atmosphere by plants, more water was lost in its capture, and more elegant transport mechanisms evolved. As water transport mechanisms and waterproof cuticles evolved, plants could survive without being continually covered by a film of water. This transition from poikilohydry to homoiohydry opened up new potential for colonization. Plants then needed a robust internal structure that held long narrow channels for transporting water from the soil to all the different parts of the above-soil plant, especially to the parts where photosynthesis occurred.
During the Silurian, CO2 was readily available, so little water needed to be expended to acquire it. By the end of the Carboniferous, when CO2 levels had lowered to something approaching today's, around 17 times more water was lost per unit of CO2 uptake. However, even in these "easy" early days, water was at a premium, and had to be transported to parts of the plant from the wet soil to avoid desiccation. This early water transport took advantage of the cohesion-tension mechanism inherent in water. Water has a tendency to diffuse to areas that are drier, and this process is accelerated when water can be wicked along a fabric with small spaces. In small passages, such as those between the plant cell walls (or in tracheids), a column of water behaves like rubber: when molecules evaporate from one end, they pull the molecules behind them along the channels. Therefore, transpiration alone provided the driving force for water transport in early plants. However, without dedicated transport vessels, the cohesion-tension mechanism cannot transport water more than about 2 cm, severely limiting the size of the earliest plants. This process demands a steady supply of water from one end to maintain the chains; to avoid exhausting it, plants developed a waterproof cuticle. Early cuticle may not have had pores but did not cover the entire plant surface, so that gas exchange could continue. However, dehydration at times was inevitable; early plants coped with this by having a lot of water stored between their cell walls and, when necessary, by riding out the tough times with life "on hold" until more water was supplied.
To be free from the constraints of small size and constant moisture that the parenchymatic transport system imposed, plants needed a more efficient water transport system. During the early Silurian, they developed specialized cells, which were lignified (or bore similar chemical compounds) to avoid implosion; this process coincided with cell death, allowing their innards to be emptied and water to be passed through them. These wider, dead, empty cells were a million times more conductive than the inter-cell method, giving the potential for transport over longer distances and higher diffusion rates.
The earliest macrofossils to bear water-transport tubes are Silurian plants placed in the genus Cooksonia. The early Devonian pretracheophytes Aglaophyton and Horneophyton have structures very similar to the hydroids of modern mosses.
Plants continued to innovate new ways of reducing the resistance to flow within their cells, thereby increasing the efficiency of their water transport. Bands on the walls of tubes, in fact apparent from the early Silurian onwards, are an early improvisation to aid the easy flow of water. Banded tubes, as well as tubes with pitted ornamentation on their walls, were lignified and, when they form single celled conduits, are considered to be tracheids. These, the "next generation" of transport cell design, have a more rigid structure than hydroids, allowing them to cope with higher levels of water pressure. Tracheids may have a single evolutionary origin, possibly within the hornworts, uniting all tracheophytes (but they may have evolved more than once).
Water transport requires regulation, and dynamic control is provided by stomata.
By adjusting the amount of gas exchange, they can restrict the amount of water lost through transpiration. This is an important role where water supply is not constant, and indeed stomata appear to have evolved before tracheids, being present in the non-vascular hornworts.
An endodermis probably evolved during the Silu-Devonian, but the first fossil evidence for such a structure is Carboniferous. This structure in the roots covers the water transport tissue and regulates ion exchange (and prevents unwanted pathogens etc. from entering the water transport system). The endodermis can also provide an upwards pressure, forcing water out of the roots when transpiration is not enough of a driver.
Once plants had evolved this level of controlled water transport, they were truly homoiohydric, able to extract water from their environment through root-like organs rather than relying on a film of surface moisture, enabling them to grow to much greater size. As a result of their independence from their surroundings, they lost their ability to survive desiccation – a costly trait to retain.
During the Devonian, maximum xylem diameter increased with time, while the minimum diameter remained roughly constant. By the middle Devonian, the tracheid diameter of some plant lineages (Zosterophyllophytes) had plateaued. Wider tracheids allow water to be transported faster, but the overall transport rate depends also on the overall cross-sectional area of the xylem bundle itself. The increase in vascular bundle thickness further seems to correlate with the width of plant axes and plant height; it is also closely related to the appearance of leaves and increased stomatal density, both of which would increase the demand for water.
While wider tracheids with robust walls make it possible to achieve higher water transport tensions, this increases the likelihood of cavitation. Cavitation occurs when a bubble of air forms within a vessel, breaking the bonds between chains of water molecules and preventing them from pulling more water up with their cohesive tension. A tracheid, once cavitated, cannot have its embolism removed and return to service (except in a few advanced angiosperms that have developed a mechanism for doing so). Therefore, plants benefit greatly from avoiding cavitation. For this reason, pits in tracheid walls have very small diameters, to prevent air entering and allowing bubbles to nucleate. Freeze-thaw cycles are a major cause of cavitation. Damage to a tracheid's wall almost inevitably leads to air leaking in and cavitation, hence the importance of many tracheids working in parallel.
Once cavitation has occurred, plants have a range of mechanisms to contain the damage. Small pits link adjacent conduits to allow fluid to flow between them, but not air; paradoxically, these pits, which prevent the spread of embolisms, are also a major cause of them. These pitted surfaces further reduce the flow of water through the xylem by as much as 30%. The diversification of xylem strand shapes with tracheid network topologies increasingly resistant to the spread of embolism likely facilitated increases in plant size and the colonization of drier habitats during the Devonian radiation. By the Jurassic, conifers had developed bordered pits with valve-like structures to isolate cavitated elements. These torus-margo structures have an impermeable disc (torus) suspended by a permeable membrane (margo) between two adjacent pores. When a tracheid on one side depressurizes, the disc is sucked into the pore on that side, and blocks further flow. Other plants simply tolerate cavitation. For instance, oaks grow a ring of wide vessels at the start of each spring, none of which survive the winter frosts. Maples use root pressure each spring to force sap upwards from the roots, squeezing out any air bubbles.
Growing tall also exploited another trait of tracheids: the support offered by their lignified walls. Defunct tracheids were retained to form a strong, woody stem, produced in most instances by a secondary xylem. However, in early plants, tracheids were too mechanically vulnerable, and retained a central position, with a layer of tough sclerenchyma on the outer rim of the stems. Even when tracheids do take a structural role, they are supported by sclerenchymatic tissue.
Tracheids end with walls, which impose a great deal of resistance on flow; vessel members have perforated end walls, and are arranged in series to operate as if they were one continuous vessel. The function of end walls, which were the default state in the Devonian, was probably to avoid embolisms. An embolism is where an air bubble is created in a tracheid. This may happen as a result of freezing, or by gases dissolving out of solution. Once an embolism is formed, it usually cannot be removed (but see later); the affected cell cannot pull water up, and is rendered useless.
End walls excluded, the tracheids of prevascular plants were able to operate under the same hydraulic conductivity as those of the first vascular plant, Cooksonia.
The size of tracheids is limited because each comprises a single cell; this limits their length, which in turn limits their maximum useful diameter to 80 μm. Conductivity grows with the fourth power of diameter, so increased diameter has huge rewards (see the worked example below); vessel elements, consisting of a number of cells joined at their ends, overcame this limit and allowed larger tubes to form, reaching diameters of up to 500 μm and lengths of up to 10 m.
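The fourth-power claim is the Hagen–Poiseuille relation for laminar flow through a tube; as a hedged aside, this is the standard formula from fluid mechanics rather than one given in the article, with $Q$ the volumetric flow rate, $r$ the conduit radius, $\Delta P$ the pressure difference, $\mu$ the viscosity, and $L$ the conduit length:

$$Q = \frac{\pi\, r^{4}\, \Delta P}{8\,\mu\, L}.$$

A single conduit's conductance therefore scales as the fourth power of its diameter, while the gain per unit of cross-sectional area scales as the square:

$$\left(\frac{500\ \mu\text{m}}{80\ \mu\text{m}}\right)^{4} \approx 1.5 \times 10^{3}, \qquad \left(\frac{500\ \mu\text{m}}{80\ \mu\text{m}}\right)^{2} \approx 39,$$

the latter being the same order of magnitude as the "around a hundred times" figure quoted below for vessels versus tracheids.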
Vessels first evolved during the dry, low-CO2 periods of the late Permian, in the horsetails, ferns and Selaginellales independently, and later appeared in the mid Cretaceous in angiosperms and gnetophytes.
Vessels allow the same cross-sectional area of wood to transport around a hundred times more water than tracheids. This allowed plants to fill more of their stems with structural fibers, and also opened a new niche to vines, which could transport water without being as thick as the tree they grew on. Despite these advantages, tracheid-based wood is a lot lighter, and thus cheaper to make, as vessels need to be much more reinforced to avoid cavitation.
Development
Xylem development can be described by four terms: centrarch, exarch, endarch and mesarch. As it develops in young plants, its nature changes from protoxylem to metaxylem (i.e. from first xylem to after xylem). The patterns in which protoxylem and metaxylem are arranged are essential in studying plant morphology.
Protoxylem and metaxylem
As a young vascular plant grows, one or more strands of primary xylem form in its stems and roots. The first xylem to develop is called 'protoxylem'. In appearance, protoxylem is usually distinguished by narrower vessels formed of smaller cells. Some of these cells have walls that contain thickenings in the form of rings or helices. Functionally, protoxylem can extend: the cells can grow in size and develop while a stem or root is elongating. Later, 'metaxylem' develops in the strands of xylem. Metaxylem vessels and cells are usually larger; the cells have thickenings typically either in the form of ladderlike transverse bars (scalariform) or continuous sheets except for holes or pits (pitted). Functionally, metaxylem completes its development after elongation ceases when the cells no longer need to grow in size.
Patterns of protoxylem and metaxylem
There are four primary patterns to the arrangement of protoxylem and metaxylem in stems and roots.
Centrarch refers to the case in which the primary xylem forms a single cylinder in the center of the stem and develops from the center outwards. The protoxylem is thus found in the central core, and the metaxylem is in a cylinder around it. This pattern was common in early land plants, such as "rhyniophytes", but is not present in any living plants.
The other three terms are used where there is more than one strand of primary xylem.
Exarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the outside inwards towards the center, i.e., centripetally. The metaxylem is thus closest to the center of the stem or root, and the protoxylem is closest to the periphery. The roots of vascular plants are generally considered to have exarch development.
Endarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the inside outwards towards the periphery, i.e., centrifugally. The protoxylem is thus closest to the center of the stem or root, and the metaxylem is closest to the periphery. The stems of seed plants typically have endarch development.
Mesarch is used when there is more than one strand of primary xylem in a stem or root, and the xylem develops from the middle of a strand in both directions. The metaxylem is thus on both the peripheral and central sides of the strand, with the protoxylem between the metaxylem (possibly surrounded by it). The leaves and stems of many ferns have mesarch development.
History
In his book De plantis libri XVI (On Plants, in 16 books) (1583), the Italian physician and botanist Andrea Cesalpino proposed that plants draw water from soil not by magnetism (ut magnes ferrum trahit, as magnetic iron attracts) nor by suction (vacuum), but by absorption, as occurs in the case of linen, sponges, or powders. The Italian biologist Marcello Malpighi was the first person to describe and illustrate xylem vessels, which he did in his book Anatome plantarum ... (1675). Although Malpighi believed that xylem contained only air, the British physician and botanist Nehemiah Grew, who was Malpighi's contemporary, believed that sap ascended both through the bark and through the xylem. However, according to Grew, capillary action in the xylem would raise the sap by only a few inches; to raise the sap to the top of a tree, Grew proposed that the parenchymal cells become turgid and thereby not only squeeze the sap in the tracheids but force some sap from the parenchyma into the tracheids. In 1727, English clergyman and botanist Stephen Hales showed that transpiration by a plant's leaves causes water to move through its xylem. By 1891, the Polish-German botanist Eduard Strasburger had shown that the transport of water in plants did not require the xylem cells to be alive.
| Biology and health sciences | Plant tissues | Biology |
46740 | https://en.wikipedia.org/wiki/Euler%27s%20identity | Euler's identity | In mathematics, Euler's identity (also known as Euler's equation) is the equality
$$e^{i\pi} + 1 = 0,$$
where
$e$ is Euler's number, the base of natural logarithms,
$i$ is the imaginary unit, which by definition satisfies $i^2 = -1$, and
$\pi$ is pi, the ratio of the circumference of a circle to its diameter.
Euler's identity is named after the Swiss mathematician Leonhard Euler. It is a special case of Euler's formula when evaluated for $x = \pi$. Euler's identity is considered to be an exemplar of mathematical beauty, as it shows a profound connection between the most fundamental numbers in mathematics. In addition, it is directly used in a proof that $\pi$ is transcendental, which implies the impossibility of squaring the circle.
Mathematical beauty
Euler's identity is often cited as an example of deep mathematical beauty. Three of the basic arithmetic operations occur exactly once each: addition, multiplication, and exponentiation. The identity also links five fundamental mathematical constants:
The number 0, the additive identity
The number 1, the multiplicative identity
The number $\pi$ (= 3.14159...), the fundamental circle constant
The number $e$ (= 2.71828...), also known as Euler's number, which occurs widely in mathematical analysis
The number $i$, the imaginary unit such that $i^2 = -1$
The equation is often given in the form of an expression set equal to zero, which is common practice in several areas of mathematics.
Stanford University mathematics professor Keith Devlin has said, "like a Shakespearean sonnet that captures the very essence of love, or a painting that brings out the beauty of the human form that is far more than just skin deep, Euler's equation reaches down into the very depths of existence". And Paul Nahin, a professor emeritus at the University of New Hampshire, who has written a book dedicated to Euler's formula and its applications in Fourier analysis, describes Euler's identity as being "of exquisite beauty".
Mathematics writer Constance Reid has opined that Euler's identity is "the most famous formula in all mathematics". And Benjamin Peirce, a 19th-century American philosopher, mathematician, and professor at Harvard University, after proving Euler's identity during a lecture, stated that the identity "is absolutely paradoxical; we cannot understand it, and we don't know what it means, but we have proved it, and therefore we know it must be the truth".
A poll of readers conducted by The Mathematical Intelligencer in 1990 named Euler's identity as the "most beautiful theorem in mathematics". In another poll of readers that was conducted by Physics World in 2004, Euler's identity tied with Maxwell's equations (of electromagnetism) as the "greatest equation ever".
At least three books in popular mathematics have been published about Euler's identity:
Dr. Euler's Fabulous Formula: Cures Many Mathematical Ills, by Paul Nahin (2011)
A Most Elegant Equation: Euler's formula and the beauty of mathematics, by David Stipp (2017)
Euler's Pioneering Equation: The most beautiful theorem in mathematics, by Robin Wilson (2018).
Explanations
Imaginary exponents
Euler's identity asserts that $e^{i\pi}$ is equal to −1. The expression $e^{i\pi}$ is a special case of the expression $e^z$, where $z$ is any complex number. In general, $e^z$ is defined for complex $z$ by extending one of the definitions of the exponential function from real exponents to complex exponents. For example, one common definition is:
$$e^z = \lim_{n \to \infty} \left(1 + \frac{z}{n}\right)^n.$$
Euler's identity therefore states that the limit, as $n$ approaches infinity, of $\left(1 + \frac{i\pi}{n}\right)^n$ is equal to −1.
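A minimal numerical sketch of this limit in Python (the snippet is illustrative, not part of the original article):

```python
import math

# Check numerically that (1 + i*pi/n)**n approaches -1 as n grows,
# following the limit definition of e**(i*pi) given above.
for n in (10, 100, 10_000, 1_000_000):
    z = (1 + 1j * math.pi / n) ** n
    print(f"n = {n:>9}: {z:.6f}")
# The real part tends to -1 and the imaginary part to 0.
```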
Euler's identity is a special case of Euler's formula, which states that for any real number $x$,
$$e^{ix} = \cos x + i\sin x,$$
where the inputs of the trigonometric functions sine and cosine are given in radians.
In particular, when $x = \pi$,
$$e^{i\pi} = \cos\pi + i\sin\pi.$$
Since
$$\cos\pi = -1$$
and
$$\sin\pi = 0,$$
it follows that
$$e^{i\pi} = -1,$$
which yields Euler's identity:
$$e^{i\pi} + 1 = 0.$$
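The chain of equalities above can be checked directly with Python's standard cmath module (an illustrative sketch; floating-point rounding leaves a residue of about 1e-16):

```python
import cmath

# Euler's formula at x = pi: e**(i*pi) should equal cos(pi) + i*sin(pi) = -1.
lhs = cmath.exp(1j * cmath.pi)
rhs = cmath.cos(cmath.pi) + 1j * cmath.sin(cmath.pi)
print(lhs)               # (-1+1.2246467991473532e-16j)
print(abs(lhs - rhs))    # ~0.0
print(abs(lhs + 1))      # ~1.22e-16, i.e. e**(i*pi) + 1 = 0 up to rounding
```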
Geometric interpretation
Any complex number $z = x + iy$ can be represented by the point $(x, y)$ on the complex plane. This point can also be represented in polar coordinates as $(r, \theta)$, where $r$ is the absolute value of $z$ (distance from the origin), and $\theta$ is the argument of $z$ (angle counterclockwise from the positive x-axis). By the definitions of sine and cosine, this point has cartesian coordinates of $(r\cos\theta, r\sin\theta)$, implying that $z = r(\cos\theta + i\sin\theta)$. According to Euler's formula, this is equivalent to saying $z = re^{i\theta}$.
Euler's identity says that $-1 = e^{i\pi}$. Since $e^{i\pi}$ is $re^{i\theta}$ for $r = 1$ and $\theta = \pi$, this can be interpreted as a fact about the number −1 on the complex plane: its distance from the origin is 1, and its angle from the positive x-axis is $\pi$ radians.
Additionally, when any complex number $z$ is multiplied by $e^{i\theta}$, it has the effect of rotating $z$ counterclockwise by an angle of $\theta$ on the complex plane. Since multiplication by −1 reflects a point across the origin, Euler's identity can be interpreted as saying that rotating any point $\pi$ radians around the origin has the same effect as reflecting the point across the origin. Similarly, setting $\theta$ equal to $2\pi$ yields the related equation $e^{2\pi i} = 1$, which can be interpreted as saying that rotating any point by one turn around the origin returns it to its original position.
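An illustrative Python snippet of the rotation reading (the sample point is my own example, not from the article):

```python
import cmath

# Multiplying by e**(i*theta) rotates a point counterclockwise by theta;
# theta = pi (Euler's identity) is the same as reflection through the origin.
z = 3 + 2j
rotated = z * cmath.exp(1j * cmath.pi)
print(rotated)        # ~(-3-2j), i.e. z reflected across the origin
full_turn = z * cmath.exp(2j * cmath.pi)
print(full_turn)      # ~(3+2j), one full turn returns z to its start
```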
Generalizations
Euler's identity is also a special case of the more general identity that the $n$th roots of unity, for $n > 1$, add up to 0:
$$\sum_{k=0}^{n-1} e^{\frac{2\pi i k}{n}} = 0.$$
Euler's identity is the case where $n = 2$.
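A short Python check of the root-of-unity sum (illustrative, not from the article); the n = 2 row is exactly Euler's identity, since the two square roots of unity are 1 and $e^{i\pi} = -1$:

```python
import cmath

# Sum the n-th roots of unity e**(2*pi*i*k/n) for k = 0..n-1; each sum is ~0.
for n in (2, 3, 5, 8):
    s = sum(cmath.exp(2j * cmath.pi * k / n) for k in range(n))
    print(f"n = {n}: |sum| = {abs(s):.2e}")
```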
A similar identity also applies to the quaternion exponential: let $\{i, j, k\}$ be the basis quaternions; then,
$$e^{\frac{1}{\sqrt{3}}(i + j + k)\pi} + 1 = 0.$$
More generally, let $q$ be a quaternion with a zero real part and a norm equal to 1; that is, $q = ai + bj + ck$, with $a^2 + b^2 + c^2 = 1$. Then one has
$$e^{q\pi} + 1 = 0.$$
The same formula applies to octonions with a zero real part and a norm equal to 1. These formulas are a direct generalization of Euler's identity, since $i$ and $-i$ are the only complex numbers with a zero real part and a norm (absolute value) equal to 1.
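A hedged sketch of the quaternion case in Python (the helper below is my own illustration, not from the article): for a pure quaternion $v$ with magnitude $\theta$, $\exp(v) = \cos\theta + (v/\theta)\sin\theta$, so any unit pure quaternion $q$ gives $e^{q\pi} = -1$.

```python
import math

def exp_pure_quaternion(x, y, z):
    """Exponential of the pure quaternion xi + yj + zk (illustrative helper)."""
    theta = math.sqrt(x * x + y * y + z * z)
    if theta == 0.0:
        return (1.0, 0.0, 0.0, 0.0)            # exp(0) = 1
    s = math.sin(theta) / theta
    return (math.cos(theta), s * x, s * y, s * z)

# q = (i + j + k)/sqrt(3) is a unit pure quaternion; compute e**(q*pi).
a = math.pi / math.sqrt(3)
print(exp_pure_quaternion(a, a, a))             # ~(-1.0, 0, 0, 0)
```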
History
While Euler's identity is a direct result of Euler's formula, published in his monumental work of mathematical analysis in 1748, Introductio in analysin infinitorum, it is questionable whether the particular concept of linking five fundamental constants in a compact form can be attributed to Euler himself, as he may never have expressed it.
Robin Wilson states the following.
| Mathematics | Calculus and analysis | null |
46750 | https://en.wikipedia.org/wiki/Virginia%20opossum | Virginia opossum | The Virginia opossum (Didelphis virginiana), also known as the North American opossum, is a member of the opossum family found from southern Canada to northern Costa Rica (making it the northernmost marsupial in the world). Commonly referred to simply as the possum, it is a solitary nocturnal animal about the size of a domestic cat, and a successful opportunist.
Opossums are familiar to many North Americans as they frequently inhabit settled areas near food sources like trash cans, pet food, compost piles, gardens, or house mice. Their slow, nocturnal nature and their attraction to roadside carrion make opossums especially likely to become roadkill.
Name
The Virginia opossum is the original animal named "opossum", a word which comes from Algonquian wapathemwa, meaning "white animal". Colloquially, the Virginia opossum is frequently just called a "possum". The term is applied more generally to any of the other marsupials of the families Didelphidae and Caenolestidae. The generic name Didelphis is derived from Ancient Greek δι- (di-), "two", and δελφύς (delphys), "womb".
The possums of Australia, whose name derives from their similarity to the American species, are also marsupials, but of the order Diprotodontia.
The Virginia opossum is known in Mexico as tlacuache, tacuachi, and tlacuachi, from the Nahuatl word tlacuatzin.
Range
The Virginia opossum's ancestors evolved in South America, but spread into North America as part of the Great American Interchange, which occurred mainly after the formation of the Isthmus of Panama about 3 million years ago. Didelphis was apparently one of the later migrants, entering North America about 0.8 million years ago. It is now found throughout Central America and North America from Costa Rica to southern Ontario and is expanding its range northward, northwestward, and northeastward at a significant pace.
Its pre-European settlement range was generally as far north as Maryland; southern Ohio, Indiana and Illinois; Missouri and Kansas. The clearing of dense forests in these areas and further north by settlers allowed the opossum to move northward. Elimination of the opossum's main predators in these areas also contributed to their expansion. Since 1900, it has expanded its range to include most of New England (including Maine); New York, extreme southwestern Quebec; most of southern and eastern Ontario; most of Michigan and Wisconsin; most of Minnesota, southeastern South Dakota and most of Nebraska.
Areas such as Rhode Island and Waterloo Region and Simcoe County in southern Ontario rarely had sightings of opossums in the 1960s, but now have them regularly; some speculate that this is likely due to global warming causing winters to be warmer. Others speculate that the expansion into Ontario occurred mostly through opossums being accidentally carried across the St. Lawrence, Niagara, Detroit and St. Clair rivers by motor vehicles or trains they had climbed upon. As the opossum is not adapted to colder winters or heavy snow, its population may be significantly reduced if a colder winter with heavier snow occurs in a particular northern region.
The Virginia opossum was not originally native to the West Coast of the United States. It was intentionally introduced into the West during the Great Depression, probably as a source of food, and now occupies much of the Pacific coast. Its range has been expanding steadily northward into British Columbia.
Description
Virginia opossums can vary considerably in size, with larger specimens found to the north of the opossum's range and smaller specimens in the tropics. They measure long from their snout to the base of the tail, with the tail adding another . Males are slightly larger, with an average body length of with an average tail length of , while females are long with a tail. Weight for males ranges from and for females from .
Their coats are a dull grayish brown, other than on their faces, which are white. Opossums have long, hairless, prehensile tails, which can be used to grab branches and carry small objects. They also have hairless ears and a long, flat nose. Opossums have 50 teeth, more than any other North American land mammal, and opposable, clawless thumbs on their rear limbs. Opossums have 13 nipples, arranged in a circle of 12 with one in the middle.
The dental formula of an opossum is 5.1.3.4 / 4.1.3.4. No other mammal in North America has more than 6 upper incisors, but the Virginia opossum has 10.
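A quick arithmetic check that these quadrant counts (incisors.canines.premolars.molars, upper/lower, counting both sides of the jaw) reproduce the 50-tooth total quoted above:

$$2 \times \big[(5+1+3+4)_{\text{upper}} + (4+1+3+4)_{\text{lower}}\big] = 2 \times 25 = 50.$$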
Perhaps surprisingly for such a widespread and successful species, the Virginia opossum has one of the lowest encephalization quotients of any marsupial.
Its brain is one-fifth the size of a raccoon's.
Tracks
Virginia opossum tracks generally show five finger-like toes in both the fore and hind prints. The hind tracks are unusual and distinctive due to the opossum's opposable thumb, which generally prints at an angle of 90° or greater to the other fingers (sometimes near 180°). Individual adult tracks generally measure 1.9 in long by 2.0 in wide (4.8 × 5.1 cm) for the fore prints and 2.5 in long by 2.3 in wide (6.4 × 5.7 cm) for the hind prints. Opossums have claws on all fingers fore and hind except on the two thumbs (in the photograph, claw marks show as small holes just beyond the tip of each finger); these generally show in the tracks. In a soft medium, such as the mud in this photograph, the foot pads clearly show (these are the deep, darker areas where the fingers and toes meet the rest of the hand or foot, which have been filled with plant debris by wind due to the advanced age of the tracks).
The tracks in the photograph were made while the opossum was walking with its typical pacing gait. The four aligned toes on the hind print show the approximate direction of travel.
In a pacing gait, the limbs on one side of the body are moved simultaneously, just prior to moving both limbs on the other side of the body. This is illustrated in the pacing diagram, which explains why the left-fore and right-hind tracks are generally found together (and vice versa). If the opossum was not walking (perhaps running), the prints would fall in a different pattern. Other animals that generally employ a pacing gait are raccoons, bears, skunks, badgers, woodchucks, porcupines, and beavers.
When pacing, the opossum's 'stride' generally measures from 7 to 10 in, or 18 to 25 cm (in the pacing diagram the stride is 8.5 in, where one grid square is equal to 1 in²). To determine the stride of a pacing gait, measure from the tip (just beyond the fingers or toes in the direction of travel, disregarding claw marks) of one set of fore/hind tracks to the tip of the next set. By taking careful stride and track-size measurements, one can usually determine what species of animal created a set of tracks, even when individual track details are vague or obscured.
Behavior
"Playing possum"
If threatened, an opossum will either flee or take a stand. To appear threatening, an opossum will first bare its 50 teeth, snap its jaw, hiss, drool, and stand its fur on end to look bigger. If this does not work, the Virginia opossum is noted for feigning death in response to extreme fear. This is the genesis of the term "playing possum", which means pretending to be dead or injured with intent to deceive.
In this inactive state it lies limp and motionless on its side, mouth and eyes open, tongue hanging out, and feet clenched. Fear can also cause the opossum to release a green fluid from its anus with a putrid odor that repels predators. Heart rate drops by half, and the breathing rate is so slow and shallow it is hardly detectable. Death feigning can last for several hours and normally ends once the threat withdraws. Besides discouraging animals that eat live prey, playing possum also convinces some large animals that the opossum is no threat to their young. "Playing possum" in response to threats from oncoming traffic often results in death.
Diet
Opossums are omnivorous (sometimes said to be insectivorous) and eat a wide range of plant-based food, as well as animal-based food like small invertebrates, carrion, eggs, fish, amphibians, reptiles, birds, small mammals, and other small animals.
Insects such as grasshoppers, crickets, and beetles make up the bulk of the animal foods eaten by opossums. It has been stated that opossums eat up to 95% of the ticks they encounter and may eat up to 5,000 ticks per season, helping to prevent the spread of tick-borne illnesses, including Lyme disease and Rocky Mountain spotted fever. This interpretation has been challenged. A widely publicized 2009 study by the Cary Institute indicated that Virginia opossums in a laboratory setting could eat thousands of ticks per week while grooming. However, subsequent studies of the stomach contents of wild Virginia opossums have not found any ticks in their diet.
Small animals include young rabbits, meadow voles, mice, rats, birds, snakes, lizards, frogs, fish, crayfish, gastropods, and earthworms. The Virginia opossum has been found to be very resistant to snake venom. Attracted to carrion on the side of the highway, opossums are at an increased risk of being hit by motor vehicles.
Plant foods are mainly eaten in late summer, autumn, and early winter. These include raspberries, blackberries, apples, acorns, beechnuts, seeds, grains, bulbs, and vegetables. Persimmons are one of the opossum's favorite foods during the autumn. Opossums in urban areas scavenge from bird feeders, vegetable gardens, compost piles, garbage cans, and food dishes intended for dogs and cats.
Opossums in captivity are known to engage in cannibalism, though this is probably uncommon in the wild. Because of this, placing an injured opossum in a confined space with its healthy counterparts is inadvisable.
Seasonality
The Virginia opossum is most active during the spring and summer. It does not hibernate but reduces its activity during the winter. It may not leave its den for several days if the temperature drops below . Both males and females are at greater risk of injury during breeding season. Males extend their range in search of mates which puts them at greater risk of injury from motor vehicles and predators as they venture into unfamiliar territory. Females carrying young are slower moving and have to forage earlier in the evening and later into the night, also increasing their risk of injury from motor vehicles and predation.
Reproduction
The breeding season for the Virginia opossum can begin as early as December and continue through October, with most young born between February and June. A female opossum may have one to three litters per year. During the mating season, the male attracts the female by making clicking sounds with his mouth. The female's estrous cycle is 28 days, with estrus itself lasting 36 hours. Gestation lasts 11–13 days and the average litter size is 8–9 infants, although over 20 infants may be born. Opossums have a very high mortality rate among their young; only one in ten offspring survive to reproductive adulthood.
Newborns are the size of a honeybee. Once delivered through the median vagina or central birth canal, newborn opossums climb up into the female opossum's pouch and latch onto one of her 13 teats. The young remain latched for two months and in the pouch for months. The young then climb onto the mother's back, where she carries them for the remainder of their time together. It is during this time that the young learn survival skills. They leave their mother after about four or five months.
Like all female marsupials, the female's reproductive system is bifid, with two lateral vaginae, uteri, and ovaries. The male's penis is also bifid, with two heads, and as is common in New World marsupials, the sperm pair up in the testes and only separate as they come close to the egg. Males have three pairs of Cowper's glands.
Lifespan
Compared to other mammals, including most other marsupials except dasyuromorphians, opossums have unusually short lifespans for their size and metabolic rate. The Virginia opossum has a maximal lifespan in the wild of only about two years. Even in captivity, opossums live only about four years. The rapid senescence of opossums is thought to reflect the fact that they have few defenses against predators; given that they would have little prospect of living very long regardless, they are not under selective pressure to develop biochemical mechanisms to enable a long lifespan. In support of this hypothesis, one population on Sapelo Island, off the coast of Georgia, which has been isolated for thousands of years without natural predators, was found by Dr. Steven Austad to have evolved lifespans up to 50% longer than those of mainland populations.
Historical references
An early description of the opossum comes from explorer John Smith, who wrote in Map of Virginia, with a Description of the Countrey, the Commodities, People, Government and Religion in 1608 that "An Opassom hath an head like a Swine, and a taile like a Rat, and is of the bignes of a Cat. Under her belly she hath a bagge, wherein she lodgeth, carrieth, and sucketh her young."
The opossum was more formally described in 1698 in a published letter entitled "Carigueya, Seu Marsupiale Americanum Masculum. Or, The Anatomy of a Male Opossum: In a Letter to Dr Edward Tyson", from Mr William Cowper, Chirurgeon, and Fellow of the Royal Society, London, by Edward Tyson, M.D. Fellow of the College of Physicians and of the Royal Society. The letter suggests even earlier descriptions.
Relationship with humans
Opossums are not considered dangerous to humans. Though their open-mouth hiss when frightened is often mistaken as rabid behavior, opossums are naturally resistant to rabies due to their low body temperature. Opossums can however host parasites and carry diseases such as tuberculosis, leptospirosis, and tularemia, among others.
Like raccoons, opossums can be found in urban environments, where they eat pet food, rotten fruit, and human garbage. They also are considered a common predator of poultry farming in North America. Research suggests that proximity to humans causes an increase in body size for opossums living in or near urban environments. Though sometimes mistakenly considered to be rats, opossums are not closely related to rodents or any other placental mammals.
The opossum was once a favorite game animal in the United States, particularly in the southern regions which have a large body of recipes and folklore relating to it. Their past wide consumption in regions where present is evidenced by recipes available online and in books such as older editions of The Joy of Cooking. A traditional method of preparation is baking, sometimes in a pie or pastry, though at present "possum pie" most often refers to a sweet confection containing no meat of any kind.
Around the turn of the 20th century, the opossum was the subject of numerous songs, including "Carve dat Possum", a minstrel song written in 1875 by Sam Lucas.
Although it is widely distributed in the United States, the Virginia opossum's appearance in folklore and popularity as a food item has tied it closely to the American Southeast. In animation, it is often used to depict uncivilized characters or "hillbillies". Not surprisingly, then, the Virginia opossum is featured in several episodes of the hit TV show The Beverly Hillbillies, such as the "Possum Day" episode in 1965. The title character in Walt Kelly's long-running comic strip Pogo was an opossum. In an attempt to create another icon like the teddy bear, President William Howard Taft was tied to the character Billy Possum. The character did not do well, as public perception of the opossum led to its downfall. In December 2010, a cross-eyed Virginia opossum in Germany's Leipzig Zoo named Heidi became an international celebrity. She appeared on a TV talk show to predict the 2011 Oscar winners, similar to the World Cup predictions made previously by Paul the Octopus, also in Germany.
The Perelman Building in Philadelphia, Pennsylvania, an annex of the Philadelphia Museum of Art, was formerly the Fidelity Mutual Life Insurance Company Building. Built in the late 1920s its facade is decorated with polychrome sculptures of animals symbolizing various attributes of insurance, including a possum to represent "protection".
| Biology and health sciences | Marsupials | Animals |
46764 | https://en.wikipedia.org/wiki/Artiodactyl | Artiodactyl | Artiodactyls are placental mammals belonging to the order Artiodactyla. Typically, they are ungulates which bear weight equally on two (an even number) of their five toes (the third and fourth, often in the form of a hoof). The other three toes are either present, absent, vestigial, or pointing posteriorly. By contrast, most perissodactyls bear weight on an odd number of the five toes. Another difference between the two orders is that many artiodactyls (except for Suina) digest plant cellulose in one or more stomach chambers rather than in their intestine (as perissodactyls do). Molecular biology, along with new fossil discoveries, has found that cetaceans (whales, dolphins, and porpoises) fall within this taxonomic branch, being most closely related to hippopotamuses. Some modern taxonomists thus apply the name Cetartiodactyla to this group, while others opt to include cetaceans within the existing name of Artiodactyla. Some researchers use "even-toed ungulates" to exclude cetaceans and only include terrestrial artiodactyls, making the term paraphyletic in nature.
The roughly 270 land-based even-toed ungulate species include pigs, peccaries, hippopotamuses, antelopes, deer, giraffes, camels, llamas, alpacas, sheep, goats and cattle. Many are herbivores, but suids are omnivorous, and cetaceans are entirely carnivorous. Artiodactyls are also known by many extinct groups such as anoplotheres, cainotheriids, merycoidodonts, entelodonts, anthracotheres, basilosaurids, and palaeomerycids. Many artiodactyls are of great dietary, economic, and cultural importance to humans.
Evolutionary history
The oldest fossils of even-toed ungulates date back to the early Eocene (about 53 million years ago). Since these findings almost simultaneously appeared in Europe, Asia, and North America, it is very difficult to accurately determine the origin of artiodactyls. The fossils are classified as belonging to the family Diacodexeidae; their best-known and best-preserved member is Diacodexis. These were small animals, some as small as a hare, with a slim build, lanky legs, and a long tail. Their hind legs were much longer than their front legs. The early to middle Eocene saw the emergence of the ancestors of most of today's mammals.
Two formerly widespread, but now extinct, families of even-toed ungulates were Entelodontidae and Anthracotheriidae. Entelodonts existed from the middle Eocene to the early Miocene in Eurasia and North America. They had a stocky body with short legs and a massive head, which was characterized by two humps on the lower jaw bone. Anthracotheres had a large, porcine (pig-like) build, with short legs and an elongated muzzle. This group appeared in the middle Eocene up until the Pliocene, and spread throughout Eurasia, Africa, and North America. Anthracotheres are thought to be the ancestors of hippos, and, likewise, probably led a similar aquatic lifestyle. Hippopotamuses appeared in the late Miocene and occupied Africa and Asia—they never got to the Americas.
The camels (Tylopoda) were, during large parts of the Cenozoic, limited to North America; early forms like Cainotheriidae occupied Europe. Among the North American camels were groups like the stocky, short-legged Merycoidodontidae. They first appeared in the late Eocene and developed a great diversity of species in North America. Only in the late Miocene or early Pliocene did they migrate from North America into Eurasia. The North American varieties became extinct around 10,000 years ago.
Suina (including pigs) have been around since the Eocene. In the late Eocene or the Oligocene, the group split into two families: the pigs, which stayed in Eurasia and Africa, and the peccaries, which became extinct in the Old World and exist today only in the Americas.
South America was settled by even-toed ungulates only in the Pliocene, after the land bridge at the Isthmus of Panama formed some three million years ago. With only the peccaries, lamoids (or llamas), and various species of capreoline deer, South America has comparatively fewer artiodactyl families than other continents, except Australia, which has no native species.
Taxonomy and phylogeny
The classification of artiodactyls was hotly debated because ocean-dwelling cetaceans evolved from land-dwelling even-toed ungulates. Some semiaquatic even-toed ungulates (hippopotamuses) are more closely related to ocean-dwelling cetaceans than to other even-toed ungulates.
Phylogenetic classification only recognizes monophyletic taxa; that is, groups that descend from a common ancestor and include all of its descendants. To address this problem, the traditional order Artiodactyla and infraorder Cetacea are sometimes subsumed into the more inclusive Cetartiodactyla taxon. An alternative approach is to include both land-dwelling even-toed ungulates and ocean-dwelling cetaceans in a revised Artiodactyla taxon.
Classification
Order Artiodactyla/Clade Cetartiodactyla
Family Diacodexeidae
Family Amphimerycidae
Family Robiacinidae
Family Cainotheriidae
Suborder Tylopoda
Family Anoplotheriidae?
Family Merycoidodontidae
Family Agriochoeridae
Family Camelidae: camels, llamas, alpacas, vicuñas, and guanacos (7 extant and 13 extinct species)
Family Oromerycidae
Family Xiphodontidae?
Family Protoceratidae?
Clade Artiofabula
Suborder Suina
Family Suidae: pigs (19 species)
Family Tayassuidae: peccaries (4 species)
Family Sanitheriidae
Family Doliochoeridae
Clade Cetruminantia
Clade Cetancodontamorpha
Genus Andrewsarchus?
Family Entelodontidae
Suborder Whippomorpha
Family Raoellidae
Superfamily Dichobunoidea – paraphyletic to Cetacea and Raoellidae
Family Dichobunidae
Family Helohyidae
Family Choeropotamidae
Family Cebochoeridae (Family contains Cebochoerus)
Family Mixtotheriidae
Infraorder Ancodonta
Family Anthracotheriidae – paraphyletic to Hippopotamidae
Family Hippopotamidae: hippos (two species)
Infraorder Cetacea: whales (about 90 species)
Parvorder Archaeoceti
Family Pakicetidae
Family Ambulocetidae
Family Remingtonocetidae
Family Basilosauridae
Clade Neoceti
Parvorder Mysticeti: baleen whales
Superfamily Balaenoidea: right whales
Family Balaenidae: greater right whales (four species)
Family Cetotheriidae: pygmy right whale (one species)
Superfamily Balaenopteroidea: large baleen whales
Family Balaenopteridae: slender-back rorquals and humpback whale (eight species)
Family Eschrichtiidae: gray whale (one species)
Parvorder Odontoceti: toothed whales
Superfamily Delphinoidea: oceanic dolphins, porpoises, and others
Family Delphinidae: oceanic true dolphins (38 species)
Family Monodontidae: Arctic whales; narwhal and beluga (two species)
Family Phocoenidae: porpoises (six species)
Superfamily Physeteroidea: sperm whales
Family Kogiidae: lesser sperm whales (two species)
Family Physeteridae: sperm whale (one species)
Superfamily Platanistoidea: river dolphins
Family Iniidae: South American river dolphins (two species)
Family Lipotidae: Chinese river dolphin (one species, possibly extinct)
Family Platanistidae: South Asian river dolphin (one species)
Family Pontoporiidae: La Plata dolphin (one species)
Superfamily Ziphioidea
Family Ziphiidae: beaked whales (22 species)
Total-group Ruminantia
Suborder Ruminantia
Infraorder Tragulina
Family Leptomerycidae
Family Hypertragulidae
Family Praetragulidae
Family Gelocidae
Family Bachitheriidae
Family Tragulidae: chevrotains (ten species)
Family Archaeomerycidae
Family Lophiomerycidae
Infraorder Pecora
Family Palaeomerycidae
Family Dromomerycidae
Family Antilocapridae: pronghorn (one species)
Family Climacoceratidae
Family Giraffidae: okapi and four species of giraffe (five species total)
Family Hoplitomerycidae
Family Cervidae: deer (49 species)
Family Moschidae: musk deer (7 species)
Family Bovidae: cattle, buffaloes, goats, sheep, antelopes, caprines, and bison (135 species)
Research history
In the 1990s, biological systematics used not only morphology and fossils to classify organisms, but also molecular biology. Molecular biology involves sequencing an organism's DNA and RNA and comparing the sequence with that of other living beings—the more similar they are, the more closely they are related. Comparison of even-toed ungulate and cetacean genetic material has shown that cetaceans are the closest living relatives of hippopotamuses, and that Artiodactyla as traditionally defined (excluding cetaceans) is therefore a paraphyletic group.
Dan Graur and Desmond Higgins were among the first to come to this conclusion, in a paper published in 1994. However, they did not include hippopotamuses in their analysis, and classified the ruminants as the sister group of cetaceans. Subsequent studies established the close relationship between hippopotamuses and cetaceans; these studies were based on casein genes, SINEs, fibrinogen sequences, cytochrome and rRNA sequences, IRBP (and vWF) gene sequences, adrenergic receptors, and apolipoproteins.
In 2001, the fossil limbs of a Pakicetus (an amphibious cetacean the size of a wolf) and of Ichthyolestes (an early whale the size of a fox) were found in Pakistan. They were both archaeocetes ("ancient whales") from about 48 million years ago (in the Eocene). These findings showed that archaeocetes were more terrestrial than previously thought, and that the special construction of the talus (ankle bone) with a double-rolled joint surface, previously thought to be unique to even-toed ungulates, was also present in early cetaceans. The mesonychians, another type of ungulate, did not show this special construction of the talus, and were thus concluded not to share the same ancestry as cetaceans.
The oldest cetaceans date back to the early Eocene (53 million years ago), whereas the oldest known hippopotamus dates back only to the Miocene (15 million years ago). The hippopotamids are descended from the anthracotheres, a family of semiaquatic and terrestrial artiodactyls that appeared in the late Eocene, and are thought to have resembled small- or narrow-headed hippos. Research has therefore focused on the anthracotheres (family Anthracotheriidae); one member, dating from the Eocene to the Miocene, was declared to be "hippo-like" upon its discovery in the 19th century. A study from 2005 showed that the anthracotheres and hippopotamuses had very similar skulls, but differed in the adaptations of their teeth. It was nevertheless believed that cetaceans and anthracotheres descended from a common ancestor, and that hippopotamuses developed from anthracotheres. A study published in 2015 confirmed this, but also revealed that hippopotamuses were derived from older anthracotherians. The newly introduced genus Epirigenys from Eastern Africa is thus the sister group of hippos.
Historical classification of Artiodactyla
Linnaeus postulated a close relationship between camels and ruminants as early as the mid-1700s. Henri de Blainville recognized the similar anatomy of the limbs of pigs and hippos, and British zoologist Richard Owen coined the term "even-toed ungulates" and the scientific name "Artiodactyla" in 1848.
Internal morphology (mainly the stomach and the molars) was used for classification. Suines (including pigs) and hippopotamuses have molars with well-developed roots and a simple stomach that digests food. Thus, they were grouped together as non-ruminants (porcines). All other even-toed ungulates have molars with a selenodont construction (crescent-shaped cusps) and have the ability to ruminate, which requires regurgitating food and re-chewing it. Differences in stomach construction indicated that rumination evolved independently between tylopods and ruminants; therefore, tylopods were excluded from Ruminantia.
The taxonomy that was widely accepted by the end of the 20th century was:
Historical classification of Cetacea
Modern cetaceans are highly adapted sea creatures which, morphologically, have little in common with land mammals; they are similar to other marine mammals, such as seals and sea cows, due to convergent evolution. However, they evolved from originally terrestrial mammals. The most likely ancestors were long thought to be mesonychians—large, carnivorous animals from the early Cenozoic (Paleocene and Eocene), which had hooves instead of claws on their feet. Their molars were adapted to a carnivorous diet, resembling the teeth in modern toothed whales, and, unlike other mammals, had a uniform construction.
The suspected relations can be shown as follows:
Inner systematics
Molecular findings and morphological indications suggest that artiodactyls, as traditionally defined, are paraphyletic with respect to cetaceans. Cetaceans are deeply nested within the former; the two groups together form a monophyletic taxon, for which the name Cetartiodactyla is sometimes used. Modern nomenclature divides Artiodactyla (or Cetartiodactyla) in four subordinate taxa: camelids (Tylopoda), pigs and peccaries (Suina), ruminants (Ruminantia), and hippos plus cetaceans (Whippomorpha).
The presumed lineages within Artiodactyla can be represented in the following cladogram:
The four summarized Artiodactyla taxa are divided into ten extant families:
The camelids (Tylopoda) comprise only one family, Camelidae. It is a species-poor artiodactyl suborder of North American origin that is well adapted to extreme habitats—the dromedary and Bactrian camels in the Old World deserts and the guanacos, llamas, vicuñas, and alpacas in South American high mountain regions.
The pig-like creatures (Suina) are made up of two families:
The pigs (Suidae) are limited to the Old World. These include the wild boar and the domesticated form, the domestic pig.
The peccaries (Tayassuidae) are named after glands on their belly and are indigenous to Central and South America.
The ruminants (Ruminantia) consist of six families:
The mouse deer (Tragulidae) are the smallest and most primitive even-toed ruminants; they inhabit forests of Africa and Asia.
The giraffe-like creatures (Giraffidae) are composed of two species: the giraffe and the okapi.
The musk deer (Moschidae) is indigenous to East Asia.
The antilocaprids (Antilocapridae) of North America comprise only one extant species: the pronghorn.
The deer (Cervidae) are made up of about 45 species, which are characterized by a pair of antlers (generally only in males). They are spread across Europe, Asia, and the Americas. This group includes, among other species, the red deer, moose, elk (wapiti), and reindeer (caribou).
The bovids (Bovidae) are the most species-rich. Among them are cattle, sheep, caprines, and antelopes, and more.
The whippomorphs include hippos and cetaceans:
The hippos (Hippopotamidae) comprise two groups, the common hippo and the pygmy hippo.
The cetaceans comprise 72 species and two parvorders: toothed whales (Odontoceti) and baleen whales (Mysticeti)
Although deer, musk deer, and pronghorns have traditionally been grouped as cervoids (Cervoidea), molecular studies provide different—and inconsistent—results, so the question of the phylogenetic systematics of the infraorder Pecora (the horned ruminants) cannot be answered for the time being.
Anatomy
Artiodactyls are generally quadrupeds. Two major body types are known: suinids and hippopotamuses are characterized by a stocky body, short legs, and a large head; camels and ruminants, though, have a more slender build and lanky legs. Size varies considerably; the smallest member, the mouse deer, often reaches a body length of only and a weight of . The largest member, the hippopotamus, can grow up to in length and weigh , and the giraffe can grow to be tall and in body length. All even-toed ungulates display some form of sexual dimorphism: the males are consistently larger and heavier than the females. In deer, only the males boast antlers, and the horns of bovines are usually small or not present in females. Male Indian antelopes have a much darker coat than females.
Almost all even-toed ungulates have fur, with the exception being the nearly hairless hippopotamus. Fur varies in length and coloration depending on the habitat. Species in cooler regions can shed their coat. Camouflaged coats come in colors of yellow, gray, brown, or black tones.
Limbs
Even-toed ungulates bear their name because they have an even number of toes (two or four)—in some peccaries, the hind legs have a reduction in the number of toes to three. The central axis of the leg is between the third and fourth toe. The first toe is missing in modern artiodactyls, and can only be found in now-extinct genera. The second and fifth toes are adapted differently between species:
In camels, where only two toes are present, the claws are transformed into nails (while both are made of keratin, claws are curved and pointed while nails are flat and dull). These claws consist of three parts: the plate (top and sides), the sole (bottom), and the bale (rear). In general, the claws of the forelegs are wider and blunter than those of the hind legs, and they are farther apart. Aside from camels, all even-toed ungulates put just the tip of the foremost phalanx on the ground.
In even-toed ungulates, the bones of the stylopodium (upper arm or thigh bone) and zygopodium (tibia and fibula) are usually elongated. The muscles of the limbs are predominantly localized close to the body, which is why artiodactyls often have very slender legs. A clavicle is never present, and the scapula is very agile, swinging back and forth for added mobility when running. The special construction of the legs prevents them from rotating, which allows for greater stability when running at high speeds. In addition, many smaller artiodactyls have a very flexible body, contributing to their speed by increasing their stride length.
Head
Many even-toed ungulates have a relatively large head. The skull is elongated and rather narrow; the frontal bone is enlarged near the back and displaces the parietal bone, which forms only part of the side of the cranium (especially in ruminants).
Horns and antlers
Four families of even-toed ungulates have cranial appendages. These Pecora (with the exception of the musk deer) have one of four types of cranial appendages: true horns, antlers, ossicones, or pronghorns.
True horns have a bone core that is covered in a permanent sheath of keratin, and are found only in the bovids. Antlers are bony structures that are shed and replaced each year; they are found in deer (members of the family Cervidae). They grow from a permanent outgrowth of the frontal bone called the pedicle and can be branched, as in the white-tailed deer (Odocoileus virginianus), or palmate, as in the moose (Alces alces). Ossicones are permanent bone structures that fuse to the frontal or parietal bones during an animal's life and are found only in the Giraffidae. Pronghorns, while similar to horns in that they have keratinous sheaths covering permanent bone cores, are deciduous.
All these cranial appendages can serve for posturing, battling for mating privilege, and for defense. In almost all cases, they are sexually dimorphic, and are often found only on the males. One exception is the species Rangifer tarandus, known as reindeer in Europe or caribou in North America, where both sexes can grow antlers yearly, though the females' antlers are typically smaller and not always present.
Teeth
There are two trends in dentition within Artiodactyla. The Suina and the hippopotamuses have a relatively large number of teeth (some pigs have 44); their dentition is adapted to a crushing mastication characteristic of omnivores. Camels and ruminants have fewer teeth, and there is often a diastema, a gap in the dentition in front of the molars, which are aligned for grinding plant matter.
The incisors are often reduced in ruminants, and are completely absent in the upper jaw. The canines are enlarged and tusk-like in the Suina, and are used for digging in the ground and for defense. In ruminants, the males' upper canines are enlarged and used as a weapon in certain species (mouse deer, musk deer, water deer); species with frontal weapons are usually missing the upper canines. The lower canines of ruminants resemble the incisors, so that these animals have eight uniform teeth in the frontal part of the lower jaw.
The molars of suines have only a few cusps (bunodont). In contrast, the molars of camels and ruminants have crescent-shaped cusps (selenodont).
Senses
Artiodactyls have a well-developed sense of smell and sense of hearing. Unlike many other mammals, they have a poor sense of sight—moving objects are much easier to see than stationary ones. Similar to many other prey animals, their eyes are on the sides of the head, giving them an almost panoramic view.
Digestive system
The ruminants (Ruminantia) ruminate their food—they regurgitate and re-chew it. Ruminants' mouths often have additional salivary glands, and the oral mucosa is often heavily calloused to avoid injury from hard plant parts and to allow easier transport of roughly chewed food. Their stomachs are divided into three to four sections: the rumen, the reticulum, the omasum, and the abomasum. After the food is ingested, it is mixed with saliva in the rumen and reticulum and separates into layers of solid and liquid material. The solids lump together to form a bolus (also known as the cud); this is regurgitated by reticular contractions while the glottis is closed. When the bolus enters the mouth, the fluid is squeezed out with the tongue and re-swallowed. The bolus is chewed slowly to mix it completely with saliva and to break it down. Ingested food passes to the "fermentation chamber" (rumen and reticulum), where it is kept in continual motion by rhythmic contractions. Cellulolytic microbes (bacteria, protozoa, and fungi) produce cellulase, which is needed to break down the cellulose found in plant material. This form of digestion has two advantages: plants that are indigestible to other species can be digested and used, and the duration of actual food consumption is shortened; the animal spends only a short time out in the open with its head to the ground—rumination can take place later, in a sheltered area.
Tylopoda (camels, llamas, and alpacas) and chevrotains have three-chambered stomachs, while the rest of Ruminantia have four-chambered stomachs. The handicap of a heavy digestive system has increased selective pressure towards limbs that allow the animal to quickly escape predators. Most species within Suina have a simple two-chambered stomach that allows an omnivorous diet; the babirusa, however, is a herbivore and has extra maxillary teeth for proper mastication of plant material, with most fermentation occurring in the caecum of the large intestine with the help of cellulolytic microorganisms. Peccaries have a complex stomach that contains four compartments; their forestomach carries out microbial fermentation and shows high levels of volatile fatty acids, and it has been proposed that this complex forestomach is a means to slow digestive passage and increase digestive efficiency. Hippopotamuses have three-chambered stomachs and do not ruminate. They consume large quantities of grass and other plant matter each night and may cover long distances to obtain food, which they digest with the help of microbes that produce cellulase. Their closest living relatives, the whales, are obligate carnivores.
Unlike other even-toed ungulates, pigs have a simple sack-shaped stomach. Some artiodactyls, such as white-tailed deer, lack a gall bladder.
Genitourinary system
The penises of even-toed ungulates have an S-shape at rest and lie in a pocket under the skin on the belly. The corpora cavernosa are only slightly developed; an erection mainly causes this curvature to straighten, which leads to an extension, but not a thickening, of the penis. Cetaceans have similar penises. In some even-toed ungulates, the penis contains a structure called the urethral process or penile vermiform appendix.
The testicles are located in the scrotum, outside the abdominal cavity. The ovaries of many females also descend—much as the testicles of many male mammals do—and lie close to the pelvic inlet at the level of the fourth lumbar vertebra. The uterus has two horns (uterus bicornis).
Other
The number of mammary glands is variable and correlates, as in all mammals, with litter size. Pigs, which have the largest litter size of all even-toed ungulates, have two rows of teats extending from the armpit to the groin. In most cases, however, even-toed ungulates have only one or two pairs of teats; in some species these form an udder in the groin region.
Secretory glands in the skin are present in virtually all species and can be located in different places, such as near the eyes, behind the horns, on the neck or back, on the feet, or in the anal region.
Artiodactyls have a carotid rete, a heat-exchange network that enables them, unlike perissodactyls, which lack one, to regulate their brain temperature independently of body temperature. It has been argued that its presence explains the greater success of artiodactyls compared to perissodactyls in adapting to diverse environments, from the Arctic Circle to deserts and tropical savannahs.
Lifestyle
Distribution and habitat
Artiodactyls are native to almost all parts of the world, with the exception of Oceania and Antarctica. Humans have introduced different artiodactyls worldwide as hunting animals. Artiodactyls inhabit almost every habitat, from tropical rainforests and steppes to deserts and high mountain regions. The greatest biodiversity prevails in open habitats such as grasslands and open forests.
Social behavior
The social behavior of even-toed ungulates varies from species to species. Generally, there is a tendency to merge into larger groups, but some live alone or in pairs. Species living in groups often have a hierarchy, both among males and females. Some species also live in harem groups, with one male, several females, and their common offspring. In other species, the females and juveniles stay together, while males are solitary or live in bachelor groups and seek out females only during mating season.
Many artiodactyls are territorial and mark their territory, for example, with glandular secretions or urine. In addition to year-round sedentary species, there are animals that migrate seasonally.
There are diurnal, crepuscular, and nocturnal artiodactyls. Some species' pattern of wakefulness varies with season or habitat.
Reproduction and life expectancy
Generally, even-toed ungulates tend to have long gestation periods, smaller litter sizes, and more highly developed newborns. As with many other mammals, species in temperate or polar regions have a fixed mating season, while those in tropical areas breed year-round. They carry out polygynous mating behavior, meaning a male mates with several females and suppresses all competition.
The length of the gestation period varies: four to five months for pigs and musk deer; six to ten months for hippos, deer, and bovids; ten to thirteen months for camels; and fourteen to fifteen months for giraffes. Most species deliver one or two young, but some pigs can deliver up to ten.
The newborns are precocial (born relatively mature), with open eyes and a coat of hair (with the exception of the hairless hippos). Juvenile deer and pigs have striped or spotted coats; the pattern disappears as they grow older. The juveniles of some species spend their first weeks with their mother in a safe location, while others may be running and following the herd within a few hours or days.
Life expectancy is typically twenty to thirty years; as in many mammals, smaller species often have a shorter lifespan than larger species. The artiodactyls with the longest lifespans are the hippos, cows, and camels, which can live 40 to 50 years.
Predators and parasites
Artiodactyls have different natural predators depending on their size and habitat. Numerous carnivores prey on them, including big cats (e.g., lions) and bears. Other predators include crocodiles, wolves and dogs, large raptors, and, for small species and young animals, large snakes. For cetaceans, possible predators include sharks, polar bears, and other cetaceans; among the latter is the orca, the top predator of the oceans.
Parasites include nematodes, botflies, fleas, lice, and flukes, but they have debilitating effects only when the infestation is severe.
Interactions with humans
Domestication
Artiodactyls were hunted by early humans for various reasons: for meat or fur, and for their bones and teeth, which were used as weapons or tools. Their domestication began around 8000 BCE. To date, humans have domesticated goats, sheep, cattle, camels, llamas, alpacas, and pigs. Initially, livestock was used primarily for food, but animals began being used for work around 3000 BCE. Clear evidence exists of antelope being used for food 2 million years ago in the Olduvai Gorge, part of the Great Rift Valley. Cro-Magnons relied heavily on reindeer for food, skins, tools, and weapons; with dropping temperatures and increased reindeer numbers at the end of the Pleistocene, they became the prey of choice. Reindeer remains accounted for 94% of the bones and teeth found in a cave above the river Céou that was inhabited around 12,500 years ago. In general, most even-toed ungulates can be consumed as kosher meat, with the principal exceptions of the Suina (pigs, etc.) and hippopotamids, which are even-toed but do not chew the cud, and of the Cetacea, which, for the purposes of rabbinic law, are considered scaleless fish and thus not kosher.
Today, artiodactyls are kept primarily for their meat, milk, and wool, fur, or hide for clothing. Domestic cattle, the water buffalo, the yak, and camels are used for work, as rides, or as pack animals.
Threats
The endangerment level of each even-toed ungulate differs. Some species are synanthropic (such as the wild boar) and have spread into areas where they are not indigenous, either having been brought in as farm animals or having escaped from captivity. Some artiodactyls also benefit from the fact that their predators (e.g., the Tasmanian tiger) were severely decimated by ranchers, who saw them as competition.
Conversely, many artiodactyls have declined significantly in numbers, and some have even gone extinct, largely due to over-hunting and, more recently, habitat destruction. Extinct species include several gazelles, the aurochs, the Malagasy hippopotamus, the bluebuck, and Schomburgk's deer. Two species, the scimitar-horned oryx and Père David's deer, are extinct in the wild. Fourteen species are considered critically endangered, including the addax, the kouprey, the wild Bactrian camel, Przewalski's gazelle, the saiga, and the pygmy hog. A further 24 species are considered endangered.
| Biology and health sciences | Artiodactyla | null |
46770 | https://en.wikipedia.org/wiki/Fixed-wing%20aircraft | Fixed-wing aircraft | A fixed-wing aircraft is a heavier-than-air aircraft, such as an airplane, which is capable of flight using aerodynamic lift. Fixed-wing aircraft are distinct from rotary-wing aircraft (in which a rotor mounted on a spinning shaft generates lift), and ornithopters (in which the wings oscillate to generate lift). The wings of a fixed-wing aircraft are not necessarily rigid; kites, hang gliders, variable-sweep wing aircraft, and airplanes that use wing morphing are all classified as fixed wing.
Gliding fixed-wing aircraft, including free-flying gliders and tethered kites, can use moving air to gain altitude. Powered fixed-wing aircraft (airplanes) that gain forward thrust from an engine include powered paragliders, powered hang gliders and ground effect vehicles. Most fixed-wing aircraft are operated by a pilot, but some are unmanned and controlled either remotely or autonomously.
History
Kites
Kites were used approximately 2,800 years ago in China, where kite-building materials were available. Leaf kites may have been flown even earlier in what is now Sulawesi, based on interpretations of cave paintings on nearby Muna Island. By at least 549 AD paper kites were being flown, as it was recorded that year that a paper kite was used to carry a message for a rescue mission. Ancient and medieval Chinese sources report kites used for measuring distances, testing the wind, lifting men, signaling, and communication for military operations.
Kite stories were brought to Europe by Marco Polo towards the end of the 13th century, and kites were brought back by sailors from Japan and Malaysia in the 16th and 17th centuries. Although initially regarded as curiosities, by the 18th and 19th centuries kites were used for scientific research.
Gliders and powered devices
Around 400 BC in Greece, Archytas was reputed to have designed and built the first self-propelled flying device, shaped like a bird and propelled by a jet of what was probably steam; it was said to have flown some 200 m. This machine may have been suspended during its flight.
One of the earliest attempts with gliders was by 11th-century monk Eilmer of Malmesbury, which failed. A 17th-century account states that 9th-century poet Abbas Ibn Firnas made a similar attempt, though no earlier sources record this event.
In 1799, Sir George Cayley laid out the concept of the modern airplane as a fixed-wing machine with separate systems for lift, propulsion, and control. Cayley was building and flying models of fixed-wing aircraft as early as 1803, and he built a successful passenger-carrying glider in 1853. In 1856, Frenchman Jean-Marie Le Bris made the first flight higher than his point of departure, by having his glider L'Albatros artificiel towed by a horse along a beach. In 1884, American John J. Montgomery made controlled flights in a glider as part of a series of gliders he built between 1883 and 1886. Other aviators who made similar flights at that time were Otto Lilienthal, Percy Pilcher, and protégés of Octave Chanute.
In the 1890s, Lawrence Hargrave conducted research on wing structures and developed a box kite that lifted the weight of a man. His designs were widely adopted. He also developed a type of rotary aircraft engine, but did not create a powered fixed-wing aircraft.
Powered flight
Sir Hiram Maxim built a craft that weighed 3.5 tons, with a 110-foot (34-meter) wingspan powered by two 360-horsepower (270-kW) steam engines driving two propellers. In 1894, his machine was tested with overhead rails to prevent it from rising. The test showed that it had enough lift to take off. The craft was uncontrollable, and Maxim abandoned work on it.
The Wright brothers' flights in 1903 with their Flyer I are recognized by the Fédération Aéronautique Internationale (FAI), the standard setting and record-keeping body for aeronautics, as "the first sustained and controlled heavier-than-air powered flight". By 1905, the Wright Flyer III was capable of fully controllable, stable flight for substantial periods.
In 1906, Brazilian inventor Alberto Santos Dumont designed, built, and piloted an aircraft that set the first world record recognized by the Aéro-Club de France, flying his 14-bis a distance of 220 m in less than 22 seconds. The flight was certified by the FAI.
The Blériot VIII of 1908 was an early design with the modern monoplane tractor configuration. It had movable tail surfaces controlling both yaw and pitch, and a form of roll control supplied either by wing warping or by ailerons, operated by the pilot with a joystick and rudder bar. It was an important predecessor of Blériot's later Blériot XI Channel-crossing aircraft of the summer of 1909.
World War I
World War I initiated the use of aircraft as weapons and observation platforms. The earliest known aerial victory with a synchronized machine-gun-armed fighter aircraft occurred in 1915, flown by German Luftstreitkräfte Lieutenant Kurt Wintgens. Fighter aces appeared; the greatest (by number of air victories) was Manfred von Richthofen.
Alcock and Brown crossed the Atlantic non-stop for the first time in 1919. The first commercial flights traveled between the United States and Canada in 1919.
Interwar aviation; the "Golden Age"
The so-called Golden Age of Aviation occurred between the two World Wars, during which aviators refined and extended earlier breakthroughs. Innovations included Hugo Junkers' all-metal airframes of 1915, which led by the early 1930s to multi-engine aircraft with wingspans of 60 m or more; the adoption of the mostly air-cooled radial engine as a practical aircraft power plant alongside V-12 liquid-cooled aviation engines; and ever longer flights, as with a Vickers Vimy in 1919, followed months later by the U.S. Navy's NC-4 transatlantic flight, culminating in May 1927 with Charles Lindbergh's solo trans-Atlantic flight in the Spirit of St. Louis, which spurred still longer flight attempts.
World War II
Airplanes figured in all the major battles of World War II. They were an essential component of military strategies such as the German Blitzkrieg and the American and Japanese aircraft carrier campaigns of the Pacific.
Military gliders were developed and used in several campaigns, but were limited by the high casualty rate encountered. The Focke-Achgelis Fa 330 Bachstelze (Wagtail) rotor kite of 1942 was notable for its use by German U-boats.
Before and during the war, British and German designers worked on jet engines. The first jet aircraft to fly, in 1939, was the German Heinkel He 178. In 1943, the first operational jet fighter, the Messerschmitt Me 262, went into service with the German Luftwaffe. Later in the war the British Gloster Meteor entered service, but the two jet types never met in combat – top airspeeds for that era peaked at about 1,130 km/h, set by the early July 1944 unofficial record flight of the German Me 163B V18 rocket fighter prototype.
Postwar
In October 1947, the Bell X-1 was the first aircraft to exceed the speed of sound, flown by Chuck Yeager.
In 1948–49, aircraft transported supplies during the Berlin Blockade. New aircraft types, such as the B-52, were produced during the Cold War.
The first jet airliner, the de Havilland Comet, was introduced in 1952, followed by the Soviet Tupolev Tu-104 in 1956. The Boeing 707, the first widely successful commercial jet, was in commercial service for more than 50 years, from 1958 to 2010. The Boeing 747 was the world's largest passenger aircraft from 1970 until it was surpassed by the Airbus A380 in 2005. The most successful aircraft is the Douglas DC-3 and its military version, the C-47, a medium sized twin engine passenger or transport aircraft that has been in service since 1936 and is still used throughout the world. Some of the hundreds of versions found other purposes, like the AC-47, a Vietnam War era gunship, which is still used in the Colombian Air Force.
Types
Airplane/aeroplane
An airplane (aeroplane or plane) is a powered fixed-wing aircraft propelled by thrust from a jet engine or propeller. Planes come in many sizes, shapes, and wing configurations. Uses include recreation, transportation of goods and people, military, and research.
Seaplane
A seaplane (hydroplane) is capable of taking off and landing (alighting) on water. Seaplanes that can also operate from dry land are a subclass called amphibian aircraft. Seaplanes and amphibians divide into two categories: float planes and flying boats.
A float plane is similar to a land-based airplane. The fuselage is not specialized. The wheels are replaced by floats, allowing the craft to remain afloat for water takeoffs and landings.
A flying boat is a seaplane with a watertight hull forming the lower (ventral) areas of its fuselage. The fuselage lands and then rests directly on the water's surface, held afloat by the hull. It does not need additional floats for buoyancy, although small underwing floats or fuselage-mounted sponsons may be used to stabilize it. Large seaplanes are usually flying boats, and most classic amphibian aircraft designs are of this type.
Powered gliders
Many forms of glider may include a small power plant. These include:
Motor glider – a conventional glider or sailplane with an auxiliary power plant that may be used when in flight to increase performance.
Powered hang glider – a hang glider with a power plant added.
Powered parachute – a paraglider type of parachute with an integrated air frame, seat, undercarriage and power plant hung beneath.
Powered paraglider or paramotor – a paraglider with a power plant suspended behind the pilot.
Ground effect vehicle
A ground effect vehicle (GEV) flies close to the terrain, making use of the ground effect – the interaction between the wings and the surface. Some GEVs are able to fly higher out of ground effect (OGE) when required – these are classed as powered fixed-wing aircraft.
Glider
A glider is a heavier-than-air craft whose free flight does not require an engine. A sailplane is a fixed-wing glider designed for soaring – gaining height from updrafts of air and flying for long periods.
Gliders are mainly used for recreation but have found use for purposes such as aerodynamics research, warfare and spacecraft recovery.
Motor gliders are equipped with a limited propulsion system for takeoff, or to extend flight duration.
As is the case with planes, gliders come in diverse forms with varied wings, aerodynamic efficiency, pilot location, and controls.
Large gliders are most commonly borne aloft by a tow-plane or by a winch. Military gliders have been used in combat to deliver troops and equipment, while specialized gliders have been used in atmospheric and aerodynamic research. Rocket-powered aircraft and spaceplanes have made unpowered landings similar to a glider.
Gliders and sailplanes that are used for the sport of gliding have high aerodynamic efficiency. The highest lift-to-drag ratio is 70:1, though 50:1 is common. After take-off, further altitude can be gained through the skillful exploitation of rising air. Flights of thousands of kilometers at average speeds over 200 km/h have been achieved.
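As a rough illustration of what these glide ratios mean (a sketch under simple still-air assumptions, not a figure from any source above), the distance a glider can cover is approximately its altitude multiplied by its lift-to-drag ratio:

```python
# Still-air glide range: altitude times lift-to-drag (glide) ratio.
def glide_range_km(altitude_m: float, lift_to_drag: float) -> float:
    """Approximate still-air glide range in kilometres."""
    return altitude_m * lift_to_drag / 1000.0

# From a release altitude of 1,000 m, a 70:1 sailplane can cover about
# 70 km in still air, while a more typical 50:1 design manages about 50 km.
print(glide_range_km(1000, 70))  # 70.0
print(glide_range_km(1000, 50))  # 50.0
```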
One small-scale example of a glider is the paper airplane. An ordinary sheet of paper can be folded into an aerodynamic shape fairly easily; its low mass relative to its surface area reduces the required lift for flight, allowing it to glide some distance.
Gliders and sailplanes share many design elements and aerodynamic principles with powered aircraft. For example, the Horten H.IV was a tailless flying wing glider, and the delta-winged Space Shuttle orbiter glided during its descent phase. Many gliders adopt similar control surfaces and instruments as airplanes.
Types
The main application of modern glider aircraft is sport and recreation.
Sailplane
Gliders were developed in the 1920s for recreational purposes. As pilots began to understand how to use rising air, sailplane gliders were developed with a high lift-to-drag ratio. These allowed the craft to glide to the next source of "lift", increasing their range. This gave rise to the popular sport of gliding.
Early gliders were built mainly of wood and metal, later replaced by composite materials incorporating glass, carbon or aramid fibers. To minimize drag, these types have a streamlined fuselage and long narrow wings incorporating a high aspect ratio. Single-seat and two-seat gliders are available.
Initially, training was done by short "hops" in primary gliders, which have no cockpit and minimal instruments. Since shortly after World War II, training has been done in two-seat dual-control gliders, though high-performance two-seaters can also make long flights. Originally skids were used for landing, later replaced by wheels, often retractable. Gliders known as motor gliders are designed for unpowered flight but can deploy piston, rotary, jet, or electric engines. Gliders are classified by the FAI for competitions into glider competition classes, mainly on the basis of wingspan and flaps.
A class of ultralight sailplanes, including some known as microlift gliders and some known as airchairs, has been defined by the FAI based on weight. They are light enough to be transported easily and can be flown without licensing in some countries. Ultralight gliders have performance similar to hang gliders, but offer some crash safety, as the pilot can strap into an upright seat within a deformable structure. Landing is usually on one or two wheels, which distinguishes these craft from hang gliders. Most are built by individual designers and hobbyists.
Military gliders
Military gliders were used during World War II for carrying troops (glider infantry) and heavy equipment to combat zones. The gliders were towed into the air and most of the way to their target by transport planes, e.g. the C-47 Dakota, or by one-time bombers that had been relegated to secondary activities, e.g. the Short Stirling. The advantages over paratroopers were that heavy equipment could be landed and that troops were quickly assembled, rather than being dispersed over a parachute drop zone. The gliders were treated as disposable and constructed from inexpensive materials such as wood, though a few were re-used. By the time of the Korean War, transport aircraft had become larger and more efficient, so that even light tanks could be dropped by parachute, rendering gliders obsolete.
Research gliders
Even after the development of powered aircraft, gliders continued to be used for aviation research. The NASA Paresev Rogallo flexible wing was developed to investigate alternative methods of recovering spacecraft. Although this application was abandoned, publicity inspired hobbyists to adapt the flexible-wing airfoil for hang gliders.
Initial research into many types of fixed-wing craft, including flying wings and lifting bodies, was also carried out using unpowered prototypes.
Hang glider
A hang glider is a glider aircraft in which the pilot is suspended in a harness below the airframe and exercises control by shifting body weight in opposition to a control frame. The wing is typically fabric stretched over an aluminum-alloy or composite frame. Pilots can soar for hours, gain thousands of meters of altitude in thermal updrafts, perform aerobatics, and glide cross-country for hundreds of kilometers.
Paraglider
A paraglider is a lightweight, free-flying, foot-launched glider with no rigid primary structure. The pilot is suspended in a harness below a hollow fabric wing whose shape is formed by its suspension lines; air entering vents in the front of the wing inflates it, and the airflow over the outside maintains its form. Paragliding is most often a recreational activity.
Unmanned gliders
A paper plane is a toy aircraft (usually a glider) made out of paper or paperboard.
Model glider aircraft are models of aircraft using lightweight materials such as polystyrene and balsa wood. Designs range from simple glider aircraft to accurate scale models, some of which can be very large.
Glide bombs are bombs with aerodynamic surfaces to allow a gliding flight path rather than a ballistic one. This enables stand-off aircraft to attack a target from a distance.
Kite
A kite is a tethered aircraft held aloft by wind blowing over its wing(s). The wing deflects the airflow downwards, producing lift and a region of higher pressure below the wing; the deflection also generates horizontal drag in the direction of the wind. The resultant force vector from the lift and drag components is opposed by the tension of the tether (a worked force balance is sketched below).
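The force balance just described can be made concrete with a small worked example; the lift, drag, and weight figures below are assumptions chosen purely for illustration:

```python
import math

# Steady kite flight: tether tension balances the resultant of lift
# (vertical), drag (horizontal, downwind) and the kite's weight.
lift_n = 40.0   # assumed lift, newtons
drag_n = 15.0   # assumed drag, newtons
weight_n = 5.0  # assumed kite weight, newtons

net_vertical = lift_n - weight_n                        # leftover upward force
tension = math.hypot(drag_n, net_vertical)              # tether tension, newtons
angle = math.degrees(math.atan2(net_vertical, drag_n))  # line angle above horizontal

print(f"tension = {tension:.1f} N at {angle:.0f} degrees above horizontal")
```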
Kites are mostly flown for recreational purposes, but have many other uses. Early pioneers such as the Wright Brothers and J.W. Dunne sometimes flew an aircraft as a kite in order to confirm its flight characteristics, before adding an engine and flight controls.
Applications
Military
Kites have been used for signaling, for delivery of munitions, and for observation, by lifting an observer above the field of battle, and by using kite aerial photography.
Science and meteorology
Kites have been used for scientific purposes, such as Benjamin Franklin's famous experiment proving that lightning is electricity. Kites were the precursors of traditional aircraft and were instrumental in the development of early flying craft. Alexander Graham Bell experimented with large man-lifting kites, as did the Wright brothers and Lawrence Hargrave. Kites had a historical role in lifting scientific instruments to measure atmospheric conditions for weather forecasting.
Radio aerials and light beacons
Kites can be used to carry radio antennas. This method was used for the reception station of Marconi's first transatlantic transmission. Captive balloons may be more convenient for such experiments, because kite-carried antennas require strong winds, which are not always available when heavy equipment and a ground conductor must also be supported.
Kites can be used to carry light sources such as light sticks or battery-powered lights.
Kite traction
Kites can be used to pull people and vehicles downwind. Efficient foil-type kites such as power kites can also be used to sail upwind under the same principles as used by other sailing craft, provided that lateral forces on the ground or in the water are redirected as with the keels, center boards, wheels and ice blades of traditional sailing craft. In the last two decades, kite sailing sports have become popular, such as kite buggying, kite landboarding, kite boating and kite surfing. Snow kiting is also popular.
Kite sailing opens several possibilities not available in traditional sailing:
Wind speeds are greater at higher altitudes
Kites may be maneuvered dynamically, which dramatically increases the available force
Mechanical structures are not needed to withstand bending forces; vehicles/hulls can be light or eliminated.
Power generation
Research and development projects investigate kites for harnessing high altitude wind currents for electricity generation.
Cultural uses
Kite festivals are a popular form of entertainment throughout the world. They include local events, traditional festivals and major international festivals.
Designs
Bermuda kite
Bowed kite, e.g. Rokkaku
Cellular or box kite
Chapi-chapi
Delta kite
Foil, parafoil or bow kite
Malay kite (see also wau bulan)
Tetrahedral kite
Types
Expanded polystyrene kite
Fighter kite
Indoor kite
Inflatable single-line kite
Kytoon
Man-lifting kite
Rogallo parawing kite
Stunt (sport) kite
Water kite
Characteristics
Air frame
The structural element of a fixed-wing aircraft is the airframe, which varies according to the aircraft's type, purpose, and technology. Early airframes were made of wood with fabric wing surfaces. When engines became available for powered flight, their mounts were made of metal. As speeds increased, metal became more common until, by the end of World War II, all-metal aircraft were common. In modern times, composite materials have become more common.
Typical structural elements include:
One or more mostly horizontal wings, often with an airfoil cross-section. The wing deflects air downward as the aircraft moves forward, generating lifting force to support it in flight. The wing also provides lateral stability, keeping the aircraft level in steady flight. Other roles are to hold the fuel and mount the engines.
A fuselage, typically a long, thin body, usually with tapered or rounded ends to make its shape aerodynamically slippery. The fuselage joins the other parts of the air frame and contains the payload, and flight systems.
A vertical stabilizer or fin is a rigid surface mounted at the rear of the plane and typically protruding above it. The fin stabilizes the plane's yaw (turning left or right) and mounts the rudder, which controls rotation about that axis.
A horizontal stabilizer, usually mounted at the tail near the vertical stabilizer. The horizontal stabilizer is used to stabilize the plane's pitch (tilt up or down) and mounts the elevators that provide pitch control.
Landing gear, a set of wheels, skids, or floats that support the plane while it is not in flight. On seaplanes, the bottom of the fuselage or floats (pontoons) support it while on the water. On some planes, the landing gear retracts during the flight to reduce drag.
Wings
The wings of a fixed-wing aircraft are static planes extending to either side of the aircraft. When the aircraft travels forwards, air flows over the wings, which are shaped to create lift (the standard relation is sketched below).
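The standard way to quantify this, drawn from basic aerodynamics rather than from the text above, is the lift equation; a minimal sketch with illustrative numbers:

```python
# Lift equation: L = 0.5 * rho * v**2 * S * CL, where rho is air density,
# v airspeed, S wing area and CL the wing's lift coefficient.
def lift_newtons(rho: float, v: float, s: float, cl: float) -> float:
    """Lift force in newtons."""
    return 0.5 * rho * v ** 2 * s * cl

# Sea-level air (1.225 kg/m^3), 50 m/s airspeed, a 16 m^2 wing and
# CL = 0.5 give about 12.25 kN of lift -- enough to support roughly
# 1,250 kg, a plausible light-aircraft weight.
print(lift_newtons(1.225, 50.0, 16.0, 0.5))  # 12250.0
```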
Structure
Kites and some lightweight gliders and airplanes have flexible wing surfaces that are stretched across a frame and made rigid by the lift forces exerted by the airflow over them. Larger aircraft have rigid wing surfaces.
Whether flexible or rigid, most wings have a strong frame to give them shape and to transfer lift from the wing surface to the rest of the aircraft. The main structural elements are one or more spars running from root to tip, and ribs running from the leading (front) to the trailing (rear) edge.
Early airplane engines had little power and light weight was critical. Also, early airfoil sections were thin, and could not support a strong frame. Until the 1930s, most wings were so fragile that external bracing struts and wires were added. As engine power increased, wings could be made heavy and strong enough that bracing was unnecessary. Such an unbraced wing is called a cantilever wing.
Configuration
The number and shape of wings vary widely. Some designs blend the wing with the fuselage, while left and right wings separated by the fuselage are more common.
Occasionally more wings have been used, such as the three-winged triplane from World War I. Four-winged quadruplanes and other multiplane designs have had little success.
Most planes are monoplanes, with one or two parallel wings. Biplanes and triplanes stack one wing above the other. Tandem wings place one wing behind the other, possibly joined at the tips. When the available engine power increased during the 1920s and 1930s and bracing was no longer needed, the unbraced or cantilever monoplane became the most common form.
The planform is the shape of the wing when seen from above or below. To be aerodynamically efficient, a wing should be straight with a long span but a short chord (a high aspect ratio; see the sketch below). To be structurally efficient, and hence lightweight, the wingspan should be as small as possible while still offering enough area to provide lift.
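Aspect ratio, the usual measure of this trade-off, is span squared divided by wing area (equivalently, span divided by chord for a rectangular wing); the dimensions below are illustrative, not taken from any particular aircraft:

```python
# Aspect ratio: span squared over wing area.
def aspect_ratio(span_m: float, area_m2: float) -> float:
    return span_m ** 2 / area_m2

print(aspect_ratio(15.0, 10.0))  # 22.5 -- long and narrow, aerodynamically efficient
print(aspect_ratio(10.0, 25.0))  # 4.0  -- short and broad, structurally efficient
```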
The swept wing is a straight wing angled backward (or occasionally forwards) to reduce drag from shock waves at transonic speeds. The variable-sweep wing transforms between an efficient straight configuration for takeoff and landing and a low-drag swept configuration for high-speed flight. Other forms of variable planform have been flown, but none have gone beyond the research stage.
The delta wing is a triangular shape that serves various purposes. As a flexible Rogallo wing, it allows a stable shape under aerodynamic forces, and is often used for kites and other ultralight craft. It is supersonic capable, combining high strength with low drag.
Wings are typically hollow, also serving as fuel tanks. They are equipped with flaps, which increase lift and drag for take-off and landing, and with ailerons, which act in opposition to one another to roll the aircraft and change direction.
Fuselage
The fuselage is typically long and thin, usually with tapered or rounded ends to make its shape aerodynamically smooth. Most fixed-wing aircraft have a single fuselage. Others may have multiple fuselages, or the fuselage may be fitted with booms on either side of the tail to allow the extreme rear of the fuselage to be utilized.
The fuselage typically carries the flight crew, passengers, cargo, and sometimes fuel and engine(s). Gliders typically omit fuel and engines, although some variations such as motor gliders and rocket gliders have them for temporary or optional use.
Pilots of manned fixed-wing aircraft control them from a cockpit within the fuselage, typically located at the front or top and equipped with controls, windows, and instruments; in commercial aircraft it is separated from the passengers by a secure door. In small aircraft, the passengers typically sit behind the pilot(s) in the cabin; occasionally, a passenger may sit beside or in front of the pilot. Larger passenger aircraft have a separate passenger cabin, or occasionally several cabins, physically separated from the cockpit.
Aircraft often have two or more pilots, with one in overall command (the "pilot") and one or more "co-pilots". On larger aircraft a navigator may also be seated in the cockpit, and some military or specialized aircraft carry other flight crew members in the cockpit as well.
Wings vs. bodies
Flying wing
A flying wing is a tailless aircraft with no distinct fuselage; the crew, payload, and equipment are housed inside the main wing structure.
The flying wing configuration was studied extensively in the 1930s and 1940s, notably by Jack Northrop and Cheston L. Eshelman in the United States, and Alexander Lippisch and the Horten brothers in Germany. After the war, numerous experimental designs were based on the flying wing concept. General interest continued into the 1950s, but designs did not offer a great advantage in range and presented technical problems. The flying wing is most practical for designs in the slow-to-medium speed range, and drew continual interest as a tactical airlifter design.
Interest in flying wings reemerged in the 1980s due to their potentially low radar cross-sections. Stealth technology relies on shapes that reflect radar waves only in certain directions, making the aircraft harder to detect. This approach eventually led to the Northrop B-2 Spirit stealth bomber. In this case the flying wing's aerodynamics were not the primary concern: computer-controlled fly-by-wire systems compensated for many of the aerodynamic drawbacks, enabling an efficient and stable long-range aircraft.
Blended wing body
Blended wing body aircraft have a flattened, airfoil-shaped body that produces most of the lift needed to keep the craft aloft, together with distinct wing structures that are smoothly blended into the body.
Blended wing bodied aircraft incorporate design features from both fuselage and flying wing designs. The purported advantages of the blended wing body approach are efficient, high-lift wings and a wide, airfoil-shaped body. This enables the entire craft to contribute to lift generation with potentially increased fuel economy.
Lifting body
A lifting body is a configuration in which the body produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic, and hypersonic flight, or, spacecraft re-entry. All of these flight regimes pose challenges for flight stability.
Lifting bodies were a major area of research in the 1960s and 1970s as a means to build small and lightweight manned spacecraft. The US built lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles. Interest waned as the US Air Force lost interest in the manned mission, and major development ended during the Space Shuttle design process when it became clear that highly shaped fuselages made it difficult to fit fuel tanks.
Empennage and foreplane
The classic airfoil-section wing is unstable in flight by itself. Flexible-wing aircraft often rely on an anchor line or the weight of a pilot hanging beneath to maintain the correct attitude. Some free-flying types use an adapted airfoil that is inherently stable, or other mechanisms including electronic artificial stability.
In order to achieve trim, stability, and control, most fixed-wing types have an empennage comprising a vertical fin and rudder, which act in the horizontal plane (yaw), and a horizontal tailplane and elevator, which act in the vertical plane (pitch). This arrangement is so common that it is known as the conventional layout. Sometimes two or more fins are spaced out along the tailplane.
Some types have a horizontal "canard" foreplane ahead of the main wing, instead of behind it. This foreplane may contribute to the trim, stability or control of the aircraft, or to several of these.
Aircraft controls
Kite control
Kites are controlled by one or more tethers.
Free-flying aircraft controls
Gliders and airplanes have sophisticated control systems, especially if they are piloted.
The controls allow the pilot to direct the aircraft in the air and on the ground. Typically these are (a summary mapping is sketched after the lists below):
The yoke or joystick controls rotation of the plane about the pitch and roll axes. A yoke resembles a steering wheel. The pilot can pitch the plane down by pushing on the yoke or joystick, and pitch the plane up by pulling on it. Rolling the plane is accomplished by turning the yoke in the direction of the desired roll, or by tilting the joystick in that direction.
Rudder pedals control rotation of the plane about the yaw axis. Two pedals pivot so that when one is pressed forward the other moves backward, and vice versa. The pilot presses on the right rudder pedal to make the plane yaw to the right, and pushes on the left pedal to make it yaw to the left. The rudder is used mainly to balance the plane in turns, or to compensate for winds or other effects that push the plane about the yaw axis.
On powered types, an engine stop control ("fuel cutoff", for example) and usually a throttle or thrust lever, along with other controls such as a fuel-mixture control (to compensate for air density changes with altitude).
Other common controls include:
Flap levers, which are used to control the deflection position of flaps on the wings.
Spoiler levers, which are used to control the position of spoilers on the wings, and to arm their automatic deployment in planes designed to deploy them upon landing. The spoilers reduce lift for landing.
Trim controls, which usually take the form of knobs or wheels, are used to adjust pitch, roll, or yaw trim. These are often connected to small airfoils on the trailing edge of the control surfaces called "trim tabs". Trim is used to reduce the control force needed to maintain a steady course.
On wheeled types, brakes are used to slow and stop the plane on the ground, and sometimes for turns on the ground.
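As noted above, here is a compact summary of the control-to-axis mapping just described; this table is an illustrative restatement, not an exhaustive or type-specific list:

```python
# Primary flight controls and the axes or systems they command.
PRIMARY_CONTROLS = {
    "yoke/joystick fore-aft": "pitch (nose up/down, via the elevators)",
    "yoke/joystick left-right": "roll (bank, via the ailerons)",
    "rudder pedals": "yaw (nose left/right, via the rudder)",
    "throttle/thrust lever": "engine power (not a rotation axis)",
}

for control, effect in PRIMARY_CONTROLS.items():
    print(f"{control:26s} -> {effect}")
```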
A craft may have two pilot seats with dual controls, allowing two to take turns.
The control system may allow full or partial automation, such as an autopilot, a wing leveler, or a flight management system. An unmanned aircraft has no pilot and is controlled remotely or via gyroscopes, computers/sensors or other forms of autonomous control.
Cockpit instrumentation
On manned fixed-wing aircraft, instruments provide information to the pilots, including flight, engines, navigation, communications, and other aircraft systems that may be installed.
The six basic instruments, sometimes referred to as the six pack, are:
The airspeed indicator (ASI) shows the speed at which the plane is moving through the air.
The attitude indicator (AI), sometimes called the artificial horizon, indicates the orientation of the aircraft about its pitch and roll axes.
The altimeter indicates the altitude of the plane above mean sea level (AMSL), derived from the static air pressure outside the aircraft (see the sketch after this list).
The vertical speed indicator (VSI), or variometer, shows the rate at which the plane is climbing or descending.
The heading indicator (HI), sometimes called the directional gyro (DG), shows the magnetic compass orientation of the fuselage; the aircraft's track over the ground additionally depends on wind, and magnetic heading differs from true heading by the local magnetic declination.
The turn coordinator (TC), or turn and bank indicator, helps the pilot to control the plane in a coordinated attitude while turning.
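To illustrate how the altimeter referenced above derives altitude from pressure, here is a sketch using the common ISA-based approximation; the formula and figures are standard assumptions, not values stated in this article:

```python
# Pressure altitude from static pressure, ISA approximation:
# h = 44330 * (1 - (p / p0) ** (1 / 5.255)) metres, with p0 the
# sea-level pressure set on the altimeter's subscale.
def pressure_altitude_m(static_hpa: float, setting_hpa: float = 1013.25) -> float:
    return 44330.0 * (1.0 - (static_hpa / setting_hpa) ** (1.0 / 5.255))

# With the standard 1013.25 hPa subscale setting, a static pressure of
# 899 hPa reads as roughly 1,000 m above mean sea level.
print(round(pressure_altitude_m(899.0)))  # 998
```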
Other cockpit instruments include:
A two-way radio, to enable communications with other planes and with air traffic control.
A horizontal situation indicator (HSI) indicates the position and movement of the plane as seen from above with respect to the ground, including course/heading and other information.
Instruments showing the status of the plane's engines (operating speed, thrust, temperature, and other variables).
Combined display systems such as primary flight displays or navigation aids.
Information displays such as onboard weather radar displays.
A radio direction finder (RDF), to indicate the direction to one or more radio beacons, which can be used to determine the plane's position.
A satellite navigation (satnav) system, to provide an accurate position.
Some or all of these instruments may appear on a computer display and be operated by touch, much like a phone.
| Technology | Aviation | null |
46818 | https://en.wikipedia.org/wiki/Surveillance%20aircraft | Surveillance aircraft | Surveillance aircraft are aircraft used for surveillance. They are primarily operated by military forces and government agencies in roles including intelligence gathering, maritime patrol, battlefield and airspace surveillance, observation (e.g. artillery spotting), and law enforcement.
Surveillance aircraft usually carry limited defensive armament, if any. They do not require high-performance capability or stealth characteristics and may be modified civilian aircraft. Surveillance aircraft have also included moored balloons (e.g. TARS) and unmanned aerial vehicles (UAVs).
Definitions
The terms "surveillance" and "reconnaissance" have sometimes been used interchangeably. In the military context, a distinction can be drawn between surveillance, which monitors a changing situation in real time, and reconnaissance, which captures a static picture for analysis. Surveillance is sometimes grouped with intelligence, target acquisition, and reconnaissance to form ISTAR.
The term observation was used when the main sensor was the human eye.
History
Pre World War I
The French were the first to adopt hydrogen-filled balloons on the battlefield for reconnaissance. In the early 1790s, the French deployed hydrogen-filled balloons holding two soldiers: one with a telescope, and another to relay information to troops on the ground. These balloons did not cross into enemy lines; they were deployed over friendly lines for the purpose of surveillance from a higher point of view. These balloons formed the first air force in 1794, the Compagnie d'Aérostiers. Also in 1794, during the Battle of Fleurus, the French Aerostatic Corps balloon L'Entreprenant remained afloat for nine hours. French officers used the balloon to observe the movements of the Austrian Army, dropping notes to the ground for collection by the French Army [2] and also signaling messages using semaphores.
This method of surveillance was eventually adopted by the Union Army in the American Civil War. American inventor Thaddeus Lowe proposed the idea to President Abraham Lincoln, and a similar system was adopted. The Union Army used balloons that could hold as many as five soldiers, who relayed information to the ground by telegraph.
In the 1880s, British meteorologist Douglas Archibald experimented with unmanned surveillance vehicles. Archibald rigged cameras to a kite and used a long cable attached to the kite's string to activate the shutter. This invention eventually caught the eye of American Army Corporal William Eddy.
During the Spanish-American War of 1898, Eddy deployed his own version of Archibald's kite-mounted camera. Eddy's kite was responsible for creating the first-ever military aerial surveillance photographs.
World War I
One of the first aircraft used for surveillance was the Rumpler Taube during World War I, when aviators like Fred Zinn evolved entirely new methods of reconnaissance and photography. The translucent wings of the plane made it very difficult for ground-based observers to detect a Taube at an altitude above 400 m. The French also called this plane "the Invisible Aircraft", and it is sometimes also referred to as the "world's very first stealth plane". German Taube aircraft were able to detect the advancing Russian army during the Battle of Tannenberg (1914).
Aircraft were initially used for reconnaissance missions. Pilots tracked the movement of enemy troops using photographs, which were used to understand enemy formations and to create maps for the infantry. By 1916, these aircraft were also assisting in the spotting of artillery and the guidance and coordination of infantry, forcing enemy troops to camouflage their positions to hide from aerial observation.
Eventually, surveillance aircraft became highly valued because of commanders' reliance on their information. However, they flew low, slow, and predictable flight paths, and with the introduction of aerial combat, surveillance aircraft became easy targets.
World War II
Pre-war, the British built and flew two Fleet Shadower aircraft, including the General Aircraft Fleet Shadower, that could follow and observe the enemy fleet at a distance. However, they were made obsolete by the 1940s with the introduction of airborne radar.
Air observation posts were developed during World War II. Light aircraft such as the Auster were used by the British Royal Artillery for artillery spotting. By the mid-1960s, air observation was generally taken over by light observation helicopters.
Cold War
Spy flights were a source of major contention between the United States and the Soviet Union during most of the 1960s. Because of the difficulty of conducting surveillance over the USSR, US policymakers established the National Reconnaissance Office, and the US military developed the U-2. This aircraft could fly at altitudes of 70,000 feet to evade Soviet detection, and was equipped with a Hycon 73B camera capable of capturing details as small as 2.5 feet wide. In 1962, a U-2 captured images that revealed nuclear missiles in Cuba; these photos precipitated what is now known as the Cuban Missile Crisis.
Aerial reconnaissance was dangerous: of the 152 cryptologists who died during the Cold War, 64 were participating in aerial reconnaissance missions. Between 1945 and 1977, more than forty reconnaissance aircraft were shot down in the European and Pacific theaters.
The US military originally used standard aircraft such as the B-29 for reconnaissance missions. Eventually, dedicated reconnaissance variants were designed, such as the RC-130 version of the C-130. These repurposed aircraft were sometimes referred to as “ferret” aircraft, and the intelligence personnel crewing them were nicknamed “backenders”.
The United States also performed surveillance using repurposed Ryan Firebee target drones. Variants of these vehicles, designated the Model 147, could fly for 2,500 miles.
In May 1991, the Department of the Navy reported that at least one UAV was airborne at all times during Operation Desert Storm.
War on terror
During the global war on terror, the US military developed defenses against hostile surveillance aircraft, including precision cameras, drones that detect other drones, and directed-energy weapons that disrupt control links and GPS navigation.
Roles
Maritime patrol
The main components of maritime surveillance have traditionally been sightings from ship captains and aircraft pilots. Because of the radar horizon, surveillance aircraft are preferred, as they can identify targets hundreds of miles farther away than surface vessels can (a standard approximation is sketched below). An example of this today is the Coast Guard's use of unmanned aerial systems (UASs) to improve its capabilities while reducing the risk to service members; the Coast Guard currently has roughly 250 drone-certified officers across the US. The main uses of UASs in maritime operations are search-and-rescue operations and responding to environmental disasters. The Coast Guard's use of unmanned drones led it to create an “Unmanned Systems Strategic Plan”, which would expand the use of current aerial surveillance systems to new challenges such as drug-trafficking surveillance, migrant interdiction, and ice operations. With regard to environmental tasks, UASs will be expanded to address marine safety, fishing activity, and navigational uses. The Coast Guard outlines the future of aerial surveillance in maritime patrol as improving current UAS systems, integrating improved sensors and AI/ML, and creating more organized command-and-control plans and operations.
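The radar-horizon advantage mentioned above can be approximated with the standard 4/3-earth formula, with distance in kilometres and antenna height in metres; the heights below are illustrative assumptions:

```python
import math

# Radar horizon with standard 4/3-earth refraction:
# d_km ~= 4.12 * sqrt(h_m).
def radar_horizon_km(antenna_height_m: float) -> float:
    return 4.12 * math.sqrt(antenna_height_m)

# A shipboard radar mast at 20 m sees surface targets out to ~18 km,
# while a patrol aircraft at 9,000 m pushes the horizon past 390 km --
# the reason aircraft are preferred for wide-area maritime search.
print(round(radar_horizon_km(20)))    # 18
print(round(radar_horizon_km(9000)))  # 391
```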
Maritime patrol aircraft are typically large, slow machines capable of flying continuously for many hours, with a wide range of sensors. Such aircraft include the Hawker-Siddeley Nimrod, the Breguet Atlantique, the Tupolev Tu-95, the Lockheed P-2 Neptune and the Lockheed P-3 Orion/CP-140 Aurora. Smaller ship-launched observation seaplanes were used from World War I through World War II.
Law enforcement
Unmanned aircraft systems (UAS) are being increasingly deployed by U.S. law enforcement agencies. In August 2023, a Congressional Research Service report to members of Congress described the multiple uses of these aircraft, including general surveillance and intelligence or evidence gathering. Unmanned surveillance drones can also be used to identify the locations of suspects who may be hiding, or to analyze the physical layout of a room before officers enter. Furthermore, they can be used by law enforcement to light up large areas that are dark and difficult to illuminate by traditional means. Few federal laws apply to the use of unmanned surveillance systems; the Federal Aviation Administration (FAA) currently imposes two main conditions on the use of this technology by law enforcement. First, agencies may only operate the aircraft below 400 feet and must maintain visual contact with them. Second, operators must receive specific licenses and certifications. In response to the few and vague laws, the Department of Justice (DOJ) and the Department of Homeland Security (DHS) have created policies to regulate the use and deployment of these drones domestically.
Predator UAVs have been used by the US for border patrol.
Battlefield and airspace surveillance
Current military applications
Unmanned Aerial Vehicle (UAV) surveillance aircraft have been "deployed or are under development in many countries, including Israel, Iran, the UK, the United States, Canada, China, India, South Africa and Pakistan." Most air forces around the world lack dedicated surveillance planes.
Several countries adapt aircraft for electronic intelligence (ELINT) gathering. The Beech RC-12 Super King Air and Boeing RC-135 Rivet Joint are examples of this activity.
Unmanned surveillance UAVs include both airships—such as Sky Sentinel and HiSentinel 80—and airplanes.
South China Sea
The United States military has flown reconnaissance flights, called sensitive reconnaissance operations (SRO) by the U.S. Air Force, to monitor expansionist developments by the People’s Republic of China, North Korea, and Russia in the Indo-Pacific region for decades; however, recent operations in the region have focused on monitoring movements by the People’s Republic of China. More than ten different aircraft are used for SRO missions in the theater, including the manned USAF RC-135 Rivet Joint and U-2 Dragon Lady and the unmanned RQ-4 Global Hawk. Reconnaissance aircraft can change course within minutes to monitor activity, and are therefore used more often than satellites, which can take hours or days to change position and are vulnerable to anti-satellite weapons.
Russian invasion of Ukraine
Small unmanned drones have been used by the Ukrainian military to identify enemy units and direct artillery fire for safer and more efficient attacks on Russian targets, to record propaganda videos of ambushes for posting on social media, and to document alleged Russian war crimes and damage. Class I and Class III drone systems, classified by NATO as those of less than 150 kilograms and more than 600 kilograms, respectively, have been the most frequently used in the region. Turkish Bayraktar TB2 military drones have often been utilized by Ukraine in both reconnaissance and strike missions, and both the Ukrainian and Russian militaries have used hobby drones donated by civilians, such as DJI Mavic mini drones, to conduct surveillance and strikes on enemy troops.
Israel-Hamas War
The United States military had flown MQ-9 Reapers, unmanned aerial vehicles capable of more than 20 consecutive hours of flight, over the Gaza Strip for at least a month after the surprise attack on Israel by Hamas on October 7, 2023. According to the U.S. Defense Department, the flights collected surveillance with the purpose of locating hostages taken by Hamas during the attack and finding signs of life, but did not aid Israeli military ground operations. The British military also carried out flights over Gaza to locate hostages, initially using unarmed Shadow R1 aircraft. As of March 2024, the Israeli military had conducted hundreds of flight hours and almost 100 sorties in Gaza using the Oron reconnaissance aircraft, a former business jet upgraded with advanced sensors and defense systems.
Israel-Hezbollah Conflict
On June 18, 2024, Hezbollah released drone footage capturing sensitive sites in northern Israel, including military complexes and naval bases around Haifa. This action showcased areas such as the Rafael Military Industries Complex and various naval facilities. Hezbollah's campaign aims to intimidate and threaten Israel by displaying its surveillance capabilities and asserting its ability to penetrate Israeli defenses. This act highlights Hezbollah's growing technological and operational threats against Israel's security.
Business aircraft
With smaller equipment, long-range business aircraft can be modified into surveillance aircraft to perform specialized missions cost-effectively, from ground surveillance to maritime patrol:
the 6,000 nmi Bombardier Global 6000 is the platform for the USAF Northrop Grumman E-11A Battlefield Airborne Communications Node, the radar-carrying ground-surveillance Raytheon Sentinel for the UK Royal Air Force, and Saab's GlobalEye AEW&C carrying its Erieye AESA radar, and serves as UK firm Marshall ADG's basis for Elint/Sigint aircraft for the United Arab Emirates; it is also the base for the proposed Saab AB Swordfish MPA and the USAF Lockheed Martin J-Stars Recap battlefield-surveillance program, while IAI's ELI-3360 MPA is based on the Global 5000;
the 6,750 nmi Gulfstream G550 was selected for the IAI EL/W-2085 Conformal Airborne Early Warning AESA radar for Italy, Singapore and Israel (which also has IAI Sigint G550s), while L3 Technologies is transferring the U.S. Compass Call electronic-attack system to the G550 CAEW-based EC-37B, like the NC-37B range-support aircraft, and will modify others for Australia's program; Northrop Grumman proposes the G550 for the J-Stars Recap;
Dassault Aviation developed the Falcon 900 MPA and Falcon 2000 Maritime Multirole Aircraft for France (which delayed its Avsimar requirement), South Korea and the Japan Coast Guard with a mission system developed with L3 and Thales Group;
Embraer delivered several EMB-145s as a platform for AEW&C, MPA and multi-intelligence;
the Beechcraft King Air 350ER is a platform for ISR versions, including L3's Spyder II and Sierra Nevada Corp.'s Scorpion and as the MC-12W for the U.S. Army.
Current civilian applications
Drones are increasingly used in conservation work to complete tasks such as mapping forest cover, tracking wildlife, and enforcing environmental laws by catching illegal loggers or poachers.
Monitoring protests
Surveillance drones, helicopters, and airplanes were deployed over 15 cities during the 2020 George Floyd protests. Unmanned aircraft were used to track the movements of protestors and to provide aerial views of violent acts and arson. The recorded video was sent to a digital network that could be accessed by various federal agencies and local law enforcement for use in criminal investigations. However, the National Air Security Operations Center stated the drones flew at a height that made it impossible to identify individuals or license plates.
Border patrol
Surveillance aircraft have recently been used to patrol maritime borders, which are much longer than land borders and are typically covered by fewer personnel. Schengen Area countries in the European Union have recently used them to monitor the area's southern border in the Mediterranean. They gather intelligence on illegal crossings, smuggling, and fishing activity, and support search and rescue operations. Belgium has also deployed drones to monitor irregular maritime activity and to find children lost on the beach.
Ethics and regulations
Public opinion
A 2014 survey from the Pew Research Center showed that pluralities or majorities of people in 39 of 44 countries oppose American drone strikes in the Middle East; only in Israel, Kenya, and the USA do at least half of the public support them. Additionally, following the Edward Snowden disclosures, concern within the US has been growing about whether the government respects people’s privacy and civil liberties. Regarding the domestic use of surveillance drones in the US, the public tends to weigh the benefits of this kind of surveillance against the risks to individual privacy. Findings from an ethical analysis suggest people understand the benefits UAVs contribute to protecting the public while also recognizing the risk they pose to individual safety. A report from 2014 found that 70%-73% of U.S. adults believed government use of surveillance drones was “excessive” and “violates personal privacy,” while only 39% believed it “increased public safety” and only 10% believed it was “necessary” for surveillance. Furthermore, the public is more opposed to surveillance drones in the hands of private individuals and businesses than in those of the government.
Applicable law
In the U.S., case law holds that airborne surveillance does not violate privacy rights protected under the Fourth Amendment of the Constitution so long as the technology used is in "general public use". Because unmanned aircraft systems are not yet in general public use, individuals retain reasonable expectations of privacy against this type of surveillance.
In the European Union, Article 7 of the Charter of Fundamental Rights of the European Union 2000 provides that people have a right to privacy, and Article 8 protects the right to one's personal data. Under these provisions, aerial surveillance of public spaces would be lawful, but surveillance of one's private home would be subject to administrative oversight.
The Regulation of Investigatory Powers Act (RIPA) of 2000 applies to air surveillance in the United Kingdom. RIPA prohibits large-scale and generalized surveillance, and RIPA authorization is required for individualized surveillance of private residences.
| Technology | Military aviation | null |
46828 | https://en.wikipedia.org/wiki/Fertilisation | Fertilisation | Fertilisation or fertilization (see spelling differences), also known as generative fertilisation, syngamy and impregnation, is the fusion of gametes to give rise to a zygote and initiate its development into a new individual organism or offspring. While processes such as insemination or pollination, which happen before the fusion of gametes, are also sometimes informally referred to as fertilisation, these are technically separate processes. The cycle of fertilisation and development of new individuals is called sexual reproduction. During double fertilisation in angiosperms, the haploid male gamete combines with two haploid polar nuclei to form a triploid primary endosperm nucleus by the process of vegetative fertilisation.
History
In antiquity, Aristotle conceived of the formation of new individuals through the fusion of male and female fluids, with form and function emerging gradually, in a mode he called epigenetic.
In 1784, Spallanzani established the need of interaction between the female's ovum and male's sperm to form a zygote in frogs. In 1827, Karl Ernst von Baer observed a therian mammalian egg for the first time. Oscar Hertwig (1876), in Germany, described the fusion of nuclei of spermatozoa and of ova from sea urchin.
Evolution
The evolution of fertilisation is related to the origin of meiosis, as both are part of sexual reproduction, which originated in eukaryotes. One hypothesis states that meiosis originated from mitosis.
Fertilisation in plants
The gametes that participate in fertilisation of plants are the sperm (male) and the egg (female) cell. Various plant groups have differing methods by which the gametes produced by the male and female gametophytes come together and are fertilised. In bryophytes and pteridophytic land plants, fertilisation of the sperm and egg takes place within the archegonium. In seed plants, the male gametophyte is formed within a pollen grain. After pollination, the pollen grain germinates, and a pollen tube grows and penetrates the ovule through a tiny pore called a micropyle. The sperm are transferred from the pollen through the pollen tube to the ovule where the egg is fertilised. In flowering plants, two sperm cells are released from the pollen tube, and a second fertilisation event occurs involving the second sperm cell and the central cell of the ovule, which is a second female gamete.
Pollen tube growth
Unlike animal sperm, which is motile, the sperm of most seed plants is immotile and relies on the pollen tube to carry it to the ovule, where the sperm is released. The pollen tube penetrates the stigma and elongates through the extracellular matrix of the style before reaching the ovary. Then, near the receptacle, it breaks into the ovule through the micropyle (an opening in the ovule wall) and the pollen tube "bursts" into the embryo sac, releasing sperm. The growth of the pollen tube has long been believed to depend on chemical cues from the pistil; however, these mechanisms were poorly understood until 1995. Work done on tobacco plants revealed a family of glycoproteins called TTS proteins that enhance the growth of pollen tubes. Pollen tubes grew in both a sugar-free pollen germination medium and a medium with purified TTS proteins; however, in the TTS medium the tubes grew at about three times the rate seen in the sugar-free medium. TTS proteins were also placed at various locations on semi-in vivo pollinated pistils, and pollen tubes were observed to immediately extend toward the proteins. Transgenic plants unable to produce TTS proteins exhibited slower pollen tube growth and reduced fertility.
Rupture of pollen tube
The rupture of the pollen tube to release sperm in Arabidopsis has been shown to depend on a signal from the female gametophyte. Specific proteins called FER protein kinases present in the ovule control the production of highly reactive derivatives of oxygen called reactive oxygen species (ROS). ROS levels have been shown via GFP to be at their highest during the floral stages when the ovule is most receptive to pollen tubes, and lowest during development and following fertilisation. High amounts of ROS activate calcium ion channels in the pollen tube, causing these channels to take up calcium ions in large amounts. This increased uptake of calcium causes the pollen tube to rupture and release its sperm into the ovule. In pistil feeding assays, plants fed diphenyl iodonium chloride (DPI) showed suppressed ROS concentrations in Arabidopsis, which in turn prevented pollen tube rupture.
Flowering plants
After being fertilised, the ovary starts to swell and develop into the fruit. With multi-seeded fruits, multiple grains of pollen are necessary for syngamy with each ovule. The growth of the pollen tube is controlled by the vegetative (or tube) cytoplasm. Hydrolytic enzymes are secreted by the pollen tube that digest the female tissue as the tube grows down the stigma and style; the digested tissue is used as a nutrient source for the pollen tube as it grows. During pollen tube growth towards the ovary, the generative nucleus divides to produce two separate sperm nuclei (haploid number of chromosomes) – a growing pollen tube therefore contains three separate nuclei, two sperm and one tube. The sperm cells are interconnected and dimorphic; in a number of plants the larger one is also linked to the tube nucleus, and the interconnected sperm cells and the tube nucleus form the "male germ unit".
Double fertilisation is the process in angiosperms (flowering plants) in which two sperm from each pollen tube fertilise two cells in a female gametophyte (sometimes called an embryo sac) that is inside an ovule. After the pollen tube enters the gametophyte, the pollen tube nucleus disintegrates and the two sperm cells are released; one of the two sperm cells fertilises the egg cell (at the bottom of the gametophyte near the micropyle), forming a diploid (2n) zygote. This is the point when fertilisation actually occurs; pollination and fertilisation are two separate processes. The nucleus of the other sperm cell fuses with two haploid polar nuclei (contained in the central cell) in the centre of the gametophyte. The resulting cell is triploid (3n). This triploid cell divides through mitosis and forms the endosperm, a nutrient-rich tissue, inside the seed. The two central-cell maternal nuclei (polar nuclei) that contribute to the endosperm arise by mitosis from the single meiotic product that also gave rise to the egg. Therefore, maternal contribution to the genetic constitution of the triploid endosperm is double that of the embryo.
One primitive species of flowering plant, Nuphar polysepala, has endosperm that is diploid, resulting from the fusion of a sperm with one, rather than two, maternal nuclei. It is believed that early in the development of angiosperm lineages, there was a duplication in this mode of reproduction, producing seven-celled/eight-nucleate female gametophytes, and triploid endosperms with a 2:1 maternal to paternal genome ratio.
In many plants, the development of the flesh of the fruit is proportional to the percentage of fertilised ovules. For example, with watermelon, about a thousand grains of pollen must be delivered and spread evenly on the three lobes of the stigma to make a normal sized and shaped fruit.
Self-pollination and outcrossing
Outcrossing, or cross-fertilisation, and self-fertilisation represent different strategies with differing benefits and costs. An estimated 48.7% of plant species are either dioecious or self-incompatible obligate outcrossers. It is also estimated that about 42% of flowering plants exhibit a mixed mating system in nature.
In the most common kind of mixed mating system, individual plants produce a single type of flower and fruits may contain self-fertilised, outcrossed or a mixture of progeny types. The transition from cross-fertilisation to self-fertilisation is the most common evolutionary transition in plants, and has occurred repeatedly in many independent lineages. About 10-15% of flowering plants are predominantly self-fertilising.
Under circumstances where pollinators or mates are rare, self-fertilisation offers the advantage of reproductive assurance. Self-fertilisation can therefore result in improved colonisation ability. In some species, self-fertilisation has persisted over many generations. Capsella rubella is a self-fertilising species that became self-compatible 50,000 to 100,000 years ago. Arabidopsis thaliana is a predominantly self-fertilising plant with an out-crossing rate in the wild of less than 0.3%; a study suggested that self-fertilisation evolved roughly a million years ago or more in A. thaliana. In long-established self-fertilising plants, the masking of deleterious mutations and the production of genetic variability is infrequent and thus unlikely to provide a sufficient benefit over many generations to maintain the meiotic apparatus. Consequently, one might expect self-fertilisation to be replaced in nature by an ameiotic asexual form of reproduction that would be less costly. However the actual persistence of meiosis and self-fertilisation as a form of reproduction in long-established self-fertilising plants may be related to the immediate benefit of efficient recombinational repair of DNA damage during formation of germ cells provided by meiosis at each generation.
Fertilisation in animals
The mechanics behind fertilisation have been studied extensively in sea urchins and mice. This research addresses the question of how the sperm and the appropriate egg find each other, and the question of how only one sperm gets into the egg and delivers its contents. There are three steps to fertilisation that ensure species-specificity:
Chemotaxis
Sperm activation/acrosomal reaction
Sperm/egg adhesion
Internal vs. external
Whether an animal (more specifically, a vertebrate) uses internal or external fertilisation often depends on the method of birth. Oviparous animals laying eggs with thick calcium shells, such as chickens, or with thick leathery shells generally reproduce via internal fertilisation, so that the sperm fertilises the egg without having to pass through the thick, protective, tertiary layer of the egg. Ovoviviparous and viviparous animals also use internal fertilisation. Although some organisms reproduce via amplexus, they may still use internal fertilisation, as with some salamanders. Advantages of internal fertilisation include minimal waste of gametes, a greater chance of individual egg fertilisation, a longer period of egg protection, and selective fertilisation. Many females have the ability to store sperm for extended periods of time and can fertilise their eggs at a time of their choosing.
Oviparous animals producing eggs with thin tertiary membranes or no membranes at all, on the other hand, use external fertilisation methods. Such animals may be more precisely termed ovuliparous. External fertilisation is advantageous in that it minimises contact between individuals (which decreases the risk of disease transmission) and produces greater genetic variation.
Sea urchins
Sperm find the eggs via chemotaxis, a type of ligand/receptor interaction. Resact is a 14-amino-acid peptide purified from the jelly coat of A. punctulata that attracts the migration of sperm.
After finding the egg, the sperm penetrates the jelly coat through a process called sperm activation. In another ligand/receptor interaction, an oligosaccharide component of the egg binds and activates a receptor on the sperm and causes the acrosomal reaction. The acrosomal vesicles of the sperm fuse with the plasma membrane and are released. In this process, molecules bound to the acrosomal vesicle membrane, such as bindin, are exposed on the surface of the sperm. These contents digest the jelly coat and eventually the vitelline membrane. In addition to the release of acrosomal vesicles, there is explosive polymerisation of actin to form a thin spike at the head of the sperm called the acrosomal process.
The sperm binds to the egg through another ligand reaction between receptors on the vitelline membrane. The sperm surface protein bindin, binds to a receptor on the vitelline membrane identified as EBR1.
Fusion of the plasma membranes of the sperm and egg are likely mediated by bindin. At the site of contact, fusion causes the formation of a fertilisation cone.
Mammals
Male mammals internally fertilise females and ejaculate semen through the penis during copulation. After ejaculation, many sperm move to the upper vagina (via contractions from the vagina) through the cervix and across the length of the uterus to meet the ovum. In cases where fertilisation occurs, the female usually ovulates during a period that extends from hours before copulation to a few days after; therefore, in most mammals, it is more common for ejaculation to precede ovulation than vice versa.
When sperm are deposited into the anterior vagina, they are not capable of fertilisation (i.e., non-capacitated) and are characterised by slow linear motility patterns. This motility, combined with muscular contractions, enables sperm transport towards the uterus and oviducts. There is a pH gradient within the micro-environment of the female reproductive tract such that the pH near the vaginal opening is lower (approximately 5) than that of the oviducts (approximately 8). The sperm-specific pH-sensitive calcium transport protein called CatSper increases the sperm cell's permeability to calcium as it moves further into the reproductive tract. Intracellular calcium influx contributes to sperm capacitation and hyperactivation, causing a more vigorous and rapid non-linear motility pattern as sperm approach the oocyte. The capacitated spermatozoon and the oocyte meet and interact in the ampulla of the fallopian tube. Rheotaxis, thermotaxis and chemotaxis are known mechanisms that guide sperm towards the egg during the final stage of sperm migration. Spermatozoa respond (see Sperm thermotaxis) to the temperature gradient of ~2 °C between the oviduct and the ampulla, and chemotactic gradients of progesterone have been confirmed as the signal emanating from the cumulus oophorus cells surrounding rabbit and human oocytes. Capacitated and hyperactivated sperm respond to these gradients by changing their behaviour and moving towards the cumulus-oocyte complex. Other chemotactic signals such as formyl Met-Leu-Phe (fMLF) may also guide spermatozoa.
The zona pellucida, a thick layer of extracellular matrix that surrounds the egg and is analogous in role to the vitelline membrane in sea urchins, binds the sperm. Unlike in sea urchins, the sperm binds to the egg before the acrosomal reaction. ZP3, a glycoprotein in the zona pellucida, is responsible for egg/sperm adhesion in humans. The receptor galactosyltransferase (GalT) binds to the N-acetylglucosamine residues on ZP3 and is important for binding with the sperm and activating the acrosome reaction. ZP3 is sufficient, though not strictly necessary, for sperm/egg binding. Two additional sperm receptors exist: a 250kD protein that binds to an oviduct-secreted protein, and SED1, which independently binds to the zona. After the acrosome reaction, the sperm is believed to remain bound to the zona pellucida through exposed ZP2 receptors. These receptors are unknown in mice but have been identified in guinea pigs.
In mammals, the binding of the spermatozoon to the GalT initiates the acrosome reaction. This process releases hyaluronidase, which digests the matrix of hyaluronic acid in the vestments around the oocyte. Additionally, heparin-like glycosaminoglycans (GAGs) are released near the oocyte that promote the acrosome reaction. Fusion between the oocyte plasma membrane and the sperm follows, allowing the sperm nucleus, the typical centriole, and the atypical centriole that is attached to the flagellum, but not the mitochondria, to enter the oocyte. The protein CD9 likely mediates this fusion in mice (the binding homolog). The egg "activates" itself upon fusing with a single sperm cell and thereby changes its cell membrane to prevent fusion with other sperm. Zinc atoms are released during this activation.
This process ultimately leads to the formation of a diploid cell called a zygote. The zygote divides to form a blastocyst and, upon entering the uterus, implants in the endometrium, beginning pregnancy. Embryonic implantation not in the uterine wall results in an ectopic pregnancy that can kill the mother.
In such animals as rabbits, coitus induces ovulation by stimulating the release of the pituitary hormone gonadotropin; this release greatly increases the likelihood of pregnancy.
Humans
Fertilisation in humans is the union of a human egg and sperm, usually occurring in the ampulla of the fallopian tube, producing a single-celled zygote, the first stage of life in the development of a genetically unique organism, and initiating embryonic development. Scientists discovered the dynamics of human fertilisation in the nineteenth century.
The term conception commonly refers to "the process of becoming pregnant involving fertilisation or implantation or both". Its use makes it a subject of semantic arguments about the beginning of pregnancy, typically in the context of the abortion debate.
Upon gastrulation, which occurs around 16 days after fertilisation, the implanted blastocyst develops three germ layers, the endoderm, the ectoderm and the mesoderm, and the genetic code of the father becomes fully involved in the development of the embryo; later twinning is impossible. Additionally, interspecies hybrids survive only until gastrulation and cannot further develop.
However, some human developmental biology literature refers to the conceptus and such medical literature refers to the "products of conception" as the post-implantation embryo and its surrounding membranes. The term "conception" is not usually used in scientific literature because of its variable definition and connotation.
Insects
Insects in different groups, including the Odonata (dragonflies and damselflies) and the Hymenoptera (ants, bees, and wasps) practise delayed fertilisation. Among the Odonata, females may mate with multiple males, and store sperm until the eggs are laid. The male may hover above the female during egg-laying (oviposition) to prevent her from mating with other males and replacing his sperm; in some groups such as the darters, the male continues to grasp the female with his claspers during egg-laying, the pair flying around in tandem. Among social Hymenoptera, honeybee queens mate only on mating flights, in a short period lasting some days; a queen may mate with eight or more drones. She then stores the sperm for the rest of her life, perhaps for five years or more.
Fertilisation in fungi
In many fungi (except chytrids), as in some protists, fertilisation is a two-step process. First, the cytoplasms of the two gamete cells fuse (a step called plasmogamy), producing a dikaryotic or heterokaryotic cell with multiple nuclei. This cell may then divide to produce dikaryotic or heterokaryotic hyphae. The second step of fertilisation is karyogamy, the fusion of the nuclei to form a diploid zygote.
In chytrid fungi, fertilisation occurs in a single step with the fusion of gametes, as in animals and plants.
Fertilisation in protists
Fertilisation in protozoa
There are three types of fertilisation processes in protozoa:
gametogamy;
autogamy;
gamontogamy.
Fertilisation in algae
Algae, like some land plants, undergo alternation of generations. Some algae are isomorphic, where both the sporophyte (2n) and gametophyte (n) are morphologically the same. When algal reproduction is described as oogamous, the male and female gametes differ morphologically: the female gamete is a large non-motile egg, while the male gametes are uniflagellate (motile). Via the process of syngamy, these form a new zygote, regenerating the sporophyte generation again.
Fertilisation and genetic recombination
Meiosis results in a random segregation of the genes that each parent contributes. Each parent organism is usually identical save for a fraction of their genes; each gamete is therefore genetically unique. At fertilisation, parental chromosomes combine. In humans, (2²²)² ≈ 17.6×10¹² chromosomally different zygotes are possible for the non-sex chromosomes, even assuming no chromosomal crossover. If crossover occurs once, then on average (4²²)² ≈ 309×10²⁴ genetically different zygotes are possible for every couple, not considering that crossover events can take place at most points along each chromosome. The X and Y chromosomes undergo no crossover events and are therefore excluded from the calculation. The mitochondrial DNA is only inherited from the maternal parent.
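To make the combinatorics above concrete, here is a minimal Python sketch (illustrative only; the chromosome count follows the figures in the text, which exclude the sex chromosomes):

```python
# Each of the 22 non-sex chromosome pairs assorts independently in a gamete.
AUTOSOME_PAIRS = 22

# Without crossover: each gamete has 2**22 possible chromosome combinations,
# and a zygote combines one gamete from each parent.
zygotes_no_crossover = (2 ** AUTOSOME_PAIRS) ** 2
print(f"{zygotes_no_crossover:.3e}")  # ~1.759e+13, i.e. 17.6 x 10^12

# With one crossover per chromosome, each chromosome can take roughly
# four forms, giving (4**22)**2 possibilities per couple on average.
zygotes_one_crossover = (4 ** AUTOSOME_PAIRS) ** 2
print(f"{zygotes_one_crossover:.3e}")  # ~3.095e+26, i.e. 309 x 10^24
```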
The sperm aster and zygote centrosomes
Shortly after the sperm fuses with the egg, the two sperm centrioles form the embryo's first centrosome and microtubule aster. The sperm centriole, found near the male pronucleus, recruits the egg's pericentriolar material proteins, forming the zygote's first centrosome. This centrosome nucleates microtubules in the shape of stars, called astral microtubules. The microtubules span the whole volume of the egg, allowing the female pronucleus to travel along them to reach the male pronucleus. As the male and female pronuclei approach each other, the single centrosome splits into two centrosomes located in the interface between the pronuclei. The centrosomes then, via the astral microtubules, polarise the genome inside the pronuclei.
Parthenogenesis
Organisms that normally reproduce sexually can also reproduce via parthenogenesis, wherein an unfertilised female gamete produces viable offspring. These offspring may be clones of the mother, or in some cases may genetically differ from her while inheriting only part of her DNA. Parthenogenesis occurs in many plants and animals and may be induced in others through a chemical or electrical stimulus to the egg cell. In 2004, Japanese researchers led by Tomohiro Kono succeeded after 457 attempts in merging the ova of two mice by blocking certain proteins that would normally prevent the possibility; the resulting embryo developed normally into a mouse.
Allogamy and autogamy
Allogamy, which is also known as cross-fertilisation, refers to the fertilisation of an egg cell from one individual with the male gamete of another.
Autogamy, which is also known as self-fertilisation, occurs in such hermaphroditic organisms as plants and flatworms; therein, two gametes from one individual fuse.
Other variants of bisexual reproduction
Some relatively unusual forms of reproduction are:
Gynogenesis: A sperm stimulates the egg to develop without fertilisation or syngamy. The sperm may enter the egg.
Hybridogenesis: One genome is eliminated to produce haploid eggs.
Canina meiosis (sometimes called "permanent odd polyploidy"): one genome is transmitted in the Mendelian fashion, while others are transmitted clonally.
Benefits of cross-fertilisation
The major benefit of cross-fertilisation is generally thought to be the avoidance of inbreeding depression. Charles Darwin, in his 1876 book The Effects of Cross and Self Fertilisation in the Vegetable Kingdom (pages 466-467) summed up his findings in the following way.
"It has been shown in the present volume that the offspring from the union of two distinct individuals, especially if their progenitors have been subjected to very different conditions, have an immense advantage in height, weight, constitutional vigour and fertility over the self-fertilised offspring from one of the same parents. And this fact is amply sufficient to account for the development of the sexual elements, that is, for the genesis of the two sexes."
In addition, it is thought by some, that a long-term advantage of out-crossing in nature is increased genetic variability that promotes adaptation or avoidance of extinction (see Genetic variability).
| Biology and health sciences | Health and fitness | null |
46890 | https://en.wikipedia.org/wiki/Frequency-hopping%20spread%20spectrum | Frequency-hopping spread spectrum | Frequency-hopping spread spectrum (FHSS) is a method of transmitting radio signals by rapidly changing the carrier frequency among many frequencies occupying a large spectral band. The changes are controlled by a code known to both transmitter and receiver. FHSS is used to avoid interference, to prevent eavesdropping, and to enable code-division multiple access (CDMA) communications.
The frequency band is divided into smaller sub-bands. Signals rapidly change ("hop") their carrier frequencies among the center frequencies of these sub-bands in a determined order. Interference at a specific frequency will affect the signal only during a short interval.
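As an illustration of this shared hop code, the sketch below derives the same pseudorandom channel order at both ends from a common key. It is a toy example, not any particular radio standard: the key value, channel count, and use of Python's random module are all invented for the demonstration.

```python
import random

def hop_sequence(shared_key: int, num_channels: int, length: int) -> list[int]:
    """Derive a pseudorandom hop order from a key known to both ends.
    Real systems use standardized sequence generators, not Python's PRNG."""
    rng = random.Random(shared_key)
    return [rng.randrange(num_channels) for _ in range(length)]

# Transmitter and receiver compute identical sequences from the same key,
# so the receiver knows which sub-band carries each hop without signaling.
tx_hops = hop_sequence(shared_key=0xC0FFEE, num_channels=79, length=10)
rx_hops = hop_sequence(shared_key=0xC0FFEE, num_channels=79, length=10)
assert tx_hops == rx_hops
```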
FHSS offers four main advantages over a fixed-frequency transmission:
FHSS signals are highly resistant to narrowband interference because the signal hops to a different frequency band.
Signals are difficult to intercept if the frequency-hopping pattern is not known.
Jamming is also difficult if the hopping pattern is unknown; an adversary without the spreading sequence can jam the signal for only a single hopping period at a time.
FHSS transmissions can share a frequency band with many types of conventional transmissions with minimal mutual interference. FHSS signals add minimal interference to narrowband communications, and vice versa.
Usage
Military
Spread-spectrum signals are highly resistant to deliberate jamming unless the adversary has knowledge of the frequency-hopping pattern. Military radios generate the frequency-hopping pattern under the control of a secret Transmission Security Key (TRANSEC) that the sender and receiver share in advance. This key is generated by devices such as the KY-57 Speech Security Equipment. United States military radios that use frequency hopping include the JTIDS/MIDS family (Link-16), the HAVE QUICK aeronautical mobile communications system, and the SINCGARS combat net radio.
Civilian
In the US, since the Federal Communications Commission (FCC) amended its rules to allow FHSS systems in the unregulated 2.4 GHz band, many consumer devices in that band have employed various FHSS modes. FCC CFR 47 part 15.247 covers the US regulations for the 902–928 MHz, 2400–2483.5 MHz, and 5725–5850 MHz bands, including the requirements for frequency hopping.
Some walkie-talkies that employ FHSS technology have been developed for unlicensed use on the 900 MHz band. FHSS technology is also used in many hobby transmitters and receivers used for radio-controlled model cars, airplanes, and drones. A type of multiple access is achieved allowing hundreds of transmitter/receiver pairs to be operated simultaneously on the same band, in contrast to previous FM or AM radio-controlled systems that had limited simultaneous channels.
Technical considerations
The overall bandwidth required for frequency hopping is much wider than that required to transmit the same information using only one carrier frequency. However, because transmission occurs on only a small portion of this bandwidth at any given time, the effective interference bandwidth is the same as that of a single-carrier system. While providing no extra protection against wideband thermal noise, the frequency-hopping approach does reduce the degradation caused by narrowband interference sources.
One of the challenges of frequency-hopping systems is to synchronize the transmitter and receiver. One approach is to have a guarantee that the transmitter will use all the channels in a fixed period of time. The receiver can then find the transmitter by picking a random channel and listening for valid data on that channel. The transmitter's data is identified by a special sequence of data that is unlikely to occur over the segment of data for this channel, and the segment can also have a checksum for integrity checking and further identification. The transmitter and receiver can use fixed tables of frequency-hopping patterns, so that once synchronized they can maintain communication by following the table.
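A minimal sketch of the scan-and-listen acquisition idea just described, with all names, the framing, and the checksum scheme invented for illustration: the receiver parks on one channel and waits for the transmitter, which is guaranteed to visit every channel within a fixed period.

```python
SYNC_WORD = b"\x7e\x81"  # invented marker unlikely to appear in payload data

def receiver_acquire(parked_channel: int, hops: list[int], frames: list[bytes]):
    """Listen on one channel until the transmitter hops onto it and sends a
    frame carrying the sync word and a valid one-byte checksum; return the
    hop index at which synchronization was achieved, or None."""
    for hop_index, channel in enumerate(hops):
        if channel != parked_channel:
            continue  # transmitter is on another sub-band during this dwell
        frame = frames[hop_index]
        if frame.startswith(SYNC_WORD) and sum(frame[:-1]) % 256 == frame[-1]:
            return hop_index  # from here, follow the shared hop table
    return None

# Toy usage: the transmitter reaches the parked channel (3) on its fourth hop.
hops = [17, 42, 8, 3, 61]
good_frame = SYNC_WORD + bytes([sum(SYNC_WORD) % 256])
frames = [b"", b"", b"", good_frame, b""]
assert receiver_acquire(3, hops, frames) == 3
```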
In the US, FCC part 15 on unlicensed spread spectrum systems in the 902–928 MHz and 2.4 GHz bands permits more power than is allowed for non-spread-spectrum systems. Both FHSS and direct-sequence spread-spectrum (DSSS) systems can transmit at 1 watt, a thousandfold increase from the 1 milliwatt limit on non-spread-spectrum systems. The FCC also prescribes a minimum number of frequency channels and a maximum dwell time for each channel.
Origins
In 1899, Guglielmo Marconi experimented with frequency-selective reception in an attempt to minimise interference.
The earliest mentions of frequency hopping in open literature are in US patent 725,605, awarded to Nikola Tesla on March 17, 1903, and in radio pioneer Jonathan Zenneck's book Wireless Telegraphy (German, 1908, English translation McGraw Hill, 1915), although Zenneck writes that Telefunken had already tried it. Tesla's patent does not mention the phrase "frequency hopping" directly, but certainly alludes to it. Entitled Method of Signaling, the patent describes a system that would enable radio communication without any danger of the signals or messages being disturbed, intercepted, or interfered with in any way.
The German military made limited use of frequency hopping for communication between fixed command points in World War I to prevent eavesdropping by British forces, who did not have the technology to follow the sequence. Jonathan Zenneck's book Wireless Telegraphy was originally published in German in 1908, but was translated into English in 1915 as the enemy started using frequency hopping on the front line.
In 1920, Otto B. Blackwell, De Loss K. Martin, and Gilbert S. Vernam filed a patent application for a "Secrecy Communication System", granted as U.S. Patent 1,598,673 in 1926. This patent described a method of transmitting signals on multiple frequencies in a random manner for secrecy, anticipating key features of later frequency hopping systems.
A Polish engineer and inventor, Leonard Danilewicz, claimed to have suggested the concept of frequency hopping in 1929 to the Polish General Staff, but it was rejected.
In 1932, a patent was awarded to Willem Broertjes, named "Method of maintaining secrecy in the transmission of wireless telegraphic messages", which describes a system where "messages are transmitted by means of a group of frequencies... known to the sender and receiver alone, and alternated at will during transmission of the messages".
During World War II, the US Army Signal Corps developed a communication system called SIGSALY, which incorporated spread spectrum in a single-frequency context. SIGSALY was a top-secret communications system, however, so its existence was not known until the 1980s.
In 1942, actress Hedy Lamarr and composer George Antheil received a patent for their "Secret Communications System", an early version of frequency hopping using a piano roll to switch among 88 frequencies, intended to make radio-guided torpedoes harder for enemies to detect or jam. They then donated the patent to the U.S. Navy.
Frequency-hopping ideas may have been rediscovered in the 1950s during patent searches when private companies were independently developing direct-sequence Code Division Multiple Access, a non-frequency-hopping form of spread-spectrum. In 1957, engineers at Sylvania Electronic Systems Division adopted a similar idea, using the recently invented transistor instead of Lamarr's and Antheil's clockwork technology. In 1962, the US Navy utilized Sylvania Electronic Systems Division's work during the Cuban Missile Crisis.
A practical application of frequency hopping was developed by Ray Zinn, co-founder of Micrel Corporation. Zinn developed a method allowing radio devices to operate without the need to synchronize a receiver with a transmitter. Using frequency hopping and sweep modes, Zinn's method is primarily applied in low-data-rate wireless applications such as utility metering, machine and equipment monitoring, and remote control. In 2006, Zinn received a patent for his "Wireless device and method using frequency hopping and sweep modes."
Variations
Adaptive frequency-hopping spread spectrum (AFH) as used in Bluetooth improves resistance to radio frequency interference by avoiding crowded frequencies in the hopping sequence. This sort of adaptive transmission is easier to implement with FHSS than with DSSS.
The key idea behind AFH is to use only the "good" frequencies and avoid the "bad" ones—those experiencing frequency selective fading, those on which a third party is trying to communicate, or those being actively jammed. Therefore, AFH should be complemented by a mechanism for detecting good and bad channels.
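A toy sketch of the good/bad channel classification mentioned above (the threshold, channel count, and fallback policy are all invented for illustration):

```python
def adapt_hop_set(error_rates: dict[int, float],
                  threshold: float = 0.2,
                  min_channels: int = 20) -> list[int]:
    """Keep only 'good' channels (recent error rate below threshold).
    Real AFH implementations such as Bluetooth's also enforce a regulatory
    minimum hop-set size, approximated here by min_channels."""
    good = [ch for ch, rate in error_rates.items() if rate < threshold]
    if len(good) < min_channels:
        # Too few good channels remain: fall back to the least-bad ones.
        good = sorted(error_rates, key=error_rates.get)[:min_channels]
    return sorted(good)

# Toy usage: channels near a (hypothetical) interferer show high error rates.
rates = {ch: (0.5 if 20 <= ch < 30 else 0.05) for ch in range(79)}
assert all(not (20 <= ch < 30) for ch in adapt_hop_set(rates))
```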
But if the radio frequency interference is itself dynamic, then the AFH strategy of "bad channel removal" may not work well. For example, if several frequency-hopping networks are co-located (such as Bluetooth piconets), they interfere with one another and the AFH strategy fails to avoid this interference. Such situations often arise in scenarios that use unlicensed spectrum.
The problems of dynamic interference, gradual reduction of available hopping channels, and backward compatibility with legacy Bluetooth devices were resolved in version 1.2 of the Bluetooth standard (2003).
In addition, dynamic radio frequency interference is expected to occur in the scenarios related to cognitive radio, where the networks and the devices should exhibit frequency-agile operation.
Chirp modulation can be seen as a form of frequency-hopping that simply scans through the available frequencies in consecutive order to communicate.
Frequency hopping can be superimposed on other modulations or waveforms to enhance the system performance.
| Technology | Telecommunications | null |
46916 | https://en.wikipedia.org/wiki/Otter | Otter | Otters are carnivorous mammals in the subfamily Lutrinae. The 13 extant otter species are all semiaquatic, aquatic, or marine. Lutrinae is a branch of the Mustelidae family, which includes weasels, badgers, mink, and wolverines, among other animals.
Otters' habitats include dens known as holts or couches, with their social structure described by terms such as dogs or boars for males, bitches or sows for females, and pups or cubs for offspring. Groups of otters can be referred to as a bevy, family, lodge, romp, or raft when in water, indicating their social and playful characteristics. Otters are known for their distinct feces, termed spraints, which can vary in smell from freshly mown hay to putrefied fish.
Otters exhibit a varied life cycle with a gestation period of about 60–86 days, and offspring typically stay with their family for a year. They can live up to 16 years, with their diet mainly consisting of fish and sometimes frogs, birds, or shellfish, depending on the species. Otters are distinguished by their long, slim bodies, powerful webbed feet for swimming, and their dense fur, which keeps them warm and buoyant in water. They are playful animals, engaging in activities like sliding into water on natural slides and playing with stones.
There are 13 known species of otters, ranging in size and habitat preferences, with some species adapted to cold waters requiring a high metabolic rate for warmth. Otter-human interactions have varied over time, with otters being hunted for their pelts, used in fishing practices in southern Bangladesh, and occasionally attacking humans, though such incidents are rare and often a result of provocation. Otters hold a place in various cultures' mythology and religion, symbolizing different attributes and stories, from Norse mythology to Native American totems and Asian folklore, where they are sometimes believed to possess shapeshifting abilities.
Etymology
The word otter derives from an Old English word. This and cognate words in other Indo-European languages ultimately stem from a Proto-Indo-European root which also gave rise to the English word "water".
Terminology
An otter's den is called a holt, or couch. Male otters are called dogs or boars; females are called bitches or sows; and their offspring are called pups or cubs. The collective nouns for otters are bevy, family, lodge, romp (being descriptive of their often playful nature), or, when in water, raft.
The feces of otters are typically identified by their distinctive aroma, the smell of which has been described as ranging from freshly mown hay to putrefied fish; these are known as spraints.
Life cycle
The gestation period in otters is about 60 to 86 days. The newborn pup is cared for by the bitch, the dog, and older offspring. Female otters reach sexual maturity at approximately two years of age, and males at approximately three years. The holt is built under tree roots or a rocky cairn, the latter being more common in Scotland. It is lined with moss and grass.
After one month, the pup can leave the holt and after two months, it is able to swim. The pup lives with its family for approximately one year. Otters live up to 16 years; they are by nature playful, and frolic in the water with their pups. Its usual source of food is fish, and further downriver, eels, but it may sample frogs and birds.
Description
Otters have long, slim bodies and relatively short limbs. Their most striking anatomical features are the powerful webbed feet used to swim and their seal-like abilities for holding breath underwater. Most have sharp claws on their feet, and all except the sea otter have long, muscular tails. The 13 species vary widely in adult length and weight: the Asian small-clawed otter is the smallest otter species, and the giant otter and sea otter are the largest. They have very soft, insulated underfur, which is protected by an outer layer of long guard hairs. This traps a layer of air which keeps them dry, warm, and somewhat buoyant under water.
Several otter species live in cold waters and have high metabolic rates to help keep them warm. Eurasian otters must eat 15% of their body weight each day, and sea otters 20 to 25%, depending on the temperature. Even in comparatively warm water, an otter needs to catch a substantial quantity of fish per hour to survive. Most species hunt for three to five hours each day, and nursing mothers up to eight hours each day.
Feeding
For most otters, fish is the staple of their diet. This is often supplemented by frogs, crayfish and crabs. Some otters are experts at opening shellfish, and others will feed on available small mammals or birds. Prey-dependence leaves otters very vulnerable to prey depletion. Sea otters are hunters of clams, sea urchins and other shelled creatures. They are notable for their ability to use stones to break open shellfish on their bellies. This skill must be learned by the young.
Otters are active hunters, chasing prey in the water or searching the beds of rivers, lakes or the seas. Most species live beside water, but river otters usually enter it only to hunt or travel, otherwise spending much of their time on land to prevent their fur becoming waterlogged. Sea otters are considerably more aquatic and live in the ocean for most of their lives.
Otters are playful animals and appear to engage in various behaviors for sheer enjoyment, such as making waterslides and sliding on them into the water. They may also find and play with small stones. Different species vary in their social structure, some being largely solitary, while others live in groups – in a few species these groups may be fairly large.
Species
Extant species
Extinct species
Subfamily Lutrinae
Genus Lutra
†Lutra castiglionis – Corsica, Pleistocene
†Lutra euxena – Malta, Pleistocene
†Japanese otter (Lutra nippon) – Japan, extinct c. 1979
Genus Lutrogale
†Lutrogale cretensis
Genus Enhydra
†Enhydra macrodonta
†Enhydra reevei
Genus †Algarolutra – Corsica and Sardinia, Pleistocene
Genus †Cyrnaonyx – Europe, Pleistocene
Genus †Enhydriodon – Ethiopia, Late Miocene to Pliocene
Genus †Enhydritherium – North America, Late Miocene to Early Pliocene
Genus †Lutraeximia – Italy, Pleistocene
Genus †Limnonyx – Germany, Late Miocene
Genus †Megalenhydris – Sardinia, Pleistocene
Genus †Paludolutra – Italy, Late Miocene
Genus †Sardolutra – Sardinia, Pleistocene
Genus †Siamogale – eastern Asia, Late Miocene to Early Pliocene
Genus †Sivaonyx – Asia and Africa, Late Miocene to Early Pliocene
Genus †Teruelictis – Spain, Late Miocene
Genus †Torolutra – Africa, Pliocene
Genus †Tyrrhenolutra – Italy, Late Miocene
Genus †Vishnuonyx – Europe, Asia and Africa, Late Miocene to Early Pliocene
Relation with humans
Hunting
Otters have been hunted for their pelts from at least the 1700s, although it may have begun well before then. Early hunting methods included darts, arrows, nets and snares but later, traps were set on land and guns used.
Otter pelts have a long history of being worn around the world. In China it was standard for royalty to wear robes made from them, and people of high financial status also wore them. The tails of otters were often made into items for men to wear, including hats and belts. Even some types of mittens for children have been made from otter fur.
Otters have also been hunted using dogs, especially the otterhound. From 1958 to 1963, the 11 otter hunts in England and Wales killed 1,065 otters between them. In such hunts, the hunters notched their poles after every kill. The prized trophy that hunters would take from the otters was the baculum, which would be worn as a tie-pin.
Traffic (the wildlife trade monitoring network) reported that otters are at serious risk in Southeast Asia and have disappeared from parts of their former range. This decline in populations is due to hunting to supply the demand for skins.
Fishing for humans
For many generations, fishermen in southern Bangladesh have bred smooth-coated otters and used them to chase fish into their nets. Once a widespread practice, passed down from father to son throughout many communities in Asia, this traditional use of domesticated wild animals is still in practice in the district of Narail, Bangladesh.
Attacks on humans
A 2011 review by the IUCN/SSC Otter Specialist Group showed that otter attacks reported between 1875 and 2010 occurred most often in Florida, where human and otter populations have substantially increased since 2000, with the majority involving the North American otter. At least 42 instances of attack were found, including one resulting in death and another causing serious injury. The attacking otter was rabid in 36% of anecdotal reports, and 80% of otter bite victims do not seek medical treatment.
Animal welfare groups say that, unless threatened, otters rarely attack humans. In November 2021, about 20 otters ambushed a British man in his 60s during an early morning walk in Singapore Botanic Gardens. Despite weighing over 200 pounds, he was trampled and bitten and could not stand up without help from a nearby rescuer. The man speculated that another runner might have stepped on one of the animals earlier, and wished that there could be more lighting installed at that location.
Religion and mythology
Norse mythology tells of the dwarf Ótr habitually taking the form of an otter. The myth of "Otter's Ransom" is the starting point of the Volsunga saga.
In Irish mythology, the character Lí Ban was turned from a woman into a mermaid, half human and half salmon, and given three hundred years of life to roam the oceans. Her lapdog assumed the form of an otter and shared her prolonged lifetime and her extensive wanderings.
In some Native American cultures, otters are considered totem animals.
The otter is held to be a clean animal belonging to Ahura Mazda in Zoroastrian belief, and taboo to kill.
In popular Korean mythology, it is told that people who see an otter (soodal) will attract 'rain clouds' for the rest of their lives.
In the Buddhist Jataka tales, The Otters and The Wolf, two otters agreed to let a wolf settle their dispute in dividing their caught fish but it was taken away by the cunning wolf.
Japanese folklore
In Japanese, otters are called "kawauso" (). In Japanese folklore, they fool humans in the same way as foxes (kitsune) and tanuki.
In the Noto region, Ishikawa Prefecture, there are stories in which they shapeshift into beautiful women or children wearing checker-patterned clothing. If a human attempts to speak to one, it will answer "oraya" and then "araya," and if anybody asks it anything, it says cryptic things like "kawai." There are darker stories, such as one from Kaga Province (now Ishikawa Prefecture) in which an otter that lives in the castle's moat shapeshifts into a woman, invites males, and then kills and eats them.
In the kaidan, essays, and legends of the Edo period like the "Urami Kanawa" (), "Taihei Hyaku Monogatari" (), and the "Shifu Goroku" (), there are tales about strange occurrences like otters that shapeshift into beautiful women and kill men.
In the town of Numatachi, Asa District, Hiroshima Prefecture (now Hiroshima), they are called "tomo no kawauso" () and "ato no kawauso" (). It is said that they shapeshift into bōzu (a kind of monk) and appear before passers-by, and if the passer-by tries to get close and look up, its height steadily increases until it becomes a large bōzu.
In the Tsugaru region, Aomori Prefecture, they are said to possess humans. It is said that those possessed by otters lose their stamina as if their soul has been extracted. They are also said to shapeshift into severed heads and get caught in fishing nets.
In the Kashima District and the Hakui District in Ishikawa Prefecture, they are seen as a yōkai under the name kabuso or kawaso. They perform pranks like extinguishing the fire of the paper lanterns of people who walk on roads at night, shapeshifting into a beautiful woman of 18 or 19 years of age and fooling people, or tricking people and making them try to engage in sumo against a rock or a tree stump. It is said that they speak human words, and sometimes people are called and stopped while walking on roads.
In the Ishikawa and Kochi Prefectures, they are said to be a type of kappa, and there are stories told about how they engage in sumo with otters. In places like the Hokuriku region, Kii, and Shikoku, the otters are seen as a type of kappa. In the Kagakushū, a dictionary from the Muromachi period, an otter that grew old becomes a kappa.
In an Ainu folktale, in Urashibetsu (in Abashiri, Hokkaido), there are stories where monster otters shapeshift into humans, go into homes where there are beautiful girls, and try to kill the girl and make her its wife.
In China, like in Japan, there are stories where otters shapeshift into beautiful women in old books like In Search of the Supernatural and the Zhenyizhi ().
| Biology and health sciences | Carnivora | null |
46924 | https://en.wikipedia.org/wiki/Caesarean%20section | Caesarean section | Caesarean section, also known as C-section, cesarean, or caesarean delivery, is the surgical procedure by which one or more babies are delivered through an incision in the mother's abdomen. It is often performed because vaginal delivery would put the mother or child at risk. Reasons for the operation include obstructed labor, twin pregnancy, high blood pressure in the mother, breech birth, shoulder presentation, and problems with the placenta or umbilical cord. A caesarean delivery may be performed based upon the shape of the mother's pelvis or history of a previous C-section. A trial of vaginal birth after C-section may be possible. The World Health Organization recommends that caesarean section be performed only when medically necessary.
A C-section typically takes 45 minutes to an hour. It may be done with a spinal block, where the woman is awake, or under general anesthesia. A urinary catheter is used to drain the bladder, and the skin of the abdomen is then cleaned with an antiseptic. An incision of about 15 cm (6 inches) is then typically made through the mother's lower abdomen. The uterus is then opened with a second incision and the baby delivered. The incisions are then stitched closed. A woman can typically begin breastfeeding as soon as she is out of the operating room and awake. Often, several days are required in the hospital to recover sufficiently to return home.
C-sections result in a small overall increase in poor outcomes in low-risk pregnancies. They also typically take about six weeks to heal from, longer than vaginal birth. The increased risks include breathing problems in the baby and amniotic fluid embolism and postpartum bleeding in the mother. Established guidelines recommend that caesarean sections not be used before 39 weeks of pregnancy without a medical reason. The method of delivery does not appear to have an effect on subsequent sexual function.
In 2012, about 23 million C-sections were done globally. The international healthcare community has previously considered rates of 10% to 15% to be ideal for caesarean sections. Some evidence finds a higher rate of 19% may result in better outcomes. More than 45 countries globally have C-section rates less than 7.5%, while more than 50 have rates greater than 27%. Efforts are being made to both improve access to and reduce the use of C-section. In the United States as of 2017, about 32% of deliveries are by C-section. The surgery has been performed at least as far back as 715 BC following the death of the mother, with the baby occasionally surviving. A popular idea is that the Roman statesman Julius Caesar was born via caesarean section and is the namesake of the procedure, but if this is the true etymology, it is based on a misconception: until the modern era, C-sections seem to have been invariably fatal to the mother, and Caesar's mother Aurelia not only survived her son's birth but lived for nearly 50 years afterward. There are many ancient and medieval legends, oral histories, and historical records of laws about C-sections around the world, especially in Europe, the Middle East and Asia. The first recorded successful C-section (where both the mother and the infant survived) was performed on a woman in Switzerland in 1500 by her husband, Jakob Nufer, though this was not recorded until eight decades later. With the introduction of antiseptics and anesthetics in the 19th century, survival of both the mother and baby, and thus the procedure, became significantly more common.
Uses
Caesarean section (C-section) is recommended when vaginal delivery might pose a risk to the mother or baby. C-sections are also carried out for personal and social reasons on maternal request in some countries.
Medical uses
Complications of labor and factors increasing the risk associated with vaginal delivery include:
Abnormal presentation (breech or transverse positions)
Prolonged labor or a failure to progress (obstructed labour, also known as dystocia)
Fetal distress
Cord prolapse
Uterine rupture or an elevated risk thereof
Uncontrolled hypertension, pre-eclampsia, or eclampsia in the mother
Tachycardia in the mother or baby after amniotic rupture (the waters breaking)
Placenta problems (placenta praevia, placental abruption or placenta accreta)
Failed labor induction
Failed instrumental delivery (by forceps or ventouse; sometimes a trial of forceps/ventouse delivery is attempted and, if unsuccessful, the baby must be delivered by caesarean section)
Large baby weighing > 4,000 grams (macrosomia)
Umbilical cord abnormalities (vasa previa, multilobate including bilobate and succenturiate-lobed placentas, velamentous insertion)
Other complications of pregnancy, pre-existing conditions, and concomitant diseases include:
Precious (high-risk) fetus
HIV infection of the mother with a high viral load (HIV with a low maternal viral load is not necessarily an indication for caesarean section)
An outbreak of genital herpes in the third trimester (which can cause infection in the baby if born vaginally)
Previous classical (longitudinal) caesarean section
Previous uterine rupture
Prior problems with the healing of the perineum (from previous childbirth or Crohn's disease)
Bicornuate uterus
Rare cases of posthumous birth after the death of the mother
Other
Decreasing experience of accoucheurs with the management of breech presentation. Although obstetricians and midwives are extensively trained in proper procedures for breech presentation deliveries using simulation mannequins, there is decreasing experience with actual vaginal breech delivery, which may increase the risk.
Prevention
The prevalence of caesarean section is generally agreed to be higher than needed in many countries, and physicians are encouraged to actively lower the rate, as a caesarean rate higher than 10–15% is not associated with reductions in maternal or infant mortality rates, although some evidence supports the idea that a higher rate of 19% may result in better outcomes.
Some of these efforts are: emphasizing that a long latent phase of labor is not abnormal and is not a justification for C-section; changing the definition of the start of active labor from a cervical dilatation of 4 cm to a dilatation of 6 cm; and allowing women who have previously given birth to push for at least 2 hours, with 3 hours of pushing for women who have not previously given birth, before labor arrest is considered. Physical exercise during pregnancy decreases the risk. Additionally, a 2021 systematic review of the evidence on outpatient cervical ripening found that, in women with low-risk pregnancies, the risks of cesarean delivery and of harms to the mother or child were not significantly different from those of cervical ripening done in an inpatient setting.
Risks
Adverse outcomes in low-risk pregnancies occur in 8.6% of vaginal deliveries and 9.2% of caesarean section deliveries.
Mother
In those who are low risk, the risk of death in the developed world is 13 per 100,000 for caesarean sections versus 3.5 per 100,000 for vaginal birth. The United Kingdom National Health Service gives the mother's risk of death as three times that of a vaginal birth.
In Canada, the difference in serious morbidity or mortality for the mother (e.g. cardiac arrest, wound hematoma, or hysterectomy) was 1.8 additional cases per 100. The difference in in-hospital maternal death was not significant.
A caesarean section is associated with risks of postoperative adhesions, incisional hernias (which may require surgical correction), and wound infections. If a caesarean is performed in an emergency, the risk of the surgery may be increased due to a number of factors. The patient's stomach may not be empty, increasing the anaesthesia risk. Other risks include severe blood loss (which may require a blood transfusion) and post-dural-puncture spinal headaches.
Wound infections occur after caesarean sections at a rate of 3–15%. The presence of chorioamnionitis and obesity predisposes the woman to develop a surgical site infection.
Women who had caesarean sections are more likely to have problems with later pregnancies, and women who want larger families should not seek an elective caesarean unless medical indications to do so exist. The risk of placenta accreta, a potentially life-threatening condition which is more likely to develop where a woman has had a previous caesarean section, is 0.13% after two caesarean sections, but increases to 2.13% after four and then to 6.74% after six or more. Along with this is a similar rise in the risk of emergency hysterectomies at delivery.
Mothers can experience an increased incidence of postnatal depression, and can experience significant psychological trauma and ongoing birth-related post-traumatic stress disorder after obstetric intervention during the birthing process. Factors such as pain in the first stage of labor, feelings of powerlessness, and intrusive emergency obstetric intervention are important in the subsequent development of psychological issues related to labor and delivery.
Subsequent pregnancies
Women who have had a caesarean for any reason are somewhat less likely to become pregnant again as compared to women who have previously delivered only vaginally.
Women who had just one previous caesarean section are more likely to have problems with their second birth. Delivery after previous caesarean section is by either of two main options:
Vaginal birth after caesarean section (VBAC)
Elective repeat caesarean section (ERCS)
Both have higher risks than a vaginal birth with no previous caesarean section. A vaginal birth after caesarean section (VBAC) confers a higher risk of uterine rupture (5 per 1,000), blood transfusion or endometritis (10 per 1,000), and perinatal death of the child (0.25 per 1,000). Furthermore, 20% to 40% of planned VBAC attempts end in caesarean section being needed, with greater risks of complications in an emergency repeat caesarean section than in an elective repeat caesarean section. On the other hand, VBAC confers less maternal morbidity and a decreased risk of complications in future pregnancies than elective repeat caesarean section.
Adhesions
There are several steps that can be taken during abdominal or pelvic surgery to minimize postoperative complications, such as the formation of adhesions. Such techniques and principles may include:
Handling all tissue with absolute care
Using powder-free surgical gloves
Controlling bleeding
Choosing sutures and implants carefully
Keeping tissue moist
Preventing infection with antibiotics given intravenously to the mother before skin incision
Despite these proactive measures, adhesion formation is a recognized complication of any abdominal or pelvic surgery. To prevent adhesions from forming after caesarean section, an adhesion barrier can be placed during surgery to minimize the risk of adhesions between the uterus and ovaries, the small bowel, and almost any tissue in the abdomen or pelvis. This is not current UK practice, as there is no compelling evidence to support the benefit of this intervention.
Adhesions can cause long-term problems, such as:
Infertility, which may result when adhesions distort the tissues of the ovaries and tubes, impeding the normal passage of the egg (ovum) from the ovary to the uterus. One in five infertility cases may be adhesion related (Stovall)
Chronic pelvic pain, which may result when adhesions are present in the pelvis. Almost 50% of chronic pelvic pain cases are estimated to be adhesion related (Stovall)
Small bowel obstruction: the disruption of normal bowel flow, which can result when adhesions twist or pull the small bowel.
The risk of adhesion formation is one reason why vaginal delivery is usually considered safer than elective caesarean section where there is no medical indication for section for either maternal or fetal reasons.
Child
Non-medically indicated (elective) childbirths before 39 weeks gestation "carry significant risks for the baby with no known benefit to the mother." Newborn mortality at 37 weeks may be up to 3 times that at 40 weeks, and is elevated compared to 38 weeks gestation. These early term births were associated with more death during infancy compared to births occurring at 39 to 41 weeks (full term). Researchers in one study and another review found many benefits to going full term and no adverse effects on the health of the mothers or babies.
The American Congress of Obstetricians and Gynecologists and medical policy makers, reviewing research studies, find a higher incidence of suspected or proven sepsis, respiratory distress syndrome (RDS), hypoglycemia, need for respiratory support, need for NICU admission, and need for hospitalization longer than 4–5 days. In the case of caesarean sections, rates of respiratory death were 14 times higher for pre-labor caesarean at 37 weeks compared with 40 weeks gestation, and 8.2 times higher for pre-labor caesarean at 38 weeks. In this review, no studies found decreased neonatal morbidity due to non-medically indicated (elective) delivery before 39 weeks.
For otherwise healthy twin pregnancies where both twins are head down a trial of vaginal delivery is recommended at between 37 and 38 weeks. Vaginal delivery, in this case, does not worsen the outcome for either infant as compared with caesarean section. There is some controversy on the best method of delivery where the first twin is head first and the second is not, but most obstetricians will recommend normal delivery unless there are other reasons to avoid vaginal birth. When the first twin is not head down, a caesarean section is often recommended. Regardless of whether the twins are delivered by section or vaginally, the medical literature recommends delivery of dichorionic twins at 38 weeks, and monochorionic twins (identical twins sharing a placenta) by 37 weeks due to the increased risk of stillbirth in monochorionic twins who remain in utero after 37 weeks. The consensus is that late preterm delivery of monochorionic twins is justified because the risk of stillbirth for post-37-week delivery is significantly higher than the risks posed by delivering monochorionic twins near term (i.e., 36–37 weeks).
The consensus concerning monoamniotic twins (identical twins sharing an amniotic sac), the highest risk type of twins, is that they should be delivered by caesarean section at or shortly after 32 weeks, since the risks of intrauterine death of one or both twins are higher after this gestation than the risk of complications of prematurity.
In a widely publicized research study, singleton children born earlier than 39 weeks were found to have more developmental problems, including slower learning in reading and math.
Other risks include:
Wet lung (transient tachypnea of the newborn): a baby that does not pass through the birth canal is not exposed to the cortisol and epinephrine that typically would reverse the potassium/sodium pumps in the baby's lungs; this causes fluid to remain in the lung.
Potential for early delivery and complications: Preterm delivery may be inadvertently carried out if the due-date calculation is inaccurate. One study found an increased complication risk if a repeat elective caesarean section is performed even a few days before the recommended 39 weeks.
Higher infant mortality risk: In caesarean sections performed with no indicated medical risk (singleton at full term in a head-down position with no other obstetric or medical complications), the risk of death in the first 28 days of life has been cited as 1.77 per 1,000 live births among women who had caesarean sections, compared to 0.62 per 1,000 for women who delivered vaginally.
Birth by caesarean section also seems to be associated with worse health outcomes later in life, including overweight or obesity, immune system problems, and poor digestive health. However, caesarean deliveries were found not to affect a newborn's risk of developing food allergy. This finding contradicts a previous study claiming that babies born via caesarean section have lower levels of Bacteroides, which is linked to peanut allergy in infants.
Classification
Caesarean sections have been classified in various ways from different perspectives. One way to discuss all classification systems is to group them by their focus: the urgency of the procedure (most common), the characteristics of the mother, or other, less commonly discussed factors.
By urgency
Conventionally, caesarean sections are classified as being either an elective surgery or an emergency operation. Classification is used to help communication between the obstetric, midwifery and anaesthetic team for discussion of the most appropriate method of anaesthesia. The decision whether to perform general anesthesia or regional anesthesia (spinal or epidural anaesthetic) is important and is based on many indications, including how urgent the delivery needs to be as well as the medical and obstetric history of the woman. Regional anaesthetic is almost always safer for the woman and the baby but sometimes general anaesthetic is safer for one or both, and the classification of urgency of the delivery is an important issue affecting this decision.
A planned caesarean (or elective/scheduled caesarean), arranged ahead of time, is most commonly arranged for medical indications which have developed before or during the pregnancy, and ideally after 39 weeks of gestation. In the UK, this is classified as a 'grade 4' section (delivery timed to suit the mother or hospital staff) or as a 'grade 3' section (no maternal or fetal compromise but early delivery needed).
Emergency caesarean sections are performed in pregnancies in which a vaginal delivery was planned initially, but an indication for caesarean delivery has since developed. In the UK they are further classified as grade 2 (delivery required within 90 minutes of the decision but no immediate threat to the life of the woman or the fetus) or grade 1 (delivery required within 30 minutes of the decision: immediate threat to the life of the mother or the baby or both.)
Elective caesarean sections may be performed on the basis of an obstetrical or medical indication, or because of a medically non-indicated maternal request. Among women in the United Kingdom, Sweden and Australia, about 7% preferred caesarean section as a method of delivery. In cases without medical indications, the American Congress of Obstetricians and Gynecologists and the UK Royal College of Obstetricians and Gynaecologists recommend a planned vaginal delivery. The National Institute for Health and Care Excellence recommends that if a woman, after being provided information on the risks of a planned caesarean section, still insists on the procedure, it should be provided. If provided, this should be done at 39 weeks of gestation or later. There is no evidence that elective caesarean section (ECS) can reduce mother-to-child hepatitis B and hepatitis C virus transmission.
By characteristics of the mother
Caesarean delivery on maternal request
Caesarean delivery on maternal request (CDMR) is a medically unnecessary caesarean section, where the childbirth is conducted via caesarean section at the request of the pregnant patient even though there is no medical indication for the surgery. Systematic reviews have found no strong evidence about the impact of caesareans for nonmedical reasons. Recommendations encourage counseling to identify the reasons for the request, addressing anxieties and providing information, and encouraging vaginal birth. In some studies, elective caesareans at 38 weeks showed increased health complications in the newborn. For this reason, ACOG and NICE recommend that elective caesarean sections not be scheduled before 39 weeks gestation unless there is a medical reason. Planned caesarean sections may be scheduled earlier if there is a medical reason.
After previous caesarean
Mothers who have previously had a caesarean section are more likely to have a caesarean section for future pregnancies than mothers who have never had a caesarean section. There is discussion about the circumstances under which women should have a vaginal birth after a previous caesarean.
Vaginal birth after caesarean (VBAC) is the practice of birthing a baby vaginally after a previous baby has been delivered by caesarean section (surgically). According to the American College of Obstetricians and Gynecologists (ACOG), successful VBAC is associated with decreased maternal morbidity and a decreased risk of complications in future pregnancies. According to the American Pregnancy Association, 90% of women who have undergone caesarean deliveries are candidates for VBAC. Approximately 60–80% of women opting for VBAC will successfully give birth vaginally, which is comparable to the overall vaginal delivery rate in the United States in 2010.
Twins
For otherwise healthy twin pregnancies where both twins are head down a trial of vaginal delivery is recommended at between 37 and 38 weeks. Vaginal delivery in this case does not worsen the outcome for either infant as compared with caesarean section. There is controversy on the best method of delivery where the first twin is head first and the second is not. When the first twin is not head down at the point of labor starting, a caesarean section should be recommended. Although the second twin typically has a higher frequency of problems, it is not known if a planned caesarean section affects this. It is estimated that 75% of twin pregnancies in the United States were delivered by caesarean section in 2008.
Breech birth
A breech birth is the birth of a baby from a breech presentation, in which the baby exits the pelvis with the buttocks or feet first as opposed to the normal head-first presentation. In breech presentation, fetal heart sounds are heard just above the umbilicus.
Babies are usually born head first. If the baby is in another position the birth may be complicated. In a 'breech presentation', the unborn baby is bottom-down instead of head-down. Babies born bottom-first are more likely to be harmed during a normal (vaginal) birth than those born head-first. For instance, the baby might not get enough oxygen during the birth. Having a planned caesarean may reduce these problems. A review comparing planned caesarean section with planned vaginal birth for singleton breech presentation concluded that in the short term, births with a planned caesarean were safer for babies than vaginal births. Fewer babies died or were seriously hurt when they were born by caesarean. There was tentative evidence that children who were born by caesarean had more health problems at age two. Caesareans caused some short-term problems for mothers such as more abdominal pain. They also had some benefits, such as less urinary incontinence and less perineal pain.
The bottom-down position presents some hazards to the baby during the process of birth, and the mode of delivery (vaginal versus caesarean) is controversial in the fields of obstetrics and midwifery.
Though vaginal birth is possible for the breech baby, certain fetal and maternal factors influence the safety of vaginal breech birth. The majority of breech babies born in the United States and the UK are delivered by caesarean section, as studies have shown increased risks of morbidity and mortality for vaginal breech delivery, and most obstetricians counsel against planned vaginal breech birth for this reason. As a result of the reduced numbers of actual vaginal breech deliveries, obstetricians and midwives are at risk of losing this important skill. All those involved in the delivery of obstetric and midwifery care in the UK undergo mandatory, regularly repeated training in conducting breech deliveries in a simulation environment (using dummy pelvises and mannequins) to keep this skill up to date.
Resuscitative hysterotomy
A resuscitative hysterotomy, also known as a peri-mortem caesarean delivery, is an emergency caesarean delivery carried out where maternal cardiac arrest has occurred, to assist in resuscitation of the mother by removing the aortocaval compression generated by the gravid uterus. Unlike other forms of caesarean section, the welfare of the fetus is a secondary priority only, and the procedure may be performed even prior to the limit of fetal viability if it is judged to be of benefit to the mother.
Other ways, including the surgery technique
There are several types of caesarean section (CS). An important distinction lies in the type of incision (longitudinal or transverse) made on the uterus, apart from the incision on the skin: the vast majority of skin incisions are a transverse suprapubic approach known as a Pfannenstiel incision but there is no way of knowing from the skin scar which way the uterine incision was conducted.
The classical caesarean section involves a longitudinal midline incision on the uterus which allows a larger space to deliver the baby. It is performed at very early gestations where the lower segment of the uterus is unformed as it is safer in this situation for the baby: but it is rarely performed other than at these early gestations, as the operation is more prone to complications than a low transverse uterine incision. Any woman who has had a classical section will be recommended to have an elective repeat section in subsequent pregnancies as the vertical incision is much more likely to rupture in labor than the transverse incision.
The lower uterine segment section is the procedure most commonly used today; it involves a transverse cut just above the edge of the bladder. It results in less blood loss and has fewer early and late complications for the mother, as well as allowing her to consider a vaginal birth in the next pregnancy.
A caesarean hysterectomy consists of a caesarean section followed by the removal of the uterus. This may be done in cases of intractable bleeding or when the placenta cannot be separated from the uterus.
The EXIT procedure is a specialized surgical delivery procedure used to deliver babies who have airway compression.
The Misgav Ladach method is a modified caesarean section which has been used nearly all over the world since the 1990s. It was described by Michael Stark, the president of the New European Surgical Academy, at the time he was the director of Misgav Ladach, a general hospital in Jerusalem. The method was presented during a FIGO conference in Montréal in 1994 and then distributed by the University of Uppsala, Sweden, in more than 100 countries. The method is based on minimalistic principles: Stark examined all the steps of caesarean sections then in use, analyzing each for its necessity and, where necessary, for its optimal way of performance. For the abdominal incision he used the modified Joel Cohen incision, comparing the longitudinal abdominal structures to strings on musical instruments. As blood vessels and muscles have lateral sway, it is possible to stretch rather than cut them. The peritoneum is opened by repeated stretching, no abdominal swabs are used, the uterus is closed in one layer with a big needle to reduce the amount of foreign body as much as possible, the peritoneal layers remain unsutured, and the abdomen is closed with two layers only. Women undergoing this operation recover quickly and can look after their newborns soon after surgery. There are many publications showing the advantages over traditional caesarean section methods. There is also an increased risk of abruptio placentae and uterine rupture in subsequent pregnancies for women who underwent this method in prior deliveries.
Since 2015, the World Health Organization has endorsed the Robson classification as a holistic means of comparing childbirth rates between different settings, with a view to allowing more accurate comparison of caesarean section rates.
Technique
Antibiotic prophylaxis is used before an incision. The uterus is incised, and this incision is extended with blunt pressure along a cephalad-caudad axis. The infant is delivered, and the placenta is then removed. The surgeon then makes a decision about uterine exteriorization. Single-layer uterine closure is used when the mother does not want a future pregnancy. When subcutaneous tissue is 2 cm thick or more, surgical suture is used. Discouraged practices include manual cervical dilation, any subcutaneous drain, or supplemental oxygen therapy with intent to prevent infection.
Caesarean section can be performed with single- or double-layer suturing of the uterine incision. Single-layer closure, compared with double-layer closure, has been observed to result in reduced blood loss during the surgery. It is uncertain whether this is a direct effect of the suturing technique or whether other factors, such as the type and site of abdominal incision, contribute to the reduced blood loss. Standard procedure includes closure of the peritoneum. Research questions whether this is needed, with some studies indicating that peritoneal closure is associated with longer operative time and hospital stay. The Misgav Ladach method is a surgical technique that may have fewer secondary complications and allow faster healing, because tissues are stretched rather than cut during entry.
Anesthesia
Both general and regional anaesthesia (spinal, epidural or combined spinal and epidural anaesthesia) are acceptable for use during caesarean section. Evidence does not show a difference between regional anaesthesia and general anaesthesia with respect to major outcomes in the mother or baby. Regional anaesthesia may be preferred as it allows the mother to be awake and interact immediately with her baby. Compared to general anaesthesia, regional anaesthesia is better at preventing persistent postoperative pain 3 to 8 months after caesarean section. Other advantages of regional anesthesia may include the absence of typical risks of general anesthesia: pulmonary aspiration of gastric contents (which has a relatively high incidence in patients undergoing anesthesia in late pregnancy) and esophageal intubation. One trial found no difference in satisfaction when general anaesthesia was compared with spinal anaesthesia.
Regional anaesthesia is used in 95% of deliveries, with spinal and combined spinal and epidural anaesthesia being the most commonly used regional techniques in scheduled caesarean section. Regional anaesthesia during caesarean section is different from the analgesia (pain relief) used in labor and vaginal delivery. The pain that is experienced because of surgery is greater than that of labor and therefore requires a more intense nerve block.
General anesthesia may be necessary because of specific risks to mother or child. Patients with heavy, uncontrolled bleeding may not tolerate the hemodynamic effects of regional anesthesia. General anesthesia is also preferred in very urgent cases, such as severe fetal distress, when there is no time to perform a regional anesthesia.
Prevention of complications
Postpartum infection is one of the main causes of maternal death and may account for 10% of maternal deaths globally. A caesarean section greatly increases the risk of infection and associated morbidity, estimated to be between 5 and 20 times as high, and routine use of antibiotic prophylaxis to prevent infections was found by a meta-analysis to substantially reduce the incidence of febrile morbidity. Infection can occur in around 8% of women who have caesareans, largely endometritis, urinary tract infections and wound infections. The use of preventative antibiotics in women undergoing caesarean section decreased wound infection, endometritis, and serious infectious complications by about 65%. Side effects and effects on the baby are unclear.
Women who have caesareans should be able to recognize the signs of fever that indicate a possible wound infection. Taking antibiotics before skin incision rather than after cord clamping reduces the risk for the mother, without increasing adverse effects for the baby. Moderate-certainty evidence suggests that chlorhexidine gluconate as a skin preparation is slightly more effective than povidone-iodine in preventing surgical site infections, but further research is needed.
Some doctors believe that during a caesarean section, mechanical cervical dilation with a finger or forceps will prevent the obstruction of blood and lochia drainage, and thereby benefit the mother by reducing the risk of death. The evidence neither supported nor refuted this practice for reducing postoperative morbidity, pending further large studies.
Hypotension (low blood pressure) is common in women who have spinal anaesthesia; intravenous fluids such as crystalloids, or compressing the legs with bandages, stockings, or inflatable devices may help to reduce the risk of hypotension but the evidence is still uncertain about their effectiveness.
Skin-to-skin contact
The WHO and UNICEF recommend that infants born by Caesarean section should have skin-to-skin contact (SSC) as soon as the mother is alert and responsive. Immediate SSC following a spinal or epidural anesthetic is possible because the mother remains alert; however, after a general anesthetic the father or other family member may provide SSC until the mother is able.
It is known that during the hours of labor before a vaginal birth a woman's body begins to produce oxytocin, which aids in the bonding process, and it is thought that SSC can trigger its production as well. Indeed, women have reported that SSC helped them to feel close to and bond with their infant. A review of the literature also found that immediate or early SSC increased the likelihood of successful breastfeeding, and that newborns cried less and relaxed more quickly when they had SSC with their father as well.
Recovery
It is common for women who undergo caesarean section to have reduced or absent bowel movements for hours to days. During this time, women may experience abdominal cramps, nausea and vomiting. This usually resolves without treatment. Poorly controlled pain following non-emergent caesarean section occurs in between 13% and 78% of women. Following caesarean delivery, complementary and alternative therapies (e.g., acupuncture) may help to relieve pain, though evidence supporting the efficacy of such treatments is extremely limited. Abdominal, wound and back pain can continue for months after a caesarean section. Non-steroidal anti-inflammatory drugs can be helpful. For the first couple of weeks after a caesarean, women should avoid lifting anything heavier than their baby. To minimize pain during breastfeeding, women should experiment with different breastfeeding holds including the football hold and side-lying hold. Women who have had a caesarean are more likely to experience pain that interferes with their usual activities than women who have vaginal births, although by six months there is generally no longer a difference. Pain during sexual intercourse is less likely than after vaginal birth; by six months there is no difference.
There may be a somewhat higher incidence of postnatal depression in the first weeks after childbirth for women who have caesarean sections, but this difference does not persist. Some women who have had caesarean sections, especially emergency caesareans, experience post-traumatic stress disorder.
A woman who undergoes caesarean section has an 18.3% chance of chronic surgical pain at three months and a 6.8% chance at 12 months.
In recent meta-analyses, caesarean section has been associated with a lower risk of urinary incontinence and pelvic organ prolapse compared to vaginal delivery. Women who have vaginal births after a previous caesarean are more than twice as likely to subsequently have pelvic floor surgery as those who have another caesarean.
Frequency
Global rates of caesarean section are increasing: the rate doubled between 2003 and 2018 to reach 21%, and is increasing annually by 4%. The trend towards increasing rates is particularly strong in middle and high income countries. In southern Africa, the cesarean rate is less than 5%, while the rate is almost 60% in some parts of Latin America. The Canadian rate was 26% in 2005–2006. Australia has a high caesarean section rate, at 31% in 2007. At one time a rate of 10% to 15% was thought to be ideal; a rate of 19% may result in better outcomes. The World Health Organization officially withdrew its previous recommendation of a 15% C-section rate in June 2010. Their official statement read, "There is no empirical evidence for an optimum percentage. What matters most is that all women who need caesarean sections receive them."
More than 50 nations have rates greater than 27%. Another 45 countries have rates less than 7.5%. There are efforts to both improve access to and reduce the use of C-section. Globally, 1% of all caesarean deliveries are carried out without medical need. Overall, the caesarean section rate was 25.7% for 2004–2008.
There is no significant difference in caesarean rates when comparing midwife continuity care to conventional fragmented care. More emergency caesareans (about 66%) are performed during the day rather than at night.
The rate has risen to 46% in China and to levels of 25% and above in many Asian, European and Latin American countries. In Brazil and Iran the caesarean section rate is greater than 40%. Brazil has one of the highest caesarean section rates in the world, with rates in the public sector of 35–45%, and 80–90% in the private sector.
Europe
Across Europe, there are differences between countries: in Italy the caesarean section rate is 40%, while in the Nordic countries it is 14%. In the United Kingdom, in 2008, the rate was 24%. In Ireland the rate was 26.1% in 2009.
In Italy, the incidence of caesarean sections is particularly high, although it varies from region to region. In Campania, 60% of 2008 births reportedly occurred via caesarean sections. In the Rome region, the mean incidence is around 44%, but can reach as high as 85% in some private clinics.
United States
In the United States, cesarean deliveries began rising in the 1960s and became routine over the course of the 1960s and 1970s.
In the United States the rate of C-section is around 33%, varying from 23% to 40% depending on the state. One of three women who gave birth in the US delivered by caesarean in 2011. In 2012, close to 23 million C-sections were carried out globally.
With nearly 1.3 million stays, caesarean section was one of the most common procedures performed in U.S. hospitals in 2011. It was the second-most common procedure performed for people ages 18 to 44 years old. Caesarean rates in the U.S. have risen considerably since 1996, reaching 33% of all births in 2012, up from 21% in 1996. In 2010, the caesarean delivery rate was 32.8% of all births (a slight decrease from 2009's high of 32.9%). A study found that in 2011, women covered by private insurance were 11% more likely to have a caesarean section delivery than those covered by Medicaid. The increase in use has not resulted in improved outcomes, supporting the position that C-sections may be done too frequently. It is believed that the high rate of induced deliveries has also led to the high rate of C-sections, because induced labors are twice as likely to end in one.
Hospitals and doctors make more money from C-section births than from vaginal deliveries. Economists have calculated that hospitals may make a few thousand dollars more and doctors a few hundred. For-profit hospitals have been found to do more C-sections than non-profit hospitals. One study looked at the rate of C-sections performed on women who were themselves doctors, and found it was 10 percent lower than the rate in the general population. But if the hospital paid its doctors a flat salary, removing the incentive to do the surgical procedure, the rate of C-sections performed on women who were themselves physicians exceeded that of non-medically knowledgeable mothers, suggesting that some women who actually needed C-sections were not getting them.
Concerned over the rising number of cesarean deliveries and hospital costs, in 2009 Minnesota introduced a blended payment rate for either vaginal or cesarean uncomplicated births (i.e., a similar payment regardless of delivery mode). As a result, the prepolicy cesarean rate of 22.8% dropped 3.24 percentage points. The cost of childbirth hospitalizations in Minnesota dropped by $425.80 at the time the policy was initiated and continued to drop by $95.04 per quarter with no significant effects on maternal morbidity.
The rise of cesarean births in the United States has coincided with counter-movements emphasizing natural childbirth with a lesser degree of medical intervention.
China
The rate of cesarean sections began to sharply increase in China in the 1990s. This increase was driven by the expansion of China's modern hospital infrastructure, and occurred first in urban areas. The rise in cesarean deliveries has also resulted in social critique of the medical establishment over the medical necessity of performing cesarean sections.
History
Historically, caesarean sections performed upon a live woman usually resulted in the death of the mother. It was considered an extreme measure, performed only when the mother was already dead or considered to be beyond help. By way of comparison, see the resuscitative hysterotomy or perimortem caesarean section.
According to the ancient Chinese Records of the Grand Historian, Luzhong (), a sixth-generation descendant of the mythical Yellow Emperor, had six sons, all born by "cutting open the body". The sixth son Jilian founded the House of Mi that ruled the State of Chu (–223 BC).
The Sanskrit medical treatise Sushruta Samhita, composed in the early 1st millennium CE, mentions post-mortem caesarean sections. The first available non-mythical record of a C-section concerns the mother of Bindusara (born , ruled 298 – ), the second Mauryan Samrat (emperor) of India, who accidentally consumed poison and died when she was close to delivering him. Chanakya, the teacher and adviser of Bindusara's father Chandragupta, was determined that the baby should survive; he cut open the belly of the queen and took out the baby, thus saving the baby's life.
An early account of caesarean section in Iran (Persia) is mentioned in the book of Shahnameh, written around 1000 AD, and relates to the birth of Rostam, the legendary hero of that country. According to the Shahnameh, the Simurgh instructed Zal upon how to perform a caesarean section, thus saving Rudaba and the child Rostam. In Persian literature caesarean section is known as Rostamina ().
In the Irish mythological Ulster Cycle, the character Furbaide Ferbend is said to have been born by posthumous caesarean section, after his mother was murdered by his evil aunt Medb.
The Babylonian Talmud, an ancient Jewish religious text, mentions a procedure similar to the caesarean section; the procedure is termed yotzei dofen. It also discusses at length the permissibility of performing a C-section on a dying or dead mother. There is also some basis for supposing that Jewish women regularly survived the operation in Roman times (as early as the 2nd century AD).
Pliny the Elder theorized that Julius Caesar's (born 100 BC) name came from an ancestor who was born by caesarean section, but the truth of this is debated (see the discussion of the etymology of Caesar). A popular misconception holds that Caesar himself was born by the procedure; this is considered false because the procedure was lethal to mothers in ancient Rome, and Caesar's mother Aurelia Cotta lived until he was an adult. The Ancient Roman caesarean section was first performed to remove a baby from the womb of a mother who died during childbirth, a practice sometimes called the Caesarean law.
The Spanish saint Raymond Nonnatus (1204–1240) received his surname, from the Latin non natus ('not born'), because he was born by caesarean section. His mother died while giving birth to him.
There is some indirect evidence that the first caesarean section that was survived by both the mother and child was performed in Prague in 1337. The mother was Beatrice of Bourbon, the second wife of the King of Bohemia John of Luxembourg. Beatrice gave birth to the king's son Wenceslaus I, later the duke of Luxembourg, Brabant, and Limburg, and who became the half brother of the later King of Bohemia and Holy Roman Emperor, Charles IV.
In an account from the 1580s, Jakob Nufer, a veterinarian in Siegershausen, Switzerland, is supposed to have performed the operation on his wife after a prolonged labour, with her surviving. His wife allegedly bore five more children, including twins, and the baby delivered by caesarean section purportedly lived to the age of 77.
For most of the time since the 16th century, the procedure had a high mortality rate by modern standards. Key steps in reducing mortality were:
The introduction of the transverse incision technique to minimize bleeding, by Ferdinand Adolf Kehrer in 1881, which is thought to be the first modern CS performed
The introduction of uterine suturing by Max Sänger in 1882
Modification by Hermann Johannes Pfannenstiel in 1900, see Pfannenstiel incision
Extraperitoneal CS and then moving to low transverse incision (Krönig, 1912)
Adherence to principles of asepsis
Anesthesia advances
Blood transfusion
Antibiotics
Indigenous people in the Great Lakes region of Africa, including Rwanda and Uganda, performed caesarean sections, which in one account by Robert William Felkin from 1879 resulted in the survival of both mother and child. Banana wine was used, although the site of the incision was then also washed with water and, after the operation, covered with a paste made by chewing two different roots. From the well-developed nature of the medical procedures employed, Felkin concluded that these procedures had been employed for some time. James Barry was the first European doctor to carry out a successful caesarean in Africa, while posted to Cape Town between 1817 and 1828.
The first successful caesarean section to be performed in the United States took place in Rockingham County, Virginia in 1794. The procedure was performed by Dr. Jesse Bennett on his wife Elizabeth.
Caesarius of Terracina
The patron saint of caesarean section is Caesarius, a young deacon martyred at Terracina, who has replaced and Christianized the pagan figure of Caesar. The martyr (Saint Cesareo in Italian) is invoked for the success of this surgical procedure because he was considered the new "Christian Caesar", as opposed to the "pagan Caesar". In the Middle Ages he began to be invoked by pregnant women wishing for a physiological birth, for the successful expulsion of the baby from the uterus and, therefore, for their own salvation and that of the unborn. The practice continues: the martyr Caesarius is invoked by future mothers who, due to their own health problems or those of the baby, must give birth by caesarean section.
Etymology
The origin of the term is not definitively known. The Roman Lex Regia (royal law), later the Lex Caesarea (imperial law), of Numa Pompilius (715–673 BC) required the child of a mother who had died during childbirth to be cut from her womb.
There was a cultural taboo against burying a pregnant mother, and the law may also have reflected a way of saving some fetuses. Roman practice required a living mother to be in her tenth month of pregnancy before resorting to the procedure, reflecting the knowledge that she could not survive the delivery.
Speculation that the Roman dictator Julius Caesar was born by the method now known as C-section is false. Although caesarean sections were performed in Roman times, no classical source records a mother surviving such a delivery, while Caesar's mother lived for years after his birth. As late as the 12th century, the scholar and physician Maimonides expressed doubt over the possibility of a woman surviving this procedure and again becoming pregnant. The term has also been explained as deriving from the verb caedere, 'to cut', with children delivered this way referred to as caesones. Pliny the Elder refers to a certain Julius Caesar (an ancestor of the famous Roman statesman) as 'cut from the womb', giving this as an explanation for the cognomen Caesar, which was then carried by his descendants. Nonetheless, the false etymology was widely repeated until recently. For example, the first (1888) and second (1989) editions of the Oxford English Dictionary say that caesarean birth "was done in the case of Julius Cæsar". More recent dictionaries are more diffident: the online edition of the OED (2021) mentions "the traditional belief that Julius Cæsar was delivered this way", and Merriam-Webster's Collegiate Dictionary (2003) says "from the legendary association of such a delivery with the Roman cognomen Caesar".
The word Caesar, meaning either Julius Caesar or an emperor in general, is also borrowed or calqued in the name of the procedure in many other languages in Europe and beyond.
Finally, the Roman praenomen (given name) Caeso was said to be given to children who were born via C-section. While this was probably just a folk etymology made popular by Pliny the Elder, it was well known by the time the term came into common use.
Spelling
The term caesarean is spelled in various accepted ways, as discussed at Wiktionary. The Medical Subject Headings (MeSH) of the United States National Library of Medicine (NLM) uses cesarean section, while some other American medical works, e.g. Saunders Comprehensive Veterinary Dictionary, use caesarean, as do most British works. The online versions of the US-published Merriam-Webster Dictionary and American Heritage Dictionary list cesarean first and other spellings as "variants".
Society and culture
Court cases
In re A.C., 573 A.2d 1235 (1990), was a District of Columbia Court of Appeals case. It was the first American appellate court case decided against a forced Caesarean section, although the decision was issued after the fatal procedure was performed. Physicians performed a Caesarean section upon patient Angela Carder (née Stoner) without informed consent in an unsuccessful attempt to save the life of her baby. The case stands as a landmark in United States case law establishing the rights of informed consent and bodily integrity for pregnant women.
In Illinois, In re Baby Boy Doe, 632 N.E.2d 326 (Ill. App. Ct. 1994) was a court case holding that courts may not balance whatever rights a fetus may have against the rights of a competent woman, whose choice to refuse medical treatment as invasive as a Cesarean section must be honored even if the choice may be harmful to the fetus.
Pemberton v. Tallahassee Memorial Regional Center, 66 F. Supp. 2d 1247 (N.D. Fla. 1999), is a case in the United States regarding reproductive rights. Pemberton had a previous Caesarean section (vertical incision), and with her second child attempted to have a VBAC (vaginal birth after c-section). When a doctor she had approached about a related issue at the Tallahassee Memorial Regional Center found out, he and the hospital sued to force her to get a c-section. The court held that the rights of the fetus at or near birth outweighed the rights of Pemberton to determine her own medical care. She was physically forced to stop laboring, and taken to the hospital, where a c-section was performed. Her suit against the hospital was dismissed. The court held that a cesarean section at the end of a full-term pregnancy was here deemed to be medically necessary by doctors to avoid a substantial risk that the fetus would die during delivery due to uterine rupture, a risk of 4–6% according to the hospital's doctors and 2% according to Pemberton's doctors. Furthermore, the court held that a state's interest in preserving the life of an unborn child outweighed the mother's constitutional interest of bodily integrity. The court held that Roe v. Wade was not applicable, because bearing an unwanted child is a greater intrusion on the mother's constitutional interests than undergoing a cesarean section to deliver a child that the mother affirmatively desires to deliver. The court further distinguished In re A.C. by stating that it left open the possibility that a non-consenting patient's interest would yield to a more compelling countervailing interest in an "extremely rare and truly exceptional case." The court then held this case to be such.
Presence of father
In many hospitals, the mother's partner is encouraged to attend the surgery to support her and share the experience. While traditionally there has been an opaque surgical drape obstructing the parents' view, some patients and doctors are opting for a "gentle C-section" using a clear drape, allowing the parents to watch the delivery and see their infant immediately.
Special cases
In Judaism, there is a dispute among the poskim (rabbinic authorities) as to whether the first-born son delivered by caesarean section has the status of a bechor (firstborn). Traditionally, a male child delivered by caesarean is not eligible for the Pidyon HaBen redemption ritual.
In rare cases, caesarean sections can be used to remove a dead fetus; otherwise, the woman has to labour and deliver a baby known to be a stillbirth. A late-term abortion using caesarean section procedures is termed a hysterotomy abortion and is very rarely performed.
The mother may perform a caesarean section on herself; there have been successful cases, such as Inés Ramírez Pérez of Mexico who, on 5 March 2000, took this action. She survived, as did her son, Orlando Ruiz Ramírez.
In 2024, a female western lowland gorilla had a successful cesarean section after zoo veterinarians diagnosed her with pre-eclampsia. The premature gorilla infant survived as a result of methods similar to those used with premature human infants.
| Biology and health sciences | Surgery | Health |
46959 | https://en.wikipedia.org/wiki/Koch%20snowflake | Koch snowflake | The Koch snowflake (also known as the Koch curve, Koch star, or Koch island) is a fractal curve and one of the earliest fractals to have been described. It is based on the Koch curve, which appeared in a 1904 paper titled "On a Continuous Curve Without Tangents, Constructible from Elementary Geometry" by the Swedish mathematician Helge von Koch.
The Koch snowflake can be built up iteratively, in a sequence of stages. The first stage is an equilateral triangle, and each successive stage is formed by adding outward bends to each side of the previous stage, making smaller equilateral triangles. The areas enclosed by the successive stages in the construction of the snowflake converge to $\tfrac{8}{5}$ times the area of the original triangle, while the perimeters of the successive stages increase without bound. Consequently, the snowflake encloses a finite area, but has an infinite perimeter.
The Koch snowflake has been constructed as an example of a continuous curve for which drawing a tangent line at any point is impossible. Unlike the earlier Weierstrass function, whose proof was purely analytical, the Koch snowflake was created so that it could be represented geometrically at the time, so that this property could also be seen through "naive intuition".
Origin and history
There is no doubt that the snowflake curve is based on the von Koch curve and its iterative construction. However, the picture of the snowflake appears in neither the original article published in 1904 nor the extended 1906 memoir. This raises the question of who first constructed the snowflake figure. An investigation of this question suggests that the snowflake curve is due to the American mathematician Edward Kasner.
Construction
The Koch snowflake can be constructed by starting with an equilateral triangle, then recursively altering each line segment as follows:
divide the line segment into three segments of equal length.
draw an equilateral triangle that has the middle segment from step 1 as its base and points outward.
remove the line segment that is the base of the triangle from step 2.
The first iteration of this process produces the outline of a hexagram.
The Koch snowflake is the limit approached as the above steps are followed indefinitely. The Koch curve originally described by Helge von Koch is constructed using only one of the three sides of the original triangle. In other words, three Koch curves make a Koch snowflake.
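The three-step replacement rule is straightforward to carry out programmatically. The following is a minimal sketch, not from the source, representing points as complex numbers; the function name and iteration count are illustrative:

```python
def koch_step(points):
    """Replace each segment with four, adding an outward equilateral bump."""
    new_points = []
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        # Apex of the bump: offset the middle third by a -60 degree rotation,
        # which points outward for a counterclockwise-oriented polygon.
        apex = a + d + d * complex(0.5, -(3 ** 0.5) / 2)
        new_points += [a, a + d, apex, a + 2 * d]
    new_points.append(points[-1])
    return new_points

# Equilateral triangle traversed counterclockwise; the first vertex is
# repeated so that the pairing above closes the polygon.
snowflake = [0j, 1 + 0j, complex(0.5, 3 ** 0.5 / 2), 0j]
for _ in range(4):  # each pass performs one iteration of the rule
    snowflake = koch_step(snowflake)
```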
A Koch curve–based representation of a nominally flat surface can similarly be created by repeatedly segmenting each line in a sawtooth pattern of segments with a given angle.
Properties
Perimeter of the Koch snowflake
Each iteration multiplies the number of sides in the Koch snowflake by four, so the number of sides after $n$ iterations is given by:
$$N_n = 3 \cdot 4^n.$$
If the original equilateral triangle has sides of length $s$, the length of each side of the snowflake after $n$ iterations is:
$$S_n = \frac{s}{3^n},$$
an inverse power of three multiple of the original length.
The perimeter of the snowflake after $n$ iterations is:
$$P_n = N_n \cdot S_n = 3s \cdot \left(\frac{4}{3}\right)^n.$$
The Koch curve has an infinite length, because the total length of the curve increases by a factor of $\tfrac{4}{3}$ with each iteration. Each iteration creates four times as many line segments as in the previous iteration, with the length of each one being $\tfrac{1}{3}$ the length of the segments in the previous stage. Hence, the length of the curve after $n$ iterations will be $\left(\tfrac{4}{3}\right)^n$ times the original triangle perimeter and is unbounded, as $n$ tends to infinity.
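These formulas are easy to verify numerically; a minimal sketch, assuming a unit side length $s = 1$:

```python
# N_n, S_n and P_n for the first few iterations (s = 1).
s = 1.0
for n in range(6):
    sides = 3 * 4 ** n        # N_n = 3 * 4^n
    side_len = s / 3 ** n     # S_n = s / 3^n
    print(n, sides, side_len, sides * side_len)  # P_n = 3s * (4/3)^n
```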
Limit of perimeter
As the number of iterations tends to infinity, the limit of the perimeter is:
$$\lim_{n \to \infty} P_n = \lim_{n \to \infty} 3s \cdot \left(\frac{4}{3}\right)^n = \infty,$$
since $\left|\tfrac{4}{3}\right| > 1$.
An $\tfrac{\ln 4}{\ln 3}$-dimensional measure exists, but has not been calculated so far; only upper and lower bounds have been established.
Area of the Koch snowflake
In each iteration a new triangle is added on each side of the previous iteration, so the number of new triangles added in iteration $n$ is:
$$T_n = N_{n-1} = 3 \cdot 4^{n-1}.$$
The area of each new triangle added in an iteration is $\tfrac{1}{9}$ of the area of each triangle added in the previous iteration, so the area of each triangle added in iteration $n$ is:
$$a_n = \frac{a_{n-1}}{9} = \frac{a_0}{9^n},$$
where $a_0$ is the area of the original triangle. The total new area added in iteration $n$ is therefore:
$$b_n = T_n \cdot a_n = \frac{3}{4} \cdot \left(\frac{4}{9}\right)^n \cdot a_0.$$
The total area of the snowflake after $n$ iterations is:
$$A_n = a_0 + \sum_{k=1}^{n} b_k = a_0 \left(1 + \frac{3}{4} \sum_{k=1}^{n} \left(\frac{4}{9}\right)^k\right).$$
Collapsing the geometric sum gives:
$$A_n = \frac{a_0}{5} \left(8 - 3 \left(\frac{4}{9}\right)^n\right).$$
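Spelling out the collapse, which is a standard geometric-series evaluation:
$$\sum_{k=1}^{n} \left(\frac{4}{9}\right)^{k} = \frac{\frac{4}{9}\left(1 - \left(\frac{4}{9}\right)^{n}\right)}{1 - \frac{4}{9}} = \frac{4}{5}\left(1 - \left(\frac{4}{9}\right)^{n}\right),$$
so
$$A_n = a_0 \left(1 + \frac{3}{4} \cdot \frac{4}{5}\left(1 - \left(\frac{4}{9}\right)^{n}\right)\right) = a_0 \left(\frac{8}{5} - \frac{3}{5}\left(\frac{4}{9}\right)^{n}\right) = \frac{a_0}{5}\left(8 - 3\left(\frac{4}{9}\right)^{n}\right).$$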
Limits of area
The limit of the area is:
$$\lim_{n \to \infty} A_n = \lim_{n \to \infty} \frac{a_0}{5} \left(8 - 3 \left(\frac{4}{9}\right)^n\right) = \frac{8}{5} a_0,$$
since $\left|\tfrac{4}{9}\right| < 1$.
Thus, the area of the Koch snowflake is $\tfrac{8}{5}$ of the area of the original triangle. Expressed in terms of the side length $s$ of the original triangle, this is:
$$\frac{2 s^2 \sqrt{3}}{5}.$$
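As a numeric sanity check of this limit, a minimal sketch assuming unit side length, so that $a_0 = \sqrt{3}/4$:

```python
# The closed form approaches (8/5) * a0 = 2*sqrt(3)/5 for s = 1.
a0 = 3 ** 0.5 / 4  # area of an equilateral triangle with unit side
for n in (1, 5, 10, 20):
    area = a0 / 5 * (8 - 3 * (4 / 9) ** n)
    print(n, area)
print("limit:", 8 / 5 * a0)  # ~0.6928, i.e. 2*sqrt(3)/5
```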
Solid of revolution
The volume of the solid of revolution of the Koch snowflake about an axis of symmetry of the initiating equilateral triangle of unit side is $\frac{11\sqrt{3}}{135}\pi$.
Other properties
The Koch snowflake is self-replicating with six smaller copies surrounding one larger copy at the center. Hence, it is an irrep-7 irrep-tile (see Rep-tile for discussion).
The fractal dimension of the Koch curve is $\tfrac{\ln 4}{\ln 3} \approx 1.26186$. This is greater than that of a line ($1$) but less than that of Peano's space-filling curve ($2$); see the short numeric check after this list.
It is impossible to draw a tangent line to any point of the curve.
Representation as a de Rham curve
The Koch curve arises as a special case of a de Rham curve. The de Rham curves are mappings of Cantor space into the plane, usually arranged so as to form a continuous curve. Every point on a continuous de Rham curve corresponds to a real number in the unit interval. For the Koch curve, the tips of the snowflake correspond to the dyadic rationals: each tip can be uniquely labeled with a distinct dyadic rational.
Tessellation of the plane
It is possible to tessellate the plane by copies of Koch snowflakes in two different sizes. However, such a tessellation is not possible using only snowflakes of one size. Since each Koch snowflake in the tessellation can be subdivided into seven smaller snowflakes of two different sizes, it is also possible to find tessellations that use more than two sizes at once. Koch snowflakes and Koch antisnowflakes of the same size may be used to tile the plane.
Thue–Morse sequence and turtle graphics
A turtle graphic is the curve that is generated if an automaton is programmed with a sequence.
If the Thue–Morse sequence members are used in order to select program states:
If $t(n) = 0$, move ahead by one unit,
If $t(n) = 1$, rotate counterclockwise by an angle of $\frac{\pi}{3}$,
the resulting curve converges to the Koch snowflake.
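A small Python sketch of this automaton (the helper names are our own): the $n$-th Thue–Morse term is the parity of the number of 1-bits in the binary expansion of $n$, and the turtle's heading is tracked as a unit complex number.

import math

def thue_morse(n):
    # n-th Thue-Morse term: parity of the count of 1-bits in n.
    return bin(n).count("1") % 2

def turtle_path(steps):
    # Positions visited by a turtle driven by the Thue-Morse sequence:
    # t(n) == 0 -> move ahead one unit; t(n) == 1 -> turn left by pi/3.
    pos, heading = complex(0, 0), complex(1, 0)
    path = [pos]
    for n in range(steps):
        if thue_morse(n) == 0:
            pos += heading
            path.append(pos)
        else:
            heading *= complex(math.cos(math.pi / 3), math.sin(math.pi / 3))
    return path

Plotting the returned positions for increasing step counts (with the step size rescaled) should give successively better approximations of the snowflake.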
Representation as Lindenmayer system
The Koch curve can be expressed by the following rewrite system (Lindenmayer system):
Alphabet : F
Constants : +, −
Axiom : F
Production rules : F → F+F--F+F
Here, F means "draw forward", - means "turn right 60°", and + means "turn left 60°".
To create the Koch snowflake, one would use F--F--F (an equilateral triangle) as the axiom.
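The rewriting itself is a one-line string substitution; a minimal Python sketch:

def expand(axiom, iterations):
    # Apply the production F -> F+F--F+F; the constants + and - are unchanged.
    rule = {"F": "F+F--F+F"}
    s = axiom
    for _ in range(iterations):
        s = "".join(rule.get(ch, ch) for ch in s)
    return s

# expand("F", n) gives the n-th Koch curve string;
# expand("F--F--F", n) gives the n-th Koch snowflake string.
# Interpret with a turtle: F = forward, + = left 60 degrees, - = right 60 degrees.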
Variants of the Koch curve
Following von Koch's concept, several variants of the Koch curve were designed, considering right angles (quadratic), other angles (Cesàro), circles and polyhedra, and their extensions to higher dimensions (Sphereflake and Kochcube, respectively).
Squares can be used to generate similar fractal curves. Starting with a unit square and adding to each side at each iteration a square with dimension one third of the squares in the previous iteration, it can be shown that both the length of the perimeter and the total area are determined by geometric progressions. The progression for the area converges to $2$ while the progression for the perimeter diverges to infinity, so as in the case of the Koch snowflake, we have a finite area bounded by an infinite fractal curve. The resulting area fills a square with the same center as the original, but twice the area, and rotated by $\frac{\pi}{4}$ radians, the perimeter touching but never overlapping itself.
The total area covered at the $n$th iteration is: $A_n = 2 - \left(\frac{5}{9}\right)^n$,
while the total length of the perimeter is: $P_n = 4 \left(\frac{5}{3}\right)^n$,
which approaches infinity as $n$ increases.
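The two closed forms reconstructed above are consistent with a direct count of segments and added squares, as this Python sketch verifies (our own bookkeeping, assuming the added squares touch but do not overlap, as stated above):

sides, seg_len, area = 4, 1.0, 1.0     # unit square: 4 sides of length 1
for n in range(1, 10):
    # One new square is added on every existing side; its side length is
    # one third of the segment (and of the previous squares' dimension).
    area += sides * (seg_len / 3) ** 2
    sides, seg_len = sides * 5, seg_len / 3
    assert abs(area - (2 - (5 / 9) ** n)) < 1e-12
    assert abs(sides * seg_len - 4 * (5 / 3) ** n) < 1e-9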
Functionalisation
In addition to the curve, the paper by Helge von Koch that established the Koch curve shows a variation of the curve as an example of a continuous everywhere yet nowhere differentiable function that it was possible to represent geometrically at the time. From the base straight line, represented as AB, the graph can be drawn by recursively applying the following to each line segment:
Divide the line segment (XY) into three parts of equal length, separated by points C and E.
Draw a line DM, where M is the midpoint of CE, and DM is perpendicular to the initial base AB, having length $\frac{\sqrt{3}}{2}\,|CE|$.
Draw the lines CD and DE and erase the lines CE and DM.
Each point of AB can be shown to converge to a single height. If $y(x)$ is defined as the distance from the point at position $x$ to the initial base, then $y(x)$ is continuous everywhere and differentiable nowhere.
| Mathematics | Other | null |
46966 | https://en.wikipedia.org/wiki/Sleep%20disorder | Sleep disorder | A sleep disorder, or somnipathy, is a medical disorder affecting an individual's sleep patterns, sometimes impacting physical, mental, social, and emotional functioning. Polysomnography and actigraphy are tests commonly ordered for diagnosing sleep disorders.
Sleep disorders are broadly classified into dyssomnias, parasomnias, circadian rhythm sleep disorders involving the timing of sleep, and other disorders, including those caused by medical or psychological conditions. When a person struggles to fall asleep or stay asleep without any obvious cause, it is referred to as insomnia, which is the most common sleep disorder. Other sleep disorders include sleep apnea, narcolepsy, hypersomnia (excessive sleepiness at inappropriate times), sleeping sickness (disruption of the sleep cycle due to infection), sleepwalking, and night terrors.
Sleep disruptions can be caused by various issues, including teeth grinding (bruxism) and night terrors. Managing sleep disturbances that are secondary to mental, medical, or substance abuse disorders should focus on addressing the underlying conditions.
Sleep disorders are common in both children and adults. However, there is a significant lack of awareness about sleep disorders in children, with many cases remaining unidentified. Several common factors involved in the onset of a sleep disorder include increased medication use, age-related changes in circadian rhythms, environmental changes, lifestyle changes, pre-diagnosed physiological problems, or stress. Among the elderly, the risk of developing sleep-disordered breathing, periodic limb movements, restless legs syndrome, REM sleep behavior disorders, insomnia, and circadian rhythm disturbances is especially high.
Causes
A systematic review found that traumatic childhood experiences, such as family conflict or sexual trauma, significantly increase the risk of several sleep disorders in adulthood, including sleep apnea, narcolepsy, and insomnia.
An evidence-based synopsis suggests that idiopathic REM sleep behavior disorder (iRBD) may have a hereditary component. A total of 632 participants, half with iRBD and half without, completed self-report questionnaires. The study results suggest that people with iRBD are more likely to report having a first-degree relative with the same sleep disorder than people of the same age and sex who do not have the disorder. More research is needed to further understand the hereditary nature of sleep disorders.
A population susceptible to the development of sleep disorders includes people who have experienced a traumatic brain injury (TBI). Due to the significant research focus on this issue, a systematic review was conducted to synthesize the findings. The results indicate that individuals who have experienced a TBI are disproportionately at risk of developing narcolepsy, obstructive sleep apnea, excessive daytime sleepiness, and insomnia.
Sleep disorders and neurodegenerative diseases
Neurodegenerative diseases are often associated with sleep disorders, particularly when characterized by the abnormal accumulation of alpha-synuclein, as seen in multiple system atrophy (MSA), Parkinson's disease (PD), and Lewy body disease (LBD). For example, individuals diagnosed with PD frequently experience various sleep issues, such as insomnia (affecting approximately 70% of the PD population), hypersomnia (over 50%), and REM sleep behavior disorder (RBD) (around 40%), which is linked to increased motor symptoms. Moreover, RBD has been identified as a significant precursor for the future development of these neurodegenerative diseases over several years, presenting a promising opportunity for improving treatments.
Neurodegenerative conditions are commonly related to structural brain impairments, which may disrupt sleep and wakefulness, circadian rhythm, and motor or non-motor functioning. Conversely, sleep disturbances are often linked to worsening patients' cognitive functioning, emotional state, and quality of life. Additionally, these abnormal behavioral symptoms can place a significant burden on their relatives and caregivers. The limited research in this area, coupled with increasing life expectancy, highlights the need for a deeper understanding of the relationship between sleep disorders and neurodegenerative diseases.
Sleep disturbances and Alzheimer's disease
Sleep disturbances have also been observed in Alzheimer's disease (AD), affecting about 45% of its population. When based on caregiver reports, this percentage increases to about 70%. As in the PD population, insomnia and hypersomnia are frequently recognized in AD patients. These disturbances have been associated with the accumulation of beta-amyloid, circadian rhythm sleep disorders (CRSD), and melatonin alteration. Additionally, changes in sleep architecture are observed in AD. Although sleep architecture seems to naturally change with age, its development appears aggravated in AD patients. Slow-wave sleep (SWS) potentially decreases (and is sometimes absent), spindles and the length of time spent in REM sleep are also reduced, while its latency increases. Poor sleep onset in AD has been associated with dream-related hallucinations, increased restlessness, wandering, and agitation related to sundowning—a typical chronobiological phenomenon in the disease.
In Alzheimer's disease, in addition to cognitive decline and memory impairment, there are also significant sleep disturbances with modified sleep architecture. These disturbances may consist of sleep fragmentation, reduced sleep duration, insomnia, increased daytime napping, decreased quantity of some sleep stages, and a growing resemblance between some sleep stages (N1 and N2). More than 65% of people with Alzheimer's disease experience this type of sleep disturbance.
One factor that could explain this change in sleep architecture is a disruption in the circadian rhythm, which regulates sleep. This disruption can lead to sleep disturbances. Some studies show that people with Alzheimer's disease have a delayed circadian rhythm, whereas in normal aging, an advanced circadian rhythm is present.
In addition to these psychological symptoms, there are two main neurological symptoms of Alzheimer's disease. The first is the accumulation of beta-amyloid waste, forming aggregate "plaques". The second is the accumulation of tau protein.
It has been shown that the sleep-wake cycle influences the beta-amyloid burden, a central component found in Alzheimer's disease (AD). As individuals awaken, the production of beta-amyloid protein becomes more consistent compared to its production during sleep. This phenomenon can be explained by two factors. First, metabolic activity is higher during waking hours, resulting in greater secretion of beta-amyloid protein. Second, oxidative stress increases during waking hours, which leads to greater beta-amyloid production.
On the other hand, it is during sleep that beta-amyloid residues are degraded to prevent plaque formation. The glymphatic system is responsible for this through the phenomenon of glymphatic clearance. Thus, during wakefulness, the AB burden is greater because the metabolic activity and oxidative stress are higher, and there is no protein degradation by the glymphatic clearance. During sleep, the burden is reduced as there is less metabolic activity and oxidative stress (in addition to the glymphatic clearance that occurs).
Glymphatic clearance occurs during NREM slow-wave sleep. This sleep stage decreases in normal aging, resulting in less glymphatic clearance and an increased AB burden that will form AB plaques. Therefore, sleep disturbances in individuals with AD will amplify this phenomenon.
The decrease in the quantity and quality of NREM SWS, as well as the disturbances of sleep, will therefore increase the AB plaques. This occurs initially in the hippocampus, a brain structure integral to long-term memory formation. Hippocampal cell death ensues, contributing to the diminished memory performance and cognitive decline found in AD.
Although the causal relationship is unclear, the development of AD correlates with the development of prominent sleep disorders. In the same way, sleep disorders exacerbate disease progression, forming a positive feedback relationship. As a result, sleep disturbances are no longer only a symptom of AD; the relationship between sleep disturbances and AD is bidirectional.
At the same time, it has been shown that memory consolidation in long-term memory (which depends on the hippocampus) occurs during NREM sleep. This indicates that a decrease in NREM sleep will result in less consolidation, and hence poorer memory performance in hippocampal-dependent long-term memory. This drop in performance is one of the central symptoms of AD.
Recent studies have also linked sleep disturbances, neurogenesis and AD. The subgranular zone and the subventricular zone continue to produce new neurons in adult brains. These new cells are then incorporated into neuronal circuits; the subgranular zone is located in the hippocampus. These new cells contribute to learning and memory, playing an essential role in hippocampal-dependent memory.
However, recent studies have shown that several factors can interrupt neurogenesis, including stress and prolonged sleep deprivation (more than one day). The sleep disturbances encountered in AD could therefore suppress neurogenesis and thus impair hippocampal functions. This would contribute to diminished memory performance and the progression of AD, and the progression of AD would in turn aggravate sleep disturbances.
Changes in sleep architecture found in patients with AD occur during the preclinical phase of AD. These changes could be used to detect those most at risk of developing AD. However, this is still only theoretical.
While the exact mechanisms and the causal relationship between sleep disturbances and AD remain unclear, these findings already provide a better understanding and offer possibilities for improving the targeting of at-risk populations and for implementing treatments to curb the cognitive decline of AD patients.
Sleep disorder symptoms in psychiatric illnesses
Schizophrenia
In individuals with psychiatric illnesses, sleep disorders may include a variety of clinical symptoms, including but not limited to: excessive daytime sleepiness, difficulty falling asleep, difficulty staying asleep, nightmares, sleep talking, sleepwalking, and poor sleep quality. Sleep disturbances such as insomnia, hypersomnia and delayed sleep-phase disorder are quite prevalent in severe mental illnesses such as psychotic disorders. In those with schizophrenia, sleep disorders contribute to cognitive deficits in learning and memory. Sleep disturbances often occur before the onset of psychosis.
Sleep deprivation can also produce hallucinations, delusions and depression. A 2019 study investigated the three above-mentioned sleep disturbances in schizophrenia-spectrum (SCZ) and bipolar (BD) disorders in 617 SCZ individuals, 440 BD individuals, and 173 healthy controls (HC). Sleep disturbances were identified using the clinician-rated Inventory of Depressive Symptomatology (IDS-C). Results suggested that at least one type of sleep disturbance was reported in 78% of the SCZ population, 69% of individuals with BD, and 39% of healthy controls. The SCZ group reported the greatest number of sleep disturbances compared to the BD and HC groups; specifically, hypersomnia was more frequent among individuals with SCZ, and delayed sleep phase disorder was three times more common in the SCZ group than in the BD group. Insomnia was the most frequently reported sleep disturbance across all three groups.
Bipolar disorder
One of the main behavioral symptoms of bipolar disorder is abnormal sleep. Studies have suggested that 23-78% of individuals with bipolar disorders consistently report symptoms of excessive time spent sleeping, or hypersomnia. The pathogenesis of bipolar disorder, including the higher risk of suicidal ideation, could possibly be linked to circadian rhythm variability, and sleep disturbances are a good predictor of mood swings. The most common sleep-related symptom of bipolar disorder is insomnia, in addition to hypersomnia, nightmares, poor sleep quality, OSA, extreme daytime sleepiness, etc. Moreover, animal models have shown that sleep debt can induce episodes of bipolar mania in laboratory mice, but these models are still limited in their potential to explain bipolar disease in humans with all its multifaceted symptoms, including those related to sleep disturbances.
Major depressive disorder (MDD)
Sleep disturbances (insomnia or hypersomnia) are not a necessary diagnostic criterion, but they are among the most frequent symptoms of individuals with major depressive disorder (MDD). Among individuals with MDD, insomnia and hypersomnia have prevalence estimates of 88% and 27%, respectively, whereas individuals with insomnia have a threefold increased risk of developing MDD. Depressed mood and sleep efficiency strongly co-vary, and while sleep regulation problems may precede depressive episodes, such depressive episodes may also precipitate sleep deprivation. Fatigue, as well as sleep disturbances such as irregular and excessive sleepiness, is linked to symptoms of depression. Recent research has even pointed to sleep problems and fatigue as potential driving forces bridging MDD symptoms to those of co-occurring generalized anxiety disorder.
Treatment
Treatments for sleep disorders generally can be grouped into four categories:
Behavioral and psychotherapeutic treatment
Rehabilitation and management
Medication
Other somatic treatment
None of these general approaches are sufficient for all patients with sleep disorders. Rather, the choice of a specific treatment depends on the patient's diagnosis, medical and psychiatric history, and preferences, as well as the expertise of the treating clinician. Often, behavioral/psychotherapeutic and pharmacological approaches may be compatible, and can effectively be combined to maximize therapeutic benefits.
Management of sleep disturbances that are secondary to mental, medical, or substance abuse disorders should focus on the underlying conditions. Medications and somatic treatments may provide the most rapid symptomatic relief from certain disorders, such as narcolepsy, which is best treated with prescription drugs such as modafinil. Others, such as chronic and primary insomnia, may be more amenable to behavioral interventions—with more durable results.
Chronic sleep disorders in childhood, which affect some 70% of children with developmental or psychological disorders, are under-reported and under-treated. Sleep-phase disruption is also common among adolescents, whose school schedules are often incompatible with their natural circadian rhythm. Effective treatment begins with careful diagnosis using sleep diaries and perhaps sleep studies. Modifications in sleep hygiene may resolve the problem, but medical treatment is often warranted.
Special equipment may be required for treatment of several disorders such as obstructive apnea, circadian rhythm disorders and bruxism. In severe cases, it may be necessary for individuals to accept living with the disorder, however well managed.
Some sleep disorders have been found to compromise glucose metabolism.
Allergy treatment
Histamine plays a role in wakefulness in the brain. An allergic reaction overproduces histamine, causing wakefulness and inhibiting sleep. Sleep problems are common in people with allergic rhinitis. A study from the NIH found that sleep is dramatically impaired by allergic symptoms, and that the degree of impairment is related to the severity of those symptoms. Treatment of allergies has also been shown to help sleep apnea.
Acupuncture
A review of the evidence in 2012 concluded that current research is not rigorous enough to make recommendations around the use of acupuncture for insomnia. The pooled results of two trials on acupuncture showed a moderate likelihood that there may be some improvement to sleep quality for individuals with insomnia. This form of treatment for sleep disorders is generally studied in adults, rather than children. Further research would be needed to study the effects of acupuncture on sleep disorders in children.
Hypnosis
Research suggests that hypnosis may be helpful in alleviating some types and manifestations of sleep disorders in some patients. "Acute and chronic insomnia often respond to relaxation and hypnotherapy approaches, along with sleep hygiene instructions." Hypnotherapy has also helped with nightmares and sleep terrors. There are several reports of successful use of hypnotherapy for parasomnias, specifically for head and body rocking, bedwetting, and sleepwalking.
Hypnotherapy has been studied in the treatment of sleep disorders in both adults and children.
Music therapy
Although more research should be done to increase the reliability of this method of treatment, research suggests that music therapy can improve sleep quality in acute and chronic sleep disorders. In one particular study, participants (18 years or older) who had experienced acute or chronic sleep disorders were put in a randomized controlled trial, and their sleep efficiency, in the form of overall time asleep, was observed. In order to assess sleep quality, researchers used subjective measures (i.e. questionnaires) and objective measures (i.e. polysomnography). The results of the study suggest that music therapy did improve sleep quality in subjects with acute or chronic sleep disorders, though only when tested subjectively. Although these results are not fully conclusive and more research should be conducted, they still provide evidence that music therapy can be an effective treatment for sleep disorders.
In another study specifically looking to help people with insomnia, similar results were seen. The participants who listened to music experienced better sleep quality than those who did not. Listening to slower-paced music before bed can help decrease the heart rate, making it easier to transition into sleep. Studies have indicated that music helps induce a state of relaxation that shifts an individual's internal clock towards the sleep cycle. This is said to have an effect on children and adults with various cases of sleep disorders. Music is most effective before bed once the brain has been conditioned to it, helping to achieve sleep much faster.
Melatonin
Research suggests that melatonin is useful in helping people fall asleep faster (decreased sleep latency), stay asleep longer, and experience improved sleep quality. To test this, a study was conducted that compared subjects who had taken melatonin to subjects with primary sleep disorders who had taken a placebo. Researchers assessed sleep onset latency, total minutes slept, and overall sleep quality in the melatonin and placebo groups to note the differences. In the end, researchers found that melatonin decreased sleep onset latency and increased total sleep time but had an insignificant and inconclusive impact on the quality of sleep compared to the placebo group.
Sleep medicine
Due to rapidly increasing knowledge and understanding of sleep in the 20th century, including the discovery of REM sleep in the 1950s and circadian rhythm disorders in the 70s and 80s, the medical importance of sleep was recognized. By the 1970s in the US, clinics and laboratories devoted to the study of sleep and sleep disorders had been founded, and a need for standards arose. The medical community began paying more attention to primary sleep disorders, such as sleep apnea, as well as the role and quality of sleep in other conditions.
Specialists in sleep medicine were originally certified, and continue to be certified, by the American Board of Sleep Medicine. Those passing the Sleep Medicine Specialty Exam received the designation "diplomate of the ABSM". Sleep medicine is now a recognized subspecialty within internal medicine, family medicine, pediatrics, otolaryngology, psychiatry and neurology in the United States. Certification in sleep medicine demonstrates the specialist's competence in the field, which requires an understanding of a myriad of very diverse disorders. Many of these present with similar symptoms such as excessive daytime sleepiness, which, in the absence of volitional sleep deprivation, "is almost inevitably caused by an identifiable and treatable sleep disorder", such as sleep apnea, narcolepsy, idiopathic hypersomnia, Kleine–Levin syndrome, menstrual-related hypersomnia, idiopathic recurrent stupor, or circadian rhythm disturbances. Another common complaint is insomnia, a set of symptoms which can have a great many different causes, physical and mental. Management in the varying situations differs greatly and cannot be undertaken without a correct diagnosis.
Sleep dentistry (bruxism, snoring and sleep apnea), while not recognized as one of the nine dental specialties, qualifies for board-certification by the American Board of Dental Sleep Medicine (ABDSM). Qualified dentists collaborate with sleep physicians at accredited sleep centers and can provide oral appliance therapy and upper airway surgery to treat or manage sleep-related breathing disorders. The resulting diplomate status is recognized by the American Academy of Sleep Medicine (AASM), and these dentists are organized in the Academy of Dental Sleep Medicine (USA).
Occupational therapy is an area of medicine that can also address a diagnosis of sleep disorder, as rest and sleep are listed in the Occupational Therapy Practice Framework (OTPF) as their own occupation of daily living. Rest and sleep are described as restorative in order to support engagement in other occupations. In the OTPF, the occupation of rest and sleep is broken down into rest, sleep preparation, and sleep participation. Occupational therapists have been shown to help improve restorative sleep through the use of assistive devices/equipment, cognitive behavioral therapy for insomnia, therapeutic activities, and lifestyle interventions.
In the UK, knowledge of sleep medicine and possibilities for diagnosis and treatment seem to lag. Imperial College Healthcare gives attention to obstructive sleep apnea syndrome (OSA) but to very few other sleep disorders. Some NHS trusts have specialist clinics for respiratory and neurological sleep medicine.
Epidemiology
Children and young adults
According to one meta-analysis of sleep disorders in children, confusional arousals and sleepwalking are the two most common sleep disorders among children. An estimated 17.3% of kids between 3 and 13 years old experience confusional arousals. About 17% of children sleepwalk; the disorder is more common among boys than girls, and the peak ages of sleepwalking are from 8 to 12 years old.
A different systematic review offers a wide range of prevalence rates of sleep bruxism for children. Parasomnias like sleepwalking and sleep talking typically occur during the first part of an individual's sleep cycle, the first slow-wave sleep period. During this period, the mind and body slow down, causing one to feel drowsy and relaxed. At this stage it is the easiest to wake up; therefore, many children do not remember what happened during this time.
Nightmares are also considered a parasomnia among children, who typically remember what took place during the nightmare. However, nightmares only occur during the last stage of sleep, rapid eye movement (REM) sleep. REM is the deepest stage of sleep; it is named for the rapid eye movements that occur during it, and during this period of the sleep cycle an individual can display a host of neurological and physiological responses similar to those of wakefulness.
Between 15.29% and 38.6% of preschoolers grind their teeth at least one night a week. All but one of the included studies report decreasing bruxism prevalence as age increases, as well as a higher prevalence among boys than girls.
Another systematic review noted that 7-16% of young adults have delayed sleep phase disorder. This disorder reaches peak prevalence when people are in their 20s. Between 20% and 26% of adolescents report a sleep onset latency of greater than 30 minutes, and 7-36% have difficulty initiating sleep. Asian teens tend to have a higher prevalence of all of these adverse sleep outcomes than their North American and European counterparts.
Parasomnias are normally resolved by adulthood as a person grows; however, 4% of people have recurring symptoms.
Effects of Untreated Sleep Disorders
Children and young adults who do not get enough sleep due to sleep disorders are prone to many other health problems, such as obesity and other physical issues that can interfere with everyday life. It is recommended that children and young adults get the hours of sleep recommended by the CDC, as adequate sleep supports mental health, physical health, and more.
Insomnia
Insomnia is a prevalent form of sleep deprivation. Individuals with insomnia may have problems falling asleep, staying asleep, or a combination of both, resulting in insufficient quantity and poor quality of sleep (hyposomnia).
Combining results from 17 studies on insomnia in China yields a pooled prevalence of 15.0% for the country. This result is consistent with other East Asian countries, but considerably lower than in a number of Western countries (50.5% in Poland, 37.2% in France and Italy, 27.1% in the USA). Men and women residing in China experience insomnia at similar rates.
A separate meta-analysis focusing on this sleeping disorder in the elderly mentions that those with more than one physical or psychiatric malady experience it at a 60% higher rate than those with one condition or less. It also notes a higher prevalence of insomnia in women over the age of 50 than their male counterparts.
A study resulting from a collaboration between Massachusetts General Hospital and Merck describes the development of an algorithm to identify patients with sleep disorders using electronic medical records. The algorithm, which incorporated a combination of structured and unstructured variables, identified more than 36,000 individuals with physician-documented insomnia.
Insomnia can begin mildly, but about 40% of people who struggle with insomnia go on to experience worse symptoms. Treatments that can help include medication, planning out a sleep schedule, limiting caffeine intake, and cognitive behavioral therapy.
Obstructive sleep apnea
Obstructive sleep apnea (OSA) affects around 4% of men and 2% of women in the United States. In general, this disorder is more prevalent among men. However, this difference tends to diminish with age. Women experience the highest risk for OSA during pregnancy, and tend to report experiencing depression and insomnia in conjunction with obstructive sleep apnea.
In a meta-analysis of various Asian countries, India and China present the highest prevalence of the disorder. Specifically, about 13.7% of the Indian population and 7% of Hong Kong's population are estimated to have OSA. The two groups experience daytime OSA symptoms, such as difficulty concentrating, mood swings, or high blood pressure, at similar rates (prevalence of 3.5% and 3.57%, respectively).
Obesity and Sleep Apnea
The worldwide incidence of obstructive sleep apnea (OSA) is on the rise, largely due to the increasing prevalence of obesity in society. In individuals who are obese, excess fat deposits in the upper respiratory tract can lead to breathing difficulties during sleep, giving rise to OSA. There is a strong connection between obesity and OSA, making it essential to screen obese individuals for OSA and related disorders. Moreover, both obesity and OSA patients are at higher risk of developing metabolic syndrome. Implementing dietary control in obese individuals can have a positive impact on sleep problems and can help alleviate associated issues such as depression, anxiety, and insomnia. Obesity can disturb sleep patterns, resulting in OSA: it is a risk factor because fat deposition around the muscles of the upper airway can obstruct breathing. In turn, OSA can aggravate obesity by prolonging sleepiness throughout the day, leading to reduced physical activity and an inactive lifestyle.
Sleep paralysis
A systematic review states 7.6% of the general population experiences sleep paralysis at least once in their lifetime. Its prevalence among men is 15.9%, while 18.9% of women experience it.
When considering specific populations, 28.3% of students and 31.9% of psychiatric patients have experienced this phenomenon at least once in their lifetime. Of those psychiatric patients, 34.6% have panic disorder. Sleep paralysis in students is slightly more prevalent for those of Asian descent (39.9%) than other ethnicities (Hispanic: 34.5%, African descent: 31.4%, Caucasian: 30.8%).
Restless legs syndrome
According to one meta-analysis, the average prevalence rate for North America and Western Europe is estimated to be 14.5±8.0%. Specifically in the United States, the prevalence of restless legs syndrome is estimated to be between 5% and 15.7% when using strict diagnostic criteria. RLS is over 35% more prevalent in American women than their male counterparts. Restless legs syndrome (RLS) is a sensorimotor disorder characterized by discomfort in the lower limbs. Typically, symptoms worsen in the evening, improve with movement, and are exacerbated by rest.
List of conditions
There are numerous sleep disorders. The following list includes some of them:
Bruxism, involuntary grinding or clenching of the teeth while sleeping
Catathrenia, nocturnal groaning during prolonged exhalation
Delayed sleep phase disorder (DSPD), inability to awaken and fall asleep at socially acceptable times but no problem with sleep maintenance, a disorder of circadian rhythms. Other such disorders are advanced sleep phase disorder (ASPD), non-24-hour sleep–wake disorder (non-24) in the sighted or in the blind, and irregular sleep wake rhythm, all much less common than DSPD, as well as the situational shift work sleep disorder.
Fatal familial insomnia, an extremely rare and universally-fatal prion disease that causes a complete cessation of sleep.
Hypopnea syndrome, abnormally shallow breathing or slow respiratory rate while sleeping
Idiopathic hypersomnia, a primary, neurologic cause of long-sleeping, sharing many similarities with narcolepsy
Insomnia disorder (primary insomnia), chronic difficulty in falling asleep or maintaining sleep when no other cause is found for these symptoms. Insomnia can also be comorbid with or secondary to other disorders.
Kleine–Levin syndrome, a rare disorder characterized by persistent episodic hypersomnia and cognitive or mood changes
Narcolepsy, characterized by excessive daytime sleepiness (EDS) and so-called "sleep attacks", relatively sudden-onset, irresistible urges to sleep, which may interfere with occupational and social commitments. About 70% of those who have narcolepsy also have cataplexy, a sudden weakness in the motor muscles that can result in collapse to the floor while retaining full conscious awareness.
Night terror, Pavor nocturnus, sleep terror disorder, an abrupt awakening from sleep with behavior consistent with terror
Nocturia, a frequent need to get up and urinate at night. It differs from enuresis, or bed-wetting, in which the person does not arouse from sleep, but the bladder nevertheless empties.
Parasomnias, disruptive sleep-related events involving inappropriate actions during sleep, for example sleepwalking, night-terrors and catathrenia.
Periodic limb movements in sleep (PLMS), sudden involuntary movement of the arms or legs during sleep. In the absence of other sleep disorders, PLMS may cause sleep disruption and impair sleep quality, leading to periodic limb movement disorder (PLMD).
Other limb movements in sleep, including hypnic jerks and nocturnal myoclonus.
Rapid eye movement sleep behavior disorder (RBD), acting out violent or dramatic dreams while in REM sleep, sometimes injuring bed partner or self (REM sleep disorder or RSD)
Restless legs syndrome (RLS), an irresistible urge to move legs.
Shift work sleep disorder (SWSD), a situational circadian rhythm sleep disorder. (Jet lag was previously included as a situational circadian rhythm sleep disorder, but it does not appear in DSM-5, see Diagnostic and Statistical Manual of Mental Disorders for more).
Sleep apnea, obstructive sleep apnea, obstruction of the airway during sleep, causing lack of sufficient deep sleep, often accompanied by snoring. Other forms of sleep apnea are less common. Obstructive sleep apnea (OSA) is a medical disorder that is caused by repetitive collapse of the upper airway (back of the throat) during sleep. For the purposes of sleep studies, episodes of full upper airway collapse for at least ten seconds are called apneas.
Sleep paralysis, characterized by temporary paralysis of the body shortly before or after sleep. Sleep paralysis may be accompanied by visual, auditory or tactile hallucinations. It is not a disorder unless severe, and is often seen as part of narcolepsy.
Sleepwalking or somnambulism, engaging in activities normally associated with wakefulness (such as eating or dressing), which may include walking, without the conscious knowledge of the subject.
Somniphobia, one cause of sleep deprivation, a dread or fear of falling asleep or going to bed. Signs of the illness include anxiety and panic attacks before and during attempts to sleep.
Types
Dyssomnias – A broad category of sleep disorders characterized by either hypersomnia or insomnia. The three major subcategories include intrinsic (i.e., arising from within the body), extrinsic (secondary to environmental conditions or various pathologic conditions), and disturbances of circadian rhythm.
Insomnia: Insomnia may be primary or it may be comorbid with or secondary to another disorder, such as a mood disorder (e.g., emotional stress, anxiety, depression) or an underlying health condition (e.g., asthma, diabetes, heart disease, pregnancy or neurological conditions).
Primary hypersomnia: Hypersomnia of central or brain origin
Narcolepsy: A chronic neurological disorder (or dyssomnia), which is caused by the brain's inability to control sleep and wakefulness.
Idiopathic hypersomnia: A chronic neurological disease similar to narcolepsy, in which there is an increased amount of fatigue and sleep during the day. Patients who have idiopathic hypersomnia cannot obtain a healthy amount of sleep for a regular day of activities. This hinders the patients' ability to perform well, and patients have to deal with this for the rest of their lives.
Recurrent hypersomnia, including Kleine–Levin syndrome
Post traumatic hypersomnia
Menstrual-related hypersomnia
Sleep disordered breathing (SDB), including (non-exhaustive):
Several types of sleep apnea
Snoring
Upper airway resistance syndrome
Restless leg syndrome
Periodic limb movement disorder
Circadian rhythm sleep disorders
Delayed sleep phase disorder
Advanced sleep phase disorder
Non-24-hour sleep–wake disorder
Parasomnias – A category of sleep disorders that involve abnormal and unnatural movements, behaviors, emotions, perceptions, and dreams in connection with sleep.
Bedwetting or sleep enuresis
Bruxism (Tooth-grinding)
Catathrenia – nocturnal groaning
Exploding head syndrome – Waking up in the night hearing loud noises.
Sleep terror (or Pavor nocturnus) – Characterized by a sudden arousal from deep sleep with a scream or cry, accompanied by some behavioral manifestations of intense fear.
REM sleep behavior disorder
Sleepwalking (or somnambulism)
Sleep talking (or somniloquy)
Sleep sex (or sexsomnia)
Medical or psychiatric conditions that may produce sleep disorders
22q11.2 deletion syndrome
Alcoholism
Mood disorders
Depression
Anxiety disorder
Nightmare disorder
Panic
Dissociative identity disorder
Psychosis (such as Schizophrenia)
Sleeping sickness – a parasitic disease which can be transmitted by the Tsetse fly.
Jet lag disorder – Jet lag disorder is a type of circadian rhythm sleep disorder that results from rapid travel across multiple time zones. Individuals experiencing jet lag may encounter symptoms such as excessive sleepiness, fatigue, insomnia, irritability, and gastrointestinal disturbances upon reaching their destination. These symptoms arise due to the mismatch between the body's circadian rhythm, synchronized with the departure location, and the new sleep/wake cycle needed at the destination.
| Biology and health sciences | Mental disorders | Health |
46980 | https://en.wikipedia.org/wiki/Pollen | Pollen | Pollen is a powdery substance produced by most types of flowers of seed plants for the purpose of sexual reproduction. It consists of pollen grains (highly reduced microgametophytes), which produce male gametes (sperm cells).
Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of gymnosperms. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination. In a case of self-pollination, this process takes place from the anther of a flower to the stigma of the same flower.
Pollen is infrequently used as food and food supplement. Because of agricultural practices, it is often contaminated by agricultural pesticides.
Structure and formation
Pollen itself is not the male gamete. It is a gametophyte, something that could be considered an entire organism, which then produces the male gamete. Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm nuclei.
Pollen grains come in a wide variety of shapes, sizes, and surface markings characteristic of the species (see electron micrograph, right). Pollen grains of pines, firs, and spruces are winged. The smallest pollen grain, that of the forget-me-not (Myosotis spp.), is 2.5–5 μm (0.005 mm) in diameter. Corn pollen grains are large, about 90–100 μm. Most grass pollen is around 20–25 μm. Some pollen grains are based on geodesic polyhedra like a soccer ball.
Formation
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower.
In angiosperms, during flower development the anther is composed of a mass of cells that appear undifferentiated, except for a partially differentiated dermis. As the flower develops, fertile sporogenous cells, the archespore, form within the anther. The sporogenous cells are surrounded by layers of sterile cells that grow into the wall of the pollen sac. Some of the cells grow into nutritive cells that supply nutrition for the microspores that form by meiotic division from the sporogenous cells. The archespore cells divide by mitosis and differentiate to form pollen mother cells (microsporocyte, meiocyte).
In a process called microsporogenesis, four haploid microspores are produced from each diploid pollen mother cell, after meiotic division. After the formation of the four microspores, which are contained by callose walls, the development of the pollen grain walls begins. The callose wall is broken down by an enzyme called callase and the freed pollen grains grow in size and develop their characteristic shape and form a resistant outer wall called the exine and an inner wall called the intine. The exine is what is preserved in the fossil record.
Two basic types of microsporogenesis are recognised, simultaneous and successive. In simultaneous microsporogenesis, meiotic steps I and II are completed before cytokinesis, whereas in successive microsporogenesis, cytokinesis follows each meiotic division. While there may be a continuum with intermediate forms, the type of microsporogenesis has systematic significance. The predominant form amongst the monocots is successive, but there are important exceptions.
During microgametogenesis, the unicellular microspores undergo mitosis and develop into mature microgametophytes containing the gametes. In some flowering plants, germination of the pollen grain may begin even before it leaves the microsporangium, with the generative cell forming the two sperm cells.
Structure
Except in the case of some submerged aquatic plants, the mature pollen grain has a double wall. The vegetative and generative cells are surrounded by a thin delicate wall of unaltered cellulose called the endospore or intine, and a tough resistant outer cuticularized wall composed largely of sporopollenin called the exospore or exine. The exine often bears spines or warts, or is variously sculptured, and the character of the markings is often of value for identifying genus, species, or even cultivar or individual.
The spines may be less than a micron in length (spinulus, plural spinuli) referred to as spinulose (scabrate), or longer than a micron (echina, echinae) referred to as echinate. Various terms also describe the sculpturing such as reticulate, a net like appearance consisting of elements (murus, muri) separated from each other by a lumen (plural lumina). These reticulations may also be referred to as brochi.
The pollen wall protects the sperm while the pollen grain is moving from the anther to the stigma; it protects the vital genetic material from drying out and solar radiation. The pollen grain surface is covered with waxes and proteins, which are held in place by structures called sculpture elements on the surface of the grain. The outer pollen wall, which prevents the pollen grain from shrinking and crushing the genetic material during desiccation, is composed of two layers. These two layers are the tectum and the foot layer, which is just above the intine. The tectum and foot layer are separated by a region called the columella, which is composed of strengthening rods. The outer wall is constructed with a resistant biopolymer called sporopollenin.
Pollen apertures are regions of the pollen wall that may involve exine thinning or a significant reduction in exine thickness. They allow shrinking and swelling of the grain caused by changes in moisture content. The process of shrinking the grain is called harmomegathy. Elongated apertures or furrows in the pollen grain are called colpi (singular: colpus) or sulci (singular: sulcus). Apertures that are more circular are called pores. Colpi, sulci and pores are major features in the identification of classes of pollen. Pollen may be referred to as inaperturate (apertures absent) or aperturate (apertures present).
The aperture may have a lid (operculum), hence is described as operculate. However, the term inaperturate covers a wide range of morphological types, such as functionally inaperturate (cryptoaperturate) and omniaperturate. Inaperturate pollen grains often have thin walls, which facilitates pollen tube germination at any position. Terms such as uniaperturate and triaperturate refer to the number of apertures present (one and three respectively). Spiraperturate refers to one or more apertures being spirally shaped.
The orientation of furrows (relative to the original tetrad of microspores) classifies the pollen as sulcate or colpate. Sulcate pollen has a furrow across the middle of what was the outer face when the pollen grain was in its tetrad. If the pollen has only a single sulcus, it is described as monosulcate, has two sulci, as bisulcate, or more, as polysulcate. Colpate pollen has furrows other than across the middle of the outer faces, and similarly may be described as polycolpate if more than two. Syncolpate pollen grains have two or more colpi that are fused at the ends. Eudicots have pollen with three colpi (tricolpate) or with shapes that are evolutionarily derived from tricolpate pollen. The evolutionary trend in plants has been from monosulcate to polycolpate or polyporate pollen.
Additionally, gymnosperm pollen grains often have air bladders, or vesicles, called sacci. The sacci are not actually balloons, but are sponge-like, and increase the buoyancy of the pollen grain and help keep it aloft in the wind, as most gymnosperms are anemophilous. Pollen can be monosaccate, (containing one saccus) or bisaccate (containing two sacci). Modern pine, spruce, and yellowwood trees all produce saccate pollen.
Pollination
The transfer of pollen grains to the female reproductive structure (pistil in angiosperms) is called pollination. Pollen transfer is frequently portrayed as a sequential process that begins with placement on the vector, moves through travel, and ends with deposition. This transfer can be mediated by the wind, in which case the plant is described as anemophilous (literally wind-loving). Anemophilous plants typically produce great quantities of very lightweight pollen grains, sometimes with air-sacs.
Non-flowering seed plants (e.g., pine trees) are characteristically anemophilous. Anemophilous flowering plants generally have inconspicuous flowers. Entomophilous (literally insect-loving) plants produce pollen that is relatively heavy, sticky and protein-rich, for dispersal by insect pollinators attracted to their flowers. Many insects and some mites are specialized to feed on pollen, and are called palynivores.
In non-flowering seed plants, pollen germinates in the pollen chamber, located beneath the micropyle, underneath the integuments of the ovule. A pollen tube is produced, which grows into the nucellus to provide nutrients for the developing sperm cells. Sperm cells of Pinophyta and Gnetophyta are without flagella, and are carried by the pollen tube, while those of Cycadophyta and Ginkgophyta have many flagella.
When placed on the stigma of a flowering plant, under favorable circumstances, a pollen grain puts forth a pollen tube, which grows down the tissue of the style to the ovary, and makes its way along the placenta, guided by projections or hairs, to the micropyle of an ovule. The nucleus of the tube cell has meanwhile passed into the tube, as does also the generative nucleus, which divides (if it has not already) to form two sperm cells. The sperm cells are carried to their destination in the tip of the pollen tube. Double-strand breaks in DNA that arise during pollen tube growth appear to be efficiently repaired in the generative cell that carries the male genomic information to be passed on to the next plant generation. However, the vegetative cell that is responsible for tube elongation appears to lack this DNA repair capability.
In the fossil record
The sporopollenin outer sheath of pollen grains affords them some resistance to the rigours of the fossilisation process that destroy weaker objects; it is also produced in huge quantities. There is an extensive fossil record of pollen grains, often disassociated from their parent plant. The discipline of palynology is devoted to the study of pollen, which can be used both for biostratigraphy and to gain information about the abundance and variety of plants alive — which can itself yield important information about paleoclimates. Also, pollen analysis has been widely used for reconstructing past changes in vegetation and their associated drivers.
Pollen is first found in the fossil record in the late Devonian period, but at that time it is indistinguishable from spores. It increases in abundance until the present day.
Allergy to pollen
Nasal allergy to pollen is called pollinosis, and allergy specifically to grass pollen is called hay fever. Generally, pollens that cause allergies are those of anemophilous plants (pollen dispersed by air currents). Such plants produce large quantities of lightweight pollen (because wind dispersal is random and the likelihood of one pollen grain landing on another flower is small), which can be carried for great distances and is easily inhaled, bringing it into contact with the sensitive nasal passages.
Pollen allergies are common in polar and temperate climate zones, where production of pollen is seasonal. In the tropics pollen production varies less by the season, and allergic reactions are less common.
In northern Europe, common pollens for allergies are those of birch and alder, and in late summer wormwood and different forms of hay. Grass pollen is also associated with asthma exacerbations in some people, a phenomenon termed thunderstorm asthma.
In the US, people often mistakenly blame the conspicuous goldenrod flower for allergies. Since this plant is entomophilous (its pollen is dispersed by animals), its heavy, sticky pollen does not become independently airborne. Most late summer and fall pollen allergies are probably caused by ragweed, a widespread anemophilous plant.
Arizona was once regarded as a haven for people with pollen allergies, although several ragweed species grow in the desert. However, as suburbs grew and people began establishing irrigated lawns and gardens, more irritating species of ragweed gained a foothold and Arizona lost its claim of freedom from hay fever.
Anemophilous spring blooming plants such as oak, birch, hickory, pecan, and early summer grasses may also induce pollen allergies. Most cultivated plants with showy flowers are entomophilous and do not cause pollen allergies.
Symptoms of pollen allergy include sneezing; an itchy or runny nose; nasal congestion; and red, itchy, watery eyes. Substances that cause allergies, including pollen, can trigger asthma. A study found a 54% increased chance of asthma attacks when exposed to pollen.
The number of people in the United States affected by hay fever is between 20 and 40 million, including around 6.1 million children, and this allergy has proven to be the most frequent allergic response in the nation. Hay fever affects about 20% of Canadians, and its prevalence is increasing. There is some evidence suggesting that hay fever and similar allergies are of hereditary origin. Individuals who suffer from eczema or are asthmatic tend to be more susceptible to developing long-term hay fever.
Since 1990, pollen seasons have become longer and more pollen-filled, with climate change responsible, according to one study. The researchers attributed roughly half of the lengthening of pollen seasons and 8% of the trend in pollen concentrations to climate change driven by human activity.
In Denmark, decades of rising temperatures have caused pollen to appear earlier and in greater amounts, exacerbated by the introduction of new species such as ragweed.
The most efficient way to handle a pollen allergy is by preventing contact with the material. Individuals carrying the ailment may at first believe that they have a simple summer cold, but hay fever becomes more evident when the apparent cold does not disappear. The confirmation of hay fever can be obtained after examination by a general physician.
Treatment
Antihistamines are effective at treating mild cases of pollinosis; this class of non-prescription drugs includes loratadine, cetirizine and chlorpheniramine. They do not prevent the discharge of histamine, but it has been proven that they do prevent a part of the chain reaction activated by this biogenic amine, which considerably lowers hay fever symptoms.
Decongestants can be administered in different ways such as tablets and nasal sprays.
Allergy immunotherapy (AIT) treatment involves administering doses of allergens to accustom the body to pollen, thereby inducing specific long-term tolerance. Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops), or by injections under the skin (subcutaneous). Discovered by Leonard Noon and John Freeman in 1911, allergy immunotherapy represents the only causative treatment for respiratory allergies.
Nutrition
Most major classes of predatory and parasitic arthropods contain species that eat pollen, despite the common perception that bees are the primary pollen-consuming arthropod group. Many Hymenoptera other than bees consume pollen as adults, though only a small number feed on pollen as larvae (including some ant larvae). Spiders are normally considered carnivores, but pollen is an important source of food for several species, particularly for spiderlings, which catch pollen on their webs. It is not clear how spiderlings manage to eat pollen, however, since their mouths are not large enough to consume pollen grains. Some predatory mites also feed on pollen, and some species can subsist solely on it, such as Euseius tularensis, which feeds on the pollen of dozens of plant species. Members of some beetle families such as Mordellidae and Melyridae feed almost exclusively on pollen as adults, while various lineages within larger families such as Curculionidae, Chrysomelidae, Cerambycidae, and Scarabaeidae are pollen specialists even though most members of their families are not (e.g., only 36 of 40,000 species of ground beetles, which are typically predatory, have been shown to eat pollen—but this is thought to be a severe underestimate as the feeding habits are only known for 1,000 species). Similarly, ladybird beetles mainly eat insects, but many species also eat pollen, as either part or all of their diet. Hemiptera are mostly herbivores or omnivores, but pollen feeding is known (and has only been well studied in the Anthocoridae). Many adult flies, especially Syrphidae, feed on pollen, and three UK syrphid species feed strictly on pollen (syrphids, like all flies, cannot eat pollen directly due to the structure of their mouthparts, but can consume pollen contents that are dissolved in a fluid).
Some species of fungus, including Fomes fomentarius, are able to break down grains of pollen as a secondary nutrition source that is particularly high in nitrogen. Pollen may be a valuable dietary supplement for detritivores, providing them with nutrients needed for growth, development and maturation. It has been suggested that obtaining nutrients from pollen deposited on the forest floor during periods of pollen rains allows fungi to decompose nutritionally scarce litter.
Some species of Heliconius butterflies consume pollen as adults, which appears to be a valuable nutrient source; these species are more distasteful to predators than the non-pollen-consuming species.
Although bats, butterflies, and hummingbirds are not pollen eaters per se, their consumption of nectar in flowers is an important aspect of the pollination process.
In humans
Bee pollen for human consumption is marketed as a food ingredient and as a dietary supplement. The largest constituent is carbohydrates, with protein content ranging from 7 to 35 percent depending on the plant species collected by bees.
Honey produced by bees from natural sources contains pollen-derived p-coumaric acid, an antioxidant and natural bactericide that is also present in a wide variety of plants and plant-derived food products.
The U.S. Food and Drug Administration (FDA) has not found harmful effects of bee pollen consumption, apart from the usual allergic reactions. However, the FDA does not allow bee pollen marketers in the United States to make health claims about their products, as no scientific basis for such claims has been established. There are also possible dangers, not only from allergic reactions but from contaminants such as pesticides, and from growth of fungi and bacteria related to poor storage procedures. Manufacturers' claims that pollen collecting helps bee colonies are also controversial.
Pine pollen is traditionally consumed in Korea as an ingredient in sweets and beverages. The Māori of precolonial New Zealand gathered pollen of Typha orientalis to make a special bread called pungapunga.
Parasites
The growing industries harvesting pollen for human and bee consumption rely on collecting pollen baskets from honey bees as they return to their hives, using a pollen trap. When such pollen has been tested for parasites, a multitude of viruses and eukaryotic parasites have been found in it. It is currently unclear whether the parasites are introduced by the bees that collected the pollen or come from the flowers. Though this is unlikely to pose a risk to humans, it is a major issue for the bumblebee-rearing industry, which relies on thousands of tonnes of honey bee-collected pollen per year. Several sterilization methods have been employed, though none has been fully effective without reducing the nutritional value of the pollen.
Forensic palynology
In forensic biology, pollen can tell a lot about where a person or object has been, because regions of the world, or even more particular locations such as a certain set of bushes, have distinctive collections of pollen species. Pollen evidence can also reveal the season in which a particular object picked up the pollen. Pollen has been used to trace activity at mass graves in Bosnia, to catch a burglar who brushed against a Hypericum bush during a crime, and has even been proposed as an additive for bullets to enable tracking them.
Spiritual purposes
In some Native American religions, pollen was used in prayers and rituals to symbolize life and renewal by sanctifying objects, dancing grounds, trails, and sandpaintings. It might also be sprinkled over heads or in mouths. Many Navajo people believed the body became holy when it traveled over a trail sprinkled with pollen.
Pollen grain staining
For agricultural research, assessing the viability of pollen grains can be necessary and illuminating. A common, efficient method for doing so is Alexander's stain, a differential stain consisting of ethanol, malachite green, distilled water, glycerol, phenol, chloral hydrate, acid fuchsin, orange G, and glacial acetic acid. (A less toxic variation omits the phenol and chloral hydrate.) In angiosperms and gymnosperms, non-aborted pollen grains stain red or pink, while aborted pollen grains appear blue or slightly green.